# Second main theorem and unicity of meromorphic mappings for hypersurfaces of
projective
varieties in subgeneral position
Si Duc Quang
###### Abstract.
The purpose of this article is twofold. The first is to prove a second main
theorem for meromorphic mappings of ${\mathbf{C}}^{m}$ into a complex
projective variety intersecting hypersurfaces in subgeneral position with
truncated counting functions. The second is to show a uniqueness theorem for
these mappings which share few hypersurfaces without counting multiplicity.
2010 Mathematics Subject Classification: Primary 32H30, 32A22; Secondary 30D35.
Key words and phrases: second main theorem, uniqueness problem, meromorphic mapping, truncated multiplicity.
## 1. Introduction
Let $f$ be a linearly nondegenerate meromorphic mapping of ${\mathbf{C}}^{m}$
into ${\mathbf{P}}^{n}({\mathbf{C}})$ and let $\\{H_{j}\\}_{j=1}^{q}$ be $q$
hyperplanes in $N$-subgeneral position in ${\mathbf{P}}^{n}({\mathbf{C}})$.
Then the Cartan-Nochka second main theorem for meromorphic mappings and
hyperplanes (see [8], [9]) states that
$||\ \ (q-2N+n-1)T(r,f)\leq\sum_{i=1}^{q}N^{[n]}_{H_{i}(f)}(r)+o(T(r,f)).$
The above Cartan-Nochka second main theorem plays an essential role in
Nevanlinna theory, with many applications to algebraic and analytic geometry.
One of the most interesting applications of this theorem is the study of the
uniqueness problem for meromorphic mappings sharing hyperplanes. We state here
the uniqueness theorem of L. Smiley, which is one of the earliest results on
this problem.
Theorem A. Let $f,g$ be two linearly nondegenerate meromorphic mappings of ${\mathbf{C}}^{m}$ into
${\mathbf{P}}^{n}({\mathbf{C}})$. Let $H_{1},...,H_{q}$ be $q\ (q\geq 3n+2)$
hyperplanes of ${\mathbf{P}}^{n}({\mathbf{C}})$ located in general position.
Assume that $f^{-1}(\bigcup_{i=1}^{q}H_{i})=g^{-1}(\bigcup_{i=1}^{q}H_{i})$,
that $f=g$ on $\bigcup_{i=1}^{q}f^{-1}(H_{i})$, and that
$\dim(f^{-1}(H_{i})\cap f^{-1}(H_{j}))\leq m-2,\ \forall i\neq j.$
Then $f=g.$
Many authors have generalized the above result to the case of meromorphic
mappings and hypersurfaces.
In 2004, Min Ru [11] showed a second main theorem for algebraically
nondegenerate meromorphic mappings and a family of hypersurfaces of a complex
projective space ${\mathbf{P}}^{n}({\mathbf{C}})$ in general position. With
the same assumptions, T. T. H. An and H. T. Phuong [1] improved the result of
Min Ru by giving an explicit truncation level for counting functions. They
proved the following.
Theorem B (An - Phuong [1]). Let $f$ be an algebraically nondegenerate
holomorphic map of ${\mathbf{C}}$ into ${\mathbf{P}}^{n}({\mathbf{C}})$. Let
$\\{Q_{i}\\}_{i=1}^{q}$ be $q$ hypersurfaces in
${\mathbf{P}}^{n}({\mathbf{C}})$ in general position with $\deg Q_{i}=d_{i}\
(1\leq i\leq q)$. Let $d$ be the least common multiple of the
$d_{i}^{\prime}$s, $d=lcm(d_{1},...,d_{q})$. Let $0<\epsilon<1$ and let
$L\geq 2d[2^{n}(n+1)n(d+1)\epsilon^{-1}]^{n}.$
Then,
$||\
(q-n-1-\epsilon)T_{f}(r)\leq\sum_{i=1}^{q}\dfrac{1}{d_{i}}N^{[L]}_{Q_{i}(f)}(r)+o(T_{f}(r)).$
Using this result of An - Phuong, Dulock and Min Ru [2] gave a uniqueness
theorem for meromorphic mappings sharing a family of hypersurfaces in general
position. A natural question then arises: how can these results be generalized
to the case where the mappings take values in projective varieties and the
family of hypersurfaces is in subgeneral position?
Now, let $V$ be a complex projective subvariety of
${\mathbf{P}}^{n}({\mathbf{C}})$ of dimension $k\ (k\leq n)$. Let
$Q_{1},...,Q_{q}\ (q\geq k+1)$ be $q$ hypersurfaces in
${\mathbf{P}}^{n}({\mathbf{C}})$. We say that the family
$\\{Q_{i}\\}_{i=1}^{q}$ is in general position in $V$ if
$V\cap(\bigcap_{j=1}^{k+1}Q_{i_{j}})=\emptyset\ \forall 1\leq
i_{1}<\cdots<i_{k+1}\leq q.$
In [5], G. Dethloff, D. D. Thai and T. V. Tan introduced a notion of
"subgeneral position" for a family of hypersurfaces as follows.
Definition C. ($N$-subgeneral position in the sense of Dethloff - Thai - Tan
[5]). Let $V$ be a projective subvariety of ${\mathbf{P}}^{n}({\mathbf{C}})$
of dimension $k\ (k\leq n)$. Let $N\geq k$ and $q\geq N+1$. Hypersurfaces
$Q_{1},\cdots,Q_{q}$ in ${\mathbf{P}}^{n}({\mathbf{C}})$ with $V\not\subset
Q_{j}$ for all $j=1,\cdots,q$ are said to be in $N$-subgeneral position in $V$
if the two following conditions are satisfied:
(i) For every $1\leq j_{0}<\cdots<j_{N}\leq q,V\cap Q_{j_{0}}\cap\cdots\cap
Q_{j_{N}}=\emptyset$.
(ii) For any subset $J\subset\\{1,\cdots,q\\}$ such that $1\leq\sharp J\leq k$
and $\\{Q_{j},j\in J\\}$ are in general position in $V$ and
$V\cap(\bigcup_{j\in J}Q_{j})\neq\emptyset$, there exists an irreducible
component $\sigma_{J}$ of $V\cap(\bigcup_{j\in J}Q_{j})$ with
$\dim\sigma_{J}=\dim(V\cap(\bigcup_{j\in J}Q_{j}))$ such that for any
$i\in\\{1,\cdots,q\\}\setminus J$, if $\dim(V\cap(\bigcup_{j\in
J}Q_{j}))=\dim(V\cap Q_{i}\cap(\bigcup_{j\in J}Q_{j}))$, then $Q_{i}$ contains
$\sigma_{J}$.
With this notion of $N-$subgeneral position, the above three authors proved
the following second main theorem.
Theorem D (Dethloff - Thai - Tan [5]). Let $V$ be a complex projective
subvariety of ${\mathbf{P}}^{n}({\mathbf{C}})$ of dimension $k\ (k\leq n)$.
Let $\\{Q_{i}\\}_{i=1}^{q}$ be hypersurfaces of
${\mathbf{P}}^{n}({\mathbf{C}})$ in $N$-subgeneral position in $V$ in the
sense of Definition C, with $\deg Q_{i}=d_{i}\ (1\leq i\leq q)$. Let $d$ be
the least common multiple of $d_{i}^{\prime}$s, i.e.,
$d=lcm(d_{1},...,d_{q})$. Let $f$ be an algebraically nondegenerate meromorphic
mapping of ${\mathbf{C}}^{m}$ into $V$. If $q>2N-k+1$ then for every
$\epsilon>0$, there exist positive integers $L_{j}\ (1\leq j\leq q)$ depending
on $k,n,N,d_{i}\ (1\leq i\leq q),q,\epsilon$ in an explicit way such that
$||\
(q-2N+k-1-\epsilon)T_{f}(r)\leq\sum_{i=1}^{q}\dfrac{1}{d_{i}}N^{[L_{i}]}_{Q_{i}(f)}(r)+o(T_{f}(r)).$
We would like to note that condition (ii) in Definition C is not natural, and
it is very hard to verify. Moreover, the truncation levels $L_{i}$, like the
truncation level $L$ in Theorem B, are very large and far from sharp.
Therefore their applications to problems with truncated multiplicities are
rather limited.
The first purpose of the present paper is to give a new second main theorem
for meromorphic mappings into complex projective varieties and families of
hypersurfaces in subgeneral position (in the sense of the natural definition
given below), with a better truncation level for the counting functions.
First, let us introduce some notation.
Now, let $V$ be a complex projective subvariety of
${\mathbf{P}}^{n}({\mathbf{C}})$ of dimension $k\ (k\leq n)$. Let $d$ be a
positive integer. We denote by $I(V)$ the ideal of homogeneous polynomials in
${\mathbf{C}}[x_{0},...,x_{n}]$ defining $V$, $H_{d}$ the ring of all
homogeneous polynomials in ${\mathbf{C}}[x_{0},...,x_{n}]$ of degree $d$
(which is also a vector space). We define
$I_{d}(V):=\dfrac{H_{d}}{I(V)\cap H_{d}}\text{ and }H_{V}(d):=\dim I_{d}(V).$
Then $H_{V}(d)$ is called the Hilbert function of $V$. Each element of $I_{d}(V)$,
which is an equivalence class of an element $Q\in H_{d}$, will be denoted by
$[Q]$.
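For orientation, we record two elementary special cases of the Hilbert function (standard facts, not stated explicitly in the paper): if $V={\mathbf{P}}^{n}({\mathbf{C}})$ then $I(V)=0$, while if $V$ is a linear subspace of dimension $k$ then $I_{d}(V)$ may be identified with the degree $d$ part of ${\mathbf{C}}[y_{0},...,y_{k}]$. Hence
$H_{{\mathbf{P}}^{n}({\mathbf{C}})}(d)=\dim H_{d}=\binom{n+d}{n},\qquad H_{V}(d)=\binom{k+d}{k}\ \text{ for a linear }V\text{ of dimension }k,$
so in particular $H_{V}(1)=k+1$ for a linear $V$ of dimension $k$, which is the value used in the discussion after Theorem 1.1 below.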
Let $f:{\mathbf{C}}^{m}\longrightarrow V$ be a meromorphic mapping. We say
that $f$ is degenerate over $I_{d}(V)$ if there is $[Q]\in
I_{d}(V)\setminus\\{0\\}$ such that $Q(f)\equiv 0$; otherwise we say that $f$
is nondegenerate over $I_{d}(V)$. It is clear that if $f$ is algebraically
nondegenerate then $f$ is nondegenerate over $I_{d}(V)$ for every $d\geq 1.$
The family of hypersurfaces $\\{Q_{i}\\}_{i=1}^{q}$ is said to be in
$N-$subgeneral position with respect to $V$ if for any $1\leq
i_{1}<\cdots<i_{N+1}$,
$V\cap(\bigcap_{j=1}^{N+1}Q_{i_{j}})=\emptyset.$
We will prove the following Second Main Theorem.
###### Theorem 1.1.
Let $V$ be a complex projective subvariety of ${\mathbf{P}}^{n}({\mathbf{C}})$
of dimension $k\ (k\leq n)$. Let $\\{Q_{i}\\}_{i=1}^{q}$ be hypersurfaces of
${\mathbf{P}}^{n}({\mathbf{C}})$ in $N$-subgeneral position with respect to
$V$, with $\deg Q_{i}=d_{i}\ (1\leq i\leq q)$. Let $d$ be the least common
multiple of $d_{i}^{\prime}$s, i.e., $d=lcm(d_{1},...,d_{q})$. Let $f$ be a
meromorphic mapping of ${\mathbf{C}}^{m}$ into $V$ which is nondegenerate over
$I_{d}(V)$. If $q>\frac{(2N-k+1)H_{V}(d)}{k+1}$ then we have
$||\
\left(q-\dfrac{(2N-k+1)H_{V}(d)}{k+1}\right)T_{f}(r)\leq\sum_{i=1}^{q}\dfrac{1}{d_{i}}N^{[H_{V}(d)-1]}_{Q_{i}(f)}(r)+o(T_{f}(r)).$
In the case where $V$ is a linear space of dimension $k$ and each $Q_{i}$ is a
hyperplane, i.e., $d_{i}=1\ (1\leq i\leq q)$, we have $H_{V}(d)=k+1$ and Theorem
1.1 gives us the above second main theorem of Cartan - Nochka. We note that,
although the total defect given by the above second main theorem,
$\frac{(2N-k+1)H_{V}(d)}{k+1}\geq n+1$, may be large, the truncation level $(H_{V}(d)-1)$
of the counting functions, which is bounded from above by $(\binom{n+d}{n}-1)$,
is much smaller than the truncation levels in any previous second main theorem
for hypersurfaces.
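To illustrate the size of this improvement (a rough numerical comparison worked out here for illustration only, not taken from the paper), consider $V={\mathbf{P}}^{2}({\mathbf{C}})$, so $k=N=n=2$, and hypersurfaces of degree $d=2$. Then the truncation level in Theorem 1.1 is at most
$H_{V}(d)-1=\binom{2+2}{2}-1=5,$
whereas the level $L$ in Theorem B, already for $\epsilon=1/2$, must satisfy $L\geq 2d\,[2^{n}(n+1)n(d+1)\epsilon^{-1}]^{n}=4\cdot[4\cdot 3\cdot 2\cdot 3\cdot 2]^{2}=4\cdot 144^{2}=82944.$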
Also, the notion of $N$-subgeneral position in our result is a natural
generalization of the corresponding notion for hyperplanes. Therefore, in
order to prove the second main theorem in our situation we have to generalize
the Nochka weights to the case of hypersurfaces in complex projective varieties.
In the last section of this paper, we prove a uniqueness theorem for
meromorphic mappings sharing hypersurfaces in subgeneral position without
counting multiplicity as follows.
###### Theorem 1.2.
Let $V$ be a complex projective subvariety of ${\mathbf{P}}^{n}({\mathbf{C}})$
of dimension $k\ (k\leq n)$. Let $\\{Q_{i}\\}_{i=1}^{q}$ be hypersurfaces in
${\mathbf{P}}^{n}({\mathbf{C}})$ in $N$-subgeneral position with respect to
$V$, $\deg Q_{i}=d_{i}\ (1\leq i\leq q)$. Let $d$ be the least common multiple
of $d_{i}^{\prime}$s, i.e., $d=lcm(d_{1},...,d_{q})$. Let $f$ and $g$ be
meromorphic mappings of ${\mathbf{C}}^{m}$ into $V$ which are nondegenerate
over $I_{d}(V)$. Assume that:
(i) $\dim(\mathrm{Zero}Q_{i}(f)\cap\mathrm{Zero}Q_{j}(f))\leq m-2$ for every
$1\leq i<j\leq q,$
(ii) $f=g$ on
$\bigcup_{i=1}^{q}(\mathrm{Zero}Q_{i}(f)\cup\mathrm{Zero}Q_{i}(g)).$
If $q>\frac{2(H_{V}(d)-1)}{d}+\frac{(2N-k+1)H_{V}(d)}{k+1}$ then $f=g.$
We see that, under the same assumptions, the number of hypersurfaces required
in our result is smaller than that in all previous results on uniqueness of
meromorphic mappings sharing hypersurfaces. Also, in the case of mappings into
${\mathbf{P}}^{n}({\mathbf{C}})$ sharing hyperplanes in general position,
i.e., $V={\mathbf{P}}^{n}({\mathbf{C}})$, $d=1$, $H_{V}(d)=n+1$, $N=n=k$, the above theorem
gives us the uniqueness theorem of L. Smiley.
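Indeed, a one-line check (spelled out here for the reader's convenience): with $V={\mathbf{P}}^{n}({\mathbf{C}})$, $d=d_{i}=1$, $H_{V}(d)=n+1$ and $N=k=n$, the hypothesis of Theorem 1.2 becomes
$q>\frac{2(H_{V}(d)-1)}{d}+\frac{(2N-k+1)H_{V}(d)}{k+1}=2n+(n+1)=3n+1,$
i.e., $q\geq 3n+2$, which is exactly the bound in Theorem A.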
Acknowledgements. This work was completed while the author was staying at the
Vietnam Institute for Advanced Study in Mathematics. The author would like to
thank the institute for its support. This work was also supported in part by a
NAFOSTED grant of Vietnam.
## 2. Basic notions and auxiliary results from Nevanlinna theory
2.1. We set $||z||=\big{(}|z_{1}|^{2}+\dots+|z_{m}|^{2}\big{)}^{1/2}$ for
$z=(z_{1},\dots,z_{m})\in{\mathbf{C}}^{m}$ and define
$\displaystyle B(r):=\\{z\in{\mathbf{C}}^{m}:||z||<r\\},\quad
S(r):=\\{z\in{\mathbf{C}}^{m}:||z||=r\\}\ (0<r<\infty).$
Define
$v_{m-1}(z):=\big{(}dd^{c}||z||^{2}\big{)}^{m-1}\quad\text{and}\quad\sigma_{m}(z):=d^{c}\log||z||^{2}\land\big{(}dd^{c}\log||z||^{2}\big{)}^{m-1}\ \text{ on }\ {\mathbf{C}}^{m}\setminus\\{0\\}.$
For a divisor $\nu$ on ${\mathbf{C}}^{m}$ and for a positive integer $M$ or
$M=\infty$, we define the counting function of $\nu$ by
$\nu^{[M]}(z)=\min\ \\{M,\nu(z)\\},$ $\displaystyle
n(t)=\begin{cases}\int\limits_{|\nu|\,\cap B(t)}\nu(z)v_{m-1}&\text{ if }m\geq
2,\\\ \sum\limits_{|z|\leq t}\nu(z)&\text{ if }m=1.\end{cases}$
Similarly, we define $n^{[M]}(t).$
Define
$N(r,\nu)=\int\limits_{1}^{r}\dfrac{n(t)}{t^{2m-1}}dt\quad(1<r<\infty).$
Similarly, we define $N(r,\nu^{[M]})$ and denote it by $N^{[M]}(r,\nu)$.
Let $\varphi:{\mathbf{C}}^{m}\longrightarrow{\mathbf{C}}$ be a meromorphic
function. Denote by $\nu_{\varphi}$ the zero divisor of $\varphi$. Define
$N_{\varphi}(r)=N(r,\nu_{\varphi}),\
N_{\varphi}^{[M]}(r)=N^{[M]}(r,\nu_{\varphi}).$
For brevity we will omit the character [M] if $M=\infty$.
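As a simple illustration of these counting functions (an example added here for concreteness, not taken from the paper), take $m=1$ and $\varphi(z)=z^{3}$. Then $\nu_{\varphi}(0)=3$ and $\nu_{\varphi}(z)=0$ elsewhere, so $n(t)=3$ for every $t>0$ and
$N_{\varphi}(r)=\int_{1}^{r}\frac{3}{t}\,dt=3\log r,\qquad N^{[2]}_{\varphi}(r)=2\log r,\qquad N^{[1]}_{\varphi}(r)=\log r.$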
2.2. Let $f:{\mathbf{C}}^{m}\longrightarrow{\mathbf{P}}^{n}({\mathbf{C}})$ be
a meromorphic mapping. For arbitrarily fixed homogeneous coordinates
$(w_{0}:\dots:w_{n})$ on ${\mathbf{P}}^{n}({\mathbf{C}})$, we take a reduced
representation $f=(f_{0}:\dots:f_{n})$, which means that each $f_{i}$ is a
holomorphic function on ${\mathbf{C}}^{m}$ and
$f(z)=\big{(}f_{0}(z):\dots:f_{n}(z)\big{)}$ outside the analytic subset
$\\{f_{0}=\dots=f_{n}=0\\}$ of codimension $\geq 2$. Set
$\|f\|=\big{(}|f_{0}|^{2}+\dots+|f_{n}|^{2}\big{)}^{1/2}$.
The characteristic function of $f$ is defined by
$\displaystyle
T_{f}(r)=\int\limits_{S(r)}\log\|f\|\sigma_{m}-\int\limits_{S(1)}\log\|f\|\sigma_{m}.$
2.3. Let $\varphi$ be a nonzero meromorphic function on ${\mathbf{C}}^{m}$,
which is occasionally regarded as a meromorphic map into
${\mathbf{P}}^{1}({\mathbf{C}})$. The proximity function of $\varphi$ is
defined by
$m(r,\varphi)=\int_{S(r)}\log\max\ (|\varphi|,1)\sigma_{m}.$
The Nevanlinna characteristic function of $\varphi$ is defined by
$T(r,\varphi)=N_{\frac{1}{\varphi}}(r)+m(r,\varphi).$
Then
$T_{\varphi}(r)=T(r,\varphi)+O(1).$
The function $\varphi$ is said to be small (with respect to $f$) if $||\
T_{\varphi}(r)=o(T_{f}(r))$. Here, by the notation "$||\ P$" we mean that the
assertion $P$ holds for all $r\in[0,\infty)$ excluding a Borel subset $E$ of
the interval $[0,\infty)$ with $\int_{E}dr<\infty$.
2.4. Lemma on logarithmic derivative (Lemma 3.11 [12]). Let $f$ be a nonzero
meromorphic function on ${\mathbf{C}}^{m}.$ Then
$\biggl{|}\biggl{|}\quad
m\biggl{(}r,\dfrac{\mathcal{D}^{\alpha}(f)}{f}\biggl{)}=O(\log^{+}T(r,f))\
(\alpha\in\mathbf{Z}^{m}_{+}).$
Repeating the argument in (Prop. 4.5 [6]), we have the following:
Proposition 2.5. Let $\Phi_{0},...,\Phi_{k}$ be meromorphic functions on
${\mathbf{C}}^{m}$ such that $\\{\Phi_{0},...,\Phi_{k}\\}$ is linearly
independent over ${\mathbf{C}}.$ Then there exists an admissible set
$\\{\alpha_{i}=(\alpha_{i1},...,\alpha_{im})\\}_{i=0}^{k}\subset\mathbf{Z}^{m}_{+}$
with $|\alpha_{i}|=\sum_{j=1}^{m}\alpha_{ij}\leq k\ (0\leq i\leq k)$ such
that the following are satisfied:
(i)
$\\{{\mathcal{D}}^{\alpha_{i}}\Phi_{0},...,{\mathcal{D}}^{\alpha_{i}}\Phi_{k}\\}_{i=0}^{k}$
is linearly independent over the field $\mathcal{M}$ of meromorphic functions on ${\mathbf{C}}^{m}$, i.e.,
$\det{({\mathcal{D}}^{\alpha_{i}}\Phi_{j})}\not\equiv 0.$
(ii)
$\det\bigl{(}{\mathcal{D}}^{\alpha_{i}}(h\Phi_{j})\bigl{)}=h^{k+1}\cdot\det\bigl{(}{\mathcal{D}}^{\alpha_{i}}\Phi_{j}\bigl{)}$
for any nonzero meromorphic function $h$ on ${\mathbf{C}}^{m}.$
## 3. Generalization of Nochka weights
Let $V$ be a complex projective subvariety of ${\mathbf{P}}^{n}({\mathbf{C}})$
of dimension $k\ (k\leq n)$. Let $\\{Q_{i}\\}_{i=1}^{q}$ be $q$ hypersurfaces
in ${\mathbf{P}}^{n}({\mathbf{C}})$ of the common degree $d$. Assume that each
$Q_{i}$ is defined by a homogeneous polynomial
$Q^{*}_{i}\in{\mathbf{C}}[x_{0},...,x_{n}]$. We regard
$I_{d}(V)=\dfrac{H_{d}}{I(V)\cap H_{d}}$ as a complex vector space and define
$\mathrm{rank}\\{Q_{i}\\}_{i\in R}=\mathrm{rank}\\{[Q^{*}_{i}]\\}_{i\in R}$
for every subset $R\subset\\{1,...,q\\}$. It is easy to see that
$\displaystyle\mathrm{rank}\\{Q_{i}\\}_{i\in
R}=\mathrm{rank}\\{[Q^{*}_{i}]\\}_{i\in R}\geq\dim V-\dim(\bigcap_{i\in
R}Q_{i}\cap V).$
###### Definition 3.1.
The family $\\{Q_{i}\\}_{i=1}^{q}$ is said to be in $N$-subgeneral position
with respect to $V$ if for any subset $R\subset\\{1,...,q\\}$ with $\sharp
R=N+1$ we have $\bigcap_{i\in R}Q_{i}\cap V=\emptyset$.
Hence, if $\\{Q_{i}\\}_{i=1}^{q}$ is in $N$-subgeneral position, by the above
estimate we have
$\mathrm{rank}\\{Q_{i}\\}_{i\in R}\geq\dim V-\dim(\bigcap_{i\in R}Q_{i}\cap
V)=k+1$
(here we note that $\dim(\emptyset)=-1$) for any subset
$R\subset\\{1,...,q\\}$ with $\sharp R=N+1$.
If $\\{Q_{i}\\}_{i=1}^{q}$ is in $n$-subgeneral position with respect to $V$
then we say that it is in general position with respect to $V$.
Taking a ${\mathbf{C}}-$basis of $I_{d}(V)$, we may consider $I_{d}(V)$ as a
${\mathbf{C}}-$ vector space ${\mathbf{C}}^{M}$ with $M=H_{V}(d)$.
Let $\\{H_{i}\\}_{i=1}^{q}$ be $q$ hyperplanes in ${\mathbf{C}}^{M}$ passing
through the coordinates origin. Assume that each $H_{i}$ is defined by the
linear equation
$a_{i1}z_{1}+\cdots+a_{iM}z_{M}=0,$
where $a_{ij}\in{\mathbf{C}}\ (j=1,...,M),$ not all zero. We define the
vector associated with $H_{i}$ by
$v_{i}=(a_{i1},...,a_{iM})\in{\mathbf{C}}^{M}.$
For each subset $R\subset\\{1,...,q\\}$, the rank of $\\{H_{i}\\}_{i\in R}$ is
defined by
$\mathrm{rank}\\{H_{i}\\}_{i\in R}=\mathrm{rank}\\{v_{i}\\}_{i\in R}.$
The family $\\{H_{i}\\}_{i=1}^{q}$ is said to be in $N$-subgeneral position if
for any subset $R\subset\\{1,...,q\\}$ with $\sharp R=N+1$, $\bigcap_{i\in
R}H_{i}=\\{0\\}$, i.e., $\mathrm{rank}\\{H_{i}\\}_{i\in R}=M.$
By Lemmas 3.3 and 3.4 in [9], we have the following.
###### Lemma 3.2.
Let $\\{H_{i}\\}_{i=1}^{q}$ be $q$ hyperplanes in ${\mathbf{C}}^{k+1}$ in
$N$-subgeneral position, and assume that $q>2N-k+1$. Then there are positive
rational constants $\omega_{i}\ (1\leq i\leq q)$ satisfying the following:
i) $0<\omega_{i}\leq 1,\ \forall i\in\\{1,...,q\\}$,
ii) Setting $\tilde{\omega}=\max_{1\leq j\leq q}\omega_{j}$, one gets
$\sum_{j=1}^{q}\omega_{j}=\tilde{\omega}(q-2N+k-1)+k+1.$
iii) $\dfrac{k+1}{2N-k+1}\leq\tilde{\omega}\leq\dfrac{k}{N}.$
iv) For every $R\subset\\{1,...,q\\}$ with $0<\sharp R\leq N+1$, we have $\sum_{i\in
R}\omega_{i}\leq\mathrm{rank}\\{H_{i}\\}_{i\in R}$.
v) Let $E_{i}\geq 1\ (1\leq i\leq q)$ be arbitrarily given numbers. For every
$R\subset\\{1,...,q\\}$ with $0<\sharp R\leq N+1$, there is a subset $R^{o}\subset R$
such that $\sharp R^{o}=\mathrm{rank}\\{H_{i}\\}_{i\in
R^{o}}=\mathrm{rank}\\{H_{i}\\}_{i\in R}$ and
$\prod_{i\in R}E_{i}^{\omega_{i}}\leq\prod_{i\in R^{o}}E_{i}.$
The above $\omega_{j}$ are called Nochka weights, and $\tilde{\omega}$ is
called the Nochka constant.
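As a quick sanity check (not carried out in the paper), consider the extreme case $N=k$, i.e., hyperplanes in general position. Then iii) forces
$1=\frac{k+1}{2N-k+1}\leq\tilde{\omega}\leq\frac{k}{N}=1,$
so $\tilde{\omega}=1$, and ii) gives $\sum_{j=1}^{q}\omega_{j}=(q-2k+k-1)+k+1=q$; since each $\omega_{j}\leq\tilde{\omega}=1$, this forces $\omega_{j}=1$ for all $j$. This is consistent with the fact that Nochka weights are only genuinely needed in the subgeneral position case $N>k$.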
###### Lemma 3.3.
Let $H_{1},...,H_{q}$ be $q$ hyperplanes in ${\mathbf{C}}^{M}$, $M\geq 2$,
passing through the coordinate origin. Let $k$ be a positive integer, $k\leq
M$. Then there exists a linear subspace $L\subset{\mathbf{C}}^{M}$ of
dimension $k$ such that $L\not\subset H_{i}\ (1\leq i\leq q)$ and
$\mathrm{rank}\\{H_{i_{1}}\cap L,\dots,H_{i_{l}}\cap
L\\}=\mathrm{rank}\\{H_{i_{1}},\dots,H_{i_{l}}\\}$
for every $1\leq l\leq k,1\leq i_{1}<\cdots<i_{l}\leq q.$
Proof. We prove the lemma by induction on $M\ (M\geq k)$ as follows.
$\bullet$ If $M=k$, by choosing $L={\mathbf{C}}^{M}$ we get the desired
conclusion of the lemma.
$\bullet$ If $M=M_{0}\geq k+1$: assume that the lemma holds for every case
with $k\leq M\leq M_{0}-1.$ We now prove that the lemma also holds for the
case $M=M_{0}.$
Indeed, we assume that each hyperplane $H_{i}$ is given by the linear equation
$a_{i1}x_{1}+\cdots+a_{iM_{0}}x_{M_{0}}=0,$
where $a_{ij}\in{\mathbf{C}},$ not all zero, and $(x_{1},...,x_{M_{0}})$ is an
affine coordinate system of ${\mathbf{C}}^{M_{0}}.$ We denote the vector
associated with $H_{i}$ by
$v_{i}=(a_{i1},...,a_{iM_{0}})\in{\mathbf{C}}^{M_{0}}\setminus\\{0\\}$. For
each subset $T$ of $\\{v_{1},...,v_{q}\\}$ satisfying $\sharp T\leq k$, we
denote by $V_{T}$ the vector subspace of ${\mathbf{C}}^{M_{0}}$ generated by
$T$. Since $\dim V_{T}\leq\sharp T\leq k<M_{0}$, $V_{T}$ is a proper vector
subspace of ${\mathbf{C}}^{M_{0}}$. Then $\bigcup_{T}V_{T}$ is nowhere dense
in ${\mathbf{C}}^{M_{0}}$. Hence, there exists a nonzero vector
$v=(a_{1},....,a_{M_{0}})\in{\mathbf{C}}^{M_{0}}\setminus\bigcup_{T}V_{T}.$
Denote by $H$ the hyperplane of ${\mathbf{C}}^{M_{0}}$ defined by
$a_{1}x_{1}+\cdots+a_{M_{0}}x_{M_{0}}=0.$
For each $v_{i}\in\\{v_{1},...,v_{q}\\}$, we have $v\not\in
V_{\\{v_{i}\\}}$, hence $\\{v,v_{i}\\}$ is linearly independent over
${\mathbf{C}}$. It follows that $H_{i}\not\subset H.$ Therefore,
$H^{\prime}_{i}=H_{i}\cap H$ is a hyperplane of $H.$ Also we see that $\dim
H=M_{0}-1$.
Since the lemma holds for $M=M_{0}-1$, there exists a
linear subspace $L\subset H$ of dimension $k$ such that $L\not\subset
H^{\prime}_{i}\ (1\leq i\leq q)$ and
$\mathrm{rank}\\{H^{\prime}_{i_{1}}\cap L,\dots,H^{\prime}_{i_{l}}\cap
L\\}=\mathrm{rank}\\{H^{\prime}_{i_{1}},\dots,H^{\prime}_{i_{l}}\\}$
for every $1\leq l\leq k,1\leq i_{1}<\cdots<i_{l}\leq q.$
Since $L\not\subset H^{\prime}_{i}$, it is easy to see that $L\not\subset
H_{i}$ for each $i$ $(1\leq i\leq q).$ On the other hand, for every $1\leq
l\leq k,1\leq i_{1}<\cdots<i_{l}\leq q$, we see that $v\not\in
V_{\\{v_{i_{1}},...,v_{i_{l}}\\}}$. Then
$\mathrm{rank}\\{v_{i_{1}},...,v_{i_{l}},v\\}=\mathrm{rank}\\{v_{i_{1}},...,v_{i_{l}}\\}+1$.
This implies that
$\displaystyle\mathrm{rank}\\{H^{\prime}_{i_{1}},\dots,H^{\prime}_{i_{l}}\\}$
$\displaystyle=\dim
H-\dim(\bigcap_{j=1}^{l}H^{\prime}_{i_{j}})=M_{0}-1-\dim(H\cap\bigcap_{j=1}^{l}H_{i_{j}})$
$\displaystyle=\mathrm{rank}\\{H_{i_{1}},...,H_{i_{l}},H\\}-1=\mathrm{rank}\\{v_{i_{1}},...,v_{i_{l}},v\\}-1$
$\displaystyle=\mathrm{rank}\\{v_{i_{1}},...,v_{i_{l}}\\}=\mathrm{rank}\\{H_{i_{1}},...,H_{i_{l}}\\}.$
It follows that
$\displaystyle\mathrm{rank}\\{H_{i_{1}}\cap L,\dots,H_{i_{l}}\cap L\\}$
$\displaystyle=\dim L-\dim(L\cap\bigcap_{j=1}^{l}H_{i_{j}})=\dim
L-\dim(\bigcap_{j=1}^{l}(H^{\prime}_{i_{j}}\cap L))$
$\displaystyle=\mathrm{rank}\\{H^{\prime}_{i_{1}}\cap
L,\dots,H^{\prime}_{i_{l}}\cap
L\\}=\mathrm{rank}\\{H_{i_{1}},...,H_{i_{l}}\\}.$
Then we get the desired linear subspace $L$ in this case.
$\bullet$ By induction, the lemma therefore holds for every $M\geq k$. This
finishes the proof of the lemma. $\square$
###### Lemma 3.4.
Let $V$ be a complex projective subvariety of ${\mathbf{P}}^{n}({\mathbf{C}})$
of dimension $k\ (k\leq n)$. Let $Q_{1},...,Q_{q}$ be $q\ (q>2N-k+1)$
hypersurfaces in ${\mathbf{P}}^{n}({\mathbf{C}})$ in $N-$ subgeneral position
with respect to $V$ of the common degree $d.$ Then there are positive rational
constants $\omega_{i}\ (1\leq i\leq q)$ satisfying the following:
i) $0<\omega_{i}\leq 1,\ \forall i\in\\{1,...,q\\}$,
ii) Setting $\tilde{\omega}=\max_{1\leq j\leq q}\omega_{j}$, one gets
$\sum_{j=1}^{q}\omega_{j}=\tilde{\omega}(q-2N+k-1)+k+1.$
iii) $\dfrac{k+1}{2N-k+1}\leq\tilde{\omega}\leq\dfrac{k}{N}.$
iv) For every $R\subset\\{1,...,q\\}$ with $\sharp R=N+1$, we have $\sum_{i\in
R}\omega_{i}\leq k+1$.
v) Let $E_{i}\geq 1\ (1\leq i\leq q)$ be arbitrarily given numbers. For
$R\subset\\{1,...,q\\}$ with $\sharp R=N+1$, there is a subset $R^{o}\subset
R$ such that $\sharp R^{o}=\mathrm{rank}\\{Q_{i}\\}_{i\in R^{o}}=k+1$ and
$\prod_{i\in R}E_{i}^{\omega_{i}}\leq\prod_{i\in R^{o}}E_{i}.$
Proof. We assume that each $Q_{i}$ is given by
$\sum_{I\in\mathcal{I}_{d}}a_{iI}x^{I}=0,$
where $\mathcal{I}_{d}=\\{(i_{0},...,i_{n})\in\mathbf{N}_{0}^{n+1}\ ;\
i_{0}+\cdots+i_{n}=d\\}$, $I=(i_{0},...,i_{n})\in\mathcal{I}_{d}$,
$x^{I}=x_{0}^{i_{0}}\cdots x_{n}^{i_{n}}$ and $a_{iI}\in{\mathbf{C}}\ (1\leq
i\leq q,I\in\mathcal{I}_{d})$. Set
$Q^{*}_{i}(x)=\sum_{I\in\mathcal{I}_{d}}a_{iI}x^{I}.$ Then $Q^{*}_{i}\in H_{d}.$
Taking a ${\mathbf{C}}-$basis of $I_{d}(V)$, we may identify $I_{d}(V)$ with the
${\mathbf{C}}-$vector space ${\mathbf{C}}^{M}$, where $M=H_{V}(d)$. For each
$Q_{i}$, we denote $v_{i}$ the vector in ${\mathbf{C}}^{M}$ which corresponds
to $[Q_{i}^{*}]$ by this identification. We denote by $H_{i}$ the hyperplane
in ${\mathbf{C}}^{M}$ associated with the vector $v_{i}$.
Then for each arbitrary subset $R\subset\\{1,...,q\\}$ with $\sharp R=N+1$, we
have
$\dim(\bigcap_{i\in R}Q_{i}\cap V)\geq\dim V-\mathrm{rank}\\{[Q_{i}]\\}_{i\in
R}=k-\mathrm{rank}\\{H_{i}\\}_{i\in R}.$
Hence
$\mathrm{rank}\\{H_{i}\\}_{i\in R}\geq k-\dim(\bigcap_{i\in R}Q_{i}\cap V)\geq
k-(-1)=k+1.$
By Lemma 3.3, there exists a linear subspace $L\subset{\mathbf{C}}^{M}$ of
dimension $k+1$ such that $L\not\subset H_{i}\ (1\leq i\leq q)$ and
$\mathrm{rank}\\{H_{i_{1}}\cap L,\dots,H_{i_{l}}\cap
L\\}=\mathrm{rank}\\{H_{i_{1}},\dots,H_{i_{l}}\\}$
for every $1\leq l\leq k+1,1\leq i_{1}<\cdots<i_{l}\leq q.$ Hence, for any
subset $R\subset\\{1,...,q\\}$ with $\sharp R=N+1$, since
$\mathrm{rank}\\{H_{i}\\}_{i\in R}\geq k+1,$ there exists a subset
$R^{\prime}\subset R$ with $\sharp R^{\prime}=k+1$ and
$\mathrm{rank}\\{H_{i}\\}_{i\in R^{\prime}}=k+1.$ It implies that
$\mathrm{rank}\\{H_{i}\cap L\\}_{i\in R}\geq\mathrm{rank}\\{H_{i}\cap
L\\}_{i\in R^{\prime}}=\mathrm{rank}\\{H_{i}\\}_{i\in R^{\prime}}=k+1.$
This yields that $\mathrm{rank}\\{H_{i}\cap L\\}_{i\in R}=k+1,$ since $\dim
L=k+1$. Therefore, $\\{H_{i}\cap L\\}_{i=1}^{q}$ is a family of $q$
hyperplanes in $L$ in $N$-subgeneral position.
By Lemma 3.2, there exist Nochka weights $\\{\omega_{i}\\}_{i=1}^{q}$ for the
family $\\{H_{i}\cap L\\}_{i=1}^{q}$ in $L.$ It is clear that assertions
(i)-(iv) are automatically satisfied. Now for $R\subset\\{1,...,q\\}$ with
$\sharp R=N+1$, by Lemma 3.2(v) we have
$\sum_{i\in R}\omega_{i}\leq\mathrm{rank}\\{H_{i}\cap L\\}_{i\in R}=k+1$
and there is a subset $R^{o}\subset R$ such that:
$\displaystyle\sharp R^{o}=\mathrm{rank}\\{H_{i}\cap L\\}_{i\in
R^{0}}=\mathrm{rank}\\{H_{i}\cap L\\}_{i\in R}=k+1,$ $\displaystyle\prod_{i\in
R}E_{i}^{\omega_{i}}\leq\prod_{i\in R^{o}}E_{i},\ \ \forall E_{i}\geq 1\
(1\leq i\leq q),$ $\displaystyle\mathrm{rank}\\{Q_{i}\\}_{i\in
R^{0}}=\mathrm{rank}\\{H_{i}\cap L\\}_{i\in R^{0}}=k+1.$
Hence the assertion (v) is also satisfied.
The lemma is proved. $\square$
## 4. Second main theorems for hypersurfaces
Let $\\{Q_{i}\\}_{i\in R}$ be a set of hypersurfaces in
${\mathbf{P}}^{n}({\mathbf{C}})$ of the common degree $d$. Assume that each
$Q_{i}$ is defined by
$\sum_{I\in\mathcal{I}_{d}}a_{iI}x^{I}=0,$
where $\mathcal{I}_{d}=\\{(i_{0},...,i_{n})\in\mathbf{N}_{0}^{n+1}\ ;\
i_{0}+\cdots+i_{n}=d\\}$, $I=(i_{0},...,i_{n})\in\mathcal{I}_{d},$
$x^{I}=x_{0}^{i_{0}}\cdots x_{n}^{i_{n}}$ and $(x_{0}:\cdots:x_{n})$ is
homogeneous coordinates of ${\mathbf{P}}^{n}({\mathbf{C}})$.
Let $f:{\mathbf{C}}^{m}\longrightarrow V\subset{\mathbf{P}}^{n}({\mathbf{C}})$
be an algebraically nondegenerate meromorphic mapping into $V$ with a reduced
representation $f=(f_{0}:\cdots:f_{n})$. We define
$Q_{i}(f)=\sum_{I\in\mathcal{I}_{d}}a_{iI}f^{I},$
where $f^{I}=f_{0}^{i_{0}}\cdots f_{n}^{i_{n}}$ for $I=(i_{0},...,i_{n})$.
Then we see that $f^{*}Q_{i}=\nu_{Q_{i}(f)}$ as divisors.
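For a concrete illustration of this pull-back (an example added here, not taken from the paper), take $m\geq 2$, $n=2$, the conic $Q:x_{0}x_{2}-x_{1}^{2}=0$ (so $d=2$) and $f=(1:z_{1}:z_{1}^{2}+z_{2}^{2})$, which is an algebraically nondegenerate meromorphic mapping of ${\mathbf{C}}^{m}$ into ${\mathbf{P}}^{2}({\mathbf{C}})$. Then
$Q(f)=f_{0}f_{2}-f_{1}^{2}=(z_{1}^{2}+z_{2}^{2})-z_{1}^{2}=z_{2}^{2},$
so the divisor $\nu_{Q(f)}$ equals $2$ along $\\{z_{2}=0\\}$, its truncation to level $1$ equals $1$ there, and accordingly $N_{Q(f)}(r)=2N^{[1]}_{Q(f)}(r)$.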
###### Lemma 4.1.
Let $\\{Q_{i}\\}_{i\in R}$ be a set of hypersurfaces in
${\mathbf{P}}^{n}({\mathbf{C}})$ of the common degree $d$ and let $f$ be a
meromorphic mapping of ${\mathbf{C}}^{m}$ into $V$. Assume that $\bigcap_{i\in R}Q_{i}\cap
V=\emptyset$. Then there exist positive constants $\alpha$ and $\beta$ such
that
$\alpha||f||^{d}\leq\max_{i\in R}|Q_{i}(f)|\leq\beta||f||^{d}.$
Proof. Let $(x_{0}:\cdots:x_{n})$ be homogeneous coordinates of
${\mathbf{P}}^{n}({\mathbf{C}})$. Assume that each $Q_{i}$ is defined by:
$\sum_{I\in\mathcal{I}_{d}}a_{iI}x^{I}=0.$ Set
$Q_{i}(x)=\sum_{I\in\mathcal{I}_{d}}a_{iI}x^{I}$ and consider the following
function
$h(x)=\dfrac{\max_{i\in R}|Q_{i}(x)|}{||x||^{d}},$
where $||x||=(\sum_{i=0}^{n}|x_{i}|^{2})^{\frac{1}{2}}$.
We see that $h$ is a positive continuous function on $V$. By the
compactness of $V$, there exist positive constants $\alpha$ and $\beta$ such
that $\alpha=\min_{x\in V}h(x)$ and $\beta=\max_{x\in V}h(x)$. Since $f$ takes
values in $V$, we therefore have
$\alpha||f||^{d}\leq\max_{i\in R}|Q_{i}(f)|\leq\beta||f||^{d}.$
The lemma is proved. $\square$
###### Lemma 4.2.
Let $\\{Q_{i}\\}_{i=1}^{q}$ be a set of $q$ hypersurfaces in
${\mathbf{P}}^{n}({\mathbf{C}})$ of the common degree $d$. Then there exist
$(H_{V}(d)-k-1)$ hypersurfaces $\\{T_{i}\\}_{i=1}^{H_{V}(d)-k-1}$ in
${\mathbf{P}}^{n}({\mathbf{C}})$ such that for any subset $R\subset\\{1,...,q\\}$
with $\sharp R=\mathrm{rank}\\{Q_{i}\\}_{i\in R}=k+1$, we have
$\mathrm{rank}\\{\\{Q_{i}\\}_{i\in R}\cup\\{T_{i}\\}_{i=1}^{H_{V}(d)-k-1}\\}=H_{V}(d).$
Proof. For each $i\ (1\leq i\leq q)$, take a homogeneous polynomial
$Q^{*}_{i}\in{\mathbf{C}}[x_{0},...,x_{n}]$ of degree $d$ defining $Q_{i}$. We
consider $I_{d}(V)$ as a ${\mathbf{C}}-$vector space of dimension $H_{V}(d)$.
For each subset $R\subset\\{1,...,q\\}$ with $\sharp
R=\mathrm{rank}\\{Q^{*}_{i}\\}_{i\in R}=k+1$, we denote by $V_{R}$ the set of
all vectors $v=(v_{1},...,v_{H_{V}(d)-k-1})\in(I_{d}(V))^{H_{V}(d)-k-1}$ such
that $\\{\\{[Q^{*}_{i}]\\}_{i\in R},v_{1},...,v_{H_{V}(d)-k-1}\\}$ is linearly
dependent over ${\mathbf{C}}$. It is clear that $V_{R}$ is an algebraic subset
of $(I_{d}(V))^{H_{V}(d)-k-1}$. Since $\dim I_{d}(V)=H_{V}(d)$ and
$\mathrm{rank}\\{Q^{*}_{i}\\}_{i\in R}=k+1$, there exists
$v=(v_{1},...,v_{H_{V}(d)-k-1})\in(I_{d}(V))^{H_{V}(d)-k-1}$ such that
$\\{\\{[Q^{*}_{i}]\\}_{i\in R},v_{1},...,v_{H_{V}(d)-k-1}\\}$ is linearly
independent over ${\mathbf{C}}$, i.e., $v\not\in V_{R}$. Therefore $V_{R}$ is
a proper algebraic subset of $(I_{d}(V))^{H_{V}(d)-k-1}$ for each $R.$ This
implies that
$(I_{d}(V))^{H_{V}(d)-k-1}\setminus\bigcup_{R}V_{R}\neq\emptyset.$
Hence, there is
$(T^{+}_{1},...,T^{+}_{H_{V}(d)-k-1})\in(I_{d}(V))^{H_{V}(d)-k-1}\setminus\bigcup_{R}V_{R}$.
For each $T^{+}_{i}$, we take a representative $T^{*}_{i}\in H_{d}$ of it and
take the hypersurface $T_{i}$ in ${\mathbf{P}}^{n}({\mathbf{C}})$ which
is defined by the homogeneous polynomial $T^{*}_{i}$ $(i=1,...,H_{V}(d)-k-1)$. We have
$\mathrm{rank}\\{\\{Q_{i}\\}_{i\in
R}\cup\\{T_{i}\\}_{i=1}^{H_{V}(d)-k-1}\\}=\mathrm{rank}\\{\\{[Q^{*}_{i}]\\}_{i\in
R}\cup\\{[T^{*}_{i}]\\}_{i=1}^{H_{V}(d)-k-1}\\}=H_{V}(d)$
for every subset $R\subset\\{1,...,q\\}$ with $\sharp
R=\mathrm{rank}\\{Q_{i}\\}_{i\in R}=k+1$.
The lemma is proved. $\square$
Proof of Theorem 1.1. We first prove the theorem for the case where all
$Q_{i}\ (i=1,...,q)$ have the same degree $d$.
It is easy to see that there is a positive constant $\beta$ such that
$\beta||f||^{d}\geq|Q_{i}(f)|$ for every $1\leq i\leq q.$ Set
$Q:=\\{1,\cdots,q\\}$. Let $\\{\omega_{i}\\}_{i=1}^{q}$ be as in Lemma 3.4 for
the family $\\{Q_{i}\\}_{i=1}^{q}$. Let $\\{T_{i}\\}_{i=1}^{H_{V}(d)-k-1}$ be $(H_{V}(d)-k-1)$
hypersurfaces in ${\mathbf{P}}^{n}({\mathbf{C}})$, which satisfy Lemma 4.2.
Take a ${\mathbf{C}}-$basis $\\{[A_{i}]\\}_{i=1}^{H_{V}(d)}$ of $I_{d}(V)$,
where $A_{i}\in H_{d}$. Since $f$ is nondegenerate over $I_{d}(V)$,
$\\{A_{i}(f);1\leq i\leq H_{V}(d)\\}$ is linearly independent over
${\mathbf{C}}$. Then there is an admissible set
$\\{\alpha_{1},\cdots,\alpha_{H_{V}(d)}\\}\subset\mathbf{Z}_{+}^{m}$ such that
$W\equiv\det\bigl{(}\mathcal{D}^{\alpha_{j}}A_{i}(f)(1\leq i\leq
H_{V}(d))\bigl{)}_{1\leq j\leq H_{V}(d)}\not\equiv 0$
and $|\alpha_{j}|\leq H_{V}(d)-1,\forall 1\leq j\leq H_{V}(d).$
For each $R^{o}=\\{r^{0}_{1},...,r^{0}_{k+1}\\}\subset\\{1,...,q\\}$ with
$\mathrm{rank}\\{Q_{i}\\}_{i\in R^{o}}=\sharp R^{o}=k+1$, set
$W_{R^{o}}\equiv\det\bigl{(}\mathcal{D}^{\alpha_{j}}Q_{r^{0}_{v}}(f)(1\leq
v\leq k+1),\mathcal{D}^{\alpha_{j}}T_{l}(f)(1\leq l\leq
H_{V}(d)-k-1)\bigl{)}_{1\leq j\leq H_{V}(d)}.$
Since $\mathrm{rank}\\{Q_{r^{0}_{v}}(1\leq v\leq k+1),T_{l}(1\leq l\leq
H_{V}(d)-k-1)\\}=H_{V}(d)$, there exists a nonzero constant $C_{R^{o}}$ such
that $W_{R^{o}}=C_{R^{o}}\cdot W$.
We denote by $\mathcal{R}^{o}$ the family of all subsets $R^{o}$ of
$\\{1,...,q\\}$ satisfying
$\mathrm{rank}\\{Q_{i}\\}_{i\in R^{o}}=\sharp R^{o}=k+1.$
Let $z$ be a fixed point. For each $R\subset Q$ with $\sharp R=N+1,$ we choose
$R^{o}\subset R$ such that $R^{o}\in\mathcal{R}^{o}$ and $R^{o}$ satisfies
Lemma 3.4 v) with respect to numbers
$\bigl{\\{}\dfrac{\beta||f(z)||^{d}}{|Q_{i}(f)(z)|}\bigl{\\}}_{i=1}^{q}$. On
the other hand, there exists $\bar{R}\subset Q$ with $\sharp\bar{R}=N+1$ such
that $|Q_{i}(f)(z)|\leq|Q_{j}(f)(z)|,\forall i\in\bar{R},j\not\in\bar{R}$.
Since $\bigcap_{i\in\bar{R}}Q_{i}\cap V=\emptyset$, by Lemma 4.1 there exists a
positive constant $\alpha_{\bar{R}}$ such that
$\alpha_{\bar{R}}||f||^{d}(z)\leq\max_{i\in\bar{R}}|Q_{i}(f)(z)|.$
Then we see that
$\displaystyle\dfrac{||f(z)||^{d(\sum_{i=1}^{q}\omega_{i})}|W(z)|}{|Q_{1}^{\omega_{1}}(f)(z)\cdots
Q_{q}^{\omega_{q}}(f)(z)|}$
$\displaystyle\leq\dfrac{|W(z)|}{\alpha^{q-N-1}_{\bar{R}}\beta^{N+1}}\prod_{i\in\bar{R}}\left(\dfrac{\beta||f(z)||^{d}}{|Q_{i}(f)(z)|}\right)^{\omega_{i}}$
$\displaystyle\leq
A_{\bar{R}}\dfrac{|W(z)|\cdot||f||^{d(k+1)}(z)}{\prod_{i\in\bar{R}^{o}}|Q_{i}(f)|(z)}$
$\displaystyle\leq
B_{\bar{R}}\dfrac{|W_{\bar{R}^{o}}(z)|\cdot||f||^{dH_{V}(d)}(z)}{\prod_{i\in\bar{R}^{o}}|Q_{i}(f)|(z)\prod_{i=1}^{H_{V}(d)-k-1}|T_{i}(f)|(z)},$
where $A_{\bar{R}},B_{\bar{R}}$ are positive constants.
Put
$S_{\bar{R}}=B_{\bar{R}}\dfrac{|W_{\bar{R}^{o}}|}{\prod_{i\in\bar{R}^{o}}|Q_{i}(f)|\prod_{i=1}^{H_{V}(d)-k-1}|T_{i}(f)|}$.
By the lemma on logarithmic derivative, it is easy to see that
$||\ \int_{S(r)}\log^{+}S_{\bar{R}}(z)\sigma_{m}=o(T_{f}(r)).$
Therefore, for each $z\in{\mathbf{C}}^{m}$, we have
$\log\left(\dfrac{||f(z)||^{d(\sum_{i=1}^{q}\omega_{i})}|W(z)|}{|Q_{1}^{\omega_{1}}(f)(z)\cdots
Q_{q}^{\omega_{q}}(f)(z)|}\right)\leq\log\left(||f||^{dH_{V}(d)}(z)\right)+\sum_{R\subset
Q,\sharp R=N+1}\log^{+}S_{R}.$
Integrating both sides of the above inequality over $S(r)$ and noting that
$\sum_{i=1}^{q}\omega_{i}=\tilde{\omega}(q-2N+k-1)+k+1$, we have
(4.3) $\displaystyle||\
d(q-2N+k-1-\dfrac{H_{V}(d)-k-1}{\tilde{\omega}})T_{f}(r)\leq\sum_{i=1}^{q}\dfrac{\omega_{i}}{\tilde{\omega}}N_{Q_{i}(f)}(r)-\dfrac{1}{\tilde{\omega}}N_{W}(r)+o(T_{f}(r)).$
Claim.
$\sum_{i=1}^{q}\omega_{i}N_{Q_{i}(f)}(r)-N_{W}(r)\leq\sum_{i=1}^{q}\omega_{i}N^{[H_{V}(d)-1]}_{Q_{i}(f)}(r)$.
Indeed, let $z$ be a zero of some $Q_{i}(f)$ with $z\not\in
I(f)=\\{f_{0}=\cdots=f_{n}=0\\}$. Since $\\{Q_{i}\\}_{i=1}^{q}$ is in
$N$-subgeneral position with respect to $V$, $z$ is a zero of at most $N$ of the
functions $Q_{i}(f)$. Without loss of generality, we may assume that $z$ is a zero of
$Q_{i}(f)$ exactly for $1\leq i\leq p$, where $p\leq N$, and is not a zero of $Q_{i}(f)$ for
$i>p$. Put $R=\\{1,...,N+1\\}$, choose $R^{1}\subset R$ with $\sharp
R^{1}=\mathrm{rank}\\{Q_{i}\\}_{i\in R^{1}}=k+1$ and $R^{1}$ satisfies Lemma
3.4 v) with respect to numbers
$\bigl{\\{}e^{\max\\{\nu_{Q_{i}(f)}(z)-H_{V}(d)+1,0\\}}\bigl{\\}}_{i=1}^{q}.$
Then we have
$\displaystyle\sum_{i\in
R}\omega_{i}\max\\{\nu_{Q_{i}(f)}(z)-H_{V}(d)+1,0\\}\leq\sum_{i\in
R^{1}}\max\\{\nu_{Q_{i}(f)}(z)-H_{V}(d)+1,0\\}.$
Then, it yields that
$\nu_{W}(z)=\nu_{W_{R^{1}}}(z)\geq\sum_{i\in
R^{1}}\max\\{\nu_{Q_{i}(f)}(z)-H_{V}(d)+1,0\\}\geq\sum_{i\in
R}\omega_{i}\max\\{\nu_{Q_{i}(f)}(z)-H_{V}(d)+1,0\\}.$
Thus
$\displaystyle\sum_{i=1}^{q}\omega_{i}\nu_{Q_{i}(f)}(z)-\nu_{W}(z)=\sum_{i\in
R}\omega_{i}\nu_{Q_{i}(f)}(z)-\nu_{W}(z)$ $\displaystyle=\sum_{i\in
R}\omega_{i}\min\\{\nu_{Q_{i}(f)}(z),H_{V}(d)-1\\}+\sum_{i\in
R}\omega_{i}\max\\{\nu_{Q_{i}(f)}(z)-H_{V}(d)+1,0\\}-\nu_{W}(z)$
$\displaystyle\leq\sum_{i\in
R}\omega_{i}\min\\{\nu_{Q_{i}(f)}(z),H_{V}(d)-1\\}=\sum_{i=1}^{q}\omega_{i}\min\\{\nu_{Q_{i}(f)}(z),H_{V}(d)-1\\}.$
Integrating both sides of this inequality, we get
$\sum_{i=1}^{q}\omega_{i}N_{Q_{i}(f)}(r)-N_{W}(r)\leq\sum_{i=1}^{q}\omega_{i}N^{[H_{V}(d)-1]}_{Q_{i}(f)}(r).$
This proves the claim.
Combining the claim and (4.3), we obtain
$\displaystyle||\ d(q-2N+k-1-\dfrac{H_{V}(d)-k-1}{\tilde{\omega}})T_{f}(r)$
$\displaystyle\leq\sum_{i=1}^{q}\dfrac{\omega_{i}}{\tilde{\omega}}N^{[H_{V}(d)-1]}_{Q_{i}(f)}(r)+o(T_{f}(r))$
$\displaystyle\leq\sum_{i=1}^{q}N^{[H_{V}(d)-1]}_{Q_{i}(f)}(r)+o(T_{f}(r)).$
Since $\tilde{\omega}\geq\dfrac{k+1}{2N-k+1}$, the above inequality implies
that
$\biggl{|}\biggl{|}\quad
d\left(q-\dfrac{(2N-k+1)H_{V}(d)}{k+1}\right)T_{f}(r)\leq\sum_{i=1}^{q}N^{[H_{V}(d)-1]}_{Q_{i}(f)}(r)+o(T_{f}(r)).$
Hence, the theorem is proved in the case where all $Q_{i}$ have the same
degree.
We now prove the theorem for the general case where $\deg Q_{i}=d_{i}$.
Applying the above case for $f$ and the hypersurfaces
$Q^{\frac{d}{d_{i}}}_{i}\ (i=1,...,q)$ of the common degree $d$, we have
$\displaystyle\biggl{|}\biggl{|}\quad\left(q-\dfrac{(2N-k+1)H_{V}(d)}{k+1}\right)T_{f}(r)$
$\displaystyle\leq\dfrac{1}{d}\sum_{i=1}^{q}N^{[H_{V}(d)-1]}_{Q^{d/d_{i}}_{i}(f)}(r)+o(T_{f}(r))$
$\displaystyle\leq\sum_{i=1}^{q}\dfrac{1}{d}\frac{d}{d_{i}}N^{[H_{V}(d)-1]}_{Q_{i}(f)}(r)+o(T_{f}(r))$
$\displaystyle=\sum_{i=1}^{q}\dfrac{1}{d_{i}}N^{[H_{V}(d)-1]}_{Q_{i}(f)}(r)+o(T_{f}(r)).$
The theorem is proved. $\square$
## 5. Unicity of meromorphic mappings sharing hypersurfaces
###### Lemma 5.1.
Let $f$ and $g$ be nonconstant meromorphic mappings of ${\mathbf{C}}^{m}$ into
a complex projective subvariety $V$ of ${\mathbf{P}}^{n}({\mathbf{C}})$, $\dim
V=k\ (k\leq n)$. Let $Q_{i}\ (i=1,...,q)$ be hypersurfaces in
${\mathbf{P}}^{n}({\mathbf{C}})$ in $N$-subgeneral position with respect to
$V$, $\deg Q_{i}=d_{i}$, $N\geq n$. Put $d=lcm(d_{1},...,d_{q})$ and
$M=\binom{n+d}{n}-1$. Assume that both $f$ and $g$ are nondegenerate over
$I_{d}(V)$ and that $\mathrm{Zero}Q_{i}(f)=\mathrm{Zero}Q_{i}(g)$ as sets for all
$1\leq i\leq q$ (as holds, in particular, under assumption (ii) of Theorem 1.2).
If $q>\frac{(2N-k+1)H_{V}(d)}{k+1}$ then $||\
T_{f}(r)=O(T_{g}(r))\ \text{ and }\ ||\ T_{g}(r)=O(T_{f}(r)).$
Proof. Using Theorem 1.1 for $f$, we have
$\displaystyle\biggl{|}\biggl{|}\quad\left(q-\dfrac{(2N-k+1)H_{V}(d)}{k+1}\right)T_{f}(r)\leq$
$\displaystyle\sum_{i=1}^{q}\dfrac{1}{d_{i}}N^{[H_{V}(d)-1]}_{Q_{i}(f)}(r)+o(T_{f}(r))$
$\displaystyle\leq$ $\displaystyle\sum_{i=1}^{q}\dfrac{H_{V}(d)-1}{d_{i}}\
N_{Q_{i}(f)}^{[1]}(r)+o(T_{f}(r))$ $\displaystyle=$
$\displaystyle\sum_{i=1}^{q}\dfrac{H_{V}(d)-1}{d_{i}}\
N_{Q_{i}(g)}^{[1]}(r)+o(T_{f}(r))$ $\displaystyle\leq$ $\displaystyle
q(H_{V}(d)-1)\ T_{g}(r)+o(T_{f}(r)).$
Hence $||\quad T_{f}(r)=O(T_{g}(r)).$ Similarly, we get $||\ \
T_{g}(r)=O(T_{f}(r)).$ $\square$
Proof of Theorem 1.2. We assume that $f$ and $g$ have reduced representations
$f=(f_{0}:\cdots:f_{n})$ and $g=(g_{0}:\cdots:g_{n})$ respectively. Replacing
$Q_{i}$ by $Q_{i}^{\frac{d}{d_{i}}}$ if necessary, without loss of generality,
we may assume that $d_{i}=d$ for all $i=1,...,q.$
By Lemma 5.1, we have $||\ T_{f}(r)=O(T_{g}(r))\ \text{ and }\ ||\
T_{g}(r)=O(T_{f}(r)).$ Suppose that $f\neq g$. Then there exist two indices
$s,t\ (0\leq s<t\leq n)$ satisfying
$H:=f_{s}g_{t}-f_{t}g_{s}\not\equiv 0.$
By assumption (ii) of the theorem, we have $H=0$ on
$\bigcup_{i=1}^{q}(\mathrm{Zero}Q_{i}(f)\cup\mathrm{Zero}Q_{i}(g))$.
Combining this with assumption (i), which guarantees that any two of the sets
$\mathrm{Zero}Q_{i}(f)$ intersect only in codimension at least two, we obtain
$\nu^{0}_{H}\geq\sum_{i=1}^{q}\min\\{1,\nu^{0}_{Q_{i}(f)}\\}$
outside an analytic subset of codimension at least two. It follows that
(5.2) $\displaystyle N_{H}(r)\geq\sum_{i=1}^{q}N^{[1]}_{Q_{i}(f)}(r).$
On the other hand, by the definition of the characteristic function and Jensen
formula, we have
$\displaystyle N_{H}(r)$
$\displaystyle=\int_{S(r)}\log|f_{s}g_{t}-f_{t}g_{s}|\sigma_{m}$
$\displaystyle\leq\int_{S(r)}\log||f||\sigma_{m}+\int_{S(r)}\log||g||\sigma_{m}$
$\displaystyle=T_{f}(r)+T_{g}(r).$
Combining this and (5.2), we obtain
$T_{f}(r)+T_{g}(r)\geq\sum_{i=1}^{q}N^{[1]}_{Q_{i}(f)}(r).$
Similarly, we have
$T_{f}(r)+T_{g}(r)\geq\sum_{i=1}^{q}N^{[1]}_{Q_{i}(g)}(r).$
Summing-up both sides of the above two inequalities, we have
(5.3) $\displaystyle 2(T_{f}(r)+T_{g}(r))$
$\displaystyle\geq\sum_{i=1}^{q}N^{[1]}_{Q_{i}(f)}(r)+\sum_{i=1}^{q}N^{[1]}_{Q_{i}(g)}(r).$
From (5.3) and applying Theorem 1.1 for $f$ and $g$, we have
$\displaystyle 2(T_{f}(r)+T_{g}(r))$
$\displaystyle\geq\sum_{i=1}^{q}\dfrac{1}{H_{V}(d)-1}N^{[H_{V}(d)-1]}_{Q_{i}(f)}(r)+\sum_{i=1}^{q}\dfrac{1}{H_{V}(d)-1}N^{[H_{V}(d)-1]}_{Q_{i}(g)}(r)$
$\displaystyle\geq\dfrac{d}{H_{V}(d)-1}\left(q-\dfrac{(2N-k+1)H_{V}(d)}{k+1}\right)(T_{f}(r)+T_{g}(r))+o(T_{f}(r)+T_{g}(r)).$
Letting $r\longrightarrow+\infty$ (outside the exceptional set), we get
$2\geq\frac{d}{H_{V}(d)-1}\left(q-\frac{(2N-k+1)H_{V}(d)}{k+1}\right),\ \text{ i.e., }\
q\leq\frac{2(H_{V}(d)-1)}{d}+\frac{(2N-k+1)H_{V}(d)}{k+1},$ which contradicts
the assumption on $q$.
Hence $f=g$. The theorem is proved. $\square$
## References
* [1] T. T. H. An and H. T. Phuong, An explicit estimate on multiplicity truncation in the Second Main Theorem for holomorphic curves encountering hypersurfaces in general position in projective space, Houston J. Math. 35 (2009), no. 3, 775-786.
* [2] M. Dulock and M. Ru, A uniqueness theorem for holomorphic curves sharing hypersurfaces, Complex Var. Elliptic Equ. 53 (2008), 797-802.
* [3] G. Dethloff and T. V. Tan, A second main theorem for moving hypersurface targets. Houston J. Math. 37 (2011), 79-111.
* [4] G. Dethloff and T. V. Tan, A uniqueness theorem for meromorphic maps with moving hypersurfaces. Publ. Math. Debrecen 78 (2011), 347-357.
* [5] Gerd E. Dethloff, Tran Van Tan and Do Duc Thai, An extension of the Cartan-Nochka second main theorem for hypersurfaces, Internat. J. Math. 22 (2011), 863-885.
* [6] H. Fujimoto, Non-integrated defect relation for meromorphic maps of complete Kähler manifolds into ${\mathbf{P}}^{N_{1}}({\mathbf{C}})\times...\times{\mathbf{P}}^{N_{k}}({\mathbf{C}}),$ Japanese J. Math. 11 (1985), 233-264.
* [7] R. Nevanlinna, Einige Eindeutigkeitssätze in der Theorie der meromorphen Funktionen, Acta Math. 48 (1926), 367-391.
* [8] E. I. Nochka, On the theory of meromorphic functions, Sov. Math. Dokl. 27 (1983), 377-381.
* [9] J. Noguchi, A note on entire pseudo-holomorphic curves and the proof of Cartan-Nochka’s theorem, Kodai Math. J. 28 (2005), 336-346.
* [10] J. Noguchi and T. Ochiai, Introduction to Geometric Function Theory in Several Complex Variables, Trans. Math. Monogr. 80, Amer. Math. Soc., Providence, Rhode Island, 1990.
* [11] M. Ru, A defect relation for holomorphic curves intersecting hypersurfaces, Amer. J. Math. 126 (2004), 215-226.
* [12] B. Shiffman, Introduction to the Carlson - Griffiths equidistribution theory, Lecture Notes in Math. 981 (1983), 44-89.
* [13] D. D. Thai and S. D. Quang, Uniqueness problem with truncated multiplicities of meromorphic mappings in several complex variables for moving targets, Internat. J. Math. 16 (2005), 903-939.
* [14] D. D. Thai and S. D. Quang, Second main theorem with truncated counting function in several complex variables for moving targets, Forum Mathematicum 20 (2008), 145-179.
Department of Mathematics,
Hanoi University of Education,
136-Xuan Thuy, Cau Giay, Hanoi, Vietnam.
E-mail: [email protected]
# Meromorphic mappings have the same inverse images of moving hyperplanes with
truncated multiplicities
Si Duc Quang
###### Abstract.
In 1999, H. Fujimoto proved that there exists an integer $l_{0}$ such that if
two meromorphic mappings $f$ and $g$ of ${\mathbf{C}}^{m}$ into
${\mathbf{P}}^{n}({\mathbf{C}})$ have the same inverse images for $(2n+2)$
hyperplanes in general position with counting multiplicities to level $l_{0}$
then the map $f\times g$ is algebraically degenerate. The purpose of this
paper is to generalize the result of H. Fujimoto to the case of meromorphic
mappings and slowly moving hyperplanes. Also we improve this result by giving
an explicit estimate for the number $l_{0}$.
2010 Mathematics Subject Classification: Primary 32H04, 32A22; Secondary 32A35.
Key words and phrases: Degenerate, meromorphic mapping, truncated multiplicity, hyperplane.
## Introduction
Let $f$ and $g$ be two meromorphic mappings of ${\mathbf{C}}^{m}$ into
${\mathbf{P}}^{n}({\mathbf{C}})$. Let $H_{1},...,H_{q}$ be $q$ hyperplanes of
${\mathbf{P}}^{n}({\mathbf{C}})$ in general position. Denote by
$\nu_{(f,H_{i})}$ the pull back divisor of $H_{i}$ by $f$. In 1975, H.
Fujimoto proved the following.
Theorem A (H. Fujimoto [2, Theorem II]). Assume that
$\nu_{(f,H_{i})}=\nu_{(g,H_{i})}\ (1\leq i\leq q)$, where $q=3n+2$, and that either $f$
or $g$ is linearly non-degenerate over ${\mathbf{C}}$, i.e., its image is not
contained in any hyperplane of ${\mathbf{P}}^{n}({\mathbf{C}})$. Then $f=g$.
We note that in this theorem, the condition $\nu_{(f,H_{i})}=\nu_{(g,H_{i})}\
(1\leq i\leq q)$ means that $f$ and $g$ have the same inverse images with
counting multiplicities for all hyperplanes. In 1999 H. Fujimoto [3]
considered the case where these inverse images are taken with multiplicities
truncated by a level $l_{0}$. He proved the following theorem, in which the
number $q$ of hyperplanes is also reduced.
Theorem B (H. Fujimoto [3, Theorem II]). Let $H_{1},...,H_{2n+2}$ be
hyperplanes of ${\mathbf{P}}^{n}({\mathbf{C}})$ in general position. Then
there exists an integer $l_{0}$ such that, for any two algebraically nondegenerate
meromorphic mappings $f$ and $g$, if
$\min\\{\nu_{(f,H_{i})},l_{0}\\}=\min\\{\nu_{(g,H_{i})},l_{0}\\}\ (1\leq i\leq
2n+2)$ then the mapping $f\times g$ into ${\mathbf{P}}^{n}({\mathbf{C}})\times
{\mathbf{P}}^{n}({\mathbf{C}})$ is algebraically degenerate.
The following questions then arise naturally:
* Are there any results similar to the above results of H. Fujimoto in the case where the fixed hyperplanes are replaced by moving hyperplanes?
* Is there an explicit estimate for the integer $l_{0}$?
The purpose of the present paper is to answer these questions. We
shall generalize and improve Theorem B to the case of moving hyperplanes, and
also give an explicit estimate for the truncation level $l_{0}$. To state our
result, we first recall the following notions.
Let $f$, $a$ be two meromorphic mappings of ${\mathbf{C}}^{m}$ into
${\mathbf{P}}^{n}({\mathbf{C}})$ with reduced representations
$f=(f_{0}:\dots:f_{n})$, $a=(a_{0}:\dots:a_{n})$ respectively. We say that $a$
is “small” with respect to $f$ if $||\ T_{a}(r)=o(T(r,f))$ as $r\to\infty$.
Put $(f,a)=\sum\limits_{i=0}^{n}a_{i}f_{i}$. We also call such an $a$ a slowly
moving hyperplane (with respect to $f$), or a moving target.
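For instance (a trivial but standard special case, noted here for orientation): a fixed hyperplane $H:\sum_{i=0}^{n}c_{i}\omega_{i}=0$ with constant coefficients $c_{i}\in{\mathbf{C}}$ may be viewed as the constant moving hyperplane $a=(c_{0}:\dots:c_{n})$; it satisfies $T_{a}(r)=O(1)$, hence it is small with respect to every nonconstant $f$, and
$(f,a)=\sum_{i=0}^{n}c_{i}f_{i}$
is just the linear form defining $H$ evaluated on a reduced representation of $f$.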
Let $a_{1},\dots,a_{q}$ $(q\geq n+1)$ be $q$ moving hyperplanes of
${\mathbf{C}}^{m}$ into ${\mathbf{P}}^{n}({\mathbf{C}})$ with reduced
representations $a_{i}=(a_{i0}:\dots:a_{in})\ (1\leq i\leq q).$ We say that
$a_{1},\dots,a_{q}$ are located in general position if
$\det(a_{i_{k}l})_{0\leq k,l\leq n}\not\equiv 0$ for any $1\leq i_{0}<i_{1}<...<i_{n}\leq q.$ We
denote by $\mathcal{M}$ the field of all meromorphic functions on
${\mathbf{C}}^{m}$, by $\mathcal{R}\\{a_{i}\\}_{i=1}^{q}$ the smallest subfield
of $\mathcal{M}$ which contains ${\mathbf{C}}$ and all
$\dfrac{a_{jk}}{a_{jl}}\text{ with }a_{jl}\not\equiv 0,$ and by $\mathcal{R}_{f}$ the
field of all small (with respect to $f$) meromorphic functions on
${\mathbf{C}}^{m}.$
Let $V$ be a projective subvariety of ${\mathbf{P}}^{N}({\mathbf{C}})$. Take
homogeneous coordinates $(\omega_{0}:\cdots:\omega_{N})$ of
${\mathbf{P}}^{N}({\mathbf{C}})$. Let $F$ be a meromorphic mapping of
${\mathbf{C}}^{m}$ into $V\subset{\mathbf{P}}^{N}({\mathbf{C}})$ with a reduced
representation $F=(F_{0}:\cdots:F_{N})$.
Definition C. The meromorphic mapping $F$ is said to be algebraically
degenerate over a subfield $\mathcal{R}$ of $\mathcal{M}$ if there exists a
homogeneous polynomial $Q\in\mathcal{R}[\omega_{0},...,\omega_{N}]$ with the
form:
$Q(z)(\omega_{0},...,\omega_{N})=\sum_{I\in\mathcal{I}_{d}}a_{I}(z)\omega^{I},$
where $d$ is an integer, $\mathcal{I}_{d}=\\{(i_{0},...,i_{N})\ ;\ 0\leq
i_{j}\leq d,\sum_{j=0}^{N}i_{j}=d\\},$ $a_{I}\in\mathcal{R}$ and
$\omega^{I}=\omega_{0}^{i_{0}}\cdots\omega_{N}^{i_{N}}$ for
$I=(i_{0},...,i_{N})$, such that
$\displaystyle\mathrm{(i)}$ $\displaystyle\ \
Q(z)(F_{0}(z),...,F_{N}(z))\equiv 0\ \text{ on }{\mathbf{C}}^{m},$
$\displaystyle\mathrm{(ii)}$ $\displaystyle\ \ \exists
z_{0}\in{\mathbf{C}}^{m},Q(z_{0})(\omega_{0},...,\omega_{N})\not\equiv 0\text{
on }V.$
Now let $f$ and $g$ be two meromorphic mappings of ${\mathbf{C}}^{m}$ into
${\mathbf{P}}^{n}({\mathbf{C}})$ with representations
$f=(f_{0}:\cdots:f_{n})\text{ and }g=(g_{0}:\cdots:g_{n}).$
We consider
${\mathbf{P}}^{n}({\mathbf{C}})\times{\mathbf{P}}^{n}({\mathbf{C}})$ as a
projective subvariety of ${\mathbf{P}}^{(n+1)^{2}-1}({\mathbf{C}})$ by Segre
embedding. Then the map $f\times g$ into
${\mathbf{P}}^{n}({\mathbf{C}})\times{\mathbf{P}}^{n}({\mathbf{C}})$ is
algebraically degenerate over a subfield $\mathcal{R}$ of $\mathcal{M}$ if
there exists a nontrivial polynomial
$Q(z)(\omega_{0},...,\omega_{n},\omega_{0}^{\prime},...,\omega_{n}^{\prime})=\sum_{\underset{i_{0}+\cdots+i_{n}=d}{I=(i_{0},...,i_{n})\in\mathbf{Z}^{n+1}_{+}}}\sum_{\underset{j_{0}+\cdots+j_{n}=d^{\prime}}{J=(j_{0},...,j_{n})\in\mathbf{Z}^{n+1}_{+}}}a_{IJ}(z)\omega^{I}\omega^{\prime
J},$
where $d,d^{\prime}$ are positive integers, $a_{IJ}\in\mathcal{R}$, such that
$Q(z)(f_{0}(z),...,f_{n}(z),g_{0}(z),...,g_{n}(z))\equiv 0.$
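As a simple orienting example (added here, not from the paper): if $f=g$, then $f_{i}g_{j}-f_{j}g_{i}\equiv 0$ for all $i,j$, so the bidegree $(1,1)$ polynomial with constant coefficients
$Q(\omega_{0},...,\omega_{n},\omega_{0}^{\prime},...,\omega_{n}^{\prime})=\omega_{0}\omega_{1}^{\prime}-\omega_{1}\omega_{0}^{\prime}$
already witnesses that $f\times g$ is algebraically degenerate over ${\mathbf{C}}$, and hence over any subfield $\mathcal{R}$ of $\mathcal{M}$ containing ${\mathbf{C}}$. The point of the Main Theorem below is that such a relation is already forced by much weaker hypotheses.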
We now generalize and improve Theorem B to the following.
###### Main Theorem.
Let $f$ and $g$ be two meromorphic mappings of ${\mathbf{C}}^{m}$ into
${\mathbf{P}}^{n}({\mathbf{C}})$. Let $a_{1},...,a_{2n+2}$ be slowly (with
respect to $f$) moving hyperplanes of ${\mathbf{P}}^{n}({\mathbf{C}})$ in
general position. Let $l_{0}$ be a positive integer. Assume that $f$ and $g$
are linearly non-degenerate over $\mathcal{R}\\{a_{i}\\}_{i=1}^{2n+2}$ and
$\min\\{\nu_{(f,a_{i})},l_{0}\\}=\min\\{\nu_{(g,a_{i})},l_{0}\\}\ (1\leq i\leq
2n+2).$
If $l_{0}>2n^{3}q^{2}(q-1)(q-2),$ where $q=\binom{2n+2}{n+2}$, then the map
$f\times g$ into
${\mathbf{P}}^{n}({\mathbf{C}})\times{\mathbf{P}}^{n}({\mathbf{C}})$ is
algebraically degenerate over $\mathcal{R}\\{a_{i}\\}_{i=1}^{2n+2}$.
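For a sense of scale of the truncation level (plain arithmetic, worked out here for illustration): for $n=1$ one has $q=\binom{4}{3}=4$, so the theorem requires
$l_{0}>2n^{3}q^{2}(q-1)(q-2)=2\cdot 1\cdot 16\cdot 3\cdot 2=192,$
while for $n=2$ one has $q=\binom{6}{4}=15$ and the bound becomes $l_{0}>16\cdot 225\cdot 14\cdot 13=655200$.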
N.B. Concerning finiteness and degeneracy problems of meromorphic mappings
with moving targets, many results have been given by M. Ru [6], Z. H. Tu [10],
D. D. Thai - S. D. Quang [8], G. Dethloff - T. V. Tan [1] and others. However,
in all of their results an additional assumption is needed, namely that $f$ and
$g$ agree on the inverse images of all targets. This is a strong condition and
it is very hard to verify.
Acknowledgements. The research of the author was supported in part by a
NAFOSTED grant of Vietnam.
## 1. Basic notions and auxiliary results from Nevanlinna theory
(a). We set $||z||=\big{(}|z_{1}|^{2}+\dots+|z_{m}|^{2}\big{)}^{1/2}$ for
$z=(z_{1},\dots,z_{m})\in{\mathbf{C}}^{m}$ and define
$\displaystyle B(r):=\\{z\in{\mathbf{C}}^{m}:||z||<r\\},\quad
S(r):=\\{z\in{\mathbf{C}}^{m}:||z||=r\\}\ (0<r<\infty).$
Define
$v_{m-1}(z):=\big{(}dd^{c}||z||^{2}\big{)}^{m-1}\quad\text{and}\quad\sigma_{m}(z):=d^{c}\log||z||^{2}\land\big{(}dd^{c}\log||z||^{2}\big{)}^{m-1}\ \text{ on }\ {\mathbf{C}}^{m}\setminus\\{0\\}.$
Let $F$ be a nonzero meromorphic function on a domain $\Omega$ in
${\mathbf{C}}^{m}$. For a set $\alpha=(\alpha_{1},...,\alpha_{m})$ of
nonnegative integers, we set $|\alpha|=\alpha_{1}+...+\alpha_{m}$ and
$\mathcal{D}^{\alpha}F=\dfrac{\partial^{|\alpha|}F}{\partial^{\alpha_{1}}z_{1}...\partial^{\alpha_{m}}z_{m}}.$
We denote by $\nu^{0}_{f}$ (resp. $\nu^{\infty}_{f}$) the zero divisor (resp.
pole divisor) of the function $f$.
For a divisor $\nu$ on ${\mathbf{C}}^{m}$, which is regarded as a function on
${\mathbf{C}}^{m}$ with values in $\mathbf{Z}$, and for positive integers
$k,M$ or $M=\infty$, we define the counting function of $\nu$ by
$\displaystyle\nu^{[M]}(z)$ $\displaystyle=\min\ \\{M,\nu(z)\\},$
$\displaystyle\nu_{>k}^{[M]}(z)$
$\displaystyle=\begin{cases}\nu^{[M]}(z)&\text{ if }\nu(z)>k,\\\ 0&\text{ if
}\nu(z)\leq k,\end{cases}$ $\displaystyle n(t)$
$\displaystyle=\begin{cases}\int\limits_{|\nu|\,\cap B(t)}\nu(z)v_{m-1}&\text{
if }m\geq 2,\\\ \sum\limits_{|z|\leq t}\nu(z)&\text{ if }m=1.\end{cases}$
Similarly, we define $n^{[M]}(t),\ n_{>k}^{[M]}(t).$
Define
$N(r,\nu)=\int\limits_{1}^{r}\dfrac{n(t)}{t^{2m-1}}dt\quad(1<r<\infty).$
Similarly, we define $N(r,\nu^{[M]}),\ N(r,\nu_{>k}^{[M]})$ and denote them by
$N^{[M]}(r,\nu),\ N_{>k}^{[M]}(r,\nu)$ respectively.
Let $\varphi:{\mathbf{C}}^{m}\longrightarrow{\mathbf{C}}$ be a meromorphic
function. Define
$N_{\varphi}(r)=N(r,\nu^{0}_{\varphi}),\
N_{\varphi}^{[M]}(r)=N^{[M]}(r,\nu^{0}_{\varphi}),\
N_{\varphi,>k}^{[M]}(r)=N_{>k}^{[M]}(r,\nu^{0}_{\varphi}).$
For brevity we will omit the character [M] if $M=\infty$.
(b). Let $f:{\mathbf{C}}^{m}\longrightarrow{\mathbf{P}}^{n}({\mathbf{C}})$ be
a meromorphic mapping. For arbitrarily fixed homogeneous coordinates
$(\omega_{0}:\cdots:\omega_{n})$ on ${\mathbf{P}}^{n}({\mathbf{C}})$, we take
a reduced representation $f=(f_{0}:\cdots:f_{n})$, which means that each
$f_{i}$ is a holomorphic function on ${\mathbf{C}}^{m}$ and
$f(z)=\big{(}f_{0}(z):\dots:f_{n}(z)\big{)}$ outside the analytic set
$\\{f_{0}=\cdots=f_{n}=0\\}$ of codimension $\geq 2$. Set
$\|f\|=\big{(}|f_{0}|^{2}+\dots+|f_{n}|^{2}\big{)}^{1/2}$.
The characteristic function of $f$ is defined by
$\displaystyle
T(r,f)=\int\limits_{S(r)}\log\|f\|\sigma_{m}-\int\limits_{S(1)}\log\|f\|\sigma_{m}.$
Let $a$ be a meromorphic mapping of ${\mathbf{C}}^{m}$ into
${\mathbf{P}}^{n}({\mathbf{C}})$ with reduced representation
$a=(a_{0}:\dots:a_{n})$. We define
$m_{f,a}(r)=\int\limits_{S(r)}\text{log}\dfrac{||f||\cdot||a||}{|(f,a)|}\sigma_{m}-\int\limits_{S(1)}\text{log}\dfrac{||f||\cdot||a||}{|(f,a)|}\sigma_{m},$
where $\|a\|=\big{(}|a_{0}|^{2}+\dots+|a_{n}|^{2}\big{)}^{1/2}$.
If $f,a:{\mathbf{C}}^{m}\to{\mathbf{P}}^{n}({\mathbf{C}})$ are meromorphic
mappings such that $(f,a)\not\equiv 0$, then the first main theorem for moving
targets in value distribution theory (see [5]) states
$T(r,f)+T(r,a)=m_{f,a}(r)+N_{(f,a)}(r).$
Let $\varphi$ be a nonzero meromorphic function on ${\mathbf{C}}^{m}$, which
is occasionally regarded as a meromorphic map into
${\mathbf{P}}^{1}({\mathbf{C}})$. The proximity function of $\varphi$ is
defined by
$m(r,\varphi):=\int_{S(r)}\mathrm{log}\max\ (|\varphi|,1)\sigma_{m}.$
(c). As usual, by the notation $``||\ P"$ we mean the assertion $P$ holds for
all $r\in[0,\infty)$ excluding a Borel subset $E$ of the interval $[0,\infty)$
with $\int_{E}dr<\infty$.
The following results play essential roles in Nevanlinna theory.
###### Theorem 1.1 (see [7, Theorem 2.1],[9, Theorem 2]).
Let $f=(f_{0}:\cdots:f_{n})$ be a reduced representation of a meromorphic
mapping $f$ of ${\mathbf{C}}^{m}$ into ${\mathbf{P}}^{n}({\mathbf{C}})$.
Assume that $f_{n+1}$ is a holomorphic function such that $f_{0}+\cdots+
f_{n}+f_{n+1}=0$. If $\sum_{i\in I}f_{i}\not\equiv 0$ for every nonempty proper subset
$I\subsetneq\\{0,...,n+1\\}$, then
$||\ T(r,f)\leq\sum_{i=0}^{n+1}N_{f_{i}}^{[n]}(r)+o(T(r,f)).$
###### Theorem 1.2 (see [5], [3, Theorem 5.5]).
Let $f$ be a nonzero meromorphic function on ${\mathbf{C}}^{m}.$ Then
$\biggl{|}\biggl{|}\quad
m\biggl{(}r,\dfrac{\mathcal{D}^{\alpha}(f)}{f}\biggl{)}=O(\mathrm{log}^{+}T(r,f))\
(\alpha\in\mathbf{Z}^{m}_{+}).$
###### Theorem 1.3 (see [5, Theorem 5.2.29]).
Let $f$ be a meromorphic mapping of ${\mathbf{C}}^{m}$ into ${\mathbf{P}}^{n}({\mathbf{C}})$ with a
reduced representation $f=(f_{0}:\cdots:f_{n})$. Suppose that $f_{k}\not\equiv
0$. Then
$T(r,\frac{f_{j}}{f_{k}})\leq
T(r,f)\leq\sum_{j=0}^{n}T(r,\frac{f_{j}}{f_{k}})+O(1).$
(d). Let $h_{1},h_{2},...,h_{p}$ be finitely many nonzero meromorphic
functions on ${\mathbf{C}}^{m}$. By a rational function in logarithmic
derivatives of $h_{j}^{\prime}$s we mean a nonzero meromorphic function
$\varphi$ on ${\mathbf{C}}^{m}$ which is represented as
$\varphi=\dfrac{P(\cdots,\frac{\mathcal{D}^{\alpha}h_{j}}{h_{j}},\cdots)}{Q(\cdots,\frac{\mathcal{D}^{\alpha}h_{j}}{h_{j}},\cdots)}$
with polynomials $P(\cdots,X^{\alpha},\cdots)$ and
$Q(\cdots,X^{\alpha},\cdots)$.
###### Proposition 1.4 (see [3, Proposition 3.4]).
Let $h_{1},h_{2},...,h_{p}\ (p\geq 2)$ be nonzero meromorphic functions on
${\mathbf{C}}^{m}$. Assume that
$h_{1}+h_{2}+\cdots+h_{p}=0$
Then, the set $\\{1,...,p\\}$ of indices has a partition
$\\{1,...,p\\}=J_{1}\cup J_{2}\cup\cdots\cup J_{k},\sharp J_{\alpha}\geq 2\
\forall\ \alpha,J_{\alpha}\cap J_{\beta}=\emptyset\text{ for }\alpha\neq\beta$
such that, for each $\alpha$,
$\displaystyle\mathrm{(i)}$ $\displaystyle\ \sum_{i\in J_{\alpha}}h_{i}=0,$
$\displaystyle\mathrm{(ii)}$ $\dfrac{h_{i}}{h_{i^{\prime}}}\ (i,i^{\prime}\in
J_{\alpha})$ are rational functions in logarithmic derivatives of
$h_{j}^{\prime}$s.
## 2. Proof of Main Theorem
In order to prove the main theorem, we need the following algebraic
propositions.
Let $H_{1},...,H_{2n+1}$ be $(2n+1)$ hyperplanes of
${\mathbf{P}}^{n}({\mathbf{C}})$ in general position given by
$H_{i}:\ \ x_{i0}\omega_{0}+x_{i1}\omega_{1}+\cdots+x_{in}\omega_{n}=0\ (1\leq
i\leq 2n+1).$
We consider the rational map
$\Phi:{\mathbf{P}}^{n}({\mathbf{C}})\times{\mathbf{P}}^{n}({\mathbf{C}})\longrightarrow{\mathbf{P}}^{2n}({\mathbf{C}})$
as follows:
For $v=(v_{0}:v_{1}:\cdots:v_{n}),\
w=(w_{0}:w_{1}:\cdots:w_{n})\in{\mathbf{P}}^{n}({\mathbf{C}})$, we define the
value $\Phi(v,w)=(u_{1}:\cdots:u_{2n+1})\in{\mathbf{P}}^{2n}({\mathbf{C}})$ by
$u_{i}=\frac{x_{i0}v_{0}+x_{i1}v_{1}+\cdots+x_{in}v_{n}}{x_{i0}w_{0}+x_{i1}w_{1}+\cdots+x_{in}w_{n}}\ (1\leq i\leq 2n+1).$
###### Proposition 2.1 (see [3, Proposition 5.9]).
The map $\Phi$ is a birational map of
${\mathbf{P}}^{n}({\mathbf{C}})\times{\mathbf{P}}^{n}({\mathbf{C}})$ onto
${\mathbf{P}}^{2n}({\mathbf{C}})$.
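As a quick numerical illustration of Proposition 2.1 (a sketch of ours, not part of the cited proof), one can check that $\Phi(v,w)$ determines $(v,w)$ up to a common scale: the relations $\langle x_{i},v\rangle-u_{i}\langle x_{i},w\rangle=0$ give $2n+1$ linear conditions on the $2n+2$ coordinates of $(v,w)$, whose kernel is generically one-dimensional. All dimensions and data below are random and chosen only for illustration.

```python
import numpy as np

# Numerical sketch of Proposition 2.1 (illustration only, n = 3, generic data).
rng = np.random.default_rng(0)
n = 3
X = rng.standard_normal((2 * n + 1, n + 1))   # row i = coefficients of the hyperplane H_i
v = rng.standard_normal(n + 1)
w = rng.standard_normal(n + 1)
u = (X @ v) / (X @ w)                         # Phi(v, w) = (u_1 : ... : u_{2n+1})

# Recover (v, w) from u via the kernel of a (2n+1) x (2n+2) linear system.
M = np.hstack([X, -u[:, None] * X])           # encodes <x_i, v> - u_i <x_i, w> = 0
_, _, Vt = np.linalg.svd(M)
kernel = Vt[-1]                               # right singular vector of the smallest singular value
v_rec, w_rec = kernel[: n + 1], kernel[n + 1:]

scale = v[0] / v_rec[0]
print(np.allclose(scale * v_rec, v), np.allclose(scale * w_rec, w))   # True True
```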
Now let $b_{1},...,b_{2n+1}$ be $(2n+1)$ moving hyperplanes of
${\mathbf{P}}^{n}({\mathbf{C}})$ in general position with reduced
representations
$b_{i}=(b_{i0}:b_{i1}:\cdots:b_{in})\ (1\leq i\leq 2n+1).$
Let $f$ and $g$ be two meromorphic mappings of ${\mathbf{C}}^{m}$ into
${\mathbf{P}}^{n}({\mathbf{C}})$ with reduced representations
$f=(f_{0}:\cdots:f_{n})\ \text{ and }\ g=(g_{0}:\cdots:g_{n}).$
Define $h_{i}=\frac{(f,b_{i})}{(g,b_{i})}\ (1\leq i\leq 2n+1)$ and
$h_{I}=\prod_{i\in I}h_{i}$ for each subset $I$ of $\\{1,...,2n+1\\}.$ Set
$\mathcal{I}=\\{I=(i_{1},...,i_{n})\ ;\ 1\leq i_{1}<\cdots<i_{n}\leq 2n+1\\}$.
Let $\mathcal{R}$ be a subfield of $\mathcal{M}$ which contains
$\mathcal{R}\\{b_{i}\\}_{i=1}^{2n+1}$. We have the following proposition.
###### Proposition 2.2.
If there exist functions $A_{I}\in\mathcal{R}\ (I\in\mathcal{I})$, not all
zero, such that
$\sum_{I\in\mathcal{I}}A_{I}h_{I}\equiv 0$
then the map $f\times g$ into
${\mathbf{P}}^{n}({\mathbf{C}})\times{\mathbf{P}}^{n}({\mathbf{C}})$ is
algebraically degenerate over $\mathcal{R}$.
###### Proof.
By changing the homogeneous coordinates of ${\mathbf{P}}^{n}({\mathbf{C}})$,
we may assume that $b_{i0}\not\equiv 0\ (1\leq i\leq 2n+1)$. Since
$b_{1},...,b_{2n+1}$ are in general position, for $1\leq i_{0}<\cdots<i_{n}\leq 2n+1$
we have $\det(b_{i_{j}k})_{0\leq j,k\leq n}\not\equiv 0$.
Therefore the set
$S=\bigl{(}\bigcap_{I\in\mathcal{I}}\\{z\in{\mathbf{C}}^{m}\ ;\
A_{I}(z)=0\\}\bigl{)}\cup\bigcup_{1\leq i_{0}<\cdots<i_{n}\leq
2n+1}\\{z\in{\mathbf{C}}^{m}\ ;\ \det(b_{i_{j}k}(z))_{0\leq j,k\leq n}=0\\}$
is a proper analytic subset of ${\mathbf{C}}^{m}$. Take $z_{0}\not\in S$ and
set $x_{ij}=b_{ij}(z_{0})$.
For $v=(v_{0}:v_{1}:\cdots:v_{n}),\
w=(w_{0}:w_{1}:\cdots:w_{n})\in{\mathbf{P}}^{n}({\mathbf{C}})$, we define the
value $\Phi(v,w)=(u_{1}:\cdots:u_{2n+1})\in{\mathbf{P}}^{2n}({\mathbf{C}})$ as
above. By Proposition 2.1, $\Phi$ is a birational map. This implies that
the function
$\sum_{I\in\mathcal{I}}A_{I}(z_{0})\prod_{i\in
I}\frac{\sum_{j=0}^{n}b_{ij}(z_{0})v_{j}}{\sum_{j=0}^{n}b_{ij}(z_{0})w_{j}}$
is a nontrivial rational function. It follows that
$Q(z)(v_{0},...,v_{n},w_{0},...,w_{n})=\dfrac{1}{\prod_{i=1}^{2n+1}b_{i0}}\sum_{I\in\mathcal{I}}A_{I}(z)\left(\prod_{i\in
I}\sum_{j=0}^{n}b_{ij}(z)v_{j}\right)\times\left(\prod_{i\in
I^{c}}\sum_{j=0}^{n}b_{ij}(z)w_{j}\right),$
where $I^{c}=\\{1,...,2n+1\\}\setminus I$, is a nontrivial polynomial with
coefficients in $\mathcal{R}$. By the assumption of the proposition, it is
clear that
$Q(z)(f_{0}(z),...,f_{n}(z),g_{0}(z),...,g_{n}(z))\equiv 0.$
Hence $f\times g$ is algebraically degenerate over $\mathcal{R}$. ∎
###### Proposition 2.3.
Let $f$ be a meromorphic mapping of ${\mathbf{C}}^{m}$ into
${\mathbf{P}}^{n}({\mathbf{C}})$ and let $b_{1},...,b_{n+1}$ be moving
hyperplanes of ${\mathbf{P}}^{n}({\mathbf{C}})$ in general position with
reduced representations:
$f=(f_{0}:\cdots:f_{n}),\ b_{i}=(b_{i0}:\cdots:b_{in})\ (1\leq i\leq n+1).$
Then for each regular point $z_{0}$ of the analytic subset
$\bigcup_{i=1}^{n+1}\\{z\ ;\ (f,b_{i})(z)=0\\}$ with $z_{0}\not\in I(f)$, we
have
$\min_{1\leq i\leq
n+1}\nu^{0}_{(f,b_{i})}(z_{0})\leq\nu^{0}_{\det\Phi}(z_{0}),$
where $I(f)$ denotes the indeterminacy set of $f$ and $\Phi$ is the matrix
$(b_{ij};1\leq i\leq n+1,0\leq j\leq n).$
###### Proof.
Since $z_{0}\not\in I(f)$, we may assume that $f_{0}(z_{0})\neq 0.$ We
consider the following system of equations
$b_{i0}f_{0}+\cdots+b_{in}f_{n}=(f,b_{i})\ (1\leq i\leq n+1).$
Solving these equations, we obtain
$f_{0}=\frac{\det\Phi^{\prime}}{\det\Phi},$
where $\Phi^{\prime}$ is the matrix obtained from $\Phi$ by replacing the first
column of $\Phi$ by $\left(\begin{array}[]{c}(f,b_{1})\\\ \vdots\\\
(f,b_{n+1})\end{array}\right)$. Therefore, we have
$\nu^{0}_{\det\Phi}(z_{0})=\nu^{0}_{\det\Phi^{\prime}}(z_{0})\geq\min_{1\leq
i\leq n+1}\nu^{0}_{(f,b_{i})}(z_{0}).$
The proposition is proved. ∎
Proof of Main Theorem. Assume that $f,g,a_{i}$ have reduced representations
$f=(f_{0}:\cdots:f_{n}),\ g=(g_{0}:\cdots:g_{n}),\
a_{i}=(a_{i0}:\cdots:a_{in}).$
By Theorem 1.1 we have
$\displaystyle\bigl{|}\bigl{|}\ \dfrac{2n+2}{n+2}T(r,f)$
$\displaystyle\leq\sum_{i=1}^{2n+2}N_{(f,a_{i})}^{[n]}(r)+o(T(r,f))$
$\displaystyle\leq n\cdot\sum_{i=1}^{2n+2}N_{(g,a_{i})}^{[1]}(r)+o(T(r,f))$
$\displaystyle\leq n(2n+2)T(r,g)+o(T(r,f)).$
Then we have $||\ T(r,f)=O(T(r,g))$. Similarly we also have $||\
T(r,g)=O(T(r,f))$.
We suppose contrarily that the map $f\times g$ is algebraically non-degenerate
over $\mathcal{R}\\{a_{i}\\}_{i=1}^{2n+2}.$ Define
$h_{i}=\dfrac{(f,a_{i})}{(g,a_{i})}\ (1\leq i\leq 2n+2).$ Then
$\dfrac{h_{i}}{h_{j}}=\dfrac{(f,a_{i})\cdot(g,a_{j})}{(g,a_{i})\cdot(f,a_{j})}$
does not depend on the choice of representations of $f$ and $g$. Since
$\sum_{k=0}^{n}a_{ik}f_{k}-h_{i}\cdot\sum_{k=0}^{n}a_{ik}g_{k}=0\ (1\leq i\leq
2n+2),$ it implies that
(2.1) $\displaystyle\Phi:=\det\
(a_{i0},...,a_{in},a_{i0}h_{i},...,a_{in}h_{i};1\leq i\leq 2n+2)\equiv 0.$
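The vanishing in (2.1) reflects the fact that every row $(a_{i0},\ldots,a_{in},a_{i0}h_{i},\ldots,a_{in}h_{i})$ is orthogonal to the fixed vector $(f_{0},\ldots,f_{n},-g_{0},\ldots,-g_{n})$. A small numerical sketch of ours, with generic constant data standing in for the meromorphic functions and purely for illustration:

```python
import numpy as np

# Sketch of identity (2.1) at a generic point (n = 2, so 2n+2 = 6 moving targets).
rng = np.random.default_rng(1)
n = 2
a = rng.standard_normal((2 * n + 2, n + 1)) + 1j * rng.standard_normal((2 * n + 2, n + 1))
f = rng.standard_normal(n + 1) + 1j * rng.standard_normal(n + 1)
g = rng.standard_normal(n + 1) + 1j * rng.standard_normal(n + 1)

h = (a @ f) / (a @ g)                      # h_i = (f, a_i) / (g, a_i)
rows = np.hstack([a, h[:, None] * a])      # row i = (a_i0, ..., a_in, a_i0 h_i, ..., a_in h_i)

# Every row annihilates (f_0, ..., f_n, -g_0, ..., -g_n), so the determinant vanishes.
print(np.allclose(rows @ np.concatenate([f, -g]), 0))
print(abs(np.linalg.det(rows)))            # numerically ~ 0
```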
For each subset $I\subset\\{1,2,...,2n+2\\},$ put $h_{I}=\prod_{i\in I}h_{i}$.
Denote by $\mathcal{I}$ the set
$\mathcal{I}=\\{I=(i_{1},...,i_{n+1})\ ;\ 1\leq i_{1}<\cdots<i_{n+1}\leq
2n+2\\}.$
For each $I=(i_{1},...,i_{n+1})\in\mathcal{I}$, define
$\displaystyle A_{I}=(-1)^{\frac{(n+1)(n+2)}{2}+i_{1}+...+i_{n+1}}$
$\displaystyle\times\det(a_{i_{r}l};1\leq r\leq n+1,0\leq l\leq n)$
$\displaystyle\times\det(a_{j_{s}l};1\leq s\leq n+1,0\leq l\leq n),$
where $J=(j_{1},...,j_{n+1})\in\mathcal{I}$ such that $I\cup
J=\\{1,2,...,2n+2\\}.$
We define the following:
$\bullet$ $\mathcal{R}$: the field of rational functions in logarithmic
derivatives of functions in $\mathcal{R}\\{a_{i}\\}_{i=1}^{2n+2}$,
$\bullet$ $G$: the group of all nonzero functions $\varphi$ such that there
exists a positive integer $m$ for which $\varphi^{m}$ is a rational function in
logarithmic derivatives of the $h_{i}$'s with coefficients in
$\mathcal{R}$,
$\bullet$ $\mathcal{H}$: the subgroup of the group $\mathcal{M}/G$ generated
by elements $[h_{1}],...,[h_{2n+2}]$.
Hence $\mathcal{H}$ is a finitely generated torsion-free abelian group. Let
$(x_{1},...,x_{p})$ be a basis of $\mathcal{H}$. Then for each
$i\in\\{1,...,2n+2\\}$, we have
$[h_{i}]=x_{1}^{t_{i1}}\cdots x_{p}^{t_{ip}}.$
Put $t_{i}=(t_{i1},...,t_{ip})\in\mathbf{Z}^{p}$ and denote by $``\leqslant"$
the lexicographical order on $\mathbf{Z}^{p}$. Without loss of generality, we
may assume that
$t_{1}\leqslant t_{2}\leqslant\cdots\leqslant t_{2n+2}.$
Now the equality (2.1) implies that
$\sum_{I\in\mathcal{I}}A_{I}h_{I}=0.$
Applying Proposition 1.4 to the meromorphic functions $A_{I}h_{I}\
(I\in\mathcal{I})$, we obtain a partition
$\mathcal{I}=\mathcal{I}_{1}\cup\cdots\cup\mathcal{I}_{k}$ with
$\mathcal{I}_{\alpha}\neq\emptyset$ and
$\mathcal{I}_{\alpha}\cap\mathcal{I}_{\beta}=\emptyset$ for $\alpha\neq\beta$
such that for each $\alpha$,
(2.2) $\displaystyle\sum_{I\in\mathcal{I}_{\alpha}}A_{I}h_{I}\equiv 0,$ (2.3)
$\displaystyle\frac{A_{I^{\prime}}h_{I^{\prime}}}{A_{I}h_{I}}\
(I,I^{\prime}\in\mathcal{I}_{\alpha})\text{ are rational functions in
logarithmic derivatives of the $A_{J}h_{J}$'s}.$
Moreover, we may assume that each $\mathcal{I}_{\alpha}$ is minimal, i.e., there is no
nonempty proper subset $\mathcal{J}_{\alpha}\subsetneq\mathcal{I}_{\alpha}$ with
$\sum_{I\in\mathcal{J}_{\alpha}}A_{I}h_{I}\equiv 0$.
We distinguish the following two cases:
Case 1. Assume that there exists an index $i_{0}$ such that
$t_{i_{0}}<t_{i_{0}+1}$. We may assume that $i_{0}\leq n+1$ (otherwise we
consider the relation $``\geqslant"$ and change indices of
$\\{h_{1},...,h_{2n+2}\\}$).
We may assume that $I=(1,2,...,n+1)\in\mathcal{I}_{1}$. By the assertion (2.3), for
each $J=(j_{1},...,j_{n+1})\in\mathcal{I}_{1}\ (1\leq j_{1}<\cdots<j_{n+1}\leq
2n+2)$, we have $[h_{I}]=[h_{J}]$. This implies that
$t_{1}+\cdots+t_{n+1}=t_{j_{1}}+\cdots+t_{j_{n+1}}.$
This yields that $t_{j_{i}}=t_{i}\ (1\leq i\leq n+1)$.
Suppose that $j_{i_{0}}>i_{0}$; then $t_{i_{0}}<t_{i_{0}+1}\leqslant
t_{j_{i_{0}}}$, contradicting $t_{j_{i_{0}}}=t_{i_{0}}$. Therefore $j_{i_{0}}=i_{0}$, and
hence $j_{1}=1,...,j_{i_{0}-1}=i_{0}-1.$ We conclude that
$J=(1,...,i_{0},j_{i_{0}+1},...,j_{n+1})$, so that $i_{0}\in J$ for each
$J\in\mathcal{I}_{1}.$
By (2.2), we have
$\sum_{I\in\mathcal{I}_{1}}A_{I}h_{I}=h_{i_{0}}\sum_{I\in\mathcal{I}_{1}}A_{I}h_{I\setminus\\{i_{0}\\}}\equiv
0.$
Thus
$\sum_{I\in\mathcal{I}_{1}}A_{I}h_{I\setminus\\{i_{0}\\}}\equiv 0.$
Then Proposition 2.2 shows that $f\times g$ is algebraically degenerate over
$\mathcal{R}\\{a_{i}\\}_{i=1}^{2n+2}$. This contradicts the supposition.
Case 2. Assume that $t_{1}=\cdots=t_{2n+2}$. It follows that
$\frac{h_{I}}{h_{J}}\in G$ for any $I,J\in\mathcal{I}$. Then we easily see
that $\frac{h_{i}}{h_{j}}\in G$ for all $1\leq i,j\leq 2n+2.$ Hence, there
exists a positive integer $m_{ij}$ such that
$\left(\frac{h_{i}}{h_{j}}\right)^{m_{ij}}$ is a rational function in
logarithmic derivatives of the $h_{s}$'s with coefficients in
$\mathcal{R}$. Therefore, by the lemma on logarithmic derivatives (Theorem 1.2), we have
(2.4) $\displaystyle\begin{split}\biggl{|}\biggl{|}\ \
m\bigl{(}r,\frac{h_{i}}{h_{j}}\bigl{)}&=\frac{1}{m_{ij}}m\bigl{(}r,\left(\frac{h_{i}}{h_{j}}\right)^{m_{ij}}\bigl{)}+O(1)\\\
&=O\bigl{(}\max
m\left(r,\frac{\mathcal{D}^{\alpha}(\frac{a_{sl}}{a_{sk}})}{\frac{a_{sl}}{a_{sk}}}\right)\bigl{)}+O\bigl{(}\max
m\left(r,\frac{\mathcal{D}^{\alpha}(h_{s})}{h_{s}}\right)\bigl{)}+O(1)\\\
&=O\bigl{(}\max
m\left(r,\frac{\mathcal{D}^{\alpha}(f,a_{s})}{(f,a_{s})}\right)\bigl{)}+O\bigl{(}\max
m\left(r,\frac{\mathcal{D}^{\alpha}(g,a_{s})}{(g,a_{s})}\right)\bigl{)}+o(T(r,f))\\\
&=o(T(r,f))+o(T(r,g))=o(T(r,f)).\end{split}$
###### Claim 2.4.
For each $i_{0}\in\\{1,...,2n+2\\},$ we have
$\displaystyle||\
N_{h_{i_{0}}}(r)\leq\dfrac{q^{2}(q-1)(q-2)}{2(l_{0}+1)}T(r,g)+o(T(r,g))$ and
$\displaystyle||\
N_{\frac{1}{h_{i_{0}}}}(r)\leq\dfrac{q^{2}(q-1)(q-2)}{2(l_{0}+1)}T(r,g)+o(T(r,g)).$
Indeed, fix an index $\alpha$. If $\sharp\mathcal{I}_{\alpha}=2$, it is clear that we
have a nontrivial algebraic relation (over
$\mathcal{R}\\{a_{i}\\}_{i=1}^{2n+2}$) among
$f_{0},...,f_{n},g_{0},...,g_{n}$. It follows that $f\times g$ is
algebraically degenerate over $\mathcal{R}\\{a_{i}\\}_{i=1}^{2n+2}$. This is a
contradiction. Hence $\sharp\mathcal{I}_{\alpha}>2$. Assume that
$\mathcal{I}_{\alpha}=\\{I_{0},...,I_{t+1}\\},\ t\geq 1$ and put
$J=I_{0}\cup\cdots\cup I_{t+1}.$ We consider the meromorphic mapping
$F_{\alpha}$ of ${\mathbf{C}}^{m}$ into ${\mathbf{P}}^{t}({\mathbf{C}})$ with
the reduced representation
$F_{\alpha}=(dA_{I_{0}}h_{I_{0}}:\cdots:dA_{I_{t}}h_{I_{t}}),$
where $d$ is a meromorphic function chosen so that this representation is reduced. Then we see that each zero point of
$dA_{I_{i}}h_{I_{i}}\ (0\leq i\leq t+1)$ must be either zero or pole of some
$A_{I_{j}}h_{I_{j}}\ (0\leq j\leq t)$. Then by Theorem 1.1, we have
(2.5) $\displaystyle\begin{split}||\
T(r,F_{\alpha})&\leq\sum_{i=0}^{t+1}N^{[t]}_{dA_{I_{i}}h_{I_{i}}}(r)+o(T(r,F_{\alpha}))\leq\sum_{i=0}^{t+1}(N^{[t]}_{A_{I_{i}}h_{I_{i}}}(r)+N^{[t]}_{\frac{1}{A_{I_{i}}h_{I_{i}}}}(r))+o(T(r,F_{\alpha}))\\\
&\leq\sum_{i=0}^{t+1}(N^{[t]}_{h_{I_{i}}}(r)+N^{[t]}_{\frac{1}{h_{I_{i}}}}(r))+o(T(r,g))\leq
t\sum_{i=0}^{t+1}(N^{[1]}_{h_{I_{i}}}(r)+N^{[1]}_{\frac{1}{h_{I_{i}}}}(r))+o(T(r,g))\\\
&\leq
t(t+2)\sum_{i\in J}(N^{[1]}_{h_{i}}(r)+N^{[1]}_{\frac{1}{h_{i}}}(r))+o(T(r,g))\\\
&\leq
t(t+2)\sum_{i\in J}N^{[1]}_{(g,a_{i}),>l_{0}}(r)+o(T(r,g))\leq\frac{t(t+2)}{l_{0}+1}\sum_{i\in J}N_{(g,a_{i}),>l_{0}}(r)+o(T(r,g))\\\
&\leq\frac{t(t+2)(2n+2)}{l_{0}+1}T(r,g)+o(T(r,g))\leq\frac{q(q+2)(2n+2)}{l_{0}+1}T(r,g)+o(T(r,g)),\end{split}$
where $q=\binom{2n+2}{n+1}.$
Take a regular point $z_{0}$ of the analytic subset
$\bigcup_{i=1}^{2n+2}\\{z\ ;\ (f,a_{i})=0\\}\text{ with }z_{0}\not\in I(f)\cup
I(g).$
We may assume that
$\nu^{0}_{h_{1}}(z_{0})\geq\nu^{0}_{h_{2}}(z_{0})\geq\cdots\geq\nu^{0}_{h_{2n+2}}(z_{0}).$
Set $I=(1,...,n+1)$. Then $I\in\mathcal{I}_{\alpha}$ for some index $\alpha$,
$1\leq\alpha\leq k.$
It is also easy to see that if $1\in I^{\prime}\ \forall
I^{\prime}\in\mathcal{I}_{\alpha}$, then
I^{\prime}\in\mathcal{I}_{\alpha}$, then
$\sum_{I^{\prime}\in\mathcal{I}_{\alpha}}A_{I^{\prime}}h_{I^{\prime}\setminus\\{1\\}}\equiv
0.$
Therefore, Proposition 2.2 shows that $f\times g$ is algebraically degenerate
(over $\mathcal{R}\\{a_{i}\\}_{i=1}^{2n+2}$). This is a contradiction. Hence,
there exists $I^{\prime}\in\mathcal{I}_{\alpha}$ such that $1\not\in
I^{\prime}$. Assume that $I^{\prime}=(j_{1},...,j_{n+1})$ with
$1<j_{1}<\cdots<j_{n+1}$. By Proposition 2.3, we get
$\nu^{0}_{h_{j_{n+1}}}(z_{0})\leq\nu^{0}_{\det(a_{jl};j\in I^{\prime},0\leq l\leq
n)}(z_{0})$. Then we have
$\displaystyle\nu^{0}_{h_{i_{0}}}(z_{0})$
$\displaystyle\leq\nu^{0}_{h_{1}}(z_{0})\leq\nu^{0}_{\frac{A_{I}h_{I}}{A_{I^{\prime}}h_{I^{\prime}}}}(z_{0})$
$\displaystyle+\nu^{0}_{\frac{A_{I^{\prime}}}{A_{I}}}(z_{0})+\nu^{0}_{h_{j_{n+1}}}(z_{0})$
$\displaystyle\leq\nu^{0}_{\frac{A_{I}h_{I}}{A_{I^{\prime}}h_{I^{\prime}}}}(z_{0})+\nu^{0}_{\frac{A_{I^{\prime}}}{A_{I}}}(z_{0})+\nu^{0}_{\det(a_{jl};j\in
I^{\prime},0\leq l\leq n)}(z_{0}).$
Thus we have
$\displaystyle\nu^{0}_{h_{i_{0}}}(z_{0})\leq$
$\displaystyle\sum_{\alpha=1}^{k}\sum_{I,I^{\prime}\in\mathcal{I}_{\alpha}}(\nu^{0}_{\frac{A_{I}h_{I}}{A_{I^{\prime}}h_{I^{\prime}}}}(z_{0})+\nu^{0}_{\frac{A_{I^{\prime}}}{A_{I}}}(z_{0}))$
$\displaystyle+\sum_{1\leq j_{1}<\cdots<j_{n+1}\leq
2n+2}\nu^{0}_{\det(a_{j_{i}l};1\leq i\leq n+1,0\leq l\leq n)}(z_{0}).$
The above inequality holds for all $z_{0}$ outside an analytic subset of
codimension at least two. Integrating both sides of this inequality, we obtain
$\displaystyle||\ N_{h_{i_{0}}}(r)$
$\displaystyle\leq\sum_{\alpha=1}^{k}\sum_{I,I^{\prime}\in\mathcal{I}_{\alpha}}(N_{\frac{A_{I}h_{I}}{A_{I^{\prime}}h_{I^{\prime}}}}(r)+N_{\frac{A_{I^{\prime}}}{A_{I}}}(r))+o(T(r,g))$
$\displaystyle\leq\sum_{\alpha=1}^{k}\sum_{I,I^{\prime}\in\mathcal{I}_{\alpha}}T(r,\frac{A_{I}h_{I}}{A_{I^{\prime}}h_{I^{\prime}}})+o(T(r,g))$
$\displaystyle\leq\sum_{\alpha=1}^{k}\sum_{I,I^{\prime}\in\mathcal{I}_{\alpha}}T(r,F_{\alpha})+o(T(r,g))$
$\displaystyle\leq\sum_{\alpha=1}^{k}\sum_{I,I^{\prime}\in\mathcal{I}_{\alpha}}\frac{q(q+2)(2n+2)}{l_{0}+1}T(r,g)+o(T(r,g))\
\ \ \text{ by (2.5)},$
$\displaystyle\leq\sum_{I,I^{\prime}\in\mathcal{I}}\frac{q(q+2)(2n+2)}{l_{0}+1}T(r,g)+o(T(r,g))$
$\displaystyle=\frac{q^{2}(q-1)(q-2)}{2(l_{0}+1)}T(r,g)+o(T(r,g)).$
Similarly, we get
$||\
N_{\frac{1}{h_{i_{0}}}}(r)\leq\dfrac{q^{2}(q-1)(q-2)}{2(l_{0}+1)}T(r,g)+o(T(r,g)).$
This completes the proof of Claim 2.4.
We now continue the proof of Main Theorem. By changing the homogeneous
coordinates of ${\mathbf{P}}^{n}({\mathbf{C}})$ if necessary, we may assume
that $a_{i0}\not\equiv 0$ for all $1\leq i\leq 2n+2$. Set
$\tilde{a}_{ij}=\frac{a_{ij}}{a_{i0}},\
\tilde{a}_{i}=(\tilde{a}_{i0},...,\tilde{a}_{in}),\
(f,\tilde{a}_{i})=\sum_{j=0}^{n}\tilde{a}_{ij}f_{j}\ \text{and
}(g,\tilde{a}_{i})=\sum_{j=0}^{n}\tilde{a}_{ij}g_{j}.$
Then there exist functions $b_{ij}\in\mathcal{R}\\{a_{i}\\}_{i=1}^{2n+2}\
(n+2\leq i\leq 2n+2,1\leq j\leq n+1)$ such that
$\tilde{a}_{i}=\sum_{j=1}^{n+1}b_{ij}\tilde{a}_{j}.$
By the identity (2.1), we have
$\det\
(\tilde{a}_{i0},...,\tilde{a}_{in},\tilde{a}_{i0}h_{i},...,\tilde{a}_{in}h_{i};1\leq
i\leq 2n+2)\equiv 0.$
This easily implies that
$\det\
(\tilde{a}_{i0}h_{i}-\sum_{j=1}^{n+1}b_{ij}\tilde{a}_{j0}h_{j},...,\tilde{a}_{in}h_{i}-\sum_{j=1}^{n+1}b_{ij}\tilde{a}_{jn}h_{j};\ n+2\leq
i\leq 2n+2)\equiv 0.$
Therefore, the matrix
$\Psi=\bigl{(}\tilde{a}_{i0}h_{i}-\sum_{j=1}^{n+1}b_{ij}\tilde{a}_{j0}h_{j},...,\tilde{a}_{in}h_{i}-\sum_{j=1}^{n+1}b_{ij}\tilde{a}_{jn}h_{j};n+2\leq
i\leq 2n+2\bigl{)}$
has rank at most $n$.
Suppose that ${\mathrm{rank}\,}\Psi<n$. Then, the determinant of the square
submatrix
$\bigl{(}\tilde{a}_{i1}h_{i}-\sum_{j=1}^{n+1}b_{ij}\tilde{a}_{j1}h_{j},...,\tilde{a}_{in}h_{i}-\sum_{j=1}^{n+1}b_{ij}\tilde{a}_{jn}h_{j};n+2\leq
i\leq 2n+1\bigl{)}$
vanishes identically. By Proposition 2.2, it follows that $f\times g$ is
algebraically degenerate over $\mathcal{R}\\{a_{i}\\}_{i=1}^{2n+2}$. This
contradicts the supposition. Hence ${\mathrm{rank}\,}\Psi=n$.
Without loss of generality, we may assume that the determinant of the square
submatrix
$\bigl{(}\tilde{a}_{i1}h_{i}-\sum_{j=1}^{n+1}b_{ij}\tilde{a}_{j1}h_{j},...,\tilde{a}_{in}h_{i}-\sum_{j=1}^{n+1}b_{ij}\tilde{a}_{jn}h_{j};n+2\leq
i\leq 2n+1\bigl{)}$
of $\Psi$ does not vanish identically. On the other hand, we have
$(\tilde{a}_{i0}h_{i}-\sum_{j=1}^{n+1}b_{ij}\tilde{a}_{j0}h_{j})g_{0}+\cdots+(\tilde{a}_{in}h_{i}-\sum_{j=1}^{n+1}b_{ij}\tilde{a}_{jn}h_{j})g_{n}=0\
(n+2\leq i\leq 2n+1).$
Thus
$\displaystyle(\tilde{a}_{i0}\frac{h_{i}}{h_{1}}-\sum_{j=1}^{n+1}b_{ij}\tilde{a}_{j0}\frac{h_{j}}{h_{1}})\frac{g_{0}}{g_{n}}$
$\displaystyle+\cdots+(\tilde{a}_{i(n-1)}\frac{h_{i}}{h_{1}}-\sum_{j=1}^{n+1}b_{ij}\tilde{a}_{j(n-1)}\frac{h_{j}}{h_{1}})\frac{g_{n-1}}{g_{n}}$
$\displaystyle=-\tilde{a}_{in}\frac{h_{i}}{h_{1}}+\sum_{j=1}^{n+1}b_{ij}\tilde{a}_{jn}\frac{h_{j}}{h_{1}}\
(n+2\leq i\leq 2n+1).$
We regard the above identities as a system of $n$ equations in the unknown
variables $\dfrac{g_{0}}{g_{n}},...,\dfrac{g_{n-1}}{g_{n}}$ and solve it to
obtain that $\dfrac{g_{i}}{g_{n}}\ (0\leq i\leq n-1)$ has the form
$\frac{g_{i}}{g_{n}}=\frac{P_{i}}{Q_{i}},$
where $P_{i}$ and $Q_{i}$ are homogeneous polynomials in
$\dfrac{h_{j}}{h_{1}}\ (1\leq j\leq 2n+1)$ of degree $n$ with coefficients in
$\mathcal{R}\\{a_{j}\\}_{j=1}^{2n+2}$. Then by Theorem 1.3 we have
(2.6) $\displaystyle
T(r,g)\leq\sum_{i=0}^{n-1}T(r,\frac{g_{i}}{g_{n}})=\sum_{i=0}^{n-1}T(r,\frac{P_{i}}{Q_{i}})\leq
n^{2}\sum_{j=1}^{2n+1}T(r,\dfrac{h_{j}}{h_{1}})+o(T(r,g)).$
We note that $T(r,\dfrac{h_{1}}{h_{1}})=0$. By the equality (2.4) and Claim
2.4, for each $j\in\\{2,...,2n+1\\}$ we have
(2.7) $\displaystyle\begin{split}||\
T(r,\dfrac{h_{j}}{h_{1}})&=m(r,\dfrac{h_{j}}{h_{1}})+N(r,\nu^{\infty}_{\frac{h_{j}}{h_{1}}})+O(1)\leq
N_{h_{1}}(r)+N_{\frac{1}{h_{j}}}(r)+O(1)\\\
&\leq\dfrac{q^{2}(q-1)(q-2)}{(l_{0}+1)}T(r,g)+o(T(r,g)).\end{split}$
Combining (2.6) and (2.7), we obtain
$T(r,g)\leq 2n^{3}\dfrac{q^{2}(q-1)(q-2)}{(l_{0}+1)}T(r,g)+o(T(r,g)).$
Letting $r\longrightarrow+\infty$, we get
$1\leq 2n^{3}\dfrac{q^{2}(q-1)(q-2)}{(l_{0}+1)}\Leftrightarrow l_{0}+1\leq
2n^{3}q^{2}(q-1)(q-2).$
This is a contradiction.
Then from Case 1 and Case 2, we see that the supposition is impossible. Hence,
$f\times g$ is algebraically degenerate over
$\mathcal{R}\\{a_{i}\\}_{i=1}^{2n+2}.$ The theorem is proved. $\square$
## References
* [1] G. Dethloff and T. V. Tan, Uniqueness problem for meromorphic mappings with truncated multiplicities and moving targets, Nagoya Math. J. 181 (2006), 75-101.
* [2] H. Fujimoto, The uniqueness problem of meromorphic maps into the complex projective space, Nagoya Math. J. 58 (1975), 1-23.
* [3] H. Fujimoto, Uniqueness problem with truncated multiplicities in value distribution theory, II, Nagoya Math. J. 155 (1999), 161-188.
* [4] P. H. Ha and S. D. Quang and D. D. Thai, Uniqueness problem with truncated multiplicities of meromorphic mappings in several complex variables sharing small identical sets for moving target, Intern. J. Math. 21 (2010), 1095-1120.
* [5] J. Noguchi and T. Ochiai, Introduction to Geometric Function Theory in Several Complex Variables, Trans. Math. Monogr. 80, Amer. Math. Soc., Providence, Rhode Island, 1990.
* [6] M. Ru, A uniqueness theorem with moving targets without counting multiplicity, Proc. Amer. Math. Soc. 129 (2001), 2701-2707.
* [7] M. Ru and J. Wang, Truncated second main theorem with moving targets, Trans. Amer. Math. Soc. 356 (2004), 557-571.
* [8] D. D. Thai and S. D. Quang, Uniqueness problem with truncated multiplicities of meromorphic mappings in several complex variables for moving target, Intern. J. Math. 16 (2005), 903-939.
* [9] D. D. Thai and S. D. Quang, Second main theorem with truncated counting function in several complex variables for moving targets, Forum Math. 20 (2008), 163-179.
* [10] Z-H. Tu, Uniqueness problem of meromorphic mappings in several complex variables for moving targets, Tohoku Math. J. 54 (2002), 567-579.
Department of Mathematics
Hanoi University of Education
136-Xuan Thuy, Cau Giay, Hanoi
Vietnam
E-mail address:
[email protected]
|
arxiv-papers
| 2013-02-06T11:27:54 |
2024-09-04T02:49:41.391583
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"authors": "Si Duc Quang and Le Ngoc Quynh",
"submitter": "Duc Quang Si",
"url": "https://arxiv.org/abs/1302.1325"
}
|
1302.1333
|
# Geometric approach to non-relativistic Quantum Dynamics of mixed states
Vicent Gimeno [email protected] Departament de Matemàtiques- Institute of
New Imaging Technologies, Universitat Jaume I, Castelló, Spain. Jose M.
Sotoca [email protected] Departamento de Lenguajes y Sistemas Informáticos-
Institute of New Imaging Technologies, Universitat Jaume I, Castelló, Spain.
(August 27, 2024)
###### Abstract
In this paper we propose a geometrization of the non-relativistic quantum
mechanics for mixed states. Our geometric approach makes use of the Uhlmann’s
principal fibre bundle to describe the space of mixed states and, as a novel
tool, to define a dynamic-dependent metric tensor on the principal manifold,
such that the projection of the geodesic flow to the base manifold gives the
temporal evolution predicted by the von Neumann equation. Using that approach
we can describe every conserved quantum observable as a Killing vector field,
and provide a geometric proof for the Poincare quantum recurrence in a
physical system with finite energy levels.
Mixed states; Fibre bundle; Dynamic metric; Poincare recurrence
###### pacs:
03.65.-w
††preprint: AIP/123-QED
## I Introduction
The geometrization of physical theories is a successful and challenging area
in theoretical physics. The most well known examples are Hamiltonian mechanics
based on symplectic geometry, General Relativity based on semi-Riemannian
geometry and classical Yang-Mills theory which uses fibre bundles Jost (2009).
Geometric ideas have also found a clear utility in non-relativistic quantum
mechanics problems because quantum theory can be formulated in the language of
Hamiltonian phase-space dynamics Kibble (1979). Hence, the quantum theory has
an intrinsic mathematical structure equivalent to Hamiltonian phase-space
dynamics. However, the underlying phase-space is not the same space of
classical mechanics, but the space of quantum mechanics itself, i.e., the
_space of pure states_ or the _space of mixed states_.
Unlike General Relativity or Gauge Theory where the metric tensor or the
connection are related with the physical interaction, the most usual geometric
formulation of the geometry of non-relativistic quantum mechanics is not
dynamic, in the sense that it is insensitive to changes in the Hamiltonian of the
system. Under these assumptions, that approach only makes use of the
differential structure of the Hilbert space for quantum states and the Fubini-
Study metric. See for example the geometric interpretation of Berry’s phase
Bengtsson and Życzkowski (2006).
From a more dynamical point of view, A. Kryukov Kryukov (2007) has stated that
the Schrödinger equation for a pure state $\ket{\alpha_{t}}$ (in equation (1)
and throughout this paper we use the natural units system, with Planck’s
constant set equal to $1$)
$\frac{d}{dt}\ket{\alpha_{t}}=-{i}{H}\ket{\alpha_{t}}\quad,$ (1)
can be considered as a geodesic flow in a certain Riemannian manifold with an
accurate metric which depends on the Hamiltonian of the system.
The goal of this paper is to generalize the work of Kryukov for mixed states.
To this end, we provide an underlying differential manifold to describe mixed
states and a dynamic-dependent Riemannian metric tensor to analyze their
temporal evolution.
The mixed states are characterized by density matrices and the equation which
plays the role of the Schrödinger one is the von Neumann equation Sakurai
(1994)
$\frac{d\rho_{t}}{dt}=-{i}\left[H,\rho_{t}\right]\quad.$ (2)
To obtain the underlying differential manifold, following Uhlmann’s
geometrization for non-relativistic quantum mechanics Uhlmann (1986, 1987,
1989, 1991), we make use of a principal fibre bundle such that its base
manifold is the space of mixed states. Finally, to provide the Riemannian
metric we choose an appropriate metric in the principal bundle, in such a way
that the projection of the geodesic flow in the principal manifold to the base
manifold is just the temporal evolution given by the von Neumann equation.
Among the geometric properties that are observed due to the movement of this
geodesic flow, in this paper we analyze the phase volume conservation
according to the Liouville Theorem. That allows us to give a geometric proof of
the Poincare recurrence theorem, relating it to the recurrence principle for
physical systems with discrete energy levels. Let us emphasize that our
geometric proof for the quantum Poincare recurrence is closer to the classical
mechanics proof Arnol′d (199?) (which also uses the conservation of the volume
in the phase-space evolution) than the previous ones given in the quantum
setting Bocchieri and Loinger (1957); Schulman (1978); Percival (1961).
## II Density matrices space as a base of a principal fibre bundle
The most general state, the so-called _mixed state_ , is represented by a
_density operator_ in the Hilbert space $\mathcal{H}$. In this paper we always
assume that dim$(\mathcal{H})=n<\infty$, being $\mathcal{H}$ a vector space on
the complex field ($\mathcal{H}=\mathbb{C}^{n}$). The density operator $\rho$
is in fact a _density matrix_. Recall that a density matrix is a complex
matrix $\rho$ that satisfies the following properties:
1.
$\rho$ is a hermitian matrix, i.e, the matrix coincides with its conjugate
transpose matrix: $\rho=\rho^{\dagger}$.
2.
$\rho$ is positive: $\rho\geq 0$. This means that every eigenvalue of $\rho$ is
non-negative.
3.
$\rho$ is normalized by the trace $\text{tr}(\rho)=1$.
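These three conditions are straightforward to test numerically. The following short sketch (our own illustration, using generic random data; the dimension and seed are arbitrary) builds a matrix of the form $WW^{\dagger}/\operatorname{tr}(WW^{\dagger})$ and checks the three properties.

```python
import numpy as np

def is_density_matrix(rho, tol=1e-10):
    """Check the three defining properties of a density matrix."""
    hermitian = np.allclose(rho, rho.conj().T, atol=tol)       # 1. rho = rho^dagger
    positive = np.all(np.linalg.eigvalsh(rho) >= -tol)         # 2. rho >= 0
    unit_trace = abs(np.trace(rho).real - 1.0) < tol           # 3. tr(rho) = 1
    return hermitian and positive and unit_trace

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
rho = W @ W.conj().T
rho /= np.trace(rho).real
print(is_density_matrix(rho))          # True
print(is_density_matrix(1j * rho))     # False: not hermitian
```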
Let us denote by $\mathcal{P}$ the space of mixed quantum states. Note that
the space of pure states $\mathcal{P}(\mathcal{H})$ is just
$\mathcal{P}(\mathcal{H})=\left\\{\rho\in\mathcal{P}\,|\,\rho^{2}=\rho\right\\}\quad.$
Recall that the space of quantum pure states has an elegant interpretation as
a $U(1)$-fibre bundle $\mathcal{S}(\mathcal{H})\to\mathcal{P}(\mathcal{H})$.
Following Uhlmann Uhlmann (1986, 1987, 1989, 1991) and the books of Bengtsson and
Chruściński Bengtsson and Życzkowski (2006); Chruściński and
Jamiołkowski (2004), we can use an argument similar to the one for quantum pure
states. The key idea of Uhlmann’s approach is to lift the system density
operator $\rho$, acting on the Hilbert space $\mathcal{H}$, to an extended
Hilbert space
$\mathcal{H}^{\text{ext}}:=\mathcal{H}\otimes\mathcal{H}\quad.$
In quantum information theory Nielsen and Chuang (2000), the procedure of
extension, $\mathcal{H}\to\mathcal{H}^{\text{ext}}$ is known as attaching an
ancilla living in $\mathcal{H}$. Obviously, the space of square matrices
$\mathcal{M}_{n,n}(\mathbb{C})$ ($n$ rows, $n$ columns) over $\mathbb{C}$
(that is a $2n^{2}$ real dimensional manifold) can be identified with
$\mathcal{H}^{\text{ext}}$
$\mathcal{M}_{n,n}(\mathbb{C})\cong\mathcal{H}^{\text{ext}}\quad.$
Since $\text{tr}(WW^{\dagger})$ is a smooth function on the space of square
matrices, by the Regular Level Set Theorem Lee (2003), the set
$\mathcal{S}_{0}:=\left\\{W\in\mathcal{M}_{n,n}(\mathbb{C})\,:\,\text{tr}(WW^{\dagger})=1\right\\}\quad,$
(3)
is a smooth submanifold of $\mathcal{M}_{n,n}(\mathbb{C})$. Actually, it is not
hard to see that $\mathcal{S}_{0}$ is diffeomorphic to the sphere
$\mathbb{S}^{2n^{2}-1}$. If $\rho$ is a mixed state in $\mathcal{P}$, we shall
call an element $W\in\mathcal{S}_{0}$ a _purification_ of $\rho$ if
$\displaystyle\rho=WW^{\dagger}\quad,$ (4)
therefore, we get the space of density matrices $\mathcal{P}$ by the
projection $\pi:\mathcal{S}_{0}\to\mathcal{P}$, where the projection is given
by
$\pi(W)=WW^{\dagger}\quad.$ (5)
Observe that, if $u$ is a unitary matrix (i.e.,
$uu^{\dagger}=u^{\dagger}u=\mathbb{I}_{n}$) then
$\pi(Wu)=\pi(W)\quad.$ (6)
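A brief numerical sketch (ours, with a randomly chosen state; the dimension is illustrative) of the purification (4) and of the gauge freedom (6): we purify a random $\rho$ by its positive square root and check that right multiplication by a unitary does not change $\pi(W)=WW^{\dagger}$.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3

# A faithful density matrix rho and the purification W = sqrt(rho).
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
rho = A @ A.conj().T
rho /= np.trace(rho).real
evals, evecs = np.linalg.eigh(rho)
W = evecs @ np.diag(np.sqrt(evals)) @ evecs.conj().T   # positive square root of rho

print(np.isclose(np.trace(W @ W.conj().T).real, 1.0))  # W lies in S_0, eq. (3)
print(np.allclose(W @ W.conj().T, rho))                # W purifies rho, eq. (4)

# Gauge freedom (6): pi(W u) = pi(W) for any unitary u.
u, _ = np.linalg.qr(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
print(np.allclose((W @ u) @ (W @ u).conj().T, rho))    # True
```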
Moreover, to fix notation recall that the Lie group $U(n)$ is a _Lie
transformation group_ Kobayashi and Nomizu (1996) acting on $\mathcal{S}_{0}$
on the right. In general, a principal fibre bundle Kobayashi and Nomizu (1996)
will be denoted by $P(M,G,\pi)$, being $P$ the _total space_ , $M$ the _base
space_ , $G$ the _structure group_ and $\pi$ the _projection_. For each $x\in
M$, $\pi^{-1}(x)$ is a closed submanifold of $P$, called the _fibre_ over $x$.
If $p$ is a point of $\pi^{-1}(x)$, then $\pi^{-1}(x)$ is the set of points
$\left\\{pa,a\in G\right\\}$, and it is called fibre through $p$.
At this point, an important question to answer is whether
$\mathcal{S}_{0}(\mathcal{P},U(n),\pi)$ is a principal fibre bundle over the
base manifold $\mathcal{P}$ of density matrices. Unfortunately the answer is
no, because $U(n)$ does not act freely on $\mathcal{S}_{0}$. In general,
$Wu=W$ for $W\in\mathcal{S}_{0}$ and $u\in U(n)$ does not imply that
$u=\mathbb{I}_{n}$; observe, however, that if $\det(W)\neq 0$ (i.e., $W$ is an
invertible matrix) then $U(n)$ acts freely on our space. This suggests the
way to describe the space of density matrices. Instead of starting with
$\mathcal{M}_{n,n}(\mathbb{C})$, we start with the subset of invertible
matrices. That subset has the differentiable structure of the Lie group
$\text{GL}(n,\mathbb{C})$. Then, we build a submanifold $\mathcal{S}$ of
$\text{GL}(n,\mathbb{C})$ given by
$\mathcal{S}:=\left\\{W\in\text{GL}(n,\mathbb{C})\,;\,\text{tr}(WW^{\dagger})=1\right\\}\quad.$
(7)
Finally, we obtain the base manifold $\mathcal{P}^{+}$ using the projection
$\pi:\mathcal{S}\to\mathcal{P}^{+}$ given by
$\pi(W)=WW^{\dagger}\quad,$ (8)
and therefore, $\mathcal{S}(\mathcal{P}^{+},U(n),\pi)$ becomes a principal
fibre bundle. Observe that
$\mathcal{P}^{+}=\left\\{\rho\in\mathcal{P}\,|\,\rho>0\right\\}\quad,$
contains only strictly positive (or faithful) density operators. But
$\mathcal{P}$ can be recovered from $\mathcal{P}^{+}$ by continuity
arguments Bengtsson and Życzkowski (2006).
In short, we describe the geometry of density matrices as a base manifold of a
principal fibre bundle consisting of a submanifold $\mathcal{S}$ of the Lie
group $GL(n,\mathbb{C})$ diffeomorphic to the sphere $\mathbb{S}^{2n^{2}-1}$
as a total space and the Lie group $U(n)$ as structure group.
Since $\mathcal{S}(\mathcal{P}^{+},U(n),\pi)$ admits a global section (the
map $\tau$ is well defined because a positive operator admits a unique
positive square root, and it is a section because
$\pi(\tau(\rho))=(\sqrt{\rho})^{2}=\rho$)
$\tau:\mathcal{P}^{+}\to\mathcal{S}$
$\tau(\rho):=\sqrt{\rho}\quad,$ (9)
therefore $\mathcal{S}(\mathcal{P}^{+},U(n),\pi)$ is a trivial bundle from a
topological point of view; that means that Bengtsson and Życzkowski (2006);
Kobayashi and Nomizu (1996)
$\mathcal{S}=\mathcal{P}^{+}\times U(n)\quad.$ (10)
## III Hamiltonian vector field, Dynamic Riemannian metric, SHg-quantum fibre
bundle and Main theorem
In this section, we define a Riemannian metric for dynamical systems and we
study how this metric acts within the tangent vector space of $\mathcal{S}$.
We also discuss its relationship with other metrics such as the Bures metric
or the metric proposed by Kryukov Kryukov (2007).
### III.1 Hamiltonian vector field, dynamic metric and its relation with
other metrics
In order to provide explicit expressions for tangent vectors to $\mathcal{S}$
and the metric tensor, we identify the tangent space
$T_{W}\mathcal{M}_{n,n}(\mathbb{C})$ with $\mathcal{M}_{n,n}(\mathbb{C})$
itself. Our total space $\mathcal{S}$ is a submanifold of the manifold
$\mathcal{M}_{n,n}(\mathbb{C})$, each point $W\in\mathcal{S}$ is a
matrix, and the tangent space $T_{W}\mathcal{S}$ to $\mathcal{S}$ at the point
$W$ is a subspace of the tangent space $T_{W}\mathcal{M}_{n,n}(\mathbb{C})$.
Hence we can use a matrix to describe a point $W\in\mathcal{S}$ and a matrix to
describe a tangent vector $X\in T_{W}\mathcal{S}$ as well.
First of all, note that the Hamiltonian operator $H$ induces a vector field
$h:\mathcal{S}\to T\mathcal{S}$ on $\mathcal{S}$ given by
$h_{W}:=-iHW\quad,$ (11)
where $h_{W}$ denotes the vector field in the point $W\in\mathcal{S}$, i.e,
$h_{W}=h(W)$. The vector field $h$ will be called the _Hamiltonian vector
field_.
For any point $W\in\mathcal{S}$, and any two tangent vectors $X,Y\in
T_{W}\mathcal{S}$, we define the _dynamic Riemannian metric_ $g_{H}(X,Y)$ as
$g_{H}(X,Y):=\frac{1}{2}\text{tr}(X^{\dagger}H^{-2}Y+Y^{\dagger}H^{-2}X)\quad.$
(12)
We denote by $\nabla^{H}$ the Levi-Civita connexion (the unique metric,
torsion-free connexion) given by $g_{H}$. In the definition (12) we use
$H^{-2}$ assuming that $H$ is an invertible matrix, but that in fact makes no
restriction on the Hamiltonian of the system because we can set $H\to
H+\mathbb{I}_{n}$ without changing the underlying physics. It is not hard to
see that $g_{H}$ defines a positive definite inner product in each tangent
space $T_{W}\mathcal{S}$, being therefore $g_{H}$ a Riemannian metric.
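A small numerical sketch of ours of definition (12), with a randomly chosen invertible Hermitian $H$ (the dimension and test data are illustrative assumptions): the bilinear form computed from the trace formula is symmetric and strictly positive on nonzero tangent vectors, as claimed.

```python
import numpy as np

def g_H(X, Y, H):
    """Dynamic metric of eq. (12): (1/2) tr(X^dagger H^{-2} Y + Y^dagger H^{-2} X)."""
    Hinv2 = np.linalg.inv(H @ H)
    return 0.5 * np.trace(X.conj().T @ Hinv2 @ Y + Y.conj().T @ Hinv2 @ X).real

rng = np.random.default_rng(0)
n = 3
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
H = B @ B.conj().T + np.eye(n)          # an invertible Hermitian "Hamiltonian" (illustrative)

X = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
Y = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
print(np.isclose(g_H(X, Y, H), g_H(Y, X, H)))   # symmetric
print(g_H(X, X, H) > 0)                         # positive on a nonzero tangent vector
```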
With that metric tensor $g_{H}$ the (sub)manifold $(\mathcal{S},g_{H})$
becomes a Riemannian manifold. In order to fix the notation, we call
$\left\\{\mathcal{S}(\mathcal{P}^{+},U(n),\pi),h,g_{H}\right\\}$ the _SHg-
quantum fibre bundle of dimension_ $n$.
The rest of this section examines the metric that the base manifold inherits
(Theorem 1) from the dynamic metric on the principal manifold, and its
relation with other metrics.
The tangent space $T_{W}\mathcal{S}$ at the point $W\in\mathcal{S}$ can be
decomposed in its horizontal $H_{W}$ and vertical $V_{W}$ subspaces:
$T_{W}\mathcal{S}=H_{W}\oplus V_{W}\quad.$
Observe moreover that the vertical subspaces $V_{W}$ consist of the vectors tangent
to the fibres. Therefore, any vertical vector $X_{V}\in T_{W}\mathcal{S}$ can
be written as
$X_{V}=W\,A\quad,$
where $A\in\mathfrak{u}(n)$ (i.e, $A$ is an antihermitian matrix). Note that
our metric $g_{H}$ defines a natural connexion as follows: A tangent vector
$X$ at $W$ is horizontal if it is orthogonal to the fibre passing through $W$,
i.e., if
$g_{H}(X,Y)=0\quad,$
for all vertical vector $Y$ at $W$ ($Y\in V_{W}$). Hence $X\in
T_{W}\mathcal{S}$ is horizontal if
$X^{\dagger}H^{-2}W-W^{\dagger}H^{-2}X=0\quad.$ (13)
Therefore, we can define a metric $g^{\mathcal{P}^{+}}_{H}$ in the base
manifold for any point $\rho\in\mathcal{P}^{+}$, given by
$g^{\mathcal{P}^{+}}_{H}(Y,Z):=g_{H}(Y_{\text{Hor}},Z_{\text{Hor}})\quad,$
where $Y,Z\in T_{\rho}\mathcal{P}^{+}$ and $Y_{\text{Hor}}$ (respectively
$Z_{\text{Hor}}$) is the horizontal lift of $Y$ (respectively $Z$).
###### Theorem 1.
The metric $g^{\mathcal{P}^{+}}_{H}$ in the base manifold at any point
$\rho\in\mathcal{P}^{+}$ can be obtained as
$g^{\mathcal{P}^{+}}_{H}(Y,Z):=\frac{1}{2}\text{tr}(H^{-1}G_{Y}H^{-1}Z)\quad,$
where $G_{Y}$ is the unique hermitian matrix satisfying
$H^{-1}YH^{-1}=G_{Y}H^{-1}\rho H^{-1}+H^{-1}\rho H^{-1}G_{Y}\quad.$
Note that the matrix $G_{Y}$ exists and is unique by the existence and uniqueness
of the solution of the Sylvester equation Sylvester (1884); Bartels and Stewart
(1972). Observe moreover that when $H$ is the identity matrix, then
$g^{\mathcal{P}^{+}}_{H}(Y,Z):=\frac{1}{2}\text{tr}(G_{Y}Z)\quad,$
where $G_{Y}$ is the (unique) solution of
$Y=G_{Y}\rho+\rho G_{Y}\quad,$
and that is the Bures metric Dittmann (1999).
###### Proof.
Let $W:\mathbb{R}\to\mathcal{S}$ be a curve such that $\dot{W}$ is a
horizontal vector; then
$(\dot{W})^{\dagger}H^{-2}W=W^{\dagger}H^{-2}\dot{W}\quad.$
Let us define $A=H^{-1}W$, thus
$\dot{A}^{\dagger}A=A^{\dagger}\dot{A}\quad.$
It is easy to see that the latter condition is fulfilled if
$\dot{A}=GA\quad,$ (14)
where $G$ is a Hermitian matrix. Therefore
$\dot{W}=HGH^{-1}W\quad.$
Hence, applying equation (4),
$\pi_{*}(\dot{W})=\dot{W}W^{\dagger}+W\dot{W}^{\dagger}=HGH^{-1}\rho+\rho
H^{-1}GH\quad.$ (15)
Suppose that
$\displaystyle\pi_{*}(\dot{W})=Y\quad\pi_{*}(\dot{V})=Z$ $\displaystyle
W(0)W(0)^{\dagger}=V(0)V(0)^{\dagger}=\rho\quad,$
then
$\displaystyle g_{H}^{\mathcal{P}^{+}}(Y,Z)=$ $\displaystyle
g_{H}(\dot{W},\dot{V})=\frac{1}{2}\text{tr}(\dot{W}^{\dagger}H^{-2}\dot{V}+\dot{V}^{\dagger}H^{-2}\dot{W})$
(16) $\displaystyle=$
$\displaystyle\frac{1}{2}\text{tr}(H^{-1}G_{Y}G_{Z}H^{-1}\rho+H^{-1}G_{Z}G_{Y}H^{-1}\rho)\quad,$
where
$\dot{W}=HG_{Y}H^{-1}W\quad\dot{V}=HG_{Z}H^{-1}V\quad.$
Applying equation (15) in $\pi_{*}(\dot{V})$
$Z=HG_{Z}H^{-1}\rho+\rho H^{-1}G_{Z}H\quad.$
Using the above expression (16) the theorem follows. ∎
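A numerical sketch of Theorem 1 (ours; the state, Hamiltonian and tangent vectors below are random, Hermitian test data chosen only for illustration): we obtain $G_{Y}$ from the Sylvester equation with `scipy.linalg.solve_sylvester`, verify the defining relation, and check that the resulting bilinear form is symmetric.

```python
import numpy as np
from scipy.linalg import solve_sylvester

rng = np.random.default_rng(0)
n = 3

def hermitian(rng, n):
    B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return (B + B.conj().T) / 2

# A faithful state rho, an invertible Hermitian H, and Hermitian traceless tangent vectors Y, Z.
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
rho = A @ A.conj().T
rho /= np.trace(rho).real
H = hermitian(rng, n) + 4 * np.eye(n)
Y = hermitian(rng, n); Y -= np.trace(Y) / n * np.eye(n)
Z = hermitian(rng, n); Z -= np.trace(Z) / n * np.eye(n)

Hinv = np.linalg.inv(H)
M = Hinv @ rho @ Hinv                     # positive definite, so the equation below is solvable

def G_of(Y):
    # Unique Hermitian solution of  G M + M G = H^{-1} Y H^{-1}  (a Sylvester equation).
    return solve_sylvester(M, M, Hinv @ Y @ Hinv)

def g_base(Y, Z):
    # Theorem 1:  g^{P+}_H(Y, Z) = (1/2) tr(H^{-1} G_Y H^{-1} Z).
    return 0.5 * np.trace(Hinv @ G_of(Y) @ Hinv @ Z).real

GY = G_of(Y)
print(np.allclose(GY @ M + M @ GY, Hinv @ Y @ Hinv))   # the defining relation holds
print(np.isclose(g_base(Y, Z), g_base(Z, Y)))          # the inherited metric is symmetric
```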
In the case of pure states, our Hilbert space is $\mathbb{C}^{n}$ and the
tangent space will be $\mathbb{C}^{n}$ too. Following Kryukov's work Kryukov
(2007), we can define a metric $g_{K}(X,Y)$ for any two tangent vectors
$X=(x,x^{*})$, $Y=(y,y^{*})$, by
$g_{K}(X,Y):=\text{Re}\left(\langle H^{-1}X,H^{-1}Y\rangle\right)\quad,$
where $\langle X,Y\rangle=\sum_{i=1}^{n}x_{i}y_{i}^{*}$, therefore
$g_{K}(X,Y):=\frac{1}{2}\left(\langle H^{-1}X,H^{-1}Y\rangle+\langle
H^{-1}Y,H^{-1}X\rangle\right)=\frac{1}{2}\text{tr}(X^{\dagger}H^{-2}Y+Y^{\dagger}H^{-2}X)\quad.$
When $H$ is the identity, we recover the Fubini-Study metric.
### III.2 Geometric structure of the SHg-quantum fibre bundle
As we have previously seen in the SHg-quantum fibre bundle
$\left\\{\mathcal{S}(\mathcal{P}^{+},U(n),\pi),h,g_{H}\right\\}$ of dimension
$n$, $\mathcal{S}(\mathcal{P}^{+},U(n),\pi)$ is a principal (and trivial)
fibre bundle, $\mathcal{S}$ is diffeomorphic to the sphere of dimension
$2n^{2}-1$, $h$ is a vector field on $\mathcal{S}$, and $(\mathcal{S},g_{H})$
is a Riemannian manifold. But the SHg-quantum fibre bundle has more geometric
properties:
###### Theorem 2 (Main theorem).
Let $\left\\{\mathcal{S}(\mathcal{P}^{+},U(n),\pi),h,g_{H}\right\\}$ be a SHg-
quantum fibre bundle of dimension $n$. Then:
1.
$h$ is a Killing vector field of $(\mathcal{S},g_{H})$.
2.
The integral curves $\gamma:I\subset\mathbb{R}\to\mathcal{S}$ of $h$ are
geodesics of $(\mathcal{S},g_{H})$.
3.
The projection on the base manifold $\mathcal{P}^{+}$ of the geodesic $\gamma$
satisfies the von Neumann equation
$\frac{d}{dt}\pi\circ\gamma=-{i}\left[H,\pi\circ\gamma\right]\quad.$ (17)
###### Proof.
Condition (1): In order to prove that $h$ is a Killing vector field, we only
have to show that the flow $\varphi_{t}:\mathcal{S}\to\mathcal{S}$ given by
$\begin{cases}\varphi_{0}(W)=W,\textnormal{ where }W\in\mathcal{S}\\\
\frac{d}{dt}\varphi_{t}(W)|_{t=0}=h_{W}\quad,\end{cases}$ (18)
is an isometry, i.e, for any $X,Y\in T_{W}\mathcal{S}$
$g_{H}({\varphi_{t}}_{*}(X),{\varphi_{t}}_{*}(Y))=g_{H}(X,Y)\quad.$ (19)
Note that
${\varphi_{t}}_{*}(X)=e^{-iHt}X\quad,$ (20)
and
$\displaystyle g_{H}({\varphi_{t}}_{*}(X),{\varphi_{t}}_{*}(Y))=$
$\displaystyle g_{H}(e^{-iHt}X,e^{-iHt}Y)$ (21) $\displaystyle=$
$\displaystyle\frac{1}{2}\text{tr}\left((e^{-iHt}X)^{\dagger}H^{-2}e^{-iHt}Y+(e^{-iHt}Y)^{\dagger}H^{-2}e^{-iHt}X\right)$
$\displaystyle=$
$\displaystyle\frac{1}{2}\text{tr}\left(X^{\dagger}e^{iHt}H^{-2}e^{-iHt}Y+Y^{\dagger}e^{iHt}H^{-2}e^{-iHt}X\right)$
$\displaystyle=$
$\displaystyle\frac{1}{2}\text{tr}\left(X^{\dagger}H^{-2}Y+Y^{\dagger}H^{-2}X\right)=g_{H}(X,Y)\quad.$
Conditions (2) and (3): First of all, observe that if $\gamma$ is an integral
curve of the vector field $h$, i.e.,
$\dot{\gamma}=h_{\gamma}=-iH\gamma\quad,$ (22)
then the projection of $\gamma$ satisfies
$\displaystyle\frac{d}{dt}\pi(\gamma(t))$
$\displaystyle=\frac{d}{dt}\left(\gamma(t)\gamma^{\dagger}(t)\right)=\dot{\gamma}(t)\gamma^{\dagger}(t)+\gamma(t)\dot{\gamma}^{\dagger}(t)=\dot{\gamma}(t)\gamma^{\dagger}(t)+\gamma(t)(\dot{\gamma}(t))^{\dagger}$
(23)
$\displaystyle=-iH\gamma\gamma^{\dagger}(t)+\gamma(t)(-iH\gamma)^{\dagger}=-i[H,\pi(\gamma(t))]\quad.$
Hence, the projection of the integral curves of the vector field $h$ satisfies
the von Neumann equation. So all we have to prove is that these curves are
actually geodesics,
$\nabla_{h_{\gamma}}^{H}h_{\gamma}=0\quad.$ (24)
Since $h$ is a Killing vector field, we only have to prove that $h$ is a
unit vector field (because the integral curves of a Killing vector field of
constant length are geodesics). Namely, we have the equality
$\displaystyle g_{H}(h_{\gamma},h_{\gamma})=$ $\displaystyle
g_{H}(-iH\gamma,-iH\gamma)=\text{tr}\left((-iH\gamma)^{\dagger}H^{-2}(-iH\gamma)\right)$
(25) $\displaystyle=$
$\displaystyle\text{tr}\left(\gamma^{\dagger}HH^{-2}H\gamma\right)=\text{tr}\left(\gamma^{\dagger}\gamma\right)=1\quad.$
Finally, since $h$ is a unit Killing vector field and the
integral curves of any Killing vector field of constant length are geodesics
(see appendix, Theorem 8), the integral curves of $h$ are geodesics.
∎
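A numerical sketch of the main theorem (ours; the Hamiltonian, initial point, evaluation time and step size are illustrative choices): propagating $W(t)=e^{-iHt}W_{0}$ and projecting gives a $\rho(t)$ that satisfies the von Neumann equation, checked here by a finite difference, and the Hamiltonian vector field has unit length in $g_{H}$, as in equation (25).

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
n = 3
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
H = (B + B.conj().T) / 2 + 4 * np.eye(n)            # invertible Hermitian Hamiltonian (illustrative)

A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
W0 = A / np.sqrt(np.trace(A @ A.conj().T).real)     # a point of S: tr(W0 W0^dagger) = 1

def flow(t):
    return expm(-1j * H * t) @ W0                   # integral curve of h through W0

def project(W):
    return W @ W.conj().T                           # pi(W) = W W^dagger

# Von Neumann equation d(rho)/dt = -i [H, rho], checked by a central difference.
t, dt = 0.7, 1e-6
rho = project(flow(t))
drho = (project(flow(t + dt)) - project(flow(t - dt))) / (2 * dt)
print(np.allclose(drho, -1j * (H @ rho - rho @ H), atol=1e-5))

# Unit speed along the flow, eq. (25): g_H(h_W, h_W) = tr(W^dagger W) = 1.
W = flow(t)
hW = -1j * H @ W
print(np.isclose(np.trace(hW.conj().T @ np.linalg.inv(H @ H) @ hW).real, 1.0))
```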
###### Remark.
Let us emphasize that for any hermitian matrix $A=A^{\dagger}$, we can build
the vector field $\mathcal{A}$ on $\mathcal{S}$ given by $-iAW$ for any
$W\in\mathcal{S}$. It is easy to check, as done in equation (21), that if
$[H,A]=0$, then $\mathcal{A}$ is a Killing vector field. Therefore, the set of
operators compatible with the Hamiltonian is related to the set of isometries
of $(\mathcal{S},g_{H})$, and we can identify any conserved quantum observable
with a Killing vector field.
## IV Geometric approach to quantum Poincare recurrence
As we know from the main theorem, $h$ is a Killing vector field on the
principal manifold $(\mathcal{S},g_{H})$ endowed with the dynamic metric
$g_{H}$. Then, the transformations given by the $1-$parametric subgroup
$\varphi_{t}:\mathcal{S}\to\mathcal{S}$ of integral curves of $h$ are distance-preserving and
volume-preserving (see appendix, Theorem 9). These two facts have the following
consequences.
###### Theorem 3 (Insensitivity to Initial Conditions Theorem).
Let $\left\\{\mathcal{S}(\mathcal{P}^{+},U(n),\pi),h,g_{H}\right\\}$ be a SHg-
quantum principal bundle of dimension $n$. Then, for any two points
$W,V\in\mathcal{S}$
$\text{dist}(\varphi_{t}(W),\varphi_{t}(V))=\text{dist}(W,V)\quad,$ (26)
being the $\varphi_{t}$ the $1-$parametric subgroup of transformations given
by the integral curves of the Killing field $h$.
Figure 1: Since the Hamiltonian vector field is a Killing vector field, its
flow $\varphi$ preserves the volume (Liouville theorem) in the sphere
$\mathcal{S}$, where each point can be projected into the space of density
matrices $\mathcal{P}^{+}$.
The classical Liouville theorem Arnol′d (199?) states that the natural volume
form on a symplectic manifold is invariant under the Hamiltonian flows. In our
case, we have the $1-$parametric subgroup of transformations
$\varphi_{t}:\mathcal{S}\to\mathcal{S}$ given by the integral curves of the
Killing vector field $h$ and we can set
###### Theorem 4 (Liouville Type Theorem).
Let $\left\\{\mathcal{S}(\mathcal{P}^{+},U(n),\pi),h,g_{H}\right\\}$ be a SHg-quantum
principal bundle of dimension $n$. Then for any domain
$\Omega\subset\mathcal{S}$
$\operatorname{Vol}(\varphi_{t}(\Omega))=\operatorname{Vol}(\Omega)\quad,$
(27)
being the $\varphi_{t}$ the $1-$parametric subgroup of transformations given
by the integral curves of the Killing vector field $h$.
Using the above theorem, we can therefore state a theorem similar to the
Poincare recurrence theorem Arnol′d (199?).
###### Theorem 5 (Poincare Type Theorem).
Let $\left\\{\mathcal{S}(\mathcal{P}^{+},U(n),\pi),h,g_{H}\right\\}$ be a SHg-
quantum principal bundle of dimension $n$. For any domain
$\Omega\subset\mathcal{S}$ and any time period $T\in\mathbb{R}^{+}$ there
exist a point $x\in\Omega$ and a positive integer $k>0$ such that
$\varphi_{kT}(x)\in\Omega\quad,$ (28)
being $\varphi_{t}:\mathcal{S}\to\mathcal{S}$ the $1-$parametric subgroup of
transformations given by the integral curves of the Killing field $h$.
###### Proof.
Consider the following sequence of domains
$\Omega,\varphi_{T}(\Omega),\varphi_{2T}(\Omega),\cdots,\varphi_{kT}(\Omega),\cdots$
Every domain in the sequence has the same volume
$\operatorname{Vol}(\Omega)$. If the above domains never intersected one
another, their union would have infinite volume; but $\mathcal{S}$ is compact,
so $\operatorname{Vol}(\mathcal{S})<\infty$. Then, there exist $l\geq 0$ and
$m>l$ such that
$\varphi_{lT}(\Omega)\cap\varphi_{mT}(\Omega)\neq\emptyset\quad,$ (29)
so
$\Omega\cap\varphi_{(m-l)T}(\Omega)\neq\emptyset\quad.$ (30)
Setting $k=m-l$ the theorem is proven. ∎
Combining the above theorem with the Insensitivity to Initial Conditions
Theorem, we get
###### Theorem 6 (Strong Poincare Type Theorem).
Let $\left\\{\mathcal{S}(\mathcal{P}^{+},U(n),\pi),h,g_{H}\right\\}$ be a SHg-
quantum principal bundle of dimension $n$. Then, for any point
$W\in\mathcal{S}$, any $\epsilon>0$ and any $T\in\mathbb{R}^{+}$, there exists
a positive integer $k>0$ such that
$\text{dist}(W,\varphi_{kT}(W))<\epsilon\quad,$ (31)
being $\varphi_{t}:\mathcal{S}\to\mathcal{S}$ the $1-$parametric subgroup of
transformations given by the integral curves of the Killing vector field $h$.
###### Proof.
Let us consider the domain
$B_{\frac{\epsilon}{2}}(W)=\left\\{V\in\mathcal{S}\,:\,\text{dist
}(W,V)<\frac{\epsilon}{2}\right\\}\quad.$ (32)
Applying now the Poincare type theorem there must exist $W_{0}\in
B_{\frac{\epsilon}{2}}(W)$ and $k>0$ such that
$\varphi_{kT}(W_{0})\in B_{\frac{\epsilon}{2}}(W)\quad.$ (33)
So,
$\text{dist }(W,\varphi_{kT}(W_{0}))<\frac{\epsilon}{2}\quad.$ (34)
But, by the Insensitivity to Initial Conditions Theorem
$\text{dist }(\varphi_{kT}(W),\varphi_{kT}(W_{0}))=\text{dist
}(W,W_{0})<\frac{\epsilon}{2}\quad.$ (35)
Therefore, applying the triangular inequality
$\text{dist }(W,\varphi_{kT}(W))\leq\text{dist
}(W,\varphi_{kT}(W_{0}))+\text{dist
}(\varphi_{kT}(W_{0}),\varphi_{kT}(W))<\epsilon\quad.$ (36)
∎
### IV.1 Physical systems with discrete energy eigenvalues
Using the previously stated theorems we can give an alternative, more
geometric proof of the well-known Bocchieri and Loinger (1957); Schulman (1978);
Percival (1961) principle of recurrence for physical systems with discrete
energy eigenvalues.
Thus, we define the length $\|A\|$ of a matrix $A$ as follows Percival (1961)
$\|A\|=\sqrt{\text{tr}(A^{\dagger}A)}\quad.$
Then we have the following.
###### Theorem 7.
Let $\rho$ be a mixed state of a quantum system with discrete energy spectrum.
Then, $\rho$ is almost periodic. Namely, for an arbitrarily small positive
error $\epsilon$ the inequality
$\|\rho(t+T)-\rho(t)\|<\epsilon\text{ for all }t$ (37)
is satisfied by infinitely many values of $T$, these values being spread over
the whole range $-\infty$ to $\infty$ so as not to leave arbitrarily long
empty intervals.
###### Proof.
Let $\rho(t)$ be the density matrix of a system with a discrete set of
stationary states, labeled $n=0,1,2,\cdots,$ with energies $E_{n}$, some of
which may be equal if there are degeneracies. In energy representation the
matrix elements are
$\rho_{nn^{\prime}}(t)=\langle n|\rho(t)|n^{\prime}\rangle\quad.$ (38)
Let $T_{n}=|n\rangle\langle n|$ be the projection operator onto the $n$th
stationary state, then
$\rho^{nn^{\prime}}(t)=T_{n}\rho(t)T_{n^{\prime}}\quad,$ (39)
is the matrix whose energy representation has only one nonzero element, equal
to $\rho_{nn^{\prime}}(t)$, located at $(n,n^{\prime})$. These
matrices are orthogonal in density space
$\left(\rho^{nn^{\prime}}(t),\rho^{n^{\prime\prime}n^{\prime\prime\prime}}(t)\right)=\delta_{nn^{\prime\prime}}\delta_{n^{\prime}n^{\prime\prime\prime}}|\rho_{nn^{\prime}}(t)|^{2}\quad,$
(40)
and
$\displaystyle\rho(t)$
$\displaystyle=\sum_{n=0}^{\infty}\sum_{n^{\prime}=0}^{\infty}\rho^{nn^{\prime}}(t)$
(41)
$\displaystyle=\sum_{n=0}^{\infty}\sum_{n^{\prime}=0}^{\infty}\rho^{nn^{\prime}}(0)e^{i\omega_{nn^{\prime}}t}\quad,$
where $\omega_{nn^{\prime}}=(E_{n^{\prime}}-E_{n})$. Now, consider the finite
sum
$\sigma^{NN^{\prime}}(t)=\sum_{n=0}^{N}\sum_{n^{\prime}=0}^{N^{\prime}}\rho^{nn^{\prime}}(t)\quad,$
(42)
as an approximation to $\rho(t)$. The square of the error is
$\displaystyle\|\rho(t)-\sigma^{NN^{\prime}}(t)\|^{2}$
$\displaystyle=\|\sum_{n=N+1}^{\infty}\sum_{n^{\prime}=N^{\prime}+1}^{\infty}\rho^{nn^{\prime}}(t)\|^{2}$
(43)
$\displaystyle=\sum_{n=N+1}^{\infty}\sum_{n^{\prime}=N^{\prime}+1}^{\infty}\|\rho^{nn^{\prime}}(t)\|^{2}$
$\displaystyle=\sum_{n=N+1}^{\infty}\sum_{n^{\prime}=N^{\prime}+1}^{\infty}\|\rho^{nn^{\prime}}(0)\|^{2}\quad.$
The second equality follows from the orthogonality of the
$\rho^{nn^{\prime}}$. Since the error is independent of time,
$\sigma^{NN^{\prime}}(t)$ converges uniformly in $t$ to $\rho(t)$ (in the
$\|\,\|$-norm sense) as $N,N^{\prime}\to\infty$. So, $\rho(t)$ can be approximated by
$\sigma^{NN^{\prime}}(t)$. $\sigma^{NN^{\prime}}(t)$ is a discrete density
with finite energy levels, $\sigma^{NN^{\prime}}(t)\in\mathcal{P}^{+}$, and
the set
$B_{\epsilon}^{\mathcal{P}^{+}}(\sigma^{NN^{\prime}}):=\\{\rho\in\mathcal{P}^{+}\,:\,\|\rho-\sigma^{NN^{\prime}}(t)\|<\epsilon\\}\quad,$
(44)
is an open precompact set in $\mathcal{P}^{+}$. Using the global section given
in equation (9), $\tau(B_{\epsilon})$ will be an open precompact set of
$\mathcal{S}$. But applying the Strong Poincare Type Theorem for any time
period $T>0$ there exists $k>0$ such that
$\text{dist}\left(\tau(\sigma^{NN^{\prime}}(t)),\varphi_{kT}(\tau(\sigma^{NN^{\prime}}(t)))\right)=\text{dist}\left(\tau(\sigma^{NN^{\prime}}(t)),\tau(\sigma^{NN^{\prime}}(t+kT))\right)<\varepsilon\quad,$
(45)
for any $\varepsilon>0$. Namely,
$\tau(\sigma^{NN^{\prime}}(t+kT))\in
B_{\varepsilon}^{\mathcal{S}}(\tau(\sigma^{NN^{\prime}}(t)))\quad,$ (46)
being $B_{\varepsilon}^{\mathcal{S}}(\tau(\sigma^{NN^{\prime}}(t)))$ the
geodesic ball in $\mathcal{S}$ centered at $\tau(\sigma^{NN^{\prime}}(t))$ of
radius $\varepsilon$. Now choosing $\varepsilon$ small enough
$B_{\varepsilon}^{\mathcal{S}}(\tau(\sigma^{NN^{\prime}}(t)))\subset\tau(B^{\mathcal{P}^{+}}_{\epsilon}(\sigma^{NN^{\prime}}(t)))\quad.$
(47)
Therefore by (46)
$\tau(\sigma^{NN^{\prime}}(t+kT))\in\tau(B^{\mathcal{P}^{+}}_{\epsilon}(\sigma^{NN^{\prime}}(t)))\quad.$
(48)
Projecting to the base manifold
$\sigma^{NN^{\prime}}(t+kT)\in
B^{\mathcal{P}^{+}}_{\epsilon}(\sigma^{NN^{\prime}}(t))\quad.$ (49)
By definition of $B^{\mathcal{P}^{+}}_{\epsilon}(\sigma^{NN^{\prime}}(t))$
$\|\sigma^{NN^{\prime}}(t+kT)-\sigma^{NN^{\prime}}(t)\|<\epsilon\quad.$ (50)
And the theorem is proven. ∎
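A small numerical illustration of Theorem 7 (ours; a three-level system with randomly chosen energies and initial state, scanned over a finite time window with a fixed step, all of which are illustrative assumptions): the distance $\|\rho(t)-\rho(0)\|$ repeatedly comes back close to zero at strictly positive times.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3
E = rng.uniform(0.0, 3.0, size=n)                    # discrete energy levels (illustrative)

A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
rho0 = A @ A.conj().T
rho0 /= np.trace(rho0).real                          # initial mixed state, written in the energy basis

def rho(t):
    U = np.diag(np.exp(-1j * E * t))                 # e^{-iHt} with H = diag(E)
    return U @ rho0 @ U.conj().T

def dist(t):
    D = rho(t) - rho0
    return np.sqrt(np.trace(D.conj().T @ D).real)    # the norm ||A|| = sqrt(tr(A^dagger A))

ts = np.linspace(0.0, 1000.0, 100001)                # scan times with step 0.01
d = np.array([dist(t) for t in ts])
near_returns = ts[(d < 0.2) & (ts > 1.0)]
print(len(near_returns) > 0)                         # the state comes back near rho0 at positive times
print(d[ts > 1.0].min(), d.mean())                   # smallest and average distance over the scan
```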
## V Appendix
In this section we recall several well-known results about Killing vector
fields on Riemannian manifolds (for a more detailed treatment see
O’Neill O’Neill (1983)).
###### Theorem 8.
(see also Berestovskiĭ and Nikonorov (2008)) Let $(M,g)$ be a Riemannian
manifold; then any integral curve $\gamma:I\subset\mathbb{R}\to M$ of a
Killing vector field $X$ of constant length $\sqrt{g(X,X)}$ is a geodesic on
$M$.
###### Proof.
Here, we need to show that
$\nabla_{\dot{\gamma}}\dot{\gamma}=\nabla_{X}X=0\quad,$ (51)
but, since $X$ is a Killing vector field, the Lie derivative of the metric is
zero, $L_{X}g=0$, and (see O’Neill O’Neill (1983), proposition 25) $\nabla X$ is
skew-adjoint relative to $g$, so
$g(\nabla_{X}X,W)+g(\nabla_{W}X,X)=0\quad,$ (52)
for any vector field $W$ on $M$. Therefore
$0=g(\nabla_{X}X,W)+1/2W(g(X,X))=g(\nabla_{X}X,W)\quad,$ (53)
then $\nabla_{X}X=0$. ∎
###### Theorem 9.
Let $(M,g)$ be a Riemannian manifold, let $X$ be a Killing vector field on
$M$, and denote by $\varphi_{t}:M\to M$ the $1-$parametric subgroup of
transformations given by $X$ (i.e, $\varphi_{0}(p)=p$,
$\frac{d}{dt}\varphi_{t}(p)|_{t=0}=X_{p}$), then
1.
Given any two points $p,q\in M$,
$\text{dist}(p,q)=\text{dist}(\varphi_{t}(p),\varphi_{t}(q))$.
2.
Given any domain $\Omega\subset M$,
$\operatorname{Vol}(\varphi_{t}(\Omega))=\operatorname{Vol}(\Omega)$.
###### Proof.
Let $\varphi_{t}(\Omega)$ be the flow of the domain $\Omega$, and let us denote
$V(t):=\operatorname{Vol}(\varphi_{t}(\Omega))\quad.$ (54)
Then the first variation of the volume is (see Chavel Chavel (1993))
$V^{\prime}(0)=\int_{\Omega}\text{div }X\,d\mu_{g}\quad,$ (55)
where $d\mu_{g}$ denotes the Riemannian density measure.
The divergence is defined as in do Carmo (1992)
$\text{div }X=\text{tr}(Y\to\nabla_{Y}X)\quad.$ (56)
Given an orthonormal basis $\\{E_{i}\\}$ of $T_{p}M$,
$\nabla_{Y}X=\sum_{i}Y^{i}\nabla_{E_{i}}X=\sum_{i,j}Y^{i}g(\nabla_{E_{i}}X,E_{j})E_{j}\quad,$
(57)
where $Y^{i}:=g(Y,E_{i})$. Therefore
$\text{div }X=\sum_{i}g(\nabla_{E_{i}}X,E_{i})\quad.$ (58)
But since $X$ is a Killing vector field, $\nabla X$ is
skew-adjoint relative to $g$ (see O’Neill O’Neill (1983), proposition 25
again), then
$\text{div }X=0\quad.$ (59)
And the theorem follows. ∎
## References
* Arnol′d (199?) V. I. Arnol′d, _Mathematical methods of classical mechanics_ , Graduate Texts in Mathematics, Vol. 60 (Springer-Verlag, New York, 1989) pp. xvi+516 .
* Bartels and Stewart (1972) R. H. Bartels and G. W. Stewart, “Solution of the matrix equation $AX+XB=C$,” Comm. ACM 15, 820–826 (1972).
* Bengtsson and Życzkowski (2006) Ingemar Bengtsson and Karol Życzkowski, _Geometry of quantum states. An introduction to quantum entanglement,_ (Cambridge University Press, Cambridge, 2006) pp. xii+466 .
* Berestovskiĭ and Nikonorov (2008) V. N. Berestovskiĭ and Yu. G. Nikonorov, “Killing vector fields of constant length on Riemannian manifolds,” Sibirsk. Mat. Zh. 49, 497–514 (2008).
* Bocchieri and Loinger (1957) P. Bocchieri and A. Loinger, “Quantum recurrence theorem,” Phys. Rev. (2) 107, 337–338 (1957).
* do Carmo (1992) Manfredo Perdigão do Carmo, _Riemannian geometry_ , Mathematics: Theory & Applications (Birkhäuser Boston Inc., Boston, MA, 1992) pp. xiv+300 .
* Chavel (1984) Isaac Chavel, _Eigenvalues in Riemannian geometry_ , Pure and Applied Mathematics, Vol. 115 (Academic Press Inc., Orlando, FL, 1984) pp. xiv+362 .
* Chavel (1993) Isaac Chavel, _Riemannian geometry—a modern introduction_ , Cambridge Tracts in Mathematics, Vol. 108 (Cambridge University Press, Cambridge, 1993) pp. xii+386.
* Chruściński and Jamiołkowski (2004) Dariusz Chruściński and Andrzej Jamiołkowski, _Geometric phases in classical and quantum mechanics_, Progress in Mathematical Physics, Vol. 36 (Birkhäuser Boston Inc., Boston, MA, 2004) pp. xiv+333.
* Dittmann (1999) J. Dittmann, “Explicit formulae for the Bures metric,” J. Phys. A 32, 2663–2670 (1999).
* Jost (2009) Jürgen Jost, _Geometry and physics_ (Springer-Verlag, Berlin, 2009) pp. xiv+217.
* Kibble (1979) T.W.B. Kibble, “Geometrization of quantum mechanics,” Communications in Mathematical Physics 65, 189–201 (1979).
* Kobayashi and Nomizu (1996) Shoshichi Kobayashi and Katsumi Nomizu, _Foundations of differential geometry. Vol. I_ , Wiley Classics Library (John Wiley & Sons Inc., New York, 1996) pp. xii+329 .
* Kryukov (2005) Alexey A. Kryukov, “Linear algebra and differential geometry on abstract Hilbert space,” Int. J. Math. Math. Sci. , 2241–2275 (2005).
* Kryukov (2006) Alexey A. Kryukov, “Quantum mechanics on Hilbert manifolds: the principle of functional relativity,” Found. Phys. 36, 175–226 (2006).
* Kryukov (2007) Alexey A. Kryukov, “On the measurement problem for a two-level quantum system,” Found. Phys. 37, 3–39 (2007).
* Lee (2003) John M. Lee, _Introduction to smooth manifolds_ , Graduate Texts in Mathematics, Vol. 218 (Springer-Verlag, New York, 2003) pp. xviii+628.
* Nielsen and Chuang (2000) Michael A. Nielsen and Isaac L. Chuang, _Quantum computation and quantum information_ (Cambridge University Press, Cambridge, 2000) pp. xxvi+676.
* Note (1) Observe that in equation 1 and throughout this paper we use the natural units system ($h=1$).
* Note (2) The map $\tau$ is well defined because a positive operator admits a unique positive square root. It is a section because $\pi(\tau(\rho))=(\sqrt{(}\rho))^{2}=\rho$.
* O’Neill (1983) Barrett O’Neill, _Semi-Riemannian geometry. With applications to relativity_ , Pure and Applied Mathematics, Vol. 103 (Academic Press Inc. [Harcourt Brace Jovanovich Publishers], New York, 1983) pp. xiii+468 .
* Percival (1961) Ian C. Percival, “Almost periodicity and the quantal $H$ theorem,” J. Mathematical Phys. 2, 235–239 (1961).
* Sakurai (1994) Jun John Sakurai, _Modern Quantum Mechanics_ (Addison-Wesley, Reading, MA, 1994).
* Schulman (1978) L. S. Schulman, “Note on the quantum recurrence theorem,” Phys. Rev. A 18, 2379–2380 (1978).
* Sylvester (1884) J. Sylvester, “Sur l′equation en matrices px = xq,” C.R. Acad. Sci. Paris 99, 67–71 (1884).
* Uhlmann (1987) A Uhlmann, “Parallel transport and holonomy along density operators,” Tech. Rep. (Leipzig Univ., Leipzig, 1987).
* Uhlmann (1989) A. Uhlmann, “On Berry phases along mixtures of states,” Ann. Physik (7) 46, 63–69 (1989).
* Uhlmann (1986) Armin Uhlmann, “Parallel transport and “quantum holonomy” along density operators,” Reports on Mathematical Physics 24, 229 – 240 (1986).
* Uhlmann (1991) Armin Uhlmann, “A gauge field governing parallel transport along mixed states,” Lett. Math. Phys. 21, 229–236 (1991).
# Blind One-Bit Compressive Sampling
Lixin Shen and Bruce W. Suter This research is supported in part by an award
from National Research Council via the Air Force Office of Scientific Research
and by the US National Science Foundation under grant DMS-1115523.L. Shen
(Corresponding author) is with Department of Mathematics, Syracuse University,
Syracuse, NY 13244. (Email: [email protected]). Bruce W. Suter is with Air Force
Research Laboratory. AFRL/RITB, Rome, NY 13441-4505 (Email:
[email protected]).Copyright (c) 2012 IEEE. Personal use of this material
is permitted. However, permission to use this material for any other purposes
must be obtained from the IEEE by sending a request to pubs-
[email protected].
###### Abstract
The problem of 1-bit compressive sampling is addressed in this paper. We
introduce an optimization model for reconstruction of sparse signals from
1-bit measurements. The model targets a solution that has the least
$\ell_{0}$-norm among all signals satisfying consistency constraints stemming
from the 1-bit measurements. An algorithm for solving the model is developed.
Convergence analysis of the algorithm is presented. Our approach is to obtain
a sequence of optimization problems by successively approximating the
$\ell_{0}$-norm and to solve resulting problems by exploiting the proximity
operator. We examine the performance of our proposed algorithm and compare it
with the binary iterative hard thresholding (BIHT) [10] a state-of-the-art
algorithm for 1-bit compressive sampling reconstruction. Unlike the BIHT, our
model and algorithm does not require a prior knowledge on the sparsity of the
signal. This makes our proposed work a promising practical approach for signal
acquisition.
###### Index Terms:
1-bit compressive sensing, $\ell_{1}$ minimization, $\ell_{0}$ minimization,
proximity operator
## I Introduction
Compressive sampling is a recent advance in signal acquisition [4, 5]. It
provides a method to reconstruct a sparse signal $x\in\mathbb{R}^{n}$ from
linear measurements
$y=\Phi x,$ (1)
where $\Phi$ is a given $m\times n$ measurement matrix with $m<n$ and
$y\in\mathbb{R}^{m}$ is the measurement vector acquired. The objective of
compressive sampling is to deliver an approximation to $x$ from $y$ and
$\Phi$. It has been demonstrated that the sparse signal $x$ can be recovered
exactly from $y$ if $\Phi$ has Gaussian i.i.d. entries and satisfies the
restricted isometry property [5]. Moreover, this sparse signal can be
identified as a vector that has the smallest $\ell_{0}$-norm among all vectors
yielding the same measurement vector $y$ under the measurement matrix $\Phi$.
However, the success of the reconstruction of this sparse signal is based on
the assumption that the measurements have infinite bit precision. In realistic
settings, the measurements are never exact and must be discretized prior to
further signal analysis. In practice, these measurements are quantized, a
mapping from a continuous real value to a discrete value over some finite
range. As usual, quantization inevitably introduces errors in measurements.
The problem of estimating a sparse signal from a set of quantized measurements
has been addressed in recent literature. Surprisingly, it has been demonstrated
theoretically and numerically that 1-bit per measurement is enough to retain
information for sparse signal reconstruction. As pointed out in [3, 10],
quantization to 1-bit measurements is appealing in practical applications.
First, 1-bit quantizers are extremely inexpensive hardware devices that test
values above or below zeros, enabling simple, efficient, and fast
quantization. Second, 1-bit quantizers are robust to a number of non-linear
distortions applied to measurements. Third, 1-bit quantizers do not suffer
from dynamic range issues. Due to these attractive properties of 1-bit
quantizers, in this paper we will develop efficient algorithms for
reconstruction of sparse signals from 1-bit measurements.
The 1-bit compressive sampling framework originally introduced in [3] is
briefly described as follows. Formally, it can be written as
$y=A(x):=\mathrm{sign}(\Phi x),$ (2)
where the function $\mathrm{sign}(\cdot)$ denotes the sign of the variable,
element-wise, and zero values are assigned to be $+1$. Thus, the measurement
operator $A$, called a 1-bit scalar quantizer, is a mapping from
$\mathbb{R}^{n}$ to the Boolean cube $\\{-1,1\\}^{m}$. Note that the scale of
the signal has been lost during the quantization process. We search for a
sparse signal $x^{\star}$ in the unit ball of $\mathbb{R}^{n}$ such that the
sparse signal $x^{\star}$ is consistent with our knowledge about the signal
and measurement process, i.e., $A(x^{\star})=A(x)$.
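For concreteness, the following is a minimal NumPy sketch (ours, not from the paper) of the 1-bit quantizer $A$ in (2), with the convention that zero values are mapped to $+1$:

```python
import numpy as np

def one_bit_measure(Phi, x):
    """1-bit scalar quantizer A(x) = sign(Phi @ x), with sign(0) := +1 as in (2)."""
    z = Phi @ x
    return np.where(z >= 0, 1.0, -1.0)

# Example: an s-sparse signal measured by a Gaussian matrix
rng = np.random.default_rng(0)
m, n, s = 200, 500, 10
Phi = rng.standard_normal((m, n))
x = np.zeros(n)
support = rng.choice(n, s, replace=False)
x[support] = rng.standard_normal(s)
y = one_bit_measure(Phi, x)   # y lives in {-1, +1}^m; the scale of x is lost
```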
The problem of reconstructing a sparse signal from its 1-bit measurements is
generally non-convex, and therefore it is a challenge to develop an algorithm
that can find a desired solution. Nevertheless, since this problem was
introduced in [3] in 2008, several algorithms have been
developed for attacking it [3, 12, 17, 19]. Among those existing 1-bit
compressive sampling algorithms, the binary iterative hard thresholding (BIHT)
[10] exhibits superior performance in both reconstruction error and
consistency in numerical simulations over the algorithms in [3, 12].
When there are a lot of sign flips in the measurements, a method based on
adaptive outlier pursuit for 1-bit compressive sampling was proposed in [19].
The algorithms in [10, 19] require the sparsity of the desired signal to be
given in advance. This requirement, however, is hardly satisfied in practice.
By keeping only the sign of the measurements, the magnitude of the signal is
lost. The models associated with the aforementioned algorithms seek sparse
vectors $x$ satisfying consistency constraints (2) on the unit sphere. As a
result, these models are essentially non-convex and non-smooth. In [17], a
convex minimization problem is formulated for reconstruction of sparse signals
from 1-bit measurements and is solved by linear programming. The details of
the above algorithms will be briefly reviewed in the next section.
In this paper, we introduce a new $\ell_{0}$ minimization model over a convex
set determined by consistency constraints for 1-bit compressive sampling
recovery and develop an algorithm for solving the proposed model. Our model
does not require prior knowledge of the sparsity of the signal and is therefore
referred to as the blind 1-bit compressive sampling model. Our approach for
dealing with our proposed model is to obtain a sequence of optimization
problems by successively approximating the $\ell_{0}$-norm and to solve
resulting problems by exploiting the proximity operator [16]. Convergence
analysis of our algorithm is presented.
This paper is organized as follows. In Section II we review and comment on
current 1-bit compressive sampling models and then introduce our own model by
assimilating advantages of existing models. Heuristics for solving the
proposed model are discussed in Section III. Convergence analysis of the
algorithm for the model is studied in Section IV. A numerically implementable
algorithm for the model is presented in Section V. The performance of our
algorithm is demonstrated and compared with the BIHT in Section VI. We present
our conclusion in Section VII.
## II Models for One-Bit Compressive Sampling
In this section, we begin with reviewing existing models for reconstruction of
sparse signals from 1-bit measurements. After analyzing these models, we
propose our own model that assimilates the advantages of the existing ones.
Using matrix notation, the 1-bit measurements in (2) can be equivalently
expressed as
$Y\Phi x\geq 0,$ (3)
where $Y:=\mathrm{diag}(y)$ is an $m\times m$ diagonal matrix whose $i$th
diagonal element is the $i$th entry of $y$. The expression $Y\Phi x\geq 0$ in
(3) means that all entries of the vector $Y\Phi x$ are no less than $0$.
Hence, we can treat the 1-bit measurements as sign constraints that should be
enforced in the construction of the signal $x$ of interest. In what follows,
equation (3) is referred to as sign constraint or consistency condition,
interchangeably.
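As a small illustration (ours, not from the paper), the consistency condition (3) can be checked directly from $\Phi$, $y$, and a candidate signal:

```python
import numpy as np

def is_consistent(Phi, y, x_hat):
    """Check the sign constraint (3): all entries of diag(y) @ Phi @ x_hat are >= 0."""
    return bool(np.all(y * (Phi @ x_hat) >= 0.0))
```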
The optimization model for reconstruction of a sparse signal from 1-bit
measurements in [3] is
$\min\|x\|_{1}\quad\mbox{s.t.}\quad Y\Phi x\geq
0\quad\mbox{and}\quad\|x\|_{2}=1,$ (4)
where $\|\cdot\|_{1}$ and $\|\cdot\|_{2}$ denote the $\ell_{1}$-norm and the
$\ell_{2}$-norm of a vector, respectively. In model (4), the $\ell_{1}$-norm
objective function is used to favor sparse solutions, the sign constraint
$Y\Phi x\geq 0$ is used to impose the consistency between the 1-bit
measurements and the solution, the constraint $\|x\|_{2}=1$ ensures a
nontrivial solution lying on the unit $\ell_{2}$ sphere.
Instead of solving model (4) directly, a relaxed version of model (4)
$\min\left\\{\lambda\|x\|_{1}+\sum_{i=1}^{m}h((Y\Phi
x)_{i})\right\\}\quad\mbox{s.t.}\quad\|x\|_{2}=1$ (5)
was proposed in [3] and solved by employing a variation of the fixed point
continuation algorithm in [9]. Here $\lambda$ is a regularization parameter
and $h$ is chosen to be the one-sided $\ell_{1}$ (or $\ell_{2}$) function,
defined at $z\in\mathbb{R}$ as follows
$h(z):=\left\\{\begin{array}[]{ll}|z|\;\;(\mbox{or}\;\frac{1}{2}z^{2}),&\hbox{if
$z<0$;}\\\ 0,&\hbox{otherwise.}\end{array}\right.$ (6)
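A minimal sketch (ours, not from the paper) of the one-sided penalties in (6), summed over the entries of $Y\Phi x$ as in models (5) and (7); the helper names are illustrative:

```python
import numpy as np

def one_sided_l1(z):
    """One-sided l1 penalty of (6): |z_i| where z_i < 0, zero otherwise, summed."""
    return np.sum(np.abs(np.minimum(z, 0.0)))

def one_sided_l2(z):
    """One-sided l2 penalty of (6): z_i^2 / 2 where z_i < 0, zero otherwise, summed."""
    return 0.5 * np.sum(np.minimum(z, 0.0) ** 2)
```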
We remark that the one-sided $\ell_{2}$ function was adopted in [3] due to its
convexity and smoothness properties that are required by a fixed point
continuation algorithm.
In [12] a restricted-step-shrinkage algorithm was proposed for solving model
(4). This algorithm is similar in spirit to trust-region methods for nonconvex
optimization on the unit sphere and has provable convergence guarantees.
Binary iterative hard thresholding (BIHT) algorithms were recently introduced
for reconstruction of sparse signals from 1-bit measurements in [10]. The BIHT
algorithms are developed for solving the following constrained optimization
model
$\min\sum_{i=1}^{m}h((Y\Phi x)_{i})\quad\mbox{s.t.}\quad\|x\|_{0}\leq
s\quad\mbox{and}\quad\|x\|_{2}=1,$ (7)
where $h$ is defined by equation (6), $s$ is a positive integer, and the
$\ell_{0}$-norm $\|x\|_{0}$ counts the number of non-zero entries in $x$.
Minimizing the objective function of model (7) enforces the consistency
condition (3). The BIHT algorithms for model (7) are a simple modification of
the iterative thresholding algorithm proposed in [2]. It was shown numerically
that the BIHT algorithms perform significantly better than the other
aforementioned algorithms in [3, 12] in terms of both reconstruction error and
consistency. Numerical experiments in [10] further show that the BIHT
algorithm with $h$ being the one-sided $\ell_{1}$ function performs better in
low noise scenarios while the BIHT algorithm with $h$ being the one-sided
$\ell_{2}$ function performs better in high noise scenarios. Recently, a robust
method for recovering signals from 1-bit measurements using adaptive outlier
pursuit was proposed for the measurements having noise (i.e., sign flips) in
[19].
The algorithms reviewed above for 1-bit compressive sampling are developed for
optimization problems having convex objective functions and non-convex
constraints. In [17] a convex optimization program for reconstruction of
sparse signals from 1-bit measurements was introduced as follows:
$\min\|x\|_{1}\quad\mbox{s.t.}\quad Y\Phi x\geq 0\quad\mbox{and}\quad\|\Phi
x\|_{1}=p,$ (8)
where $p$ is any fixed positive number. The first constraint $Y\Phi x\geq 0$
requires that a solution to model (8) should be consistent with the 1-bit
measurements. If a vector $x$ satisfies the first constraint, so is $ax$ for
all $0<a<1$. Hence, an algorithm for minimizing the $\ell_{1}$-norm by only
requiring consistency with the measurements will yield the solution $x$ being
zero. The second constraint $\|\Phi x\|_{1}=p$ is then used to prevent model
(8) from returning a zero solution, thus, resolves the amplitude ambiguity. By
taking the first constraint into consideration, we know that $\|\Phi
x\|_{1}=\langle y,\Phi x\rangle$, therefore, the second constraint becomes
$\langle\Phi^{\top}y,x\rangle=p$. This confirms that both objective function
and constraints of model (8) are convex. It was further pointed out in [17]
that model (8) can be cast as a linear program. Comparing model (8) with
model (4), both the constraint $\|x\|_{2}=1$ in model (4) and the constraint
$\|\Phi x\|_{1}=p$ in model (8), the only difference between the two models,
enforce a non-trivial solution. However, as we have already seen, model (8)
with the constraint $\|\Phi x\|_{1}=p$ can be solved by a computationally
tractable algorithm.
Let us further comment on models (7) and (8). First, the sparsity constraint
in model (7) is impractical since the sparsity of the underlying signal is
unknown in general. Therefore, instead of imposing this sparsity constraint, we
consider minimizing an optimization model having the $\ell_{0}$-norm as its
objective function. Second, although model (8) can be tackled by efficient
linear programming solvers and the solution of model (8) preserves the
effective sparsity of the underlying signal (see [17]), the solution is not
necessarily sparse in general as shown in our numerical experiments (see
Section VI). Motivated by the aforementioned models and the associated
algorithms, we plan in this paper to reconstruct sparse signals from 1-bit
measurements via solving the following constrained optimization model
$\min\|x\|_{0}\quad\mbox{s.t.}\quad Y\Phi x\geq 0\quad\mbox{and}\quad\|\Phi
x\|_{1}=p,$ (9)
where $p$ is again an arbitrary positive number. This model has the
$\ell_{0}$-norm as its objective function and inequality $Y\Phi x\geq 0$ and
equality $\|\Phi x\|_{1}=p$ as its convex constraints.
We remark that the actual value of $p$ is not important as long as it is
positive. More precisely, suppose that $\mathcal{S}$ and
$\mathcal{S}^{\diamond}$ are two sets collecting all solutions of model (9)
with $p=1$ and $p=p^{\diamond}>0$, respectively. If $x\in\mathcal{S}$, that
is, $Y\Phi x\geq 0$ and $\|\Phi x\|_{1}=1$, then, by denoting
$x^{\diamond}:={p^{\diamond}}x$, it can be verified that
$\|x^{\diamond}\|_{0}=\|x\|_{0}$, $Y\Phi x^{\diamond}\geq 0$, and $\|\Phi
x^{\diamond}\|_{1}=p^{\diamond}$. That indicates
$x^{\diamond}\in\mathcal{S}^{\diamond}$. Therefore, we have that
${p^{\diamond}}\mathcal{S}\subset\mathcal{S}^{\diamond}$. Conversely, we can
show that $\mathcal{S}^{\diamond}\subset{p^{\diamond}}\mathcal{S}$ by
reversing the above steps. Hence,
${p^{\diamond}}\mathcal{S}=\mathcal{S}^{\diamond}$. Without loss of
generality, the positive number $p$ is always assumed to be $1$ in the rest
of the paper.
To close this section, we compare model (7) and our proposed model (9) in the
following result.
###### Proposition 1
Let $y\in\mathbb{R}^{m}$ be the 1-bit measurements from an $m\times n$
measurement matrix $\Phi$ via equation (2) and let $s$ be a positive integer.
Assume that the vector $x\in\mathbb{R}^{n}$ is a solution to model (9). Then
model (7) has the unit vector $\frac{x}{\|x\|_{2}}$ as its solution if
$\|x\|_{0}\leq s$; otherwise, model (7) cannot have a solution satisfying the
consistency constraint if $\|x\|_{0}>s$.
###### Proof:
Since the vector $x$ is a solution to model (9), $x$ satisfies the
consistency constraint $Y\Phi x\geq 0$. This, together with the definition of
$h$ in (6), implies that
$\sum_{i=1}^{m}h\left(\left(Y\Phi\frac{x}{\|x\|_{2}}\right)_{i}\right)=0.$
We further note that $\left\|\frac{x}{\|x\|_{2}}\right\|_{0}=\|x\|_{0}$ and
$\left\|\frac{x}{\|x\|_{2}}\right\|_{2}=1$. Hence, the vector
$\frac{x}{\|x\|_{2}}$ is a solution of model (7) if $\|x\|_{0}\leq s$.
On the other hand, if $\|x\|_{0}>s$ then all solutions to model (7) do not
satisfy the consistency constraint. Suppose this statement is _false_. That
is, there exists a solution of model (7), say $x^{\sharp}$, such that $Y\Phi
x^{\sharp}\geq 0$, $\|x^{\sharp}\|_{0}\leq s$, and $\|x^{\sharp}\|_{2}=1$
hold. Set $x^{\diamond}:=\frac{x^{\sharp}}{\|\Phi x^{\sharp}\|_{1}}$. Then
$\|x^{\diamond}\|_{0}=\|x^{\sharp}\|_{0}\leq s$, $Y\Phi x^{\diamond}\geq 0$,
and $\|\Phi x^{\diamond}\|_{1}=1$. Since $\|x^{\diamond}\|_{0}<\|x\|_{0}$, it
turns out that $x$ is not a solution of model (9). This contradicts our
assumption on the vector $x$. This completes the proof of the result. ∎
From Proposition 1, we can see that the sparsity $s$ for model (7) is
critical. If $s$ is set too large, a solution to model (7) may not be the
sparsest solution satisfying the consistency constraint; if $s$ is set too
small, solutions to model (7) cannot satisfy the consistency constraint. In
contrast, our model (9) does not require the sparsity constraint used in model
(7) and delivers the sparsest solution satisfying the consistency constraint.
Therefore, these properties make our model more attractive for 1-bit
compressive sampling than the BIHT. Since sparsity of the underlying signal is
not specified in advance in model (9), we refer to it as the _blind 1-bit
compressive sampling model_.
## III An Algorithm for the Blind 1-Bit Compressive Sampling
In this section, we will develop algorithms for the proposed model (9). We
first reformulate model (9) as an unconstrained optimization problem via the
indicator function of a closed convex set in $\mathbb{R}^{m+1}$. It turns out
that the objective function of this unconstrained optimization problem is the
sum of the $\ell_{0}$-norm and the indicator function composed with a matrix
associated with the 1-bit measurements. Instead of directly solving the
unconstrained optimization problem we use some smooth concave functions to
approximate the $\ell_{0}$-norm and then linearize the concave functions. The
resulting model can be viewed as an optimization problem of minimizing a
weighted $\ell_{1}$-norm over the closed convex set. The solution of this
resulting model serves as a new point at which the concave functions will
be linearized. This process is repeated until a certain stopping
criterion is met. Several concrete examples for approximating the
$\ell_{0}$-norm are provided at the end of this section.
We begin with introducing our notation and recalling some background from
convex analysis. For the $d$-dimensional Euclidean space $\mathbb{R}^{d}$, the
class of all lower semicontinuous convex functions
$f:\mathbb{R}^{d}\rightarrow(-\infty,+\infty]$ such that
$\mathrm{dom}f:=\\{x\in\mathbb{R}^{d}:f(x)<+\infty\\}\neq\emptyset$ is denoted
by $\Gamma_{0}(\mathbb{R}^{d})$. The indicator function of a closed convex set
$C$ in $\mathbb{R}^{d}$ is defined, at $u\in\mathbb{R}^{d}$, as
$\iota_{C}(u):=\left\\{\begin{array}[]{ll}0,&\hbox{if $u\in C$;}\\\
+\infty,&\hbox{otherwise.}\end{array}\right.$
Clearly, $\iota_{C}$ is in $\Gamma_{0}(\mathbb{R}^{d})$ for any closed
nonempty convex set $C$.
Next, we reformulate model (9) as an unconstrained optimization problem. To
this end, from the $m\times n$ matrix $\Phi$ and the $m$-dimensional vector
$y$ in equation (2), we define an $(m+1)\times n$ matrix
$B:=\begin{bmatrix}\mathrm{diag}(y)\\\ y^{\top}\end{bmatrix}\Phi$ (10)
and a subset of $\mathbb{R}^{m+1}$
$\mathcal{C}:=\\{z:z_{m+1}=1\;\mbox{and}\;z_{i}\geq 0,\;i=1,2,\ldots,m\\},$
(11)
respectively. Then a vector $x$ satisfies the two constraints of model (9) if
and only if the vector $Bx$ lies in the set $\mathcal{C}$. Hence, model (9)
can be rewritten as
$\min\\{\|x\|_{0}+\iota_{\mathcal{C}}(Bx):x\in\mathbb{R}^{n}\\}.$ (12)
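The matrix $B$ of (10) and membership in the set $\mathcal{C}$ of (11) are straightforward to form; the following sketch (helper names are ours) assumes $p=1$:

```python
import numpy as np

def build_B(Phi, y):
    """Form the (m+1) x n matrix B = [diag(y); y^T] @ Phi of equation (10)."""
    return np.vstack([np.diag(y) @ Phi, y @ Phi])

def in_C(z, tol=1e-10):
    """Check membership in C of (11): z_{m+1} = 1 and z_i >= 0 for i = 1,...,m."""
    return abs(z[-1] - 1.0) <= tol and bool(np.all(z[:-1] >= -tol))
```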
Problem (12) is known to be NP-complete due to the non-convexity of the
$\ell_{0}$-norm. Thus, there is a need for an algorithm that can pick the
sparsest vector $x$ satisfying the relation $Bx\in\mathcal{C}$. To attack this
$\ell_{0}$-norm optimization problem, a common approach that appeared in
recent literature is to approximate the $\ell_{0}$-norm by its computationally
feasible approximations. In the context of compressed sensing, we review
several popular choices for defining the $\ell_{0}$-norm as the limit of a
sequence. More precisely, for a positive number $\epsilon\in(0,1)$, we
consider separable concave functions of the form
$F_{\epsilon}(x):=\sum_{i=1}^{n}f_{\epsilon}(|x_{i}|),\quad
x\in\mathbb{R}^{n},$ (13)
where $f_{\epsilon}:\mathbb{R}_{+}\rightarrow\mathbb{R}$ is strictly
increasing, concave, and twice continuously differentiable such that
$\lim_{\epsilon\rightarrow 0+}F_{\epsilon}(x)=\|x\|_{0},\quad\mbox{for
all}\quad x\in\mathbb{R}^{n}.$ (14)
Since the function $f_{\epsilon}$ is concave and smooth on
$\mathbb{R}_{+}:=[0,\infty)$, it can be majorized by a simple function formed
by its first-order Taylor series expansion at an arbitrary point. Write
$\mathcal{F}_{\epsilon}(x,v):=F_{\epsilon}(v)+\langle\nabla
F_{\epsilon}(|v|),|x|-|v|\rangle$. Therefore, at any point
$v\in\mathbb{R}^{n}$ the following inequality holds
$F_{\epsilon}(x)<\mathcal{F}_{\epsilon}(x,v)$ (15)
for all $x\in\mathbb{R}^{n}$ with $|x|\neq|v|$. Here, for a vector $u$, we use
$|u|$ to denote a vector such that each element of $|u|$ is the absolute value
of the corresponding element of $u$. Clearly, when $v$ is close enough to $x$,
$\mathcal{F}_{\epsilon}(x,v)$, the expression on the right-hand side of (15),
provides a reasonable approximation to the one on its left-hand side.
Therefore, it is considered as a computationally feasible approximation to the
$\ell_{0}$-norm of $x$. With such an approximation, a simplified problem is
solved and its solution is used to formulate another simplified problem which
is closer to the ideal problem (12). This process is then repeated until the
solutions to the simplified problems become stationary or meet a termination
criterion. This procedure is summarized in Algorithm 1.
Algorithm 1 (Iterative scheme for model (12))
Initialization: choose $\epsilon\in(0,1)$ and let $x^{(0)}\in\mathbb{R}^{n}$
be an initial point.
repeat($k\geq 0$)
Step 1: Compute $x^{(k+1)}$:
$x^{(k+1)}\in\mathrm{argmin}\left\\{\mathcal{F}_{\epsilon}(x,|x^{(k)}|)+\iota_{\mathcal{C}}(Bx):x\in\mathbb{R}^{n}\right\\}.$
until a given stopping criterion is met
The terms $F_{\epsilon}(|x^{(k)}|)$ and $\langle\nabla
F_{\epsilon}(|x^{(k)}|),|x^{(k)}|\rangle$ appearing in the optimization problem
in Algorithm 1 can be ignored because they do not depend on the optimization
variable $x$. Hence the expression for $x^{(k+1)}$ in Algorithm 1 can be simplified
as
$x^{(k+1)}\in\mathrm{argmin}\left\\{\langle\nabla
F_{\epsilon}(|x^{(k)}|),|x|\rangle+\iota_{\mathcal{C}}(Bx):x\in\mathbb{R}^{n}\right\\}.$
(16)
Since $f_{\epsilon}$ is strictly concave and increasing on $\mathbb{R}_{+}$,
$f^{\prime}_{\epsilon}$ is positive on $\mathbb{R}_{+}$. Hence, $\langle\nabla
F_{\epsilon}(|x^{(k)}|),|x|\rangle=\sum_{i=1}^{n}f^{\prime}_{\epsilon}(|x^{(k)}_{i}|)|x_{i}|$
can be viewed as the weighted $\ell_{1}$-norm of $x$ having
$f^{\prime}_{\epsilon}(|x^{(k)}_{i}|)$ as its $i$th weight. Thus, the
objective function of the above optimization problem is convex. Details for
finding a solution to the problem will be presented in the next section.
In the rest of this section, we list several possible choices of the functions
in (13) including but not limited to the Mangasarian function in [14] and the
Log-Det function in [8].
The Mangasarian function is given as follows:
$F_{\epsilon}(x)=\sum_{i=1}^{n}\left(1-e^{-|x_{i}|/\epsilon}\right),$ (17)
where $x\in\mathbb{R}^{n}$. This function is used to approximate the
$\ell_{0}$-norm to obtain minimum-support solutions (that is, solutions with
as many components equal to zero as possible). The usefulness of the
Mangasarian function was demonstrated in finding sparse solutions of
underdetermined linear systems (see [11]).
The Log-Det function is defined as
$F_{\epsilon}(x)=\sum_{i=1}^{n}\frac{\log(|x_{i}|/\epsilon+1)}{\log(1/\epsilon)},$
(18)
where $x\in\mathbb{R}^{n}$. Notice that $\|x\|_{0}$ is equal to the rank of
the diagonal matrix $\mathrm{diag}(x)$. The function $F_{\epsilon}(x)$ is
equal to $(\log(1/\epsilon))^{-1}\log(\mathrm{det}(\mathrm{diag}(x)+\epsilon
I))+n$, the logarithm of the determinant of the matrix
$\mathrm{diag}(x)+\epsilon I$. Hence, it was named as the Log-Det heuristic
and used for minimizing the rank of a positive semidefinite matrix over a
convex set in [8]. Constant terms can be ignored since they will not affect
the solution of the optimization problem (16). Hence the Log-Det function in
(18) can be replaced by
$F_{\epsilon}(x)=\sum_{i=1}^{n}\log(|x_{i}|+\epsilon).$ (19)
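For reference, a short sketch (ours, not from the paper) of the two surrogates and of the derivative weights $f^{\prime}_{\epsilon}(|x^{(k)}_{i}|)$ that enter the weighted $\ell_{1}$ subproblem (16):

```python
import numpy as np

def mangasarian(x, eps):
    """F_eps of (17): sum_i (1 - exp(-|x_i| / eps))."""
    return np.sum(1.0 - np.exp(-np.abs(x) / eps))

def mangasarian_weights(x, eps):
    """f'_eps(|x_i|) = exp(-|x_i| / eps) / eps, the weights used in (16)."""
    return np.exp(-np.abs(x) / eps) / eps

def log_det(x, eps):
    """F_eps of (19): sum_i log(|x_i| + eps)."""
    return np.sum(np.log(np.abs(x) + eps))

def log_det_weights(x, eps):
    """f'_eps(|x_i|) = 1 / (|x_i| + eps), the weights used in (16)."""
    return 1.0 / (np.abs(x) + eps)
```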
The function $F_{\epsilon}$ for the above choices is plotted in Figure
1 for $n=1$ and $\epsilon$ being $\frac{1}{4}$, $\frac{1}{8}$, $\frac{1}{16}$,
and $\frac{1}{32}$. We can see that for a fixed $\epsilon\in(0,1)$ the
Mangasarian function is the one closest to the
$\ell_{0}$-norm.
We point out that the Mangasarian function is bounded by $1$ and therefore is
non-coercive, while the Log-Det function is coercive. This makes a difference in the
convergence analysis of the associated Algorithm 1 that will be presented in
the next section. In what follows, the function $F_{\epsilon}$ is either the
Mangasarian function or the Log-Det function; we specify which one only when
necessary.
(a) Mangasarian | (b) Log-Det
Figure 1: Plots of $F_{\frac{1}{4}}$, $F_{\frac{1}{8}}$, $F_{\frac{1}{16}}$,
$F_{\frac{1}{32}}$ with $n=1$ for (a) the Mangasarian function; (b) the Log-
Det function.
## IV Convergence Analysis
In this section, we shall give convergence analysis for Algorithm 1. We begin
with presenting the following result.
###### Theorem 2
Given $\epsilon\in(0,1)$, $x^{(0)}\in\mathbb{R}^{n}$, and the set
$\mathcal{C}$ defined by (11), let the sequence $\\{x^{(k)}:k\in\mathbb{N}\\}$
be generated by Algorithm 1, where $\mathbb{N}$ is the set of all natural
numbers. Then the following three statements hold:
* (i)
The sequence $\\{F_{\epsilon}(x^{(k)}):k\in\mathbb{N}\\}$ converges when
$F_{\epsilon}$ is the Mangasarian function (17) or the Log-
Det function (19);
* (ii)
The sequence $\\{x^{(k)}:k\in\mathbb{N}\\}$ is bounded when $F_{\epsilon}$ is
the Log-Det function;
* (iii)
$\sum_{k=1}^{+\infty}\left\||x^{(k+1)}|-|x^{(k)}|\right\|_{2}^{2}$ is
convergent when the sequence $\\{x^{(k)}:k\in\mathbb{N}\\}$ is bounded.
###### Proof:
We first prove Item (i). The key step for proving it is to show that the
sequence $\\{F_{\epsilon}(x^{(k)}):k\in\mathbb{N}\\}$ is decreasing and
bounded below. The boundedness of the sequence is due to the fact that
$F_{\epsilon}(0)\leq F_{\epsilon}(x^{(k)})$. From Step 1 of Algorithm 1 or
equation (16), one can immediately have that
$\iota_{\mathcal{C}}(Bx^{(k+1)})=0$
and
$\langle\nabla F_{\epsilon}(|x^{(k)}|),|x^{(k+1)}|\rangle\leq\langle\nabla
F_{\epsilon}(|x^{(k)}|),|x^{(k)}|\rangle.$ (20)
By identifying $x^{(k)}$ and $x^{(k+1)}$, respectively, as $v$ and $x$ in (15)
and using the inequality in (20), we get $F_{\epsilon}(x^{(k+1)})\leq
F_{\epsilon}(x^{(k)})$. Hence, the sequence
$\\{F_{\epsilon}(x^{(k)}):k\in\mathbb{N}\\}$ is decreasing and bounded below.
Item (i) follows immediately.
When $F_{\epsilon}$ is chosen as the Log-Det function, the coerciveness of
$F_{\epsilon}$ together with Item (i) implies that the sequence
$\\{x^{(k)}:k\in\mathbb{N}\\}$ must be bounded, that is, Item (ii) holds.
Finally, we prove Item (iii). Denote $w^{(k)}:=|x^{(k+1)}|-|x^{(k)}|$. From
the second-order Taylor expansion of the function $F_{\epsilon}$ at $x^{(k)}$
we have that
$F_{\epsilon}(x^{(k+1)})=\mathcal{F}_{\epsilon}(x^{(k+1)},x^{(k)})+\frac{1}{2}(w^{(k)})^{\top}\nabla^{2}F_{\epsilon}(v)w^{(k)},$
(21)
where $v$ is some point in the line segment linking the points $|x^{(k+1)}|$
and $|x^{(k)}|$ and $\nabla^{2}F_{\epsilon}(v)$ is the Hessian matrix of
$F_{\epsilon}$ at the point $v$.
By (20), the first term on the right-hand side of equation (21) is less than
$F_{\epsilon}(x^{(k)})$. By equations (17) and (19), $\nabla^{2}F_{\epsilon}(v)$ for $v$
lying in the first orthant of $\mathbb{R}^{n}$ is a diagonal matrix and is
equal to
$-\frac{1}{\epsilon^{2}}\mathrm{diag}(e^{-\frac{v_{1}}{\epsilon}},e^{-\frac{v_{2}}{\epsilon}},\ldots,e^{-\frac{v_{n}}{\epsilon}})$
or
$-\mathrm{diag}((v_{1}+\epsilon)^{-2},(v_{2}+\epsilon)^{-2},\ldots(v_{n}+\epsilon)^{-2})$
which corresponds to $F_{\epsilon}$ being the Mangasarian or the Log-Det
function. Hence, the matrix $\nabla^{2}F_{\epsilon}(v)$ is negative definite.
Since the sequence $\\{x^{(k)}:k\in\mathbb{N}\\}$ is bounded, there exists a
constant $\rho>0$ such that
$(w^{(k)})^{\top}\nabla^{2}F_{\epsilon}(v)w^{(k)}\leq-\rho\|w^{(k)}\|_{2}^{2}.$
Putting all above results together into (21), we have that
$F_{\epsilon}(x^{(k+1)})\leq
F_{\epsilon}(x^{(k)})-\frac{\rho}{2}\left\||x^{(k+1)}|-|x^{(k)}|\right\|_{2}^{2}.$
Summing the above inequality from $k=1$ to $+\infty$ and using Item (i) we get
the proof of Item (iii). ∎
From Item (iii) of Theorem 2, we have
$\left\||x^{(k+1)}|-|x^{(k)}|\right\|_{2}\rightarrow 0$ as
$k\rightarrow\infty$.
To further study properties of the sequence $\\{x^{(k)}:k\in\mathbb{N}\\}$
generated by Algorithm 1, the matrix $B^{\top}$ is required to have the range
space property (RSP), which was originally introduced in [21]. With this
property and motivated by the work in [21] we prove that Algorithm 1 can yield
a sparse solution for model (12).
Prior to presenting the definition of the RSP, we introduce the notation to be
used throughout the rest of this paper. Given a set
$S\subset\\{1,2,\ldots,n\\}$, the symbol $|S|$ denotes the cardinality of $S$,
and $S^{c}:=\\{1,2,\ldots,n\\}\setminus S$ is the complement of $S$. Recall
that for a vector $u$, by abuse of notation, we also use $|u|$ to denote the
vector whose elements are the absolute values of the corresponding elements of
$u$. For a given matrix $A$ having $n$ columns, a vector $u$ in
$\mathbb{R}^{n}$, and a set $S\subset\\{1,2,\ldots,n\\}$, we use the notation
$A_{S}$ to denote the submatrix extracted from $A$ with column indices in $S$,
and $u_{S}$ the subvector extracted from $u$ with component indices in $S$.
###### Definition 3 (Range Space Property (RSP))
Let $A$ be an $m\times n$ matrix. Its transpose $A^{\top}$ is said to satisfy
the _range space property (RSP)_ of order $K$ with a constant $\rho>0$ if for
all sets $S\subseteq\\{1,\dots,n\\}$ with $|S|\geq K$ and for all $\xi$ in the
range space of $A^{\top}$ the following inequality holds
$\|\xi_{S^{c}}\|_{1}\leq\rho\|\xi_{S}\|_{1}.$
We remark that if the transpose of an $m\times n$ matrix $B$ has the RSP of
order $K$ with a constant $\rho>0$, then for every non-empty set
$S\subseteq\\{1,\dots,n\\}$, the transpose of the matrix $B_{S}$, denoted by
$B_{S}^{\top}$, has the RSP of order $K$ with constant $\rho$ as well.
The next result shows that if the transpose of the matrix $B$ in Algorithm 1
possesses the RSP, then Algorithm 1 can lead to a sparse solution for model
(12). To this end, we define a mapping
$\sigma:\mathbb{R}^{d}\rightarrow\mathbb{R}^{d}$ such that the $i$th component
of the vector $\sigma(u)$ is the $i$th largest component of $|u|$.
###### Proposition 4
Let $B$ be the $(m+1)\times n$ matrix defined by (10) and let
$\\{x^{(k)}:k\in\mathbb{N}\\}$ be the sequence generated by Algorithm 1.
Assume that the matrix $B^{\top}$ has the RSP of order $K$ with $\rho>0$
satisfying $(1+\rho)K<n$. Suppose that the sequence
$\\{x^{(k)}:k\in\mathbb{N}\\}$ is bounded. Then $(\sigma(x^{(k)}))_{n}$, the
$n$th largest component of $|x^{(k)}|$, converges to $0$.
###### Proof:
Suppose this proposition is _false_. Then there exist a constant $\gamma>0$
and a subsequence $\\{x^{(k_{j})}:j\in\mathbb{N}\\}$ such that
$(\sigma(x^{(k_{j})}))_{n}\geq 2\gamma>0$ for all $j\in\mathbb{N}$. From Item
(iii) of Theorem 2 we have that
$(\sigma(x^{(k_{j}+1)}))_{n}\geq\gamma$ (22)
for all sufficiently large $j$. For simplicity, we set $y^{(k_{j})}:=\nabla
F_{\epsilon}(|x^{(k_{j})}|)$. Hence, by inequality (22) and the definition of $F_{\epsilon}$, we
know that
$|x^{(k_{j})}|>0,\quad|x^{(k_{j}+1)}|>0,\quad\mbox{and}\quad y^{(k_{j})}>0$
(23)
for all sufficiently large $j$. In what follows, we assume that the integer $j$
is large enough such that the above inequalities in (23) hold.
Since the vector $x^{(k_{j}+1)}$ is obtained through Step 1 of Algorithm 1,
i.e., equation (16), then by Fermat’s rule and the chain rule of
the subdifferential we have that
$0\in\mathrm{diag}(y^{(k_{j})})\partial{\|\cdot\|_{1}}(\mathrm{diag}(y^{(k_{j})})x^{(k_{j}+1)})+B^{\top}b^{(k_{j}+1)},$
where $b^{(k_{j}+1)}\in\partial\iota_{C}(Bx^{(k_{j}+1)})$. By (23), we get
$\partial{\|\cdot\|_{1}}(\mathrm{diag}(y^{(k_{j})})x^{(k_{j}+1)})=\\{\mathrm{sgn}(x^{(k_{j}+1)})\\},$
where $\mathrm{sgn}(\cdot)$ denotes the sign of the variable element-wise.
Thus
$y^{(k_{j})}=|\xi^{(k_{j}+1)}|,$
where $\xi^{(k_{j}+1)}=B^{\top}b^{(k_{j}+1)}$ is in the range of $B^{\top}$.
Let $S$ be the set of indices corresponding to the $K$ smallest components of
$|\xi^{(k_{j}+1)}|$. Hence,
$\sum_{i=1}^{n-K}(\sigma(y^{(k_{j})}))_{i}=\|\xi^{(k_{j}+1)}_{S^{c}}\|_{1}$
and
$\sum_{i=n-K+1}^{n}(\sigma(y^{(k_{j})}))_{i}=\|\xi^{(k_{j}+1)}_{S}\|_{1}.$
Since $B^{\top}$ has the RSP of order $K$ with the constant $\rho$, we have
that $\|\xi^{(k_{j}+1)}_{S^{c}}\|_{1}\leq\rho\|\xi^{(k_{j}+1)}_{S}\|_{1}$.
Therefore,
$\sum_{i=1}^{n-K}(\sigma(y^{(k_{j})}))_{i}\leq\rho\sum_{i=n-K+1}^{n}(\sigma(y^{(k_{j})}))_{i}.$
(24)
However, by the definition of $\sigma$, we have that
$\sum_{i=1}^{n-K}(\sigma(y^{(k_{j})}))_{i}\geq(n-K)(\sigma(y^{(k_{j})}))_{n-K+1}$
and
$\sum_{i=n-K+1}^{n}(\sigma(y^{(k_{j})}))_{i}\leq
K(\sigma(y^{(k_{j})}))_{n-K+1}.$
These inequalities together with the condition $(1+\rho)K<n$ lead to
$\sum_{i=1}^{n-K}(\sigma(y^{(k_{j})}))_{i}>\rho\sum_{i=n-K+1}^{n}(\sigma(y^{(k_{j})}))_{i},$
which contradicts (24). This completes the proof of the proposition. ∎
From Proposition 4, we conclude that a sparse solution is guaranteed via
Algorithm 1 if the transpose of $B$ satisfies the RSP. Next, we answer how
sparse this solution will be. To this end, we introduce some notation and
develop a technical lemma. For a vector $x\in\mathbb{R}^{d}$, we denote by
$\tau(x)$ the set of the indices of non-zero elements of $x$, i.e.,
$\tau(x):=\\{i:x_{i}\neq 0\\}$. For a sequence $\\{x^{(k)}:k\in\mathbb{N}\\}$,
a positive number $\mu$, and an integer $k$, we define
$I_{\mu}(x^{(k)}):=\\{i:|x_{i}^{(k)}|\geq\mu\\}$.
###### Lemma 5
Let $B$ be the $(m+1)\times n$ matrix defined by (10), let $F_{\epsilon}$ be
the Log-Det function defined by (19), and let $\\{x^{(k)}:k\in\mathbb{N}\\}$
be the sequence generated by Algorithm 1. Assume that the matrix $B^{\top}$
has the RSP of order $K$ with $\rho>0$ satisfying $(1+\rho)K<n$. If there
exists $\mu>\rho\epsilon n$ such that $|I_{\mu}(x^{(k)})|\geq K$ for all
sufficiently large $k$, then there exists a $k^{\prime\prime}\in\mathbb{N}$ such
that $\|x^{(k)}\|_{0}<n$ and
$\tau(x^{(k+1)})\subseteq\tau(x^{(k^{\prime\prime})})$ for all
$k>k^{\prime\prime}$.
###### Proof:
Set $y^{(k)}:=\nabla F_{\epsilon}(|x^{(k)}|)$. Since $x^{(k+1)}$ is a solution
to the optimization problem (16), then by Fermat’s rule and the chain rule of
subdifferential we have that
$0\in\mathrm{diag}(y^{(k)})\partial{\|\cdot\|_{1}}(\mathrm{diag}(y^{(k)})x^{(k+1)})+B^{\top}b^{(k+1)},$
where $b^{(k+1)}\in\partial\iota_{C}(Bx^{(k+1)})$. Hence, if
$x_{i}^{(k+1)}\neq 0$, we have that $y^{(k)}_{i}=|(B^{\top}b^{(k+1)})_{i}|$.
For $i\in I_{\mu}(x^{(k)})$, we have that $|x_{i}^{(k)}|\geq\mu$ and
$y^{(k)}_{i}=f^{\prime}_{\epsilon}(|x^{(k)}_{i}|)\leq
f^{\prime}_{\epsilon}(\mu)$ for all $k\in\mathbb{N}$, where
$f_{\epsilon}=\log(\cdot+\epsilon)$. Furthermore, there exists a $k^{\prime}$
such that $|x_{i}^{(k+1)}|>0$ for $i\in I_{\mu}(x^{(k)})$ and $k\geq k^{\prime}$
due to Item (iii) in Theorem 2. Thus, we have for all $k\geq k^{\prime}$
$\sum_{i\in I_{\mu}(x^{(k)})}|(B^{\top}b^{(k+1)})_{i}|=\sum_{i\in I_{\mu}(x^{(k)})}y^{(k)}_{i}\leq\sum_{i\in I_{\mu}(x^{(k)})}f^{\prime}_{\epsilon}(\mu)\leq W^{*},$
where $W^{*}=n\lim_{\epsilon\rightarrow
0+}f_{\epsilon}^{\prime}(\mu)=\frac{n}{\mu}$ is a positive number dependent on
$\mu$.
Now, we are ready to prove $\|x^{(k)}\|_{0}<n$ for all $k>k^{\prime\prime}$.
By Proposition 4, we have that $(\sigma(x^{(k)}))_{n}\rightarrow 0$ when
$k\rightarrow+\infty$. Therefore, there exists an integer
$k^{\prime\prime}>k^{\prime}$ such that $|I_{\mu}(x^{(k)})|\geq K$ and
$0\leq(\sigma(x^{(k)}))_{n}<\min\\{\frac{\mu}{\rho n}-\epsilon,\mu\\}$ for all
$k\geq k^{\prime\prime}$. Let $i_{0}$ be the index such that
$|x_{i_{0}}^{(k^{\prime\prime})}|=(\sigma(x^{(k^{\prime\prime})}))_{n}$. We
will show that $x_{i_{0}}^{(k^{\prime\prime}+1)}=0$. If this statement is not
true, that is, $x_{i_{0}}^{(k^{\prime\prime}+1)}$ is not zero, then
$|(B^{\top}b^{(k^{\prime\prime}+1)})_{i_{0}}|=f^{\prime}_{\epsilon}(|x^{(k^{\prime\prime})}_{i_{0}}|)=\frac{1}{|x^{(k^{\prime\prime})}_{i_{0}}|+\epsilon}>\rho
W^{*}.$ (25)
However, since $i_{0}$ is not in the set $I_{\mu}(x^{(k^{\prime\prime})})$ and
$B^{\top}$ satisfies the RSP, we have that
$|(B^{\top}b^{(k^{\prime\prime}+1)})_{i_{0}}|\leq\sum_{i\notin I_{\mu}(x^{(k^{\prime\prime})})}|(B^{\top}b^{(k^{\prime\prime}+1)})_{i}|\leq\rho\sum_{i\in I_{\mu}(x^{(k^{\prime\prime})})}|(B^{\top}b^{(k^{\prime\prime}+1)})_{i}|\leq\rho W^{*},$
which contradicts (25). Hence, we have that
$x_{i_{0}}^{(k^{\prime\prime}+1)}=0$ and $|\tau(x^{(k^{\prime\prime}+1)})|<n$.
By replacing $k^{\prime\prime}$ by $k^{\prime\prime}+1$ and repeating this
process we can obtain $x_{i_{0}}^{(k^{\prime\prime}+\ell)}=0$ for all
$\ell\in\mathbb{N}$. Therefore, $\|x^{(k)}\|_{0}<n$ for all $k>k^{\prime\prime}$.
This process can also be applied to other components satisfying
$x_{i}^{(k^{\prime\prime}+1)}=0$. Thus there exists a
$k^{\prime\prime}\in\mathbb{N}$ such that
$\tau(x^{(k)})\subseteq\tau(x^{(k^{\prime\prime})})$ for all $k\geq
k^{\prime\prime}$. ∎
With Lemma 5, the next result shows that when the transpose of $B$ satisfies
the RSP there exists a cluster point of the sequence generated by Algorithm 1
that is sparse and satisfies the consistency condition.
###### Theorem 6
Let $B$ be the $(m+1)\times n$ matrix defined by (10), let $F_{\epsilon}$ be
the Log-Det function defined by (19), and let $\\{x^{(k)}:k\in\mathbb{N}\\}$
be the sequence generated by Algorithm 1. Assume that the matrix $B^{\top}$
has the RSP of order $K$ with $\rho>0$ satisfying $(1+\rho)K<n$. Then there is
a subsequence $\\{x^{(k_{j})}:j\in\mathbb{N}\\}$ that converges to a
$\lfloor(1+\rho)K\rfloor$-sparse solution, that is
$(\sigma(x^{(k_{j})}))_{\lfloor(1+\rho)K+1\rfloor}\rightarrow 0$ as
$j\rightarrow+\infty$ and $\epsilon\rightarrow 0$.
###### Proof:
Suppose the theorem is _false_. Then there exists a $\mu^{*}>0$ such that, for any
$0<\epsilon^{*}<\frac{\mu^{*}}{\rho n}$, there exist an
$\epsilon\in(0,\epsilon^{*})$ and a $k^{\prime}$ such that
$(\sigma(x^{(k)}))_{\lfloor(1+\rho)K+1\rfloor}\geq\mu^{*}$ for all $k\geq
k^{\prime}$. It implies that for all $k\geq k^{\prime}$
$|I_{\mu^{*}}(x^{(k)})|\geq\lfloor(1+\rho)K+1\rfloor>(1+\rho)K>K.$ (26)
By Lemma 5, there exist a $k^{\prime\prime}\geq k^{\prime}$ such that
$\|x^{(k)}\|_{0}<n$ and $\tau(x^{(k+1)})\subseteq\tau(x^{(k^{\prime\prime})})$
for all $k\geq k^{\prime\prime}$. Let $S=\tau(x^{(k^{\prime\prime})})$. Thus
$x^{(k)}_{S^{c}}=0$ for all $k\geq k^{\prime\prime}$. Therefore, the
optimization problem (16) for updating $x^{(k+1)}$ can be reduced to the
following one
$x_{S}^{(k+1)}\in\mathrm{argmin}\\{\langle(\nabla
F_{\epsilon}(|x^{(k)}|))_{S},|u|\rangle+\iota_{\mathcal{C}}(B_{S}u):u\in\mathbb{R}^{|S|}\\}.$
(27)
If $|\tau(x^{(k^{\prime\prime})})|>|I_{\mu^{*}}(x^{(k^{\prime\prime})})|$,
from (26) we have $(1+\rho)K<|S|$. Thus from Lemma 5 and $B^{\top}_{S}$ having
the RSP with the same parameters, there exists a
$k^{\prime\prime\prime}>k^{\prime\prime}$ such that
$\tau(x^{(k)})\subsetneq\tau(x^{(k^{\prime\prime})})$ for all $k\geq
k^{\prime\prime\prime}$. Therefore, by induction, there must exist a
$\tilde{k}$ such that for all $k\geq\tilde{k}$
$\tau(x^{(k)})=I_{\mu^{*}}(x^{(k)}),\;\tau(x^{(k)})\subseteq\tau(x^{(\tilde{k})}).$
It means that for all $k\geq\tilde{k}$ all the nonzero components of $x^{(k)}$
are bounded below by $\mu^{*}$. Therefore, for any $k\geq\tilde{k}$, the
updating equation (16) reduces to (27) with $S=I_{\mu^{*}}(x^{(k)})$. From
Proposition 4 we get $(\sigma(x^{(k)}))_{|S|}\rightarrow 0$, which contradicts
$(\sigma(x^{(k)}))_{|S|}\geq\mu^{*}$. Therefore, the theorem holds. ∎
## V An Implementation of Algorithm 1
In this section, we describe in detail an implementation of Algorithm 1 and
show how to select the parameters of the associated algorithm.
Solving problem (16) is the main issue for Algorithm 1. A general model
related to (16) is
$\min\\{\|\Gamma x\|_{1}+\varphi(Bx):x\in\mathbb{R}^{n}\\},$ (28)
where $\Gamma$ is a diagonal matrix with positive diagonal elements and
$\varphi$ is in $\Gamma_{0}(\mathbb{R}^{m+1})$. In particular, if we choose
$\Gamma=\mathrm{diag}(\nabla F_{\epsilon}(|x^{(k)}|))$ and $\varphi=\iota_{\mathcal{C}}$,
where $x^{(k)}$ is a vector in $\mathbb{R}^{n}$, $\epsilon$ is a positive
number, $\mathcal{C}$ is given by (11), and $F_{\epsilon}$ is a function given
by (13), then model (28) reduces to the optimization problem in Algorithm 1.
We solve model (28) by using a recently developed first-order primal-dual
algorithm (see, e.g., [6, 13, 20]). To present this algorithm, we need two
concepts in convex analysis, namely, the proximity operator and conjugate
function. The proximity operator was introduced in [15]. For a function
$f\in\Gamma_{0}(\mathbb{R}^{d})$, the proximity operator of $f$ with parameter
$\lambda$, denoted by $\mathrm{prox}_{\lambda f}$, is a mapping from
$\mathbb{R}^{d}$ to itself, defined for a given point $x\in\mathbb{R}^{d}$ by
$\mathrm{prox}_{\lambda
f}(x):=\mathop{\mathrm{argmin}}\left\\{\frac{1}{2\lambda}\|u-x\|^{2}_{2}+f(u):u\in\mathbb{R}^{d}\right\\}.$
The conjugate of $f\in\Gamma_{0}(\mathbb{R}^{d})$ is the function
$f^{*}\in\Gamma_{0}(\mathbb{R}^{d})$ defined at $z\in\mathbb{R}^{d}$ by
$f^{*}(z):=\sup\\{\langle x,z\rangle-f(x):x\in\mathbb{R}^{d}\\}.$
With this notation, the first-order primal-dual (PD) method for solving (28)
is summarized in Algorithm 2 (referred to as PD-subroutine).
Algorithm 2 PD-subroutine (The first-order primal-dual algorithm for solving
(28))
Input: the $(m+1)\times n$ matrix $B$ defined by (10); two positive numbers
$\alpha$ and $\beta$ satisfying the relation
$\alpha\beta<\frac{1}{\|B\|^{2}}$; the $n\times n$ diagonal matrix $\Gamma$
with all diagonal elements positive; and the function
$\varphi\in\Gamma_{0}(\mathbb{R}^{m+1})$.
Initialization: $i=0$ and an initial guess
$(u^{-1},u^{0},x^{0})\in\mathbb{R}^{m+1}\times\mathbb{R}^{m+1}\times\mathbb{R}^{n}$
repeat($i\geq 0$)
Step 1: Compute $x^{i+1}$:
$x^{i+1}=\mathrm{prox}_{\alpha\|\cdot\|_{1}\circ\Gamma}\left(x^{i}-\alpha
B^{\top}(2u^{i}-u^{i-1})\right)$
Step 2: Compute $u^{i+1}$:
$u^{i+1}=\mathrm{prox}_{\beta\varphi^{*}}(u^{i}+\beta Bx^{i+1})$
Step 3: Set $i:=i+1$.
until a given stopping criterion is met and the corresponding vectors $u^{i}$,
$u^{i+1}$, and $x^{i+1}$ are denoted by $u^{cur}$, $u^{new}$, and $x^{new}$,
respectively.
Output:
$(u^{cur},u^{new},x^{new})=\mathrm{PD}(\alpha,\beta,B,\Gamma,\varphi,u^{-1},u^{0},x^{0})$
###### Theorem 7
Let $B$ be an $(m+1)\times n$ matrix defined by (10), let $\mathcal{C}$ be the
set given by (11), let $\alpha$ and $\beta$ be two positive numbers, and let
$L$ be a positive number such that $L\geq\|B\|^{2}$, where $\|B\|$ is the largest
singular value of $B$. If
$\alpha\beta L<1,$
then for any arbitrary initial vector
$(u^{-1},u^{0},x^{0})\in\mathbb{R}^{m+1}\times\mathbb{R}^{m+1}\times\mathbb{R}^{n}$,
the sequence $\\{x^{i}:i\in\mathbb{N}\\}$ generated by Algorithm 2 converges
to a solution of model (28).
The proof of Theorem 7 follows immediately from Theorem 1 in [6] or Theorem
3.5 in [13]; we omit the details here.
Both proximity operators $\mathrm{prox}_{\alpha\|\cdot\|_{1}\circ\Gamma}$ and
$\mathrm{prox}_{\beta\varphi^{*}}$ should be computed easily and efficiently
in order to make the iterative scheme in Algorithm 2 numerically efficient.
Indeed, the proximity operator
$\mathrm{prox}_{\alpha\|\cdot\|_{1}\circ\Gamma}$ is given at
$z\in\mathbb{R}^{n}$ as follows: for $j=1,2,\ldots,n$
$\left(\mathrm{prox}_{\alpha\|\cdot\|_{1}\circ\Gamma}(z)\right)_{j}=\max\left\\{|z_{j}|-\alpha\gamma_{j},0\right\\}\cdot\mathrm{sign}(z_{j}),$
(29)
where $\gamma_{j}$ is the $j$th diagonal element of $\Gamma$. Using the well-
known Moreau decomposition (see, e.g. [1, 15])
$\mathrm{prox}_{\beta\varphi^{*}}=I-\beta\;\mathrm{prox}_{\frac{1}{\beta}\varphi}\circ\left(\frac{1}{\beta}I\right),$
(30)
we can compute the proximity operator $\mathrm{prox}_{\beta\varphi^{*}}$ via
$\mathrm{prox}_{\frac{1}{\beta}\varphi}$ which depends on a particular form of
the function $\varphi$. As our purpose is to develop algorithms for the
optimization problem in Algorithm 1, we need to compute the proximity operator
of $\iota^{*}_{\mathcal{C}}$ which is given in the following.
###### Lemma 8
If $\mathcal{C}$ is the set given by (11) and $\beta$ is a positive number,
then for $z\in\mathbb{R}^{m+1}$ we have that
$\mathrm{prox}_{\beta\iota^{*}_{\mathcal{C}}}(z)=(z_{1}-(z_{1})_{+},\ldots,z_{m}-(z_{m})_{+},z_{m+1}-\beta),$
(31)
where $(s)_{+}$ is $s$ if $s\geq 0$ and $0$ otherwise.
###### Proof:
We first give an explicit form for the proximity operator
$\mathrm{prox}_{\frac{1}{\beta}\iota_{\mathcal{C}}}$. Note that
$\iota_{\mathcal{C}}=\frac{1}{\beta}\iota_{\mathcal{C}}$ for $\beta>0$ and
$\iota_{\mathcal{C}}(z)=\iota_{\\{1\\}}(z_{m+1})+\sum_{i=1}^{m}\iota_{[0,\infty)}(z_{i})$,
for $z\in\mathbb{R}^{m+1}$. Hence, we have that
$\mathrm{prox}_{\frac{1}{\beta}\iota_{\mathcal{C}}}(z)=((z_{1})_{+},(z_{2})_{+},\ldots,(z_{m})_{+},1),$
(32)
where $(s)_{+}$ is $s$ if $s\geq 0$ and $0$ otherwise. Here we use the facts
that $\mathrm{prox}_{\iota_{[0,+\infty)}}(s)=(s)_{+}$ and
$\mathrm{prox}_{\iota_{\\{1\\}}}(s)=1$ for any $s\in\mathbb{R}$.
By the Moreau decomposition (30), we have that
$\mathrm{prox}_{\beta\iota^{*}_{\mathcal{C}}}(z)=z-\beta\mathrm{prox}_{\frac{1}{\beta}\iota_{\mathcal{C}}}(\frac{1}{\beta}z)$.
This together with equation (32) yields (31). ∎
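Both closed forms are easy to code; the following NumPy sketch (function names are ours) implements (29) and (31):

```python
import numpy as np

def prox_weighted_l1(z, alpha, gamma):
    """Equation (29): component-wise soft-thresholding with thresholds alpha * gamma_j."""
    return np.sign(z) * np.maximum(np.abs(z) - alpha * gamma, 0.0)

def prox_conj_indicator_C(z, beta):
    """Equation (31): prox of beta * (iota_C)^*, i.e. z - beta * proj_C(z / beta)."""
    out = z.copy()
    out[:-1] = z[:-1] - np.maximum(z[:-1], 0.0)   # z_i - (z_i)_+ for i = 1,...,m
    out[-1] = z[-1] - beta                        # z_{m+1} - beta
    return out
```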
Next, we comment on the diagonal matrix $\Gamma$ in model (28). When the
function $\varphi$ in model (28) is chosen to be $\iota_{C}$, then the
relation $a\varphi=\varphi$ holds for any positive number $a$. Hence,
rescaling the diagonal matrix $\Gamma$ in model (28) by any positive number
does not alter the solutions of model (28). Therefore, we can assume that
the largest diagonal entry of $\Gamma$ is always equal to one.
In applications of Theorem 7 as in Algorithm 2, we should make the product of
$\alpha$ and $\beta$ as close to ${1}/{\|B\|^{2}}$ as possible. In our
numerical simulations, we always set
$\alpha=\frac{0.999}{\beta\|B\|^{2}}.$ (33)
In this way, $\beta$ is essentially the only parameter that needs to be
determined.
Prior to computing $\alpha$ for a given $\beta$ by equation (33), we need to
know the norm of the matrix $B$. When $\min\\{m,n\\}$ is small, the norm of
the matrix $B$ can be computed directly. When $\min\\{m,n\\}$ is large, an
upper bound of the norm of the matrix $B$ is estimated in terms of the size of
$B$ as follows.
###### Proposition 9
Let $\Phi$ be an $m\times n$ matrix with i.i.d. standard Gaussian entries and
$y$ be an $m$-dimensional vector with each component being $+1$ or $-1$. We
define an $(m+1)\times n$ matrix $B$ from $\Phi$ and $y$ via equation (10).
Then
$\mathbb{E}\\{\|B\|\\}\leq\sqrt{m+1}(\sqrt{n}+\sqrt{m}).$
Moreover,
$\|B\|\leq\sqrt{m+1}(\sqrt{n}+\sqrt{m}+t)$
holds with probability at least $1-2e^{-t^{2}/2}$ for all $t\geq 0$.
###### Proof:
By the structure of the matrix $B$ in (10), we know that
$\|B\|\leq\left\|\begin{bmatrix}\mathrm{diag}(y)\\\
y^{\top}\end{bmatrix}\right\|\cdot\|\Phi\|.$
Therefore, we just need to compute the norms on the right-hand side of the
above inequality. Denote by $I_{m}$ the $m\times m$ identity matrix and
${1}_{m}$ the vector with all its components being $1$. Then
$\begin{bmatrix}\mathrm{diag}(y)\\\
y^{\top}\end{bmatrix}\begin{bmatrix}\mathrm{diag}(y)&y\end{bmatrix}=\begin{bmatrix}I_{m}&{1}_{m}\\\
{1}_{m}^{\top}&m\end{bmatrix},$
which is a special arrow-head matrix and has $m+1$ as its largest eigenvalue
(see [18]). Hence,
$\left\|\begin{bmatrix}\mathrm{diag}(y)\\\
y^{\top}\end{bmatrix}\right\|=\sqrt{m+1}.$
Furthermore, by using random matrix theory for the matrix $\Phi$, we know that
$\mathbb{E}\\{\|\Phi\|\\}\leq\sqrt{n}+\sqrt{m}$ and
$\|\Phi\|\leq\sqrt{n}+\sqrt{m}+t$ with probability at least $1-2e^{-t^{2}/2}$
for all $t\geq 0$ (see, e.g., [7]). This completes the proof of this
proposition. ∎
Let us compute the norm of $B$ numerically for $100$ randomly generated
matrices $\Phi$ and vectors $y$ for the pair $(m,n)$ with three different
choices $(500,1000)$, $(1000,1000)$, and $(1500,1000)$, respectively.
Corresponding to these choices, the mean values of $\|B\|$ are about $815$,
$1276$, and $1711$ while the upper bounds of the expected values of $\|B\|$ by
Proposition 9 are about $1208$, $2001$, and $2726$, respectively. We can see
that the norm of $B$ varies with its size and becomes a large number when
the value of $\min\\{m,n\\}$ is relatively large. As a consequence, the
parameter $\alpha$ or $\beta$ must be very small relative to the other by
equation (33). Therefore, in what follows, the matrix $B$ used in model (28)
is assumed to have been rescaled in the following way:
$\frac{B}{\|B\|}\quad\mbox{or}\quad\frac{B}{\sqrt{m+1}(\sqrt{n}+\sqrt{m})}$
(34)
depending on whether or not the norm of $B$ can be computed easily.
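The following sketch (ours, not the authors' code) compares a numerically computed $\|B\|$ with the expected-value bound of Proposition 9 and applies the rescaling (34):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 500, 1000
Phi = rng.standard_normal((m, n))
y = np.where(rng.standard_normal(m) >= 0, 1.0, -1.0)

B = np.vstack([np.diag(y) @ Phi, y @ Phi])           # matrix B of (10)
norm_B = np.linalg.norm(B, 2)                        # largest singular value of B
bound = np.sqrt(m + 1) * (np.sqrt(n) + np.sqrt(m))   # expected-value bound of Proposition 9
# The text reports mean norms of about 815 versus a bound of about 1208 for (m, n) = (500, 1000).
print(norm_B, bound)

B_scaled = B / norm_B                                # rescaling (34) when norm_B is computed directly
```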
The complete procedure for model (12) and how the PD-subroutine is employed
are summarized in Algorithm 3.
Algorithm 3 (Iterative scheme for model (12))
Input: the $(m+1)\times n$ matrix $B$ formed by an $m\times n$ matrix $\Phi$
and an $m$-dimensional vector $y$ via (10); the set $\mathcal{C}$ given by
(11); $\epsilon\in(0,1)$ and $\tau>0$; two real numbers $\alpha_{\max}$ and $\epsilon_{\min}$;
the maximum iteration number $k_{\max}$.
Initialization: normalizing $B$ according to (34); $\Gamma$ being the $n\times
n$ identity matrix; an initial guess
$(u^{old_{0}},u^{cur_{0}},x^{(0)})\in\mathbb{R}^{m+1}\times\mathbb{R}^{m+1}\times\mathbb{R}^{n}$;
and initial parameters $\beta$ and $\alpha=0.999/\beta$.
while $k<k_{\max}$ do
Step 1: Compute $(u^{old_{k+1}},u^{cur_{k+1}},x^{(k+1)})=\mathrm{PD}(\alpha,\beta,B,\Gamma,\iota_{\mathcal{C}},u^{old_{k}},u^{cur_{k}},x^{(k)})$
Step 2: Update $\Gamma$ as the scaled matrix $\mathrm{diag}(\nabla
F_{\epsilon}(|x^{(k+1)}|))$ such that the largest diagonal element of $\Gamma$
is one.
Step 3: If $\alpha<\alpha_{\max}$, update $\alpha\leftarrow
2\alpha,\quad\beta\leftarrow\beta/2$; if $\epsilon>\epsilon_{\min}$, update
$\epsilon\leftarrow\tau\epsilon$;
Step 4: Update $k\leftarrow k+1$.
end while
Output: $x^{(k_{\max})}$
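To make the overall procedure concrete, here is a compact NumPy sketch of Algorithms 2 and 3; it is an illustrative reimplementation with our own names, not the authors' code. The Log-Det weights, the zero initial guess, a fixed number of inner iterations, and the default parameter values (matching the choices reported in Section VI-C) are our assumptions.

```python
import numpy as np

def pd_subroutine(alpha, beta, B, gamma, u_old, u_cur, x, iters=300):
    """Algorithm 2 (PD-subroutine) for model (28) with varphi = iota_C; gamma holds diag(Gamma)."""
    for _ in range(iters):
        # Step 1: prox of the weighted l1-norm, closed form (29)
        z = x - alpha * (B.T @ (2.0 * u_cur - u_old))
        x_new = np.sign(z) * np.maximum(np.abs(z) - alpha * gamma, 0.0)
        # Step 2: prox of beta * (iota_C)^*, closed form (31)
        v = u_cur + beta * (B @ x_new)
        u_new = v.copy()
        u_new[:-1] -= np.maximum(v[:-1], 0.0)
        u_new[-1] -= beta
        u_old, u_cur, x = u_cur, u_new, x_new
    return u_old, u_cur, x

def blind_one_bit_cs(Phi, y, k_max=17, tau=0.5, alpha0=250.0, eps0=0.125,
                     alpha_max=8000.0, eps_min=1e-5, pd_iters=300):
    """Algorithm 3 with the Log-Det weights; default parameters follow Section VI-C."""
    m, n = Phi.shape
    B = np.vstack([np.diag(y) @ Phi, y @ Phi])     # matrix B of (10)
    B = B / np.linalg.norm(B, 2)                   # rescaling (34), so ||B|| = 1
    gamma = np.ones(n)                             # Gamma starts as the identity matrix
    alpha, eps = alpha0, eps0
    beta = 0.999 / alpha                           # relation (33) with ||B|| = 1
    u_old, u_cur, x = np.zeros(m + 1), np.zeros(m + 1), np.zeros(n)
    for _ in range(k_max):
        u_old, u_cur, x = pd_subroutine(alpha, beta, B, gamma, u_old, u_cur, x, pd_iters)
        w = 1.0 / (np.abs(x) + eps)                # Log-Det weights f'_eps(|x_i|), cf. (19)
        gamma = w / np.max(w)                      # rescale so the largest diagonal entry is one
        if alpha < alpha_max:
            alpha, beta = 2.0 * alpha, beta / 2.0
        if eps > eps_min:
            eps = tau * eps
    return x
```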
## VI Numerical Simulations
In this section, we demonstrate the performance of Algorithm 3 for 1-bit
compressive sampling reconstruction in terms of accuracy and consistency and
compare it with the BIHT algorithm.
Throughout this section, all random $m\times n$ matrices $\Phi$ and length-$n$,
$s$-sparse vectors $x$ are generated based on the following assumption:
entries of $\Phi$ and of $x$ on its support are i.i.d. Gaussian random variables
with zero mean and unit variance. The locations of the nonzero entries (i.e.,
the support) of $x$ are randomly permuted. We then generate the 1-bit
observation vector $y$ by equation (2). We obtain reconstruction of
$x^{\star}$ from $y$ by using the BIHT and Algorithm 3. The quality of the
reconstructed $x^{\star}$ is measured in terms of the signal-to-noise ratio
(SNR) in dB
$\mathrm{SNR}(x,x^{\star})=20\log_{10}\left(\left\|\frac{x}{\|x\|}\right\|_{2}/\left\|\frac{x}{\|x\|}-\frac{x^{\star}}{\|x^{\star}\|}\right\|_{2}\right).$
The accuracy of the BIHT and Algorithm 3 is measured by the average of SNR
values over 100 trials unless otherwise noted. For all figures in this
section, results by the BIHT and Algorithm 3 with the Mangasarian function
(17) and the Log-Det function (19) are marked by the symbols
“$\triangledown$”, “$\circ$”, and “$\star$”, respectively.
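A small helper (ours) for the SNR criterion above; note that the numerator is the norm of a unit vector and so equals one:

```python
import numpy as np

def snr_db(x, x_star):
    """SNR in dB between the unit-normalized true signal x and reconstruction x_star."""
    u = x / np.linalg.norm(x)
    v = x_star / np.linalg.norm(x_star)
    return 20.0 * np.log10(np.linalg.norm(u) / np.linalg.norm(u - v))
```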
### VI-A Effects of using inaccurate sparsity on the BIHT
The BIHT requires the availability of the sparsity of the underlying signals.
This quantity, however, is usually not known in practical applications. In this
subsection, we demonstrate through numerical experiments that a mismatched
sparsity input for a signal will degrade the performance of the BIHT.
To this end, we fix $n=1000$ and $s=10$ and consider two cases of $m$ being
$500$ and $1000$. For each case, we vary the sparsity input for the BIHT from
$8$ to $12$, in which $10$ is the only correct choice. Therefore, there are ten
configurations in total. For each configuration, we record the SNR values and the
number of sign constraints violated by the signals reconstructed
by the BIHT.
Figure 2 depicts the SNR values of the experiments. The plots in the left
column of Figure 2 are for the case $m=500$ while the plots in the right
column are for the case $m=1000$. The marks in each plot represent the pairs
of the SNR values with a mismatched sparsity input (i.e., $s=8$, $s=9$,
$s=11$, or $s=12$ corresponding to the row 1, 2, 3, or 4) and with the correct
sparsity input (i.e., $s=10$). A mark below the red line indicates that the
BIHT with the correct sparsity input works better than the one with an
incorrect sparsity input. A mark that is far away from the red line indicates
the BIHT with the correct sparsity input works much better than the one with
an incorrect sparsity input or vice versa. Except the second plot in the left
column, we can see that the BIHT with the correct sparsity input performs
better than the one with an inaccurate sparsity input. In particular, when an
underestimated sparsity input to the BIHT is used, the performance of the BIHT
will be significantly reduced (see the plots in the first two columns of
Figure 2). When an overestimated sparsity input to the BIHT is used, majority
marks are under the red lines and are relatively closer to the red lines than
those from the BIHT with underestimated sparsity input. We further report that
the average SNR values for the sparsity input $s=8$, $9$, $10$, $11$, and $12$
for $m=500$ are $21.89$dB, $24.18$dB, $23.25$dB, $22.10$dB, and $21.00$dB,
respectively. Similarly, for $m=1000$, the average SNR values for the sparsity
input $s=8$, $9$, $10$, $11$, and $12$ are $19.77$dB, $26.37$dB, $34.74$dB,
$31.12$dB, and $29.46$dB, respectively.
(a) $m=500$ | (b) $m=1000$
Figure 2: The marks in the plots (from the top row to the bottom row)
represent the pairs of the SNR values from the BIHT with the correct sparsity
input (i.e., $s=10$) and incorrect sparsity inputs 8, 9, 11, and 12,
respectively. We fix $n=1000$.
Figure 3 (a) and (b) illustrate the histograms of the numbers of unsatisfied
consistency conditions over 200 trials for $m=500$ and $1000$, respectively.
We can see from Figure 3 (a) that the use of an underestimated sparsity
constraint ($s=8$ or $9$) will tend to yield, on average, a solution with a
large number of sign constraints unsatisfied; in other words, under the
current setting the solution to model (7) via the BIHT does not satisfy
equation (2). As expected, when an overestimated sparsity constraint ($s=11$
or $12$) is used, the sign constraints are usually satisfied.
In summary, we conclude that a properly chosen sparsity constraint is critical
for the success of the BIHT.
Figure 3: The histograms of the numbers of unsatisfied consistency conditions
over 200 trials with (a) $(m,n)=(500,1000)$ and (b) $(m,n)=(1000,1000)$.
### VI-B Plan-Vershynin’s model for 1-bit reconstruction
Both our model (9) and Plan-Vershynin’s model (8) use the same constraint
conditions. Their objective functions are different. Our model uses the
$\ell_{0}$-norm while Plan-Vershynin’s model uses the $\ell_{1}$-norm. As
suggested in [17], linear programming can be applied to the Plan-Vershynin
model. We report here some numerical results for this model.
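For reference, the sketch below sets up this linear program in the form we read from [17]: minimise the $\ell_{1}$-norm subject to sign consistency and a normalisation constraint on the measurements. The variable splitting and the solver are implementation assumptions, not the code used for Figure 4.

```python
import numpy as np
from scipy.optimize import linprog

def plan_vershynin_lp(Phi, y):
    """min ||x||_1 s.t. y_i <Phi_i, x> >= 0 and sum_i y_i <Phi_i, x> = m (our reading of [17]).

    Solved as an LP by splitting x = u - v with u, v >= 0 (an implementation assumption)."""
    m, n = Phi.shape
    G = y[:, None] * Phi                          # rows are y_i * Phi_i
    c = np.ones(2 * n)                            # ||x||_1 = sum(u) + sum(v)
    A_ub, b_ub = np.hstack([-G, G]), np.zeros(m)  # -y_i <Phi_i, u - v> <= 0
    A_eq = np.hstack([G.sum(axis=0), -G.sum(axis=0)])[None, :]
    b_eq = np.array([float(m)])                   # sum_i y_i <Phi_i, u - v> = m
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq)
    u, v = res.x[:n], res.x[n:]
    return u - v

rng = np.random.default_rng(0)
n, m, s = 1000, 1000, 10
x = np.zeros(n)
x[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
Phi = rng.standard_normal((m, n))
x_hat = plan_vershynin_lp(Phi, np.sign(Phi @ x))
print(np.count_nonzero(np.abs(x_hat) > 1e-6))     # sparsity of the LP reconstruction
```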
In our simulations, we fix $n=1000$, $m=1000$, and $s=10$. All simulations
were performed over 100 trials. Figure 4 illustrates the sparsity of the
reconstructions for all trials, which is clearly greater than 10 (indicated by
the solid red line in the figure). The average sparsity of the reconstructions
over 100 trials is $23.42$. Recall that the average SNR value of the
reconstructions by the BIHT is $34.74$dB.
Figure 4: Results for Plan-Vershynin’s model using Linear programming over 100
trials.
### VI-C Performance of Algorithm 3
Prior to applying Algorithm 3 to the 1-bit compressive sampling problem,
parameters $k_{\max}$, $\tau$, $\alpha_{\max}$, $\epsilon_{\min}$, $\alpha$,
and $\epsilon$ in Algorithm 3 need to be determined. Under the aforementioned
setting for the random matrix $\Phi$ and sparse signal $x$, we fix
$k_{\max}=17$, $\tau=\frac{1}{2}$, $\alpha_{\max}=8000$,
$\epsilon_{\min}=10^{-5}$. For the functions $F_{\epsilon}$ defined by (17)
and (19), we set the pair of initial parameters $(\alpha,\epsilon)$ as
$(500,0.25)$ and $(250,0.125)$, respectively. The iterative process in the
PD-subroutine is forced to stop if the corresponding number of iterations
exceeds $300$. These parameters are used in all simulations performed by Algorithm 3
in the rest of this section.
To evaluate the performance of Algorithm 3 in terms of SNR values at various
scenarios, we consider three configurations for the size of the random matrix
$\Phi$ and the sparsity of the vector $x$. In the first configuration, we fix
$n=1000$ and $s=10$ and vary $m$ such that the ratio $m/n$ is between $0.1$
and $2$. In the second configuration, we fix $m=1000$ and $n=1000$ and vary
the sparsity of $x$ from $1$ to $20$. In the third configuration, we fix
$m=1000$ and $s=10$ and vary $n$ from $500$ to $1400$.
For every case in each configuration, we compare the accuracy of Algorithm 3
with the BIHT by computing the average of SNR values over 100 trials. For the
given parameters and stopping criteria adopted by Algorithm 3, the estimate
$x^{(k_{\max})}$ may not satisfy the consistency condition (3); that is, the
signs of the measurements of the estimate $x^{(k_{\max})}$ may not be
completely consistent with those of the original measurements. Thus, for a fair
comparison, we only compute the average of SNR values over those trials in
which the reconstructions from both the BIHT and Algorithm 3 satisfy the
consistency condition (3); we call such trials valid.
For the first configuration, the SNR values in decibels of the average
reconstruction errors by both the BIHT and Algorithm 3 are depicted in Figure
5. The plots demonstrate that our proposed algorithm performs as well as the
BIHT, in particular when $m/n$ is greater than $1$, even though our algorithm
does not require knowledge of the exact sparsity of the original signal. We can
see that Algorithm 3 with the Log-Det function (19) (Figure 5(b)) performs
slightly better than with the Mangasarian function (17) (Figure 5(a)).
(a) Mangasarian | (b) Log-Det
Figure 5: Average SNR values vs. $m/n$ for fixed $n=1000$ and $s=10$.
(a) Mangasarian | (b) Log-Det
Figure 6: The difference of SNR values of estimates by Algorithm 3 and the
BIHT vs. sparsity of reconstructed estimates by Algorithm 3. We fix $n=1000$
and $s=10$. Rows 1 to 4 correspond to $m$ being 200, 800, 1400, and 2000,
respectively.
Detailed descriptions for valid trials for $m=200$, $800$, $1400$, and $2000$
are displayed in the rows (from top to bottom) of Figure 6, respectively. The
horizontal axis of each plot represents the sparsity of the reconstructed
signals by Algorithm 3 while the vertical axis represents the difference of
the SNR values of the reconstructions between Algorithm 3 and the BIHT.
Therefore, the marks (“$\circ$” and “$\star$”) above the dashed horizontal
lines indicate that Algorithm 3 performs better than the BIHT for the
corresponding trials. Since all ideal signals in our simulations are 10-sparse,
a mark whose horizontal coordinate is greater than, exactly equal to, or
smaller than $10$ implies that the $\ell_{0}$-norm of the corresponding
reconstruction by Algorithm 3 is greater than, exactly equal to, or smaller
than $10$, respectively. Thus, an $\ell_{0}$-norm above $10$ indicates that the
reconstruction is not a global minimizer of model (9); an $\ell_{0}$-norm equal
to $10$ indicates that the sparsity of the reconstruction is consistent with
that of the original test signal; and an $\ell_{0}$-norm below $10$ indicates
that the reconstruction is potentially a global minimizer of model (9) while
the original test signal is not a solution to model (9). We can conclude from
Figure 6 that (i) the reconstructions by Algorithm 3 with sparsity higher
(resp. lower) than 10 usually have lower (resp. higher) SNR values than those
by the BIHT; and (ii) increasing $m$ (the number of measurements) tends to
reduce the sparsity of the reconstructions. For example, the average sparsity
of the reconstructions for $m=200$, $800$, $1400$, and $2000$ is, respectively,
11.74, 10.26, 10, and 10.06 for Algorithm 3 with the Mangasarian function, and
11.53, 10.05, 9.88, and 9.88 for Algorithm 3 with the Log-Det function.
For the second configuration, the SNR values in decibels of the average
reconstruction errors by both the BIHT and Algorithm 3 are compared in Figure
7 for varying sparsity of original signals. The plots demonstrate that our
proposed algorithm performs better than the BIHT for sparsity $s$ being $2$
and $6$ to $10$. We emphasize again that, unlike the BIHT, Algorithm 3 does not
require the exact sparsity of the original signal in advance. We remark that
when $s=1$ both the BIHT and Algorithm 3 find an exact solution to model (9).
This phenomenon was also reported in [12]. Detailed descriptions for valid
trials for $s=2$, $8$, $14$, and $20$ are displayed in the rows (from top to
bottom) of Figure 8, respectively. The marks in each plot of Figure 8 have the
same meaning as that in Figure 6. For fixed $m=1000$ and $n=1000$ we can draw
conclusions from Figure 8 that (i) Algorithm 3 tends to produce an estimate
whose sparsity is consistent with the ideal sparse signal; (ii) Algorithm 3
can give an estimate whose sparsity is smaller than that of the ideal sparse
signal, in particular when the sparsity of the original signal is relatively
large.
(a) Mangasarian | (b) Log-Det
Figure 7: Average SNR values vs. sparsity of original signals for fixed $n=1000$ and $m=1000$.
(a) Mangasarian | (b) Log-Det
Figure 8: The difference of SNR values of estimates by Algorithm 3 and the
BIHT vs. sparsity of reconstructed estimates by Algorithm 3. We fix $n=1000$
and $m=1000$. Rows 1 to 4 correspond to $s$ being 2, 8, 14, and 20,
respectively.
For the third configuration, the SNR values in decibels of the average
reconstruction errors by both the BIHT and Algorithm 3 are compared in Figure
9 for fixed $m=1000$ and $s=10$ and varying dimensions of original signals.
The plots in Figure 9 show that the average SNR values for reconstructions by
Algorithm 3 are lower than those by the BIHT in most cases. This is because the
BIHT exploits additional information on the sparsity of the original signal
that is unavailable in practice. Another reason, visible in Figure 10, is that
reconstructions by Algorithm 3 with sparsity larger than $10$ usually have
lower SNR values than those by the BIHT. The marks in each plot of Figure 10
have the same meaning as those in Figures 6 and 8. For fixed $m=1000$ and
$s=10$ we can draw two conclusions from Figure 10: (i) Algorithm 3 can give an
estimate whose sparsity is smaller than that of the ideal sparse signal, and
(ii) Algorithm 3 with the Log-Det function works better than with the
Mangasarian function.
(a) Mangasarian | (b) Log-Det
Figure 9: Average SNR values of estimates vs. the signal size $n$ for fixed $m=1000$ and $s=10$.
(a) Mangasarian | (b) Log-Det
Figure 10: The difference of SNR values of estimates by Algorithm 3 and the
BIHT vs. sparsity of reconstructed estimates by Algorithm 3. We fix $m=1000$
and $s=10$. Rows 1 to 4 correspond to $n$ being 500, 800, 1100, and 1400,
respectively.
## VII Summary and Conclusion
In this paper we proposed a new model and algorithm for 1-bit compressive
sensing. Unlike the state-of-the-art BIHT method, our model does not need to
know the sparsity of the signal of interest. We demonstrated the performance
of our proposed algorithm for reconstruction from 1-bit measurements.
It would be of interest to study the convergence of Algorithm 3 with the
Mangasarian function in the future. It would also be highly desirable to
adaptively update all parameters in Algorithm 3 so that a consistent
reconstruction can always be achieved with improved solution accuracy.
## Acknowledgements
The authors would like to thank Ms. Na Zhang for her valuable comments and
insightful suggestions which have brought improvements to several aspects of
this manuscript.
The views and conclusions contained herein are those of the authors and should
not be interpreted as necessarily representing the official policies or
endorsement, either expressed or implied, of the Air Force Research Laboratory
or the U.S. Government.
## References
* [1] H. L. Bauschke and P. L. Combettes, Convex Analysis and Monotone Operator Theory in Hilbert Spaces, AMS Books in Mathematics, Springer, New York, 2011.
* [2] T. Blumensath and M. E. Davies, Iterative hard thresholding for compressed sensing, Applied and Computational Harmonic Analysis, 27 (2009), pp. 265 – 274.
* [3] P. T. Boufounos and R. G. Baraniuk, 1-bit compressive sensing, Proceedings of Conference on Information Science and Systems (CISS), Princeton, NJ, 2008.
* [4] E. Candes, J. Romberg, and T. Tao, Stable signal recovery from incomplete and inaccurate measurements, Communications on Pure and Applied Mathematics, 59 (2006), pp. 1207–1223.
* [5] E. Candes and T. Tao, Near optimal signal recovery from random projections: Universal encoding strategies?, IEEE Transactions on Information Theory, 52 (2006), pp. 5406–5425.
* [6] A. Chambolle and T. Pock, A first-order primal-dual algorithm for convex problems with applications to imaging, Journal of Mathematical Imaging and Vision, 40 (2011), pp. 120–145.
  * [7] K. Davidson and S. Szarek, Local operator theory, random matrices and Banach spaces, in Handbook of the Geometry of Banach Spaces, vol. I, Amsterdam: North-Holland, 2001, pp. 317–366.
  * [8] M. Fazel, H. Hindi, and S. Boyd, Log-det heuristic for matrix rank minimization with applications to Hankel and Euclidean distance matrices, vol. 3 of Proceedings of American Control Conference, 2003, pp. 2156 – 2162.
* [9] E. Hale, W. Yin, and Y. Zhang, Fixed-point continuation for $\ell_{1}$ minimization: Methodology and convergence, SIAM Journal on Optimization, 19 (2008), pp. 1107–1130.
* [10] L. Jacques, J. N. Laska, P. T. Boufounos, and R. G. Baraniuk, Robust 1-bit compressive sensing via binary stable embeddings of sparse vectors, IEEE Transactions on Information Theory, accepted, (2012).
* [11] S. Jokar and M. E. Pfetsch, Exact and approximate sparse solutions of underdetermined linear equations, SIAM Journal on Scientific Computing, 31 (2008), pp. 23–44.
  * [12] J. Laska, Z. Wen, W. Yin, and R. Baraniuk, Trust, but verify: Fast and accurate signal recovery from 1-bit compressive measurements, IEEE Transactions on Signal Processing, 59 (2011), pp. 5289 – 5301.
* [13] Q. Li, C. A. Micchelli, L. Shen, and Y. Xu, A proximity algorithm accelerated by Gauss-Seidel iterations for $\mathrm{L1/TV}$ denoising models, Inverse Problems, 28 (2012), p. 095003.
* [14] O. L. Mangasarian, Minimum-support solutions of polyhedral concave programs, Optimization, 45 (1999), pp. 149 – 162.
  * [15] J.-J. Moreau, Fonctions convexes duales et points proximaux dans un espace hilbertien, C.R. Acad. Sci. Paris Sér. A Math., 255 (1962), pp. 2897–2899.
  * [16] J.-J. Moreau, Proximité et dualité dans un espace hilbertien, Bull. Soc. Math. France, 93 (1965), pp. 273–299.
* [17] Y. Plan and R. Vershynin, One-bit compressed sensing by linear programming, Communications on Pure and Applied Mathematics, accepted, (2012).
* [18] L. Shen and B. W. Suter, Bounds for eigenvalues of arrowhead matrices and their applications to hub matrices and wireless communications, EURASIP Journal on Advances in Signal Processing, Article ID 379402, 12 pages, (2009).
* [19] M. Yan, Y. Yang, and S. Osher, Robust 1-bit compressive sensing using adaptive outlier pursuit, IEEE Transactions on Signal Processing, 60 (2012), pp. 3868 – 3875.
* [20] X. Zhang, M. Burger, and S. Osher, A unified primal-dual algorithm framework based on Bregman iteration, Journal of Scientific Computing, 46 (2011), pp. 20–46.
* [21] Y.-B. Zhao and D. Li, Reweighted $\ell_{1}$-minimization for sparse solutions to underdetermined linear system, SIAM Journal on Optimization, 22 (2012), pp. 1065–1088.
|
arxiv-papers
| 2013-02-06T16:16:47 |
2024-09-04T02:49:41.410891
|
{
"license": "Public Domain",
"authors": "Lixin Shen and Bruce W. Suter",
"submitter": "Lixin Shen",
"url": "https://arxiv.org/abs/1302.1419"
}
|
1302.1492
|
# Why could ice ages be unpredictable?
Submitted to Climate of the Past on the 4th February 2012.
Michel Crucifix
[email protected]
Earth and Life Institute, Georges Lemaitre Centre for Earth and Climate
Research
Université catholique de Louvain, Louvain-la-Neuve, Belgium
###### Abstract
It is commonly accepted that the variations of Earth’s orbit and obliquity
control the timing of Pleistocene glacial-interglacial cycles. Evidence comes
from power spectrum analysis of palaeoclimate records and from inspection of
the timing of glacial and deglacial transitions. However, we do not know how
tight this control is. Is it, for example, conceivable that random climatic
fluctuations could cause a delay in deglaciation, bad enough to skip a full
precession or obliquity cycle and subsequently modify the sequence of ice
ages?
To address this question, seven previously published conceptual models of ice
ages are analysed by reference to the notion of generalised synchronisation.
Insight is gained by comparing the effects of the astronomical forcing
with idealised forcings composed of only one or two periodic components. In
general, the richness of the astronomical forcing allows for synchronisation
over a wider range of parameters, compared to periodic forcing. Hence, glacial
cycles may conceivably have remained paced by the astronomical forcing
throughout the Pleistocene.
However, all the models examined here also show a range of parameters for
which the structural stability of the ice age dynamics is weak. This means
that small variations in parameters or random fluctuations may cause
significant shifts in the succession of ice ages if the system were
effectively in that parameter range. Whether or not the system has strong
structural stability depends on the amplitude of the effects associated with
the astronomical forcing, which significantly differ across the different
models studied here. The possibility of synchronisation on eccentricity is
also discussed and it is shown that a high Rayleigh number on eccentricity, as
recently found in observations, is no guarantee of reliable synchronisation.
## 1 Introduction
hays76 showed that southern ocean climate benthic records exhibit spectral
peaks around 19, 23-24, 42 and 100 thousand years (thousand years are
henceforth denoted ‘ka’). More or less concomitantly berger77 showed, based on
celestial mechanics, that the power spectrum of climatic precession was
dominated by periods of 19, 22 and 24 ${\mathrm{ka}}$, and that of obliquity
was dominated by a period of 41 ${\mathrm{ka}}$. These authors concluded that
the succession of ice ages is somehow controlled by the astronomical forcing.
The much less cited paper by Birchfield78aa is, however, at least as
important. These authors considered a dynamical ice sheet model, which they
forced by astronomically-induced variations in incoming solar radiation
(insolation). They managed to broadly reproduce the spectral signature found
by hays76. However, subtle changes in the model parameters, well within the
range allowed by physics, disturbed significantly the precise sequence of ice
ages, without altering the power spectrum of ice volume variations.
It is this author’s experience that some patient tuning is generally needed to
reproduce the exact sequence of glacial-interglacial cycles with a conceptual
model. On Figure 1 are shown two examples of ice volume history reproduced
with models previously published in the palaeoclimate modelling literature
(saltzman91sm; tziperman06pacing). In both cases, small changes in model
parameters do, at some stage in the climate history, induce a shift in the
sequence of ice ages. Sometimes this sensitivity is explicitly acknowledged by
the authors (paillard98; Imbrie11aa), but not always, and this may have given
the false impression that these models unambiguously confirm the tight control
of astronomical forcing on ice ages.
Figure 1: Ice volume simulated with two models previously published:
saltzman90sm and tziperman06pacing, forced by normalised insolation at 65∘ N.
The blue lines are obtained with the published parameters; the latter were
slightly changed to obtain the red ones: $p_{0}=0.262$ Sv instead of $0.260$
Sv in tziperman06pacing, and $w=0.6$ instead of $w=0.5$ in saltzman90sm. While
the qualitative aspect of the curves is preserved, the timing of ice ages is
affected by the parameter changes.
Yet, as early as in 1980, imbrie80 posed the right questions. They wondered
whether “nonorbitally forced high-frequency fluctuations may have caused the
system to flip or flop in an unpredictable fashion.” They also noted that “the
regularity of the 100-ka cycle, and particularly its phase coherence with the
100-ka eccentricity cycle, argue for predictability”.
Let us comment on these quotes:
Predictability:
The horizon of predictability of a system—i.e., the fact that one cannot
predict its evolution arbitrarily far in time— emerges as a combination of (1)
our epistemic uncertainty on the system state, structure and its controlling
environmental factors and (2) the stability of the system. All things being
equal a stable system is more predictable than a chaotic one. What imbrie80
were asking is essentially how stable the climate system is with respect to
non-astronomical fluctuations.
Phase coherence with eccentricity:
The spectrum of eccentricity is dominated by a period of 413 ka, followed by
four periods around 100 ka (berger78, Table 3). If the 100-ka eccentricity
cycles have a strong controlling action on the succession of ice ages, then we
expect the system to be quite stable to non-astronomical fluctuations. In
favour of this argument, Lisiecki10aa recently documented a good coherence
between the timing of eccentricity cycles and that of ice ages.
Our purpose here is to understand the dynamical factors which may induce
instability in the succession of ice ages. The approach is dynamics-oriented:
we use tools from mathematics, and focus more on the understanding of the
dynamics than on the identification of physical mechanisms. However, we will
not conclude whether or not glacial-interglacial cycles are indeed predictable.
This requires an additional step of statistical inference, which is left for
another article.
Which model to use? There are many models of ice ages, spanning different
orders of complexity and based on different physical interpretations. We will
therefore work with different models, but only of the class of the simplest
ones. This choice offers us greater flexibility in analysing model dynamics
with computationally intensive techniques, and it also allows us to keep our
hypotheses to a minimum.
Indeed, most of the simplest models of ice ages (saltzman90sm; saltzman91sm;
paillard04eps; tziperman06pacing; Imbrie11aa) share a number of
characteristics:
  1. These are dynamical systems: climate has a memory (in contrast to milankovitch41);
  2. the astronomical forcing is introduced as an additive or quasi-additive forcing term, which involves a combination of precession and obliquity;
  3. there are non-linear terms involved in the internal system dynamics, which episodically induce conditions of instability. The general hypothesis is that high glaciation levels are unstable. The instability conditions may lie implicitly in the system dynamics (saltzman90sm), or be postulated explicitly by means of a threshold criterion (as in paillard98; Imbrie11aa). The threshold may be a function of precession and obliquity (Parrenin12ab), and a dependency on eccentricity was also proposed (Rial04aa).
System instability is an important aspect of Pleistocene theory. It is a
convenient starting point to explain the existence of large climatic
fluctuations such as deglaciations, even when the astronomical forcing is
weak. Termination V, which occurred 400 ka ago, is an often-cited example
(paillard01rge). Instability may also explain the emergence of 100-ka climatic
cycles independently of the effect of eccentricity (see, e.g. Crucifix12aa for
a review).
The present article is structured as follows. The discussion starts with the
van der Pol oscillator forced by astronomical forcing. Like the other models
cited so far, this is a dynamical system that combines the accumulative action
of astronomical forcing with an instability mechanism causing regime changes.
The van der Pol oscillator (vanderpol26) was first introduced as a model of an
electronic circuit and it has been studied for over 80 years. This gives us
the possibility to anchor the present work in a long tradition of dynamical
system theory. Next, the analysis techniques used with the van der Pol
oscillator are applied to 6 other models previously published in the
literature. We will then be able to determine which conclusions seem the most
robust.
## 2 The van der Pol oscillator
### 2.1 Model definition
The van der Pol model can be introduced as a dynamical system of two coupled
ordinary differential equations:
$\left\{\begin{aligned}\frac{\mathrm{d}x}{\mathrm{d}t}&=-\frac{1}{\tau}\left(F(t)+\beta+y\right),\\ \frac{\mathrm{d}y}{\mathrm{d}t}&=\frac{\alpha}{\tau}\left(y-y^{3}/3+x\right),\end{aligned}\right.$ (1)
with: $(x,y)$ the climate state vector, $\tau$ a time constant, $\alpha$ a
time-decoupling factor, $\beta$ a bifurcation parameter and $F(t)$ the
forcing. The parameter $\beta$ does not appear in the original van der Pol
equations. The present variant is sometimes referred to as the biased van der
Pol model.
The autonomous (i.e. $F(t)=0$) model displays self-sustained oscillations as
long as $|\beta|<1$. For later reference the period of the unforced oscillator
is denoted $T_{n}(\tau)$. The variable $x$ then follows a saw-tooth periodic
cycle, the asymmetry of which is controlled by $\beta$.
The larger $\alpha$ is, the more abruptly the variable $y$ varies. Here we
chose $\alpha=30$, so that $y$ may be termed the ‘fast’ variable. In climate terms,
$x$ may be interpreted as a glaciation index, which accumulates slowly the
effects of the astronomical forcing $F(t)$, while $y$, which shifts between
approximately $-1$ and $1$, might be interpreted as some representation of the
ocean dynamics. This is admittedly arguable and other interpretations would be
possible. Keep in mind that the van der Pol model is only used here to
identify phenomena emerging from the combination of limit cycle dynamics and
astronomical forcing; models with better physical justifications are analysed
in section 3.
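To make this concrete, the sketch below integrates the biased van der Pol system (1) numerically; the two-component sinusoidal forcing and the parameter values are illustrative choices within the range explored below, not the exact configuration of any figure.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters; time is measured in ka.
alpha, beta, tau, gamma = 30.0, 0.7, 36.0, 0.6
P1, O1 = 23.716, 41.0                                # precession and obliquity periods (ka)
phiP1, phiO1 = np.deg2rad(32.01), np.deg2rad(251.09)

def forcing(t):
    # Toy two-component forcing standing in for the astronomical forcing F(t).
    return gamma * (np.sin(2*np.pi/P1*t + phiP1) + np.cos(2*np.pi/O1*t + phiO1))

def vdp(t, state):
    x, y = state
    dxdt = -(forcing(t) + beta + y) / tau            # slow glaciation index
    dydt = alpha / tau * (y - y**3/3 + x)            # fast switching variable
    return [dxdt, dydt]

# Integrate from far back in time so that transients have decayed by t = 0.
sol = solve_ivp(vdp, (-2000.0, 0.0), [0.1, 0.1], method="LSODA", max_step=0.5)
print(sol.y[:, -1])                                  # state at present time t = 0
```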
In ice age models the forcing function is generally one or several insolation
curves, computed for specific seasons and latitudes. The rationale behind this
choice is that whichever insolation is used it is, to a very good
approximation, a linear combination of climatic precession and obliquity
(Loutre93aa, see also Appendix A). The choice of one specific insolation curve
may be viewed as a modelling decision about the effective forcing phase of
climatic precession, and the relative amplitudes of the forcings due to
precession and obliquity. In turn, climatic precession and obliquity can be
expressed as a sum of sines and cosines of various amplitudes and frequencies
(berger78), so that $F(t)$ can be modelled as a linear combination of about a
dozen dominant periodic signals, plus a series of smaller-amplitude components.
They are shown on Figure 2. More details are given in section 2.4.
With these hypotheses the van der Pol model may be tuned so that the benthic
curve over the last 800,000 years is reasonably well reproduced (Figure 3).
An abundant literature analyses the response of the van der Pol oscillator to
a periodic forcing (e.g. Mettin93aa; Guckenheimer03aa, and ref. therein). The
response of oscillators to the sum of two periodic forcings has been the focus
of attention because it leads to the emergence of ‘strange non-chaotic
attractors’, to which we will come back (Wiggins87aa; Romeiras87aa;
Kapitaniak90ab; Kapitaniak93aa; Belogortsev92aa; Feudel97aa; Glendinning00aa).
To our knowledge, however, there is no systematic study of the response of an
oscillator to a signal of the form of the astronomical forcing, except for
preliminary work of our group (De-Saedeleer12aa). letreut83, for example,
represented the astronomical forcing as a sum of only two or three periodic
components and it will be shown here that it matters to consider the
astronomical forcing with all its complexity.
Figure 2: Spectral decomposition of precession and obliquity given by
berger78, scaled here such that the strongest components have amplitude 1.
Figure 3: Simulation with the van der Pol oscillator (eqs. 1, 2 and 3–4),
forced by astronomical forcing and compared with the benthic record of
lisiecki05lr04. Parameters are : $\alpha=30$, $\beta=0.75$,
$\gamma_{p}=\gamma_{o}=0.6$, $\tau=36.2\,{\mathrm{ka}}$.
### 2.2 Periodic forcing
Consider a sine-wave forcing ($F(t)=\gamma\sin(2\pi/{P_{1}}t+\phi_{P1})$),
with period $P_{1}=23,716$ years and $\phi_{P1}=32.01^{\circ}$. This is the
first component of the harmonic development of climatic precession (berger78).
If certain conditions are met—they will soon be given—the van der Pol
oscillator may become synchronised on the forcing. Synchronised means, in this
particular context, that the response of the system displays $p$ cycles within
$q$ forcing periods, where $p$ and $q$ are integers. It is said that the
system is in a $p:q$ synchronisation regime (Pikovski01aa, p. 66-67). The
output is periodic, and its period is equal to $q\times P$.
There are several ways to identify the synchronisation in the output of a
dynamical system. One method is to plot the state of the system at a given
time $t$, and then superimpose on that plot the state of the system at every
time $t+nP$, where $n$ is integer. The system is synchronised if only $q$
distinct points appear on the graph, discarding transient effects associated
with initial conditions. These correspond to the stable fixed points of the
iteration bringing the system from $t$ to $t+qP$. In the following we refer to
this kind of plot as a “stroboscopic section” of period $P$ (Figure 4a). If
the system is not synchronised, then there are two options: the stroboscopic
section is a closed curve (the response is quasi-periodic), or a figure with a
strange geometry (the response is aperiodic).
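A minimal sketch of this diagnostic is given below for a single-period forcing; the rounding tolerance used to count distinct points is an illustrative assumption.

```python
import numpy as np
from scipy.integrate import solve_ivp

alpha, beta, tau, gamma = 30.0, 0.7, 36.0, 0.6
P1, phiP1 = 23.716, np.deg2rad(32.01)

def rhs(t, s):
    x, y = s
    F = gamma * np.sin(2*np.pi/P1*t + phiP1)         # periodic forcing on P1 only
    return [-(F + beta + y) / tau, alpha/tau * (y - y**3/3 + x)]

sol = solve_ivp(rhs, (-5000.0, 0.0), [0.1, 0.1], method="LSODA",
                max_step=0.5, dense_output=True)

# Stroboscopic section of period P1: sample the state every forcing period,
# keeping only late times so that transients are discarded.
times = np.arange(-3000.0, 0.0, P1)
section = sol.sol(times).T                           # rows are (x, y) snapshots

# A p:q locking shows up as q distinct points (up to numerical tolerance).
n_distinct = len(np.unique(np.round(section, 2), axis=0))
print("distinct stroboscopic points:", n_distinct)
```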
There is another, equivalent way to identify synchronisation. Suppose that the
system is started from arbitrary initial conditions. Then, plot the system
state at a given time $t$, long enough after the initial conditions. Repeat
the experiment with another set of initial conditions, superimpose the result
on the plot, and so on with a very large number of different initial
conditions. In doing so one constructs the section of the global pullback
attractor at time $t$ (henceforth referred to as the _pullback section_); the
pullback attractor itself is the continuation of this figure over all times
(Figure 4b). Each component of the global pullback attractor (two are
illustrated on Figure 4b) is a local pullback attractor. Rasmussen00aa
reviews all the relevant mathematical formalism.
In the particular case of a periodic forcing, the stroboscopic section and the
pullback section are often identical (Figure 5); this property derives from the
system's invariance with respect to a time translation by $P$. There will
be, however, cases where different initial conditions will create different
stroboscopic plots. For example, two $1:2$ synchronisation regimes co-exist in
the forced van der Pol oscillator, so that there are four distinct local
pullback attractors, while only two points will appear on a stroboscopic plot
started from a single set of initial conditions. Rigorously, the global
pullback attractor at time $t$ is identical to the global attractor of the
iteration $t+nP$, and the global pullback attractor at two times $t$ and
$t^{\prime}$ are homeomorphic.
The number of points on the pullback section may then be estimated for
different combinations of parameters and we can use this as a criterion to
detect synchronisation. This is done on Figure 6 for a range of $\gamma$ and
$\tau$. It turns out that synchronisation regimes are organised in the form of
triangles, known in the dynamical system literature as Arnol’d tongues
(Pikovski01aa, p. 52). A $p:q$ synchronisation regime appears when the ratio
between the natural period ($T_{n}$) and the forcing period is close to $q/p$.
The tolerance, i.e., how distant this ratio can afford to be with respect to
$q/p$, increases with the forcing amplitude and decreases with $p$ and $q$.
Synchronisation is weakest (least reliable) near the edge of the tongues.
Unreliable synchronisation characterises a system that is synchronised, but in
which small fluctuations may cause episodes of desynchronisation. In the
particular case of periodic forcing the episode of desynchronisation is called
a phase slip, as is well explained in Pikovski01aa.
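The counting procedure behind such a diagram can be sketched as follows for one $(\gamma,\tau)$ pair; the ensemble size, the clustering tolerance and the (short) backward horizon are illustrative assumptions, whereas the diagrams in the text use horizons as long as $t_{\mathrm{back}}=-10$ Ma.

```python
import numpy as np
from scipy.integrate import solve_ivp

def count_pullback_states(gamma, tau, t_back=-5000.0, tol=0.05,
                          alpha=30.0, beta=0.7, P1=23.716, phiP1=np.deg2rad(32.01)):
    """Number of distinct states reached at t0 = 0 from a grid of initial conditions."""
    def rhs(t, s):
        x, y = s
        F = gamma * np.sin(2*np.pi/P1*t + phiP1)
        return [-(F + beta + y) / tau, alpha/tau * (y - y**3/3 + x)]

    finals = []
    for x0 in np.linspace(-2.0, 2.0, 5):
        for y0 in np.linspace(-2.0, 2.0, 5):
            sol = solve_ivp(rhs, (t_back, 0.0), [x0, y0], method="LSODA", max_step=0.5)
            finals.append(sol.y[:, -1])

    # Greedy clustering: end states closer than `tol` count as the same solution.
    reps = []
    for s in finals:
        if all(np.linalg.norm(s - r) > tol for r in reps):
            reps.append(s)
    return len(reps)

print(count_pullback_states(gamma=0.6, tau=36.0))
```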
Using the pullback attractor to identify synchronisation is not a very
efficient method in the periodic forcing case. Arc-length continuation methods
are faster and more accurate (e.g., Schilder07aa, and ref. therein). It is
shown however in De-Saedeleer12aa that the pullback section method gives
results that are acceptable enough for our purpose, and it is adopted here
because it provides a more intuitive starting point to characterise
synchronisation with multi-periodic forcings.
(a) | (b)
Figure 4: (a) The stroboscopic section is obtained by superimposing the system
state every forcing period ($P$), here illustrated for a 2:1 synchronisation.
(b) The pullback section at time $t_{0}$ is obtained by superimposing the
system states obtained by initialising the system with the ensemble of all
possible initial conditions far back in time, here at $t_{\mathrm{back}}$. The
particular example shows a global pullback attractor made of two local
pullback attractors (in dashed red and blue), the sections of which are seen
at $t$. Also shown is the convergence of initial conditions towards the
pullback attractors, in grey.
Figure 5: Pullback (at $t_{0}=0$) and stroboscopic sections ($t=0+nP_{1}$)
obtained with the van der Pol oscillator, with parameters $\alpha=30$,
$\beta=0.7$, $\tau=36$ forced by $F(t)=\gamma\sin(2\pi/P_{1}t+\phi_{P1})$,
$P_{1}=23.7\,{\mathrm{ka}}$, $\phi_{P1}=32.01^{\circ}$ and $\gamma=0.6$. The
two plots indicate a case of $4:1$ synchronisation, and they are identical
because the forcing is periodic. Figure 6: Bifurcation diagram obtained by
counting the number of points on the pullback section in the van der Pol
oscillator ($\alpha=30,\quad\beta=0.7$) and
$F(t)=\gamma\sin(2\pi/P_{1}t+\phi_{P1})$. The two x-axes indicate $\tau$ and
the ratio between the natural system period and the forcing, respectively. One
observes the synchronisation regimes corresponding to 1:1, 2:1, 3:1, 4:1 and
5:1, respectively (gray, blue, red, green, yellow) and, intertwined,
higher-order synchronisations including $2:3$, $2:5$, $3:2$, etc. White areas
are weak or no synchronisation. Graph constructed using
$t_{\mathrm{back}}=-10$ Ma (see Figure 11 and text for meaning and
implications).
### 2.3 Synchronisation on two periods
Consider now a forcing function that is the sum of two periodic signals. Two
cases are considered here: the two forcing periods differ by a factor of about
2, and the two forcing periods are close.
#### 2.3.1 $P_{1}=23.716{\mathrm{ka}}$ and $O_{1}=41.000{\mathrm{ka}}$
We adopt
$F(t)=\gamma[\sin(2\pi/P_{1}\,t+\phi_{P1})+\cos(2\pi/O_{1}\,t+\phi_{O1})]$, with
$P_{1}=23.716{\mathrm{ka}}$, $O_{1}=41.000{\mathrm{ka}}$ and
$\phi_{P1}=32.01^{\circ}$ and $\phi_{O1}=251.09^{\circ}$. $P_{1}$ is the first
period in the development of precession, $O_{1}$ is the first period in the
development of obliquity and $\phi_{P1}$,$\phi_{O1}$ the corresponding phases
given by berger78, so that $F(t)$ may already be viewed as a very rough
representation of the astronomical forcing.
Let us begin with $\tau=36\,{\mathrm{ka}}$, which corresponds to a limit cycle
in the van der Pol oscillator of period $T_{n}=98.79\,{\mathrm{ka}}$, and
consider the stroboscopic section on $P_{1}$ (Figure 7, line 1). Forcing
amplitude $\gamma$ is set to $0.6$. Due to the presence of the $O_{1}$
forcing, the four points of the periodic case seen on Figure 5 have mutated
into four local attractors, which appear as closed curves (some are very
flat). Every time $P_{1}$ elapses, the system visits a different local
attractor. They are attractors in the sense that they attract solutions of the
iteration bringing the system from $t$ to $t+4\cdot P_{1}$. In this particular
example, the system is said to be phase- or frequency-locked on $P_{1}$ with
ratio 1:4 (Pikovski01aa, p. 68), because on average, one ice age cycle takes
four precession cycles, even though the response is no longer periodic. In
this example, the curves on the $P_{1}$ stroboscopic section nearly touch each
other. This implies that synchronisation is not reliable since a solution
captured by one of these attractors could easily escape and fall into the
basin of attraction of another local attractor. One can also see that it is
not synchronised on $O_{1}$ since the stroboscopic section of period $O_{1}$
shows one closed curve encompassing all possible phases. It may also be said that
the system is synchronised in the generalised sense (Pikovski01aa, p. 150),
because the pullback section is made of only four points : starting from
arbitrary conditions, the system converges to only a small number of solutions
at any time $t$. It is also stable in the Lyapunov sense, a point that will
not be further discussed here, but see De-Saedeleer12aa.
Consider now $\tau=41\,{\mathrm{ka}}$. The four closed curves on the
P1-stroboscopic section have collided and merged into one attractor with
strange geometry. A similar figure appears on the $O_{1}$-stroboscopic section. The
phenomenon of strange non-chaotic attractor has been described since
Romeiras87aa, its occurrence in the van der Pol oscillator is discussed in
Kapitaniak90ab, and its relevance to climate dynamics was suggested by
Sonechkin01aa. In our specific example, the system is neither frequency-locked
on $P_{1}$ nor on $O_{1}$, but it is synchronised in the generalised sense:
the pullback section has two points. Finally, with $\tau=44\,{\mathrm{ka}}$
there is frequency-locking on $O_{1}$ (regime 3:1) but not on $P_{1}$.
Clearly, the system underwent changes in synchronisation regimes as $\tau$
increased from $36$ to $44\,{\mathrm{ka}}$. Further insight may be had by
considering the $\tau-\gamma$ plot (Figure 8). The frequency locking regime on
$P_{1}$ lies in the relic of the 1:4 tongue visible in the periodic forcing
case (Figure 6). Frequency locking on $O_{1}$ belongs to the 1:3 tongue
associated with $O_{1}$. The strange non-chaotic regime occurs where the
tongues associated with these different forcing components merge.
The word bifurcation has been defined for non-autonomous dynamical systems
(Rasmussen00aa, chap. 2). This is a complex subject and we will admit here the
rather informal notion that there is a bifurcation when a local pullback
attractor appears or ceases to exist (adapted from Def. 2.42, in
Rasmussen00aa). With this definition, there is a bifurcation at least every
time the colour changes on Figure 8 (assuming $t_{\mathrm{back}}$ is far enough in
the past).
Another view on the bifurcation structure may be obtained by plotting the $x$
and $y$ solutions of the system at $t_{0}=0$, initiated from a grid of initial
conditions at $t_{\mathrm{back}}=-5\,{\mathrm{Ma}}$, as a function of $\tau$,
still with $\gamma=0.6$ (Figure 9). This plot outlines a region of weak
structural stability, where the system shows a strong dependence on the
parameter values. It occurs here between $\tau=37$ and $42\,{\mathrm{ka}}$.
Figure 7: Pullback (at $t_{0}=0$) and stroboscopic sections ($t=0+nP_{1}$ and
$t=0+nO_{1}$) obtained with the van der Pol oscillator with parameters
$\alpha=30$, $\beta=0.7$, and forced by
$F(t)=\gamma(\sin(2\pi t/P_{1}+\phi_{P1})+\sin(2\pi t/O_{1}+\phi_{O1}))$,
$P_{1}=23.7\,{\mathrm{ka}}$ and $O_{1}=41.0\,{\mathrm{ka}}$ and $\gamma=0.6$,
and three different values of $\tau$. The presence of dots on the pullback
section indicates generalised synchronisation. Localised closed curves on the
stroboscopic sections indicate
frequency locking on the corresponding period (on $P_{1}$ with
$\tau=36\,{\mathrm{ka}}$ and $O_{1}$ with $\tau=44\,{\mathrm{ka}}$), and
complex geometries indicate the presence of a strange attractor
($\tau=41\,{\mathrm{ka}}$). Figure 8: As Figure 6 but with a 2-period
forcing :
$F(t)=\gamma(\sin(2\pi/P_{1}\,t+\phi_{P1})+\sin(2\pi/O_{1}\,t+\phi_{O1}))$ (see
text for values). Tongues originating from frequency-locking on individual
periods merge and give rise to strange non-chaotic attractors. Orange dots
correspond to the cases shown on Figure 7. Figure 9: Pullback solutions of
the van der Pol oscillator forced by two periods (as on Figures 7 and 8) as a
function of $\tau$. Forcing amplitude is $\gamma=0.6$ and the other parameters
are as on the previous Figures. Vertical lines indicate the cases shown on
Figure 7.
These observations have two important consequences for our understanding of
the phenomena illustrated on Figure 1. To see this it is useful to refer to
general considerations about autonomous dynamical systems. A bifurcation
generally separates two distinct (technically: non-homeomorphic) attractors,
which control the asymptotic dynamics of the system. As the bifurcation is
being approached, the convergence to the attractor is slower, while the
attractor that exists on the other side of the bifurcation may already take
some temporary control on the transient dynamics of the system. This is,
namely, one possible mechanism of excitable systems. The terms ‘remnant’ or
‘ghost attractors’ are sometimes used for these attractors that exist on the
other side of the bifurcation and may take control of the dynamics of the
system over significant time intervals (e.g. Nayfeh04aa, p. 206).
The idea may be generalised to non-autonomous systems. Consider Figure 10. The
upper plot shows the two local pullback attractors of the system obtained with
$\tau=41\,{\mathrm{ka}}$. The middle panel displays one local attractor
obtained with $\tau=40$. The two $\tau=41$ attractors are reproduced with thin
lines for comparison. Observe that this $\tau=40$ attractor is qualitatively
similar to the $\tau=41$ attractors, and most of its time is spent on a path
that is nearly indistinguishable from those obtained with $\tau=41$. However,
on a portion of the time interval displayed it follows a sequence of ice ages
that is distinct from those obtained with $\tau=41$. In fact, there are four
pullback attractors at $\tau=40$, which clarifies the fact that there is at
least one bifurcation between $\tau=40$ and $\tau=41$ ka.
Let us now consider a third scenario. Parameter $\tau=41$, but an additive
stochastic term
($\sigma\frac{\mathrm{d}\omega}{\mathrm{d}t},\sigma^{2}=0.25\mathrm{ka}^{-1}$,
and $\omega$ symbolises a Wiener process) is added to the second system
equation. This is thus a slightly noisy version of the original system. Shown
here is one realisation of this stochastic equation, among the infinity of
solutions that could be obtained with this system. As expected, the system
spends large fractions of time near one or the other of the two pullback
attractors. However, it also spends a significant time on a distinct path.
Speculatively, this distinct path is under the influence of a ‘ghost’ pullback
attractor. As the bifurcation structure is complex and dense, as shown on
Figure 9, we expect a host of ghosts to lie around, ready to take control of
the system over significant fractions of time, and this is what happens in
this particular case.
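A realisation of this kind can be generated with a simple Euler-Maruyama scheme, as sketched below; the step size is an illustrative assumption, and the Wiener increment is added to the second (fast) equation as stated above.

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, beta, tau, gamma = 30.0, 0.7, 41.0, 0.6
sigma = 0.5                                           # sigma**2 = 0.25 ka^-1
P1, O1 = 23.716, 41.0
phiP1, phiO1 = np.deg2rad(32.01), np.deg2rad(251.09)
dt = 0.05                                             # Euler-Maruyama step (ka)

def forcing(t):
    return gamma * (np.sin(2*np.pi/P1*t + phiP1) + np.sin(2*np.pi/O1*t + phiO1))

t, x, y = -2000.0, 0.1, 0.1
path = []
while t < 0.0:
    dW = rng.normal(0.0, np.sqrt(dt))                 # Wiener increment
    x += -dt * (forcing(t) + beta + y) / tau
    y += dt * alpha/tau * (y - y**3/3 + x) + sigma * dW   # noise enters the fast equation
    t += dt
    path.append((t, x, y))
```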
To further support this hypothesis, consider a second experiment. Figure 11
displays the number of distinct solutions counted at time $t_{0}=0$, when the
system is started from 121 distinct initial conditions at a time back
$t_{\mathrm{back}}$, as a function of $t_{\mathrm{back}}$. Surprisingly, one
needs to go back to $-30$ Ma (million years) to identify the true pullback
attractor. Obviously $30$ Ma is a very long time compared to the Pleistocene
and so this solution is in practice no more relevant than the 4 or 8 solutions
that can be identified by starting the system only $1$ or $2$ Ma back in time.
They may be interpreted as ghost (‘almost alive’) pullback
attractors, and following the preceding discussion they are likely to be
visited by a system forced by large enough random external fluctuations.
Figure 10: (top:) Pullback attractors of the forced van der Pol oscillator
($\beta=0.7$, $\alpha=30$, $\gamma=0.6$ with 2-period forcing) as on Figure 8,
for $\tau=41$ ka. They are reproduced on the graphs below (very thin lines),
overlain by (middle:) one pullback attractor with same parameters but
$\tau=40\,\mathrm{ka}$, and (bottom:) one realisation of the stochastic van der
Pol oscillator with $\tau=41\,{\mathrm{ka}}$. Figure 11: Number of distinct
solutions simulated with the van der Pol oscillator ($\beta=0.7$, $\alpha=30$,
$\tau=41$ ka and $\gamma=0.6$ with 2-period forcing as on Figure 8), as a
function of the time $t_{\mathrm{back}}$ at which 121
distinct initial conditions are considered. The actual stable pullback
attracting set(s), in the rigorous mathematical sense, is (are) found for
$t_{\mathrm{back}}\rightarrow-\infty$.
#### 2.3.2 $P_{2}=22.427\,{\mathrm{ka}}$ and $P_{3}=18.976\,{\mathrm{ka}}$
The two periods now being combined are the second and third components of
precession, still according to berger78. These two periods were selected for
two reasons. The first one is that the addition of the two periodic signals
produces an interference beating with period $123\,319\,$yr, not too far away
from the usual 100-ka cycle that characterises Late Pleistocene climatic
cycles. Second, the period of the beating is not close to an integer multiple
of the two original periods (this occurs, accidentally, when using $P_{1}$ and
$P_{3}$). This was important in order to clearly distinguish a synchronisation
on the beating from a higher-order resonance with either forcing component.
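For reference, the beat period quoted above follows directly from the two precession periods: $P_{b}=\left(\frac{1}{P_{3}}-\frac{1}{P_{2}}\right)^{-1}=\left(\frac{1}{18.976\,\mathrm{ka}}-\frac{1}{22.427\,\mathrm{ka}}\right)^{-1}\approx 123.3\,\mathrm{ka}$.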
It is known from astronomical theory that the periodicity of eccentricity is
mechanically related to the beatings of the precession signal (berger78). The
scientific question considered here is whether the correspondence between the
period of ice age cycles and eccentricity is coincidental, or whether a
phenomenon of synchronisation of climate on eccentricity developed.
To address this question we need a marker of synchronisation on the precession
beating. The Rayleigh number has already been used to this end in
palaeoclimate applications (huybers04Pleistocene; Lisiecki10aa). Let $P_{b}$
be the beating period, and $X_{i}$ the system state snapshotted at every
$t=t_{0}+iP_{b}$; the Rayleigh number $R$ is then defined as
$R=\left|\sum_{i}(X_{i}-\bar{X})\right|/\sum_{i}|X_{i}-\bar{X}|$, where the
overbar denotes an average. $R$ is strictly equal to $1$ when the solution is
synchronised with a periodic forcing of period $P_{b}$, assuming no other
source of fluctuations. As a reference, Lisiecki10aa estimated at 0.94 the
Rayleigh number of a stacked benthic $\delta^{18}O$ signal with respect to
eccentricity over the last million years.
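As a concrete reading of this definition, the sketch below computes $R$ for a sequence of state vectors sampled every $P_{b}$, treating the sums as vector sums and $|\cdot|$ as the Euclidean norm; taking $\bar{X}$ to be the mean state over the full trajectory is our interpretive assumption, since the text only specifies ‘an average’.

```python
import numpy as np

def rayleigh_number(snapshots, reference):
    """R = |sum_i (X_i - Xbar)| / sum_i |X_i - Xbar|, with |.| the Euclidean norm.

    `snapshots` are the states X_i sampled every beating period P_b; `reference`
    (Xbar) is taken here as the mean state over the full trajectory (an assumption)."""
    X = np.asarray(snapshots, dtype=float)
    dev = X - np.asarray(reference, dtype=float)
    return np.linalg.norm(dev.sum(axis=0)) / np.sum(np.linalg.norm(dev, axis=1))

# Identical snapshots (perfect locking on P_b) give R = 1;
# snapshots scattered around the reference give R close to 0.
print(rayleigh_number([[1.0, 0.5]] * 20, reference=[0.0, 0.0]))
print(rayleigh_number(np.random.default_rng(1).normal(size=(20, 2)), [0.0, 0.0]))
```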
The bifurcation diagram showing the number of pullback solutions is displayed
on Figure 12. The frequency-locking tongues on $P_{2}$ and $P_{3}$ are easily
identified at low forcing amplitude; as forcing amplitude increases the
tongues merge and produce generalised synchronisation regimes. Regions of
synchronisation on $P_{b}$, identified as $R>0.95$, are hashed. They occur
when the system natural period is close to $P_{b}$, but narrow bands also
appear near $T_{n}=P_{b}/2$. Observe also that these synchronisation regimes
are generally not unique (several pullback attractors co-exist), and
additional sensitivity experiments show that convergence is quite slow. More
specifically, the synchronisation diagram was computed here using
$t_{\mathrm{back}}=-10$ Ma. With shorter backward time horizons, the number of
remaining solutions identified in the beating-synchronisation regime often
exceeds 6 and cannot be seen on the graph, while the Rayleigh number was
still beyond 0.95. Hence, a high Rayleigh number is not necessarily a good
indicator of reliable synchronisation.
Figure 12: As Figure 8 but with :
$F(t)=\gamma(\sin(2\pi/P_{2}t+\phi_{P2})+\sin(2\pi/P_{3}t+\phi_{P3}))$. Hashes
indicate regions of Rayleigh number $>0.95$ on the beating associated with
$P_{2}$ and $P_{3}$, the period of which is called $E_{3}$ in the Berger
(1978) nomenclature (third component of eccentricity).
### 2.4 Full astronomical forcing
The next step is to consider the full astronomical forcing, as the sum of
standardized climatic precession ($\Pi$) and the deviation of obliquity with
respect to its standard value ($O$):
$F(t)=\gamma_{p}\Pi(t)+\gamma_{o}O(t),$ (2)
where
$\Pi(t)=\sum_{i=1}^{N_{p}}a_{i}\sin(\omega_{p_{i}}t+\phi_{p_{i}})/a_{1},$ (3)
$O(t)=\sum_{i=1}^{N_{o}}b_{i}\cos(\omega_{o_{i}}t+\phi_{o_{i}})/b_{1}.$ (4)
The various coefficients are taken from berger78. We take $N_{p}=N_{o}=34$, so
that the signal includes in total 68 harmonic components. With this choice the
BER78 solution (berger78) is almost perfectly reproduced. BER78 is still used
in many palaeoclimate applications. Compared to a state-of-the-art solution
such as La04 (Laskar04), the error on amplitude is between 0 and 25 %, and the
error on phase is generally much less than 20∘.
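A minimal sketch of how such a forcing can be assembled is given below; the coefficient triplets are placeholders standing in for the berger78 tables (only the leading terms quoted in the text carry their actual period and phase, the remaining amplitudes and phases are made up for illustration).

```python
import numpy as np

# Placeholder harmonic components: (amplitude, period in ka, phase in degrees).
# The real values come from the berger78 tables; beyond the leading terms these are illustrative.
precession_terms = [(1.00, 23.716, 32.01), (0.90, 22.427, 110.0), (0.80, 18.976, 250.0)]
obliquity_terms  = [(1.00, 41.000, 251.09), (0.35, 39.610, 100.0)]

def harmonic_sum(terms, t, trig):
    a1 = terms[0][0]                                   # normalise by the leading amplitude
    return sum(a * trig(2*np.pi/P*t + np.deg2rad(phi)) for a, P, phi in terms) / a1

def astro_forcing(t, gamma_p=0.6, gamma_o=0.6):
    """F(t) = gamma_p * Pi(t) + gamma_o * O(t), cf. equations (2)-(4)."""
    return (gamma_p * harmonic_sum(precession_terms, t, np.sin)
            + gamma_o * harmonic_sum(obliquity_terms, t, np.cos))

t = np.linspace(-800.0, 0.0, 4001)                     # last 800 ka
F = astro_forcing(t)
```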
The bifurcation diagram representing the number of pullback solutions as a
function of forcing amplitude and $\tau$ is shown on Figure 13. We have taken
$\gamma=\gamma_{p}=\gamma_{o}$. One recognises the tongues originating from
the individual components merging gradually into a complex pattern. The number
of attractors settles to 1 as the amplitude of the forcing is further
increased. Let us call this the 1-pullback attractor regime. We already know
that synchronisation is generally not reliable in the region characterised by
the complex and dense bifurcation region, where more than one attractor exist.
The remaining problem is to characterise the reliability of synchronisation in
the 1-pullback attractor regime.
The literature says little about systematic approaches to quantify the
reliability of generalised synchronisation with quasi-periodic forcings. To
develop further the ideas introduced above, one can plot pullback solutions at
a certain time $t$ as a function of one or several parameters. This is done on
Figure 14. Here $\gamma$ is kept constant ($=1.0$) and $\tau$ is varied between
25 and 40. There is a brief episode of a 2-solution regime between 31 and 32
ka. Within the 1-solution regime there are a number of abrupt transitions (at
26, 27, 34 and 37 ka).
As we have seen above, the density of bifurcations in the parameter space is
an indicator of the structural stability of the system. Changes in the number
of pullback attractors are clearly bifurcations. Abrupt variations such as
near $\tau=26,\ 27,\ 34$ and $37$ ka could well be bifurcations because one
observes slower convergence near the transitions, with co-existence of two or
more solutions when $t_{\mathrm{back}}$ is only 1 Ma (not shown). It may also
be verified that near these transition zones scenarios similar to those
depicted on Figure 10 may be observed. Most of the transition zones disappear
as $\gamma$ is further increased. Hence, being in the 1-pullback-attractor zone
is not quite enough to guarantee a reliable synchronisation: one needs to be
deep into that zone.
Figure 13: As Figure 8 but with the full astronomical forcing, made of 34
precession and 34 obliquity periods (eq. (2)). The white area in the top-left
is a zone of numerical instability caused by an inappropriate time step. Figure
14: As Figure 9 but with the full astronomical forcing, with parameters as on
Figure 13 and $\gamma=1.0$.
## 3 Other models
We now consider 6 previously published models. Mathematical details are given
in the Appendix and the codes are available on-line at
https://github.com/mcrucifix.
SM90:
This is a model with three ordinary differential equations representing the
dynamics of ice volume, carbon dioxide concentration and deep-ocean
temperature. The astronomical forcing is linearly introduced in the ice volume
equation, under the form of insolation at 65∘ North on the day of summer
solstice. Only the carbon dioxide equation is non-linear, and this non-
linearity induces the existence of a limit-cycle solution—spontaneous
glaciation and deglaciation—in the corresponding autonomous system. The SM90
model is thus a mathematical transcription of the hypothesis according to which
the origin of the 100,000-year cycle is to be found in the biological
components of Earth’s climate.
SM91:
This model is identical to SM90 except for a difference in the carbon cycle
equation.
PP04:
The Paillard-Parrenin model (paillard04eps) is also a 3-differential-equation
system, featuring Northern Hemisphere ice volume, Antarctic ice area and
carbon dioxide concentration. The carbon dioxide equation includes one non-
linear term associated to a switch on/off of the southern ocean ventilation.
Astronomical forcing is injected linearly at three places in the model: in the
ice-volume equation, in the carbon dioxide equation, and in the ocean
ventilation parameterisation. The autonomous version of the model also
features a limit cycle. As in SM90 and SM91 the non-linearity introduced in
the carbon cycle equation plays a key role but the bifurcation structure of
this model differs from SM90 and SM91 (Crucifix12aa).
T06:
The tziperman06pacing model is a mathematical idealisation of more complex
versions previously published by Gildor-Tziperman-2000:sea. T06 features the
concept of sea-ice switch, according to which sea-ice growth in the Northern
Hemisphere inhibits accumulation of snow over the ice sheets, and vice-versa.
Mathematically, T06 is presented as a hybrid model, which is the combination
of a differential equation in which the astronomical forcing is introduced
linearly as a summer insolation forcing term, and a discrete variable, which
may be 0 or 1 to represent the absence or presence of sea-ice in the northern
hemisphere.
I11:
The Imbrie11aa model was introduced by its authors as a “phase-space” model. It is a
2-D model, of which the equations were designed to distinguish an ‘ice
accumulation phase’ and an ‘abrupt deglaciation’ phase, which is triggered
when a threshold defined in the phase space is crossed. I11 was specifically
tuned to reproduce the phase-space characteristics of the benthic oxygen
isotopic dynamics. A particularity of this model is that the phasing and
amplitude of the forcing depend on the level of glaciation.
PP12:
Similar to Imbrie11aa, the Parrenin12ab model distinguishes accumulation and
deglaciation phases. Accumulation is a linear accumulation of insolation,
without restoring force (hence similar to the $x$-equation of the van der Pol
oscillator (1)); deglaciation accumulates insolation forcing but a negative
relaxation towards deglaciation is added. Contrary to Imbrie11aa, the
trigger function, which determines the regime change, is mainly a function of
astronomical parameters. An ice volume term only appears in the function
controlling the shift from ‘accumulation’ to ‘deglaciation’ regime.
The ice volumes (or, equivalently, glaciation index or sea-level) simulated by
each of these models are shown on Figure 15. Shown here are estimates of the
pullback attractors. More specifically, we show the trajectories obtained with
an ensemble of initial conditions at $t_{\mathrm{back}}=-20\,$Ma; in some cases
the curves actually published (in particular in SM90) are not pullback
attractors, but ghost trajectories in the sense illustrated on Figure 11. In
some cases (PP04 and PP12) the parameters had to be slightly adjusted to
reproduce the published version satisfactorily. Details are given in the appendix.
Some of these models include as many as 14 adjustable parameters (e.g. PP04)
and a full dynamical investigation of each of them is beyond the scope of this
study. Rather, we proceeded as follows. Every model obeys a state equation,
which may be written, in general (assuming a numerical implementation), as:
$\left\{\begin{aligned}t_{i+1}&=t_{i}+\delta t,\\ x_{i+1}&=x_{i}+\delta t\, f(x_{i},F(t)),\end{aligned}\right.$
where $t_{i}$ is the discretized time, $x_{i}$ the climate state (a 2-D or 3-D
vector) at $t_{i}$ and $F(t)$ is the astronomical forcing, which is specific
to each model because the different authors made different choices about the
respective weights and phases of precession and obliquity.
In all generality, the equation (or its numerical approximation) may be
rewritten as follows, setting $\tau=1$ and $\gamma=1$:
$\left\{\begin{aligned}t_{i+1}&=t_{i}+\delta t,\\ x_{i+1}&=x_{i}+\tau\,\delta t\, f(x_{i},\gamma F(t)),\end{aligned}\right.$
The parameters $\tau$ and $\gamma$ introduced this way have a similar meaning
as in the van der Pol oscillator, since $\tau$ controls the characteristic
response time of the model, while $\gamma$ controls the forcing amplitude.
Bifurcation diagrams, similar to Figure 13, are then shown on Figure 16.
Remember that $\gamma=\tau=1$ corresponds to the model as originally
published.
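Schematically, this rescaling amounts to wrapping any model's right-hand side as sketched below; `toy_rhs` and the forcing used here are hypothetical placeholders, not the code of any of the six models.

```python
import numpy as np

def stepper(rhs, forcing, tau=1.0, gamma=1.0, dt=0.1):
    """Wrap a published model's right-hand side rhs(x, F) with the generic time-scale
    (tau) and forcing-amplitude (gamma) parameters; tau = gamma = 1 recovers the
    model as originally published."""
    def step(t, x):
        return t + dt, x + tau * dt * np.asarray(rhs(x, gamma * forcing(t)))
    return step

# Hypothetical two-variable toy model standing in for SM90, PP04, etc.
toy_rhs = lambda x, F: [-x[0] + F, x[0] - x[1]]
step = stepper(toy_rhs, forcing=lambda t: np.sin(2*np.pi*t/23.716), tau=1.2, gamma=0.8)

t, x = -2000.0, np.array([0.0, 0.0])
while t < 0.0:
    t, x = step(t, x)
```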
A first group of four models appears (SM90, SM91, T06 and PP04), on which one
recognises a similar tongue-synchronisation structure as in the van der Pol
oscillator. This was expected since these models are also oscillators with
additive astronomical forcing. Depending on parameter choices synchronisation
may be reliable or not. Synchronisation is clearly not reliable in the
standard parameters used for SM90 and SM91. In T06 the standard parameters are
not far away from the complex multi-pullback-attractor regime and this
explains why transitions such as displayed on Figure 1 could be obtained under
small parameter changes (or, equivalently, with some noise, as discussed in
the original tziperman06pacing study). The standard parameter choice of PP04
is further into the stability zone and, indeed, experimenting with this model
shows that instabilities such as displayed on Figure 1 are harder to obtain.
I11, with standard parameters, is also fairly deep in the stable zone. One has
to consider much smaller forcing values than published to recognise the
synchronisation tongue structure that characterises oscillators (Figure 17).
PP12 finally turns out to be the only case not showing a tongue-like
structure. This may be surprising because this model also has limit cycle
dynamics (self-sustained oscillations in absence of astronomical forcing).
Some aspects of its design resemble the van der Pol oscillator. The role of
the variable $y$ in the van der Pol oscillator is here played by the mode,
which may either be $g$ (glaciation) or $d$ (deglaciation). Also similar to
the van der Pol, the direct effect of the astronomical forcing on the ice
volume ($x$ in the van der Pol; $v$ in PP12) is additive. T06 also has similar
characteristics. The distinctive feature of PP12 is that the transition between
deglaciation and glaciation modes is controlled by the astronomical forcing and
not by the system state, as in the other models. To assess the importance of
this element of design we considered a modified version of PP12, where the
$d\rightarrow g$ transition occurs when $v<v_{1}$ (cf. appendix B.6 for model
details). In this case the tongue synchronisation structure is
recovered, with standard parameters marginally in the reliable synchronisation
regime (Figure 17).
Figure 15: Pullback attractors obtained for 6 models over the last 800 ka, forced by astronomical forcing with the parameters of the original publications. Shown is the model component representing ice volume. Units are arbitrary in all models, except in T06 (ice volume in $10^{15}\,\mathrm{m}^{3}$) and PP12 (sea-level equivalent, in $\mathrm{m}$).
Figure 16: As Figure 13, but for the 6 previously published models. Orange dots correspond to standard (published) parameters of these models.
Figure 17: As Figure 16 for (left) the I11 model, but with a zoom on the $x$-axis, and (right) for a slightly modified version of the PP12 model, in which the transition between deglacial and glacial states is also controlled by ice volume, as opposed to the original PP12 model.
## 4 Conclusion
The present article is built around the paradigm of the ‘pacemaker’, that is,
the idea that the timing of ice ages arises from a combination of climate’s internal dynamics
with the variations of incoming solar radiation induced by the variations of
our planet’s orbit and obliquity. This is not the only explanation of ice
ages, but it is certainly one of the most plausible.
In this study we paid attention to the dynamical aspects that may affect the
stability of the ice age sequence and its predictability. First, the
astronomical forcing has a rich harmonic structure. We showed that a system
like the van der Pol oscillator is more likely to be synchronised on the
astronomical forcing as Nature provides it than on a periodic forcing, because
the fraction of the parameter space corresponding to synchronisation is larger
in the former case. A synchronised system is Lyapunov stable, so that at face
value this would imply that the sequence of ice ages is stable. However (this
is the second point), even if the dynamical structure of the Pleistocene
climate was correctly identified, there would be at least two sources of
uncertainty: random fluctuations associated with the chaotic atmosphere and
ocean and other statistically random forcings such as volcanoes; and uncertainty
on system parameters. In theory the two types of uncertainty point to different
mathematical concepts: path-wise stability to random fluctuations in the first
case, and structural stability in the second. In practice, however, lack
of either form of stability will result in similar consequences: quantum skips
of insolation cycles in the succession of ice ages. This was the lesson of
Figure 10.
It was shown here that, compared to periodic forcing, the richness of the
harmonic structure of astronomical forcing favours situations of weak
structural stability. To preserve stability, the richness of the astronomical
forcing has to be compensated for by large enough forcing amplitude.
Out of the seven models tested here, we do not know which one best captures ice-age
dynamics. The overwhelming complexity of the climate system does not
allow us to securely select the most plausible model on the sole basis of our
knowledge of physics, biology and chemistry. Consequently, while we have
understood here how and why the sequence of ice ages could be unstable in
spite of the available evidence (astronomical spectral signature; Rayleigh
number), estimating the stability of the sequence of ice ages and quantifying our
ability to predict ice ages is also a problem of statistical inference:
calibrating and selecting stochastic dynamical systems based on both theory
and observations, which are sparse and characterised by chronological
uncertainties. A conclusive demonstration of our ability to reach this
objective is still awaited.
## Appendix A Insolation
In the following models, the forcing is computed as a sum of precession
($\Pi:=e\sin\varpi/a_{1}$), co-precession ($\tilde{\Pi}:=e\cos\varpi/a_{1}$)
and obliquity ($O:=(\varepsilon-\varepsilon_{0})/b_{1}$) computed according to
the berger78 decomposition (Figure 2, and eqs. (3 – 4)). More precisely, we
use here these quantities scaled ($\bar{\Pi}$, $\bar{\tilde{\Pi}}$ and
$\bar{O}$) such that they have unit variance. All insolation quantities used in
climate models may be approximated as a linear combination of $\bar{\Pi}$,
$\bar{\tilde{\Pi}}$ and $\bar{O}$. For example:
* •
Normalised summer solstice insolation at 65∘N =
$0.8949\bar{\Pi}+0.4346\bar{O}$
* •
Normalised insolation at 60∘ S on the 21st February =
$-0.4942\bar{\Pi}+0.8399\bar{\tilde{\Pi}}+0.2262\bar{O}$
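Assuming the scaled series $\bar{\Pi}$, $\bar{\tilde{\Pi}}$ and $\bar{O}$ are available as arrays on a common time axis, any of the insolation quantities above is obtained as a simple linear combination; a small sketch with the coefficients listed above (function names are illustrative):

```python
import numpy as np

def summer_solstice_65N(pi_bar, o_bar):
    """Normalised summer-solstice insolation at 65 N."""
    return 0.8949 * np.asarray(pi_bar) + 0.4346 * np.asarray(o_bar)

def insolation_60S_21feb(pi_bar, copi_bar, o_bar):
    """Normalised insolation at 60 S on the 21st of February."""
    return (-0.4942 * np.asarray(pi_bar)
            + 0.8399 * np.asarray(copi_bar)
            + 0.2262 * np.asarray(o_bar))
```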
## Appendix B Model definitions
### B.1 SM90 model
$\displaystyle\frac{\mathrm{d}x}{\mathrm{d}t}=-x-y-v\,z-uF(t)$
$\displaystyle\frac{\mathrm{d}y}{\mathrm{d}t}=-pz+r\,y+s\,z^{3}-w\,y\,z-z^{2}\,y$
$\displaystyle\frac{\mathrm{d}z}{\mathrm{d}t}=-q\,(x+z)$
$p=1.0$, $q=2.5$, $r=0.9$, $s=1.0$, $u=0.6$, $v=0.2$ and $w=0.5$, and $F(t)$
is (here) insolation on the day of summer solstice, at 65∘ North, normalised
(results are qualitatively insensitive to the exact choice of insolation).
### B.2 SM91 model
$\displaystyle\frac{\mathrm{d}x}{\mathrm{d}t}=-x-y-v\,z-uF(t)$
$\displaystyle\frac{\mathrm{d}y}{\mathrm{d}t}=-pz+r\,y-s\,y^{2}-y^{3}$
$\displaystyle\frac{\mathrm{d}z}{\mathrm{d}t}=-q(x+z)$
$p=1.0$, $q=2.5$, $r=1.3$, $s=0.6$, $u=0.6$ and $v=0.2$, and $F(t)$ is (here)
insolation on the day of summer solstice, at 65∘ North, normalised (results
are qualitatively insensitive to the exact choice of insolation). One time
unit is 10 ka.
### B.3 PP04 model
The three model variables are $V$ (Ice volume), $A$ (Antarctic Ice Area) and
$C$ (Carbon dioxide concentration):
$\displaystyle\frac{\mathrm{d}V}{\mathrm{d}t}=\frac{1}{\tau_{V}}(-xC-yF_{1}(t)+z-V)$
$\displaystyle\frac{\mathrm{d}A}{\mathrm{d}t}=\frac{1}{\tau_{A}}(V-A)$
$\displaystyle\frac{\mathrm{d}C}{\mathrm{d}t}=\frac{1}{\tau_{C}}(\alpha F_{1}(t)-\beta V+\gamma H+\delta-C),$
$\tau_{V}=15\,{\mathrm{ka}}$, $\tau_{C}=0.5\,{\mathrm{ka}}$,
$\tau_{A}=1.2\,{\mathrm{ka}}$, $x=1.3$, $y=0.4$ (was 0.5 in the original
paper), $z=0.8$, $\alpha=0.15$, $\beta=0.5$, $\gamma=0.5$, $\delta=0.4$,
$a=0.4$, $b=0.7$, $c=0.01$, $d=0.27$; $H=1$ if $aV-bA+d-c\,F_{2}(t)<0$, and
$H=0$ otherwise. $F_{1}(t)$ is the normalised, summer-solstice insolation at
65∘ North, and $F_{2}(t)$ is insolation at 60∘ S on the 21st February (taken
as 330∘ of true solar longitude). Other quantities ($V$, $A$, $C$) have
arbitrary units.
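As a hedged illustration of how the PP04 equations above translate into code (the forcings $F_1$ and $F_2$ are assumed to be supplied as callables; this is a sketch, not the reference implementation):

```python
import numpy as np

# PP04 parameters as listed above
TAU_V, TAU_A, TAU_C = 15.0, 1.2, 0.5            # ka
X, Y, Z = 1.3, 0.4, 0.8
ALPHA, BETA, GAMMA, DELTA = 0.15, 0.5, 0.5, 0.4
A, B, C_, D = 0.4, 0.7, 0.01, 0.27

def pp04_rhs(state, t, F1, F2):
    """Right-hand side of the PP04 model; state = (V, A, C)."""
    V, A_ice, C = state
    H = 1.0 if (A * V - B * A_ice + D - C_ * F2(t)) < 0 else 0.0
    dV = (-X * C - Y * F1(t) + Z - V) / TAU_V
    dA = (V - A_ice) / TAU_A
    dC = (ALPHA * F1(t) - BETA * V + GAMMA * H + DELTA - C) / TAU_C
    return np.array([dV, dA, dC])
```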
### B.4 T06 model
The two model variables are $x$ (ice volume) and $y$ (sea-ice area). $x$ is
expressed in units of $10^{15}\,\mathrm{m}^{3}$.
$\displaystyle\frac{\mathrm{d}x}{\mathrm{d}t}=(p_{0}-K\,x)(1-\alpha_{\mathrm{si}})-(s+s_{m}F(t))$
The equation represents the net ice balance, as accumulation minus ablation,
and $\alpha_{\mathrm{si}}$ is the sea-ice albedo.
$\alpha_{\mathrm{si}}=0.46y$. $y$ switches from 0 to 1 when $x$ exceeds
$45\times 10^{6}\,\mathrm{km}^{3}$, and switches from 1 to 0 when $x$
decreases below $3\times 10^{6}\,\mathrm{km}^{3}$. The parameters given by
tziperman06pacing are: $p_{0}=0.23\,$Sv, $K=0.7/(40\,{\mathrm{ka}})$, $s=0.23\,$Sv
and $s_{m}=0.03\,$Sv, where 1 Sv $=10^{6}\,\mathrm{m^{3}/s}$. $F(t)$ is the normalised, summer-
solstice insolation at 65∘ North.
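The sea-ice switch in T06 is a simple hysteresis rule, which can be coded as a small stateful update alongside the ice-volume equation. A sketch follows, with $x$ in units of $10^{15}\,\mathrm{m}^{3}$ so that the two thresholds become 45 and 3; the `to_volume` conversion from the Sv-based mass balance to volume units per time step is an assumption of this sketch, not a value from the paper.

```python
P0, K, S, SM = 0.23, 0.7 / 40.0, 0.23, 0.03   # Sv, Sv per unit x per ka, Sv, Sv

def t06_step(x, y, t, dt, F, to_volume=1.0):
    """One Euler step of T06: x is ice volume (10^15 m^3), y the sea-ice index (0 or 1)."""
    albedo_si = 0.46 * y
    dxdt = (P0 - K * x) * (1.0 - albedo_si) - (S + SM * F(t))
    x_new = x + dt * to_volume * dxdt
    # hysteresis: switch sea ice on above 45, off below 3 (in 10^15 m^3)
    if y == 0 and x_new > 45.0:
        y = 1
    elif y == 1 and x_new < 3.0:
        y = 0
    return x_new, y
```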
### B.5 I11 model
Define first:
$\displaystyle\phi=\frac{\pi}{180}\cdot\left\\{\begin{array}[]{ll}10-25y&\textrm{where }y<0,\\\ 10&\textrm{elsewhere.}\end{array}\right.$
$\displaystyle\theta=\left\\{\begin{array}[]{ll}0.135+0.07y&\textrm{where }y<0,\\\ 0.135&\textrm{elsewhere.}\end{array}\right.$
$\displaystyle a=0.07+0.015y$
$\displaystyle h_{O}=(0.05-0.005y)\bar{O}$
$\displaystyle h_{\Pi}=a\bar{\Pi}\cdot\sin\phi$
$\displaystyle h_{\tilde{\Pi}}=a\bar{\tilde{\Pi}}\cos\phi$
$\displaystyle F=h_{\Pi}+h_{\tilde{\Pi}}+h_{O}$
$\displaystyle d=-1+3\frac{(F+0.28)}{0.51}$
With these definitions:
$\left\\{\begin{array}[]{ll}\ddot{y}=0.5[0.0625(d-y)-\frac{\dot{y}^{2}}{d-y}]&\quad\textrm{where }\dot{y}>\theta,\\\ \dot{y}=-0.036375+y[0.014416+y(0.001121-0.008264y)]+0.5F&\quad\textrm{elsewhere,}\end{array}\right.$ (9)
where $\dot{y}:=\frac{\mathrm{d}y}{\mathrm{d}t}$ and
$\ddot{y}:=\frac{\mathrm{d}\dot{y}}{\mathrm{d}t}$. Note that the polynomial on
the right-hand-side of the equation for $\dot{y}$ is a continuous fit to the
piece-wise function used in the original Imbrie11aa publication. Time units
are here ${\mathrm{ka}}$.
### B.6 PP12 model
This is a hybrid dynamical system, with ice volume $v$ (expressed in $\mathrm{m}$ of
sea-level equivalent) and a discrete state, which may be $g$ (glaciation) or $d$
(deglaciation).
Define first
$f(x):=\left\\{\begin{array}[]{ll}x+\sqrt{4a^{2}+x^{2}}-2a&\quad\textrm{where }x>0,\\\ x&\quad\textrm{elsewhere.}\end{array}\right.$ (10)
with $a=0.68034$. Then define the following quantities, standardized as
follows:
$\displaystyle\Pi^{\star}=(f(\bar{\Pi})-0.148)/0.808,\qquad\tilde{\Pi}^{\star}=(f(\bar{\tilde{\Pi}})-0.148)/0.808,$
the threshold
$\theta=k_{\Pi}\bar{\Pi}+k_{\tilde{\Pi}}\bar{\tilde{\Pi}}+k_{O}\bar{O}$, and
finally the following rule controlling the transitions between states $g$ and
$d$:
$\left\\{\begin{array}[]{lcl}d\rightarrow g&\quad&\mathrm{if}\quad\theta<v_{1},\\\ g\rightarrow d&\quad&\mathrm{if}\quad\theta+v<v_{0}.\end{array}\right.$
Ice volume $v$, expressed in sea-level equivalent, obeys the following
equation:
$\displaystyle\frac{\mathrm{d}v}{\mathrm{d}t}=-a_{\Pi}\Pi^{\star}-a_{\tilde{\Pi}}\tilde{\Pi}^{\star}-a_{O}\bar{O}+\left\\{\begin{array}[]{ll}a_{d}-v/\tau&\quad\textrm{if state is $d$,}\\\ a_{g}&\quad\textrm{if state is $g$,}\\\ \end{array}\right.$
with the following parameter values: $a_{\Pi}=1.456\,\mathrm{m/{\mathrm{ka}}}$,
$a_{\tilde{\Pi}}=0.387\,\mathrm{m/{\mathrm{ka}}}$,
$a_{O}=1.137\,\mathrm{m/{\mathrm{ka}}}$, $a_{g}=0.978\,\mathrm{m/{\mathrm{ka}}}$, $a_{d}=-0.747\,\mathrm{m/{\mathrm{ka}}}$,
$\tau=0.834\,{\mathrm{ka}}$, $k_{\Pi}=14.635\,$m, $k_{\tilde{\Pi}}=2.281\,$m,
$k_{O}=23.5162\,$m, $v_{0}=122.918\,$m and $v_{1}=3.1031\,$m, assuming that one time
unit $=10\,$ka. This parameter set is the one originally presented by
Parrenin12ab in Climate of the Past Discussions (which differs from the final
version in Climate of the Past), except that $k_{O}$ is $18.5162\,\mathrm{m}$ in
the original paper. This modification was needed to reproduce the exact
sequence of terminations shown by the authors. Subtle details, such as the
numerical scheme or the choice of the astronomical solution, might explain the
difference.
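A compact sketch of the PP12 mode logic described above (threshold $\theta$, switching rule, and the two ice-volume branches); variable names mirror the equations, and, as before, this is an illustration rather than the reference implementation.

```python
# PP12 parameters as listed above (units as in the text)
A_PI, A_COPI, A_O = 1.456, 0.387, 1.137      # m/ka
A_G, A_D, TAU = 0.978, -0.747, 0.834
V0, V1 = 122.918, 3.1031                     # m

def pp12_step(v, state, dt, pi_star, copi_star, o_bar, theta):
    """One Euler step of PP12; state is 'g' (glaciation) or 'd' (deglaciation),
    theta is the insolation threshold defined above."""
    # mode transitions, driven by the astronomical threshold theta
    if state == 'd' and theta < V1:
        state = 'g'
    elif state == 'g' and theta + v < V0:
        state = 'd'
    forcing = -A_PI * pi_star - A_COPI * copi_star - A_O * o_bar
    dvdt = forcing + (A_D - v / TAU if state == 'd' else A_G)
    return v + dt * dvdt, state
```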
All codes and scripts are available from GitHub at
https://github.com/mcrucifix.
## Acknowledgements
Thanks are due to Peter Ditlevsen (Niels Bohr Institute, Copenhagen), Frédéric
Parrenin (Laboratoire de Glaciologie et de Géophysique, Grenoble), Bernard De
Saedeleer, Ilya Ermakov and Guillaume Lenoir (Université catholique de
Louvain) for comments on an earlier version of this manuscript. Thanks also to
the numerous benevolent developers involved in the R, numpy and matplotlib
projects, without which this research would have taken far more time. MC is
a research associate with the Belgian National Fund for Scientific Research. This
research is a contribution to the ITOP project, ERC-StG grant 239604.
|
arxiv-papers
| 2013-02-06T19:49:45 |
2024-09-04T02:49:41.425284
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Michel Crucifix",
"submitter": "Michel Crucifix",
"url": "https://arxiv.org/abs/1302.1492"
}
|
1302.1493
|
# ContentFlow: Mapping Content to Flows in Software Defined Networks
Abhishek Chanda WINLAB, Rutgers University
North Brunswick, NJ, USA
[email protected] Cedric Westphal23 2Innovation Center
Huawei Technology
Santa Clara, CA, USA
[email protected] 3Dept of Computer Engineering
University of California, Santa Cruz
Santa Cruz, CA 95064
[email protected]
###### Abstract
Information-Centric Networks place content as the narrow waist of the network
architecture. This allows routing based upon the content name, and not based
upon the locations of the content consumer and producer. However, the current
Internet architecture does not support content routing at the network layer.
We present ContentFlow, an Information-Centric network architecture which
supports content routing by mapping the content name to an IP flow, and thus
enables the use of OpenFlow switches to achieve content routing over a legacy
IP architecture.
ContentFlow is viewed as an evolutionary step between the current IP
networking architecture, and a full fledged ICN architecture. It supports
content management, content caching and content routing at the network layer,
while using a legacy OpenFlow infrastructure and a modified controller. In
particular, ContentFlow is transparent from the point of view of the client
and the server, and can be inserted in between with no modification at either
end. We have implemented ContentFlow and describe our implementation choices
as well as the overall architecture specification. We evaluate the performance
of ContentFlow in our testbed.
###### Index Terms:
SDN, content management, network abstractions, cache, storage
## I Introduction
Many new Information-Centric Network (ICN) architectures have been proposed to
enable next generation networks to route based upon content names [1, 2, 3,
4]. The purpose is to take advantage of the multiple copies of a piece of
content that are distributed throughout the network, and to dissociate the
content from its network location. However, the forwarding elements in the
network still rely on location (namely IP or MAC addresses) to identify the
flows within the network.
One issue is that content is not isomorphic to flows, especially when it
comes to network policy. Consider as an example two requests from a specific
host to the same server, one for a video stream and one for an image object.
Both these requests could be made on the same port, say port 80 for HTTP, as is
most Internet traffic. From the point of view of a switch, without Deep
Packet Inspection (DPI), they will look identical. However, the requirements
for the network to deliver proper service to these flows are very different,
one with more stringent latency constraints than the other. Flows are not the
proper granularity to differentiate traffic or to provide different sets of
resources to the two pieces of content.
Yet, flows are a convenient handle as they are what switches see. Consider the
recent popularity of Software-Defined Networking (SDN) and OpenFlow [5] in
particular. This framework introduces a logically centralized software
controller to manage the flow tables of the switches within a network: by
using the flow primitive to configure the network, a controller can achieve
significant benefits. A mechanism is needed to bridge the gap between the flow
view as used in SDN and achieving policy and management at the content level,
as envisioned by ICNs.
We present in this paper ContentFlow, a network architecture which leverages
the principles of SDN to achieve the ICN goal of placing content at the center
of the network: namely, with ContentFlow, a centralized controller in a domain
will manage the content, resolve content to location, enable content-based
routing and forwarding policies, manage content caching, and provide the
extensibility of a software controller to create new content-based network
mechanisms. In order to provision an information-centric network architecture
over a legacy IP underlay, a mapping mechanism is necessary to identify flows
based upon the content they carry. In this paper, we propose an extension to
the current SDN model to perform this mapping and integrate, in addition to
the switching elements, some caching elements into the OpenFlow framework. We
also observe that there should be a centralized content management layer
installed in a controller. We propose such a layer that uses HTTP header
information to identify content, route it based on $name$ and map it back to
TCP and IP semantics so that the whole system can operate on an underlying
legacy network without any modification to either clients or servers.
ContentFlow can be inserted transparently and independently by an operator
over its own domain.
To demonstrate the feasibility of our approach, we have implemented a content
management framework on top of a SDN controller and switches which includes a
distributed, transparent caching mechanism. Thus, the overall vision is to
have a transparent caching network that can be placed between a service
provider network and a consumer network, as shown in figure 1.
Figure 1: End to end architecture
Our contribution is thus to propose an architecture to bridge the gap between
content and flows and enable the use of current SDN network elements in order
to provide content-based routing, caching and policy management. We have
implemented the architecture, focusing on HTTP traffic, and have demonstrated
that it can operate transparently between unmodified clients and servers. We
describe our implementation choices and present some evaluation results based
upon the implementation.
The rest of the paper is organized as follows: Section II examines some
related work in this area. Section III will show how to put these together to
present an illustration of the capability enabled by our framework. Namely, we
will construct a programmable content management module in the network
controller in order to set up a transparent content caching network. Section
IV describes the content management layer which forms the core of the whole
architecture. We describe in Section V some details of our implementation and
provide some evaluation results. Section VI presents a walk through of some
sample usage scenarios and explains how the architecture will handle those.
Section VII concludes.
## II Related Work
While information-centric networks and software defined networking have been
explored as separate areas of research in the past, the recent trend in the
adoption of OpenFlow has sparked research on combining the two. The benefits are
obvious: while SDN brings low maintenance and reduced network complexity,
distributed caching brings low latency. Some recent findings in this vein
include [6], which proposes an application-driven routing model on top of
OpenFlow. Another notable work is [7], which proposes a system to dynamically
redirect requests from a client based on content. Note that our approach is
different: we propose a distributed caching system based on
content-based switching, thus extending [7].
Software defined networking (SDN) decouples the control plane and the
forwarding plane of a network. The forwarding plane then exports some simple
APIs to the control plane, which then utilizes these APIs to provide desired
network policy. The decoupling allows for the control plane to be written in
software, and thus be programmable.
Current approaches for SDN, such as OpenFlow [5], focus on controlling
switching elements, and adopt a definition of the forwarding plane which takes
traffic flows as the unit to which a policy is applied.
However, this approach suffers from two limitations: 1) The network might
include other, non-forwarding elements; it is the common view of most future
Internet architectures [1, 8, 3], and of many commercial products [9], to combine a
switching element with additional capabilities, such as storage and processing;
these capabilities need to be advertised to the control plane so that the
programmable policy takes them into account. 2) The network might need to
consider different policies for different objects at a finer granularity.
Currently, OpenFlow ignores the specific content and provides a forwarding
behavior that is per-flow based, where the flow is defined as a combination of
the source and destination MAC and IP addresses and protocol ports, plus some
other fields such as MPLS and VLAN tags. Many architectures attempt to place
content at the focus of the architecture [10, 11, 12, 4], and this is not
supported by the current SDN proposals.
SDN vendors have been considering L4-L7 extensions (by Citrix or Qosmos, say), but those
often take the form of DPI modules which do not integrate routing
decisions based upon the observed content. To operate on the granularity of
content, many new architectures [1, 8, 3, 11, 13] have been proposed, which
make identifying content and forwarding based upon data possible. See [14] for
a short survey on ICNs. However, these architectures typically require
replacing the whole network, while our architectural model is based upon
inserting a content-management layer in the SDN controller.
Other proposals to extend the SDN framework include integration with the
application layer transport optimization [15]. In an earlier paper [16], we
have discussed extending the definition of network elements to include caches.
However, this still requires a mapping layer between content and flow which we
introduce here. [17] proposed to use HTTP as the basis for an ICN, and our
work finds inspiration in this idea, as we focus on HTTP content, in our
implementation in particular. The extensions to HTTP from [17] could be used
in ContentFlow, but would require modifications of the end points.
## III Description of the architecture
### III-A Overview
The major design philosophy here is to implement content-based management
using existing elements as much as possible. We present here the ContentFlow
architecture, which enables allocation of content-dependent flow headers. We
use a combination of the source and destination ports and IPv4 addresses, but note
that we could also use IPv6 addresses and use the lower bits in the IPv6
address to embed content information.
The key design concern is to allow multiple flows from the same
source/destination pair and to provide this handle to the different network
elements. We consider both switches and caches to be network elements. Thus we
need to identify a piece of content, assign it a flow identifier for routing
purpose, yet at the same time, provide a caching element with this flow
identifier and the original content name, so as to demultiplex content in the
cache.
We perform a _separation of content and its metadata_ as early as possible.
Thus, we try to separate out the metadata for the content close to the source
and put it in the control plane. In our implementation, the metadata consists
of the file name parsed from the HTTP GET request and the TCP flow
information. Thus, it is a five-tuple of the form _$\langle file~{}name, destination~{}IP, destination~{}port, source~{}IP, source~{}port\rangle$_. A high-level overview of the architecture is shown in
figure 2.
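A hedged sketch of what this proxy-side separation amounts to in practice: parse the request line of the HTTP GET and combine the requested path with the TCP flow endpoints. The function and field names below are illustrative, not the implementation used in this paper.

```python
def extract_metadata(http_request: bytes, src_ip, src_port, dst_ip, dst_port):
    """Return the five-tuple <file name, dst IP, dst port, src IP, src port>
    from a raw HTTP GET request, or None if the request is not a GET."""
    head = http_request.split(b"\r\n\r\n", 1)[0].decode("latin-1")
    request_line = head.split("\r\n")[0]          # e.g. "GET /pictures/picture.jpg HTTP/1.1"
    method, path, _version = request_line.split(" ", 2)
    if method != "GET":
        return None
    return (path, dst_ip, dst_port, src_ip, src_port)
```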
One key aspect is that content routing in an SDN network requires proxying all
TCP connections at the ingress switch of the network. This is due to the TCP
semantics: a session is first established between a client and the server,
before an HTTP GET is issued to request a specific content. Routing is thus
performed _before_ the content is requested, which would otherwise prohibit content routing.
On the other hand, content routing requires late binding of the content with
the location. Proxying TCP at the ingress switch ensures that no TCP session
is established beyond the ingress switch. Only when the content is requested
can the proper route (and TCP session) be set up.
Figure 2 shows the different architecture elements of ContentFlow: a proxy at
the ingress to terminate the TCP sessions and support late binding; OpenFlow-
enabled switching elements; caching and storage elements which can
transparently cache content; a content controller expanding an OpenFlow
controller to include content features; and the protocols for the controller
to communicate with both the caches and the proxies (the controller-to-switch
protocol is out-of-the-box OpenFlow). The different elements and protocols are
detailed below.
Figure 2: Overview of the architecture Figure 3: Sequence diagram of the
system
The system works as follows. A request for content entering the network for
the first time will be a cache miss. Thus, the request will be redirected to
the original destination server. Once the content has been cached, subsequent
requests will be redirected to the cache if the controller decides so.
1. 1.
When a client (in the client network) tries to connect to a server (in the
provider network), the packet reaches the ingress switch of the ContentFlow
network. This switch does not find a matching flow and sends this packet to
the ContentFlow controller.
2. 2.
The controller installs static rules into the switch directly connected to the
client to forward all HTTP traffic from the client to the proxy (on the
selected port) and vice-versa. The TCP proxy terminates all TCP connections
from the client. At this point, the client sets up a TCP session with the
proxy and believes that it has a TCP session with the server.
3. 3.
The client issues an HTTP request which goes to the proxy.
4. 4.
The proxy parses the request to extract the name of the content and the web
server to which the request was sent.
5. 5.
The proxy queries the controller with the file name (as a URI) asking if the
file is cached somewhere in the network. The query also include the source of
the request and the destination.
6. 6.
The controller then determines whether the content from the web server should
(or should not) be cached and which cache to use. To redirect the content to
the selected cache, it computes a forking point (forking switch) where the
connection from the web server to the proxy should be duplicated towards the
cache as well.
7. 7.
The controller installs a rule in the forking switch and invokes the cache.
The controller notifies the cache of the content name and of the flow
information to map the content stream to the proper content name. The
controller also records the location of the cached content.
8. 8.
Since the content is not already cached, the controller returns “none” to the
proxy. The controller gives the proxy the flow information to use
(destination port number). The proxy directs the connection to the original
web server.
9. 9.
The cache saves the content with the file name obtained from the controller.
Upon receiving the whole content, it ACKs to the controller, which updates its
content location state.
10. 10.
The other copy of the content goes back to the proxy and in the egress switch,
it hits the reverse flow which re-writes its destination IP and port to that
of the requester.
For the cache hit case, steps $1$ to $5$ are the same and the controller
returns a cache IP in step $5$. The proxy connects to the cache which serves
back the content. On its way back, the content hits the reverse flow and the
source information is re-written making the client believe that the content
came from the original server. Figure 3 shows how the system works.
Figure 4: Mapping of Content to Flows in ContentFlow: Two Clients A and B
attempt to fetch 3 pieces of content; A fetches one each from Server C and D,
and B gets one from C. The flow goes from source IP A, port x, to destination
IP C, port 80. This is labelled as AxC80 on the figure. The path of the
requests follow the thin arrow going through the ingress switch, the proxy,
etc., to the server and back. Rules are applied along the path: ($\neq$)
static rule to forward packets on port 80 from the client to the proxy. AxC80
becomes AxPp; ($\star$) Proxy terminates the TCP connection; Starts a TCP
connection on src port u (assigned by the controller) to server C port 80,
flow becomes PuC80; ($\lozenge$) catches return flows at the forking switch
set by the controller, which duplicates packets towards cache K; (+) is the
reciprocal of ($\star$) and forwards packets back to the client; (x) returns
the server address in the source field. This is set by the controller to map
the src port from proxy to switch back to the original server/port
combination.
### III-B Proxy
The proxy is a transparent TCP proxy that is located in the caching network.
In our setup, OpenFlow is used to make the proxy transparent by writing
reactive flows to appropriate switches. The proxy is the primary device that
is responsible for separating out content metadata and putting it on the
control plane, thus it must intercept packets from the client and parse
relevant information. The proxy communicates with the controller through REST
API calls (in the future, this could be integrated in the OpenFlow protocol).
Once the client has set up a connection with the proxy, it issues an HTTP
request which the proxy intercepts. The proxy parses the request to extract
content metadata and queries the controller. Algorithm 1 describes how the
proxy works.
While booting up, the proxy starts listening on a set of ports. It also reads
a configuration file to know the IP address and port of the OpenFlow
controller to which it connects. The proxy then establishes a session with the
controller and gives it a range of usable port numbers to be used as content
handles. Now, when a client tries to connect to a server, the proxy parses the
requests and forwards the requests for content to the controller. The
controller picks the first unused port number and redirects the connection to
that port. From this point, this port number acts as a handle for the content
name. The controller maintains a global mapping of the form $\langle
content\\_name,client\\_ip\\_port,server\\_ip\\_port,handle\rangle$ which can
be queried either using $content\\_name$ or $handle$ to retrieve the other
parameters.
Listen on proxy port;
if _a GET request arrives_ then
  Parse the file name from the request;
  Construct content name by combining the destination URI and the file name;
  Query controller with the content name;
  if _the controller returns an IP address and port_ then
    Redirect all requests to that IP address/port combo;
  else
    Update controller with the file name;
    Pass the request unmodified;
  end if
  Add the port to the list of free ports;
  Update the controller;
else
  Do not proxy;
end if
Algorithm 1 Proxy Algorithm
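The controller-side bookkeeping behind this algorithm is the global mapping $\langle content\_name, client\_ip\_port, server\_ip\_port, handle\rangle$ described above, queryable either by content name or by the assigned port handle. A minimal sketch (class and method names are illustrative, not the FloodLight module's API):

```python
class ContentFlowTable:
    """Bidirectional content <-> handle mapping plus the port pool handed
    over by the proxy at start-up (a sketch of the described behaviour)."""

    def __init__(self, free_handles):
        self.free_handles = list(free_handles)   # usable port range from the proxy
        self.by_name, self.by_handle = {}, {}

    def assign(self, content_name, client_ip_port, server_ip_port):
        handle = self.free_handles.pop(0)        # first unused port number
        entry = (content_name, client_ip_port, server_ip_port, handle)
        self.by_name[content_name] = entry
        self.by_handle[handle] = entry
        return handle

    def release(self, handle):
        entry = self.by_handle.pop(handle, None)
        if entry:
            self.by_name.pop(entry[0], None)
            self.free_handles.append(handle)     # port goes back into the pool
```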
### III-C Cache
In our design, when the cache receives a content stream to store, the cache
sees only the responses from the web server. The client’s (more accurately, the
proxy’s) side of the TCP session is not redirected to the cache. This scenario
differs from generic HTTP caching systems such as Squid, which see both the
request and the response and cache the response. In our design, we want to avoid
the extra round-trip delay of a cache miss, so we implemented a custom cache
that can accept responses and store them against request metadata obtained
from the controller. The cache implements algorithm 2.
We handle streaming video as a special case, as it is delivered over an HTTP 206
response indicating partial content. Often, in this case, the client will
close the connection for each of the chunks. Thus, when the cache sees an HTTP
206 response, it parses the Content-Range header to find out how much data is
currently being transmitted and, when that reaches the file size, it knows it
has received the complete file and can then save it.
Listen on cache port after receiving message from controller with (filename, server IP, dest port) filter;
Start webserver on cache directory;
if _an HTTP response arrives_ then
  if _the response is a 206 partial content_ then
    Extract the Content-Range header to know the current range being sent;
    while _server has data to send_ do
      Append to a file
    end while
    if _the controller has sent a file name matching the flow filter_ then
      Save the response with the file name;
    else
      Discard the response;
    end if
  else
    Lookup the source IP of the response;
    if _the controller has sent a file name matching the flow filter_ then
      Save the response with the file name;
    else
      Discard the response;
    end if
  end if
else
  Serve back the file using the webserver;
end if
Algorithm 2 Cache Algorithm
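The Content-Range bookkeeping in the 206 branch can be illustrated with a small tracker that accumulates the byte ranges seen so far and reports when the full file size has been received; this is a sketch with simplified header parsing, not the cache's actual code.

```python
import re

_RANGE = re.compile(r"bytes (\d+)-(\d+)/(\d+)")

class PartialContentTracker:
    """Track HTTP 206 chunks for one content item and report completion."""

    def __init__(self):
        self.received = 0
        self.total = None

    def add_206(self, content_range_value: str) -> bool:
        """content_range_value is e.g. 'bytes 0-1023/146515';
        returns True once the whole file has been received."""
        m = _RANGE.match(content_range_value)
        if not m:
            return False
        first, last, total = map(int, m.groups())
        self.received += last - first + 1
        self.total = total
        return self.received >= self.total
```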
### III-D Controller
The controller can run on any device that can communicate with the switches
and, in our case, the caches (in most cases, it is placed in the same subnet
as the switches). It maintains two dictionaries that can be looked up
in constant time. The $cacheDictionary$ maps file names to the cache IP where the
file is stored; this acts as a global dictionary for all content in a network.
The $requestDictionary$ maps destination server IPs and ports to file names; this is
necessary to set up flow rules and forward content metadata to the cache when
it saves a piece of content. The controller algorithm is described in Algorithm 3.
The next section describes the content management layer that runs in the
controller.
Install static flows for proxy forwarding in the switch to which the client is connected;
$cacheDictionary\leftarrow\\{\\}$
$requestDictionary\leftarrow\\{\\}$
if _proxy queries with a file name_ then
  Lookup cache IP from $cacheDictionary$ and send back cache IP
else if _proxy sends content meta data_ then
  Insert file name and destination IP to $requestDictionary$
  Select cache based upon caching policy and availability
  Compute the forking point for a flow from destination IP to proxy and destination IP to cache
  Push flows to all switches as necessary
  Invoke the cache and send it the file name from $requestDictionary$
  Insert the file name and cache IP in $cacheDictionary$
end if
Algorithm 3 Controller Algorithm
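For clarity, a minimal sketch of this controller logic; cache selection, forking-point computation and flow installation are elided, and the class and method names are illustrative rather than taken from the actual FloodLight module.

```python
class ContentController:
    """Sketch of Algorithm 3: the two dictionaries plus the two proxy-facing events."""

    def __init__(self, select_cache):
        self.cacheDictionary = {}     # content name -> cache IP
        self.requestDictionary = {}   # (server IP, server port) -> content name
        self.select_cache = select_cache

    def on_proxy_query(self, content_name):
        # cache hit: return the cache IP; cache miss: None ("none" in the text)
        return self.cacheDictionary.get(content_name)

    def on_proxy_metadata(self, content_name, server_ip, server_port):
        self.requestDictionary[(server_ip, server_port)] = content_name
        cache_ip = self.select_cache(content_name)
        # compute forking switch and push flows (elided)
        # invoke the cache with the (filename, server IP, dest port) filter (elided)
        self.cacheDictionary[content_name] = cache_ip
        return cache_ip
```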
## IV Content management layer and ContentFlow controller
As mentioned earlier, we propose an augmentation to a standard OpenFlow
controller layer to include content management functionality. This layer
handles the following functionality:
* •
Content identification: we propose content identification using HTTP
semantics. That is, if a client in a network sends out an HTTP GET
request to another device and receives an HTTP response, we conclude that
the initial request was a content request which was satisfied by the content
carried over HTTP (the response might, however, be an error; in that case we
ignore both the request and the response). Further, we propose that this
functionality should be handled in the proxy, since it is directly responsible
for connection management close to the client. The content management layer
gathers content information from the proxy, which parses the HTTP header to
identify content.
* •
Content naming: as with a number of content-centric network proposals, we
propose that content should be named using its location. Thus, if an image of
the name $picture.jpg$ resides in a server whose name is $www.server.com$, in a
directory called $pictures$, the full name for the specific content will be
$www.server.com/pictures/picture.jpg$. Such a naming scheme has several
advantages: 1) it is unique by definition, since the server file system will not
allow multiple objects of the same name to exist in a directory; therefore,
this naming scheme allows us to identify content uniquely. 2) The scheme
is easy to handle and parse in software since it is well structured. 3) The scheme
is native to HTTP, and thus allows us to handle HTTP content
seamlessly. As mentioned before, in our implementation, the content name is
derived by parsing the HTTP request header.
* •
Manage content name to TCP/IP mapping: to enable the content management
mechanism to work on a base TCP/IP system, we need to map content semantics to
TCP/IP semantics (end-point information such as port and IP) and back. The
content management layer handles this by assigning a port number to a specific
content; the data structure can be looked up using the port number and server
address to identify the content. The OpenFlow flows can be written to the relevant
switches based on that information. The whole process is described in figure
4. Note that the number of (server address, port) combinations in the proxy is
finite and might be depleted if there are too many requests. Thus, freed-up
port numbers should be reused and, also, the network administrator should use a
sufficient number of proxies so that the probability of collision is
sufficiently small.
* •
Manage Content Caching Policy: the $cacheDictionary$ can be expanded to
achieve the desired caching policy, taking all the distributed caches
available into account. For instance, the controller can append to each
content item some popularity information gathered from the number of requests, and
thus decide which content to cache where based upon users’ request history.
It is worth noting here that, since the number of ports in a proxy is finite,
there is a non-zero probability that a large number of requests to a specific
server might deplete the pool of possible (port, server) combinations.
* •
Ensuring availability: since there is a limit on the number of potential
requests to a given server, we must make sure there are enough port numbers to
accommodate all possible content in our network. Since contents map to a
port/proxy/server combination, the number of concurrent connections to a
server is limited by the number of available source ports between a given
proxy/server pair. Further, the number of proxies can be increased by adding
virtual proxies with different addresses, thus increasing the flow space.
IPv6 could be used there as well to increase the flow-space size and
demultiplex different content over different flows. The use of a flow filter
for a restricted period of time can be treated as a queueing-theoretic
problem, as was proposed in the case of network address translation for
IPv4-IPv6 migration in [18]. The analysis therein translates directly into
dimensioning the number of concurrent content requests that a proxy/server
pair can perform.
* •
Ensuring consistency: the consistency constraint implies that all clients should
receive their requested content only, and not each other’s. In our system, this
translates to the constraint that once a port number is designated for a
content item, it should not be re-used while that content is live in the network.
This is ensured by the controller. The controller maintains a pool of port
numbers from which it assigns handles to content. Once a port number is in use, it is
removed from the pool. When the client closes the connection to the proxy, the
proxy sends a message to the controller with the freed port number, which the
controller puts back in the pool.
Thus, while the proposed system ensures availability and consistency, it is not
fully partition tolerant, since the whole architecture is sensitive to the
latency between the OpenFlow controller and the network elements. This
limitation, however, comes from OpenFlow and not from our architecture, which leverages
OpenFlow.
## V Implementation and Evaluation
Figure 5: Variation of access delay (as seen on the client) with file size of
content
To evaluate the merits of ContentFlow, we implemented a basic use case of in-
network storage virtualization. This is a simple application of the content-
flow mapping. We hope other applications can take advantage of the framework.
We implemented our proposed architecture on a small testbed. The testbed has a
blade server (which we will call _H_) running Ubuntu. The server runs an
instance of Open vSwitch which enables it to act as an OpenFlow switch. _H_
runs three VMs hosting the cache, the proxy and the client, and it also runs
the FloodLight controller. This setup is placed in Santa Clara, CA.
The major components of the system are described in the following subsections.
### V-A Content management layer
We used FloodLight as the OpenFlow controller. FloodLight allows loading
custom modules on the controller platform which can then write flows to all
connected switches. We implemented the content management layer as a module to
do content based forwarding on top of FloodLight. It subscribes to _PACKET_IN_
events and maintains two data structures for lookup. The _requestDictionary_
is used to map $\langle client,server\rangle$ pairs to request file names.
This data structure can be queried using REST API to retrieve the file name
corresponding to a request. The _cacheDictionary_ holds mapping of content and
its location as the IP and port number of a cache.
### V-B Proxy
The proxy is written in pure Python and uses the _tproxy_ library. The library
provides methods to work with HTTP headers; note that there is no way to
access any TCP or IP information in the proxy. The proxy uses the controller’s
REST API to communicate with it. It should be instantiated with the command
sudo tproxy <script.py> -b
0.0.0.0:<port number>
According to our implementation, the box running the proxy runs multiple
instances of it on different ports. Each of those instances will proxy one
$\langle client,server\rangle$ pair.
### V-C Cache
Our implementation of a cache is distinct from existing Internet caches in a
number of ways: it can interface with an OpenFlow controller (running the
content management module); on the flip side, it does not implement the usual
caching protocols, simply because it does not need to. Standard Internet caches
see the request and, if there is a miss, forward it to the destination server.
When the server sends back a response, they save a copy and index it by the
request metadata. Thus, they can set up a TCP connection with the server and
use the socket interface to communicate. In our case, the cache sees only the
response and not the request. Since it only ever hears one side of the
connection, it cannot have a TCP session with the server and so cannot
operate with a socket-level abstraction. Thus, the cache must listen to a
network interface and read packets from it. The cache has a number of distinct
components as described below:
* •
The Redis queue There is a Redis server running in the backend which serves as
a simple queueing mechanism. This is necessary to pass data (IP addresses)
between the grabber module and the watchdog module. The grabber module
(described below) can put IP addresses in the queue which can be read from the
watchdog module.
* •
The grabber module The grabber module is responsible for listening to an
interface and reading (and assembling) packets. It is written in C++ and uses
the _libpcap_ library. The executable takes an interface name as a command-
line argument and starts to listen on that interface. It collects packets with
the same ACK number; when it sees a FIN packet, it extracts the ACK number
and assembles all packets which have that ACK number. In this step, it discards
duplicates. Note that, since there is no TCP connection, the cache won’t know
if some packets are missing. It then extracts the data from all those packets and
writes it back to a file on the disk with a default name. It also puts the
sender’s IP in the Redis queue.
* •
The watchdog module This module communicates with the controller using a set
of REST calls. It is written in Python and uses the _inotify_ library to
listen on the cache directory for file-write events. When the grabber module
writes a file to the disk, the watchdog module is invoked. It calls the
controller API to get the file name (using the IP from the Redis queue as a
parameter); it then strips all HTTP headers from the file, changes its name and
writes it back. After the file is saved, it sends back an ACK to the
controller indicating that the file is cached.
* •
The cache server module This module serves back content when a client
requests. This is written in Python and is an extended version of
_SimpleHTTPServer_.
We placed $12$ files of different sizes, from $2\,$kB to $6\,$MB, on a web server
located in New Brunswick, NJ. These files are then accessed from our client in
Santa Clara, CA, which opens up a regular browser and accesses the files over
HTTP. We turned off the browser cache for our experiments to ensure that the
overall effect is due only to our cache. FireBug is used to measure content
access delay in two cases: a cache miss (when the
content gets stored in the cache) and a cache hit (when the content is
delivered from the cache). Caching content (and its performance benefit) is
well known and we do not claim that our method innovates in this dimension. We
only use this to demonstrate that our architecture works in offering content
APIs to a controller and that we have successfully extended the SDN
architecture to support content-based management and information-centric
network ability.
We can also do a back-of-the-envelope delay analysis given Figure 3. We
compare and contrast three cases; in each case $TCP(A,B)$ represents the delay
for establishing a TCP session between $A$ and $B$ and later tearing it down,
$F(A,B)$ is the delay to download a file from $A$ to $B$, and $Delay(Proxy)$ is
the processing delay at the proxy.
* •
Case 1 is when a client accesses a file directly without a cache or a proxy.
In this case, the total delay is approximately equal to
$TCP(Client,Server)+F(Server,Client)$.
* •
Case 2 is when the client uses a proxy (and an OpenFlow controller) to connect
to the server. In this case, total delay is
$TCP(Client,Proxy)+Delay(Proxy)+TCP(Proxy,Server)+F(Server,Proxy)+F(Proxy,Client)$
* •
Case 3 is our design with a proxy and a cache (cache hit). Here the delay is
$TCP(Client,Proxy)+Delay(Proxy)+TCP(Proxy,Cache)+F(Cache,Proxy)+F(Proxy,Client)$
From the expressions, we see that case 1 has the highest delay, followed by
case 2 and case 3, assuming $Delay(Proxy)$ is negligible and that the proxy
and cache are located in the same network as the client, which makes the file-
transfer delay between them very small. We noticed that, when the content size is
greater than $5\,$kB, all the expressions are dominated by $F(A,B)$ and
$Delay(Proxy)$ can be ignored. However, if the proxy is overloaded, this term will
increase, resulting in cases 2 and 3 having a higher delay than case 1, which is
not desirable. Two possible ways to reduce computation at the proxy are to
install static rules for popular content at the switch, or to place at the proxy a
Bloom filter holding the list of content cached in the network, to avoid the
controller lookup (with a default port number to the cache). The latter method
introduces another level of indirection as a cache on the proxy.
## VI Discussion
As we mentioned before, when a content request enters the network, the network
assigns a name and port number in anticipation. When the actual content enters
the network, it is demultiplexed using the name and port number. Thus, the
cases of multiple servers, clients or content can be converted to simpler
cases of single devices (real or virtual) communicating.
In this section we present some sample usage scenarios of the proposed
architecture.
* •
The simplest scenario is that of one client and one server, where, on the
client, only one process communicates with the server. Given the proposed
architecture, this case is trivial. One unused proxy port will be assigned to
the client-server pair and the content will be named as described before.
* •
The case of multiple clients and a single server is also simple. Each of the
client-server pairs will be assigned a port number which will be used as a demux
handle throughout the network, along with the content name.
* •
A more complex scenario is that of multiple processes on a client, each
talking to the same server. More often than not, they will request multiple
pieces of content and will run on different ports. We argue that our architecture will
handle this scenario seamlessly since it identifies _consumer_ and _producer_
in the ICN sense by a combination of IP address and port number. Thus, it will
see a number of virtual clients, each with the same IP address, trying to
connect to a server. Each of the virtual client-server pairs will be assigned
a port number and their content will be named.
* •
A related scenario is that of multiple files from the server to a single
client. This case can be treated as multiple virtual servers, each
communicating a piece of content to a client.
* •
Finally, the case of multiple clients and multiple servers. It is easy to see
that this case is a combinatorial extension of the previous cases and will be
handled similarly.
## VII Conclusion and future directions
In this paper, we have proposed and evaluated ContentFlow, a generalization of
the SDN philosophy to work at the granularity of content rather than flows.
This hybrid approach enables the end user to leverage the flexibility a
traditional SDN provides coupled with the content management properties an ICN
provides. We have implemented ContentFlow in a small-scale testbed, and
demonstrated that it is able to perform network-layer tasks at the content
level.
Some immediate questions point to directions for future work: how can we scale
ContentFlow to handle large traffic loads? Can we improve the proxy hardware
and can we distribute the content management functionality of the controller
in order to reduce the latency? How does the caching policy impact the
performance as observed by the end user? And can the set of primitives be
expanded further beyond flows and contents, to include for instance some more
complex workloads requiring synchronization of a wide set of network
resources, including storage, compute and network? We plan to handle these
problems as a natural extension to this work.
## References
* [1] “Named data networking.” http://named-data.org/, Aug. 2010.
* [2] “ANR Connect, ”content-oriented networking: a new experience for content transfer”.” http://www.anr-connect.org/, Jan. 2011.
* [3] S. Paul, R. Yates, D. Raychaudhuri, and J. Kurose, “The cache-and-forward network architecture for efficient mobile content delivery services in the future internet,” in Innovations in NGN: Future Network and Services, 2008. K-INGN 2008. First ITU-T Kaleidoscope Academic Conference, pp. 367–374, IEEE, May 2008.
* [4] “PURSUIT: Pursuing a pub/sub internet.” http://www.fp7-pursuit.eu/, Sept. 2010.
* [5] N. McKeown, T. Anderson, H. Balakrishnan, G. Parulkar, L. Peterson, J. Rexford, S. Shenker, and J. Turner, “OpenFlow: enabling innovation in campus networks,” SIGCOMM Comput. Commun. Rev., vol. 38, pp. 69–74, Mar. 2008.
* [6] O. M. M. Othman and K. Okamura, “Design and implementation of application based routing using openflow,” in Proceedings of the 5th International Conference on Future Internet Technologies, CFI ’10, (New York, NY, USA), pp. 60–67, ACM, 2010.
* [7] Y. Sakurauchi, R. McGeer, and H. Takada, “Open web: Seamless proxy interconnection at the switching layer,” in Networking and Computing (ICNC), 2010 First International Conference on, pp. 285 –289, nov. 2010.
* [8] A. Anand, F. Dogar, D. Han, B. Li, H. Lim, M. Machado, W. Wu, A. Akella, D. Andersen, J. Byers, S. Seshan, and P. Steenkiste, “XIA: An architecture for an evolvable and trustworthy internet,” in Carnegie Mellon University TR CMU-CS-11-100, Jan. 2011.
* [9] “Cisco service ready engine.” http://www.cisco.com/en/US/products/ps10598/prod_module_series_home.html, downloaded June 2012.
* [10] V. Jacobson, D. K. Smetters, J. D. Thornton, M. F. Plass, N. H. Briggs, and R. L. Braynard, “Networking named content,” in Proceedings of the 5th international conference on Emerging networking experiments and technologies, CoNEXT ’09, (New York, NY, USA), pp. 1–12, ACM, 2009.
* [11] A. Ghodsi, T. Koponen, B. Raghavan, S. Shenker, A. Singla, and J. Wilcox, “Information-Centric networking: Seeing the forest for the trees,” in the Tenth ACM Workshop on Hot Topics in Networks HotNets 2011, Nov. 2011.
* [12] T. Koponen, M. Chawla, B. G. Chun, A. Ermolinskiy, K. H. Kim, S. Shenker, and I. Stoica, “A data-oriented (and beyond) network architecture,” in Proceedings of the 2007 conference on Applications, technologies, architectures, and protocols for computer communications, SIGCOMM ’07, (New York, NY, USA), pp. 181–192, ACM, 2007.
* [13] “CBMEN: Content-based mobile edge networking.” Solicitation Number: DARPA-BAA-11-51, http://www.darpa.mil/Our_Work/STO/Programs/Content-Based_Mobile_Edge_Networking_(CBMEN).aspx, May 2012.
* [14] B. Ahlgren, C. Dannewitz, C. Imbrenda, D. Kutscher, and B. Ohlman, “A survey of information-centric networking,” Communications Magazine, IEEE, vol. 50, pp. 26–36, July 2012.
* [15] H. Xie, T. Tsou, H. Yin, and D. Lopez, “Use cases for ALTO with software defined networks,” Tech. Rep. draft-xie-alto-sdn-use-cases-00.txt, IETF Secretariat, Fremont, CA, USA, 2012.
* [16] A. Chanda and C. Westphal, “Content as a network primitive.” arXiv:1212.3341, http://arxiv.org/abs/1212.3341, Dec. 2012.
* [17] L. Popa, A. Ghodsi, and I. Stoica, “HTTP as the narrow waist of the future internet,” in Proceedings of the Ninth ACM SIGCOMM HotNets Workshop, (New York, NY, USA), 2010.
* [18] C. Westphal and C. E. Perkins, “A queueing theoretic analysis of source ip nat,” in Communications (ICC), 2010 IEEE International Conference on, pp. 1 –6, may 2010.
|
arxiv-papers
| 2013-02-06T19:50:54 |
2024-09-04T02:49:41.435597
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Abhishek Chanda, Cedric Westphal",
"submitter": "Abhishek Chanda",
"url": "https://arxiv.org/abs/1302.1493"
}
|
1302.1591
|
# Automatically Mining Program Build Information via Signature Matching
Charng-Da Lu Buffalo, NY 14203
###### Abstract
Program build information, such as the compilers and libraries used, is vitally
important in an auditing and benchmarking framework for HPC systems. We have
developed a tool to automatically extract this information using signature-
based detection, a common strategy employed by anti-virus software to search
for known patterns of data within program binaries. We formulate the
patterns from various “features” embedded in the program binaries, and the
experiments show that our tool can successfully identify many different
compilers, libraries, and their versions.
## 1 Introduction
One important component in an auditing and benchmarking framework for HPC
systems is to be able to report the build information of program binaries.
This is because the program performance depends heavily on the compilers,
numerical libraries, and communication libraries. For example, the SPEC CPU
2000 Run and Reporting Rules [2] contain meticulous guidelines on the
reporting of the compiler of choice, compilation flags, allowed and forbidden
compiler tuning, libraries, data type sizes, etc.
However, in most HPC systems, program build information, if maintained at all,
is recorded manually by system administrators. Over time, the sheer number of
software/library packages of different versions, builds, and compilers of
choice can grow exponentially and become too daunting and burdensome to
document. For example, at our local center we have software packages built
from 250 combinations of different compilers and numerical/MPI libraries. On
larger systems such as Jaguar and Kraken at the Oak Ridge National Laboratory,
the number can be as high as 738 [13].
In addition, there is no standard format for documenting program build
information. Many HPC systems use Modules [3] or SoftEnv [4] to manage
software packages, and a common naming scheme is to incorporate the compiler
name (as a suffix) in the package name. There is usually an additional textual
description to indicate build information, such as the compiler version,
debug/optimization/profiling build, and so on. Mining these free-form texts,
however, requires an understanding of each HPC site’s software environment
and documentation style and is not generally applicable.
In this paper, we present a signature-matching approach to automatically
uncover program build information. This approach is akin to the common
strategy employed by anti-virus software to detect malware: search for a set
of known signatures. We exploit the following “features” of program binaries
and create signatures out of them:
* •
Compiler-specific code snippets.
* •
Compiler-specific meta data.
* •
Library code snippets.
* •
Symbol versioning.
* •
Checksums.
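At its core, the scanner works like the anti-virus engines it borrows from: take a database of byte-level signatures (possibly with wildcards) collected from known compilers and libraries, and report every signature found in a binary. A toy sketch of this idea follows, independent of the ClamAV engine actually used; the patterns shown are placeholders for real compiler signatures, except the well-known two-byte cpuid opcode.

```python
import re

# toy signature database: description -> byte pattern (re over raw bytes)
SIGNATURES = {
    "x86 cpuid instruction": re.compile(rb"\x0f\xa2"),
    "example function prologue/epilogue stub":
        re.compile(rb"\x55\x89\xe5.{0,8}\xc9\xc3", re.DOTALL),  # placeholder
}

def scan(path):
    """Return the names of all signatures found anywhere in the file."""
    with open(path, "rb") as fh:
        data = fh.read()
    return [name for name, pat in SIGNATURES.items() if pat.search(data)]
```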
Our approach has several advantages. First, we only need to create, annotate,
and maintain a database of signatures gathered from compilers and libraries,
and we can then run the signature scanner over program binaries to derive
their build information. Second, unlike the anti-virus industry where the
malware code must be identified and extracted by experts, our signature
collection process is almost mechanical and can be performed by non-experts.
Third, our approach does not rely on symbolic information and thus can handle
stripped program binaries.
Our implementation is based on the advanced pattern matching engine of ClamAV
[11], an open-source anti-virus package. We choose ClamAV for its open-source
nature, signature expressiveness and scanning speed.
The remainder of this paper begins by describing the features in the program
binaries. Section 3-4 provide the implementation details and experimental
results. We then discuss potential improvement and related work in § 5-6,
followed by a conclusion in §7.
## 2 Program Binary Characteristics
On most modern UNIX and UNIX-related systems, the executable binaries
(programs and libraries) are stored in a standard object file format called
the Executable and Linking Format (ELF) [5, 6]. An ELF file can be divided
into named ”sections,” each of which serves a specific function at compile
time or runtime. The sections relevant to our work are:
* •
.text section contains the executable machine code and is the main source for
our signature identification.
* •
.comment section contains compiler and linker specific version control
information. More on this in §2.2.
* •
.dynamic section holds dynamic linking information, including file names of
dependent dynamic libraries, and pointers to symbol version tables and
relocation tables.
* •
.rel.text and .rela.text sections consist of relocation tables associated with
the corresponding .text sections. More details in §3.2.
* •
.gnu.version_d section comprises the version definition table. More on this in
§2.4.
There is a wealth of information embedded in these sections, and in the
following we explain these characteristics in detail.
### 2.1 Compiler-Specific Code Snippets
It is not news that certain popular compilers on the Intel x86 platform insert
extra code snippets unbeknownst to the developers [7]. We will illustrate with
three examples.
The first example is the so-called ”processor dispatch” employed by certain
optimizing compilers. As the x86 architecture evolves with the addition of new
capabilities and new instructions such as Streaming SIMD Extensions (SSE) and
Advanced Vector eXtensions (AVX), an optimizing compiler will produce machine
code tuned for each capability. Since the new instructions are not recognized
by older generations of x86 processors, to avoid ”illegal instruction” errors
and to re-route the execution path to the suitable code blocks, an extra code
snippet is inserted to perform this task.
Both Intel and PGI compilers, when invoked with optimization flags enabled
(and -O2 is used implicitly), insert the processor dispatch code which is
executed before the application’s main function. These code snippets
invariably use the cpuid instruction to obtain processor feature flags. For
example, the core processor dispatch routine used by the Intel compiler is
called __intel_cpu_indicator_init. It initializes an internal variable called
__intel_cpu_indicator to different values based on the processor on which the
program is running [7]. This information is later used to either abort program
execution immediately, with an error like ”This program was not built to run
on the processor in your system,” or execute different code blocks (tuned for
different generations of SSE instructions) in Intel’s optimized C library
routines such as memcpy and strcmp.
A second instance of compiler-inserted code is to enable or disable certain
floating-point unit (FPU) features. For example, when GCC is invoked with
-ffast-math or -funsafe-math-optimizations optimization flags, it inserts code
to turn on the Flush-To-Zero (FTZ) mode and the Denormals-Are-Zero (DAZ) mode
in the x86 control register MXCSR. When these modes are on, the FPU bypasses
IEEE 754 standards and treats denormal numbers, i.e. values extremely close to
zero, as zeros. This optimization trades off accuracy for speed [8]. The GNU C
Compiler, GCC, also accepts -mpc{32|64|80} flags, which are used to set the
legacy x87 FPU precision/rounding mode. Again, GCC uses a special prolog code
to configure the FPU to the requested mode.
A third instance of compiler-inserted code is to initialize user’s data. For
example, one of the C++ language features requires that static objects must be
initialized, i.e. their constructors must be called, before program startup
[9]. To implement this, the C++ compiler emits a special ELF section called
.ctors, which is an array of pointers to static objects’ constructors, and
inserts a prolog code snippet which sweeps through the .ctors section before
running the application’s main function.
### 2.2 Compiler-Specific Meta Data
ELF files have an optional section called .comment which consists of a
sequence of null-terminated ASCII strings. This section is not loaded into
memory during execution and its primary use is a placeholder for version
control software such as CVS or SVN to store control keyword information. In
practice, most compilers we examined will also fill this section with strings
which are unique enough to differentiate the compilers and the versions (see
§4.1). The compiler adds string data by using the .ident assembler directive
when generating the assembly code, and then the assembler pools these strings
and saves them into the .comment section. Unlike the debugging and symbolic
information embedded in other ELF sections, the .comment section is not
removed by the GNU strip utility, so we can mine it to obtain the compiler
provenance.
For example, running the GNU readelf tool with the command-line option -p .comment
on a GCC-compiled program could produce output such as:
GCC: (GNU) 4.1.2 20080704 (Red Hat 4.1.2-50)
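The same strings can also be mined programmatically. Below is a minimal sketch, assuming the third-party pyelftools package and a placeholder file name a.out, that prints the NUL-separated strings stored in the .comment section:
from elftools.elf.elffile import ELFFile  # third-party: pyelftools

# Print the NUL-separated strings stored in the .comment section.
with open('a.out', 'rb') as f:                 # 'a.out' is a placeholder path
    elf = ELFFile(f)
    sec = elf.get_section_by_name('.comment')
    if sec is not None:
        for s in sec.data().split(b'\x00'):
            if s:
                print(s.decode('ascii', errors='replace'))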
### 2.3 Library Code Snippets
If a program calls library functions, the linker will bind the functions to
libraries to create the executable. The linking mode is either static or
dynamic. In the former, the linker extracts the code of called functions from
libraries, which are simply archives of ELF files, and performs the relocation
(see §3.2) to merge the user’s code and the library functions code into a
single executable. In the latter, the linker does not use any code from the
libraries, but instead creates proxy/stub code which can locate the entry
point of each called library function at runtime.
Static linking, although it has drawbacks such as code duplication and is no
longer the default mode of linking on most platforms, is still used in cases
where dynamic linking is problematic. For example, unlike C, C++ and Fortran
do not have an agreed-upon API and ABI (application binary interface), so not
only can object files created by different C++/Fortran compilers seldom be
linked together, but object files created by different versions of the same
compiler are not guaranteed to interoperate either [14, 15]. For this reason,
Fortran compilers in particular tend to use static linking. It is also not uncommon
for independent software vendors (ISVs) to ship only statically linked
binaries to avoid portability and library dependency issues.
On some platforms where the operating system is designed to be simple and
efficient, e.g. Cray XT’s Catamount and IBM Blue Gene/L’s Compute Node Kernel
(CNK), dynamic linking is usually not an option and static linking has to be
used [17].
A third case for static linking is the aforementioned compiler-specific code
snippets. They exist as object files or libraries and are almost always
statically linked.
For all of above reasons, library code snippets are the most important source
of signatures in our program build discovery tool.
### 2.4 Symbol Versioning
Some dynamic libraries are self-annotated with version information in a
uniform format, and we use this information to identify both the library and
its version.
As mentioned in §2.3, dynamic linking has the issue of interoperability.
Historically, this was partly solved by having unique file names for the
dynamic libraries. The file names usually incorporate major and minor release
numbers, such as lib<name>.so.<major>.<minor>. The linker will then record the
exact file names in the resulting binaries’ .dynamic section. In 1995 Sun
introduced a new and fine-grained versioning mechanism in Solaris 2.5, which
the GNU/Linux community soon adopted [12]. In this scheme, each function name
and symbol can be associated with a version, and at the library level, a chain
of version compatibility can be specified. The version of the library is then
the highest version in the version chain.
As an example, in the GNU C runtime library (glibc) source tree, one can find
version definition scripts containing the following
libc {                          libc {
  GLIBC_2.0 {                     GLIBC_2.0
    malloc;                       GLIBC_2.1
    free;                         ...
    ...                           GLIBC_2.10
  }                             }
  ...
  GLIBC_2.10 {
    malloc_info;
  }
}
The left-hand side specifies that malloc and free are versioned GLIBC_2.0 and
malloc_info GLIBC_2.10. The right-hand side indicates GLIBC_2.10 is compatible
with GLIBC_2.1, which is compatible with GLIBC_2.0. All of the versioning data
are encoded in the
.gnu.version_d section (d for definition) of dynamic libraries when they are
built. When a user program is compiled and linked, a version-aware linker
obtains versions of called functions from the dynamic libraries and stores
them in the resulting binaries’ .gnu.version_r section (r for reference). At
runtime, the program loader-linker ld.so first examines whether all version
references in the user’s program binary can be satisfied or not, and
determines to either abort or continue.
Symbol versioning is used extensively in the GNU compiler collection (C, C++,
Fortran, and OpenMP runtime libraries), Myrinet MX/DAPL libraries, and
OpenFabrics/InfiniBand Verbs libraries. All of these instances adopt the same
version naming scheme: a unique label, e.g. GLIBC, GLIBCXX, or MX, followed by
an underscore and the version. Hence, our tool can recognize them using a
hard-coded list of labels and obtain their version by traversing the version
chain.
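A rough sketch of this step is shown below; it shells out to readelf -V (print version information) and collects labels of the form LABEL_x.y, keeping the highest version seen per label. The label list and the parsing are illustrative assumptions rather than our exact implementation.
import re
import subprocess

KNOWN_LABELS = ('GLIBC', 'GLIBCXX', 'CXXABI', 'GFORTRAN', 'MX')  # illustrative list

def version_labels(path):
    # Collect symbol-versioning labels such as GLIBC_2.10 from a dynamic library.
    out = subprocess.run(['readelf', '-V', path],
                         capture_output=True, text=True).stdout
    best = {}
    for label, ver in re.findall(r'\b([A-Z]+)_([0-9][0-9.]*)', out):
        if label in KNOWN_LABELS:
            key = tuple(int(p) for p in ver.rstrip('.').split('.'))
            if label not in best or key > best[label][0]:
                best[label] = (key, ver)       # keep the highest version per label
    return {label: ver for label, (key, ver) in best.items()}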
### 2.5 Checksums
Most dynamic libraries are less sophisticated and do not use symbol
versioning. Therefore, to recognize them, we resort to the traditional
approach of checksums. Md5sum is a commonly used open-source utility to
produce and verify the MD5 checksum of a file, but it is file-structure
agnostic and fails to characterize ELF dynamic libraries on platforms (e.g.
Red Hat Enterprise Linux) where the prelinking/prebinding technology [18] is
used. Prelinking is intended to speed up the runtime loading and linking of
dynamic libraries when a program binary is launched. To achieve this, a daemon
process will periodically update the dynamic libraries’ relocation table. The
side effect of prelinking is MD5 checksum mismatch, as part of the file
content has been changed. To defeat this effect, we calculate the MD5 checksum
over the .text section only for ELF files.
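A minimal sketch of this checksum step, again assuming the third-party pyelftools package, is:
import hashlib
from elftools.elf.elffile import ELFFile  # third-party: pyelftools

def text_md5(path):
    # MD5 over the .text section only, so prelinking does not perturb the checksum.
    with open(path, 'rb') as f:
        sec = ELFFile(f).get_section_by_name('.text')
        return hashlib.md5(sec.data()).hexdigest() if sec is not None else None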
## 3 Implementation
Our implementation is based on the pattern matching engine of the open-source
anti-virus package ClamAV [11], with additional code to support symbol
versioning. The implementation comprises two tools: a signature generator and
a signature scanner. The signature generator parses ELF files and outputs
ClamAV-formatted signature files. The signature scanner takes as input the
signature files and the user’s program binary and outputs all possible
matches. In the following, we discuss ClamAV’s signature formats and matching
algorithms and how we leverage ClamAV in our implementation.
### 3.1 ClamAV Design
ClamAV signatures can be classified as one of the following types, in the
order of increasing complexity and power: MD5, basic, regular expression
(regex), logical, and bytecode. Our implementation makes use of the first
three types because they can be generated automatically (see §3.2).
A basic signature is a hexadecimal string. ClamAV’s scanning engine handles
this type of signature with a modified version of the classical Boyer-Moore
string searching algorithm called Wu-Manber. A regex signature is a basic
signature with wildcards. Our implementation uses two kinds of wildcards
extensively: ?? (to match any byte) and {n} (to match any consecutive $n$
bytes). ClamAV’s scanning engine handles regex signatures with the Aho-
Corasick (AC) string searching algorithm, which can match multiple strings
concurrently at the cost of consuming more memory. The AC algorithm starts
with a preprocessing phase: Take a set of wildcard-free strings to create a
finite automaton. The scanning phase is simply a series of state transitions
in this finite automaton. ClamAV utilizes the AC algorithm as follows: Every
regex signature is broken into basic signatures (separated by wildcards), and
a single finite automaton (implemented as a two-level 256-way “trie” data
structure) is created from all of these basic signatures. If all wildcard-free
parts of a regex signature are matched, ClamAV checks whether the order and
the gaps between the parts satisfy the specified wildcards.
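The part-and-gap checking can be pictured with the following simplified sketch (a toy illustration, not ClamAV's code): the wildcard-free parts are located one after another, and a match is reported only if the gaps between them have exactly the length the wildcards demand.
def match_regex_signature(data, parts, gaps):
    # parts: wildcard-free byte strings; gaps[i]: exact number of wildcard
    # bytes required between parts[i] and parts[i+1].
    pos = data.find(parts[0])
    while pos != -1:
        cur, ok = pos + len(parts[0]), True
        for part, gap in zip(parts[1:], gaps):
            cur += gap                               # skip the wildcard bytes
            if data[cur:cur + len(part)] != part:
                ok = False
                break
            cur += len(part)
        if ok:
            return pos                               # offset of the full match
        pos = data.find(parts[0], pos + 1)
    return -1

# The signature "aa bb ?? ?? cc dd" splits into parts [aabb, ccdd] with a 2-byte gap.
print(match_regex_signature(b'\x00\xaa\xbb\x01\x02\xcc\xdd',
                            [b'\xaa\xbb', b'\xcc\xdd'], [2]))   # prints 1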
For completeness we briefly mention the remaining two signature types. We do
not use them because we have not yet found automatic ways to create them. Logical
signatures allow combining of multiple regex signatures using logical and
arithmetic operators. Bytecode signatures further extend logical signatures
and offer the maximal flexibility. Bytecode signatures are actually ClamAV
plug-ins compiled from C programs into LLVM bytecodes, and hence allow
arbitrary algorithmic detections of patterns.
### 3.2 Signature Generator
For dynamic libraries (.so files), the signature generator computes the MD5
checksums over their .text sections and outputs the ClamAV-conformant MD5
signature files.
Compiler-specific code snippets and static library code reside in ELF .o
(object) and .a (library archive) files. In the following discussions we only
focus on .o file handling because an .a file is just an archive of multiple .o
files. Our signature generator extracts .text sections from .o files, and
outputs, for each .text section, a basic or regex signature of 16-255 bytes
length (excluding the wildcards.) We describe this process in depth as
follows.
First, a signature is not just bytes from the .text section verbatim. When a
source file is compiled into an .o file, the addresses of unresolved function
names and symbols in this .o file are unknown and have to be left empty. It is
during the linking phase that these addresses are resolved and assigned by the
linker. This process is called relocation [10]. To facilitate the relocation,
the compiler emits one relocation table for each .text section. Each entry of
a relocation table specifies the symbol name to be resolved, the offset into
the .text section which contains the address to be assigned, and the
relocation type. When we create a signature from the bytes of a .text section,
we have to mask the bytes which are reserved for addresses yet to be computed.
To illustrate, suppose that we compile the following source code into an .o
file:
#include <stdlib.h>
void foo() { char *buf = malloc(10); }
On x86, the disassembly of the generated .o file would be (using the GNU
objdump utility):
000000 <foo>:
   0:   55                      push   %rbp
   1:   48 89 e5                mov    %rsp,%rbp
   4:   48 83 ec 10             sub    $0x10,%rsp
   8:   bf 0a 00 00 00          mov    $0xa,%edi
   d:   e8 00 00 00 00          callq  12 <foo+0x12>
  12:   48 89 45 f8             mov    %rax,-0x8(%rbp)
  16:   c9                      leaveq
  17:   c3                      retq
and the corresponding relocation table is:
OFFSET   TYPE            VALUE
00000e   R_X86_64_PC32   malloc+0xfffffffffffffffc
Together, the above examples illustrate that the target of the callq
instruction should be the address of a function named ”malloc”, and the
address should fill the 4 bytes (as specified by the R_X86_64_PC32 relocation
type) starting at offset 0xe (the four 00 bytes after the e8 opcode). So if foo, as a library
function, is used to create a user program binary, the linker will take the
byte stream 55 48 89 e5 …c9 c3 and fill the bytes at offset 0xe through 0xe+3
with the actual address of malloc. Thus, to identify foo, we create a ClamAV
regex signature as:
55 48 89 e5 48 83 ec 10 bf 0a 00 00
00 e8 ?? ?? ?? ?? 48 89 45 f8 c9 c3
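Producing such a signature is mechanical once the relocation entries are known. The sketch below (an illustration, not our exact generator) takes the section bytes and a list of (offset, size) relocation targets and emits a ClamAV-style hex string with ?? over the bytes the linker will overwrite.
def masked_signature(text_bytes, relocations):
    # relocations: iterable of (offset, size) pairs whose bytes the linker overwrites
    masked = set()
    for off, size in relocations:
        masked.update(range(off, off + size))
    return ' '.join('??' if i in masked else '%02x' % b
                    for i, b in enumerate(text_bytes))

# The foo() example above: 4 address bytes at offset 0xe belong to malloc.
code = bytes.fromhex('554889e54883ec10bf0a000000e8000000004889'
                     '45f8c9c3')
print(masked_signature(code, [(0x0e, 4)]))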
The second consideration is the signature size. As will be seen in §4.2 a
.text section can be as big as four megabytes. Using the entire .text section
could lead to long preprocessing time and large disk/memory storage space.
Therefore, we impose an upper limit on the signature size to be 255 bytes. We
think 255 is a reasonable size, as there are $256^{255}$ possible distinct
255-byte streams, which is large enough to have few collisions/false
positives. For a .text section of $n>256$ bytes, we use the trailing 255/3=85
bytes $x_{1}x_{2}\ldots x_{85}$ of the first third, the trailing 85
bytes $y_{1}y_{2}\ldots y_{85}$ of the middle third, and the trailing 85 bytes
$z_{1}z_{2}\ldots z_{85}$ of the last third, and form a regex signature
as:
$x_{1}x_{2}\ldots x_{85}\;\\{l\\}\;y_{1}y_{2}\ldots
y_{85}\;\\{m\\}\;z_{1}z_{2}\ldots z_{85}$
where $l=\lfloor n/3\rfloor-85$ and $m=l+(n\%3)$. We also ignore .text
sections which are shorter than 16 bytes. This cut-off is chosen because the
size of an x86 instruction varies between 1 and 16 bytes, and since we do not
decode the bytes back to x86 instructions, we do not know the instruction
boundaries and have to make a conservative assumption. Besides, signatures
that are too short could result in many false positives.
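The splitting rule can be sketched as follows, using the notation $x$, $y$, $z$, $l$ and $m$ from the formula above (an illustration, not the exact generator code):
def long_section_signature(text_bytes, chunk=85):
    # Form x {l} y {m} z from the trailing bytes of each third of a large .text section.
    n = len(text_bytes)
    third = n // 3
    x = text_bytes[third - chunk:third]              # trailing 85 bytes of the first third
    y = text_bytes[2 * third - chunk:2 * third]      # trailing 85 bytes of the middle third
    z = text_bytes[n - chunk:n]                      # trailing 85 bytes of the last third
    l = third - chunk
    m = l + (n % 3)
    return '%s {%d} %s {%d} %s' % (x.hex(), l, y.hex(), m, z.hex())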
The third consideration is that an .o file could contain more than one .text
section. This happens in GNU Fortran’s static library, which is created with
the -ffunction-sections compiler flag. This flag instructs the compiler to put
each function in its own .text section instead of all functions from the same
source file in one single .text section. So for a Fortran function, say foo,
the compiler creates a section named .text.foo which consists of foo’s code
only (this optimization reduces the size of statically linked program
binaries because it eliminates dead code, i.e. functions which are unused but
included nevertheless because they are in the same source files as the used
functions). In such a situation, our tool emits one signature for one such
.text section.
### 3.3 Signature Scanner
The signature database is organized as a collection of signature files, each
of which contains signatures from a specific compiler/library, e.g. Intel
Fortran compiler, Intel MKL, MVAPICH, etc. Each signature file is annotated
manually to indicate the package name and version. The scanner takes as input
this database and the user’s program binary and outputs all possible matches.
For dynamic library identification, it uses the ldd command to obtain the
library pathnames. It then extracts their symbol versioning data (if there is
any) and compares against a list of known labels, as explained in §2.4. For
those without symbol versioning, the scanner checks their MD5 checksums
against those in the database.
For compiler and static library identification, the scanner loads the program
binary’s .text and .comment sections (compiler meta-data are treated as basic
signatures) and runs them through the ClamAV matching engine. By default
ClamAV stops as soon as it spots a match, so to find all matches, we modify it
by repeatedly zeroing out the matched area and rerunning the engine, until no
match can be found.
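The find-all-matches loop can be pictured with the following simplified sketch, where scan_once stands in for one invocation of the (modified) ClamAV engine and is assumed to return a single (offset, length, name) match or None:
def find_all_matches(binary, scan_once):
    # Repeatedly zero out the matched area and rescan until no match remains.
    data = bytearray(binary)
    matches = []
    while True:
        hit = scan_once(bytes(data))                 # stand-in for the ClamAV engine
        if hit is None:
            break
        offset, length, name = hit
        matches.append(hit)
        data[offset:offset + length] = b'\x00' * length
    return matches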
## 4 Evaluation
We evaluate our approach with both toy programs and real-world HPC software
packages from two HPC sites. We compile toy programs with a variety of
compilers to test the effectiveness of source compiler identification. We use
the existing HPC software packages to assess not only the compiler and library
recognition but also ClamAV’s scanning performance.
### 4.1 Compiler Identification
We examine fourteen compilers on the x86-64 Linux platform and we summarize
our findings in Table 1. We locate the compiler-specific code snippets by
enabling the verbosity flag in building the toy programs. This flag is
supported by all compilers and it can display exactly where and which .a and
.o files are used in the compilation process. The toy programs we constructed,
e.g. ”Hello, World” and matrix multiplication, are short and use only basic
language features and APIs, so they can highlight the usefulness of our
approach. All test cases are compiled with each compiler’s default settings.
Compiler | Note | Version | Meta Data | Code Snippet Source
---|---|---|---|---
Absoft | F,O | 11.1 | | liba*.a
Clang | C,L | 2.8 | |
Cray | | 7.1, 7.2 | V | libcsup.a, libf*.a, libcray*.a
G95 | F,G | 0.93 | V | libf95.a
GNU | G | 4.1, 4.4, 4.5 | V | crt*.o, libgcc*.a
Intel | | 9.x thru 12.0 | I | libirc*.a, libfcore*.a
Lahey-Fujitsu | F | 8.1 | I | fj*.o, libfj*.a
LLVM-GCC | G,L | 2.8 | V |
NAG | F,$\dagger$ | 5.2 | | libf*.a
Open64 | O,$\ddagger$ | 4.2 | V | libopen64*.a, libf*.a
PathScale | O,$\ddagger$ | 3.2, 4.0 | V | lib*crt.a, libpath*.a
PCC | C | 0.99 | V | crt*.o, libpcc*.a
PGI | | 6.x thru 11.x | V | libpgc.a, libpgf*.a, f90*.o, pgf*.o
Sun Studio | | 12.x | V | crt*.o, libc_supp.a, libf*.a
Table 1: Compiler identification. C: C/C++ compiler only. F: Fortran compiler only. G: uses GNU codebase. I: has unique meta data. L: uses LLVM codebase. O: uses Open64 codebase. V: meta data have both brand string and version number. $\dagger$: is actually a Fortran-to-C converter with GCC as backend. $\ddagger$: inserts FTZ/DAZ-enabling prolog code (see §2.1) but this code is not in any .a/.o files so we manually produce its signature.
Library | Version (Compiler) | Code Snippet Source | Mean and StdDev .text size in KB
---|---|---|---
ACML | 4.4.0 (I,P) | libacml*.a | 11.1, 70.8
Cray LibSci | 10.4.0 (G,I,P) | libsci*.a | 3.4, 4.9
Intel MKL | 8.0, 8.1, 9.1 | libmkl*.a | 4.6, 9.0
| 10.x | libmkl_core.a | 4.2, 16.6
Cray MPI | 3.5.1 (G,I,P) | libmpich*.a | 1.3, 2.6
MPICH | 1.2.7mx (G,I) | libmpich.a | 1.2, 2.7
MVAPICH2 | 1.4, 1.5 (I) | libmpich.a | 2.6, 4.8
Table 2: Library identification. G: GNU. I: Intel. P: PGI.
As an example, the ”Hello, World” program compiled with Intel compiler 12.0
yields the following output from our scanner. It gives the number of matches
and total size of matches against each signature file:
(3 times, 6992 bytes) Intel Compiler Suite 12.0
(2 times, 200 bytes) GCC 4.4.3
We have the following observations. 1. Many compilers strive to be compatible
with the GNU development tools and runtime environment, so they also use GNU’s
code snippets. Therefore, GCC becomes a common denominator and is ubiquitous
in the scanning results. The above output is typical: The Intel compiler
locates the system’s default GCC installation (version 4.4.3 in this case) and
uses its crtbegin.o and crtend.o in the compilation. These two .o files handle
the .ctors section as discussed in §2.1.
2. As opposed to C, the Fortran compiler space is very fragmented, with each
compiler having its own implementation of language intrinsics and extensions.
Hence, we can spot a Fortran compiler by just examining the runtime library
code used. The same is true for OpenMP and UPC.
3. It is possible to recognize different versions of the same compiler. To
demonstrate, we wrote a simple Fortran program which calls the matmul
intrinsic to perform matrix multiplications and compiled it with PGI 11.0. The
result is as follows:
(58 times, 346766 bytes) PGI Fortran Compiler 11.x
(48 times, 56833 bytes) PGI Fortran Compiler 8.x
(45 times, 118288 bytes) PGI Fortran Compiler 10.x
(42 times, 49895 bytes) PGI Fortran Compiler 7.x
(32 times, 82808 bytes) PGI Compiler Suite 11.x
(29 times, 57166 bytes) PGI Compiler Suite 7.x
....
(2 times, 200 bytes) GCC 4.4.3
The matches include both the Fortran runtime library and compiler-specific
code snippets, which are shared by C/C++ and Fortran compilers. The result
also implies that PGI reuses a significant amount of code across each release.
We scrutinized the code snippets which matched both versions 7.x and 11.x and
found their functionality includes memory operations (allocate, copy, zero,
set), I/O setup (open, close), command-line argc/argv handling, etc.
4. Compilers which share codebase are not easily distinguishable. Examples
include Open64 and PathScale, GNU and LLVM-GCC, etc. In these cases, only the
compiler-specific meta data can tell them apart, and Clang is thus far the
only compiler which defies our inference efforts.
### 4.2 Library Identification
We applied the scanner to a subset of HPC applications (Amber [20], Charmm
[21], CPMD [22], GAMESS [23], Lammps [24], NAMD [25], NWChem [26], PWscf [27])
from two HPC sites (a 3456-core Intel-based commodity PC cluster at our center
and a 672-core Cray XT5m at Indiana University). We gathered signatures from
numerical and MPI libraries which we know have been linked statically in the
application builds. The libraries and the size of their constituent .o files
are summarized in Table 2. Numerical libraries tend to have more .o files and
larger code size per .o file; the reason is processor-specialized code paths
and aggressive loop unrolling. For example, ACML
4.4.0-ifort64’s libacml.a has 4.5K .o files, with the largest (4.1 MB code
size) being an AMD-K8-tuned complex matrix multiplication (zgemm) kernel, and
Intel MKL 10.3.1’s libmkl_core.a has 44K .o’s, with the largest (1.4 MB) being
an Intel-Nehalem-optimized batched forward discrete Fourier transform code.
For the test we create a signature database exclusively from the
aforementioned libraries. It has 100K signatures and the predominant signature
type is regex. The 21 HPC application binaries under test have a mean code
size of 13.3 MB and the largest is NWChem 6.0 on Cray (39.4 MB, mainly due to
static linking, as in §2.3). We build the (single-threaded) scanner with Intel
compiler 12.0 and we run the scan on a 2.5 GHz Intel Xeon L5420 ”Harpertown”
node and a 2.8 GHz X5560 ”Nehalem” node. The results show that the scanner can
correctly identify all used libraries. The scanning time $t$ (in seconds) can
be best described by the linear regressions $t=-1.11+7.23x$ (Harpertown) and
$t=-5.44+6.98x$ (Nehalem) where $x$ is the code size in MB, and the peak
memory usage is 195 MB.
## 5 Discussion
Our methodology of identifying the source compiler depends on the
idiosyncrasies of the x86 platform and compilers. We also explored the two
major compilers, GCC and IBM XL, on the PowerPC platform, and did not find
discernible compiler-specific code snippets. IBM XL compilers do inscribe
their brand strings in the .comment section, but in general, content in
.comment section is subject to tampering. For example, the following line in a
C program:
__asm__(".ident \"foo\""); will emit “foo” to the .comment section. This makes
.comment section a less reliable source of compiler provenance from a general
perspective of software forensics.
Another issue is that a compiler inserts its characteristic prolog code only
when it is compiling the source file which contains the main function. So if
different source files are compiled with different compilers, the resulting
program binary could lack the compiler-specific code snippets one would
expect. In addition, in Intel compiler’s case, it does not insert processor-
dispatch code if the optimization is turned off either explicitly (with -O0)
or implicitly (e.g. with -g).
Our approach cannot discover the compilation flags used in the program build
process. Some compilers offer a switch to record the command-line options
inside either .comment or other sections. For example, Intel has -sox, GCC has
-frecord-gcc-switches (recorded in .GCC.command.line section), and
Open64/PathScale and Absoft do it by default. We expect this self-annotation
feature to be more widely embraced by compiler developers, as they move toward
better compatibility with GCC, and used by HPC programmers, as it greatly aids
debugging and performance analysis.
## 6 Related Work
ALTD [13] is an effort to track software and library usage at HPC sites. It
takes a proactive approach by intercepting and recording every invocation of
the linker and the job scheduler. Our work is complementary in that it
performs post-mortem analysis and works on systems without ALTD.
The work by Rosenblum et al [16] is the first attempt to infer the compiler
provenance. They used sophisticated machine learning by modeling and
classifying the code byte stream as a linear chain Conditional Random Field.
As in most supervised learning systems, a lengthy training phase is required.
The resulting system can then infer the source compiler with a probability.
Their approach has several drawbacks, which our method addresses: they focus
solely on executable code and ignore other parts of ELF files; the
preprocessing/training phase, albeit one-time, is slow and complex; the model
parameters cannot be updated incrementally with ease when a new compiler is
added; and it is unclear if their model can discern the nuances among
different versions of the same compiler.
Kim’s approach [19] is closest to ours in spirit, but it misses the key
feature in our implementation: the relocation table. It produces a signature
by copying the first 25 bytes of a library function code verbatim. With such a
short signature and lack of relocation information, his tool has very limited
success in identifying library code snippets.
## 7 Conclusion
Compilers and libraries provenance reporting is crucial in an auditing and
benchmarking framework for HPC systems. In this paper we present a simple and
effective way to mine this information via signature matching. We also
demonstrate that building and updating a signature database is straightforward
and needs no expert knowledge. Finally, our tests show excellent scanning
speed even on very large program binaries.
## Acknowledgments
This work is supported by the National Science Foundation under award number
OCI 1025159. We would like to thank Gregor von Laszewski for providing access
to FutureGrid computing resources.
## References
* [1] T. R. Furlani et al., “Performance metrics and auditing framework using applications kernels for high performance computer systems.” In preparation.
* [2] http://www.spec.org
* [3] http://modules.sf.net
* [4] http://www.teragrid.org/userinfo/softenv/
* [5] M. Wilding and D. Behman, “Self-service Linux: Mastering the art of problem determination.” Prentice Hall, 2005.
* [6] “System V application binary interface - AMD64 architecture processor supplement.” http://www.x86-64.org/documentation/
* [7] A. Fog, Chapter 13 of “Optimizing software in C++: An optimization guide for Windows, Linux and Mac platforms.” http://www.agner.org/optimize/
* [8] I. Dooley and L. Kale, “Quantifying the interference caused by subnormal floating-point values.” The Workshop on Operating System Interference in High Performance Applications (OSIHPA), 2005.
* [9] §8.5 of “Working Draft of Standard for Programming Language C++, Document No. N1905.” http://www.open-std.org
* [10] J. R. Levine, “Linkers and loaders.” Morgan Kaufmann, 1999.
* [11] T. Kojm, http://www.clamav.net
* [12] D. J. Brown and K. Runge, “Library interface versioning in Solaris and Linux.” The 4th Annual Linux Showcase (ALS) & Conference, 2000.
* [13] B. Hadri, M. Fahey, and N. Jones, “Identifying software usage at HPC centers with the automatic library tracking database.” Proceedings of the 2010 TeraGrid Conference.
* [14] N. Sidwell, “A common vendor ABI for C++ – GCC’s why, what and not.” Proceedings of the 2003 ACCU Conference.
* [15] http://gcc.gnu.org/onlinedocs/libstdc++/manual/abi.html
* [16] N. Rosenblum, B. Miller, and X. Zhu, “Extracting compiler provenance from program binaries.” The workshop on Program Analysis for Software Tools and Engineering (PASTE), 2010.
* [17] G. Johansen and B. Mauzy, “Cray XT programming environment’s implementation of dynamic shared libraries.” Cray User Group (CUG) Conference, 2009.
* [18] J. Jelinek, http://people.redhat.com/jakub/prelink.pdf
* [19] J. S. Kim, “Recovering debugging symbols from stripped static compiled binaries.” Hakin9 Magazine, June 2009. http://0xbeefc0de.org/papers/
* [20] D. A. Case et al., “The Amber biomolecular simulation programs.” J. Comp. Chem. v 26, 1668-1688 (2005).
* [21] B. R. Brooks et al., “CHARMM: The biomolecular simulation program.” J. Comp. Chem. v 30, 1545-1615 (2009).
* [22] http://www.cpmd.org
* [23] M. W. Schmidt et al., “General atomic and molecular electronic structure system.” J. Comp. Chem. v 14, 1347-1363 (1993).
* [24] S. J. Plimpton, “Fast parallel algorithms for short-range molecular dynamics.” J. Comp. Phys. v 117, 1-19 (1995).
* [25] J. C. Phillips et al., “Scalable molecular dynamics with NAMD.” J. Comp. Chem. v 26, 1781-1802 (2005).
* [26] M. Valiev et al., “NWChem: a comprehensive and scalable open-source solution for large scale molecular simulations.” Comput. Phys. Commun. v 181, 1477 (2010).
* [27] P. Giannozzi et al., http://www.quantum-espresso.org
|
arxiv-papers
| 2013-02-06T21:45:34 |
2024-09-04T02:49:41.446599
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Charng-Da Lu",
"submitter": "Charng-Da Lu",
"url": "https://arxiv.org/abs/1302.1591"
}
|
1302.1630
|
# Euclidean and Hyperbolic Planes
Anton Petrunin
###### Contents
1. Introduction
2. 1 Preliminaries
3. 2 The Axioms
4. 3 Half-planes
5. 4 Congruent triangles
6. 5 Perpendicular lines
7. 6 Parallel lines and similar triangles
8. 7 Triangle geometry
9. 8 Inscribed angles
10. 9 Inversion
11. 10 Absolute plane
12. 11 Hyperbolic plane
13. 12 Geometry of h-plane
14. 13 Spherical geometry
15. 14 Klein model
16. 15 Complex coordinates
17. 16 Hints
### Introduction
This is an introduction to Euclidean and Hyperbolic plane geometries and their
development from postulate systems.
The lectures are meant to be rigorous, conservative, elementary and
minimalistic. At the same time they include about the maximum that students can
absorb in one semester.
Approximately half of the material used to be covered in high school, but not
any more.
The lectures are oriented to sophomore and senior university students. These
students have already had a calculus course; in particular, they are familiar with
the real numbers and continuity. This makes it possible to cover the material
faster and in a more rigorous way than could be done in high school.
#### Prerequisite
The students have to be familiar with the following topics.
* $\diamond$
Elementary set theory: $\in$, $\cup$, $\cap$, $\backslash$, $\subset$,
$\times$.
* $\diamond$
Real numbers: intervals, inequalities, algebraic identities.
* $\diamond$
Limits, continuous functions and Intermediate value theorem.
* $\diamond$
Standard functions: absolute value, natural logarithm, exponent. Occasionally,
basic trigonometric functions are used, but these parts can be ignored.
* $\diamond$
Chapter 13 uses, in addition, elementary properties of the _scalar product_, also
called the _dot product_.
* $\diamond$
To read Chapter 15, it is better to have some previous experience with complex
numbers.
#### Overview
We use the so-called _metric approach_ introduced by Birkhoff. It means that we
define the Euclidean plane as a _metric space_ which satisfies a list of
properties. This way we minimize the tedious parts which are unavoidable in
the more classical Hilbert approach. At the same time the students have a
chance to learn basic geometry of metric spaces.
Euclidean geometry is discussed in Chapters 1–7. In Chapter 1 we
give all definitions necessary to formulate the axioms; they include metric
spaces, lines, angle measure, continuous maps and congruent triangles. In
Chapter 2, we formulate the axioms and prove immediate corollaries. In
Chapters 3–6 we develop Euclidean geometry to a decent level. In Chapter 7 we
give the most classical theorems of triangle geometry; this chapter is included
mainly as an illustration.
In the chapters 8–9 we discuss geometry of circles on the Euclidean plane.
These two chapters will be used in the construction of the model of hyperbolic
plane.
In the chapters 10–12 we discuss non-Euclidean geometry. In Chapter 10, we
introduce the axioms of absolute geometry. In Chapter 11 we describe the so-called
Poincaré disc model (discovered by Beltrami). This is a construction of the
hyperbolic plane, an example of an absolute plane which is not Euclidean. In
Chapter 12 we discuss some geometry of the hyperbolic plane.
The last few chapters contain additional topics: Spherical geometry, Klein
model and Complex coordinates. The proofs in these chapters are not completely
rigorous.
When teaching the course, I used to spend one week on compass-and-ruler
constructions (see www.math.psu.edu/petrunin/fxd/car.html). This topic works
perfectly as an introduction to the proofs. I extensively used Java applets
created with C.a.R., which are impossible to include in the printed version.
#### Disclaimer
I am not doing history. It is impossible to find the original reference to
most of the theorems discussed here, so I do not even try. (Most of the proofs
discussed in the lectures already appeared in Euclid’s Elements, and the
Elements are not the original source anyway.)
#### Recommended books
* $\diamond$
Kiselev’s textbook [11] — a classical book for school students. It should help
if you have trouble following the lectures.
* $\diamond$
Moise’s book, [8] — should be good for further study.
* $\diamond$
Greenberg’s book [4] — a historical tour through the axiomatic systems of
various geometries.
* $\diamond$
Methodologically, my lecture notes are very close to Sharygin’s textbook [10],
which I recommend to anyone who can read Russian.
### Chapter 1 Preliminaries
#### Metric spaces
1.1. Definition. Let $\mathcal{X}$ be a nonempty set and $d$ be a function
which returns a real number $d(A,B)$ for any pair $A,B\in\mathcal{X}$. Then
$d$ is called _metric_ on $\mathcal{X}$ if for any $A,B,C\in\mathcal{X}$, the
following conditions are satisfied.
1. (a)
Positiveness:
$d(A,B)\geqslant 0.$
2. (b)
$A=B$ if and only if
$d(A,B)=0.$
3. (c)
Symmetry:
$d(A,B)=d(B,A).$
4. (d)
Triangle inequality:
$d(A,C)\leqslant d(A,B)+d(B,C).$
A _metric space_ is a set with a metric on it. More formally, a metric space
is a pair $(\mathcal{X},d)$ where $\mathcal{X}$ is a set and $d$ is a metric
on $\mathcal{X}$.
Elements of $\mathcal{X}$ are called _points_ of the metric space. Given two
points $A,B\in\nobreak\mathcal{X}$ the value $d(A,B)$ is called _distance_
from $A$ to $B$.
#### Examples
* $\diamond$
_Discrete metric._ Let $\mathcal{X}$ be an arbitrary set. For any
$A,B\in\nobreak\mathcal{X}$, set $d(A,B)=\nobreak 0$ if $A=B$ and $d(A,B)=1$
otherwise. The metric $d$ is called _discrete metric_ on $\mathcal{X}$.
* $\diamond$
_Real line._ Set of all real numbers ($\mathbb{R}$) with metric defined as
$d(A,B)\buildrel\mathrm{def}\over{=\\!\\!=}|A-B|.$
* $\diamond$
_Metrics on the plane._ Let us denote by $\mathbb{R}^{2}$ the set of all pairs
$(x,y)$ of real numbers. Assume $A=(x_{A},y_{A})$ and $B=(x_{B},y_{B})$ are
arbitrary points in $\mathbb{R}^{2}$. One can equip $\mathbb{R}^{2}$ with the
following metrics.
* $\circ$
_Euclidean metric,_ denoted as $d_{2}$ and defined as
$d_{2}(A,B)=\sqrt{(x_{A}-x_{B})^{2}+(y_{A}-y_{B})^{2}}.$
* $\circ$
_Manhattan metric,_ denoted as $d_{1}$ and defined as
$d_{1}(A,B)=|x_{A}-x_{B}|+|y_{A}-y_{B}|.$
* $\circ$
_Maximum metric,_ denoted as $d_{\infty}$ and defined as
$d_{\infty}(A,B)=\max\\{|x_{A}-x_{B}|,|y_{A}-y_{B}|\\}.$
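For example, for $A=(0,0)$ and $B=(3,4)$ one gets
$d_{2}(A,B)=\sqrt{3^{2}+4^{2}}=5,\ \ \ d_{1}(A,B)=3+4=7,\ \ \ d_{\infty}(A,B)=\max\\{3,4\\}=4;$
so the three metrics are genuinely different.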
1.2. Exercise. Prove that $d_{1}$, $d_{2}$ and $d_{\infty}$ are metrics on
$\mathbb{R}^{2}$.
#### Shortcut for distance
Most of the time we study only one metric on the space. For example
$\mathbb{R}$ will always refer to the real line. Thus we will not need to name
the metric function each time.
Given a metric space $\mathcal{X}$, the distance between points $A$ and $B$
will be further denoted as
$AB\ \ \text{or}\ \ d_{\mathcal{X}}(A,B);$
the latter is used only if we need to emphasize that $A$ and $B$ are points of
the metric space $\mathcal{X}$.
For example, the triangle inequality can be written as
$AB+BC\geqslant AC.$
For the multiplication we will always use “${\hskip 0.5pt\cdot\nobreak\hskip
0.5pt}$”, so $AB$ should not be confused with $A{\hskip
0.5pt\cdot\nobreak\hskip 0.5pt}B$.
#### Isometries and motions
Recall that a map $f\colon\mathcal{X}\to\mathcal{Y}$ is a _bijection_ if it
gives an exact pairing of the elements of two sets. Equivalently,
$f\colon\mathcal{X}\to\mathcal{Y}$ is a bijection if it has an _inverse_ ;
i.e., a map $g\colon\mathcal{Y}\to\mathcal{X}$ such that $g(f(A))=\nobreak A$
for any $A\in\mathcal{X}$ and $f(g(B))=\nobreak B$ for any $B\in\mathcal{Y}$.
1.3. Definition. Let $\mathcal{X}$ and $\mathcal{Y}$ be two metric spaces and
$d_{\mathcal{X}}$, $d_{\mathcal{Y}}$ be their metrics. A map
$f\colon\mathcal{X}\to\nobreak\mathcal{Y}$
is called _distance-preserving_ if
$d_{\mathcal{Y}}(f(A),f(B))=d_{\mathcal{X}}(A,B)$
for any $A,B\in{\mathcal{X}}$.
A bijective distance-preserving map is called an _isometry_.
Two spaces are _isometric_ if there exists an isometry from one to the other.
The isometry from space to itself is also called _motion_ of the space.
1.4. Exercise. Show that any distance preserving map is injective; i.e., if
$f\colon\mathcal{X}\to\mathcal{Y}$ is a distance preserving map then $f(A)\neq
f(B)$ for any pair of distinct points $A,B\in\mathcal{X}$
1.5. Exercise. Show that if $f\colon\mathbb{R}\to\mathbb{R}$ is a motion of
the real line then either
$f(X)=f(0)+X\ \ \text{for any}\ \ X\in\mathbb{R}$
or
$f(X)=f(0)-X\ \ \text{for any}\ \ X\in\mathbb{R}.$
1.6. Exercise. Prove that $(\mathbb{R}^{2},d_{1})$ is isometric to
$(\mathbb{R}^{2},d_{\infty})$.
1.7. Exercise. Describe all the motions of the Manhattan plane.
#### Lines
If $\mathcal{X}$ is a metric space and $\mathcal{Y}$ is a subset of
$\mathcal{X}$, then a metric on $\mathcal{Y}$ can be obtained by restricting
the metric from $\mathcal{X}$. In other words, the distance between points of
$\mathcal{Y}$ is defined to be the distance between the same points in
$\mathcal{X}$. Thus any subset of a metric space can be also considered as a
metric space.
1.8. Definition. A subset $\ell$ of metric space is called _line_ if it is
isometric to the real line.
Note that a space with discrete metric has no lines. The following picture
shows examples of lines on the Manhattan plane, i.e. on $(\mathbb{R}^{2},d_{1})$.
Half-lines and segments. Assume there is a line $\ell$ passing through two
distinct points $P$ and $Q$. In this case we might denote $\ell$ as $(PQ)$.
There might be more than one line through $P$ and $Q$, but if we write $(PQ)$
we assume that we made a choice of such line.
Let us denote by $[PQ)$ the half-line which starts at $P$ and contains $Q$.
Formally speaking, $[PQ)$ is a subset of $(PQ)$ which corresponds to
$[0,\infty)$ under an isometry $f\colon(PQ)\to\mathbb{R}$ such that $f(P)=0$
and $f(Q)>0$.
The subset of the line $(PQ)$ between $P$ and $Q$ is called the segment between $P$
and $Q$ and denoted as $[PQ]$. Formally, the segment can be defined as the
intersection of two half-lines: $[PQ]=[PQ)\cap[QP)$.
An ordered pair of half-lines which start at the same point is called _angle_.
An angle formed by two half-lines $[PQ)$ and $[PR)$ will be denoted as $\angle
QPR$. In this case the point $P$ is called _vertex_ of the angle.
1.9. Exercise. Show that if $X\in[PQ]$ then $PQ=PX+QX$.
1.10. Exercise. Consider graph $y=|x|$ in $\mathbb{R}^{2}$. In which of the
following spaces (a) $(\mathbb{R}^{2},d_{1})$, (b) $(\mathbb{R}^{2},d_{2})$
(c) $(\mathbb{R}^{2},d_{\infty})$ it forms a line? Why?
1.11. Exercise. How many points $M$ are there on the line $(AB)$ for which we have
1. 1.
$AM=MB$ ?
2. 2.
$AM=2{\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}MB$ ?
#### Congruent triangles
An _ordered_ triple of distinct points in a metric space, say $A,B,C$ is
called _triangle_ and denoted as $\triangle ABC$. So the triangles $\triangle
ABC$ and $\triangle ACB$ are considered as different.
Two triangles $\triangle A^{\prime}B^{\prime}C^{\prime}$ and $\triangle ABC$
are called _congruent_ (briefly $\triangle
A^{\prime}B^{\prime}C^{\prime}\cong\nobreak\triangle ABC$) if there is a
motion $f\colon\mathcal{X}\to\mathcal{X}$ such that $A^{\prime}=f(A)$,
$B^{\prime}=f(B)$ and $C^{\prime}=f(C)$.
Let $\mathcal{X}$ be a metric space and $f,g\colon\mathcal{X}\to\mathcal{X}$
be two motions. Note that the inverse $f^{-1}:\mathcal{X}\to\mathcal{X}$, as
well as the composition $f\circ g:\mathcal{X}\to\mathcal{X}$ are also motions.
It follows that “$\cong$” is an equivalence relation; i.e., the following two
conditions hold.
* $\diamond$
If $\triangle A^{\prime}B^{\prime}C^{\prime}\cong\nobreak\triangle ABC$ then
$\triangle ABC\cong\nobreak\triangle A^{\prime}B^{\prime}C^{\prime}$.
* $\diamond$
If $\triangle
A^{\prime\prime}B^{\prime\prime}C^{\prime\prime}\cong\nobreak\triangle
A^{\prime}B^{\prime}C^{\prime}$ and $\triangle
A^{\prime}B^{\prime}C^{\prime}\cong\nobreak\triangle ABC$ then
$\triangle A^{\prime\prime}B^{\prime\prime}C^{\prime\prime}\cong\nobreak\triangle
ABC.$
Note that if $\triangle A^{\prime}B^{\prime}C^{\prime}\cong\nobreak\triangle
ABC$ then $AB=\nobreak A^{\prime}B^{\prime}$, $BC=B^{\prime}C^{\prime}$ and
$CA=C^{\prime}A^{\prime}$.
For the discrete metric, as well as for some other metric spaces, the converse also holds.
The following example shows that it does not hold in the Manhattan plane.
Example. Consider three points $A=(0,1)$, $B=(1,0)$ and $C=\nobreak(-1,0)$ on
the Manhattan plane $(\mathbb{R}^{2},d_{1})$. Note that
$d_{1}(A,B)=d_{1}(A,C)=d_{1}(B,C)=2.$
On one hand
$\triangle ABC\cong\triangle ACB.$
Indeed, it is easy to see that the map $(x,y)\mapsto(-x,y)$ is an isometry of
$(\mathbb{R}^{2},d_{1})$ which sends $A\mapsto A$, $B\mapsto C$ and $C\mapsto
B$.
On the other hand
$\triangle ABC\ncong\nobreak\triangle BCA.$
Indeed, assume there is a motion $f$ of $(\mathbb{R}^{2},d_{1})$ which sends
$A\mapsto B$ and $B\mapsto C$. Note that a point $M$ is a midpoint of $A$ and $B$
(that is, $d_{1}(A,M)=d_{1}(B,M)=\tfrac{1}{2}{\hskip
0.5pt\cdot\nobreak\hskip 0.5pt}d_{1}(A,B)$) if and only if
$f(M)$ is a midpoint of $B$ and $C$. The set of midpoints for $A$ and $B$ is
infinite, it contains all points $(t,t)$ for $t\in[0,1]$ (it is the dark gray
segment on the picture). On the other hand the midpoint for $B$ and $C$ is
unique (it is the black point on the picture). Thus $f$ can not be bijective,
a contradiction.
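For completeness, let us verify the claim about midpoints of $A$ and $B$: for $M=(t,t)$ with $t\in[0,1]$ we have
$d_{1}(A,M)=t+(1-t)=1\ \ \text{and}\ \ d_{1}(B,M)=(1-t)+t=1,$
which equals $\tfrac{1}{2}{\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}d_{1}(A,B)$; so every such point is indeed a midpoint of $A$ and $B$.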
#### Continuous maps
Here we define continuous maps between metric spaces. This definition is a
straightforward generalization of the standard definition for the real-to-real
functions.
Further, let $\mathcal{X}$ and $\mathcal{Y}$ be two metric spaces and
$d_{\mathcal{X}}$, $d_{\mathcal{Y}}$ be their metrics.
A map $f\colon\mathcal{X}\to\mathcal{Y}$ is called continuous at point
$A\in\mathcal{X}$ if for any $\varepsilon>0$ there is $\delta>0$ such that if
$d_{\mathcal{X}}(A,A^{\prime})<\delta$ then
$d_{\mathcal{Y}}(f(A),f(A^{\prime}))<\varepsilon.$
The same way one may define a continuous map of several variables. Say, assume
$f(A,B,C)$ is a function which returns a point in the space $\mathcal{Y}$ for
a triple of points $(A,B,C)$ in the space $\mathcal{X}$. The map $f$ might be
defined only for some triples in $\mathcal{X}$.
Assume $f(A,B,C)$ is defined. Then we say that $f$ is continuous at the triple
$(A,B,C)$ if for any $\varepsilon>0$ there is $\delta>0$ such that
$d_{\mathcal{Y}}(f(A,B,C),f(A^{\prime},B^{\prime},C^{\prime}))<\varepsilon$
whenever $d_{\mathcal{X}}(A,A^{\prime})<\delta$,
$d_{\mathcal{X}}(B,B^{\prime})<\delta$,
$d_{\mathcal{X}}(C,C^{\prime})<\delta$ and
$f(A^{\prime},B^{\prime},C^{\prime})$ is defined.
1.12. Exercise. Let $\mathcal{X}$ be a metric space.
1. (a)
Let $A\in\mathcal{X}$ be a fixed point. Show that the function
$f(B)\buildrel\mathrm{def}\over{=\\!\\!=}d_{\mathcal{X}}(A,B)$
is continuous at any point $B$.
2. (b)
Show that $d_{\mathcal{X}}(A,B)$ is continuous at any pair
$A,B\in\mathcal{X}$.
1.13. Exercise. Let $\mathcal{X}$, $\mathcal{Y}$ and $\mathcal{Z}$ be metric
spaces. Assume that the functions $f\colon\mathcal{X}\to\mathcal{Y}$ and
$g\colon\mathcal{Y}\to\mathcal{Z}$ are continuous at any point and $h=g\circ
f$ is their composition; i.e., $h(A)=g(f(A))$ for any $A\in\mathcal{X}$. Show
that $h\colon\mathcal{X}\to\mathcal{Z}$ is continuous.
#### Angles
Before formulating the axioms, we need to develop a language which makes it
possible to talk rigorously about angle measure.
Intuitively, the angle measure of an angle is how much one has to rotate the
first half-line counterclockwise so it gets the position of the second half-
line of the angle.
Note that the angle measure is defined up to a full rotation, which is $2{\hskip
0.5pt\cdot\nobreak\hskip 0.5pt}\pi$ if measured in radians; so the angles
$\dots,\alpha-2{\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}\pi$, $\alpha$,
$\alpha+2{\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}\pi$,$\alpha+4{\hskip
0.5pt\cdot\nobreak\hskip 0.5pt}\pi,\dots$ should be regarded to be the same.
#### Reals modulo $\bm{2{\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}\pi}$
Let us introduce a new notation; we will write
$\alpha\equiv\beta\ \ \ \ \text{or}\ \ \ \ \alpha\equiv\beta\pmod{2{\hskip
0.5pt\cdot\nobreak\hskip 0.5pt}\pi}$
if $\alpha=\beta+2{\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}\pi{\hskip
0.5pt\cdot\nobreak\hskip 0.5pt}n$ for some integer $n$. In this case we say
$\textit{``$\alpha$ is equal to $\beta$ modulo $2{\hskip
0.5pt\cdot\nobreak\hskip 0.5pt}\pi$''}.$
For example
$-\pi\equiv\pi\equiv 3{\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}\pi\ \
\text{and}\ \ \tfrac{1}{2}{\hskip 0.5pt\cdot\nobreak\hskip
0.5pt}\pi\equiv-\tfrac{3}{2}{\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}\pi.$
The introduced relation “$\equiv$” behaves roughly as equality. We can do
addition, subtraction and multiplication by an integer number without getting into
trouble. For example
$\alpha\equiv\beta\ \ \ \ \text{and}\ \ \ \
\alpha^{\prime}\equiv\beta^{\prime}$
implies
$\alpha+\alpha^{\prime}\equiv\beta+\beta^{\prime},\ \ \ \ \ \
\alpha-\alpha^{\prime}\equiv\beta-\beta^{\prime}\ \ \ \ \text{and}\ \ \ \
n{\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}\alpha\equiv n{\hskip
0.5pt\cdot\nobreak\hskip 0.5pt}\beta$
for any integer $n$. But “$\equiv$” does not in general respect multiplication
by non-integer numbers; for example
$\pi\equiv-\pi\ \ \ \ \text{but}\ \ \ \ \tfrac{1}{2}{\hskip
0.5pt\cdot\nobreak\hskip 0.5pt}\pi\not\equiv-\tfrac{1}{2}{\hskip
0.5pt\cdot\nobreak\hskip 0.5pt}\pi.$
1.14. Exercise. Show that $2{\hskip 0.5pt\cdot\nobreak\hskip
0.5pt}\alpha\equiv 0$ if and only if $\alpha\equiv 0$ or $\alpha\equiv\pi$.
## Euclidean geometry
### Chapter 2 The Axioms
#### Models and axioms
The metric space $(\mathbb{R}^{2},d_{2})$ described among the examples in
Chapter 1 may be taken as a definition of the Euclidean plane. It can be
called the _numerical model_ of the Euclidean plane; it builds the Euclidean plane
from the real numbers while the latter are assumed to be known.
In the axiomatic approach, one describes the Euclidean plane as anything which
satisfies a list of properties called _axioms_. An axiomatic system for a theory
is like the rules of a game. Once the axiom system is fixed, a statement is
considered to be true if it follows from the axioms, and nothing else is
considered to be true.
Historically, the axioms provided common ground for mathematicians. Their
formulations were not rigorous at all; for example, Euclid described a _line_
as _breadthless length_. But the axioms were formulated clearly enough so that
one mathematician could understand the other.
The best way to understand an axiomatic system is to make one by yourself.
Look around and choose a physical model of the Euclidean plane, say imagine an
infinite and perfect surface of chalk board. Now try to collect the key
observations about this model. Let us assume that we have intuitive
understanding of such notions as _line_ and _point_.
* $\diamond$
We can measure distances between points.
* $\diamond$
We can draw a unique line which passes through two given points.
* $\diamond$
We can measure angles.
* $\diamond$
If we rotate or shift we will not see the difference.
* $\diamond$
If we change scale we will not see the difference.
These observations are good enough to start with. In the next section we use
the language developed in this and previous chapters to formulate them
rigorously.
The observations above are intuitively obvious. On the other hand, it is not
intuitively obvious that Euclidean plane is isometric to
$(\mathbb{R}^{2},d_{2})$.
Another advantage of using the axiomatic approach lies in the fact that it is
easily adjustable. For example, we may remove one axiom from the list, or
exchange it for another axiom. We will do such modifications in Chapter 10 and
further.
#### The Axioms
In this section we set up an axiomatic system for the Euclidean plane. Roughly,
it says that the Euclidean plane is a metric space in which the observations
stated in the previous section hold, but now everything is rigorously stated.
This set of axioms is very close to the one given by Birkhoff in [3].
2.1. Definition. The _Euclidean plane_ is a metric space with at least two
points which satisfies the following axioms:
1. I.
There is one and only one line that contains any two given distinct points
$P$ and $Q$.
2. II.
Any angle $\angle AOB$ defines a real number in the interval $(-\pi,\pi]$.
This number is called _angle measure of $\angle AOB$_ and denoted by
$\measuredangle AOB$. It satisfies the following conditions:
1. (a)
Given a half-line $[OA)$ and $\alpha\in(-\pi,\pi]$ there is a unique half-line
$[OB)$ such that $\measuredangle AOB=\alpha$.
2. (b)
For any points $A$, $B$ and $C$ distinct from $O$ we have
$\measuredangle AOB+\measuredangle BOC\equiv\measuredangle AOC.$
3. (c)
The function
$\measuredangle\colon(A,O,B)\mapsto\measuredangle AOB$
is continuous at any triple of points $(A,O,B)$ such that $O\neq A$ and $O\neq
B$ and $\measuredangle AOB\neq\pi$.
3. III.
$\triangle ABC\cong\triangle A^{\prime}B^{\prime}C^{\prime}$ if and only if
$A^{\prime}B^{\prime}=AB,\ \ \ A^{\prime}C^{\prime}=AC\ \ \ \text{and}\ \ \ \measuredangle C^{\prime}A^{\prime}B^{\prime}=\pm\measuredangle CAB.$
4. IV.
If for two triangles $\triangle ABC$, $\triangle AB^{\prime}C^{\prime}$ and
$k>0$ we have
$B^{\prime}\in[AB),\ \ \ C^{\prime}\in[AC),\ \ \ AB^{\prime}=k{\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}AB,\ \ \ AC^{\prime}=k{\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}AC$
then
$B^{\prime}C^{\prime}=k{\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}BC,\ \ \ \measuredangle ABC=\measuredangle AB^{\prime}C^{\prime}\ \ \ \text{and}\ \ \ \measuredangle ACB=\measuredangle AC^{\prime}B^{\prime}.$
From now on, we can use no information about the Euclidean plane which does not
follow from Definition 2.1.
#### Angle and angle measure
The notations $\angle AOB$ and $\measuredangle AOB$ look similar; they also
have close but different meanings, which are better not confused. The angle
$\angle AOB$ is a pair of half-lines $[OA)$ and $[OB)$ while $\measuredangle
AOB$ is a number in the interval $(-\pi,\pi]$.
The equality
$\angle AOB=\angle A^{\prime}O^{\prime}B^{\prime}$
means that $[OA)=[O^{\prime}A^{\prime})$ and
$[OB)=\nobreak[O^{\prime}B^{\prime})$, in particular $O=O^{\prime}$. On the
other hand the equality
$\measuredangle AOB=\nobreak\measuredangle A^{\prime}O^{\prime}B^{\prime}$
means only equality of two real numbers; in this case $O$ may be distinct from
$O^{\prime}$.
#### Lines and half-lines
2.2. Proposition. Any two distinct lines intersect at most at one point.
Proof. Assume two lines $\ell$ and $m$ intersect at two distinct points $P$
and $Q$. Applying Axiom I, we get $\ell=m$. ∎
2.3. Exercise. Suppose $A^{\prime}\in[OA)$ and $A^{\prime}\not=O$; show that
$[OA)=\nobreak[OA^{\prime})$.
2.4. Proposition. Given $r\geqslant 0$ and a half-line $[OA)$ there is a unique
$A^{\prime}\in[OA)$ such that $OA^{\prime}=r$.
Proof. According to definition of half-line, there is an isometry
$f\colon[OA)\to[0,\infty),$
such that $f(O)=0$. By the definition of isometry, $OA^{\prime}=f(A^{\prime})$
for any $A^{\prime}\in\nobreak[OA)$. Thus, $OA^{\prime}=r$ if and only if
$f(A^{\prime})=r$.
Since isometry has to be bijective, the statement follows. ∎
#### Zero angle
2.5. Proposition. $\measuredangle AOA=0$ for any $A\not=O$.
Proof. According to Axiom IIb,
$\measuredangle AOA+\measuredangle AOA\equiv\measuredangle AOA.$
Subtracting $\measuredangle AOA$ from both sides, we get $\measuredangle
AOA\equiv 0$. Hence $\measuredangle AOA=\nobreak 0$. ∎
2.6. Exercise. Assume $\measuredangle AOB=0$. Show that $[OA)=[OB)$.
2.7. Proposition. For any $A$ and $B$ distinct from $O$, we have
$\measuredangle AOB\equiv-\measuredangle BOA.$
Proof. According to Axiom IIb,
$\measuredangle AOB+\measuredangle BOA\equiv\measuredangle AOA$
By Proposition 2 $\measuredangle AOA=0$. Hence the result. ∎
#### Straight angle
If $\measuredangle AOB=\pi$, we say that $\angle AOB$ is a _straight angle_.
Note that by Proposition 2, if $\angle AOB$ is a straight angle then so is
$\angle BOA$.
We say that point $O$ _lies between_ points $A$ and $B$ if $O\not=A$,
$O\not=B$ and $O\in[AB]$.
2.8. Theorem. The angle $\angle AOB$ is straight if and only if $O$ _lies
between_ $A$ and $B$.
Proof. By Proposition 2, we may assume that $OA=OB=1$.
($\Leftarrow$). Assume $O$ lies between $A$ and $B$. Let
$\alpha=\measuredangle AOB$.
Applying Axiom IIa, we get a half-line $[OA^{\prime})$ such that
$\alpha=\measuredangle BOA^{\prime}$. We can assume that $OA^{\prime}=1$.
According to Axiom III, $\triangle AOB\cong\nobreak\triangle BOA^{\prime}$;
denote by $h$ the corresponding motion of the plane.
Then $(A^{\prime}B)=h(AB)\ni h(O)=O$. Therefore both lines $(AB)$ and
$(A^{\prime}B)$, contain $B$ and $O$. By Axiom I, $(AB)=(A^{\prime}B)$.
By the definition of the line, $(AB)$ contains exactly two points $A$ and $B$
at distance $1$ from $O$. Since $OA^{\prime}=1$ and $A^{\prime}\neq B$, we get
$A=A^{\prime}$.
By Axiom IIb and Proposition 2, we get
$2\cdot\alpha\equiv\measuredangle AOB+\measuredangle BOA^{\prime}\equiv\measuredangle AOB+\measuredangle BOA\equiv\measuredangle AOA\equiv 0.$
Since $[OA)\neq[OB)$, Axiom IIa implies $\alpha\neq 0$. Hence $\alpha=\pi$
(see Exercise 1).
($\Rightarrow$). Suppose that $\measuredangle AOB\equiv\pi$. Consider line
$(OA)$ and choose point $B^{\prime}$ on $(OA)$ so that $O$ lies between $A$
and $B^{\prime}$.
From above, we have $\measuredangle AOB^{\prime}=\pi$. Applying Axiom IIa, we
get $[OB)=\nobreak[OB^{\prime})$. In particular, $O$ lies between $A$ and $B$.
∎
A triangle $\triangle ABC$ is called _degenerate_ if $A$, $B$ and $C$ lie on
one line.
2.9. Corollary. A triangle is degenerate if and only if one of its angles is
equal to $\pi$ or $0$.
2.10. Exercise. Show that three distinct points $A$, $O$ and $B$ lie on one
line if and only if
$2{\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}\measuredangle AOB\equiv 0.$
2.11. Exercise. Let $A$, $B$ and $C$ be three points distinct from $O$. Show
that $B$, $O$ and $C$ lie on one line if and only if
$2{\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}\measuredangle AOB\equiv 2{\hskip
0.5pt\cdot\nobreak\hskip 0.5pt}\measuredangle AOC.$
#### Vertical angles
A pair of angles $\angle AOB$ and $\angle A^{\prime}OB^{\prime}$ is called
_vertical_ if $O$ lies between $A$ and $A^{\prime}$ and at the same time $O$
lies between $B$ and $B^{\prime}$.
2.12. Proposition. The vertical angles have equal measures.
Proof. Assume that the angles $\angle AOB$ and $\angle A^{\prime}OB^{\prime}$
are vertical.
Note that the angles $\angle AOA^{\prime}$ and $\angle BOB^{\prime}$ are
straight. Therefore $\measuredangle AOA^{\prime}=\measuredangle
BOB^{\prime}=\pi$. It follows that
$0=\measuredangle AOA^{\prime}-\measuredangle BOB^{\prime}\equiv\measuredangle AOB+\measuredangle BOA^{\prime}-\measuredangle BOA^{\prime}-\measuredangle A^{\prime}OB^{\prime}\equiv\measuredangle AOB-\measuredangle A^{\prime}OB^{\prime}.$
Hence the result follows. ∎
2.13. Exercise. Assume $O$ is the mid-point for both segments $[AB]$ and
$[CD]$. Prove that $AC=BD$.
### Chapter 3 Half-planes
This chapter contains long proofs of self-evident statements. It is OK to skip
it, but make sure you know the definitions of positive and negative angles and that
your intuition agrees with the statements of this chapter.
#### Sign of angle
* $\diamond$
An angle $\angle AOB$ is called _positive_ if $0<\measuredangle AOB<\pi$;
* $\diamond$
An angle $\angle AOB$ is called _negative_ if $\measuredangle AOB<0$.
Note that according to the above definitions the straight angle as well as
zero angle are neither positive nor negative.
3.1. Exercise. Show that $\angle AOB$ is positive if and only if $\angle BOA$
is negative.
3.2. Exercise. Let $\angle AOB$ be a straight angle. Show that $\angle AOX$ is
positive if and only if $\angle BOX$ is negative.
3.3. Exercise. Assume that the angles $\angle AOB$ and $\angle BOC$ are
positive. Show that
$\measuredangle AOB+\measuredangle BOC+\measuredangle COA=2\cdot\pi$
if $\angle COA$ is positive, and
$\measuredangle AOB+\measuredangle BOC+\measuredangle COA=0$
if $\angle COA$ is negative.
#### Intermediate value theorem
3.4. Intermediate value theorem. Let $f\colon[a,b]\to\mathbb{R}$ be a
continuous function. Assume $f(a)$ and $f(b)$ have opposite signs. Then
$f(t_{0})=0$ for some $t_{0}\in[a,b]$.
The Intermediate value theorem should be covered in any calculus course. We
will use the following corollary.
3.5. Corollary. Assume that for any $t\in\nobreak[0,1]$ we have three points
in the plane $O_{t}$, $A_{t}$ and $B_{t}$ such that
1. (a)
Each function $t\mapsto O_{t}$, $t\mapsto A_{t}$ and $t\mapsto B_{t}$ is
continuous.
2. (b)
For any $t\in[0,1]$, the points $O_{t}$, $A_{t}$ and $B_{t}$ do not lie on
one line.
Then the angles $\angle A_{0}O_{0}B_{0}$ and $\angle A_{1}O_{1}B_{1}$ have the
same sign.
Proof. Consider the function $f(t)=\measuredangle A_{t}O_{t}B_{t}$.
Since the points $O_{t}$, $A_{t}$ and $B_{t}$ do not lie on one line, Theorem
2 implies that $f(t)=\measuredangle A_{t}O_{t}B_{t}\neq 0$ or $\pi$ for any
$t\in[0,1]$.
Therefore by Axiom IIc and Exercise 1, $f$ is a continuous function.
Since $f$ never vanishes, the intermediate value theorem implies that $f(0)$ and $f(1)$ have the same sign;
hence the result follows. ∎
#### Same sign lemmas
3.6. Lemma. Assume $Q^{\prime}\in[PQ)$ and $Q^{\prime}\neq\nobreak P$. Then
for any $X\notin\nobreak(PQ)$ the angles $\angle PQX$ and $\angle
PQ^{\prime}X$ have the same sign.
Proof. By Proposition 2, for any $t\in[0,1]$ there is unique point
$Q_{t}\in[PQ)$ such that $PQ_{t}=(1-t){\hskip 0.5pt\cdot\nobreak\hskip
0.5pt}PQ+t{\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}PQ^{\prime}$. Note that the
map $t\mapsto Q_{t}$ is continuous, $Q_{0}=Q$ and $Q_{1}=Q^{\prime}$ and for
any $t\in[0,1]$, we have $P\neq\nobreak Q_{t}$.
Applying Corollary 3, for $P_{t}=P$, $Q_{t}$ and $X_{t}=X$, we get that
$\angle PQX$ has the same sign as $\angle PQ^{\prime}X$. ∎
3.7. Lemma. Assume $[XY]$ does not intersect $(PQ)$ then the angles $\angle
PQX$ and $\angle PQY$ have the same sign.
The proof is nearly identical to the one above.
Proof. According to Proposition 2, for any $t\in[0,1]$ there is a point
$X_{t}\in[XY]$ such that $XX_{t}=t{\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}XY$.
Note that the map $t\mapsto X_{t}$ is continuous, $X_{0}=X$ and $X_{1}=Y$ and
for any $t\in[0,1]$, we have $Q\neq X_{t}$.
Applying Corollary 3, for $P_{t}=\nobreak P$, $Q_{t}=\nobreak Q$ and $X_{t}$,
we get that $\angle PQX$ has the same sign as $\angle PQY$. ∎
#### Half-planes
3.8. Proposition. The complement of a line $(PQ)$ in the plane can be
presented in the unique way as a union of two disjoint subsets called _half-
planes_ such that
1. (a)
Two points $X,Y\notin(PQ)$ lie in the same half-plane if and only if the
angles $\angle PQX$ and $\angle PQY$ have the same sign.
2. (b)
Two points $X,Y\notin(PQ)$ lie in the same half-plane if and only if $[XY]$
does not intersect $(PQ)$.
Further we say that $X$ and $Y$ lie on _one side from_ $(PQ)$ if they lie in
one of the half-planes of $(PQ)$, and we say that $X$ and $Y$ lie on the
_opposite sides from_ $(PQ)$ if they lie in different half-planes of $(PQ)$.
Proof. Let us denote by $\mathcal{H}_{+}$ (correspondingly $\mathcal{H}_{-}$)
the set of points $X$ in the plane such that $\angle PQX$ is positive
(correspondingly negative).
According to Theorem 2, $X\notin(PQ)$ if and only if $\measuredangle PQX$ is
distinct from $0$ and $\pi$. Therefore $\mathcal{H}_{+}$ and
$\mathcal{H}_{-}$ give the unique subdivision of the complement of $(PQ)$
which satisfies property (a).
Now let us prove that this subdivision depends only on the line $(PQ)$;
i.e., if $(P^{\prime}Q^{\prime})=(PQ)$ and $X,Y\notin(PQ)$ then the angles
$\angle PQX$ and $\angle PQY$ have the same sign if and only if the angles
$\angle P^{\prime}Q^{\prime}X$ and $\angle P^{\prime}Q^{\prime}Y$ have the
same sign.
Applying Exercise 3, we can assume that $P=P^{\prime}$ and
$Q^{\prime}\in\nobreak[PQ)$. It remains to apply Lemma 3.
(b). Assume $[XY]$ intersects $(PQ)$. Since the subdivision depends only on
the line $(PQ)$, we can assume that $Q\in[XY]$. In this case, by Exercise 3,
the angles $\angle PQX$ and $\angle PQY$ have opposite signs.
Now assume $[XY]$ does not intersect $(PQ)$. In this case, by Lemma 3,
$\angle PQX$ and $\angle PQY$ have the same sign. ∎
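In the coordinate model, property (a) gives a practical half-plane test: the sign of $\measuredangle PQX$ reduces to the sign of a cross product. The short sketch below is ours (hypothetical coordinates) and is only an illustration of the proposition.

```python
def angle_sign(P, Q, X):
    """Sign of the angle PQX in the coordinate model: +1, -1, or 0 when X lies on (PQ)."""
    cross = (P[0] - Q[0]) * (X[1] - Q[1]) - (P[1] - Q[1]) * (X[0] - Q[0])
    return (cross > 0) - (cross < 0)

def same_side(P, Q, X, Y):
    """True if X and Y lie in the same half-plane with respect to the line (PQ)."""
    return angle_sign(P, Q, X) == angle_sign(P, Q, Y) != 0

P, Q = (0.0, 0.0), (4.0, 1.0)
print(same_side(P, Q, (1.0, 3.0), (2.0, 5.0)))    # True: same half-plane
print(same_side(P, Q, (1.0, 3.0), (2.0, -2.0)))   # False: opposite sides of (PQ)
```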
3.9. Exercise. Assume that the angles $\angle AOB$ and $\angle
A^{\prime}OB^{\prime}$ are vertical. Show that the line $(AB)$ does not
intersect the segment $[A^{\prime}B^{\prime}]$.
Consider triangle $\triangle ABC$. The segments $[AB]$, $[BC]$ and $[CA]$ are
called _sides of the triangle_.
The following theorem is a corollary of Proposition 3.
3.10. Pasch’s theorem. Assume a line $\ell$ does not pass through any vertex of a
triangle. Then it intersects either two or zero sides of the triangle.
3.11. Signs of angles of triangle. In any nondegenerate triangle $\triangle
ABC$ the angles $\angle ABC$, $\angle BCA$ and $\angle CAB$ have the same
sign.
Proof. Choose a point $Z\in(AB)$ so that $A$ lies between $B$ and $Z$.
According to Lemma 3, the angles $\angle ZBC$ and $\angle ZAC$ have the same
sign.
Note that $\measuredangle ABC=\measuredangle ZBC$ and
$\measuredangle ZAC+\measuredangle CAB\equiv\pi.$
Therefore $\angle CAB$ has the same sign as $\angle ZAC$, which in turn has the
same sign as $\angle ZBC=\angle ABC$.
Repeating the same argument for $\angle BCA$ and $\angle CAB$, we get the
result. ∎
3.12. Exercise. Show that two points $X,Y\notin(PQ)$ lie on the same side from
$(PQ)$ if and only if the angles $\angle PXQ$ and $\angle PYQ$ have the same
sign.
3.13. Exercise. Let $\triangle ABC$ be a nondegenerate triangle,
$A^{\prime}\in[BC]$ and $B^{\prime}\in[AC]$. Show that the segments
$[AA^{\prime}]$ and $[BB^{\prime}]$ intersect.
3.14. Exercise. Assume that the points $X$ and $Y$ lie on the opposite sides
from the line $(PQ)$. Show that the half-line $[PX)$ does not intersect
$[QY)$.
3.15. Advanced exercise. Note that the following quantity
$\tilde{\measuredangle}ABC=\begin{cases}\pi&\text{if }\measuredangle ABC=\pi,\\ -\measuredangle ABC&\text{if }\measuredangle ABC<\pi\end{cases}$
can serve as the angle measure; i.e., the axioms hold if one changes
everywhere $\measuredangle$ to $\tilde{\measuredangle}$.
Show that $\measuredangle$ and $\tilde{\measuredangle}$ are the only possible
angle measures on the plane.
Show that without Axiom IIc, this is no longer true.
#### Triangle with the given sides
Consider triangle $\triangle ABC$. Let $a=BC$, $b=CA$ and $c=AB$. Without loss
of generality we may assume that $a\leqslant b\leqslant c$. Then all three
triangle inequalities for $\triangle ABC$ hold if and only if $c\leqslant
a+b$. The following theorem states that this is the only restriction on $a$,
$b$ and $c$.
3.16. Theorem. Assume that $0<a\leqslant b\leqslant c\leqslant a+b$. Then
there is a triangle $\triangle ABC$ such that $a=BC$, $b=CA$ and $c=AB$.
The proof requires some preparation.
Assume $r>0$ and $\pi>\beta>0$. Consider triangle $\triangle ABC$ such that
$AB=BC=r$ and $\measuredangle ABC=\beta$. The existence of such a triangle
follows from Axiom IIa and Proposition 2.
Note that according to Axiom III, the values $\beta$ and $r$ define the
triangle up to congruence. In particular the distance $AC$ depends only on
$\beta$ and $r$. Set
$s(\beta,r)\stackrel{\mathrm{def}}{=}AC.$
3.17. Proposition. Given $r>0$ and $\varepsilon>0$, there is $\delta>0$ such
that if $0<\beta<\delta$ then $s(\beta,r)<\varepsilon$.
Proof. Fix two points $A$ and $B$ such that $AB=r$.
Choose a point $X$ such that $\measuredangle ABX$ is positive. Let $Y\in[AX)$
be the point such that $AY=\tfrac{\varepsilon}{8}$; it exists by Proposition
2.
Note that $X$ and $Y$ lie on the same side from $(AB)$; therefore $\angle ABY$
is positive. Set $\delta=\measuredangle ABY$.
Assume $0<\beta<\delta$, $\measuredangle ABC=\beta$ and $BC=\nobreak r$.
Applying Axiom IIa, we can choose a half-line $[BZ)$ such that $\measuredangle
ABZ=\tfrac{1}{2}{\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}\beta$. Note that $A$
and $Y$ lie on the opposite sides from $(BZ)$. Therefore $(BZ)$ intersects
$[AY]$; denote by $D$ the point of intersection.
Since $D\in(BZ)$, we get $\measuredangle ABD=\tfrac{\beta}{2}$ or
$\tfrac{\beta}{2}-\pi$. The latter is impossible since $D$ and $Y$ lie on the
same side from $(AB)$. Therefore
$\measuredangle ABD=\measuredangle DBC=\tfrac{\beta}{2}.$
By Axiom III, $\triangle ABD\cong\triangle CBD$. In particular
$AC\leqslant AD+DC=2\cdot AD\leqslant 2\cdot AY=\tfrac{\varepsilon}{4}.$
Hence the result follows. ∎
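In the coordinate model the function $s$ admits an explicit formula: the isosceles triangle with two sides $r$ and angle $\beta$ between them has base $2\cdot r\cdot\sin\tfrac{\beta}{2}$. The sketch below is ours and only illustrates the proposition numerically; the axiomatic proof above does not use the formula.

```python
import math

def s(beta, r):
    """Base AC of the isosceles triangle with AB = BC = r and angle beta at B
    (explicit coordinate-model formula)."""
    return 2.0 * r * math.sin(beta / 2.0)

r, eps = 1.0, 1e-3
delta = 2.0 * math.asin(eps / (2.0 * r))   # any 0 < beta < delta gives s(beta, r) < eps
print(s(0.9 * delta, r) < eps)             # True
```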
3.18. Corollary. Fix a real number $r>0$ and two distinct points $A$ and $B$.
Then for any real number $\beta\in[0,\pi]$, there is unique point $C_{\beta}$
such that $BC_{\beta}=r$ and $\measuredangle ABC_{\beta}=\beta$. Moreover, the
map $\beta\mapsto C_{\beta}$ is a continuous map from $[0,\pi]$ to the plane.
Proof. The existence and uniqueness of $C_{\beta}$ follows from Axiom IIa and
Proposition 2.
Note that if $\beta_{1}\neq\beta_{2}$ then
$C_{\beta_{1}}C_{\beta_{2}}=s(|\beta_{1}-\beta_{2}|,r).$
Therefore Proposition 3 implies that the map $\beta\mapsto C_{\beta}$ is
continuous. ∎
Proof of Theorem 3. Fix points $A$ and $B$ such that $AB=c$. Given
$\beta\in[0,\pi]$, denote by $C_{\beta}$ the point in the plane such that
$BC_{\beta}=a$ and $\measuredangle ABC_{\beta}=\beta$.
According to Corollary 3, the map $\beta\mapsto C_{\beta}$ is continuous.
Therefore the function $b(\beta)=AC_{\beta}$ is continuous (formally it follows
from Exercise 1 and Exercise 1).
Note that $b(0)=c-a$ and $b(\pi)=c+a$. Since $c-a\leqslant b\leqslant
c+a$, by the intermediate value theorem (3) there is
$\beta_{0}\in[0,\pi]$ such that $b(\beta_{0})=b$. Hence the result follows. ∎
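The construction in the proof can also be carried out numerically in the coordinate model: fix $A$ and $B$ with $AB=c$ and solve for the angle at $B$, the law of cosines playing the role of the intermediate value argument. A sketch (ours, not part of the text's argument):

```python
import math

def triangle_with_sides(a, b, c):
    """Return vertices A, B, C with BC = a, CA = b, AB = c,
    assuming 0 < a <= b <= c <= a + b (coordinate-model sketch)."""
    A, B = (0.0, 0.0), (c, 0.0)
    cos_beta = (a * a + c * c - b * b) / (2.0 * a * c)    # law of cosines at B
    beta = math.acos(max(-1.0, min(1.0, cos_beta)))
    C = (c - a * math.cos(beta), a * math.sin(beta))
    return A, B, C

A, B, C = triangle_with_sides(2.0, 3.0, 4.0)
print(round(math.dist(B, C), 9), round(math.dist(C, A), 9), round(math.dist(A, B), 9))
# 2.0 3.0 4.0
```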
### Chapter 4 Congruent triangles
#### Side-angle-side condition
Our next goal is to give conditions which guarantee congruence of two
triangles. One such condition is Axiom III; it is also called the _side-angle-
side condition_, or briefly the _SAS condition_.
#### Angle-side-angle condition
4.1. ASA condition. Assume that $AB=A^{\prime}B^{\prime}$, $\measuredangle
ABC\equiv\pm\measuredangle A^{\prime}B^{\prime}C^{\prime}$, $\measuredangle
CAB\equiv\pm\measuredangle C^{\prime}A^{\prime}B^{\prime}$ and $\triangle
A^{\prime}B^{\prime}C^{\prime}$ is nondegenerate. Then
$\triangle ABC\cong\triangle A^{\prime}B^{\prime}C^{\prime}.$
Note that for degenerate triangles the statement does not hold, say consider
one triangle with sides $1$, $4$, $5$ and the other with sides $2$, $3$, $5$.
Proof. According to Theorem 3, either
$\measuredangle ABC\equiv\measuredangle A^{\prime}B^{\prime}C^{\prime}\quad\text{and}\quad\measuredangle CAB\equiv\measuredangle C^{\prime}A^{\prime}B^{\prime}$ ➊
or
$\measuredangle ABC\equiv-\measuredangle A^{\prime}B^{\prime}C^{\prime}\quad\text{and}\quad\measuredangle CAB\equiv-\measuredangle C^{\prime}A^{\prime}B^{\prime}.$ ➋
Further we assume that ➊ ‣ 4 holds; the case ➋ ‣ 4 is analogous.
Let $C^{\prime\prime}$ be the point on the half-line $[A^{\prime}C^{\prime})$
such that $A^{\prime}C^{\prime\prime}=\nobreak AC$.
By Axiom III, $\triangle A^{\prime}B^{\prime}C^{\prime\prime}\cong\triangle
ABC$. Applying Axiom III again, we get
$\measuredangle A^{\prime}B^{\prime}C^{\prime\prime}\equiv\measuredangle
ABC\equiv\measuredangle A^{\prime}B^{\prime}C^{\prime}.$
By Axiom IIa, $[B^{\prime}C^{\prime})=[B^{\prime}C^{\prime\prime})$. Hence
$C^{\prime\prime}$ lies on $(B^{\prime}C^{\prime})$ as well as on
$(A^{\prime}C^{\prime})$.
Since $\triangle A^{\prime}B^{\prime}C^{\prime}$ is not degenerate,
$(A^{\prime}C^{\prime})$ is distinct from $(B^{\prime}C^{\prime})$. Applying
Axiom I, we get $C^{\prime\prime}=C^{\prime}$.
Therefore $\triangle A^{\prime}B^{\prime}C^{\prime}=\triangle
A^{\prime}B^{\prime}C^{\prime\prime}\cong\triangle ABC$. ∎
#### Isosceles triangles
A triangle with two equal sides is called _isosceles_ ; the remaining side is
called the _base_ of the isosceles triangle.
4.2. Theorem. Assume $\triangle ABC$ is isosceles with base $[AB]$. Then
$\measuredangle ABC\equiv-\measuredangle BAC.$
Moreover, the converse holds if $\triangle ABC$ is nondegenerate.
The following proof is due to Pappus of Alexandria.
Proof. Note that
$CA=CB,\ CB=CA,\ \measuredangle ACB\equiv-\measuredangle BCA.$
Therefore by Axiom III,
$\triangle CAB\cong\triangle CBA.$
Applying the theorem on the signs of angles of triangles (3) and Axiom III
again, we get
$\measuredangle CAB\equiv-\measuredangle CBA.$
To prove the converse, we assume $\measuredangle
CAB\equiv\nobreak-\measuredangle CBA$. By ASA condition 4, $\triangle
CAB\cong\nobreak\triangle CBA$. Therefore $CA=\nobreak CB$. ∎
#### Side-side-side condition
4.3. SSS condition. $\triangle ABC\cong\triangle
A^{\prime}B^{\prime}C^{\prime}$ if
$A^{\prime}B^{\prime}=AB,\ \ B^{\prime}C^{\prime}=BC\ \ \text{and}\ \
C^{\prime}A^{\prime}=CA.$
Proof. Choose $C^{\prime\prime}$ so that
$A^{\prime}C^{\prime\prime}=A^{\prime}C^{\prime}$ and $\measuredangle
B^{\prime}A^{\prime}C^{\prime\prime}\equiv\measuredangle BAC$. According to
Axiom III,
$\triangle A^{\prime}B^{\prime}C^{\prime\prime}\cong\triangle ABC.$
It will suffice to prove that
$\triangle A^{\prime}B^{\prime}C^{\prime}\cong\triangle
A^{\prime}B^{\prime}C^{\prime\prime}.$ ➌
The condition ➌ ‣ 4 trivially holds if $C^{\prime\prime}=\nobreak C^{\prime}$.
Thus it remains to consider the case $C^{\prime\prime}\neq\nobreak
C^{\prime}$.
Clearly, the corresponding sides of $\triangle A^{\prime}B^{\prime}C^{\prime}$
and $\triangle A^{\prime}B^{\prime}C^{\prime\prime}$ are equal.
Note that triangles $\triangle C^{\prime}A^{\prime}C^{\prime\prime}$ and
$\triangle C^{\prime}B^{\prime}C^{\prime\prime}$ are isosceles. By Theorem 4,
we have
$\displaystyle\measuredangle A^{\prime}C^{\prime\prime}C^{\prime}$
$\displaystyle\equiv-\measuredangle A^{\prime}C^{\prime}C^{\prime\prime},$
$\displaystyle\measuredangle C^{\prime}C^{\prime\prime}B^{\prime}$
$\displaystyle\equiv-\measuredangle C^{\prime\prime}C^{\prime}B^{\prime}.$
Whence by addition
$\measuredangle A^{\prime}C^{\prime}B^{\prime}\equiv-\measuredangle
A^{\prime}C^{\prime\prime}B^{\prime}.$
Applying Axiom III again, we get ➌ ‣ 4. ∎
4.4. Exercise. Let $M$ be the midpoint of side $[AB]$ of a triangle $\triangle
ABC$ and $M^{\prime}$ be the midpoint of side $[A^{\prime}B^{\prime}]$ of a
triangle $\triangle A^{\prime}B^{\prime}C^{\prime}$. Assume
$C^{\prime}A^{\prime}=CA$, $C^{\prime}B^{\prime}=CB$ and
$C^{\prime}M^{\prime}=CM$. Prove that $\triangle
A^{\prime}B^{\prime}C^{\prime}\cong\triangle ABC$.
4.5. Exercise. Let $\triangle ABC$ be isosceles with base $[AB]$ and the
points $A^{\prime}\in[BC]$ and $B^{\prime}\in\nobreak[AC]$ be such that
$CA^{\prime}=CB^{\prime}$. Show that
1. (a)
$\triangle AB^{\prime}C\cong\triangle BA^{\prime}C$;
2. (b)
$\triangle ABB^{\prime}\cong\triangle BAA^{\prime}$.
4.6. Exercise. Show that if $AB+BC=AC$
then $B\in[AC]$.
4.7. Exercise. Let $\triangle ABC$ be a nondegenerate triangle and let $\iota$
be a motion of the plane such that
$\iota(A)=A,\ \ \iota(B)=B\ \ \text{and}\ \ \iota(C)=C.$
Show that $\iota$ is the identity; i.e. $\iota(X)=X$ for any point $X$ on the
plane.
### Chapter 5 Perpendicular lines
#### Right, acute and obtuse angles
* $\diamond$
If $|\measuredangle AOB|=\tfrac{\pi}{2}$, we say that the angle $\angle AOB$
is _right_ ;
* $\diamond$
If $|\measuredangle AOB|<\tfrac{\pi}{2}$, we say that the angle $\angle AOB$
is _acute_ ;
* $\diamond$
If $|\measuredangle AOB|>\tfrac{\pi}{2}$, we say that the angle $\angle AOB$
is _obtuse_.
On the diagrams, the right angles will be marked with a little square.
If $\angle AOB$ is right, we say also that $[OA)$ is _perpendicular_ to
$[OB)$; it will be written as $[OA)\perp\nobreak[OB)$.
From Theorem 2, it follows that the lines $(OA)$ and $(OB)$ may appropriately be
called _perpendicular_ if $[OA)\perp\nobreak[OB)$. In this case we also
write $(OA)\perp\nobreak(OB)$.
5.1. Exercise. Assume point $O$ lies between $A$ and $B$. Show that for any
point $X$ the angle $\angle XOA$ is acute if and only if $\angle XOB$ is
obtuse.
#### Perpendicular bisector
Assume $M$ is the midpoint of the segment $[AB]$; i.e., $M\in(AB)$ and
$AM=\nobreak MB$.
The line $\ell$ passing through $M$ and perpendicular to $(AB)$ is called the
_perpendicular bisector_ to the segment $[AB]$.
5.2. Theorem. Given distinct points $A$ and $B$, all points equidistant from
$A$ and $B$ and no others lie on the perpendicular bisector to $[AB]$.
Proof. Let $M$ be the midpoint of $[AB]$.
Assume $PA=PB$ and $P\neq M$. According to SSS-condition (4), $\triangle
AMP\cong\nobreak\triangle BMP$. Hence
$\measuredangle AMP\equiv\pm\measuredangle BMP.$
Since $A\not=B$ and $M$ is the midpoint of $[AB]$, the half-lines $[MA)$ and $[MB)$
are distinct; hence by Axiom IIa we have “$-$” in the above formula. Further,
$\pi\equiv\measuredangle AMB\equiv\measuredangle AMP+\measuredangle PMB\equiv 2\cdot\measuredangle AMP.$
I.e. $\measuredangle AMP\equiv\pm\tfrac{\pi}{2}$ and therefore $P$ lies on the
perpendicular bisector.
To prove the converse, suppose $P\neq M$ is any point on the perpendicular
bisector to $[AB]$. Then $\measuredangle AMP\equiv\pm\tfrac{\pi}{2}$,
$\measuredangle BMP\equiv\pm\tfrac{\pi}{2}$ and $AM=\nobreak BM$. Therefore
$\triangle AMP\cong\triangle BMP$; in particular $AP=BP$.∎
5.3. Exercise. Let $\ell$ be the perpendicular bisector to the segment $[AB]$
and $X$ be an arbitrary point on the plane.
Show that $AX<BX$ if and only if $X$ and $A$ lie on the same side from $\ell$.
5.4. Exercise. Let $\triangle ABC$ be nondegenerate. Show that $AB>BC$ if and
only if $|\measuredangle BCA|>|\measuredangle ABC|$.
#### Uniqueness of perpendicular
5.5. Theorem. There is one and only one line which passes through a given point
$P$ and is perpendicular to a given line $\ell$.
According to the above theorem, there is a unique point $Q\in\ell$ such that
$(QP)\perp\ell$. This point $Q$ is called _foot point_ of $P$ on $\ell$.
Proof. If $P\in\ell$ then both statements follow from Axiom II.
Existence for $P\not\in\ell$. Let $A$, $B$ be two distinct points of $\ell$.
Choose $P^{\prime}$ so that $AP^{\prime}=\nobreak AP$ and $\measuredangle
P^{\prime}AB\equiv-\measuredangle PAB$. According to Axiom III, $\triangle
AP^{\prime}B\cong\nobreak\triangle APB$. Therefore $AP=AP^{\prime}$ and
$BP=BP^{\prime}$.
According to Theorem 5, $A$ and $B$ lie on perpendicular bisector to
$[PP^{\prime}]$. In particular $(PP^{\prime})\perp(AB)=\ell$.
Uniqueness for $P\not\in\ell$. We will apply the theorem on the perpendicular
bisector (5) a few times. Assume $m\perp\ell$ and $m\ni P$. Then $m$ is the
perpendicular bisector of some segment $[QQ^{\prime}]$ of $\ell$; in
particular, $PQ=PQ^{\prime}$.
Since $\ell$ is perpendicular bisector to $[PP^{\prime}]$, we get
$PQ=P^{\prime}Q$ and $PQ^{\prime}=P^{\prime}Q^{\prime}$. Therefore
$PQ=P^{\prime}Q=PQ^{\prime}=P^{\prime}Q^{\prime}.$
I.e. $P^{\prime}$ lies on the perpendicular bisector to $[QQ^{\prime}]$ which
is $m$. By Axiom I, $m=(PP^{\prime})$.∎
#### Reflection
To find the _reflection_ $P^{\prime}$ through the line $(AB)$ of a point $P$,
one drops a perpendicular from $P$ onto $(AB)$, and continues it to the same
distance on the other side.
According to Theorem 5, $P^{\prime}$ is uniquely determined by $P$.
Note that $P=P^{\prime}$ if and only if $P\in(AB)$.
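In the coordinate model the construction above amounts to a short computation: project $P$ onto $(AB)$ to get the foot point $Q$, then continue by the same distance. The sketch is ours and only illustrates the description.

```python
def reflect(P, A, B):
    """Reflection of P through the line (AB), assuming A != B."""
    dx, dy = B[0] - A[0], B[1] - A[1]
    t = ((P[0] - A[0]) * dx + (P[1] - A[1]) * dy) / (dx * dx + dy * dy)
    Q = (A[0] + t * dx, A[1] + t * dy)          # foot point of P on (AB)
    return (2 * Q[0] - P[0], 2 * Q[1] - P[1])   # continue past Q by the distance PQ

A, B, P = (0.0, 0.0), (4.0, 2.0), (1.0, 3.0)
P1 = reflect(P, A, B)
print(P1)                       # the reflection of P
print(reflect(P1, A, B) == P)   # True: reflecting twice gives the original point
```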
5.6. Proposition. Assume $P^{\prime}$ is a reflection of the point $P$ in the
line $(AB)$. Then $AP^{\prime}=AP$ and if $A\neq P$ then $\measuredangle
BAP^{\prime}\equiv-\measuredangle BAP$.
Proof. Note that if $P\in(AB)$ then $P=\nobreak P^{\prime}$ and by Corollary 2
$\measuredangle BAP=0$ or $\pi$. Hence the statement follows.
If $P\notin(AB)$, then $P^{\prime}\neq P$. By construction $(AB)$ is
perpendicular bisector of $[PP^{\prime}]$. Therefore, according to Theorem 5,
$AP^{\prime}=AP$ and $BP^{\prime}=\nobreak BP$.
In particular, $\triangle ABP^{\prime}\cong\triangle ABP$. Therefore
$\measuredangle BAP^{\prime}\equiv\pm\measuredangle BAP$. Since
$P^{\prime}\neq P$ and $AP^{\prime}=AP$, we get $\measuredangle
BAP^{\prime}\neq\measuredangle BAP$. I.e., we are left with the case
$\measuredangle BAP^{\prime}\equiv-\measuredangle BAP.$
∎
5.7. Corollary. Reflection through a line is a motion of the plane. Moreover,
if $\triangle P^{\prime}Q^{\prime}R^{\prime}$ is the reflection of
$\triangle PQR$ then
$\measuredangle Q^{\prime}P^{\prime}R^{\prime}\equiv-\measuredangle QPR.$
Proof. From the construction it follows that the composition of two
reflections through the same line, say $(AB)$, is the identity map. In
particular reflection is a bijection.
Assume $P^{\prime}$, $Q^{\prime}$ and $R^{\prime}$ denote the reflections of
the points $P$, $Q$ and $R$ through $(AB)$. Let us first show that
$P^{\prime}Q^{\prime}=PQ\ \ \ \text{and}\ \ \ \measuredangle
AP^{\prime}Q^{\prime}\equiv-\measuredangle APQ.$ ➊
Without loss of generality we may assume that the points $P$ and $Q$ are
distinct from $A$ and $B$. By Proposition 5,
$\displaystyle\measuredangle BAP^{\prime}$ $\displaystyle\equiv-\measuredangle
BAP,$ $\displaystyle\measuredangle BAQ^{\prime}$
$\displaystyle\equiv-\measuredangle BAQ,$ $\displaystyle AP^{\prime}$
$\displaystyle=AP,$ $\displaystyle AQ^{\prime}$ $\displaystyle=AQ.$
It follows that $\measuredangle P^{\prime}AQ^{\prime}\equiv-\measuredangle
PAQ$. Therefore $\triangle P^{\prime}AQ^{\prime}\cong\triangle PAQ$ and ➊ ‣ 5
follows.
Repeating the same argument for $P$ and $R$, we get
$\measuredangle AP^{\prime}R^{\prime}\equiv-\measuredangle APR.$
Subtracting the second identity in ➊ ‣ 5, we get
$\measuredangle Q^{\prime}P^{\prime}R^{\prime}\equiv-\measuredangle QPR.$
∎
5.8. Exercise. Show that any motion of the plane can be presented as a
composition of at most three reflections.
Applying the exercise above and Corollary 5, we can divide the motions of the
plane into two types, _direct_ and _indirect_ motions. The motion $m$ is direct
if
$\measuredangle Q^{\prime}P^{\prime}R^{\prime}=\measuredangle QPR$
for any $\triangle PQR$ and $P^{\prime}=m(P)$, $Q^{\prime}=m(Q)$ and
$R^{\prime}=m(R)$; if instead we have
$\measuredangle Q^{\prime}P^{\prime}R^{\prime}\equiv-\measuredangle QPR$
for any $\triangle PQR$ then the motion $m$ is called indirect.
5.9. Lemma. Let $Q$ be the foot point of $P$ on line $\ell$. Then
$PX>PQ$
for any point $X$ on $\ell$ distinct from $Q$.
Proof. If $P\in\ell$ then the result follows since $PQ=0$. Further we assume
that $P\notin\ell$.
Let $P^{\prime}$ be the reflection of $P$ in $\ell$. Note that $Q$ is the
midpoint of $[PP^{\prime}]$ and $\ell$ is perpendicular bisector of
$[PP^{\prime}]$. Therefore
$PX=P^{\prime}X\ \ \text{and}\ \ PQ=P^{\prime}Q=\tfrac{1}{2}{\hskip
0.5pt\cdot\nobreak\hskip 0.5pt}PP^{\prime}$
Note that $\ell$ meets $[PP^{\prime}]$ at the point $Q$ only. Therefore by the
triangle inequality and Exercise 4,
$PX+P^{\prime}X>PP^{\prime}.$
Hence the result follows. ∎
5.10. Exercise. Let $X$ and $Y$ be the reflections of $P$ through the lines
$(AB)$ and $(BC)$ correspondingly. Show that
$\measuredangle XBY\equiv 2{\hskip 0.5pt\cdot\nobreak\hskip
0.5pt}\measuredangle ABC.$
#### Angle bisectors
If $\measuredangle ABX\equiv-\measuredangle CBX$ then we say that line $(BX)$
_bisects angle_ $\angle ABC$, or line $(BX)$ is a _bisector_ of $\angle ABC$.
If $\measuredangle ABX\equiv\pi-\measuredangle CBX$ then the line $(BX)$ is
called _external bisector_ of $\angle ABC$.
Note that bisector and external bisector are uniquely defined by the angle.
Note that if $\measuredangle ABA^{\prime}=\pi$, i.e., if $B$ lies between $A$
and $A^{\prime}$, then bisector of $\angle ABC$ is the external bisector of
$\angle A^{\prime}BC$ and the other way around.
5.11. Exercise. Show that for any angle, its bisector and external bisector
are orthogonal.
5.12. Lemma. Given angle $\angle ABC$ and a point $X$, consider foot points
$Y$ and $Z$ of $X$ on $(AB)$ and $(BC)$. Assume $\measuredangle
ABC\not\equiv\pi,0$.
Then $XY=XZ$ if and only if $X$ lies on the bisector or external bisector of
$\angle ABC$.
Proof. Let $Y^{\prime}$ and $Z^{\prime}$ be the reflections of $X$ through
$(AB)$ and $(BC)$ correspondingly. By Proposition 5,
$XB=Y^{\prime}B=Z^{\prime}B$.
Note that
$XY^{\prime}=2{\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}XY\ \ \text{and}\ \
XZ^{\prime}=2{\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}XZ.$
Applying SSS and then SAS congruence conditions, we get
$XY=XZ\ \Leftrightarrow\ XY^{\prime}=XZ^{\prime}\ \Leftrightarrow\ \triangle BXY^{\prime}\cong\triangle BXZ^{\prime}\ \Leftrightarrow\ \measuredangle XBY^{\prime}\equiv\pm\measuredangle XBZ^{\prime}.$ ➋
According to Proposition 5,
$\measuredangle XBA\equiv-\measuredangle Y^{\prime}BA\quad\text{and}\quad\measuredangle XBC\equiv-\measuredangle Z^{\prime}BC.$
Therefore
$2\cdot\measuredangle XBA\equiv\measuredangle XBY^{\prime}\quad\text{and}\quad 2\cdot\measuredangle XBC\equiv\measuredangle XBZ^{\prime}.$
I.e., we can continue the chain of equivalences ➋ ‣ 5 in the following way:
$\measuredangle XBY^{\prime}\equiv\pm\measuredangle XBZ^{\prime}\ \Leftrightarrow\ 2\cdot\measuredangle XBA\equiv\pm 2\cdot\measuredangle XBC.$
Since $(AB)\neq(BC)$, we have
$2{\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}\measuredangle XBA\not\equiv 2{\hskip
0.5pt\cdot\nobreak\hskip 0.5pt}\measuredangle XBC$
(compare to Exercise 2). Therefore
$XY=XZ\ \Leftrightarrow\ 2{\hskip 0.5pt\cdot\nobreak\hskip
0.5pt}\measuredangle XBA\equiv-2{\hskip 0.5pt\cdot\nobreak\hskip
0.5pt}\measuredangle XBC.$
The last identity means either
$\displaystyle\measuredangle XBA+\measuredangle XBC$ $\displaystyle\equiv 0$
or $\displaystyle\measuredangle XBA+\measuredangle XBC$
$\displaystyle\equiv\pi.$
Hence the result follows. ∎
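The lemma can be tested numerically in the coordinate model: a point on the bisector of $\angle ABC$ can be produced by adding unit vectors along the two sides, and its distances to the lines $(AB)$ and $(BC)$ agree. The sketch below is ours, with hypothetical coordinates.

```python
import math

def dist_to_line(X, P, Q):
    """Distance from X to the line (PQ) in the coordinate model."""
    dx, dy = Q[0] - P[0], Q[1] - P[1]
    return abs(dx * (X[1] - P[1]) - dy * (X[0] - P[0])) / math.hypot(dx, dy)

A, B, C = (4.0, 0.0), (0.0, 0.0), (1.0, 2.0)
u = ((A[0] - B[0]) / math.dist(A, B), (A[1] - B[1]) / math.dist(A, B))   # unit vector along [BA)
w = ((C[0] - B[0]) / math.dist(C, B), (C[1] - B[1]) / math.dist(C, B))   # unit vector along [BC)
X = (B[0] + u[0] + w[0], B[1] + u[1] + w[1])    # a point on the bisector of angle ABC
print(math.isclose(dist_to_line(X, B, A), dist_to_line(X, B, C)))   # True, as in the lemma
```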
#### Circles
Given a positive real number $r$ and a point $O$, the set $\Gamma$ of all
points at distance $r$ from $O$ is called _circle_ with _radius_ $r$ and
_center_ $O$.
We say that a point $P$ lies _inside_ $\Gamma$ if $OP<r$ and if $OP>r$, we say
that $P$ lies _outside_ $\Gamma$.
A segment between two points on $\Gamma$ is called _chord_ of $\Gamma$. A
chord passing through the center is called _diameter_.
5.13. Exercise. Assume two distinct circles $\Gamma$ and $\Gamma^{\prime}$
have a common chord $[AB]$. Show that the line through the centers of $\Gamma$ and
$\Gamma^{\prime}$ is the perpendicular bisector of $[AB]$.
5.14. Lemma. A line and a circle can have at most two points of intersection.
Proof. Assume $A$, $B$ and $C$ are distinct points which lie on a line $\ell$
and a circle $\Gamma$ with center $O$.
Then $OA=OB=OC$; in particular $O$ lies on the perpendicular bisectors $m$ and
$n$ to $[AB]$ and $[BC]$ correspondingly.
Note that the midpoints of $[AB]$ and $[BC]$ are distinct. Therefore $m$ and
$n$ are distinct. The latter contradicts the uniqueness of the perpendicular
(Theorem 5). ∎
5.15. Exercise. Show that two distinct circles can have at most two points of
intersection.
In consequence of the above lemma, a line $\ell$ and a circle $\Gamma$ might
have 2, 1 or 0 points of intersection. In the first case the line is called
_secant line_ , in the second case it is _tangent line_ ; if $P$ is the only
point of intersection of $\ell$ and $\Gamma$, we say that _$\ell$ is tangent
to $\Gamma$ at $P$_.
Similarly, according to Exercise 5, two circles might have 2, 1 or 0 points of
intersection. If $P$ is the only point of intersection of circles $\Gamma$
and $\Gamma^{\prime}$, we say that _$\Gamma$ is tangent to $\Gamma^{\prime}$ at $P$_.
5.16. Lemma. Let $\ell$ be a line and $\Gamma$ be a circle with center $O$.
Assume $P$ is a common point of $\ell$ and $\Gamma$. Then $\ell$ is tangent to
$\Gamma$ at $P$ if and only if $(PO)\perp\ell$.
Proof. Let $Q$ be the foot point of $O$ on $\ell$.
Assume $P\neq Q$. Denote by $P^{\prime}$ the reflection of $P$ through $(OQ)$.
Note that $P^{\prime}\in\ell$ and $(OQ)$ is perpendicular bisector of
$[PP^{\prime}]$. Therefore $OP=OP^{\prime}$. Hence
$P,P^{\prime}\in\Gamma\cap\ell$; i.e., $\ell$ is secant to $\Gamma$.
If $P=Q$ then according to Lemma 5, $OP<OX$ for any point $X\in\ell$ distinct
from $P$. Hence $P$ is the only point in the intersection $\Gamma\cap\ell$;
i.e., $\ell$ is tangent to $\Gamma$ at $P$. ∎
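Numerically the lemma says that a line through a point $P$ of the circle is tangent exactly when its distance to the center equals the radius, i.e. when it is perpendicular to $(OP)$. A coordinate-model sketch (ours):

```python
import math

def dist_to_line(X, P, Q):
    """Distance from X to the line (PQ) in the coordinate model."""
    dx, dy = Q[0] - P[0], Q[1] - P[1]
    return abs(dx * (X[1] - P[1]) - dy * (X[0] - P[0])) / math.hypot(dx, dy)

O, r = (0.0, 0.0), 5.0
P = (3.0, 4.0)                                           # a point of the circle
print(math.isclose(dist_to_line(O, P, (7.0, 1.0)), r))   # True: this line is perpendicular to (OP), hence tangent
print(dist_to_line(O, P, (5.0, 0.0)) < r)                # True: this line through P is secant
```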
5.17. Exercise. Let $\Gamma$ and $\Gamma^{\prime}$ be two circles with centers
at $O$ and $O^{\prime}$ correspondingly. Assume $\Gamma$ and $\Gamma^{\prime}$
intersect at point $P$. Show that $\Gamma$ is tangent to $\Gamma^{\prime}$ if
and only if $O$, $O^{\prime}$ and $P$ lie on one line.
5.18. Exercise. Let $\Gamma$ and $\Gamma^{\prime}$ be two distinct circles
with centers at $O$ and $O^{\prime}$ and radii $r$ and $r^{\prime}$.
1. (a)
Show that $\Gamma$ is tangent to $\Gamma^{\prime}$ if and only if
$OO^{\prime}=r+r^{\prime}\ \ \text{or}\ \ OO^{\prime}=|r-r^{\prime}|.$
2. (b)
Show that $\Gamma$ intersects $\Gamma^{\prime}$ if and only if
$|r-r^{\prime}|\leqslant OO^{\prime}\leqslant r+r^{\prime}.$
### Chapter 6 Parallel lines and similar triangles
#### Parallel lines
In consequence of Axiom I, any two distinct lines $\ell$ and $m$ have either
one point in common or none. In the first case they are _intersecting_ ; in
the second case, $\ell$ and $m$ are said to be _parallel_ (briefly
$\ell\parallel m$); in addition, a line is always regarded as parallel to
itself.
6.1. Proposition. Let $\ell$, $m$ and $n$ be the lines in the plane. Assume
that $n\perp m$ and $m\perp\ell$. Then $\ell\parallel n$.
Proof. Assume the contrary; i.e., $\ell\nparallel n$. Then there is a point, say
$Z$, of intersection of $\ell$ and $n$. Both $\ell$ and $n$ pass through $Z$ and
are perpendicular to $m$; hence, by Theorem 5, $\ell=n$. In
particular $\ell\parallel n$, a contradiction. ∎
6.2. Theorem. Given a point $P$ and a line $\ell$ in the Euclidean plane, there
is a unique line $m$ which passes through $P$ and is parallel to $\ell$.
The above theorem has two parts, existence and uniqueness. In the proof of
uniqueness we will use Axiom IV for the first time.
Proof; existence. Apply Theorem 5 two times, first to construct line $m$
through $P$ which is perpendicular to $\ell$ and second to construct line $n$
through $P$ which is perpendicular to $m$. Then apply Proposition 6.
Uniqueness. If $P\in\ell$ then $m=\ell$ by the definition of parallel lines.
Further we assume $P\notin\ell$.
Let us construct lines $n\ni P$ and $m\ni P$ as in the proof of existence, so
$m\parallel\ell$.
Assume there is yet another line $s\ni P$ which is distinct from $m$ and
parallel to $\ell$. Choose a point $Q\in s$ which lies with $\ell$ on the same
side from $m$. Let $R$ be the foot point of $Q$ on $n$.
Let $D$ be the point of intersection of $n$ and $\ell$. According to
Proposition 6 $(QR)\parallel m$. Therefore $Q$, $R$ and $\ell$ lie on the same
side from $m$. In particular, $R\in[PD)$.
Choose $Z\in[PQ)$ such that
$\frac{PZ}{PQ}=\frac{PD}{PR}.$
Then by Axiom IV, $(ZD)\perp(PD)$; hence, by the uniqueness of the perpendicular (Theorem 5), $(ZD)=\ell$; i.e. $Z\in\ell\cap s$, a contradiction.∎
6.3. Corollary. Assume $\ell$, $m$ and $n$ are lines in the Euclidean plane
such that $\ell\parallel m$ and $m\parallel n$. Then $\ell\parallel n$.
Proof. Assume the contrary; i.e. $\ell\nparallel n$. Then there is a point
$P\in\ell\cap n$. By Theorem 6, $n=\ell$, a contradiction. ∎
Note that from the definition, we have $\ell\parallel m$ if and only if
$m\parallel\nobreak\ell$. Therefore according to the above corollary
“$\parallel$” is an equivalence relation.
6.4. Exercise. Let $k$, $\ell$, $m$ and $n$ be lines in the Euclidean plane.
Assume that $k\perp\ell$ and $m\perp n$. Show that if $k\parallel m$ then
$\ell\parallel n$.
#### Similar triangles
Two triangles $\triangle A^{\prime}B^{\prime}C^{\prime}$ and $\triangle ABC$
are _similar_ (briefly $\triangle
A^{\prime}B^{\prime}C^{\prime}\sim\nobreak\triangle ABC$) if their sides are
proportional, i.e.,
$A^{\prime}B^{\prime}=k{\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}AB,\ \
B^{\prime}C^{\prime}=k{\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}BC\ \ \text{and}\
\ C^{\prime}A^{\prime}=k{\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}CA$ ➊
for some $k>0$ and
$\displaystyle\measuredangle A^{\prime}B^{\prime}C^{\prime}$
$\displaystyle=\pm\measuredangle ABC,$ ➋ $\displaystyle\measuredangle
B^{\prime}C^{\prime}A^{\prime}$ $\displaystyle=\pm\measuredangle BCA,$
$\displaystyle\measuredangle C^{\prime}A^{\prime}B^{\prime}$
$\displaystyle=\pm\measuredangle CAB.$
Remarks.
* $\diamond$
According to 3, in the above three equalities the signs can be assumed to be
the same.
* $\diamond$
If $\triangle A^{\prime}B^{\prime}C^{\prime}\sim\triangle ABC$ with $k=1$ in ➊
‣ 6, then $\triangle A^{\prime}B^{\prime}C^{\prime}\cong\nobreak\triangle
ABC$.
* $\diamond$
Note that “$\sim$” is an _equivalence relation_.
I.e., if $\triangle A^{\prime}B^{\prime}C^{\prime}\sim\triangle ABC$ then
$\triangle ABC\sim\triangle A^{\prime}B^{\prime}C^{\prime}$
and if $\triangle
A^{\prime\prime}B^{\prime\prime}C^{\prime\prime}\sim\triangle
A^{\prime}B^{\prime}C^{\prime}$ and $\triangle
A^{\prime}B^{\prime}C^{\prime}\sim\nobreak\triangle ABC$ then
$\triangle A^{\prime\prime}B^{\prime\prime}C^{\prime\prime}\sim\triangle ABC.$
Using “$\sim$”, Axiom IV can be formulated in the following way.
6.5. Reformulation of Axiom IV. If for two triangles $\triangle ABC$,
$\triangle AB^{\prime}C^{\prime}$ and $k>0$ we have $B^{\prime}\in[AB)$,
$C^{\prime}\in[AC)$, $AB^{\prime}=k{\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}AB$
and $AC^{\prime}=k{\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}AC$ then $\triangle
ABC\sim\triangle AB^{\prime}C^{\prime}$.
In other words, Axiom IV provides a condition which guarantees that two
triangles are similar. Let us formulate three more such conditions.
6.6. Similarity conditions. Two triangles $\triangle ABC$ and $\triangle
A^{\prime}B^{\prime}C^{\prime}$ in the Euclidean plane are similar if one of
the following conditions hold.
(SAS) For some constant $k>0$ we have
$AB=k{\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}A^{\prime}B^{\prime},\ AC=k{\hskip
0.5pt\cdot\nobreak\hskip 0.5pt}A^{\prime}C^{\prime}$ $\ \text{and}\ \
\measuredangle BAC=\pm\measuredangle B^{\prime}A^{\prime}C^{\prime}.$
(AA) The triangle $\triangle A^{\prime}B^{\prime}C^{\prime}$ is nondegenerate
and
$\measuredangle ABC=\pm\measuredangle A^{\prime}B^{\prime}C^{\prime},\
\measuredangle BAC=\pm\measuredangle B^{\prime}A^{\prime}C^{\prime}.$
(SSS) For some constant $k>0$ we have
$AB=k{\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}A^{\prime}B^{\prime},\ AC=k{\hskip
0.5pt\cdot\nobreak\hskip 0.5pt}A^{\prime}C^{\prime},\ CB=k{\hskip
0.5pt\cdot\nobreak\hskip 0.5pt}C^{\prime}B^{\prime}.$
Each of these conditions is proved by applying the Axiom IV with SAS, ASA and
SSS congruence conditions correspondingly (see Axiom III and the conditions 4,
4).
Proof. Set $k=\tfrac{AB}{A^{\prime}B^{\prime}}$. Choose points
$B^{\prime\prime}\in[A^{\prime}B^{\prime})$ and
$C^{\prime\prime}\in[A^{\prime}C^{\prime})$ so that
$A^{\prime}B^{\prime\prime}=k{\hskip 0.5pt\cdot\nobreak\hskip
0.5pt}A^{\prime}B^{\prime}$ and $A^{\prime}C^{\prime\prime}=k{\hskip
0.5pt\cdot\nobreak\hskip 0.5pt}A^{\prime}C^{\prime}$. By Axiom IV, $\triangle
A^{\prime}B^{\prime}C^{\prime}\sim\nobreak\triangle
A^{\prime}B^{\prime\prime}C^{\prime\prime}$.
Applying SAS, ASA or SSS congruence condition, depending on the case, we get
$\triangle A^{\prime}B^{\prime\prime}C^{\prime\prime}\cong\triangle ABC$.
Hence the result follows. ∎
A triangle with all acute angles is called _acute_.
6.7. Exercise. Let $\triangle ABC$ be an acute triangle in the Euclidean
plane. Denote by $A^{\prime}$ the foot point of $A$ on $(BC)$ and by
$B^{\prime}$ the foot point of $B$ on $(AC)$. Prove that $\triangle
A^{\prime}B^{\prime}C\sim\triangle ABC$.
#### Pythagorean theorem
A triangle is called _right_ if one of its angles is right. The side opposite
the right angle is called the _hypotenuse_. The sides adjacent to the right
angle are called _legs_.
6.8. Theorem. Assume $\triangle ABC$ is a right triangle in the Euclidean
plane with the right angle at $C$. Then
$AC^{2}+BC^{2}=AB^{2}.$
Proof. Let $D$ be the foot point of $C$ on $(AB)$.
According to Lemma 5,
$\displaystyle AD$ $\displaystyle<AC<AB$ and $\displaystyle BD$
$\displaystyle<BC<AB.$
Therefore $D$ lies between $A$ and $B$; in particular,
$AD+BD=AB.$ ➌
Note that by AA similarity condition, we have
$\triangle ADC\sim\triangle ACB\sim\triangle CDB.$
In particular
$\frac{AD}{AC}=\frac{AC}{AB}\ \ \text{and}\ \ \frac{BD}{BC}=\frac{BC}{BA}.$ ➍
Let us rewrite the identities ➍ ‣ 6 in another way:
$AC^{2}=AB{\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}AD\ \ \text{and}\ \
BC^{2}=AB{\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}BD.$
Summing up the above two identities and applying ➌ ‣ 6, we get
$AC^{2}+BC^{2}=AB{\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}(AD+BD)=AB^{2}.$
∎
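A quick numerical illustration of the two identities in the proof (a coordinate-model sketch, ours; the right angle at $C$ is built into the chosen coordinates):

```python
import math

A, B, C = (0.0, 0.0), (5.0, 0.0), (1.8, 2.4)    # CA . CB = 0, so the angle at C is right
D = (C[0], 0.0)                                  # foot point of C on (AB)
AD, BD, AB = math.dist(A, D), math.dist(B, D), math.dist(A, B)
AC, BC = math.dist(A, C), math.dist(B, C)
print(math.isclose(AC**2, AB * AD), math.isclose(BC**2, AB * BD))   # identities from the similar triangles
print(math.isclose(AC**2 + BC**2, AB**2))                           # the Pythagorean theorem itself
```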
#### Angles of triangle
6.9. Theorem. In any triangle $\triangle ABC$ in the Euclidean plane, we have
$\measuredangle ABC+\measuredangle BCA+\measuredangle CAB\equiv\pi.$
Proof. First note that if $\triangle ABC$ is degenerate then the equality
follows from Lemma 2. Further we assume that $\triangle ABC$ is nondegenerate.
Set
$\displaystyle\alpha$ $\displaystyle=\measuredangle CAB,$ $\displaystyle\beta$
$\displaystyle=\measuredangle ABC,$ $\displaystyle\gamma$
$\displaystyle=\measuredangle BCA.$
We need to prove that
$\alpha+\beta+\gamma\equiv\pi.$ ➎
Let $K$, $L$, $M$ be the midpoints of the sides $[BC]$, $[CA]$, $[AB]$
respectively. Observe that according to Axiom IV,
$\displaystyle\triangle AML$ $\displaystyle\sim\triangle ABC,$
$\displaystyle\triangle MBK$ $\displaystyle\sim\triangle ABC,$
$\displaystyle\triangle LKC$ $\displaystyle\sim\triangle ABC$
and
$LM=\tfrac{1}{2}{\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}BC,\ \
MK=\tfrac{1}{2}{\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}CA,\ \
KL=\tfrac{1}{2}{\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}AB.$
According to SSS-condition (6), $\triangle KLM\sim\triangle ABC$. Thus,
$\measuredangle MKL=\pm\alpha,\ \ \measuredangle KLM=\pm\beta,\ \
\measuredangle LMK=\pm\gamma.$ ➏
According to 3, the “$+$” or “$-$” sign is to be the same throughout ➏ ‣ 6.
If in ➏ ‣ 6 we have “$+$” then ➎ ‣ 6 follows since
$\beta+\gamma+\alpha\equiv\measuredangle AML+\measuredangle LMK+\measuredangle
KMB\equiv\measuredangle AMB\equiv\pi$
It remains to show that we cannot have “$-$” in ➏ ‣ 6. In this case the same
argument as above gives
$\alpha+\beta-\gamma\equiv\pi.$
The same way we get
$\alpha-\beta+\gamma\equiv\pi$
Adding last two identities we get
$2{\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}\alpha\equiv 0.$
Equivalently $\alpha\equiv\pi$ or $0$; i.e. $\triangle ABC$ is degenerate, a
contradiction. ∎
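A numerical check of the theorem in the coordinate model (ours): the three signed angle measures of a nondegenerate triangle have the same sign and add up to $\pi$ or to $-\pi$.

```python
import math

def ang(A, O, B):
    """Signed angle measure of AOB in the coordinate model."""
    a = math.atan2(A[1] - O[1], A[0] - O[0])
    b = math.atan2(B[1] - O[1], B[0] - O[0])
    return (b - a + math.pi) % (2 * math.pi) - math.pi

A, B, C = (0.0, 0.0), (4.0, 1.0), (1.0, 3.0)
total = ang(C, A, B) + ang(A, B, C) + ang(B, C, A)
print(math.isclose(abs(total), math.pi))   # True: the sum is congruent to pi
```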
6.10. Exercise. Let $\triangle ABC$ be a nondegenerate triangle. Assume there
is a point $D\in[BC]$ such that $(AD)$ bisects $\angle BAC$ and $BA=\nobreak
AD=\nobreak DC$. Find the angles of $\triangle ABC$.
6.11. Exercise. Show that
$|\measuredangle ABC|+|\measuredangle BCA|+|\measuredangle CAB|=\pi$
for any $\triangle ABC$ in the Euclidean plane.
6.12. Corollary. In the Euclidean plane, $(AB)\parallel(CD)$ if and only if
$2{\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}(\measuredangle ABC+\measuredangle
BCD)\equiv 0.$ ➐
Equivalently
$\measuredangle ABC+\measuredangle BCD\equiv 0\ \ \text{or}\ \ \measuredangle
ABC+\measuredangle BCD\equiv\pi;$
in the first case $A$ and $D$ lie on the opposite sides of $(BC)$, in the
second case $A$ and $D$ lie on the same side of $(BC)$.
Proof. If $(AB)\nparallel(CD)$ then there is $Z\in(AB)\cap(CD)$ and $\triangle
BCZ$ is nondegenerate.
According to Theorem 6,
$\measuredangle ZBC+\measuredangle BCZ\equiv\pi-\measuredangle CZB\not\equiv
0\ \text{or}\ \pi.$
Note that $2{\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}\measuredangle ZBC\equiv
2{\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}\measuredangle ABC$ and $2{\hskip
0.5pt\cdot\nobreak\hskip 0.5pt}\measuredangle BCZ\equiv 2{\hskip
0.5pt\cdot\nobreak\hskip 0.5pt}\measuredangle BCD$. Therefore
$2{\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}(\measuredangle ABC+\measuredangle
BCD)\equiv 2{\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}\measuredangle ZBC+2{\hskip
0.5pt\cdot\nobreak\hskip 0.5pt}\measuredangle BCZ\not\equiv 0;$
i.e., ➐ ‣ 6 does not hold.
It remains to note that the identity ➐ ‣ 6 uniquely defines line $(CD)$.
Therefore by Theorem 6, if $(AB)\parallel(CD)$ then equality ➐ ‣ 6 holds.
Applying Proposition 3, we get the last part of the corollary. ∎
#### Parallelograms
A _quadrilateral_ is an ordered quadruple of pairwise distinct points in the
plane. A quadrilateral formed by quadruple $(A,B,C,D)$ will be called
_quadrilateral $ABCD$_.
Given a quadrilateral $ABCD$, the four segments $[AB]$, $[BC]$, $[CD]$ and
$[DA]$ are called _sides of $ABCD$_; the remaining two segments $[AC]$ and
$[BD]$ are called _diagonals of $ABCD$_.
6.13. Exercise. Show that for any quadrilateral $ABCD$ in the Euclidean plane we
have
$\measuredangle ABC+\measuredangle BCD+\measuredangle CDA+\measuredangle DAB\equiv 0.$
A quadrilateral $ABCD$ in the Euclidean plane is called _nondegenerate_ if any
three points from $A,B,C,D$ do not lie on one line.
The nondegenerate quadrilateral $ABCD$ is called _parallelogram_ if
$(AB)\parallel\nobreak(CD)$ and $(BC)\parallel\nobreak(DA)$.
6.14. Lemma. If $ABCD$ is a parallelogram then
1. (a)
$\measuredangle DAB=\measuredangle BCD$;
2. (b)
$AB=CD$.
Proof. Since $(AB)\parallel(CD)$, the points $C$ and $D$ lie on the same side
from $(AB)$. Hence the angles $\angle ABD$ and $\angle ABC$ have the same sign.
Analogously, $\angle CBD$ and $\angle CBA$ have the same sign. Since
$\measuredangle ABC\equiv-\measuredangle CBA$, we get that the angles $\angle DBA$ and $\angle DBC$
have opposite signs; i.e., $A$ and $C$ lie on the opposite sides of $(BD)$.
According to Corollary 6,
$\measuredangle BDC\equiv-\measuredangle ABD\ \ \text{and}\ \ \measuredangle
DBC\equiv\nobreak-\measuredangle ADB.$
By the angle-side-angle condition, $\triangle ABD\cong\triangle CDB$, which implies
both statements of the lemma. ∎
6.15. Exercise. Let $\ell$ and $m$ be perpendicular lines in the Euclidean
plane. Given a point $P$, denote by $P_{\ell}$ and $P_{m}$ the foot points of
$P$ on $\ell$ and $m$ correspondingly.
1. (a)
Show that for any $X\in\ell$ and $Y\in m$ there is a unique point $P$ such that
$P_{\ell}=X$ and $P_{m}=Y$.
2. (b)
Show that $PQ^{2}=P_{\ell}Q_{\ell}^{2}+P_{m}Q_{m}^{2}$ for any pair of points
$P$ and $Q$.
3. (c)
Conclude that the Euclidean plane is isometric to $(\mathbb{R}^{2},d_{2})$ defined
earlier.
6.16. Exercise. Use Exercise 6 to give an alternative proof of Theorem 3
in the Euclidean plane.
I.e., prove that given real numbers $a$, $b$ and $c$ such that
$0<a\leqslant b\leqslant c\leqslant a+b,$
there is a triangle $\triangle ABC$ such that $a=BC$, $b=CA$ and $c=AB$.
### Chapter 7 Triangle geometry
#### Circumcircle and circumcenter
7.1. Theorem. Perpendicular bisectors to the sides of any nondegenerate
triangle in the Euclidean plane intersect at one point.
The point of the intersection of the perpendicular bisectors is called
circumcenter. It is the center of the circumcircle of the triangle; i.e., the
circle which passes through all three vertices of the triangle. The circumcenter
of the triangle is usually denoted by $O$.
Proof. Let $\triangle ABC$ be nondegenerate. Let $\ell$ and $m$ be
perpendicular bisectors to sides $[AB]$ and $[AC]$ correspondingly.
Assume $\ell$ and $m$ intersect; let $O=\ell\cap m$. Since $O\in\ell$, we have
$OA=OB$ and since $O\in m$, we have $OA=OC$. It follows that $OB=\nobreak OC$;
i.e. $O$ lies on the perpendicular bisector to $[BC]$.
It remains to show that $\ell\nparallel m$; assume the contrary. Since
$\ell\perp(AB)$ and $m\perp(AC)$, we get $(AC)\parallel\nobreak(AB)$ (see
Exercise 6). Therefore by Theorem 5, $(AC)=(AB)$; i.e., $\triangle ABC$ is
degenerate, a contradiction. ∎
7.2. Corollary. There is a unique circle which passes through the vertices of a given
nondegenerate triangle in the Euclidean plane.
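In the coordinate model the circumcenter can be computed directly by intersecting two perpendicular bisectors; the closed formula below does exactly that. The sketch is ours and only illustrates the theorem numerically.

```python
import math

def circumcenter(A, B, C):
    """Intersection of the perpendicular bisectors of [AB] and [AC]
    (coordinate-model formula; assumes the triangle is nondegenerate)."""
    ax, ay = A; bx, by = B; cx, cy = C
    d = 2.0 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay) + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx) + (cx**2 + cy**2) * (bx - ax)) / d
    return (ux, uy)

A, B, C = (0.0, 0.0), (4.0, 0.0), (1.0, 3.0)
O = circumcenter(A, B, C)
print(math.isclose(math.dist(O, A), math.dist(O, B)) and
      math.isclose(math.dist(O, B), math.dist(O, C)))   # True: O is equidistant from the vertices
```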
#### Altitudes and orthocenter
An _altitude_ of a triangle is a line through a vertex and perpendicular to
the line containing the opposite side. The term _altitude_ may also be used for
the distance from the vertex to its foot point on the line containing the opposite
side.
7.3. Theorem. The three altitudes of any nondegenerate triangle in the
Euclidean plane intersect in a single point.
The point of intersection of altitudes is called _orthocenter_ ; it is usually
denoted as $H$.
Proof. Let $\triangle ABC$ be nondegenerate.
Consider three lines $\ell\parallel(BC)$ through $A$, $m\parallel\nobreak(CA)$
through $B$ and $n\parallel(AB)$ through $C$. Since $\triangle ABC$ is
nondegenerate, the lines $\ell$, $m$ and $n$ are not parallel. Set
$A^{\prime}=m\cap n$, $B^{\prime}=n\cap\ell$ and $C^{\prime}=\ell\cap m$.
Note that $ABA^{\prime}C$, $BCB^{\prime}A$ and $CBC^{\prime}A$ are
parallelograms. Applying Lemma 6 we get that $\triangle ABC$ is the median
triangle of $\triangle A^{\prime}B^{\prime}C^{\prime}$; i.e., $A$, $B$ and $C$
are the midpoints of $[B^{\prime}C^{\prime}]$, $[C^{\prime}A^{\prime}]$ and
$[A^{\prime}B^{\prime}]$ correspondingly. Since
$(B^{\prime}C^{\prime})\parallel(BC)$, the altitude from $A$ is perpendicular
to $[B^{\prime}C^{\prime}]$ (see Exercise 6) and, from the above, it bisects
$[B^{\prime}C^{\prime}]$.
Thus altitudes of $\triangle ABC$ are also perpendicular bisectors of the
triangle $\triangle A^{\prime}B^{\prime}C^{\prime}$. Applying Theorem 7, we
get that altitudes of $\triangle ABC$ intersect at one point. ∎
7.4. Exercise. Assume $H$ is the orthocenter of an acute triangle $\triangle
ABC$ in the Euclidean plane. Show that $A$ is orthocenter of $\triangle HBC$.
#### Medians and centroid
A median of a triangle is a segment joining a vertex to the midpoint of the
opposing side.
7.5. Theorem. The three medians of any nondegenerate triangle in the Euclidean
plane intersect in a single point. Moreover the point of intersection divides
each median in ratio 2:1.
The point of intersection of medians is called _centroid_ ; it is usually
denoted by $M$.
Proof. Consider a nondegenerate triangle $\triangle ABC$. Let $[AA^{\prime}]$
and $[BB^{\prime}]$ be its medians.
According to Exercise 3, $[AA^{\prime}]$ and $[BB^{\prime}]$ are intersecting.
Let us denote by $M$ the point of intersection.
By side-angle-side condition, $\triangle
B^{\prime}A^{\prime}C\sim\nobreak\triangle ABC$ and
$A^{\prime}B^{\prime}=\tfrac{1}{2}{\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}AB$.
In particular $\measuredangle ABC\equiv\measuredangle B^{\prime}A^{\prime}C$.
Since $A^{\prime}$ lies between $B$ and $C$, we get $\measuredangle
BA^{\prime}B^{\prime}+\measuredangle B^{\prime}A^{\prime}C=\pi$. Therefore
$\measuredangle B^{\prime}A^{\prime}B+\measuredangle A^{\prime}BA=\pi.$
By Corollary 6 $(AB)\parallel\nobreak(A^{\prime}B^{\prime})$.
Note that $B^{\prime}$ and $B$ lie on the opposite sides from $(AA^{\prime})$.
Therefore by Corollary 6 we get
$\measuredangle B^{\prime}A^{\prime}M=\measuredangle BAM.$
In the same way we get
$\measuredangle A^{\prime}B^{\prime}M=\measuredangle ABM.$
By AA condition, $\triangle ABM\sim\triangle A^{\prime}B^{\prime}M$.
Since $A^{\prime}B^{\prime}=\tfrac{1}{2}{\hskip 0.5pt\cdot\nobreak\hskip
0.5pt}AB$, we have
$\frac{A^{\prime}M}{AM}=\frac{B^{\prime}M}{BM}=\frac{1}{2}.$
In particular $M$ divides medians $[AA^{\prime}]$ and $[BB^{\prime}]$ in ratio
2:1.
Note that $M$ is the unique point on $[BB^{\prime}]$ such that
$\frac{B^{\prime}M}{BM}=\frac{1}{2}.$
Repeating the same argument for the vertices $B$ and $C$, we get that the medians
$[CC^{\prime}]$ and $[BB^{\prime}]$ also intersect at $M$.∎
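A numerical illustration in the coordinate model (ours): the point $M=\tfrac{1}{3}(A+B+C)$ lies on the median $[AA^{\prime}]$ and divides it in ratio $2:1$.

```python
import math

A, B, C = (0.0, 0.0), (6.0, 0.0), (2.0, 6.0)
A1 = ((B[0] + C[0]) / 2.0, (B[1] + C[1]) / 2.0)                # midpoint of [BC]
M = ((A[0] + B[0] + C[0]) / 3.0, (A[1] + B[1] + C[1]) / 3.0)   # candidate for the centroid
on_median = math.isclose(math.dist(A, M) + math.dist(M, A1), math.dist(A, A1))
print(on_median and math.isclose(math.dist(A, M), 2.0 * math.dist(M, A1)))   # True
```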
#### Bisector of triangle
7.6. Lemma. Let $\triangle ABC$ be a nondegenerate triangle in the Euclidean
plane. Assume that the bisector of $\angle BAC$ intersects $[BC]$ at the point
$D$. Then
$\frac{AB}{AC}=\frac{DB}{DC}.$ ➊
Proof. Let $\ell$ be the line through $C$ parallel to $(AB)$. Note that
$\ell\nparallel(AD)$; set $E=\ell\cap(AD)$.
Note that $B$ and $C$ lie on the opposite sides of $(AD)$. Therefore by
Corollary 6,
$\measuredangle BAD=\measuredangle CED.$ ➋
Further, note that $\angle ADB$ and $\angle EDC$ are vertical; in particular,
by 2
$\measuredangle ADB=\measuredangle EDC.$
By AA-similarity condition, $\triangle ABD\sim\triangle ECD$. In particular,
$\frac{AB}{EC}=\frac{DB}{DC}.$ ➌
Since $(AD)$ bisects $\angle BAC$, we get $\measuredangle BAD=\measuredangle
DAC$. Together with ➋ ‣ 7, it implies that $\measuredangle CEA=\measuredangle
EAC$. By Theorem 4, $\triangle ACE$ is isosceles; i.e.
$EC=AC.$
The latter together with ➌ ‣ 7 implies ➊ ‣ 7. ∎
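The lemma can be tested numerically in the coordinate model: build the bisector at $A$ from the two unit directions, intersect it with $(BC)$, and compare the two ratios. A sketch (ours):

```python
import math

A, B, C = (0.0, 0.0), (5.0, 0.0), (1.0, 3.0)
ab, ac = math.dist(A, B), math.dist(A, C)
u = ((B[0] - A[0]) / ab, (B[1] - A[1]) / ab)    # unit vector along [AB)
w = ((C[0] - A[0]) / ac, (C[1] - A[1]) / ac)    # unit vector along [AC)
d = (u[0] + w[0], u[1] + w[1])                  # direction of the bisector of angle BAC
e = (C[0] - B[0], C[1] - B[1])                  # direction of (BC)
s = ((B[0] - A[0]) * e[1] - (B[1] - A[1]) * e[0]) / (d[0] * e[1] - d[1] * e[0])
D = (A[0] + s * d[0], A[1] + s * d[1])          # intersection of the bisector with (BC)
print(math.isclose(math.dist(D, B) / math.dist(D, C), ab / ac))   # True, as the lemma claims
```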
7.7. Exercise. Prove an analog of Lemma 7 for the external bisector.
#### Incenter
7.8. Theorem. The angle bisectors of any nondegenerate triangle intersect at
one point.
The point of intersection of bisectors is called _incenter_ ; it is usually
denoted as $I$. The point $I$ lies on the same distance from each side, it is
the center of a circle tangent to each side of triangle. This circle is called
_incircle_ and its radius is called _inradius_ of the triangle.
Proof. Let $\triangle ABC$ be a nondegenerate triangle.
Note that points $B$ and $C$ lie on the opposite sides from the bisector of
$\angle BAC$. Hence this bisector intersects $[BC]$ at a point, say
$A^{\prime}$.
Analogously, there is $B^{\prime}\in[AC]$ such that $(BB^{\prime})$ bisects
$\angle ABC$.
Applying Pasch’s theorem (3), twice for the triangles $\triangle AA^{\prime}C$
and $\triangle BB^{\prime}C$, we get that $[AA^{\prime}]$ and $[BB^{\prime}]$
intersect. Let us denote by $I$ the point of intersection.
Let $X$, $Y$ and $Z$ be the foot points of $I$ on $(BC)$, $(CA)$ and $(AB)$
correspondingly. Applying Lemma 5, we get
$IY=IZ=IX.$
From the same lemma we get that $I$ lies on the bisector or on the external bisector of
$\angle BCA$.
Since the line $(CI)$ intersects $[BB^{\prime}]$, the points $B$ and $B^{\prime}$
lie on opposite sides of $(CI)$. Therefore the angles $\angle ICA=\angle
ICB^{\prime}$ and $\angle ICB$ have opposite signs. I.e., $(CI)$ cannot be the
external bisector of $\angle BCA$. Hence the result follows. ∎
#### More exercises
7.9. Exercise. Assume that bisector at one vertex of a nondegenerate triangle
bisects the opposite side. Show that the triangle is isosceles.
7.10. Exercise. Assume that at one vertex of a nondegenerate triangle bisector
coincides with the altitude. Show that the triangle is isosceles.
7.11. Exercise. Assume sides $[BC]$, $[CA]$ and $[AB]$ of $\triangle ABC$ are
tangent to incircle at $X$, $Y$ and $Z$ correspondingly. Show that
$AY=AZ=\tfrac{1}{2}{\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}(AB+AC-BC).$
By definition, the _orthic triangle_ of a given triangle is formed by the foot
points of its altitudes.
7.12. Exercise. Prove that orthocenter of an acute triangle coincides with
incenter of its orthic triangle.
What should be an analog of this statement for an obtuse triangle?
## Inversive geometry
### Chapter 8 Inscribed angles
#### Angle between a tangent line and a chord
8.1. Theorem. Let $\Gamma$ be a circle with center $O$ in the Euclidean plane.
Assume line $(XQ)$ is tangent to $\Gamma$ at $X$ and $[XY]$ is a chord of
$\Gamma$. Then
$2{\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}\measuredangle
QXY\equiv\measuredangle XOY.$ ➊
Equivalently,
$\measuredangle QXY\equiv\tfrac{1}{2}{\hskip 0.5pt\cdot\nobreak\hskip
0.5pt}\measuredangle XOY\ \ \text{or}\ \ \measuredangle
QXY\equiv\tfrac{1}{2}{\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}\measuredangle
XOY+\pi.$
Proof. Note that $\triangle XOY$ is isosceles. Therefore $\measuredangle
YXO=\measuredangle OYX$.
Applying Theorem 6 to $\triangle XOY$, we get
$\displaystyle\pi$ $\displaystyle\equiv\measuredangle YXO+\measuredangle
OYX+\measuredangle XOY\equiv$ $\displaystyle\equiv 2{\hskip
0.5pt\cdot\nobreak\hskip 0.5pt}\measuredangle YXO+\measuredangle XOY.$
By Lemma 5, $(OX)\perp\nobreak(XQ)$. Therefore
$\measuredangle QXY+\measuredangle YXO\equiv\pm\tfrac{\pi}{2}.$
Therefore
$2{\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}\measuredangle QXY\equiv\pi-2{\hskip
0.5pt\cdot\nobreak\hskip 0.5pt}\measuredangle YXO\equiv\measuredangle XOY.$
∎
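The identity ➊ can be tested numerically; the sketch below takes the unit circle and an arbitrary chord, and only serves as an illustration.

```python
import cmath, math

O = 0j                               # center of the circle Gamma (unit circle)
X = 1 + 0j                           # point of tangency
Q = 1 + 1j                           # (XQ) is the tangent line x = 1 at X
Y = cmath.exp(2.0j)                  # another point of Gamma, so [XY] is a chord

ang_QXY = cmath.phase((Y - X) / (Q - X))   # signed angle at X from [XQ) to [XY)
ang_XOY = cmath.phase((Y - O) / (X - O))   # central angle

d = (2 * ang_QXY - ang_XOY + math.pi) % (2 * math.pi) - math.pi
print(abs(d))                        # ~0: that is, 2*QXY = XOY modulo 2*pi
```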
#### Inscribed angle
We say that a triangle is _inscribed_ in the circle $\Gamma$ if all its vertices
lie on $\Gamma$.
8.2. Theorem. Let $\Gamma$ be a circle with center $O$ in the Euclidean plane,
and $X,Y$ be two distinct points on $\Gamma$. Then $\triangle XPY$ is
inscribed in $\Gamma$ if and only if
$2{\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}\measuredangle
XPY\equiv\measuredangle XOY.$ ➋
Equivalently, if and only if
$\measuredangle XPY\equiv\tfrac{1}{2}{\hskip 0.5pt\cdot\nobreak\hskip
0.5pt}\measuredangle XOY\ \ \text{or}\ \ \measuredangle
XPY\equiv\tfrac{1}{2}{\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}\measuredangle
XOY+\pi.$
Proof. Choose a point $Q$ such that $(PQ)\perp(OP)$. By Lemma 5, $(PQ)$ is
tangent to $\Gamma$.
According to Theorem 8,
$\displaystyle 2{\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}\measuredangle QPX$
$\displaystyle\equiv\measuredangle POX,$ $\displaystyle 2{\hskip
0.5pt\cdot\nobreak\hskip 0.5pt}\measuredangle QPY$
$\displaystyle\equiv\measuredangle POY.$
Subtracting one identity from the other we get ➋ ‣ 8.
To prove the converse, let us argue by contradiction. Assume that ➋ ‣ 8 holds
for some $P\notin\Gamma$. Note that $\measuredangle XOY\neq 0$ and therefore
$\measuredangle XPY$ is distinct from $0$ and $\pi$; i.e., $\triangle PXY$ is
nondegenerate.
If the line $(PY)$ is secant to $\Gamma$, denote by $P^{\prime}$ the point of
intersection of $\Gamma$ and $(PY)$ which is distinct from $Y$. From above we
get
$2{\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}\measuredangle
XP^{\prime}Y\equiv\measuredangle XOY.$
In particular,
$2{\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}\measuredangle XP^{\prime}Y\equiv
2{\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}\measuredangle XPY.$
By Corollary 6, $(P^{\prime}X)\parallel(PX)$. Since $\triangle PXY$ is
nondegenerate, the latter implies $P=P^{\prime}$, which contradicts
$P\notin\Gamma$.
In the remaining case, when $(PY)$ is tangent to $\Gamma$, the proof goes along
the same lines. Namely, by Theorem 8,
$2{\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}\measuredangle
PYX\equiv\measuredangle XOY.$
In particular,
$2{\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}\measuredangle PYX\equiv 2{\hskip
0.5pt\cdot\nobreak\hskip 0.5pt}\measuredangle XPY.$
By Corollary 6, $(PY)\parallel(XY)$; therefore $(PY)=(XY)$. I.e., $\triangle
PXY$ is degenerate, a contradiction. ∎
8.3. Exercise. Let $[XX^{\prime}]$ and $[YY^{\prime}]$ be two chords of circle
$\Gamma$ with center $O$ and radius $r$ in the Euclidean plane. Assume
$(XX^{\prime})$ and $(YY^{\prime})$ intersect at point $P$. Show that
1. (a)
$2{\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}\measuredangle XPY=\measuredangle
XOY+\measuredangle X^{\prime}OY^{\prime}$;
2. (b)
$\triangle PXY\sim\triangle PY^{\prime}X^{\prime}$;
3. (c)
$PX{\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}PX^{\prime}=|OP^{2}-r^{2}|$.
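Part (c) can be checked numerically for a sample configuration; in the sketch below the circle has center at the origin and radius $2$, while the point $P$ and the chord are chosen arbitrarily.

```python
import cmath

O, r = 0j, 2.0
P = 3 + 1j                                   # a point not on the circle

X = O + r * cmath.exp(0.4j)                  # a point of the circle
# second intersection of the line (PX) with the circle:
# solve |X + t*(X - P)| = r for the nonzero root t
d = X - P
t = -2 * (X.real * d.real + X.imag * d.imag) / abs(d) ** 2
X1 = X + t * d

print(abs(P - X) * abs(P - X1))              # PX * PX'
print(abs(abs(P) ** 2 - r ** 2))             # |OP^2 - r^2|, the same value
```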
8.4. Exercise. Assume that the chords $[XX^{\prime}]$, $[YY^{\prime}]$ and
$[ZZ^{\prime}]$ of the circle $\Gamma$ in the Euclidean plane intersect at one
point. Show that
$XY^{\prime}{\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}ZX^{\prime}{\hskip
0.5pt\cdot\nobreak\hskip 0.5pt}YZ^{\prime}=X^{\prime}Y{\hskip
0.5pt\cdot\nobreak\hskip 0.5pt}Z^{\prime}X{\hskip 0.5pt\cdot\nobreak\hskip
0.5pt}Y^{\prime}Z.$
#### Inscribed quadrilateral
A quadrilateral $ABCD$ is called _inscribed_ if all the points $A$, $B$, $C$
and $D$ lie on a circle or a line.
8.5. Theorem. A quadrilateral $ABCD$ in the Euclidean plane is inscribed if
and only if
$2{\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}\measuredangle ABC+2{\hskip
0.5pt\cdot\nobreak\hskip 0.5pt}\measuredangle CDA\equiv 0.$ ➌
Equivalently, if and only if
$\measuredangle ABC+\measuredangle CDA\equiv\pi\ \ \text{or}\ \ \measuredangle
ABC\equiv-\measuredangle CDA.$
Proof. Assume $\triangle ABC$ is degenerate. By Corollary 2,
$2{\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}\measuredangle ABC\equiv 0;$
From the same corollary, we get
$2{\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}\measuredangle CDA\equiv 0$
if and only if $D\in(AB)$; hence the result follows.
It remains to consider the case where $\triangle ABC$ is nondegenerate.
Denote by $\Gamma$ the circumcircle of $\triangle ABC$ and let $O$ be the
center of $\Gamma$. According to Theorem 8,
$2{\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}\measuredangle
ABC\equiv\measuredangle AOB.$ ➍
From the same theorem, $D\in\Gamma$ if and only if
$2{\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}\measuredangle
CDA\equiv\measuredangle BOA.$ ➎
Adding ➍ ‣ 8 and ➎ ‣ 8, we get the result. ∎
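As a quick numerical illustration of ➌, one may take four points on the unit circle and evaluate the doubled angle sum; the sketch below does this for one arbitrary choice of points.

```python
import cmath, math

def ang(A, B, C):
    """Signed angle at the vertex B from [BA) to [BC)."""
    return cmath.phase((C - B) / (A - B))

# four points on the unit circle
A, B, C, D = (cmath.exp(1j * t) for t in (0.3, 1.4, 2.9, 5.0))

s = 2 * (ang(A, B, C) + ang(C, D, A))
print((s + math.pi) % (2 * math.pi) - math.pi)   # ~0 modulo 2*pi
```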
8.6. Exercise. Let $\Gamma$ and $\Gamma^{\prime}$ be two circles which
intersect at two distinct points $A$ and $B$. Assume that $[XY]$ and
$[X^{\prime}Y^{\prime}]$ are chords of $\Gamma$ and $\Gamma^{\prime}$
correspondingly, such that $A$ lies between $X$ and $X^{\prime}$ and $B$ lies
between $Y$ and $Y^{\prime}$. Show that $(XY)\parallel(X^{\prime}Y^{\prime})$.
8.7. Exercise. Let $\triangle ABC$ be a nondegenerate triangle in the
Euclidean plane, and let $A^{\prime}$ and $B^{\prime}$ be the foot points of the altitudes
from $A$ and $B$. Show that $A$, $B$, $A^{\prime}$ and $B^{\prime}$ lie on one
circle.
What is the center of this circle?
#### Arcs
A subset of a circle bounded by two points is called a circle arc.
More precisely, let $\Gamma$ be a circle and $A,B,C\in\Gamma$ be three
distinct points. The subset which includes the points $A$, $C$ as well as all
the points on $\Gamma$ which lie with $B$ on the same side from $(AC)$ is
called _circle arc_ $ABC$.
For the circle arc $ABC$, the points $A$ and $C$ are called _endpoints_. Note
that given two distinct points $A$ and $C$ there are two circle arcs of
$\Gamma$ with the endpoints at $A$ and $C$.
A half-line $[AX)$ is called _tangent_ to arc $ABC$ at $A$ if the line $(AX)$
is tangent to $\Gamma$ and the points $X$ and $B$ lie on the same side from
the line $(AC)$.
If $B$ lies on the line $(AC)$, the arc $ABC$ degenerates to one of the two
following subsets of the line $(AC)$.
* $\diamond$
If $B$ lies between $A$ and $C$ then we define the arc $ABC$ as the segment
$[AC]$. In this case the half-line $[AC)$ is tangent to the arc $ABC$ at $A$.
* $\diamond$
If $B\in(AC)\backslash[AC]$ then we define the arc $ABC$ as the line $(AC)$
without all the points between $A$ and $C$. If we choose points $X$ and
$Y\in(AC)$ such that the points $X$, $A$, $C$ and $Y$ appear in this order
on the line, then the arc $ABC$ is formed by the two half-lines $[AX)$ and
$[CY)$. The half-line $[AX)$ is tangent to the arc $ABC$ at $A$.
* $\diamond$
In addition, any half-line $[AB)$ will be regarded as an arc. This degenerate
arc has only one endpoint $A$, and it is assumed to be tangent to itself at $A$.
The circle arcs together with the degenerate arcs will be called _arcs_.
8.8. Proposition. In the Euclidean plane, a point $D$ lies on the arc $ABC$ if
and only if
$\measuredangle ADC=\measuredangle ABC$
or $D$ coincides with $A$ or $C$.
Proof. Note that if $A$, $B$ and $C$ lie on one line then the statement is
evident.
Let $\Gamma$ be the circle passing through $A$, $B$ and $C$.
Assume $D$ is distinct from $A$ and $C$. According to Theorem 8, $D\in\Gamma$
if and only if
$\measuredangle ADC=\measuredangle ABC\ \ \text{or}\ \ \measuredangle
ADC\equiv\measuredangle ABC+\pi.$
By Exercise 3, if the first identity holds then $B$ and $D$ lie on one side of
$(AC)$; i.e., $D$ belongs to the arc $ABC$. If the second identity holds then
the points $B$ and $D$ lie on opposite sides of $(AC)$; in this case $D$
does not belong to the arc $ABC$. ∎
8.9. Proposition. In the Euclidean plane, a half-line $[AX)$ is tangent to
the arc $ABC$ if and only if
$\measuredangle ABC+\measuredangle CAX\equiv\pi.$
Proof. Note that for a degenerate arc $ABC$ the statement is evident. Further
we assume the arc $ABC$ is nondegenerate.
Applying theorems 8 and 8, we get
$2{\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}\measuredangle ABC+2{\hskip
0.5pt\cdot\nobreak\hskip 0.5pt}\measuredangle CAX\equiv 0.$
Therefore either
$\measuredangle ABC+\measuredangle CAX\equiv\pi\ \ \ \text{or}\ \ \
\measuredangle ABC+\measuredangle CAX\equiv 0.$
Since $[AX)$ is the tangent half-line to the arc $ABC$, $X$ and $B$ lie on the
same side from $(AC)$. Therefore the angles $\angle CAX$, $\angle CAB$ and
$\angle ABC$ have the same sign. In particular $\measuredangle
ABC+\measuredangle CAX\not\equiv 0$; i.e., we are left with the case
$\measuredangle ABC+\measuredangle CAX\equiv\pi.$
∎
8.10. Exercise. Assume that in the Euclidean plane, the half-lines $[AX)$ and
$[AY)$ are tangent to the arcs $ABC$ and $ACB$ correspondingly. Show that
$\angle XAY$ is straight.
8.11. Exercise. Show that in the Euclidean plane, there is a unique arc with
endpoints at the given points $A$ and $C$ which is tangent at $A$ to the given
half-line $[AX)$.
8.12. Exercise. Consider two arcs $AB_{1}C$ and $AB_{2}C$ in the Euclidean
plane. Let $[AX_{1})$ and $[AX_{2})$ be the half-lines tangent to arcs
$AB_{1}C$ and $AB_{2}C$ at $A$ and $[CY_{1})$ and $[CY_{2})$ be the half-lines
tangent to arcs $AB_{1}C$ and $AB_{2}C$ at $C$. Show that
$\measuredangle X_{1}AX_{2}\equiv-\measuredangle Y_{1}CY_{2}.$
### Chapter 9 Inversion
Let $\Omega$ be the circle with center $O$ and radius $r$. The _inversion_ of
a point $P$ with respect to $\Omega$ is the point $P^{\prime}\in[OP)$ such
that
$OP{\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}OP^{\prime}=r^{2}.$
In this case the circle will be called the _circle of inversion_ and its
center is called _center of inversion_.
The inversion of $O$ is undefined. If $P$ is inside $\Omega$ then $P^{\prime}$
is outside and the other way around. Further, $P=P^{\prime}$ if and only if
$P\in\Omega$.
Note that the inversion takes $P^{\prime}$ back to $P$.
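In coordinates the construction is straightforward; the following sketch implements the inversion for points written as complex numbers and checks the two observations above.

```python
def invert(P, O=0j, r=1.0):
    """Inversion of P in the circle with center O and radius r (P != O)."""
    v = P - O
    return O + (r * r / abs(v) ** 2) * v       # a point of the half-line [OP)

P = 0.3 + 0.4j
P1 = invert(P)
print(abs(P) * abs(P1))                        # = r^2 = 1, i.e. OP*OP' = r^2
print(invert(P1))                              # = P: inversion takes P' back to P
```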
9.1. Exercise. Let $P$ be a point inside of a circle $\Omega$ centered at $O$
in the Euclidean plane. Let $T$ be a point where the perpendicular to $(OP)$
from $P$ intersects $\Omega$. Let $P^{\prime}$ be the point where the tangent
to $\Omega$ at $T$ intersects $(OP)$. Show that $P^{\prime}$ is the inversion
of $P$ in the circle $\Omega$.
9.2. Lemma. Let $A^{\prime}$ and $B^{\prime}$ be inversions of $A$ and $B$
with respect to a circle of center $O$ in the Euclidean plane. Then
$\triangle OAB\sim\triangle OB^{\prime}A^{\prime}.$
Moreover,
$\displaystyle\measuredangle AOB$ $\displaystyle\equiv-\measuredangle
B^{\prime}OA^{\prime},$ ➊ $\displaystyle\measuredangle OBA$
$\displaystyle\equiv-\measuredangle OA^{\prime}B^{\prime},$
$\displaystyle\measuredangle BAO$ $\displaystyle\equiv-\measuredangle
A^{\prime}B^{\prime}O.$
Proof. Let $r$ be the radius of the circle of the inversion.
From the definition of inversion, we get
$\displaystyle OA{\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}OA^{\prime}=OB{\hskip
0.5pt\cdot\nobreak\hskip 0.5pt}OB^{\prime}=r^{2}.$
Therefore
$\frac{OA}{OB^{\prime}}=\frac{OB}{OA^{\prime}}.$
Clearly
$\measuredangle AOB=\measuredangle A^{\prime}OB^{\prime}\equiv-\measuredangle
B^{\prime}OA^{\prime}.$ ➋
From SAS, we get
$\triangle OAB\sim\nobreak\triangle OB^{\prime}A^{\prime}.$
Applying Theorem 3 and ➋ ‣ 9, we get ➊ ‣ 9. ∎
9.3. Exercise. Let $A^{\prime}$, $B^{\prime}$, $C^{\prime}$ be the images of
$A$, $B$, $C$ under inversion in the incircle of $\triangle ABC$ in the
Euclidean plane. Show that the incenter of $\triangle ABC$ is the orthocenter
of $\triangle A^{\prime}B^{\prime}C^{\prime}$.
#### Cross-ratio
Although inversion changes the distances and angles, some quantities expressed
in distances or angles do not change after inversion. The following theorem
gives the simplest examples of such quantities.
9.4. Theorem. Let $ABCD$ and $A^{\prime}B^{\prime}C^{\prime}D^{\prime}$ be two
quadrilaterals in the Euclidean plane such that the points $A^{\prime}$,
$B^{\prime}$, $C^{\prime}$ and $D^{\prime}$ are inversions of $A$, $B$, $C$,
and $D$ correspondingly.
Then
1. (a)
$\frac{AB{\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}CD}{BC{\hskip
0.5pt\cdot\nobreak\hskip 0.5pt}DA}=\frac{A^{\prime}B^{\prime}{\hskip
0.5pt\cdot\nobreak\hskip
0.5pt}C^{\prime}D^{\prime}}{B^{\prime}C^{\prime}{\hskip
0.5pt\cdot\nobreak\hskip 0.5pt}D^{\prime}A^{\prime}}.$
2. (b)
$\measuredangle ABC+\measuredangle CDA\equiv-(\measuredangle
A^{\prime}B^{\prime}C^{\prime}+\measuredangle
C^{\prime}D^{\prime}A^{\prime}).$
3. (c)
If quadrilateral $ABCD$ is inscribed then so is
$A^{\prime}B^{\prime}C^{\prime}D^{\prime}$.
Proof. (a). Let $O$ be the center of inversion. According to Lemma 9,
$\triangle AOB\sim\nobreak\triangle B^{\prime}OA^{\prime}$. Therefore
$\displaystyle\frac{AB}{A^{\prime}B^{\prime}}$
$\displaystyle=\frac{OA}{OB^{\prime}}.$ Analogously,
$\displaystyle\frac{BC}{B^{\prime}C^{\prime}}$
$\displaystyle=\frac{OC}{OB^{\prime}},$
$\displaystyle\frac{CD}{C^{\prime}D^{\prime}}$
$\displaystyle=\frac{OC}{OD^{\prime}},$
$\displaystyle\frac{DA}{D^{\prime}A^{\prime}}$
$\displaystyle=\frac{OA}{OD^{\prime}}.$
Therefore
$\displaystyle\frac{AB}{A^{\prime}B^{\prime}}{\hskip 0.5pt\cdot\nobreak\hskip
0.5pt}\frac{B^{\prime}C^{\prime}}{BC}{\hskip 0.5pt\cdot\nobreak\hskip
0.5pt}\frac{CD}{C^{\prime}D^{\prime}}{\hskip 0.5pt\cdot\nobreak\hskip
0.5pt}\frac{D^{\prime}A^{\prime}}{DA}$
$\displaystyle=\frac{OA}{OB^{\prime}}{\hskip 0.5pt\cdot\nobreak\hskip
0.5pt}\frac{OB^{\prime}}{OC}{\hskip 0.5pt\cdot\nobreak\hskip
0.5pt}\frac{OC}{OD^{\prime}}{\hskip 0.5pt\cdot\nobreak\hskip
0.5pt}\frac{OD^{\prime}}{OA}=1.$
Hence (a) follows.
(b). According to Lemma 9,
$\displaystyle\measuredangle ABO$ $\displaystyle\equiv-\measuredangle
B^{\prime}A^{\prime}O,$ $\displaystyle\measuredangle OBC$
$\displaystyle\equiv-\measuredangle OA^{\prime}B^{\prime},$
$\displaystyle\measuredangle CDO$ $\displaystyle\equiv-\measuredangle
D^{\prime}C^{\prime}O,$ $\displaystyle\measuredangle ODA$
$\displaystyle\equiv-\measuredangle OA^{\prime}D^{\prime}.$
Summing these four identities we get
$\displaystyle\measuredangle ABC+\measuredangle CDA$
$\displaystyle\equiv-(\measuredangle
D^{\prime}C^{\prime}B^{\prime}+\measuredangle
B^{\prime}A^{\prime}D^{\prime}).$ Applying Axiom IIb and Exercise 6, we get
$\displaystyle\measuredangle A^{\prime}B^{\prime}C^{\prime}+\measuredangle
C^{\prime}D^{\prime}A^{\prime}$ $\displaystyle\equiv-(\measuredangle
B^{\prime}C^{\prime}D^{\prime}+\measuredangle
D^{\prime}A^{\prime}B^{\prime})\equiv$ $\displaystyle\equiv\measuredangle
D^{\prime}C^{\prime}B^{\prime}+\measuredangle B^{\prime}A^{\prime}D^{\prime}.$
Hence (b) follows.
(c). Follows from (b) and Theorem 8. ∎
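Part (a) is easy to test numerically; the sketch below evaluates the cross-ratio of an arbitrary quadrilateral before and after the inversion in the unit circle.

```python
def invert(P, O=0j, r=1.0):
    v = P - O
    return O + (r * r / abs(v) ** 2) * v

def cross_ratio(A, B, C, D):
    return abs(A - B) * abs(C - D) / (abs(B - C) * abs(D - A))

A, B, C, D = 2 + 1j, -1 + 3j, -2 - 2j, 3 - 1j        # an arbitrary quadrilateral
A1, B1, C1, D1 = (invert(Z) for Z in (A, B, C, D))

print(cross_ratio(A, B, C, D))                       # the two printed values
print(cross_ratio(A1, B1, C1, D1))                   # coincide up to rounding
```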
#### Inversive plane and clines
Let $\Omega$ be a circle with center $O$ and radius $r$. Consider the
inversion in $\Omega$.
Recall that inversion of $O$ is not defined. To deal with this problem it is
useful to add to the plane an extra point; it will be called _the point at
infinity_ and we will denote it by $\infty$. We can assume that $\infty$ is the
inversion of $O$ and the other way around.
The Euclidean plane with an added point at infinity is called the _inversive
plane_.
We will always assume that every line and half-line contains $\infty$.
It will be convenient to use the notion of a _cline_, which means _circle or line_;
for example, we may say _if a cline contains $\infty$ then it is a line_ or
_a cline which does not contain $\infty$ is a circle_.
Note that according to Theorem 7, for any $\triangle ABC$ there is a unique
cline which passes through $A$, $B$ and $C$.
9.5. Theorem. In the inversive plane, inversion of a cline is a cline.
Proof. Denote by $O$ the center of inversion.
Let $\Gamma$ be a cline. Choose three distinct points $A$, $B$ and $C$ on
$\Gamma$. (If $\triangle ABC$ is nondegenerate then $\Gamma$ is the
circumcircle of $\triangle ABC$; if $\triangle ABC$ is degenerate then
$\Gamma$ is the line passing through $A$, $B$ and $C$.)
Denote by $A^{\prime}$, $B^{\prime}$ and $C^{\prime}$ the inversions of $A$,
$B$ and $C$ correspondingly. Let $\Gamma^{\prime}$ be the cline which passes
through $A^{\prime}$, $B^{\prime}$ and $C^{\prime}$. According to 7,
$\Gamma^{\prime}$ is well defined.
Assume $D$ is a point of inversive plane which is distinct from $A$, $C$, $O$
and $\infty$. According to Theorem 8, $D\in\Gamma$ if and only if
$2{\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}\measuredangle CDA+2{\hskip
0.5pt\cdot\nobreak\hskip 0.5pt}\measuredangle ABC\equiv 0.$
According to Theorem 9b, the latter is equivalent to
$2{\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}\measuredangle
C^{\prime}D^{\prime}A^{\prime}+2{\hskip 0.5pt\cdot\nobreak\hskip
0.5pt}\measuredangle A^{\prime}B^{\prime}C^{\prime}\equiv 0.$
Applying Theorem 8 again, we get that the latter is equivalent to
$D^{\prime}\in\Gamma^{\prime}$. Hence the result follows.
It remains to prove that $O\in\Gamma\ \Leftrightarrow\
\infty\in\Gamma^{\prime}$ and $\infty\in\Gamma\ \Leftrightarrow\
O\in\Gamma^{\prime}$. Since $\Gamma$ is the inversion of $\Gamma^{\prime}$, it is
sufficient to prove only
$\infty\in\Gamma\ \Leftrightarrow\ O\in\Gamma^{\prime}.$
Since $\infty\in\Gamma$, we get that $\Gamma$ is a line. Therefore for any
$\varepsilon>0$, the line $\Gamma$ contains a point $P$ with
$OP>r^{2}/\varepsilon$. For the inversion $P^{\prime}\in\Gamma^{\prime}$ of
$P$, we have $OP^{\prime}=r^{2}/OP<\varepsilon$. I.e., the cline
$\Gamma^{\prime}$ contains points arbitrarily close to $O$. It follows that
$O\in\Gamma^{\prime}$. ∎
9.6. Exercise. Assume that the circle $\Gamma^{\prime}$ is the inversion of the
circle $\Gamma$ in the Euclidean plane. Denote by $Q$ the center of $\Gamma$
and by $Q^{\prime}$ the inversion of $Q$.
Show that $Q^{\prime}$ is not the center of $\Gamma^{\prime}$.
9.7. Exercise. Show that for any pair of tangent circles in the inversive
plane there is an inversion which sends them to a pair of parallel lines.
9.8. Theorem. Consider inversion with respect to circle $\Omega$ with center
$O$ in the inversive plane. Then
1. (a)
A line passing through $O$ is inverted into itself.
2. (b)
A line not passing through $O$ is inverted into a circle which passes through $O$,
and the other way around.
3. (c)
A circle not passing through $O$ is inverted into a circle not passing through
$O$.
Proof. In the proof we use Theorem 9 without mentioning it.
(a). Note that if a line passes through $O$, it contains both $\infty$ and $O$.
Therefore its inversion also contains $\infty$ and $O$. In particular, the image is
a line passing through $O$.
(b). Since any line $\ell$ passes through $\infty$, its image $\ell^{\prime}$
has to contain $O$. If the line does not contain $O$, then
$\ell^{\prime}\not\ni\infty$; therefore $\ell^{\prime}$ is a circle which passes
through $O$.
(c). If a circle $\Gamma$ does not contain $O$, then its image $\Gamma^{\prime}$
does not contain $\infty$. Therefore $\Gamma^{\prime}$ is a circle. Since
$\Gamma\not\ni\infty$, we get $\Gamma^{\prime}\not\ni O$. Hence the result
follows. ∎
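Part (b) can be observed numerically. In the sketch below the points of the line $x=2$ are inverted in the unit circle; one can check that the images lie on the circle with center $\tfrac14$ and radius $\tfrac14$, which passes through $O$.

```python
def invert(P, O=0j, r=1.0):
    v = P - O
    return O + (r * r / abs(v) ** 2) * v

c = 0.25 + 0j                                # expected center of the image circle
for y in (-5.0, -1.0, 0.0, 2.0, 10.0):
    Z = invert(2 + y * 1j)                   # invert points of the line x = 2
    print(abs(Z - c))                        # = 0.25 for every sample point
```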
#### Ptolemy’s identity
Here is one application of inversion, which we include as an illustration
only.
9.9. Theorem. Let $ABCD$ be an inscribed quadrilateral in the Euclidean plane.
Assume that the points $A$, $B$, $C$ and $D$ appear on the cline in the same
order. Then
$AB{\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}CD+BC{\hskip
0.5pt\cdot\nobreak\hskip 0.5pt}DA=AC{\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}BD.$
Proof. Assume the points $A,B,C,D$ lie on one line in this order.
Set $x=\nobreak AB$, $y=BC$, $z=\nobreak CD$. Note that
$x{\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}z+y{\hskip 0.5pt\cdot\nobreak\hskip
0.5pt}(x+y+z)=(x+y){\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}(y+z).$
Since $AC=\nobreak x+y$, $BD=y+z$ and $DA=\nobreak x+y+z$, this proves the
identity.
It remains to consider the case when quadrilateral $ABCD$ is inscribed in a
circle, say $\Gamma$.
The identity can be rewritten as
$\frac{AB{\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}DC}{BD{\hskip
0.5pt\cdot\nobreak\hskip 0.5pt}CA}+\frac{BC{\hskip 0.5pt\cdot\nobreak\hskip
0.5pt}AD}{CA{\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}DB}=1.$
On the left hand side we have two cross-ratios. According to Theorem 9(a), the
left hand side does not change if we apply an inversion to each point.
Consider an inversion in a circle centered at a point $O$ which lies on
$\Gamma$ between $A$ and $D$. By Theorem 9, this inversion maps $\Gamma$ to a
line. This reduces the problem to the case when $A$, $B$, $C$ and $D$ lie on
one line, which was already considered. ∎
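The identity can also be checked numerically for an arbitrary inscribed quadrilateral; for instance:

```python
import cmath

# four points on the circle of radius 2 centered at 1 + i, in this cyclic order
A, B, C, D = (1 + 1j + 2 * cmath.exp(1j * t) for t in (0.2, 1.1, 2.5, 4.0))

lhs = abs(A - B) * abs(C - D) + abs(B - C) * abs(D - A)
rhs = abs(A - C) * abs(B - D)
print(lhs, rhs)                      # equal up to rounding: AB*CD + BC*DA = AC*BD
```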
#### Perpendicular circles
Assume two circles $\Gamma$ and $\Delta$ intersect at two points, say $X$ and
$X^{\prime}$. Let $\ell$ and $m$ be the tangent lines at $X$ to $\Gamma$ and
$\Delta$ correspondingly. Analogously, let $\ell^{\prime}$ and $m^{\prime}$ be the
tangent lines at $X^{\prime}$ to $\Gamma$ and $\Delta$.
From Exercise 8, we get that $\ell\perp m$ if and only if $\ell^{\prime}\perp
m^{\prime}$.
We say that the circle $\Gamma$ is _perpendicular_ to the circle $\Delta$ (briefly
$\Gamma\perp\Delta$) if they intersect and the lines tangent to the circles at
one point (and therefore at both points) of intersection are perpendicular.
Similarly, we say that the circle $\Gamma$ is perpendicular to a line $\ell$
(briefly $\Gamma\perp\ell$) if $\Gamma\cap\ell\neq\varnothing$ and $\ell$ is
perpendicular to the tangent line to $\Gamma$ at one point (and therefore at
both points) of intersection. According to Lemma 5, this happens only if the
line $\ell$ passes through the center of $\Gamma$.
Now we can talk about perpendicular clines.
9.10. Theorem. Assume $\Gamma$ and $\Omega$ are distinct circles in the
Euclidean plane. Then $\Omega\perp\Gamma$ if and only if the circle $\Gamma$
coincides with its inversion in $\Omega$.
Proof. Denote by $\Gamma^{\prime}$ the inversion of $\Gamma$.
($\Rightarrow$) Let $O$ be the center of $\Omega$ and $Q$ be the center of
$\Gamma$. Denote by $A$ and $B$ the points of intersections of $\Gamma$ and
$\Omega$. According to Lemma 5, $\Gamma\perp\Omega$ if and only if $(OA)$ and
$(OB)$ are tangent to $\Gamma$.
Note that $\Gamma^{\prime}$ is also tangent to $(OA)$ and $(OB)$ at $A$ and $B$
correspondingly. It follows that $A$ and $B$ are the foot points of the center
of $\Gamma^{\prime}$ on $(OA)$ and $(OB)$. Therefore both $\Gamma^{\prime}$
and $\Gamma$ have the center $Q$. Finally, $\Gamma^{\prime}=\Gamma$, since
both circles pass through $A$.
($\Leftarrow$) Assume $\Gamma=\Gamma^{\prime}$.
Since $\Gamma\neq\Omega$, there is a point $P$ which lies on $\Gamma$, but not
on $\Omega$. Let $P^{\prime}$ be the inversion of $P$ in $\Omega$. Since
$\Gamma=\Gamma^{\prime}$, we have $P^{\prime}\in\Gamma$, in particular the
half-line $[OP)$ intersects $\Gamma$ at two points; i.e., $O$ lies outside of
$\Gamma$.
As $\Gamma$ has points inside and outside $\Omega$, the circles $\Gamma$ and
$\Omega$ intersect; the latter follows from Exercise 5(b). Let $A$ be a point
of their intersection; we need to show that $A$ is the only intersection point
of $(OA)$ and $\Gamma$. Assume $X$ is another point of intersection. Since
$O$ is outside of $\Gamma$, the point $X$ lies on the half-line $[OA)$.
Denote by $X^{\prime}$ the inversion of $X$ in $\Omega$. Clearly the three
points $X,X^{\prime},A$ lie on $\Gamma$ and $(OA)$. The latter contradicts
Lemma 5. ∎
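The theorem can also be observed numerically. In the sketch below $\Omega$ is the unit circle and the circle $\Gamma$ is chosen so that $OQ^{2}=1+s^{2}$; by the Pythagorean theorem applied to the configuration in the proof, this makes $\Gamma\perp\Omega$, and indeed the inversion of each sample point of $\Gamma$ lands back on $\Gamma$.

```python
import cmath, math

def invert(P, O=0j, r=1.0):
    v = P - O
    return O + (r * r / abs(v) ** 2) * v

s = 1.0                                      # radius of Gamma
Q = math.sqrt(1 + s * s) + 0j                # center of Gamma: OQ^2 = 1 + s^2

for t in (0.0, 1.0, 2.2, 4.5):               # sample points of Gamma
    P = Q + s * cmath.exp(1j * t)
    print(abs(invert(P) - Q))                # = s, so the image of P lies on Gamma
```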
9.11. Corollary. A cline in the inversive plane which is distinct from the
circle of inversion inverts to itself if and only if it is perpendicular to
the circle of inversion.
Proof. By Theorem 9, it is sufficient to consider the case when the cline is a
line. The latter follows from Theorem 9. ∎
9.12. Corollary. Let $P$ and $P^{\prime}$ be two distinct points in the
Euclidean plane such that $P^{\prime}$ is the inversion of $P$ in the circle
$\Omega$. Assume that a cline $\Gamma$ passes through $P$ and $P^{\prime}$. Then
$\Gamma\perp\Omega$.
Proof. Without loss of generality we may assume that $P$ is inside and
$P^{\prime}$ is outside $\Omega$. It follows that $\Gamma$ intersects
$\Omega$; denote by $A$ a point of intersection.
Denote by $\Gamma^{\prime}$ the inversion of $\Gamma$. Since $A$ is the inversion
of itself, the points $A$, $P$ and $P^{\prime}$ lie on $\Gamma^{\prime}$ as well as on $\Gamma$;
therefore $\Gamma^{\prime}=\Gamma$. By Theorem 9, $\Gamma\perp\Omega$. ∎
9.13. Corollary. Let $P$ and $Q$ be two distinct points inside the circle
$\Omega$ in the Euclidean plane. Then there is a unique cline $\Gamma$
perpendicular to $\Omega$ which passes through $P$ and $Q$.
Proof. Let $P^{\prime}$ be the inversion of the point $P$ in the circle $\Omega$.
According to Corollary 9, a cline passing through $P$ and $Q$ is
perpendicular to $\Omega$ if and only if it passes through $P^{\prime}$.
Note that $P^{\prime}$ lies outside of $\Omega$. Therefore the points $P$,
$P^{\prime}$ and $Q$ are distinct.
According to Corollary 7, there is unique cline passing through $P$, $Q$ and
$P^{\prime}$. Hence the result follows. ∎
9.14. Exercise. Let $\Omega_{1}$ and $\Omega_{2}$ be two distinct circles in
the Euclidean plane. Assume that the point $P$ lies neither on $\Omega_{1}$
nor on $\Omega_{2}$. Show that there is a unique cline passing through $P$ which
is perpendicular to $\Omega_{1}$ and $\Omega_{2}$.
9.15. Exercise. Let $P$, $Q$, $P^{\prime}$ and $Q^{\prime}$ be points in the
Euclidean plane. Assume $P^{\prime}$ and $Q^{\prime}$ are inversions of $P$
and $Q$ correspondingly. Show that the quadrilateral $PQP^{\prime}Q^{\prime}$
is inscribed.
9.16. Exercise. Let $\Omega_{1}$ and $\Omega_{2}$ be two perpendicular circles
with centers at $O_{1}$ and $O_{2}$ correspondingly. Show that the inversion
of $O_{1}$ in $\Omega_{2}$ coincides with the inversion of $O_{2}$ in
$\Omega_{1}$.
#### Angles after inversion
9.17. Proposition. In the inversive plane, the inversion of an arc is an arc.
Proof. Consider four distinct points $A$, $B$, $C$ and $D$; let $A^{\prime}$,
$B^{\prime}$, $C^{\prime}$ and $D^{\prime}$ be their inverses. We need to show
that $D$ lies on the arc $ABC$ if and only if $D^{\prime}$ lies on the arc
$A^{\prime}B^{\prime}C^{\prime}$. According to Proposition 8, the latter is
equivalent to the following:
$\measuredangle ADC=\measuredangle ABC\ \ \Leftrightarrow\ \ \measuredangle
A^{\prime}D^{\prime}C^{\prime}=\measuredangle A^{\prime}B^{\prime}C^{\prime}.$
This equivalence follows from Theorem 9(b). ∎
The following theorem roughly says that the angle between arcs changes sign
after the inversion. A deeper understanding of this theorem comes from complex
analysis.
9.18. Theorem. Let $AB_{1}C_{1}$, $AB_{2}C_{2}$ be two arcs in the inversive
plane and $A^{\prime}B_{1}^{\prime}C_{1}^{\prime}$,
$A^{\prime}B_{2}^{\prime}C_{2}^{\prime}$ be their inversions. Let $[AX_{1})$
and $[AX_{2})$ be the half-lines tangent to $AB_{1}C_{1}$ and $AB_{2}C_{2}$ at
$A$ and $[A^{\prime}Y_{1})$ and $[A^{\prime}Y_{2})$ be the half-lines tangent
to $A^{\prime}B_{1}^{\prime}C_{1}^{\prime}$ and
$A^{\prime}B_{2}^{\prime}C_{2}^{\prime}$ at $A^{\prime}$. Then
$\measuredangle X_{1}AX_{2}\equiv-\measuredangle Y_{1}A^{\prime}Y_{2}.$
Proof. Applying Proposition 8,
$\displaystyle\measuredangle X_{1}AX_{2}$ $\displaystyle\equiv\measuredangle
X_{1}AC_{1}+\measuredangle C_{1}AC_{2}+\measuredangle C_{2}AX_{2}\equiv$
$\displaystyle\equiv(\pi-\measuredangle C_{1}B_{1}A)+\measuredangle
C_{1}AC_{2}+(\pi-\measuredangle AB_{2}C_{2})\equiv$
$\displaystyle\equiv-(\measuredangle C_{1}B_{1}A+\measuredangle
AB_{2}C_{2}+\measuredangle C_{2}AC_{1})\equiv$
$\displaystyle\equiv-(\measuredangle C_{1}B_{1}A+\measuredangle
AB_{2}C_{1})-(\measuredangle C_{1}B_{2}C_{2}+\measuredangle C_{2}AC_{1}).$ The
same way we get $\displaystyle\measuredangle Y_{1}A^{\prime}Y_{2}$
$\displaystyle\equiv-(\measuredangle
C_{1}^{\prime}B_{1}^{\prime}A^{\prime}+\measuredangle
A^{\prime}B_{2}^{\prime}C_{1}^{\prime})-(\measuredangle
C_{1}^{\prime}B_{2}^{\prime}C_{2}^{\prime}+\measuredangle
C_{2}^{\prime}A^{\prime}C_{1}^{\prime}).$
By Theorem 9(b),
$\displaystyle\measuredangle C_{1}B_{1}A+\measuredangle AB_{2}C_{1}$
$\displaystyle\equiv-(\measuredangle
C_{1}^{\prime}B_{1}^{\prime}A^{\prime}+\measuredangle
A^{\prime}B_{2}^{\prime}C_{1}^{\prime}),$ $\displaystyle\measuredangle
C_{1}B_{2}C_{2}+\measuredangle C_{2}AC_{1}$
$\displaystyle\equiv-(\measuredangle
C_{1}^{\prime}B_{2}^{\prime}C_{2}^{\prime}+\measuredangle
C_{2}^{\prime}A^{\prime}C_{1}^{\prime}).$
Hence the result follows.∎
9.19. Corollary. Let $P^{\prime}$, $Q^{\prime}$ and $\Gamma^{\prime}$ be the
inversions of the points $P$, $Q$ and the circle $\Gamma$ in a circle $\Omega$ of the
Euclidean plane. Assume $P$ is the inversion of $Q$ in $\Gamma$. Then $P^{\prime}$
is the inversion of $Q^{\prime}$ in $\Gamma^{\prime}$.
Proof. If $P=Q$ then $P^{\prime}=Q^{\prime}\in\Gamma^{\prime}$; therefore
$P^{\prime}$ is the inversion of $Q^{\prime}$ in $\Gamma^{\prime}$.
It remains to consider the case $P\neq Q$. Let $\Delta_{1}$ and $\Delta_{2}$
be two distinct circles which intersect at $P$ and $Q$. According to Corollary
9, $\Delta_{1}\perp\Gamma$ and $\Delta_{2}\perp\Gamma$.
Denote by $\Delta_{1}^{\prime}$ and $\Delta_{2}^{\prime}$ the inversions of
$\Delta_{1}$ and $\Delta_{2}$ in $\Omega$. Clearly $\Delta_{1}^{\prime}$ and
$\Delta_{2}^{\prime}$ intersect at $P^{\prime}$ and $Q^{\prime}$.
From Theorem 9, the latter is equivalent to
$\Delta_{1}^{\prime}\perp\Gamma^{\prime}$ and
$\Delta_{2}^{\prime}\perp\nobreak\Gamma^{\prime}$. By Corollary 9, the latter
implies that $P^{\prime}$ is the inversion of $Q^{\prime}$ in $\Gamma^{\prime}$. ∎
## Non-Euclidean geometry
### Chapter 10 Absolute plane
Let us remove Axiom IV from the Definition 2. This way we define a new object
called _absolute plane_ or _neutral plane_. (In the absolute plane, the Axiom
IV may or may not hold.)
Clearly any theorem in absolute geometry holds in Euclidean geometry. In other
words, the Euclidean plane is an example of an absolute plane. In the next chapter we
will show that there are other examples of absolute planes distinct from the
Euclidean plane.
Many of the theorems in Euclidean geometry which we discussed still hold in absolute
geometry.
In these lectures, Axiom IV was used for the first time in the proof of the
uniqueness of the parallel line in Theorem 6. Therefore all the statements before
Theorem 6 also hold in the absolute plane.
This makes all the discussed results about half-planes, signs of angles,
congruence conditions, perpendicular lines and reflections true in the absolute
plane. If in the formulation of a statement above you do not see the words
“Euclidean plane” or “inversive plane”, it means that the statement holds in the
absolute plane and the same proof works.
Let us give an example of a theorem in absolute geometry which admits a shorter
proof in Euclidean geometry.
10.1. Theorem. Assume that triangles $\triangle ABC$ and $\triangle
A^{\prime}B^{\prime}C^{\prime}$ have right angles at $C$ and $C^{\prime}$
correspondingly, $AB=A^{\prime}B^{\prime}$ and $AC=\nobreak
A^{\prime}C^{\prime}$. Then $\triangle ABC\cong\triangle
A^{\prime}B^{\prime}C^{\prime}$.
Euclidean proof. By Pythagorean theorem $BC=B^{\prime}C^{\prime}$. Then the
statement follows from SSS congruence condition. ∎
Note that the proof of the Pythagorean theorem used properties of similar
triangles, which in turn used Axiom IV. Hence the above proof does not work
in the absolute plane.
Absolute proof. Denote by $D$ the reflection of $A$ through $(BC)$ and by
$D^{\prime}$ the reflection of $A^{\prime}$ through $(B^{\prime}C^{\prime})$.
Note that
$AD=2{\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}AC=2{\hskip
0.5pt\cdot\nobreak\hskip 0.5pt}A^{\prime}C^{\prime}=A^{\prime}D^{\prime},$
$BD=BA=B^{\prime}A^{\prime}=B^{\prime}D^{\prime}.$
By SSS, we get $\triangle ABD\cong\triangle A^{\prime}B^{\prime}D^{\prime}$.
The theorem follows since $C$ is the midpoint of $[AD]$ and $C^{\prime}$ is
the midpoint of $[A^{\prime}D^{\prime}]$. ∎
10.2. Exercise. Give a proof of Exercise 7 which works in the absolute plane.
#### Two angles of triangle
In this section we will prove a weaker form of Theorem 6 which holds in
absolute plane.
10.3. Proposition. Let $\triangle ABC$ be a nondegenerate triangle in the
absolute plane. Then
$|\measuredangle CAB|+|\measuredangle ABC|<\pi.$
Note that in the Euclidean plane the statement follows immediately from Theorems 6
and 3. In absolute geometry we need to work harder.
Proof. Without loss of generality we may assume that $\angle CAB$ and $\angle
ABC$ are positive.
Let $M$ be the midpoint of $[AB]$. Choose $C^{\prime}\in(CM)$ distinct from $C$
so that $C^{\prime}M=CM$.
Note that the angles $\angle AMC$ and $\angle BMC^{\prime}$ are vertical; in
particular
$\measuredangle AMC=\measuredangle BMC^{\prime}.$
By construction $AM=\nobreak BM$ and $CM=\nobreak C^{\prime}M$. Therefore
$\triangle AMC\cong\triangle BMC^{\prime}$ and according to 3, we get
$\angle CAB=\angle C^{\prime}BA.$
In particular,
$\displaystyle\measuredangle C^{\prime}BC$ $\displaystyle\equiv\measuredangle
C^{\prime}BA+\measuredangle ABC\equiv$ $\displaystyle\equiv\measuredangle
CAB+\measuredangle ABC.$
Finally note that $C^{\prime}$ and $A$ lie on the same side from $(CB)$.
Therefore the angles $\angle CAB$, $\angle ABC$ and $\angle
C^{\prime}BC$ are positive. By Exercise 3, the result follows. ∎
10.4. Exercise. Assume $A$, $B$, $C$ and $D$ are points in the absolute plane such
that
$2{\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}\measuredangle ABC+2{\hskip
0.5pt\cdot\nobreak\hskip 0.5pt}\measuredangle BCD\equiv 0.$
Show that $(AB)\parallel(CD)$.
Note that one cannot extract the solution of the above exercise from the
proof of Corollary 6.
10.5. Exercise. Prove _side-angle-angle congruence condition_ in absolute
plane.
In other words, let $\triangle ABC$ and $\triangle
A^{\prime}B^{\prime}C^{\prime}$ be two triangles in the absolute plane. Show that
$\triangle ABC\cong\triangle A^{\prime}B^{\prime}C^{\prime}$ if
$AB=A^{\prime}B^{\prime},\ \ \measuredangle ABC=\pm\measuredangle
A^{\prime}B^{\prime}C^{\prime}\ \ \text{and}\ \ \measuredangle
BCA=\pm\measuredangle B^{\prime}C^{\prime}A^{\prime}.$
Note that the Theorem 6 can not be applied in the above exercise.
10.6. Exercise. Assume that point $D$ lies between the vertices $A$ and $B$ of
triangle $\triangle ABC$ in the absolute plane. Show that
$CD<CA\ \ \text{or}\ \ CD<CB.$
#### Three angles of triangle
10.7. Proposition. Let $\triangle ABC$ and $\triangle A^{\prime}B^{\prime}C^{\prime}$
be two triangles in the absolute plane such that $AC=A^{\prime}C^{\prime}$ and
$BC=B^{\prime}C^{\prime}$. Then
$AB<\nobreak A^{\prime}B^{\prime}\ \ \text{if and only if}\ \ |\measuredangle
ACB|<|\measuredangle A^{\prime}C^{\prime}B^{\prime}|.$
Proof. Without loss of generality, we may assume that $A=A^{\prime}$ and
$C=C^{\prime}$ and $\measuredangle ACB,\measuredangle ACB^{\prime}\geqslant
0$. In this case we need to show that
$AB<AB^{\prime}\ \ \Leftrightarrow\ \ \measuredangle ACB<\measuredangle
ACB^{\prime}.$
Choose point $X$ so that
$\measuredangle ACX=\tfrac{1}{2}{\hskip 0.5pt\cdot\nobreak\hskip
0.5pt}(\measuredangle ACB+\measuredangle ACB^{\prime}).$
Note that
* $\diamond$
$(CX)$ bisects $\angle BCB^{\prime}$
* $\diamond$
$(CX)$ is the perpendicular bisector of $[BB^{\prime}]$.
* $\diamond$
$A$ and $B$ lie on the same side from $(CX)$ if and only if
$\measuredangle ACB<\measuredangle ACB^{\prime}.$
From Exercise 5, $A$ and $B$ lie on the same side from $(CX)$ if and only if
$AB<AB^{\prime}$. Hence the result follows. ∎
10.8. Theorem. Let $\triangle ABC$ be a triangle in the absolute plane. Then
$|\measuredangle ABC|+|\measuredangle BCA|+|\measuredangle CAB|\leqslant\pi.$
The following proof is due to Legendre [6]; earlier proofs were due to
Saccheri [9] and Lambert [5].
Proof. Let $\triangle ABC$ be the given triangle. Set
$\displaystyle a$ $\displaystyle=BC,$ $\displaystyle b$ $\displaystyle=CA,$
$\displaystyle c$ $\displaystyle=AB,$ $\displaystyle\alpha$
$\displaystyle=\measuredangle CAB$ $\displaystyle\beta$
$\displaystyle=\measuredangle ABC$ $\displaystyle\gamma$
$\displaystyle=\measuredangle BCA.$
Without loss of generality, we may assume that $\alpha,\beta,\gamma\geqslant
0$.
Fix a positive integer $n$. Consider points $A_{0}$, $A_{1},\dots,A_{n}$ on
the half-line $[BA)$ so that $BA_{i}=i{\hskip 0.5pt\cdot\nobreak\hskip
0.5pt}c$ for each $i$. (In particular, $A_{0}=B$ and $A_{1}=A$.) Let us
construct the points $C_{1}$, $C_{2},\dots,C_{n}$, so that $\measuredangle
A_{i}A_{i-1}C_{i}=\nobreak\beta$ and $A_{i-1}C_{i}=a$ for each $i$.
This way we construct $n$ congruent triangles
$\displaystyle\triangle ABC$ $\displaystyle=\triangle A_{1}A_{0}C_{1}\cong$
$\displaystyle\cong\triangle A_{2}A_{1}C_{2}\cong$ $\displaystyle\ \ \ \ \ \ \
\ \ \ \dots$ $\displaystyle\cong\triangle A_{n}A_{n-1}C_{n}.$
Set $d=C_{1}C_{2}$ and $\delta=\measuredangle C_{2}A_{1}C_{1}$. Note that
$\alpha+\beta+\delta=\pi.$ ➊
By Proposition 10, $\delta\geqslant 0$.
By construction
$\displaystyle\triangle A_{1}C_{1}C_{2}$ $\displaystyle\cong\triangle
A_{2}C_{2}C_{3}\cong\dots\cong\triangle A_{n-1}C_{n-1}C_{n}.$
In particular, $C_{i}C_{i+1}=d$ for each $i$.
By repeated application of the triangle inequality, we get that
$\displaystyle n{\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}c$
$\displaystyle=A_{0}A_{n}\leqslant$ $\displaystyle\leqslant
A_{0}C_{1}+C_{1}C_{2}+\dots+C_{n-1}C_{n}+C_{n}A_{n}=$
$\displaystyle=a+(n-1){\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}d+b.$
In particular,
$c\leqslant d+\tfrac{1}{n}{\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}(a+b-d).$
Since $n$ is an arbitrary positive integer, the latter implies
$c\leqslant d.$
From Proposition 10 and SAS, the latter is equivalent to
$\gamma\leqslant\delta.$
From ➊ ‣ 10, the theorem follows. ∎
The _defect_ of the triangle $\triangle ABC$ is defined as
$\mathop{\rm defect}\nolimits(\triangle
ABC)\buildrel\mathrm{def}\over{=\\!\\!=}\pi-|\measuredangle
ABC|-|\measuredangle BCA|-|\measuredangle CAB|.$
Note that Theorem 10 states that the defect of any triangle in the absolute plane
has to be nonnegative. According to Theorem 6, any triangle in the Euclidean plane
has zero defect.
10.9. Exercise. Let $\triangle ABC$ be a nondegenerate triangle in the absolute
plane. Assume $D$ lies between $A$ and $B$. Show that
$\mathop{\rm defect}\nolimits(\triangle ABC)=\mathop{\rm
defect}\nolimits(\triangle ADC)+\mathop{\rm defect}\nolimits(\triangle DBC).$
10.10. Exercise. Let $ABCD$ be an inscribed quadrilateral in the absolute
plane. Show that
$\measuredangle ABC+\measuredangle CDA\equiv\measuredangle BCD+\measuredangle
DAB.$
Note that Theorem 8 cannot be applied in the above exercise; its proof uses
Theorems 8 and 8, which in turn use Theorem 6.
#### How to prove that something cannot be proved?
Many attempts were made to prove that any theorem in Euclidean geometry holds
in absolute geometry. The latter is equivalent to the statement that Axiom IV
is a _theorem_ in absolute geometry.
Many of these attempts were accepted as proofs for long periods of time, until
a mistake was found.
There are a number of statements in the geometry of the absolute plane which are
equivalent to Axiom IV. This means that if we exchange Axiom IV in
Definition 2 for any of these statements, then we obtain an equivalent
axiomatic system.
Here we give a short list of such statements. (We are not going to prove the
equivalence in the lectures.)
10.11. Theorem. An absolute plane is Euclidean if and only if one of the
following equivalent conditions holds.
1. (a)
There is a line $\ell$ and a point $P$ not on the line such that there is only
one line passing through $P$ and parallel to $\ell$.
2. (b)
Every nondegenerate triangle can be circumscribed.
3. (c)
There exists a pair of distinct lines which lie on a bounded distance from
each other.
4. (d)
There are triangles with arbitrarily large inradius.
5. (e)
There is a nondegenerate triangle with zero defect.
It is hard to imagine an absolute plane which does not satisfy some of the
properties above. That partly explains the large number of false proofs;
they used such statements by accident.
Let us formulate the negation of the statement (a) above.
IVh.
For any line $\ell$ and any point $P\notin\ell$ there are at least two lines
which pass through $P$ and have no points of intersection with $\ell$.
According to the theorem above, in any non-Euclidean absolute plane Axiom IVh
holds.
This opens a way to look for a proof by contradiction. Simply exchange Axiom IV
for Axiom IVh in Definition 2 and start proving theorems in the obtained
axiomatic system. If we arrive at a contradiction, then we have proved
Axiom IV in the absolute plane.
These attempts were unsuccessful as well, but this approach led to a new type
of geometry.
This idea had been growing since the 5th century; the most notable results were
obtained by Saccheri in [9]. The more this new geometry was developed, the more
believable it became that no contradiction would appear.
The statement that there is no contradiction appears first in private letters
of Bolyai, Gauss, Schweikart and Taurinus. (The oldest surviving letters were
the letter of Gauss to Gerling of 1816 and a yet more convincing letter of
Schweikart, dated 1818, sent to Gauss via Gerling.) They all seem to have been
afraid to state it in public. Say, in 1818 Gauss writes to Gerling:
> …I am happy that you have the courage to express yourself as if you
> recognized the possibility that our parallels theory along with our entire
> geometry could be false. But the wasps whose nest you disturb will fly
> around your head.…
Lobachevsky came to the same conclusion independently; unlike the others, he
had the courage to state it in public and in print (see [7]). That cost him
serious trouble.
Later Beltrami gave a clean proof that if hyperbolic geometry has a
contradiction then so does Euclidean geometry. This was done by modeling points,
lines, distances and angle measures of hyperbolic geometry using some other
objects in Euclidean geometry; this is the subject of the next chapter.
Arguably, the discovery of non-Euclidean geometry was the second main
discovery of the 19th century, trailing only Mendel's laws.
#### Curvature
In a letter from 1824 Gauss writes:
> The assumption that the sum of the three angles is less than $\pi$ leads to
> a curious geometry, quite different from ours but thoroughly consistent,
> which I have developed to my entire satisfaction, so that I can solve every
> problem in it with the exception of a determination of a constant, which
> cannot be designated a priori. The greater one takes this constant, the
> nearer one comes to Euclidean geometry, and when it is chosen indefinitely
> large the two coincide. The theorems of this geometry appear to be
> paradoxical and, to the uninitiated, absurd; but calm, steady reflection
> reveals that they contain nothing at all impossible. For example, the three
> angles of a triangle become as small as one wishes, if only the sides are
> taken large enough; yet the area of the triangle can never exceed a definite
> limit, regardless how great the sides are taken, nor indeed can it ever
> reach it.
In modern terminology the constant which Gauss mentions can be expressed
as $1/\sqrt{-k}$, where $k$ denotes the so-called _curvature_ of the absolute
plane, which we are about to introduce.
The identity in Exercise 10 suggests that the defect of a triangle should be
proportional to its area. (We did not define _area_; instead we refer to the
intuitive understanding of area which the reader might have. The formal
definition of area is quite long and tedious.)
In fact for any absolute plane there is a nonpositive real number $k$ such
that
$k{\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}\mathop{\rm area}\nolimits(\triangle
ABC)+\mathop{\rm defect}\nolimits(\triangle ABC)=0$
for any triangle $\triangle ABC$. This number $k$ is called _curvature_ of the
plane.
For example, by Theorem 6, the Euclidean plane has zero curvature. By Theorem
10, curvature of any absolute plane is nonpositive.
It turns out that up to isometry, the absolute plane is characterized by its
curvature; i.e., two absolute planes are isometric if and only if they have
the same curvature.
In the next chapter we will construct hyperbolic plane, this is an example of
absolute plane with curvature $k=-1$.
Any absolute plane distinct from the Euclidean one can be obtained by rescaling
the metric on the hyperbolic plane. Indeed, if we rescale the metric by a factor
$c$, the area changes by the positive factor $c^{2}$, while the defect stays the
same. Therefore, taking $c=1/\sqrt{-k}$, we get the absolute plane of given
curvature $k<0$. In other words, all the non-Euclidean absolute planes become
identical if we use $r=1/\sqrt{-k}$ as the unit of length.
In the Chapter 13, we briefly discuss the geometry of the unit sphere.
Although spheres are not absolute planes, the spherical geometry is a close
relative of Euclidean and hyperbolic geometries.
The nondegenerate spherical triangles have negative defect. Moreover if $R$ is
the radius of the sphere then
$\tfrac{1}{R^{2}}{\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}\mathop{\rm
area}\nolimits(\triangle ABC)+\mathop{\rm defect}\nolimits(\triangle ABC)=0$
for any spherical triangle $\triangle ABC$. In other words, the sphere of
radius $R$ has positive curvature $k=\tfrac{1}{R^{2}}$.
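For example, for the spherical triangle cut out by three mutually perpendicular great circles (one eighth of the sphere) the identity can be verified directly:

```python
import math

R = 3.0                                   # radius of the sphere
angles = [math.pi / 2] * 3                # three right angles
area = 4 * math.pi * R ** 2 / 8           # one eighth of the total area
defect = math.pi - sum(angles)            # = -pi/2, negative as claimed

print((1 / R ** 2) * area + defect)       # = 0
```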
### Chapter 11 Hyperbolic plane
In this chapter we use inversive geometry to construct the model of hyperbolic
plane — an example of absolute plane which is not Euclidean.
#### Poincaré disk model
Further we will discuss the _Poincaré disk model_ of the hyperbolic plane, an
example of an absolute plane in which Axiom IV does not hold; in particular,
this plane is not Euclidean. This model was discovered by Beltrami in [2] and
popularized later by Poincaré.
On the figure above you see the Poincaré disk model of hyperbolic plane which
is cut into congruent triangles with angles $\tfrac{\pi}{3}$, $\tfrac{\pi}{3}$
and $\tfrac{\pi}{4}$.
#### Description of the model
In this section we describe the model; i.e., we give new names for some
objects in Euclidean plane which will represent lines, angle measures,
distances in the hyperbolic plane.
Hyperbolic plane. Let us fix a circle on the Euclidean plane and call it
_absolute_. The set of points inside the absolute will be called _hyperbolic
plane_ (or _h-plane_). (The absolute itself does _not_ lie in the h-plane.)
We will often assume that the absolute is a unit circle.
Hyperbolic lines. The intersections of h-plane with clines perpendicular to
the absolute are called _hyperbolic lines_ (or _h-lines_).
Note that according to Corollary 9, there is a unique h-line which passes
through two given distinct points $P$ and $Q$. This h-line will be denoted by
$(PQ)_{h}$.
The arcs of hyperbolic lines will be called _hyperbolic segments_ or _h-
segments_. An h-segment with endpoints $P$ and $Q$ will be denoted as
$[PQ]_{h}$.
The subset of h-line on one side from a point will be called _hyperbolic half-
line_ (or _h-half-line_). An h-half-line from $P$ passing through $Q$ will be
denoted as $[PQ)_{h}$.
If $\Gamma$ is the circle containing the h-line $(PQ)_{h}$ then the points of
intersection of $\Gamma$ with absolute are called _ideal points_ of
$(PQ)_{h}$. (Note that the ideal points of h-line do not belong to the
h-line.)
So far $(PQ)_{h}$ is just a subset of the h-plane; below we will introduce the
h-distance, and later we will show that $(PQ)_{h}$ is a line for the h-distance
in the sense of Definition 1.
Hyperbolic distance. Let $P$ and $Q$ be distinct points in the h-plane. Denote by
$A$ and $B$ the ideal points of $(PQ)_{h}$. Without loss of generality, we
may assume that on the Euclidean circle containing the h-line $(PQ)_{h}$, the
points $A,P,Q,B$ appear in the same order.
Consider function
$\delta(P,Q)\buildrel\mathrm{def}\over{=\\!\\!=}\frac{AQ{\hskip
0.5pt\cdot\nobreak\hskip 0.5pt}BP}{QB{\hskip 0.5pt\cdot\nobreak\hskip
0.5pt}PA}.$
Note that the right-hand side is the cross-ratio which appeared in Theorem 9. Set
$\delta(P,P)=1$ for any point $P$ in the h-plane. Set
$PQ_{h}\buildrel\mathrm{def}\over{=\\!\\!=}\ln\delta(P,Q).$
The proof that $PQ_{h}$ is a metric on the h-plane will be given below; for now it
is just a function which returns a real value $PQ_{h}$ for any pair of points
$P$ and $Q$ in the h-plane.
Hyperbolic angles. Consider three points $P$, $Q$ and $R$ in the h-plane such
that $P\neq Q$ and $R\neq Q$. The hyperbolic angle $\angle_{h}PQR$ is the ordered
pair of h-half-lines $[QP)_{h}$ and $[QR)_{h}$.
Let $[QX)$ and $[QY)$ be (Euclidean) half-lines which are tangent to
$[QP]_{h}$ and $[QR]_{h}$ at $Q$. Then the _hyperbolic angle measure_ (or _h-
angle measure_) $\measuredangle_{h}PQR$ is defined as $\measuredangle XQY$.
#### What has to be proved?
In the previous section we defined all the notions in the formulation of the
axioms. It remains to check that each axiom holds.
Namely we need to show the following statements.
11.1. Statement. The defined h-distance is a metric on h-plane. I.e., for any
three points $P$, $Q$ and $R$ in the h-plane we have
1. (a)
$PQ_{h}\geqslant 0$;
2. (b)
$P=Q$ if and only if $PQ_{h}=0$;
3. (c)
$PQ_{h}=QP_{h}.$
4. (d)
$QR_{h}\leqslant QP_{h}+PR_{h}.$
11.2. Statement. A subset $\ell$ of h-plane is an h-line if and only if it is
a line for h-distance; i.e., if there is a bijection
$\iota\colon\ell\to\mathbb{R}$ such that
$XY_{h}=|\iota(X)-\iota(Y)|$
for any $X$ and $Y\in\ell$.
11.3. Statement. Each Axiom of absolute plane holds. Namely we have to check
the following:
1. I.
There is one and only one h-line, that contains any two given distinct points
$P$ and $Q$ of h-plane.
2. II.
The h-angle measure satisfies the following conditions:
1. (a)
Given an h-half-line $[QA)_{h}$ and $\alpha\in(-\pi,\pi]$, there is a unique
h-half-line $[QB)_{h}$ such that $\measuredangle_{h}AQB=\alpha$.
2. (b)
For any points $A$, $B$ and $C$ distinct from $Q$, we have
$\measuredangle_{h}AQB+\measuredangle_{h}BQC\equiv\measuredangle_{h}AQC.$
3. (c)
The function
$\measuredangle_{h}\colon(A,Q,B)\mapsto\measuredangle AQB$
is continuous at any triple of points $(A,Q,B)$ in the h-plane such that
$Q\neq A$ and $Q\neq B$ and $\measuredangle_{h}AQB\neq\pi$.
3. III.
$\triangle_{h}ABC\cong\triangle_{h}A^{\prime}B^{\prime}C^{\prime}$ if and only
if $A^{\prime}B^{\prime}_{h}=AB_{h}$, $A^{\prime}C^{\prime}_{h}=\nobreak
AC_{h}$ and
$\measuredangle_{h}C^{\prime}A^{\prime}B^{\prime}\equiv\pm\measuredangle_{h}CAB$.
Finally we need to prove the following statement in order to show that h-plane
is distinct from Euclidean plane.
11.4. Statement. The Axiom IVh formulated above holds.
The proofs of these statements rely on the observation described in the next
section.
#### Auxiliary statements
11.5. Lemma. Consider the h-plane with the unit circle as absolute. Let $O$ be the
center of the absolute and $P\neq O$ be another point of the h-plane. Denote by
$P^{\prime}$ the inversion of $P$ in the absolute.
Then the circle $\Gamma$ with center $P^{\prime}$ and radius
$\sqrt{1-OP^{2}}/OP$ is orthogonal to the absolute. Moreover, $O$ is the
inversion of $P$ in $\Gamma$.
Proof. Follows from Exercise 9. ∎
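The lemma is easy to confirm numerically; in the sketch below the absolute is the unit circle and $P$ is taken on the positive real axis. The two printed values correspond to the orthogonality condition and to the claim that $O$ is the inversion of $P$ in $\Gamma$.

```python
import math

def invert(P, O=0j, r=1.0):
    v = P - O
    return O + (r * r / abs(v) ** 2) * v

P  = 0.6 + 0j                                # a point of the h-plane, P != O
P1 = invert(P)                               # inversion of P in the absolute
s  = math.sqrt(1 - abs(P) ** 2) / abs(P)     # radius of Gamma from the lemma

print(abs(P1) ** 2 - (1 + s * s))            # = 0: Gamma is orthogonal to the absolute
print(abs(invert(P, O=P1, r=s)))             # = 0: the inversion of P in Gamma is O
```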
Assume $\Gamma$ is a cline which is perpendicular to the absolute. Consider
the inversion $X\mapsto X^{\prime}$ in $\Gamma$, or if $\Gamma$ is a line, set
$X\mapsto X^{\prime}$ to be the reflection through $\Gamma$.
The following proposition roughly says that the map $X\mapsto X^{\prime}$
respects all the notions introduced in the previous section. Together with the
lemma above, it implies that in any problem which is formulated entirely _in
h-terms_ we can assume that a given point lies at the center of the absolute.
11.6. Main observation. The map $X\mapsto X^{\prime}$ described above is a
bijection of h-plane to itself. Moreover for any points $P$, $Q$, $R$ in the
h-plane such that $P\neq Q$ and $Q\neq R$ the following conditions hold
1. (a)
The sets $(PQ)_{h}$, $[PQ)_{h}$ and $[PQ]_{h}$ are mapped to
$(P^{\prime}Q^{\prime})_{h}$, $[P^{\prime}Q^{\prime})_{h}$ and
$[P^{\prime}Q^{\prime}]_{h}$ correspondingly.
2. (b)
$\delta(P^{\prime},Q^{\prime})=\delta(P,Q)$ and
$P^{\prime}Q^{\prime}_{h}=PQ_{h}.$
3. (c)
$\measuredangle_{h}P^{\prime}Q^{\prime}R^{\prime}\equiv-\measuredangle_{h}PQR.$
Proof. According to Theorem 9, the map sends the absolute to itself. Note that
the points on $\Gamma$ do not move; it follows that the points inside the absolute
remain inside after the mapping, and the other way around.
Part (a) follows from 9 and 9.
Part (b) follows from Theorem 9.
Part (c) follows from Theorem 9. ∎
11.7. Lemma. Assume that the absolute is a unit circle centered at $O$. Given
a point $P$ in the h-plane, set $x=OP$ and $y=OP_{h}$. Then
$y=\ln\frac{1+x}{1-x}\ \ \text{and}\ \ x=\frac{e^{y}-1}{e^{y}+1}.$
Proof. Note that the h-line $(OP)_{h}$ lies on a diameter of the absolute. Therefore
if $A$ and $B$ are the points in the definition of the h-distance, then
$\displaystyle OA$ $\displaystyle=OB=1,$ $\displaystyle PA$
$\displaystyle=1+x,$ $\displaystyle PB$ $\displaystyle=1-x.$
Therefore
$\displaystyle y$ $\displaystyle=\ln\frac{AP{\hskip 0.5pt\cdot\nobreak\hskip
0.5pt}BO}{PB{\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}OA}=$
$\displaystyle=\ln\frac{1+x}{1-x}.$
Taking the exponential of both sides and applying straightforward algebraic
manipulations, we get
$x=\frac{e^{y}-1}{e^{y}+1}.$
∎
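The lemma agrees with the definition of the h-distance, as the following direct numerical check illustrates (here $P$ lies on the positive real axis, and the ideal points are $A=-1$ and $B=1$).

```python
import math

x = 0.7                               # Euclidean distance OP
A, O, P, B = -1.0, 0.0, x, 1.0        # the points appear on the diameter in this order

# delta(O, P) = (AP * BO) / (PB * OA), as in the definition of the h-distance
delta = (abs(P - A) * abs(O - B)) / (abs(B - P) * abs(A - O))
y = math.log(delta)

print(y, math.log((1 + x) / (1 - x)))              # the two values agree
print(x, (math.exp(y) - 1) / (math.exp(y) + 1))    # and so do these
```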
11.8. Lemma. Assume points $P$, $Q$ and $R$ appear on one h-line in the same
order. Then
$PQ_{h}+QR_{h}=PR_{h}$
Proof. Note that
$PQ_{h}+QR_{h}=PR_{h}$
is equivalent to
$\delta(P,Q){\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}\delta(Q,R)=\delta(P,R).$ ➊
Let $A$ and $B$ be the ideal points of $(PQ)_{h}$. Without loss of generality
we can assume that the points $A,P,Q,R,B$ appear in the same order on the
cline containing $(PQ)_{h}$. Then
$\displaystyle\delta(P,Q){\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}\delta(Q,R)$
$\displaystyle=\frac{AQ{\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}BP}{QB{\hskip
0.5pt\cdot\nobreak\hskip 0.5pt}PA}{\hskip 0.5pt\cdot\nobreak\hskip
0.5pt}\frac{AR{\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}BQ}{RB{\hskip
0.5pt\cdot\nobreak\hskip 0.5pt}QA}=$ $\displaystyle=\frac{AR{\hskip
0.5pt\cdot\nobreak\hskip 0.5pt}BP}{RB{\hskip 0.5pt\cdot\nobreak\hskip
0.5pt}PA}=$ $\displaystyle=\delta(P,R)$
Hence ➊ ‣ 11 follows. ∎
Let $P$ be a point in h-plane and $\rho>0$. The set of all points $Q$ in the
h-plane such that $PQ_{h}=\rho$ is called _h-circle_ with center $P$ and _h-
radius_ $\rho$.
11.9. Lemma. Any h-circle is formed by a Euclidean circle which lies
completely in h-plane.
More precisely for any point $P$ in the h-plane and $\rho\geqslant 0$ there is
a $\hat{\rho}\geqslant 0$ and a point $\hat{P}$ such that
$PQ_{h}=\rho\ \ \Leftrightarrow\ \ \hat{P}Q=\hat{\rho}.$
Moreover, if $O$ is the center of absolute then
1. 1.
$\hat{O}=O$ for any $\rho$ and
2. 2.
$\hat{P}\in(OP)$ for any $P\neq O$.
Proof. According to Lemma 11, $OQ_{h}=\rho$ if and only if
$OQ=\hat{\rho}=\frac{e^{\rho}-1}{e^{\rho}+1}.$
Therefore the locus of points $Q$ such that $OQ_{h}=\rho$ is formed by a
Euclidean circle; denote it by $\Delta_{\rho}$.
If $P\neq O$, applying Lemma 11 and the Main observation (11) we get a circle
$\Gamma$ perpendicular to the absolute such that $P$ is the inversion of $O$
in $\Gamma$.
Let $\Delta_{\rho}^{\prime}$ be the inversion of $\Delta_{\rho}$ in $\Gamma$.
Since the inversion in $\Gamma$ preserves the h-distance, $PQ_{h}=\rho$ if and
only if $Q\in\nobreak\Delta_{\rho}^{\prime}$.
According to Theorem 9, $\Delta_{\rho}^{\prime}$ is a circle. Denote by
$\hat{P}$ the center and by $\hat{\rho}$ the radius of
$\Delta_{\rho}^{\prime}$.
Finally note that $\Delta_{\rho}^{\prime}$ reflects to itself in $(OP)$; i.e.,
the center $\hat{P}$ lies on $(OP)$. ∎
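For an h-circle with center $P\neq O$, the Euclidean center $\hat{P}$ and radius $\hat{\rho}$ can be computed directly: the two points where the h-circle meets the diameter through $P$ lie at h-distances $OP_{h}-\rho$ and $OP_{h}+\rho$ from $O$ (by additivity along an h-line), and $\hat{P}$ is their Euclidean midpoint. The sketch below (our own, with hypothetical helper names) carries this out for $P$ on a fixed diameter; a negative signed h-distance corresponds to the opposite side of $O$.

```python
import math

def to_euclid(y):
    # signed Euclidean coordinate along a diameter for signed h-distance y from O
    return (math.exp(y) - 1) / (math.exp(y) + 1)

def to_hyper(x):
    return math.log((1 + x) / (1 - x))

def h_circle(p, rho):
    # p = OP along a fixed diameter (0 <= p < 1), rho = h-radius;
    # returns the Euclidean center (as a coordinate on that diameter) and radius
    y = to_hyper(p)                     # OP_h
    s = to_euclid(y - rho)              # diameter point at h-distance rho before P
    t = to_euclid(y + rho)              # diameter point at h-distance rho after P
    return (s + t) / 2, (t - s) / 2

center, radius = h_circle(0.5, 1.0)
print(center, radius)   # note: the Euclidean center differs from the h-center P = 0.5
```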
#### The sketches of proofs
In this section we sketch the proofs of the statements 11–11 listed in the
section before last.
We will always assume that the absolute is the unit circle centered at the point
$O$.
Proof of 11; (a) and (b). Denote by $O$ the center of the absolute. Without loss
of generality, we may assume that $Q=O$; if not, apply Lemma 11 together with
the Main Observation (11).
Note that
$\delta(O,P)=\frac{1+OP}{1-OP}\geqslant 1$
and the equality holds only if $P=O$.
Therefore
$OP_{h}=\ln\delta(O,P)\geqslant 0,$
and the equality holds if and only if $P=O$.
(c). Let $A$ and $B$ be ideal points of $(PQ)_{h}$ and $A,P,Q,B$ appear on the
cline containing $(PQ)_{h}$ in the same order.
Then
$\displaystyle PQ_{h}$ $\displaystyle=\ln\frac{AQ{\hskip
0.5pt\cdot\nobreak\hskip 0.5pt}BP}{QB{\hskip 0.5pt\cdot\nobreak\hskip
0.5pt}PA}=$ $\displaystyle=\ln\frac{BP{\hskip 0.5pt\cdot\nobreak\hskip
0.5pt}AQ}{PA{\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}QB}=$
$\displaystyle=QP_{h}.$
(d). Without loss of generality, we may assume that $RP_{h}\geqslant\nobreak
PQ_{h}$. Applying the main observation we may assume that $R=O$.
Denote by $\Delta$ the h-circle with center $P$ and h-radius $PQ_{h}$. Let $S$
and $T$ be the points of intersection of $(OP)$ and $\Delta$.
Since $PQ_{h}\leqslant OP_{h}$, by Lemma 11 we can assume that the points $O$,
$S$, $P$ and $T$ appear on the h-line in the same order.
Let $\hat{P}$ be as in Lemma 11 for $P$ and $\rho=PQ_{h}$. Note that $\hat{P}$
is the (Euclidean) midpoint of $[ST]$.
By the Euclidean triangle inequality
$OT=O\hat{P}+\hat{P}Q\geqslant OQ.$
Since the function $f(x)=\ln\frac{1+x}{1-x}$ is increasing for $x\in[0,1)$,
Lemma 11 implies that $OT_{h}\geqslant OQ_{h}$.
Finally applying Lemma 11 again, we get
$OT_{h}=OP_{h}+PQ_{h}.$
Therefore
$OQ_{h}\leqslant OP_{h}+PQ_{h}.$ ➋
∎
Proof of 11. Let $\ell$ be an h-line. Applying the main observation we can
assume that $\ell$ contains the center of absolute. In this case $\ell$ is
formed by intersection of diameter of absolute and the h-plane. Let $A$ and
$B$ be the endpoints of the diameter.
Consider map $\iota\colon\ell\to\mathbb{R}$ defined as
$\iota(X)=\ln\frac{AX}{XB}.$
Note that $\iota\colon\ell\to\mathbb{R}$ is a bijection.
Further, if $X,Y\in\ell$ and the points $A$, $X$, $Y$ and $B$ appear on $[AB]$
in the same order then
$|\iota(Y)-\iota(X)|=\left|\ln\frac{AY}{YB}-\ln\frac{AX}{XB}\right|=\left|\ln\frac{AY{\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}XB}{YB{\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}AX}\right|=XY_{h};$
i.e., any h-line is a line for h-metric.
Note that the equality in ➋ ‣ 11 holds only if $Q=T$; in particular, only if $Q$
lies on $(OP)_{h}$. Hence any line for h-distance is an h-line. ∎
Proof of 11. Axiom I follows from Corollary 9.
Let us prove Axiom II. Applying the main observation, we may assume that
$Q=O$. In this case, for any point $X\neq O$ in h-plane, $[OX)_{h}$ is the
intersection of $[OX)$ with h-plane. Hence all the statements in Axiom IIa and
IIb follow.
In the proof of Axiom IIc, we can assume that $Q$ is distinct from $O$. Denote
by $Z$ the inversion of $Q$ in the absolute and by $\Gamma$ the circle
perpendicular to the absolute which is centered at $Z$. According to
Lemma 11, the point $O$ is the inversion of $Q$ in $\Gamma$; denote by
$A^{\prime}$ and $B^{\prime}$ the inversions in $\Gamma$ of the points $A$ and
$B$ correspondingly. Note that the point $A^{\prime}$ is completely determined
by the points $Q$ and $A$; moreover, the map $(Q,A)\mapsto A^{\prime}$ is
continuous at any pair of points $(Q,A)$ such that $Q\neq O$. The same is true
for the map $(Q,B)\mapsto B^{\prime}$.
According to the Main Observation
$\measuredangle_{h}AQB\equiv-\measuredangle_{h}A^{\prime}OB^{\prime}.$
Since $\measuredangle_{h}A^{\prime}OB^{\prime}=\measuredangle
A^{\prime}OB^{\prime}$ and the maps $(Q,A)\mapsto A^{\prime}$, $(Q,B)\mapsto
B^{\prime}$ are continuous, the Axiom IIc follows from the corresponding axiom
of Euclidean plane.
Now let us show that Axiom III holds. Applying the main observation, we can
assume that $A$ and $A^{\prime}$ coincide with the center of absolute $O$. In
this case
$\measuredangle
C^{\prime}OB^{\prime}=\measuredangle_{h}C^{\prime}OB^{\prime}=\pm\measuredangle_{h}COB=\pm\measuredangle
COB.$
Since
$OB_{h}=OB^{\prime}_{h}\ \ \text{and}\ \ OC_{h}=OC^{\prime}_{h},$
Lemma 11 implies that the same holds for the Euclidean distances; i.e.,
$OB=OB^{\prime}\ \ \text{and}\ \ OC=OC^{\prime}.$
By SAS, there is a motion of the Euclidean plane which sends $O$ to itself, $B$ to
$B^{\prime}$ and $C$ to $C^{\prime}$.
Note that the center of absolute is fixed by the corresponding motion. It
follows that this motion gives also a motion of h-plane; in particular the
h-triangles $\triangle_{h}OBC$ and $\triangle_{h}OB^{\prime}C^{\prime}$ are
h-congruent.
Proof of 11. Finally we need to check that Axiom IVh holds.
Applying the main observation we can assume that $P=O$.
The remaining part of the proof is left to the reader; it can be guessed from the
picture. ∎
### Chapter 12 Geometry of h-plane
In this chapter we study the geometry of the plane described by the Poincaré disc
model. For brevity, this plane will be called the _h-plane_. Note that we can
work with this model directly from inside the Euclidean plane, but we may also
use the axioms of absolute geometry since, according to the previous chapter,
they all hold in the h-plane.
#### Angle of parallelism
Let $P$ be a point off an h-line $\ell$. Drop a perpendicular $(PQ)_{h}$ from
$P$ to $\ell$ with foot point $Q$. Let $\varphi$ be the least angle such that
the h-line $(PZ)_{h}$ with $|\measuredangle_{h}QPZ|=\varphi$ does not
intersect $\ell$.
The angle $\varphi$ is called _angle of parallelism_ of $P$ to $\ell$. Clearly
$\varphi$ depends only on the distance $h=PQ_{h}$. Further
$\varphi(h)\to\pi/2$ as $h\to 0$, and $\varphi(h)\to 0$ as $h\to\infty$. (In
Euclidean geometry the angle of parallelism is identically equal to $\pi/2$.)
If $\ell$, $P$ and $Z$ are as above, then the h-line $m=(PZ)_{h}$ is called
_asymptotically parallel_ to $\ell$. (In hyperbolic geometry the term
_parallel lines_ is often used for _asymptotically parallel lines_; we do not
follow this convention.) In other words, two h-lines are asymptotically
parallel if they share one ideal point.
Given $P\not\in\ell$ there are exactly two asymptotically parallel lines
through $P$ to $\ell$; the remaining parallel lines to $\ell$ through $P$ are
called _ultra parallel_.
On the diagram, the two solid h-lines passing through $P$ are asymptotically
parallel to $\ell$; the dotted h-line is ultra parallel to $\ell$.
12.1. Proposition. Let $Q$ be the foot point of $P$ on h-line $\ell$. Denote
by $\varphi$ the angle of parallelism of $P$ to $\ell$ and let $h=PQ_{h}$.
Then
$h=\tfrac{1}{2}{\hskip 0.5pt\cdot\nobreak\hskip
0.5pt}\ln\tfrac{1+\cos\varphi}{1-\cos\varphi}.$
Proof. Applying a motion of h-plane if necessary, we may assume $P$ is the
center of absolute. Then the h-lines through $P$ are formed by the
intersections of Euclidean lines with the h-plane.
Let us denote by $A$ and $B$ the ideal points of $\ell$. Without loss of
generality we may assume that $\angle APB$ is positive. In this case
$\varphi=\measuredangle QPB=\measuredangle APQ=\tfrac{1}{2}{\hskip
0.5pt\cdot\nobreak\hskip 0.5pt}\measuredangle APB.$
Let $Z$ be the center of the circle $\Gamma$ containing the h-line $\ell$. Set
$X$ to be the point of intersection of the Euclidean segment $[AB]$ and
$(PQ)$.
Note that $PX=\cos\varphi$; therefore, by Lemma 11,
$PX_{h}=\ln\tfrac{1+\cos\varphi}{1-\cos\varphi}.$
Note that both angles $\angle PBZ$ and $\angle BXZ$ are right. Therefore
$\triangle ZBX\sim\triangle ZPB$, since the angle $\angle PZB$ is shared. In
particular
$ZX{\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}XP=ZB^{2};$
i.e., $X$ is the inversion of $P$ in $\Gamma$.
The inversion in $\Gamma$ is the reflection of h-plane through $\ell$.
Therefore
$\displaystyle h$ $\displaystyle=PQ_{h}=QX_{h}=$
$\displaystyle=\tfrac{1}{2}{\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}PX_{h}=$
$\displaystyle=\tfrac{1}{2}{\hskip 0.5pt\cdot\nobreak\hskip
0.5pt}\ln\tfrac{1+\cos\varphi}{1-\cos\varphi}.$
∎
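The relation of the proposition can be inverted: $\cos\varphi=\tfrac{e^{2h}-1}{e^{2h}+1}$, i.e., $\varphi(h)=\arccos(\tanh h)$. The following sketch in Python (our own) illustrates the limiting behaviour of the angle of parallelism mentioned above.

```python
import math

def parallelism_angle(h):
    # invert h = (1/2) * ln((1 + cos(phi)) / (1 - cos(phi))), i.e. cos(phi) = tanh(h)
    return math.acos(math.tanh(h))

for h in (0.01, 0.1, 1.0, 5.0, 10.0):
    print(f"h = {h:5.2f}   phi = {parallelism_angle(h):.4f} rad")
# phi -> pi/2 as h -> 0  and  phi -> 0 as h -> infinity, as claimed earlier
```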
#### Inradius of h-triangle
12.2. Theorem. Inradius of any h-triangle is less than $\tfrac{1}{2}{\hskip
0.5pt\cdot\nobreak\hskip 0.5pt}\ln 3$.
Proof. First note that any triangle in h-plane lies in an _ideal triangle_ ;
i.e., a region bounded by three pairwise asymptotically parallel lines.
A proof can be seen in the picture. Consider arbitrary h-triangle
$\triangle_{h}XYZ$. Denote by $A$, $B$ and $C$ the ideal points of the h-half-
lines $[XY)_{h}$, $[YZ)_{h}$ and $[ZX)_{h}$.
It should be clear that the inradius of the ideal triangle $ABC$ is bigger than
the inradius of $\triangle_{h}XYZ$.
Applying an inversion if necessary, we can assume that the h-incenter $O$ of the
ideal triangle is the center of the absolute. Therefore, without loss of
generality, we may assume
$\measuredangle AOB=\measuredangle BOC=\measuredangle COA=\tfrac{2}{3}{\hskip
0.5pt\cdot\nobreak\hskip 0.5pt}\pi.$
It remains to find the inradius. Denote by $Q$ the foot point of $O$ on
$(AB)_{h}$. Then $OQ_{h}$ is the inradius. Note that the angle of parallelism
of $(AB)_{h}$ at $O$ is equal to $\tfrac{\pi}{3}$.
By Proposition 12,
$\displaystyle OQ_{h}$ $\displaystyle=\tfrac{1}{2}{\hskip
0.5pt\cdot\nobreak\hskip
0.5pt}\ln\frac{1+\cos\frac{\pi}{3}}{1-\cos\frac{\pi}{3}}=$
$\displaystyle=\tfrac{1}{2}{\hskip 0.5pt\cdot\nobreak\hskip
0.5pt}\ln\frac{1+\tfrac{1}{2}}{1-\tfrac{1}{2}}=$
$\displaystyle=\tfrac{1}{2}{\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}\ln 3.$
∎
12.3. Exercise. Let $ABCD$ be a quadrilateral in h-plane such that the
h-angles at $A$, $B$ and $C$ are right and $AB_{h}=BC_{h}$. Find the optimal
upper bound for $AB_{h}$.
#### Circles, horocycles and equidistants
Note that according to Lemma 11, any h-circle is formed by a Euclidean circle
which lies completely in the h-plane. Further any h-line is an intersection of
the h-plane with the circle perpendicular to the absolute.
In this section we will describe the h-geometric meaning of the intersections
of the other circles with the h-plane.
You will see that all these intersections have a _perfectly round shape_ in the
h-plane; i.e., h-geometrically all the points on such a curve look the same.
One may think of these curves as trajectories of a car which drives in the plane
with a fixed position of the steering wheel. In the Euclidean plane, this way
you run either along a circle or along a line.
In the hyperbolic plane the picture is different. If you turn the wheel far to
the right, you will run along a circle. If you turn it less, at a certain
position of the wheel you will never come back, yet the path will be different
from a line. If you turn the wheel a bit further, you will run along a path
which stays at the same distance from an h-line.
Equidistants of h-lines. Consider h-plane with absolute $\Omega$. Assume a
circle $\Gamma$ intersects $\Omega$ in two distinct points $A$ and $B$. Denote
by $g$ the intersection of $\Gamma$ with h-plane. Let us draw an h-line $m$
with the ideal points $A$ and $B$.
12.4. Exercise. Show that the h-line $m$ is uniquely determined by its ideal
points $A$ and $B$.
Consider any h-line $\ell$ perpendicular to $m$; let $\Delta$ be the circle
containing $\ell$.
Note that $\Delta\perp\Gamma$. Indeed, according to Corollary 9, $m$ and
$\Omega$ are inverted to themselves in $\Delta$. It follows that $A$ is the
inversion of $B$ in $\Delta$. Finally, by Corollary 9, we get that
$\Delta\perp\Gamma$.
Therefore inversion in $\Delta$ sends both $m$ and $g$ to themselves. So if
$P^{\prime},P\in g$ are inversions of each other in $\Delta$, then they lie at
the same distance from $m$. Clearly we have plenty of choices for $\ell$, which
can be used to move points along $g$ arbitrarily while keeping the distance to $m$.
It follows that $g$ is formed by the set of points which lie at a fixed
h-distance from $m$ on a fixed h-side of $m$.
Such a curve $g$ is called an _equidistant_ of the h-line $m$. In Euclidean
geometry the equidistant of a line is a line; as we have just seen, in
hyperbolic geometry the picture is different.
Horocycles. If the circle $\Gamma$ touches the absolute from inside at one
point $A$, then $h=\Gamma\backslash\\{A\\}$ lies in the h-plane. This set is called
a _horocycle_. It also has a perfectly round shape in the sense described above.
Horocycles are the border case between circles and equidistants of h-lines.
A horocycle might be considered as a limit of circles which pass through a fixed
point and whose centers run to infinity along a line. The same horocycle is also
a limit of equidistants through a fixed point of h-lines which run to infinity.
12.5. Exercise. Find the leg of a right h-triangle inscribed in a
horocycle.
#### Hyperbolic triangles
12.6. Theorem. Any nondegenerate hyperbolic triangle has positive defect.
Proof. Consider an h-triangle $\triangle_{h}ABC$. According to Theorem 10,
$\mathop{\rm defect}\nolimits(\triangle_{h}ABC)\geqslant 0.$ ➊
It remains to show that in the case of equality the triangle
$\triangle_{h}ABC$ degenerates.
Without loss of generality, we may assume that $A$ is the center of the absolute;
in this case $\measuredangle_{h}CAB=\measuredangle CAB$. We may further assume
that
$\measuredangle_{h}CAB,\ \measuredangle_{h}ABC,\ \measuredangle_{h}BCA,\
\measuredangle ABC,\ \measuredangle BCA\geqslant 0.$
Let $D$ be an arbitrary point in $[CB]_{h}$ distinct from $B$ and $C$. From
Proposition 8
$\measuredangle ABC-\measuredangle_{h}ABC\equiv\pi-\measuredangle
CDB\equiv\measuredangle BCA-\measuredangle_{h}BCA.$
From Exercise 6, we get
$\mathop{\rm defect}\nolimits(\triangle_{h}ABC)=2{\hskip
0.5pt\cdot\nobreak\hskip 0.5pt}(\pi-\measuredangle CDB).$
Therefore if we have equality in ➊ ‣ 12 then $\measuredangle CDB=\pi$. In
particular the h-segment $[BC]_{h}$ coincides with the Euclidean segment $[BC]$.
The latter can happen only if the h-line $(BC)_{h}$ passes through the center of
the absolute; i.e., only if $\triangle_{h}ABC$ degenerates. ∎
The following theorem states that hyperbolic triangles are congruent if their
corresponding angles are equal; in particular, in hyperbolic geometry similar
triangles have to be congruent.
12.7. AAA congruence condition. Two nondegenerate triangles $\triangle_{h}ABC$
and $\triangle_{h}A^{\prime}B^{\prime}C^{\prime}$ in the h-plane are congruent
if
$\measuredangle_{h}ABC=\nobreak\pm\measuredangle_{h}A^{\prime}B^{\prime}C^{\prime}$,
$\measuredangle_{h}BCA=\nobreak\pm\measuredangle_{h}B^{\prime}C^{\prime}A^{\prime}$
and
$\measuredangle_{h}CAB=\pm\measuredangle_{h}C^{\prime}A^{\prime}B^{\prime}$.
Proof. Note that if $AB_{h}=A^{\prime}B^{\prime}_{h}$ then the theorem follows
from ASA.
Assume the contrary. Without loss of generality we may assume that
$AB_{h}<A^{\prime}B^{\prime}_{h}$. Therefore we can choose the point
$B^{\prime\prime}\in[A^{\prime}B^{\prime}]_{h}$ such that
$A^{\prime}B^{\prime\prime}_{h}=AB_{h}$.
Choose a point $X$ so that
$\measuredangle_{h}A^{\prime}B^{\prime\prime}X=\measuredangle_{h}A^{\prime}B^{\prime}C^{\prime}$.
According to Exercise 10,
$(B^{\prime\prime}X)_{h}\parallel(B^{\prime}C^{\prime})_{h}$.
By Pasch’s theorem (3), $(B^{\prime\prime}X)_{h}$ intersects
$[A^{\prime}C^{\prime}]_{h}$. Denote by $C^{\prime\prime}$ the point of
intersection.
According to ASA,
$\triangle_{h}ABC\cong\triangle_{h}A^{\prime}B^{\prime\prime}C^{\prime\prime}$;
in particular
$\mathop{\rm defect}\nolimits(\triangle_{h}ABC)=\mathop{\rm
defect}\nolimits(\triangle_{h}A^{\prime}B^{\prime\prime}C^{\prime\prime}).$ ➋
Note that
$\displaystyle\mathop{\rm
defect}\nolimits(\triangle_{h}A^{\prime}B^{\prime}C^{\prime})$
$\displaystyle=\mathop{\rm
defect}\nolimits(\triangle_{h}A^{\prime}B^{\prime\prime}C^{\prime\prime})+$ ➌
$\displaystyle+\mathop{\rm
defect}\nolimits(\triangle_{h}B^{\prime\prime}C^{\prime\prime}C^{\prime})+\mathop{\rm
defect}\nolimits(\triangle_{h}B^{\prime\prime}C^{\prime}B^{\prime}).$
By Theorem 12 the defects have to be positive. Therefore
$\mathop{\rm
defect}\nolimits(\triangle_{h}A^{\prime}B^{\prime}C^{\prime})>\mathop{\rm
defect}\nolimits(\triangle_{h}ABC),$
a contradiction. ∎
#### Conformal interpretation
Let us give another interpretation of the h-distance.
12.8. Lemma. Consider the h-plane with the absolute formed by the unit circle
centered at $O$. Fix a point $P$ and let $Q$ be another point in the h-plane.
Set $x=PQ$ and $y=PQ_{h}$. Then
$\lim_{x\to 0}\tfrac{y}{x}=\frac{2}{1-OP^{2}}.$
The above formula tells us that the h-distance from $P$ to a nearby point $Q$ is
nearly proportional to the Euclidean distance, with the coefficient
$\tfrac{2}{1-OP^{2}}$. The value $\lambda(P)=\tfrac{2}{1-OP^{2}}$ is called the
_conformal factor_ of the h-metric.
One may think of the conformal factor $\lambda(P)$ as the speed limit at the
given point. In this case the h-distance is the minimal time needed to travel
from one point of the h-plane to another.
Proof. If $P=O$, then according to Lemma 11
$\frac{y}{x}=\frac{\ln\tfrac{1+x}{1-x}}{x}\to 2$ ➍
as $x\to 0$.
If $P\neq O$, denote by $Z$ the inversion of $P$ in the absolute. Denote by
$\Gamma$ the circle with center $Z$ orthogonal to the absolute.
According to Main Observation 11 and Lemma 11 the inversion in $\Gamma$ is a
motion of h-plane which sends $P$ to $O$. In particular, if we denote by
$Q^{\prime}$ the inversion of $Q$ in $\Gamma$ then $OQ^{\prime}_{h}=PQ_{h}$.
Set $x^{\prime}=OQ^{\prime}$. According to Lemma 9,
$\frac{x^{\prime}}{x}=\frac{OZ}{ZQ}.$
Therefore
$\frac{x^{\prime}}{x}\to\frac{OZ}{ZP}=\frac{1}{1-OP^{2}}$
as $x\to 0$.
Together with ➍ ‣ 12, it implies
$\frac{y}{x}=\frac{y}{x^{\prime}}{\hskip 0.5pt\cdot\nobreak\hskip
0.5pt}\frac{x^{\prime}}{x}\to\frac{2}{1-OP^{2}}$
as $x\to 0$.∎
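The lemma can also be checked numerically. The sketch below (our own) uses a standard closed-form expression for the h-distance in complex coordinates, $PQ_{h}=\ln\frac{1+\delta}{1-\delta}$ with $\delta=\frac{|p-q|}{|1-\bar{p}q|}$; this formula is not derived in the lectures, but it agrees with Lemma 11 when $P=O$ and is invariant under motions of the h-plane.

```python
import math

def h_dist(p, q):
    # closed-form h-distance in complex coordinates (assumed; agrees with
    # Lemma 11 when p = 0)
    d = abs(p - q) / abs(1 - p.conjugate() * q)
    return math.log((1 + d) / (1 - d))

P = 0.6 + 0.2j
lam = 2 / (1 - abs(P) ** 2)            # conformal factor 2/(1 - OP^2)
for eps in (1e-2, 1e-4, 1e-6):
    Q = P + eps                         # a nearby point
    print(eps, h_dist(P, Q) / eps)      # the ratio y/x approaches lam
print("conformal factor:", lam)
```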
Here is an application of the lemma above.
12.9. Proposition. The circumference of an h-circle of h-radius $r$ is
$2{\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}\pi{\hskip 0.5pt\cdot\nobreak\hskip
0.5pt}\operatorname{sh}r,$
where $\operatorname{sh}r$ denotes _hyperbolic sine_ of $r$; i.e.,
$\operatorname{sh}r\buildrel\mathrm{def}\over{=\\!\\!=}\frac{e^{r}-e^{-r}}{2}.$
Before we proceed with the proof let us discuss the same problem in the
Euclidean plane.
The circumference of the circle in the Euclidean plane can be defined as limit
of perimeters of regular $n$-gons inscribed in the circle as $n\to\infty$.
Namely, let us fix $r>0$. Given a positive integer $n$ consider $\triangle
AOB$ such that $\measuredangle AOB=\tfrac{2{\hskip 0.5pt\cdot\nobreak\hskip
0.5pt}\pi}{n}$ and $OA=OB=r$. Set $x_{n}=AB$. Note that $x_{n}$ is the side of
regular $n$-gon inscribed in the circle of radius $r$. Therefore the perimeter
of the $n$-gon is equal to $n{\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}x_{n}$.
The circumference of the circle with radius $r$ might then be defined as the limit
$\lim_{n\to\infty}n{\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}x_{n}=2{\hskip
0.5pt\cdot\nobreak\hskip 0.5pt}\pi{\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}r.$ ➎
(This limit can be taken as the definition of $\pi$.)
In the following proof we repeat the same construction in the h-plane.
Proof. Without loss of generality we can assume that the center $O$ of the
circle is the center of absolute.
By Lemma 11, the h-circle with h-radius $r$ is formed by the Euclidean circle
with center $O$ and radius
$a=\frac{e^{r}-1}{e^{r}+1}.$
Denote by $x_{n}$ and $y_{n}$ the Euclidean and hyperbolic side lengths of the
regular $n$-gon inscribed in the circle.
Note that $x_{n}\to 0$ as $n\to\infty$. By Lemma 12,
$\displaystyle\lim_{n\to\infty}\frac{y_{n}}{x_{n}}$
$\displaystyle=\frac{2}{1-a^{2}}.$
Applying ➎ ‣ 12, we get that the circumference of the h-circle can be found in
the following way:
$\displaystyle\lim_{n\to\infty}n{\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}y_{n}$
$\displaystyle=\frac{2}{1-a^{2}}{\hskip 0.5pt\cdot\nobreak\hskip
0.5pt}\lim_{n\to\infty}n{\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}x_{n}=$
$\displaystyle=\frac{4{\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}\pi{\hskip
0.5pt\cdot\nobreak\hskip 0.5pt}a}{1-a^{2}}=$ $\displaystyle=\frac{4{\hskip
0.5pt\cdot\nobreak\hskip 0.5pt}\pi{\hskip 0.5pt\cdot\nobreak\hskip
0.5pt}\\!\left(\frac{e^{r}-1}{e^{r}+1}\right)}{1-\\!\left(\frac{e^{r}-1}{e^{r}+1}\right)^{2}}=$
$\displaystyle=2{\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}\pi{\hskip
0.5pt\cdot\nobreak\hskip 0.5pt}\frac{e^{r}-e^{-r}}{2}=$
$\displaystyle=2{\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}\pi{\hskip
0.5pt\cdot\nobreak\hskip 0.5pt}\operatorname{sh}r.$
∎
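Following the proof, one can watch the perimeters $n\cdot y_{n}$ of inscribed regular $n$-gons converge to $2\cdot\pi\cdot\operatorname{sh}r$. The sketch below (our own) again uses the closed-form h-distance mentioned after Lemma 12.

```python
import math

def h_dist(p, q):
    # same closed-form h-distance as in the previous sketch (assumed)
    d = abs(p - q) / abs(1 - p.conjugate() * q)
    return math.log((1 + d) / (1 - d))

r = 1.5
a = (math.exp(r) - 1) / (math.exp(r) + 1)    # Euclidean radius of the h-circle (Lemma 11)
for n in (10, 100, 1000, 10000):
    verts = [a * complex(math.cos(2 * math.pi * k / n), math.sin(2 * math.pi * k / n))
             for k in range(n)]
    perimeter = sum(h_dist(verts[k], verts[(k + 1) % n]) for k in range(n))
    print(n, perimeter)                      # approaches 2*pi*sh(r)
print("2*pi*sh(r) =", 2 * math.pi * math.sinh(r))
```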
12.10. Exercise. Denote by $\mathop{\rm circum}\nolimits_{h}(r)$ the
circumference of the h-circle of radius $r$. Show that
$\mathop{\rm circum}\nolimits_{h}(r+1)>2{\hskip 0.5pt\cdot\nobreak\hskip
0.5pt}\mathop{\rm circum}\nolimits_{h}(r)$
for all $r>0$.
## Additional topics
### Chapter 13 Spherical geometry
Spherical geometry is the geometry of the surface of the unit sphere. This
type of geometry has practical applications in cartography, navigation and
astronomy.
Spherical geometry is a close relative of Euclidean and hyperbolic
geometry. Most theorems of hyperbolic geometry have spherical analogs
which are much easier to visualize.
We discuss a few theorems of spherical geometry; the proofs are not completely
rigorous.
#### Space and spheres
Let us repeat the construction of metric $d_{2}$ (page $\circ$ ‣ $\diamond$ ‣
1) in the space.
We will denote by $\mathbb{R}^{3}$ the set of all triples $(x,y,z)$ of real
numbers. Assume $A=(x_{A},y_{A},z_{A})$ and $B=(x_{B},y_{B},z_{B})$ are
arbitrary points. Let us define the metric on $\mathbb{R}^{3}$ the following
way
$AB\buildrel\mathrm{def}\over{=\\!\\!=}\sqrt{(x_{A}-x_{B})^{2}+(y_{A}-y_{B})^{2}+(z_{A}-z_{B})^{2}}.$
The obtained metric space is called _Euclidean space_.
Assume at least one of the real numbers $a$, $b$ or $c$ is distinct from zero.
Then the subset of points $(x,y,z)\in\mathbb{R}^{3}$ described by equation
$a{\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}x+b{\hskip 0.5pt\cdot\nobreak\hskip
0.5pt}y+c{\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}z+d=0$
is called _plane_.
It is straightforward to show that any plane in the Euclidean space is isometric
to the Euclidean plane. Further, any three points in the space lie on one plane.
That makes it possible to generalize many notions and results from Euclidean
plane geometry to the Euclidean space by applying plane geometry in the planes
of the space.
A sphere in the space is an analog of a circle in the plane.
Formally, the _sphere_ with center $O$ and radius $r$ is the set of points in
the space which lie at distance $r$ from $O$.
Let $A$ and $B$ be two points on the unit sphere centered at $O$. The
_spherical distance_ from $A$ to $B$ (briefly $AB_{s}$) is defined as
$|\measuredangle AOB|$.
In spherical geometry, the role of lines is played by the _great circles_; i.e.,
the intersections of the sphere with planes passing through $O$.
Note that the great circles do not form lines in the sense of Definition 1.
Also, any two distinct great circles intersect at two antipodal points. This
means that the sphere does not satisfy the axioms of the absolute plane.
#### Pythagorean theorem
Here is an analog of Pythagorean Theorems (6 and 14) in spherical geometry.
13.1. Theorem. Let $\triangle_{s}ABC$ be a spherical triangle with right angle
at $C$. Set $a=BC_{s}$, $b=CA_{s}$ and $c=AB_{s}$. Then
$\cos c=\cos a{\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}\cos b.$
In the proof we will use the notion of scalar product which we are about to
discuss.
Let $A$ and $B$ be two points in Euclidean space. Denote by
$v_{A}=\nobreak(x_{A},y_{A},z_{A})$ and $v_{B}=(x_{B},y_{B},z_{B})$ the
position vectors of $A$ and $B$ correspondingly. The scalar product of two
vectors $v_{A}$ and $v_{B}$ in $\mathbb{R}^{3}$ is defined as
$\langle v_{A},v_{B}\rangle\buildrel\mathrm{def}\over{=\\!\\!=}x_{A}{\hskip
0.5pt\cdot\nobreak\hskip 0.5pt}x_{B}+y_{A}{\hskip 0.5pt\cdot\nobreak\hskip
0.5pt}y_{B}+z_{A}{\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}z_{B}.$ ➊
Assume both vectors $v_{A}$ and $v_{B}$ are nonzero and $\varphi$ is the angle
measure between these two vectors. In this case the scalar product can be
expressed the following way:
$\langle v_{A},v_{B}\rangle=|v_{A}|{\hskip 0.5pt\cdot\nobreak\hskip
0.5pt}|v_{B}|{\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}\cos\varphi,$
where
$\displaystyle|v_{A}|$ $\displaystyle=\sqrt{x_{A}^{2}+y_{A}^{2}+z_{A}^{2}},$
$\displaystyle|v_{B}|$ $\displaystyle=\sqrt{x_{B}^{2}+y_{B}^{2}+z_{B}^{2}}.$
Now, assume the points $A$ and $B$ lie on the unit sphere in $\mathbb{R}^{3}$
centered at the origin. In this case $|v_{A}|=|v_{B}|=1$. By ➊ ‣ 13 we get
$\cos AB_{s}=\langle v_{A},v_{B}\rangle.$ ➋
This is the key formula on which the following proof is built.
Proof. Since the angle at $C$ is right, we can choose coordinates in
$\mathbb{R}^{3}$ so that $v_{C}=\nobreak(0,0,1)$, $v_{A}$ lies in $xz$-plane,
so $v_{A}=\nobreak(x_{A},0,z_{A})$ and $v_{B}$ lies in $yz$-plane, so
$v_{B}=(0,y_{B},z_{B})$.
Applying ➋ ‣ 13, we get
$\displaystyle z_{A}$ $\displaystyle=\langle v_{C},v_{A}\rangle=\cos b,$
$\displaystyle z_{B}$ $\displaystyle=\langle v_{C},v_{B}\rangle=\cos a.$
Applying ➋ ‣ 13 again, we get
$\displaystyle\cos c$ $\displaystyle=\langle v_{A},v_{B}\rangle=$
$\displaystyle=x_{A}{\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}0+0{\hskip
0.5pt\cdot\nobreak\hskip 0.5pt}y_{B}+z_{A}{\hskip 0.5pt\cdot\nobreak\hskip
0.5pt}z_{B}=$ $\displaystyle=\cos b{\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}\cos
a.$
∎
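The coordinate computation in the proof is easy to reproduce numerically; the following sketch (our own, in Python) sets up a right spherical triangle exactly as above and compares both sides of the identity.

```python
import math

def sph_dist(u, v):
    # spherical distance between points of the unit sphere = the angle between
    # their position vectors
    return math.acos(sum(a * b for a, b in zip(u, v)))

# a right spherical triangle set up as in the proof: C at (0, 0, 1),
# A in the xz-plane, B in the yz-plane
b, a = 0.7, 0.4                                   # the legs b = CA_s and a = BC_s
A = (math.sin(b), 0.0, math.cos(b))
B = (0.0, math.sin(a), math.cos(a))
c = sph_dist(A, B)                                # the hypotenuse AB_s
print(math.cos(c), math.cos(a) * math.cos(b))     # both sides of cos c = cos a * cos b
```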
13.2. Exercise. Show that if $\triangle_{s}ABC$ is a spherical triangle with
right angle at $C$ and $AC_{s}=BC_{s}=\tfrac{\pi}{4}$, then
$AB_{s}=\tfrac{\pi}{3}$.
Try to find two solutions, with and without using the spherical Pythagorean
theorem.
#### Inversion of the space
Stereographic projection is a special type of map between the sphere and the
inversive plane. The Poincaré model of hyperbolic geometry is a direct analog of
the stereographic projection for spherical geometry.
One can also define inversion in the sphere the same way as we define
inversion in the circle.
Formally, let $\Sigma$ be the sphere with center $O$ and radius $r$. The
_inversion_ in $\Sigma$ of a point $P$ is the point $P^{\prime}\in[OP)$ such
that
$OP{\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}OP^{\prime}=r^{2}.$
In this case, the sphere $\Sigma$ will be called the _sphere of inversion_ and
its center is called _center of inversion_.
We also add $\infty$ to the space and assume that the center of inversion is
mapped to $\infty$ and the other way around. The space $\mathbb{R}^{3}$ with
the point $\infty$ will be called _inversive space_.
The inversion of the space has many properties of the inversion of the plane.
Most important for us are the analogs of Theorems 9, 9 and 9, which can be
summarized as follows.
13.3. Theorem. The inversion in the sphere has the following properties:
1. (a)
Inversion maps a sphere or a plane into a sphere or a plane.
2. (b)
Inversion maps a circle or a line into a circle or a line.
3. (c)
Inversion preserves the cross-ratio; i.e., if $A^{\prime}$, $B^{\prime}$,
$C^{\prime}$ and $D^{\prime}$ are the inversions of the points $A$, $B$, $C$
and $D$ correspondingly, then
$\frac{AB{\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}CD}{BC{\hskip
0.5pt\cdot\nobreak\hskip 0.5pt}DA}=\frac{A^{\prime}B^{\prime}{\hskip
0.5pt\cdot\nobreak\hskip
0.5pt}C^{\prime}D^{\prime}}{B^{\prime}C^{\prime}{\hskip
0.5pt\cdot\nobreak\hskip 0.5pt}D^{\prime}A^{\prime}}.$
4. (d)
Inversion maps arcs into arcs.
5. (e)
Inversion preserves the absolute value of the angle measure between tangent
half-lines to the arcs.
Instead of proof. We do not present the proofs here, but they are very similar
to the corresponding proofs in plane geometry. If you want to do it yourself,
prove the following lemma and use it together with the observation that any
circle in the space can be presented as an intersection of two spheres.
13.4. Lemma. Let $\Sigma$ be a subset of the Euclidean space which contains at
least two points. Fix a point $O$ in the space.
Then $\Sigma$ is a sphere if and only if for any plane $\Pi$ passing through
$O$, the intersection $\Pi\cap\Sigma$ is either an empty set, a one-point set,
or a circle.
#### Stereographic projection
Consider the unit sphere $\Sigma$ in Euclidean space centered at the origin
$(0,0,0)$. This sphere can be described by equation $x^{2}+y^{2}+z^{2}=1$.
Denote by $\Pi$ the $xy$-plane; it is defined by the equation $z=0$.
Clearly $\Pi$ runs through the center of $\Sigma$.
Denote by $N=(0,0,1)$ the “North Pole” and by $S=(0,0,-1)$ the “South Pole”
of $\Sigma$; these are the points on the sphere which have extremal distances
to $\Pi$. Denote by $\Omega$ the “equator” of $\Sigma$; it is the intersection
$\Sigma\cap\Pi$.
For any point $P\neq S$ on $\Sigma$, consider the line $(SP)$ in the space.
This line intersects $\Pi$ in exactly one point, say $P^{\prime}$. We set in
addition that $S^{\prime}=\infty$.
The map $P\mapsto P^{\prime}$ is the _stereographic projection from $\Sigma$
to $\Pi$ from the South Pole_. The inverse of this map
$P^{\prime}\mapsto\nobreak P$ is called _stereographic projection from $\Pi$
to $\Sigma$ from the South Pole_.
The plane through $P$, $O$ and $S$.
The same way one can define _stereographic projection from the North Pole_.
Note that $P=P^{\prime}$ if and only if $P\in\Omega$.
Note that if $\Sigma$ and $\Pi$ are as above, then the stereographic projections
$\Sigma\to\Pi$ and $\Pi\to\Sigma$ from $S$ are the restrictions to $\Sigma$ and
$\Pi$ correspondingly of the inversion in the sphere with center $S$ and radius
$\sqrt{2}$.
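In coordinates, the projection from the South Pole sends $P=(x,y,z)$ to $P^{\prime}=\bigl(\tfrac{x}{1+z},\tfrac{y}{1+z},0\bigr)$; this explicit formula is not stated above, but it follows by intersecting the line $(SP)$ with $\Pi$. The sketch below (our own) uses it to check the claim $SP\cdot SP^{\prime}=2$ for a few random points.

```python
import math, random

def stereo_from_sphere(p):
    # stereographic projection from the South Pole S = (0, 0, -1):
    # a point P of the unit sphere (P != S) goes to the point of the xy-plane on the line (SP)
    x, y, z = p
    return (x / (1 + z), y / (1 + z), 0.0)

def dist(p, q):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

S = (0.0, 0.0, -1.0)
random.seed(0)
for _ in range(3):
    u, v = random.uniform(0, 2 * math.pi), random.uniform(-0.9, 0.9)
    P = (math.sqrt(1 - v * v) * math.cos(u), math.sqrt(1 - v * v) * math.sin(u), v)
    print(dist(S, P) * dist(S, stereo_from_sphere(P)))   # always 2 = (sqrt(2))^2
```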
From above and Theorem 13, it follows that the stereographic projection
preserves the angles between arcs; more precisely _the absolute value of the
angle measure_ between arcs on the sphere.
This makes it particularly useful in cartography. A map of a big region of the
earth cannot be done at a constant scale, but using the stereographic projection
one can keep the angles between roads the same as on the earth.
In the following exercises, we assume that $\Sigma$, $\Pi$, $\Omega$, $O$, $S$
and $N$ are as above.
13.5. Exercise. Show that the composition of stereographic projections from
$\Pi$ to $\Sigma$ from $S$ and from $\Sigma$ to $\Pi$ from $N$ is the
inversion of the plane $\Pi$ in $\Omega$.
13.6. Exercise. Show that the image of a great circle is a cline in the plane
which intersects $\Omega$ at two opposite points.
13.7. Exercise. Fix a point $P\in\Pi$ and let $Q$ be another point in
$\Pi$. Denote by $P^{\prime}$ and $Q^{\prime}$ their stereographic projections
in $\Sigma$. Set $x=PQ$ and $y=P^{\prime}Q^{\prime}_{s}$. Show that
$\lim_{x\to 0}\frac{y}{x}=\frac{2}{1+OP^{2}}.$
Compare with Lemma 12.
#### Central projection
Let $\Sigma$ be the unit sphere centered at the origin which will be denoted
as $O$. Denote by $\Pi^{+}$ the plane described by the equation $z=1$. This plane
is parallel to the $xy$-plane and it passes through the North Pole $N=(0,0,1)$ of
$\Sigma$.
Recall that the north hemisphere of $\Sigma$ is the subset of points
$(x,y,z)\in\nobreak\Sigma$ such that $z>0$. The north hemisphere will be
denoted by $\Sigma^{+}$.
Given a point $P\in\Sigma^{+}$, consider half-line $[OP)$ and denote by
$P^{\prime}$ the intersection of $[OP)$ and $\Pi^{+}$. Note that if
$P=(x,y,z)$ then $P^{\prime}=(\tfrac{x}{z},\tfrac{y}{z},1)$. It follows that
$P\mapsto P^{\prime}$ is a bijection between $\Sigma^{+}$ and $\Pi^{+}$.
The described map $\Sigma^{+}\to\Pi^{+}$ is called _central projection_ of
hemisphere $\Sigma^{+}$.
In spherical geometry, central projection is analogous to the Klein model of
hyperbolic plane.
Note that the central projection sends intersections of great circles with
$\Sigma^{+}$ to lines in $\Pi^{+}$. The latter follows since great circles
are formed by intersections of $\Sigma$ with planes passing through the origin,
and the lines in $\Pi^{+}$ are formed by intersections of $\Pi^{+}$ with these
planes.
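This can be seen in coordinates as well: a great circle lies in a plane $a\cdot x+b\cdot y+c\cdot z=0$, and the image of a point under the central projection satisfies the line equation $a\cdot X+b\cdot Y+c=0$ in $\Pi^{+}$. The following sketch (our own; the particular plane is arbitrary) checks this for a few points.

```python
import math

def central_projection(p):
    # a point (x, y, z) of the north hemisphere goes to (x/z, y/z, 1)
    x, y, z = p
    return (x / z, y / z, 1.0)

# a great circle: the intersection of the unit sphere with the plane x - 2y + 3z = 0
n = (1.0, -2.0, 3.0)                                      # normal vector of the plane
e1 = tuple(v / math.sqrt(5) for v in (2.0, 1.0, 0.0))     # unit vector in the plane
e2 = tuple(v / math.sqrt(70) for v in (-3.0, 6.0, 5.0))   # unit vector in the plane, orthogonal to e1
for t in (0.1, 0.3, 0.5):                                 # parameters giving points with z > 0
    P = tuple(math.cos(t) * u + math.sin(t) * v for u, v in zip(e1, e2))
    X, Y, _ = central_projection(P)
    print(n[0] * X + n[1] * Y + n[2])                     # ~0: the images lie on one line
```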
13.8. Exercise. Assume that $\triangle_{s}NBC$ has a right angle at $C$, that $N$
is the North Pole, and that the triangle lies completely in the north hemisphere. Let
$\triangle NB^{\prime}C^{\prime}$ be the image of $\triangle_{s}NBC$ under
central projection.
Observe that $\triangle NB^{\prime}C^{\prime}$ has right angle at
$C^{\prime}$.
Use this observation and the standard Pythagorean theorem for $\triangle
NB^{\prime}C^{\prime}$ to prove the spherical Pythagorean theorem for
$\triangle_{s}NBC$.
13.9. Exercise. Consider a nondegenerate spherical triangle
$\triangle_{s}ABC$. Assume that $\Pi^{+}$ is parallel to the plane passing
through $A$, $B$ and $C$. Denote by $A^{\prime}$, $B^{\prime}$ and
$C^{\prime}$ the central projections of $A$, $B$ and $C$.
1. (a)
Show that the midpoints of $[A^{\prime}B^{\prime}]$, $[B^{\prime}C^{\prime}]$
and $[C^{\prime}A^{\prime}]$ are the central projections of the midpoints of
$[AB]_{s}$, $[BC]_{s}$ and $[CA]_{s}$ correspondingly.
2. (b)
Use part (a) to show that medians of spherical triangle intersect at one
point.
3. (c)
Compare to Exercise 14.
### Chapter 14 Klein model
The Klein model is another model of the hyperbolic plane discovered by Beltrami.
The Klein and Poincaré models say exactly the same thing but in two different
languages. Some problems in hyperbolic geometry admit a simpler proof using the
Klein model and others have a simpler proof in the Poincaré model. Therefore it
is worth knowing both.
#### Special bijection of h-plane to itself
Consider the Poincaré disc model with absolute at the unit circle $\Omega$
centered at $O$. Choose a coordinate system $(x,y)$ on the plane with origin
at $O$, so the circle $\Omega$ is described by the equation $x^{2}+y^{2}=1$.
Let us think of our plane $\Pi$ as lying in the Euclidean space as the
$xy$-plane. Denote by $\Sigma$ the unit sphere centered at $O$; it is
described by the equation
$x^{2}+y^{2}+z^{2}=1.$
Set $S=(0,0,-1)$ and $N=(0,0,1)$; these are the South and North Poles of
$\Sigma$.
Consider the stereographic projection $\Pi\to\Sigma$ from $S$; given a point
$P\in\Pi$, denote its image by $P^{\prime}$. Note that the h-plane is mapped to
the North Hemisphere; i.e., to the set of points $(x,y,z)$ in $\Sigma$
described by inequality $z>0$.
For a point $P^{\prime}\in\Sigma$ consider its foot point $\hat{P}$ on $\Pi$;
this is the closest point on $\Pi$ from $P^{\prime}$.
The composition $P\mapsto\nobreak\hat{P}$ of these two maps is a bijection of
h-plane to itself.
Note that $P=\hat{P}$ if and only if $P\in\Omega$ or $P=O$ or $P=\infty$.
14.1. Exercise. Show that the map $P\mapsto\hat{P}$ described above can be
described the following way: set $\hat{O}=O$ and for any other point $P$
take $\hat{P}\in[OP)$ such that
$O\hat{P}=\frac{2{\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}x}{1+x^{2}},$
where $x=OP$.
14.2. Lemma. Let $(PQ)_{h}$ be an h-line with the ideal points $A$ and $B$.
Then $\hat{P},\hat{Q}\in[AB]$.
Moreover
$\frac{A\hat{Q}{\hskip 0.5pt\cdot\nobreak\hskip
0.5pt}B\hat{P}}{\hat{Q}B{\hskip 0.5pt\cdot\nobreak\hskip
0.5pt}\hat{P}A}=\left(\frac{AQ{\hskip 0.5pt\cdot\nobreak\hskip
0.5pt}BP}{QB{\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}PA}\right)^{2}.$ ➊
In particular
$PQ_{h}=\tfrac{1}{2}{\hskip 0.5pt\cdot\nobreak\hskip
0.5pt}\left|\ln\frac{A\hat{Q}{\hskip 0.5pt\cdot\nobreak\hskip
0.5pt}B\hat{P}}{\hat{Q}B{\hskip 0.5pt\cdot\nobreak\hskip
0.5pt}\hat{P}A}\right|.$
Proof. Consider the stereographic projection $\Pi\to\Sigma$ from the South
Pole. Denote by $P^{\prime}$ and $Q^{\prime}$ the images of $P$ and $Q$.
According to Theorem 13(c),
$\frac{AQ{\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}BP}{QB{\hskip
0.5pt\cdot\nobreak\hskip 0.5pt}PA}=\frac{AQ^{\prime}{\hskip
0.5pt\cdot\nobreak\hskip 0.5pt}BP^{\prime}}{Q^{\prime}B{\hskip
0.5pt\cdot\nobreak\hskip 0.5pt}P^{\prime}A}.$ ➋
By Theorem 13(e), each cline in $\Pi$ which is perpendicular to $\Omega$ is
mapped to a circle in $\Sigma$ which is still perpendicular to $\Omega$. It
follows that the stereographic projection sends $(PQ)_{h}$ to the intersection
of the north hemisphere of $\Sigma$ with a plane, say $\Lambda$, perpendicular
to $\Pi$.
Consider the plane $\Lambda$. It contains points $A$, $B$, $P^{\prime}$,
$\hat{P}$ and the circle $\Gamma=\Sigma\cap\Lambda$. (It also contains
$Q^{\prime}$ and $\hat{Q}$ but we will not use these points for a while.)
Note that
* $\diamond$
$A,B,P^{\prime}\in\Gamma$,
* $\diamond$
$[AB]$ is a diameter of $\Gamma$,
* $\diamond$
$(AB)=\Pi\cap\Sigma$,
* $\diamond$
$\hat{P}\in[AB]$
* $\diamond$
$(P^{\prime}\hat{P})\perp(AB)$.
Since $[AB]$ is a diameter of $\Gamma$, the angle $\angle AP^{\prime}B$ is right.
Hence $\triangle A\hat{P}P^{\prime}\sim\nobreak\triangle
AP^{\prime}B\sim\nobreak\triangle P^{\prime}\hat{P}B$. In particular
$\frac{AP^{\prime}}{BP^{\prime}}=\frac{A\hat{P}}{P^{\prime}\hat{P}}=\frac{P^{\prime}\hat{P}}{B\hat{P}}.$
Therefore
$\frac{A\hat{P}}{B\hat{P}}=\left(\frac{AP^{\prime}}{BP^{\prime}}\right)^{2}.$
➌
The same way we get
$\frac{A\hat{Q}}{B\hat{Q}}=\left(\frac{AQ^{\prime}}{BQ^{\prime}}\right)^{2}.$
➍
Finally note that ➋ ‣ 14+➌ ‣ 14+➍ ‣ 14 imply ➊ ‣ 14.
The last statement follows from ➊ ‣ 14 and the definition of h-distance.
Indeed,
$\displaystyle PQ_{h}$
$\displaystyle\buildrel\mathrm{def}\over{=\\!\\!=}\left|\ln\frac{AQ{\hskip
0.5pt\cdot\nobreak\hskip 0.5pt}BP}{QB{\hskip 0.5pt\cdot\nobreak\hskip
0.5pt}PA}\right|=$ $\displaystyle=\left|\ln\left(\frac{A\hat{Q}{\hskip
0.5pt\cdot\nobreak\hskip 0.5pt}B\hat{P}}{\hat{Q}B{\hskip
0.5pt\cdot\nobreak\hskip 0.5pt}\hat{P}A}\right)^{\frac{1}{2}}\right|=$
$\displaystyle=\tfrac{1}{2}{\hskip 0.5pt\cdot\nobreak\hskip
0.5pt}\left|\ln\frac{A\hat{Q}{\hskip 0.5pt\cdot\nobreak\hskip
0.5pt}B\hat{P}}{\hat{Q}B{\hskip 0.5pt\cdot\nobreak\hskip
0.5pt}\hat{P}A}\right|.$
∎
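Lemma 14 can be tested numerically: convert two points to the Klein model by the formula of the exercise above, intersect the chord through their images with the absolute, and compare half the logarithm of the cross-ratio with the h-distance. The sketch below (our own; the helper names are ours) does this using the closed-form h-distance from Chapter 12.

```python
import math

def to_klein(p):
    # the map P -> P^ from the previous section, written in complex coordinates
    return 2 * p / (1 + abs(p) ** 2)

def poincare_dist(p, q):
    # closed-form h-distance (assumed; agrees with Lemma 11 at the center)
    d = abs(p - q) / abs(1 - p.conjugate() * q)
    return math.log((1 + d) / (1 - d))

def klein_dist(p_hat, q_hat):
    # half the log of the cross-ratio along the chord through p_hat and q_hat (Lemma 14)
    u = (q_hat - p_hat) / abs(q_hat - p_hat)            # unit direction of the chord
    b = (p_hat.conjugate() * u).real
    c = abs(p_hat) ** 2 - 1
    A = p_hat + (-b - math.sqrt(b * b - c)) * u         # one ideal point of the chord
    B = p_hat + (-b + math.sqrt(b * b - c)) * u         # the other ideal point
    cross = (abs(A - q_hat) * abs(B - p_hat)) / (abs(q_hat - B) * abs(p_hat - A))
    return 0.5 * abs(math.log(cross))

P, Q = 0.3 + 0.1j, -0.2 + 0.5j
print(poincare_dist(P, Q), klein_dist(to_klein(P), to_klein(Q)))   # the two values agree
```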
14.3. Exercise. Let $\Gamma_{1}$, $\Gamma_{2}$ and $\Gamma_{3}$ be three
circles perpendicular to the circle $\Omega$. Let us denote by $[A_{1}B_{1}]$,
$[A_{2}B_{2}]$ and $[A_{3}B_{3}]$ the common chords of $\Omega$ and
$\Gamma_{1}$, $\Gamma_{2}$, $\Gamma_{3}$ correspondingly. Show that the chords
$[A_{1}B_{1}]$, $[A_{2}B_{2}]$ and $[A_{3}B_{3}]$ intersect at one point
inside $\Omega$ if and only if $\Gamma_{1}$, $\Gamma_{2}$ and $\Gamma_{3}$
intersect at two points.
#### Klein model
The following picture illustrates the map $P\mapsto\hat{P}$ described in the
previous section. If you take the picture on the left and apply the map
$P\mapsto\hat{P}$, you get the picture on the right. The picture on the right
gives a new way to look at the hyperbolic plane, which is called the _Klein
model_. One may think of the map $P\mapsto\hat{P}$ as a translation from
one model to the other.
Poincaré model (left) and Klein model (right).
In the Klein model things look different; some things become simpler, other
things become more complicated.
* $\diamond$
The h-lines in the Klein model are formed by chords. More precisely, they are
formed by the intersections of chords of the absolute with the h-plane.
* $\diamond$
The h-circles and equidistants in the Klein model are formed by ellipses and
their intersections with the h-plane. This follows since the stereographic
projection sends circles in the plane to circles on the unit sphere, and the
orthogonal projection of a circle back to the plane is an ellipse. (One may
define an ellipse as the projection to the plane of a circle which lies in the
space.)
* $\diamond$
To find the h-distance between the points $P$ and $Q$ in the Klein model, you
have to find the points of intersection, say $A$ and $B$, of the Euclidean
line $(PQ)$ with the absolute; then, by Lemma 14,
$PQ_{h}=\tfrac{1}{2}{\hskip 0.5pt\cdot\nobreak\hskip
0.5pt}\left|\ln\frac{AQ{\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}BP}{QB{\hskip
0.5pt\cdot\nobreak\hskip 0.5pt}PA}\right|.$
* $\diamond$
The angle measures in the Klein model are very different from the Euclidean angle
measures, and it is hard to figure them out by looking at the picture. For
example, all the intersecting h-lines in the picture above are perpendicular.
There are two useful exceptions:
* $\circ$
If $O$ is the center of absolute then
$\measuredangle_{h}AOB=\measuredangle AOB.$
* $\circ$
If $O$ is the center of absolute and $\measuredangle OAB=\pm\tfrac{\pi}{2}$
then
$\measuredangle_{h}OAB=\measuredangle OAB=\pm\tfrac{\pi}{2}.$
To find the angle measure in Klein model, you may apply a motion of h-plane
which moves the vertex of the angle to the center of absolute; once it is done
the hyperbolic and Euclidean angles have the same measure.
The following exercise is a hyperbolic analog of Exercise 13. This is the first
example of a statement which admits an easier proof using the Klein model.
14.4. Exercise. Let $P$ and $Q$ be points in the h-plane which lie at the same
distance from the center of the absolute. Observe that in the Klein model, the
h-midpoint of $[PQ]_{h}$ coincides with the Euclidean midpoint of $[PQ]_{h}$.
Conclude that if an h-triangle is inscribed in an h-circle then its medians
intersect at one point.
Think how to prove the same for a general h-triangle.
#### Hyperbolic Pythagorean theorem
14.5. Theorem. Assume that $\triangle_{h}ACB$ is a triangle in the h-plane with
right angle at $C$. Set $a=BC_{h}$, $b=CA_{h}$ and $c=AB_{h}$. Then
$\operatorname{ch}c=\operatorname{ch}a{\hskip 0.5pt\cdot\nobreak\hskip
0.5pt}\operatorname{ch}b.$ ➎
where $\operatorname{ch}$ denotes _hyperbolic cosine_ ; i.e., the function
defined the following way
$\operatorname{ch}x\buildrel\mathrm{def}\over{=\\!\\!=}\tfrac{e^{x}+e^{-x}}{2}.$
Proof. We will use the Klein model of the h-plane with the unit circle as the
absolute. We can assume that $A$ is the center of the absolute. Therefore both
$\angle_{h}ACB$ and $\angle ACB$ are right.
Set $s=BC$, $t=CA$, $u=AB$. According to Euclidean Pythagorean theorem (6),
$u^{2}=s^{2}+t^{2}.$
Note that
$\displaystyle b$ $\displaystyle=\tfrac{1}{2}{\hskip 0.5pt\cdot\nobreak\hskip
0.5pt}\ln\frac{1+t}{1-t};$ therefore $\displaystyle\operatorname{ch}b$
$\displaystyle=\frac{\left(\frac{1+t}{1-t}\right)^{\frac{1}{2}}+\left(\frac{1-t}{1+t}\right)^{\frac{1}{2}}}{2}=$
$\displaystyle=\frac{1}{\sqrt{1-t^{2}}}.$ The same way we get $\displaystyle
c$ $\displaystyle=\tfrac{1}{2}{\hskip 0.5pt\cdot\nobreak\hskip
0.5pt}\ln\frac{1+u}{1-u}$ and $\displaystyle\operatorname{ch}c$
$\displaystyle=\frac{\left(\frac{1+u}{1-u}\right)^{\frac{1}{2}}+\left(\frac{1-u}{1+u}\right)^{\frac{1}{2}}}{2}=$
$\displaystyle=\frac{1}{\sqrt{1-u^{2}}}.$
Let $X$ and $Y$ be the ideal points of $(BC)_{h}$. Applying the Pythagorean
theorem (6) again, we get
$CX^{2}=CY^{2}=1-t^{2}.$
Therefore
$\displaystyle a$ $\displaystyle=\tfrac{1}{2}{\hskip 0.5pt\cdot\nobreak\hskip
0.5pt}\ln\frac{\sqrt{1-t^{2}}+s}{\sqrt{1-t^{2}}-s}$ and
$\displaystyle\operatorname{ch}a$
$\displaystyle=\frac{\left(\frac{\sqrt{1-t^{2}}+s}{\sqrt{1-t^{2}}-s}\right)^{\frac{1}{2}}+\left(\frac{\sqrt{1-t^{2}}-s}{\sqrt{1-t^{2}}+s}\right)^{\frac{1}{2}}}{2}=$
$\displaystyle=\frac{\sqrt{1-t^{2}}}{\sqrt{1-t^{2}-s^{2}}}$
$\displaystyle=\frac{\sqrt{1-t^{2}}}{\sqrt{1-u^{2}}}$
Hence ➎ ‣ 14 follows. ∎
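The computation in the proof can be replayed numerically with concrete Euclidean lengths; the sketch below (our own) uses the three expressions for $a$, $b$ and $c$ obtained above.

```python
import math

def ch(x):
    # hyperbolic cosine, as defined above
    return (math.exp(x) + math.exp(-x)) / 2

# Euclidean side lengths in the Klein model with A at the center, as in the proof
s, t = 0.3, 0.4                       # s = BC, t = CA
u = math.hypot(s, t)                  # u = AB by the Euclidean Pythagorean theorem

b = 0.5 * math.log((1 + t) / (1 - t))                                        # b = CA_h
c = 0.5 * math.log((1 + u) / (1 - u))                                        # c = AB_h
a = 0.5 * math.log((math.sqrt(1 - t * t) + s) / (math.sqrt(1 - t * t) - s))  # a = BC_h
print(ch(c), ch(a) * ch(b))           # both sides of the hyperbolic Pythagorean identity
```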
14.6. Exercise. Give a proof of Proposition 12 using Klein model.
#### Bolyai’s construction
Assume we need to construct a line asymptotically parallel to a given line
through a given point. The initial configuration is given by three points,
say $P$, $A$ and $B$, and we need to construct a line through $P$ which is
asymptotically parallel to $\ell=(AB)$.
Note that ideal points do not lie in the h-plane, so there is no way to use
them in the construction.
The following construction was given by Bolyai. Unlike the other constructions
given earlier in the lectures, this construction works in the absolute plane;
i.e., it works in the Euclidean as well as in the hyperbolic plane. We assume
that you know a compass-and-ruler construction of the perpendicular line through
a given point.
14.7. Bolyai’s construction.
1. 1.
Construct the line $m$ through $P$ which is perpendicular to $\ell$. Denote by
$Q$ the foot point of $P$ on $\ell$.
2. 2.
Construct the line $n$ through $P$ which is perpendicular to $m$.
3. 3.
Draw the circle $\Gamma_{1}$ with center $Q$ through $P$ and mark by $R$ a
point of intersection of $\Gamma_{1}$ with $\ell$.
4. 4.
Construct the line $k$ through $R$ which is perpendicular to $n$.
5. 5.
Draw the circle $\Gamma_{2}$ with center $P$ through $Q$ and mark by $T$ a
point of intersection of $\Gamma_{2}$ with $k$.
6. 6.
The line $PT$ is asymptotically parallel to $\ell$.
Note that in the Euclidean plane $\Gamma_{2}$ is tangent to $k$, so the point $T$
is uniquely defined. In the hyperbolic plane $\Gamma_{2}$ intersects $k$ in
two points; both of the corresponding lines are asymptotically parallel to
$\ell$, one from the left and one from the right.
To prove that Bolyai’s construction gives the asymptotically parallel line in
h-plane, it is sufficient to show the following.
14.8. Proposition. Assume $P$, $Q$, $R$, $S$, $T$ are points in the h-plane such
that
* $\diamond$
$S\in(RT)_{h}$,
* $\diamond$
$(PQ)_{h}\perp(QR)_{h}$,
* $\diamond$
$(PS)_{h}\perp(PQ)_{h}$,
* $\diamond$
$(RT)_{h}\perp(PS)_{h}$ and
* $\diamond$
$(PT)_{h}$ and $(QR)_{h}$ are asymptotically parallel.
Then $QR_{h}=PT_{h}$.
Proof. We will use the Klein model. Without loss of generality, we may
assume that $P$ is the center of the absolute. As was noted above,
in this case the corresponding Euclidean lines are also perpendicular;
i.e., $(PQ)\perp(QR)$, $(PS)\perp(PQ)$ and $(RT)\perp(PS)$.
Denote by $A$ the common ideal point of $(QR)_{h}$ and $(PT)_{h}$. Denote by $B$
and $C$ the remaining ideal points of $(QR)_{h}$ and $(PT)_{h}$
correspondingly.
Note that the Euclidean lines $(PQ)$, $(TR)$ and $(CB)$ are parallel.
Therefore $\triangle AQP\sim\triangle ART\sim\triangle ABC$. In particular,
$\frac{AC}{AB}=\frac{AT}{AR}=\frac{AP}{AQ}.$
It follows that
$\frac{AT}{AR}=\frac{AP}{AQ}=\frac{CT}{BR}=\frac{CP}{BQ}.$
In particular
$\frac{AT{\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}CP}{TC{\hskip
0.5pt\cdot\nobreak\hskip 0.5pt}PA}=\frac{AR{\hskip 0.5pt\cdot\nobreak\hskip
0.5pt}BQ}{RB{\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}QA};$
hence $QR_{h}=PT_{h}$. ∎
### Chapter 15 Complex coordinates
In this chapter we give an interpretation of inversive geometry using complex
coordinates. The results of this chapter will not be used further in the
lectures.
#### Complex numbers
Informally, a complex number is a number that can be put in the form
$z=x+i{\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}y,$ ➊
where $x$ and $y$ are real numbers and $i^{2}=-1$.
The set of complex numbers will be further denoted by $\mathbb{C}$. If $x$,
$y$ and $z$ are as in ➊ ‣ 15, then $x$ is called the real part and $y$ the
imaginary part of the complex number $z$. Briefly it is written as
$x=\mathop{\rm Re}\nolimits z$ and $y=\mathop{\rm Im}\nolimits z$.
On the more formal level, a complex number is a pair of real numbers $(x,y)$
with addition and multiplication described below. The formula $x+i{\hskip
0.5pt\cdot\nobreak\hskip 0.5pt}y$ is only a convenient way to write the pair
$(x,y)$.
$\displaystyle(x_{1}+i{\hskip 0.5pt\cdot\nobreak\hskip
0.5pt}y_{1})+(x_{2}+i{\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}y_{2})$
$\displaystyle\buildrel\mathrm{def}\over{=\\!\\!=}(x_{1}+x_{2})+i{\hskip
0.5pt\cdot\nobreak\hskip 0.5pt}(y_{1}+y_{2});$ $\displaystyle(x_{1}+i{\hskip
0.5pt\cdot\nobreak\hskip 0.5pt}y_{1}){\hskip 0.5pt\cdot\nobreak\hskip
0.5pt}(x_{2}+i{\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}y_{2})$
$\displaystyle\buildrel\mathrm{def}\over{=\\!\\!=}(x_{1}{\hskip
0.5pt\cdot\nobreak\hskip 0.5pt}x_{2}-y_{1}{\hskip 0.5pt\cdot\nobreak\hskip
0.5pt}y_{2})+i{\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}(x_{1}{\hskip
0.5pt\cdot\nobreak\hskip 0.5pt}y_{2}+y_{1}{\hskip 0.5pt\cdot\nobreak\hskip
0.5pt}x_{2}).$
#### Complex coordinates
Recall that one can think of Euclidean plane as the set of all pairs of real
numbers $(x,y)$ equipped with the metric
$AB=\sqrt{(x_{A}-x_{B})^{2}+(y_{A}-y_{B})^{2}}$
where $A=(x_{A},y_{A})$ and $B=(x_{B},y_{B})$.
One can pack the coordinates $(x,y)$ of a point in the Euclidean plane into one
complex number $z=x+i{\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}y$. This way we
get a one-to-one correspondence between points of the Euclidean plane and
$\mathbb{C}$. Given a point $Z=(x,y)$, the complex number $z=x+i{\hskip
0.5pt\cdot\nobreak\hskip 0.5pt}y$ is called the _complex coordinate_ of $Z$.
Note that if $O$, $E$ and $I$ are the points in the plane with complex
coordinates $0$, $1$ and $i$ then $\measuredangle EOI=\pm\tfrac{\pi}{2}$.
Further we assume that $\measuredangle EOI=\tfrac{\pi}{2}$; if not, one has to
change the direction of the $y$-coordinate.
#### Conjugation and absolute value
Let $z=x+i{\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}y$, where both $x$ and $y$ are
real. Denote by $Z$ the point in the plane with complex coordinate $z$.
If $y=0$, we say that $z$ is a _real_ complex number, and if $x=0$ we say that
$z$ is an _imaginary_ complex number. The sets of points with real and imaginary
complex coordinates form lines in the plane, which are called the _real_ and
_imaginary_ lines; they will be denoted by $\mathbb{R}$ and $i{\hskip
0.5pt\cdot\nobreak\hskip 0.5pt}\mathbb{R}$.
The complex number $\bar{z}=x-i{\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}y$ is
called the _complex conjugate_ of $z$.
Note that the point $\bar{Z}$ with complex coordinate $\bar{z}$ is the
reflection of $Z$ in the real line.
It is straightforward to check that
$\displaystyle x$ $\displaystyle=\mathop{\rm Re}\nolimits
z=\frac{z+\bar{z}}{2},$ $\displaystyle y$ $\displaystyle=\mathop{\rm
Im}\nolimits z=\frac{z-\bar{z}}{i{\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}2},$
$\displaystyle x^{2}+y^{2}$ $\displaystyle=z{\hskip 0.5pt\cdot\nobreak\hskip
0.5pt}\bar{z}.$ ➋
The last formula in ➋ ‣ 15 makes it possible to express the quotient
$\tfrac{w}{z}$ of two complex numbers $w$ and $z=x+i{\hskip
0.5pt\cdot\nobreak\hskip 0.5pt}y$:
$\frac{w}{z}=\tfrac{1}{z{\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}\bar{z}}{\hskip
0.5pt\cdot\nobreak\hskip 0.5pt}w{\hskip 0.5pt\cdot\nobreak\hskip
0.5pt}\bar{z}=\tfrac{1}{x^{2}+y^{2}}{\hskip 0.5pt\cdot\nobreak\hskip
0.5pt}w{\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}\bar{z}.$
Note that
$\displaystyle\overline{z+w}$ $\displaystyle=\bar{z}+\bar{w},$
$\displaystyle\overline{z-w}$ $\displaystyle=\bar{z}-\bar{w},$
$\displaystyle\overline{z{\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}w}$
$\displaystyle=\bar{z}{\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}\bar{w},$
$\displaystyle\overline{z/w}$ $\displaystyle=\bar{z}/\bar{w};$
i.e., all the algebraic operations _respect_ conjugation.
The value $\sqrt{x^{2}+y^{2}}=\sqrt{z{\hskip 0.5pt\cdot\nobreak\hskip
0.5pt}\bar{z}}$ is called _absolute value_ of $z$ and denoted by $|z|$.
Note that if $Z$ and $W$ are points in the Euclidean plane and $z$ and $w$
their complex coordinates then
$ZW=|z-w|.$
#### Euler’s formula
Let $\alpha$ be a real number. The following identity is called _Euler’s
formula_.
$e^{i{\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}\alpha}=\cos\alpha+i{\hskip
0.5pt\cdot\nobreak\hskip 0.5pt}\sin\alpha.$ ➌
In particular, $e^{i{\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}\pi}=-1$ and
$e^{i{\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}\frac{\pi}{2}}=i$.
Geometrically Euler’s formula means the following. Assume that $O$ and $E$ are
the points with complex coordinates $0$ and $1$ correspondingly. If $OZ=1$
and $\measuredangle EOZ\equiv\alpha$, then $e^{i{\hskip
0.5pt\cdot\nobreak\hskip 0.5pt}\alpha}$ is the complex coordinate of $Z$. In
particular, the complex coordinate of any point on the unit circle centered at
$O$ can be uniquely expressed as $e^{i{\hskip 0.5pt\cdot\nobreak\hskip
0.5pt}\alpha}$ for some $\alpha\in(-\pi,\pi]$.
A complex number $z$ is called _unit_ if $|z|=1$. According to Euler’s
identity, in this case
$z=e^{i{\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}\alpha}=\cos\alpha+i{\hskip
0.5pt\cdot\nobreak\hskip 0.5pt}\sin\alpha$
for some value $\alpha\in(-\pi,\pi]$.
Why should you think that ➌ ‣ 15 is true? The proof of Euler’s identity
depends on the way you define the exponential. If you have never had to take the
exponential of an imaginary number, you may take the right hand side of ➌ ‣ 15
as the definition of $e^{i{\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}\alpha}$.
In this case formally nothing has to be proved, but it is better to check that
$e^{i{\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}\alpha}$ satisfies the familiar
identities. For example
$e^{i{\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}\alpha}{\hskip
0.5pt\cdot\nobreak\hskip 0.5pt}e^{i{\hskip 0.5pt\cdot\nobreak\hskip
0.5pt}\beta}=e^{i{\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}(\alpha+\beta)}.$
The latter can be proved using the following trigonometric formulas, which we
assume to be known:
$\displaystyle\cos(\alpha+\beta)$ $\displaystyle=\cos\alpha{\hskip
0.5pt\cdot\nobreak\hskip 0.5pt}\cos\beta-\sin\alpha{\hskip
0.5pt\cdot\nobreak\hskip 0.5pt}\sin\beta$ $\displaystyle\sin(\alpha+\beta)$
$\displaystyle=\sin\alpha{\hskip 0.5pt\cdot\nobreak\hskip
0.5pt}\cos\beta+\cos\alpha{\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}\sin\beta$
If you know the power series for sine, cosine and the exponential, the following
computation might convince you that ➌ ‣ 15 is the right definition.
$\displaystyle e^{i{\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}x}$
$\displaystyle{}=1+i{\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}x+\frac{(i{\hskip
0.5pt\cdot\nobreak\hskip 0.5pt}x)^{2}}{2!}+\frac{(i{\hskip
0.5pt\cdot\nobreak\hskip 0.5pt}x)^{3}}{3!}+\frac{(i{\hskip
0.5pt\cdot\nobreak\hskip 0.5pt}x)^{4}}{4!}+\frac{(i{\hskip
0.5pt\cdot\nobreak\hskip 0.5pt}x)^{5}}{5!}+\cdots=$ $\displaystyle=1+i{\hskip
0.5pt\cdot\nobreak\hskip 0.5pt}x-\frac{x^{2}}{2!}-i{\hskip
0.5pt\cdot\nobreak\hskip 0.5pt}\frac{x^{3}}{3!}+\frac{x^{4}}{4!}+i{\hskip
0.5pt\cdot\nobreak\hskip 0.5pt}\frac{x^{5}}{5!}-\cdots=$
$\displaystyle=\left(1-\frac{x^{2}}{2!}+\frac{x^{4}}{4!}-\cdots\right)+i{\hskip
0.5pt\cdot\nobreak\hskip
0.5pt}\left(x-\frac{x^{3}}{3!}+\frac{x^{5}}{5!}-\cdots\right)=$
$\displaystyle=\cos x+i{\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}\sin x.$
#### Argument and polar coordinates
As above, assume that $O$ and $E$ denote the points with complex coordinates
$0$ and $1$ correspondingly.
Let $Z$ be a point distinct from $O$. Set $\rho=OZ$ and
$\vartheta=\measuredangle EOZ$. The pair $(\rho,\vartheta)$ is called the _polar
coordinates_ of $Z$.
If $z$ is the complex coordinate of $Z$, then $\rho=|z|$. The value
$\vartheta$ is called the argument of $z$ (briefly, $\vartheta=\arg z$). In this
case
$z=\rho{\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}e^{i{\hskip
0.5pt\cdot\nobreak\hskip 0.5pt}\vartheta}=\rho{\hskip 0.5pt\cdot\nobreak\hskip
0.5pt}(\cos\vartheta+i{\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}\sin\vartheta).$
Note that
$\displaystyle\arg(z{\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}w)$
$\displaystyle\equiv\arg z+\arg w$ and $\displaystyle\arg\tfrac{z}{w}$
$\displaystyle\equiv\arg z-\arg w$
if $z,w\neq 0$. In particular, if $Z$, $V$ and $W$ are points with complex
coordinates $z$, $v$ and $w$ correspondingly, then
$\measuredangle
VZW=\arg\\!\left(\frac{w-z}{v-z}\right)\equiv\arg(w-z)-\arg(v-z)$ ➍
once the left hand side is defined.
15.1. Exercise. Use the formula ➍ ‣ 15 to show that in any triangle $\triangle
ZVW$
$\measuredangle ZVW+\measuredangle VWZ+\measuredangle WZV\equiv\pi.$
15.2. Exercise. Assume that the points $V$, $W$ and $Z$ have complex coordinates
$v$, $w$ and $v{\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}w$ correspondingly, and
the points $O$ and $E$ are as above. Show that
$\triangle OEV\sim\triangle OWZ.$
The following theorem is a reformulation of Theorem 8 which uses complex
coordinates.
15.3. Theorem. Let $UVWZ$ be a quadrilateral and $u$, $v$, $w$ and $z$ be the
complex coordinates of its vertices. Then $UVWZ$ is inscribed if and only if
the number
$\frac{(v-u){\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}(w-z)}{(v-w){\hskip
0.5pt\cdot\nobreak\hskip 0.5pt}(z-u)}$
is real.
The value $\frac{(v-u){\hskip 0.5pt\cdot\nobreak\hskip
0.5pt}(w-z)}{(v-w){\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}(z-u)}$ will be
called the _complex cross-ratio_; it will be discussed in more detail below.
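A quick numerical illustration (our own, in Python): for four points on one circle the complex cross-ratio is real, while for four points not on one cline it is not.

```python
import cmath

def complex_cross_ratio(u, v, w, z):
    # the quantity from the theorem above
    return ((v - u) * (w - z)) / ((v - w) * (z - u))

# four points on one circle (here the unit circle)
u, v, w, z = (cmath.exp(1j * a) for a in (0.3, 1.1, 2.0, 4.5))
print(complex_cross_ratio(u, v, w, z))           # imaginary part is ~0
# four points which do not lie on one cline
print(complex_cross_ratio(0, 1, 1 + 1j, 3j))     # imaginary part is far from 0
```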
15.4. Exercise. Observe that the complex number $z\neq 0$ is real if and only
if $\arg z=0$ or $\pi$; in other words, $2{\hskip 0.5pt\cdot\nobreak\hskip
0.5pt}\arg z\equiv 0$.
Use this observation to show that Theorem 15 is indeed a reformulation of
Theorem 8.
#### Möbius transformations
15.5. Exercise. Watch video “Möbius Transformations Revealed” by Douglas
Arnold and Jonathan Rogness. (It is 3 minutes long and available on YouTube.)
The complex plane $\mathbb{C}$ extended by one ideal number $\infty$ is called the extended complex plane. It is denoted by $\hat{\mathbb{C}}$, so
$\hat{\mathbb{C}}=\mathbb{C}\cup\\{\infty\\}.$
A _Möbius transformation_ of $\hat{\mathbb{C}}$ is a function of one complex variable $z$ which can be written as
$f(z)=\frac{a\cdot z+b}{c\cdot z+d},$
where the coefficients $a$, $b$, $c$, $d$ are complex numbers satisfying $a\cdot d-b\cdot c\not=0$. (If $a\cdot d-b\cdot c=0$ the function defined above is constant and is not considered to be a Möbius transformation.)
In case $c\not=0$, we assume that
$f(-d/c)=\infty\ \ \text{and}\ \ f(\infty)=a/c;$
and if $c=0$ we assume
$f(\infty)=\infty.$
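These conventions are convenient to keep in mind when experimenting; the following Python sketch (with None standing for the ideal number $\infty$) evaluates a Möbius transformation on the extended complex plane exactly as prescribed above.

```python
INF = None  # marker for the ideal number "infinity"

def mobius(a, b, c, d, z):
    # evaluate (a*z + b)/(c*z + d) on the extended complex plane
    assert a * d - b * c != 0, "not a Moebius transformation"
    if z is INF:
        return INF if c == 0 else a / c
    if c != 0 and c * z + d == 0:
        return INF  # z = -d/c is sent to infinity
    return (a * z + b) / (c * z + d)

print(mobius(1, 0, 1, -1, 1))    # z/(z-1): the pole z = 1 goes to infinity, printed as None
print(mobius(1, 0, 1, -1, INF))  # f(infinity) = a/c = 1.0
```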
#### Elementary transformations
The following three types of Möbius transformations are called _elementary_.
1. 1.
$z\mapsto z+w,$
2. 2.
$z\mapsto w\cdot z$ for $w\neq 0,$
3. 3.
$z\mapsto\frac{1}{z}.$
The geometric interpretations are as follows. As before, we denote by $O$ the point with complex coordinate $0$.
The first map $z\mapsto z+w$ corresponds to the so-called _parallel translation_ of the Euclidean plane; its geometric meaning should be evident.
The second map is called a _rotational homothety_ with center at $O$. That is, the point $O$ maps to itself and any other point $Z$ maps to a point $Z^{\prime}$ such that $OZ^{\prime}=|w|\cdot OZ$ and $\measuredangle ZOZ^{\prime}=\arg w$.
The third map can be described as a composition of the inversion in the unit circle centered at $O$ and the reflection in $\mathbb{R}$ (in either order). Indeed, $\arg z\equiv-\arg\tfrac{1}{z}$; therefore
$\arg z=\arg(1/\bar{z});$
i.e., if the points $Z$ and $Z^{\prime}$ have complex coordinates $z$ and $1/\bar{z}$, then $Z^{\prime}\in[OZ)$. Clearly $OZ=|z|$ and $OZ^{\prime}=|1/\bar{z}|=\tfrac{1}{|z|}$. Therefore $Z^{\prime}$ is the inversion of $Z$ in the unit circle centered at $O$. Finally, the reflection of $Z^{\prime}$ in $\mathbb{R}$ has complex coordinate $\tfrac{1}{z}=\overline{(1/\bar{z})}$.
15.6. Proposition. A map $f\colon\hat{\mathbb{C}}\to\hat{\mathbb{C}}$ is a Möbius transformation if and only if it can be expressed as a composition of elementary Möbius transformations.
Proof. ($\Rightarrow$). Consider the Möbius transformation
$f(z)=\frac{a\cdot z+b}{c\cdot z+d}.$
It is straightforward to check that
$f(z)=f_{4}\circ f_{3}\circ f_{2}\circ f_{1}(z),$ ➎
where
* $\diamond$
$f_{1}(z)=z+\tfrac{d}{c}$,
* $\diamond$
$f_{2}(z)=\tfrac{1}{z}$,
* $\diamond$
$f_{3}(z)=-\tfrac{a\cdot d-b\cdot c}{c^{2}}\cdot z$,
* $\diamond$
$f_{4}(z)=z+\tfrac{a}{c}$
if $c\neq 0$ and
* $\diamond$
$f_{1}(z)=\tfrac{a}{d}\cdot z$,
* $\diamond$
$f_{2}(z)=z+\tfrac{b}{d}$,
* $\diamond$
$f_{3}(z)=f_{4}(z)=z$
if $c=0$.
($\Leftarrow$). We need to show that by composing elementary transformations, we can only get Möbius transformations. Note that it is sufficient to check that the composition of a Möbius transformation
$f(z)=\frac{a\cdot z+b}{c\cdot z+d}$
with any elementary transformation is again a Möbius transformation.
The latter is done by means of direct calculations:
$\displaystyle\frac{a\cdot(z+w)+b}{c\cdot(z+w)+d}$ $\displaystyle=\frac{a\cdot z+(b+a\cdot w)}{c\cdot z+(d+c\cdot w)},$
$\displaystyle\frac{a\cdot(w\cdot z)+b}{c\cdot(w\cdot z)+d}$ $\displaystyle=\frac{(a\cdot w)\cdot z+b}{(c\cdot w)\cdot z+d},$
$\displaystyle\frac{a\cdot\frac{1}{z}+b}{c\cdot\frac{1}{z}+d}$ $\displaystyle=\frac{b\cdot z+a}{d\cdot z+c}.$
∎
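The decomposition ➎ used in the proof is also easy to confirm numerically. The following Python sketch (for the case $c\neq 0$, with arbitrarily chosen coefficients) composes the four elementary maps listed above and compares the result with $f$ at a few sample points.

```python
def f(z, a, b, c, d):
    return (a * z + b) / (c * z + d)

def composed(z, a, b, c, d):
    # f4(f3(f2(f1(z)))) with the elementary maps from the proof (case c != 0)
    z = z + d / c                         # f1: parallel translation
    z = 1 / z                             # f2: z -> 1/z
    z = -((a * d - b * c) / c ** 2) * z   # f3: rotational homothety
    z = z + a / c                         # f4: parallel translation
    return z

a, b, c, d = 2 + 1j, -1, 1j, 3
for z in (0.5, 1 + 2j, -3j):
    assert abs(f(z, a, b, c, d) - composed(z, a, b, c, d)) < 1e-12
print("decomposition agrees with f at the sample points")
```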
15.7. Corollary. The image of a cline under a Möbius transformation is a cline.
Proof. By Proposition 15, it is sufficient to check that each elementary transformation sends clines to clines.
For the first and second elementary transformations the latter is evident.
As was noted above, the map $z\mapsto\tfrac{1}{z}$ is a composition of an inversion and a reflection. By Theorem 9, inversion sends clines to clines. Hence the result follows. ∎
15.8. Exercise. Show that the inverse of a Möbius transformation is a Möbius transformation.
15.9. Exercise. Given distinct values
$z_{0},z_{1},z_{\infty}\in\hat{\mathbb{C}}$, construct a Möbius transformation
$f$ such that $f(z_{0})=0$, $f(z_{1})=1$ and $f(z_{\infty})=\nobreak\infty$.
Show that such transformation is unique.
#### Complex cross-ratio
Given four distinct complex numbers $u$, $v$, $w$, $z$, the complex number
$\frac{(u-w)\cdot(v-z)}{(v-w)\cdot(u-z)}$
is called the _complex cross-ratio_; it will be denoted by $(u,v;w,z)$.
If one of the numbers $u,v,w,z$ is $\infty$, then the complex cross-ratio has to be defined by taking the appropriate limit; in other words, we assume that $\frac{\infty}{\infty}=1$. For example,
$(u,v;w,\infty)=\frac{(u-w)}{(v-w)}.$
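A small Python sketch of the complex cross-ratio, with None standing for $\infty$ and the limits above built in, is given below; it also checks numerically, at one arbitrary example, the invariance under Möbius transformations stated in Exercise 15.11 below.

```python
INF = None  # marker for infinity

def cross_ratio(u, v, w, z):
    # (u,v;w,z) = ((u-w)*(v-z)) / ((v-w)*(u-z)), with INF handled by the limit convention
    if u is INF:
        return (v - z) / (v - w)
    if v is INF:
        return (u - w) / (u - z)
    if w is INF:
        return (v - z) / (u - z)
    if z is INF:
        return (u - w) / (v - w)
    return (u - w) * (v - z) / ((v - w) * (u - z))

def g(t, a=2, b=1j, c=1, d=-1j):
    # an arbitrary Moebius transformation used for the check
    return (a * t + b) / (c * t + d)

u, v, w, z = 0.3 + 1j, -2, 1 + 1j, 4j
print(abs(cross_ratio(g(u), g(v), g(w), g(z)) - cross_ratio(u, v, w, z)) < 1e-9)  # True
```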
Assume that $U$, $V$, $W$ and $Z$ are the points with complex coordinates $u$, $v$, $w$ and $z$ correspondingly. Note that
$\displaystyle\frac{UW\cdot VZ}{VW\cdot UZ}$ $\displaystyle=|(u,v;w,z)|,$
$\displaystyle\measuredangle WUZ+\measuredangle ZVW$ $\displaystyle=\arg\frac{u-w}{u-z}+\arg\frac{v-z}{v-w}\equiv\arg(u,v;w,z).$
This makes it possible to reformulate Theorem 9 in complex coordinates in the following way.
15.10. Theorem. Let $UWVZ$ and $U^{\prime}W^{\prime}V^{\prime}Z^{\prime}$ be two quadrilaterals such that the points $U^{\prime}$, $W^{\prime}$, $V^{\prime}$ and $Z^{\prime}$ are the inversions of $U$, $W$, $V$, and $Z$ correspondingly. Assume $u$, $w$, $v$, $z$, $u^{\prime}$, $w^{\prime}$, $v^{\prime}$ and $z^{\prime}$ are the complex coordinates of $U$, $W$, $V$, $Z$, $U^{\prime}$, $W^{\prime}$, $V^{\prime}$ and $Z^{\prime}$ correspondingly.
Then
$(u^{\prime},v^{\prime};w^{\prime},z^{\prime})=\overline{(u,v;w,z)}.$
The following exercise is a generalization of the theorem above. It admits a short and simple solution which uses Proposition 15.
15.11. Exercise. Show that complex cross-ratios are invariant under Möbius
transformations. That is, if a Möbius transformation maps four distinct
numbers $u,v,w,z$ to numbers $u^{\prime},v^{\prime},w^{\prime},z^{\prime}$
respectively, then
$(u^{\prime},v^{\prime};w^{\prime},z^{\prime})=(u,v;w,z).$
### Chapter 16 Hints
#### Chapter 1
Exercise 1. We will discuss only $d_{2}$. The cases $d_{1}$ and $d_{\infty}$
can be proved along the same lines, but the calculations are simpler.
Among the conditions in Definition 1, only the triangle inequality requires a proof; the rest of the conditions are evident. Let $A=(x_{A},y_{A})$, $B=(x_{B},y_{B})$ and $C=(x_{C},y_{C})$. Set
$\displaystyle x_{1}$ $\displaystyle=x_{B}-x_{A},$ $\displaystyle y_{1}$
$\displaystyle=y_{B}-y_{A},$ $\displaystyle x_{2}$
$\displaystyle=x_{C}-x_{B},$ $\displaystyle y_{2}$
$\displaystyle=y_{C}-y_{B}.$
Then the inequality
$d_{2}(A,C)\leqslant d_{2}(A,B)+d_{2}(B,C)$ ➊
can be written as
$\sqrt{\bigl{(}x_{1}+x_{2}\bigr{)}^{2}+\bigl{(}y_{1}+y_{2}\bigr{)}^{2}}\leqslant\sqrt{x_{1}^{2}+y_{1}^{2}}+\sqrt{x_{2}^{2}+y_{2}^{2}}.$
Square the left and right hand sides, simplify, square again and simplify once more. You should get the following inequality
$0\leqslant(x_{1}\cdot y_{2}-x_{2}\cdot y_{1})^{2},$
which is equivalent to ➊ ‣ 16 and evidently true.
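If you prefer to delegate the algebra, the key identity behind the last step can be verified symbolically; the following sketch assumes SymPy is available.

```python
import sympy as sp

x1, y1, x2, y2 = sp.symbols('x1 y1 x2 y2', real=True)

# after squaring twice, the triangle inequality for d_2 reduces to the identity below
lhs = (x1**2 + y1**2) * (x2**2 + y2**2) - (x1*x2 + y1*y2)**2
rhs = (x1*y2 - x2*y1)**2
print(sp.simplify(lhs - rhs) == 0)  # True
```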
Exercise 1. We apply Definition 1 only.
If $A\neq B$ then $d_{\mathcal{X}}(A,B)>0$. Since $f$ is distance-preserving,
$d_{\mathcal{Y}}(f(A),f(B))=d_{\mathcal{X}}(A,B).$
Therefore $d_{\mathcal{Y}}(f(A),f(B))>0$ and hence $f(A)\neq f(B)$.
Exercise 1. Set $f(0)=a$ and $f(1)=b$. Note that $b=a+1$ or $b=a-1$. Moreover, $f(x)=a\pm x$ and, at the same time, $f(x)=b\pm(x-1)$ for any $x$.
If $b=a+1$, it follows that $f(x)=a+x$ for any $x$.
The same way, if $b=a-1$, it follows that $f(x)=a-x$ for any $x$.
Exercise 1. Show that the map $(x,y)\mapsto(x+y,x-y)$ is an isometry
$(\mathbb{R}^{2},d_{1})\to(\mathbb{R}^{2},d_{\infty})$. I.e., you need to
check that this map is bijective and distance preserving.
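A numerical sanity check of this hint (a sketch only, not a substitute for the verification asked for): distances are compared on random pairs of points.

```python
import random

def d1(p, q):
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def d_inf(p, q):
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

def phi(p):
    x, y = p
    return (x + y, x - y)

random.seed(0)
for _ in range(1000):
    p = (random.uniform(-5, 5), random.uniform(-5, 5))
    q = (random.uniform(-5, 5), random.uniform(-5, 5))
    assert abs(d1(p, q) - d_inf(phi(p), phi(q))) < 1e-9
print("d_1 distances are preserved as d_infty distances on the samples")
```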
Exercise 1. First prove that two points $A=(x_{A},y_{A})$ and
$B=\nobreak(x_{B},y_{B})$ on the Manhattan plane have unique midpoint if and
only if $x_{A}=x_{B}$ or $y_{A}=y_{B}$; compare with the example on page 1.
Then use above statement to prove that any motion of the Manhattan plane can
be written in one of the following two ways
$\displaystyle(x,y)$ $\displaystyle\mapsto(\pm x+a,\pm y+b),$
$\displaystyle(x,y)$ $\displaystyle\mapsto(\pm y+b,\pm x+a),$
for some fixed real numbers $a$ and $b$.
Exercise 1. Set $A=(-1,1)$, $B=(0,0)$ and $C=(1,1)$. Show that for $d_{1}$ and $d_{2}$ all the triangle inequalities with the points $A$, $B$ and $C$ are strict. Then apply Exercise 1 to show that the graph is not a line.
For $d_{\infty}$ show that $(x,|x|)\mapsto x$ gives the isometry of the graph
to $\mathbb{R}$. Conclude that the graph is a line in
$(\mathbb{R}^{2},d_{\infty})$.
Exercise 1. Applying the definition of a line, the problems are reduced to finding the number of solutions of each of the following two equations,
$|x-a|=|x-b|$
and
$|x-a|=2\cdot|x-b|$
if $a\neq b$.
Each can be solved by squaring the left and right hand sides. The numbers of solutions are 1 and 2 correspondingly.
Exercise 1. (a). By triangle inequality
$|f(A^{\prime})-f(A)|\leqslant d(A^{\prime},A).$
Therefore we can take $\delta=\varepsilon$.
(b). By triangle inequality
$\displaystyle|f(A^{\prime},B^{\prime})-f(A,B)|$
$\displaystyle\leqslant|f(A^{\prime},B^{\prime})-f(A,B^{\prime})|+$
$\displaystyle\ \ \ \ \ \ +|f(A,B^{\prime})-f(A,B)|\leqslant$
$\displaystyle\leqslant d(A^{\prime},A)+d(B^{\prime},B)$
Therefore we can take $\delta=\tfrac{\varepsilon}{2}$.
Exercise 1. Fix $A\in\mathcal{X}$ and $B\in\mathcal{Y}$ such that $f(A)=B$.
Fix $\varepsilon>0$. Since $g$ is continuous at $B$, there is $\delta_{1}>0$
such that
$d_{\mathcal{Z}}(g(B^{\prime}),g(B))<\varepsilon$
if $d_{\mathcal{Y}}(B^{\prime},B)<\delta_{1}$.
Since $f$ is continuous at $A$, there is $\delta_{2}>0$ such that
$d_{\mathcal{Y}}(f(A^{\prime}),f(A))<\nobreak\delta_{1}$
if $d_{\mathcal{X}}(A^{\prime},A)<\delta_{2}$.
Since $f(A)=B$, we get
$d_{\mathcal{Z}}(h(A^{\prime}),h(A))<\varepsilon$
if $d_{\mathcal{X}}(A^{\prime},A)<\delta_{2}$. Hence the result follows.
Exercise 1. The equation $2\cdot\alpha\equiv 0$ means that $2\cdot\alpha=2\cdot k\cdot\pi$ for some integer $k$. Therefore $\alpha=k\cdot\pi$ for some integer $k$.
Equivalently, $\alpha=2\cdot n\cdot\pi$ or $\alpha=(2\cdot n+1)\cdot\pi$ for some integer $n$. The first identity means that $\alpha\equiv 0$ and the second means $\alpha\equiv\pi$.
#### Chapter 2
Exercise 2. By Axiom I, $(OA)=(OA^{\prime})$. Therefore the statement boils down to the following.
Assume $f\colon\mathbb{R}\to\mathbb{R}$ is a motion of the plane which sends
$0\to 0$ and one positive number to a positive number then $f$ is an identity
map.
The latter follows from Exercise 1.
Exercise 2. By Proposition 2, $\measuredangle AOA=0$. It remains to apply
Axiom IIa.
Exercise 2. Apply Proposition 2, Theorem 2 and Exercise 1.
Exercise 2. By Axiom IIb,
$2\cdot\measuredangle BOC\equiv 2\cdot\measuredangle AOC-2\cdot\measuredangle AOB\equiv 0.$
By Exercise 1, it implies that $\measuredangle BOC=0$ or $\pi$.
It remains to apply Exercise 2 and Theorem 2 correspondingly in these cases.
Exercise 2. Apply Proposition 2 to show that $\measuredangle
AOC=\measuredangle BOD$ and then Axiom III.
#### Chapter 3
Exercise 3. Set $\alpha=\measuredangle AOB$ and $\beta=\measuredangle BOA$.
Note that $\alpha=\pi$ if and only if $\beta=\pi$. Otherwise $\alpha=-\beta$.
Hence the result follows.
Exercise 3. Set $\alpha=\measuredangle AOX$ and $\beta=\measuredangle BOX$.
Since $\angle AOB$ is straight,
$\alpha-\beta\equiv\pi.$ ➊
It follows that $\alpha=\pi$ $\Leftrightarrow$ $\beta=0$ and $\alpha=0$
$\Leftrightarrow$ $\beta=\pi$. In the remaining cases, note that
$|\alpha|,|\beta|<\pi$. If $\alpha$ and $\beta$ have the same sign then
$|\alpha-\beta|<\pi$ which contradicts ➊ ‣ 16.
Exercise 3. Set $\alpha=\measuredangle BOC$, $\beta=\measuredangle COA$ and
$\gamma=\measuredangle AOB$. By Axiom IIb and Proposition 2
$\alpha+\beta+\gamma\equiv 0$ ➋
Note that $0<\alpha+\beta<2{\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}\pi$ and
$|\gamma|\leqslant\pi$. If $\gamma>0$ then ➋ ‣ 16 implies
$\alpha+\beta+\gamma=2{\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}\pi$
and if $\gamma<0$ then ➋ ‣ 16 implies
$\alpha+\beta+\gamma=0.$
Exercise 3. Note that $O$ and $A^{\prime}$ lie on the same side from $(AB)$.
Analogously $O$ and $B^{\prime}$ lie on the same side from $(AB)$. Hence the
result follows.
Exercise 3. Apply Theorem 3 for triangles $\triangle PQX$ and $\triangle PQY$
and then Proposition 3(a).
Exercise 3. Note that it is sufficient to consider the cases when
$A^{\prime}\neq B,C$ and $B^{\prime}\neq A,C$.
Apply Pasch’s theorem (3) twice; for $\triangle AA^{\prime}C$ with line
$(BB^{\prime})$ and for $\triangle BB^{\prime}C$ with line $(AA^{\prime})$.
Exercise 3. Assume that $Z$ is the point of intersection.
Note first that $Z\neq P$ and $Z\neq Q$; therefore $Z\notin(PQ)$.
Then show that $Z$ and $X$ lie on one side from $(PQ)$. Repeat the argument to
show that $Z$ and $Y$ lie on one side from $(PQ)$. In particular $X$ and $Y$
lie on the same side from $(PQ)$, a contradiction.
#### Chapter 4
Exercise 4. Consider points $D$ and $D^{\prime}$, so that $M$ is the midpoint of $[AD]$ and $M^{\prime}$ is the midpoint of $[A^{\prime}D^{\prime}]$. Show first that $\triangle ABD\cong\triangle A^{\prime}B^{\prime}D^{\prime}$.
Exercise 4. (a) Apply SAS.
(b) Use (a) and apply SSS.
Exercise 4. Choose $B^{\prime}\in[AC]$ such that $AB=AB^{\prime}$. Note that $BC=B^{\prime}C$ and that, by SSS, $\triangle ABC\cong\triangle AB^{\prime}C$.
Exercise 4. Without loss of generality, we may assume that $X$ is distinct
from $A$, $B$ and $C$.
Set $\iota(X)=X^{\prime}$; assume $X^{\prime}\neq X$.
Note that $AX=AX^{\prime}$, $BX=BX^{\prime}$ and $CX=CX^{\prime}$. Therefore
$\measuredangle ABX\equiv\pm\measuredangle ABX^{\prime}$. Since $X\neq
X^{\prime}$, we get
$\measuredangle ABX\equiv-\measuredangle ABX^{\prime}.$
The same way we get
$\measuredangle CBX\equiv-\measuredangle CBX^{\prime}.$
Subtracting these two identities from each other, we get
$\measuredangle ABC\equiv-\measuredangle ABC,$
i.e., $\triangle ABC$ is degenerate, a contradiction.
#### Chapter 5
Exercise 5. Assume $X$ and $A$ lie on the same side from $\ell$.
Note that $A$ and $B$ lie on the opposite sides of $\ell$. Therefore, by
Proposition 3, $[AX]$ does not intersect $\ell$ and $[BX]$ intersects $\ell$;
set $Y=[BX]\cap\ell$.
Note that $Y\notin[AX]$, therefore by Exercise 4,
$BX=AY+YX>AX.$
This way we proved the “if” part. To prove the “only if” part, it remains to switch $A$ and $B$, repeat the above argument, and then apply Theorem 5.
Exercise 5. Apply Exercise 5, Theorem 4 and Exercise 3.
Exercise 5. Choose arbitrary nondegenerate triangle $\triangle ABC$. Denote by
$\triangle\hat{A}\hat{B}\hat{C}$ its image after the motion.
If $A\neq\hat{A}$, apply the reflection through the perpendicular bisector of
$[A\hat{A}]$. This reflection sends $A$ to $\hat{A}$. Denote by $B^{\prime}$
and $C^{\prime}$ the reflections of $B$ and $C$ correspondingly.
If $B^{\prime}\neq\hat{B}$, apply the reflection through the perpendicular
bisector of $[B^{\prime}\hat{B}]$. This reflection sends $B^{\prime}$ to
$\hat{B}$. Note that $\hat{A}\hat{B}=\hat{A}B^{\prime}$; i.e., $\hat{A}$ lies
on the bisector and therefore $\hat{A}$ reflects to itself. Denote by $C^{\prime\prime}$ the reflection of $C^{\prime}$.
Finally if $C^{\prime\prime}\neq\hat{C}$ apply the reflection through
$(\hat{A}\hat{B})$. Note that $\hat{A}\hat{C}=\hat{A}C^{\prime\prime}$ and
$\hat{B}\hat{C}=\hat{B}C^{\prime\prime}$; i.e., $(AB)$ is the perpendicular
bisector of $[C^{\prime\prime}\hat{C}]$. Therefore this reflection sends
$C^{\prime\prime}$ to $\hat{C}$.
Apply Exercise 4 to show that the composition of constructed reflections
coincides with the given motion.
Exercise 5. Note that $\measuredangle XBA=\measuredangle ABP$ and $\measuredangle PBC=\measuredangle CBY$. Therefore
$\measuredangle XBY\equiv\measuredangle XBP+\measuredangle PBY\equiv 2\cdot(\measuredangle ABP+\measuredangle PBC)\equiv 2\cdot\measuredangle ABC.$
Exercise 5. Let $(BX)$ and $(BY)$ be the internal and external bisectors of $\angle ABC$. Then
$2\cdot\measuredangle XBY\equiv 2\cdot\measuredangle XBA+2\cdot\measuredangle ABY\equiv\measuredangle CBA+\pi+\measuredangle ABC\equiv\pi+\measuredangle CBC=\pi.$
Hence the result.
Exercise 5. Apply Theorem 5.
Exercise 5. Use Exercise 5 and the uniqueness of perpendicular (Theorem 5).
Exercise 5. Let $P^{\prime}$ be the reflection of $P$ through $(OO^{\prime})$.
Note that $P^{\prime}$ lies on both circles and $P^{\prime}\neq P$ if and only
if $P\notin(OO^{\prime})$.
Exercise 5. To prove (a), apply Exercise 5.
To prove (b), apply Theorem 3.
#### Chapter 6
Exercise 6. Show first that $k\perp n$.
Exercise 6. First show that $\triangle AA^{\prime}C\sim\triangle
BB^{\prime}C$.
Exercise 6. If $\triangle ABC$ is degenerate then one of the angle measures is
$\pi$ and the other two are $0$. Hence the result follows.
Assume $\triangle ABC$ is nondegenerate. Set $\alpha=\measuredangle CAB$,
$\beta=\measuredangle ABC$ and $\gamma=\measuredangle BCA$.
According to 3, we may assume that $0<\alpha,\beta,\gamma<\pi$. Therefore
$0<\alpha+\beta+\gamma<3{\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}\pi.$ ➊
By Theorem 6,
$\alpha+\beta+\gamma\equiv\pi.$ ➋
From ➊ ‣ 16 and ➋ ‣ 16 the result follows.
Exercise 6. Apply Theorem 6 to $\triangle ABC$ and $\triangle BDA$.
Exercise 6. (a). Use the uniqueness of parallel line (Theorem 6).
(b) Use lemma about parallelogram (Lemma 6) and Pythagorean theorem (6).
Exercise 6. Set $A=(0,0)$, $B=(c,0)$ and $C=(x,y)$. Clearly $AB=c$,
$AC^{2}=x^{2}+y^{2}$ and $BC^{2}=(c-x)^{2}+y^{2}$.
It remains to show that there is a pair of real numbers $(x,y)$ which satisfies the following system of equations
$\left[\begin{aligned} b^{2}&=x^{2}+y^{2}\\\ a^{2}&=(c-x)^{2}+y^{2}\end{aligned}\right.$
if $0<a\leqslant b\leqslant c\leqslant a+b$. ∎
#### Chapter 7
Exercise 7. Note that $(AC)\perp(BH)$ and $(BC)\perp(AH)$ and apply Theorem 7.
Exercise 7. If $E$ is the point of intersection of $(BC)$ with the external
bisector of $\angle BAC$ then
$\frac{AB}{AC}=\frac{EB}{EC}.$
It can be proved along the same lines as Lemma 7.
Exercise 7. Apply Lemma 7. (Check Exercise 10 for yet another solution.)
Exercise 7. Apply ASA for the two triangles which bisector cuts from the
original triangle.
Exercise 7. Let $I$ be the incenter. By SAS, we get $\triangle
AIZ\cong\nobreak\triangle AIY$ and therefore $AY=AZ$. The same way we get
$BX=BZ$ and $CX=CY$. Hence the result follows.
Exercise 7. Let $\triangle ABC$ be the given acute triangle and $\triangle
A^{\prime}B^{\prime}C^{\prime}$ be its orthic triangle. Apply Exercise 6 to
show that $\measuredangle A^{\prime}B^{\prime}C\equiv\nobreak\measuredangle
AB^{\prime}C^{\prime}$. Conclude that $(BB^{\prime})$ is bisecting $\angle
A^{\prime}B^{\prime}C^{\prime}$.
If the triangle $\triangle ABC$ is obtuse, then the orthocenter coincides with one of the excenters of $\triangle ABC$; i.e., with the point of intersection of two external and one internal bisectors of $\triangle ABC$.
#### Chapter 8
Exercise 8. (a). Apply Theorem 8 for $\angle XX^{\prime}Y$ and $\angle
X^{\prime}YY^{\prime}$ and Theorem 6 for $\triangle PYX^{\prime}$.
(b) Note first that the angles $\angle XPY$ and $\angle X^{\prime}PY^{\prime}$
are vertical. Therefore $\measuredangle XPY=\measuredangle
X^{\prime}PY^{\prime}$.
Applying Theorem 8 we get
$2\cdot\measuredangle Y^{\prime}X^{\prime}P\equiv 2\cdot\measuredangle PYX.$
According to Theorem 3, $\angle Y^{\prime}X^{\prime}P$ and $\angle PYX$ have the same sign; therefore
$\measuredangle Y^{\prime}X^{\prime}P\equiv\measuredangle PYX.$
It remains to apply the AA similarity condition.
(c) Apply (b) assuming $[YY^{\prime}]$ is the diameter of $\Gamma$.
Exercise 8. Apply Exercise 8(b) three times.
Exercise 8. Apply Theorem 8 twice for quadrilaterals $ABYX$ and
$ABY^{\prime}X^{\prime}$ and use Corollary 6.
Exercise 8. Note that $\measuredangle AA^{\prime}B=\pm\tfrac{\pi}{2}$ and
$\measuredangle AB^{\prime}B=\pm\tfrac{\pi}{2}$. Then apply Theorem 8 to
quadrilateral $AA^{\prime}BB^{\prime}$.
If $O$ is the center of the circle then
$\measuredangle AOB\equiv 2{\hskip 0.5pt\cdot\nobreak\hskip
0.5pt}\measuredangle AA^{\prime}B\equiv\pi.$
I.e., $O$ is the midpoint of $[AB]$.
Exercise 8. Note that by Theorem 6,
$\measuredangle ABC+\measuredangle BCA+\measuredangle CAB\equiv\pi.$
Then apply Proposition 8 twice.
Exercise 8. If $C\in(AX)$ then the arc is formed by $[AC]$ or two half-lines
of $(AX)$ with vertices at $A$ and $C$.
Assume $C\notin(AX)$. Let $\ell$ be the line through $A$ perpendicular to $[AX)$ and $m$ be the perpendicular bisector of $[AC]$. Note that $\ell\nparallel m$; set $O=\ell\cap m$. Note that the circle with center $O$ passing through $A$ also passes through $C$ and is tangent to $(AX)$. Note that one of the two arcs with endpoints $A$ and $C$ is tangent to $[AX)$.
The uniqueness follows from Propositions 8 and 8.
#### Chapter 9
Exercise 9. By Lemma 5, $\angle OTP^{\prime}$ is right. Therefore $\triangle
OPT\sim\nobreak\triangle OTP^{\prime}$ and in particular
$OP{\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}OP^{\prime}=OT^{2}.$
Hence the result follows.
Exercise 9. By Lemma 9,
$\displaystyle\measuredangle IA^{\prime}B^{\prime}$
$\displaystyle\equiv-\measuredangle IBA,$ $\displaystyle\measuredangle
IB^{\prime}A^{\prime}$ $\displaystyle\equiv-\measuredangle IAB,$
$\displaystyle\measuredangle IB^{\prime}C^{\prime}$
$\displaystyle\equiv-\measuredangle ICB,$ $\displaystyle\measuredangle
IC^{\prime}B^{\prime}$ $\displaystyle\equiv-\measuredangle IBC,$
$\displaystyle\measuredangle IC^{\prime}A^{\prime}$
$\displaystyle\equiv-\measuredangle IAC,$ $\displaystyle\measuredangle
IA^{\prime}C^{\prime}$ $\displaystyle\equiv-\measuredangle ICA.$
It remains to apply the theorem on the sum of angles of triangle (Theorem 6)
to show that $(A^{\prime}I)\perp(B^{\prime}C^{\prime})$,
$(B^{\prime}I)\perp(C^{\prime}A^{\prime})$ and
$(C^{\prime}I)\perp(B^{\prime}A^{\prime})$.
Exercise 9. Show first that for any $r>0$ and any real numbers $x,y$ distinct
from $0$, we have
$\frac{r}{(x+y)/2}\neq\\!\left(\frac{r}{x}+\frac{r}{y}\right)/2.$
Note that, for an appropriately chosen isometry $(OO^{\prime})\to\mathbb{R}$, the left hand side is the coordinate of the inversion of the center of $\Gamma$ and the right hand side is the coordinate of the center of the inversion of $\Gamma$, assuming that $x$ and $y$ are the coordinates of the intersections $(OO^{\prime})\cap\Gamma$.
Exercise 9. Apply an inversion in a circle with the center at the only point
of intersection of the circles; then use Theorem 9.
Exercise 9. Let $P_{1}$ and $P_{2}$ be the inversions of $P$ in $\Omega_{1}$
and $\Omega_{2}$. Note that the points $P$, $P_{1}$ and $P_{2}$ are mutually
distinct.
According to Theorem 7, there is a unique cline $\Gamma$ which passes through $P$, $P_{1}$ and $P_{2}$.
By Corollary 9, $\Gamma\perp\Omega_{1}$ and $\Gamma\perp\Omega_{2}$.
On the other hand if $\Gamma^{\prime}\ni P$ and
$\Gamma^{\prime}\perp\Omega_{1}$, $\Gamma^{\prime}\perp\Omega_{2}$ then by
Theorem 9 we have $\Gamma^{\prime}\ni P_{1},P_{2}$. I.e.
$\Gamma^{\prime}=\Gamma$.
Exercise 9. Apply Theorem 9(b), Exercise 6 and Theorem 8.
Exercise 9. Denote by $T$ the point of intersection of $\Omega_{1}$ and
$\Omega_{2}$. Let $P$ be the foot point of $T$ on $(O_{1}O_{2})$. Show first
that
$\triangle O_{1}PT\sim\triangle O_{1}TO_{2}\sim\triangle TPO_{2}.$
Conclude that $P$ is the point of interest.
#### Chapter 10
Exercise 10. Denote by $D$ the midpoint of $[BC]$. Assume $(AD)$ is the
bisector of the angle at $A$.
Mark point $A^{\prime}\in[AD)$ which is distinct from $A$ and
$AD=A^{\prime}D$. Note that $\triangle CAD\cong\triangle BA^{\prime}D$. In
particular $\measuredangle BAA^{\prime}=\measuredangle AA^{\prime}B$; it
remains to apply Theorem 4 for $\triangle ABA^{\prime}$.
Exercise 10. Arguing by contradiction, assume
$\measuredangle ABC+\measuredangle BCD\equiv\pi,$
but $(AB)\nparallel\nobreak(CD)$. Let $Z$ be the point of intersection of
$(AB)$ and $(CD)$.
Note that
$\measuredangle ABC\equiv\measuredangle ZBC\ \ \text{or}\ \ \pi+\measuredangle
ZBC$
and
$\measuredangle BCD\equiv\measuredangle BCZ\ \ \text{or}\ \ \pi+\measuredangle
BCZ.$
Apply Proposition 10 to $\triangle ZBC$ and try to arrive at a contradiction.
Exercise 10. Let $C^{\prime\prime}\in[B^{\prime}C^{\prime})$ be the point such
that $B^{\prime}C^{\prime\prime}=BC$.
Note that by SAS, $\triangle ABC\cong\triangle
A^{\prime}B^{\prime}C^{\prime\prime}$. Conclude that $\measuredangle
B^{\prime}C^{\prime}A^{\prime}\equiv\measuredangle
B^{\prime}C^{\prime\prime}A^{\prime}$.
Therefore it is sufficient to show that $C^{\prime\prime}=C^{\prime}$. If $C^{\prime}\neq C^{\prime\prime}$, apply Proposition 10 to $\triangle A^{\prime}C^{\prime}C^{\prime\prime}$ and try to arrive at a contradiction.
(The same proof is given in [1, Book I, Proposition 26].)
Exercise 10. Use Exercise 5 and Proposition 10.
Exercise 10. Note that
$|\measuredangle ADC|+|\measuredangle CDB|=\pi.$
Then apply the definition of defect.
Exercise 10. The statement is evident if $A$, $B$, $C$ and $D$ lie on one
line.
In the remaining case, denote by $O$ the center of the circumscribed circle.
Apply theorem about isosceles triangle (4) to the triangles $\triangle AOB$,
$\triangle BOC$, $\triangle COD$, $\triangle DOA$.
(Note that in the Euclidean plane the statement follows from Theorem 8 and
Exercise 6, but one can not use these statements in the absolute plane.)
#### Chapter 12
Exercise 12. Note that the angle of parallelism of $B$ to $(CD)_{h}$ is bigger than $\tfrac{\pi}{4}$, and it converges to $\tfrac{\pi}{4}$ as $CD_{h}\to\infty$.
Applying Proposition 12, we get
$BC_{h}<\tfrac{1}{2}\cdot\ln\frac{1+\frac{1}{\sqrt{2}}}{1-\frac{1}{\sqrt{2}}}=\ln\left(1+\sqrt{2}\right).$
The right hand side is the limit of $BC_{h}$ if $CD_{h}\to\infty$. Therefore
$\ln\left(1+\sqrt{2}\right)$ is the optimal upper bound.
Exercise 12. Note that the center of the circle containing $m$ lies at the intersection of the lines tangent to the absolute at $A$ and $B$.
Exercise 12. Consider hyperbolic triangle $\triangle_{h}PQR$ with right angle
at $Q$, such that $PQ=QR$ and the vertices $P$, $Q$ and $R$ lie on a
horocycle.
Without loss of generality, we may assume that $Q$ is the center of absolute.
In this case $\measuredangle_{h}PQR=\measuredangle PQR=\pm\tfrac{\pi}{2}$.
The rest of the proof should be easy to guess from the picture. The answer is
$QP_{h}=\ln\frac{AP}{PQ}=\ln\frac{1+\frac{1}{\sqrt{2}}}{1-\frac{1}{\sqrt{2}}}=2\cdot\ln(1+\sqrt{2}).$
Exercise 12. Let us apply Proposition 12.
$\displaystyle\mathop{\rm circum}\nolimits_{h}(r+1)$ $\displaystyle=\pi\cdot(e^{r+1}-e^{-r-1})=\pi\cdot e\cdot(e^{r}-e^{-r-2})>\pi\cdot e\cdot(e^{r}-e^{-r})=e\cdot\mathop{\rm circum}\nolimits_{h}(r)\geqslant 2\cdot\mathop{\rm circum}\nolimits_{h}(r).$
#### Chapter 14
Exercise 14. Let $N$, $O$, $S$, $P$, $P^{\prime}$ and $\hat{P}$ be as on the
diagram on page 14.
Notice that $\triangle NOP\sim\triangle NP^{\prime}S\sim\triangle
P^{\prime}\hat{P}P$ and $2{\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}NO=\nobreak
NS$. It remains to do algebraic manipulations.
Exercise 14. Consider the bijection $P\mapsto\hat{P}$ of the h-plane with
absolute $\Omega$.
Note that $\hat{P}\in[A_{i}B_{i}]$ if and only if $P\in\Gamma_{i}$.
Exercise 14. The observation follows since the reflection through the
perpendicular bisector of $[PQ]$ is a motion of Euclidean plane and h-plane at
the same time.
Without loss of generality, we may assume that the center of the circumcircle coincides with the center of the absolute. In this case the h-medians of the triangle coincide with the Euclidean medians. It remains to apply Theorem 7.
#### Chapter 13
Exercise 13. Applying the Pythagorean theorem, we get
$\cos AB_{s}=\cos AC_{s}\cdot\cos BC_{s}=\tfrac{1}{2}.$
Therefore $AB_{s}=\tfrac{\pi}{3}$.
To see this without Pythagorean theorem, look at the tessellation of the
sphere on the picture below; it is made from 24 copies of $\triangle_{s}ABC$
and yet 8 equilateral triangles. From the symmetry of this tessellation, it
follows that $[AB]_{s}$ occupies $\tfrac{1}{6}$ of the equator.
Exercise 13. Note that points on $\Omega$ do not move. Moreover, the points
inside $\Omega$ are mapped outside of $\Omega$ and the other way around.
Further, note that this map sends circles to circles; moreover, orthogonal circles are mapped to orthogonal circles. In particular, the circles orthogonal to $\Omega$ are mapped to themselves.
Consider an arbitrary point $P\notin\Omega$. Denote by $P^{\prime}$ the inversion of $P$ in $\Omega$. Choose two distinct circles $\Gamma_{1}$ and $\Gamma_{2}$ which pass through $P$ and $P^{\prime}$. According to Corollary 9, $\Gamma_{1}\perp\Omega$ and $\Gamma_{2}\perp\Omega$.
Therefore the inversion in $\Omega$ sends $\Gamma_{1}$ to itself and the same
holds for $\Gamma_{2}$.
The image of $P$ has to lie on $\Gamma_{1}$ and $\Gamma_{2}$. Since the image
has to be distinct from $P$, we get that it has to be $P^{\prime}$.
#### Chapter 15
Exercise 15. Denote by $z$, $v$ and $w$ the complex coordinates of $Z$, $V$
and $W$ correspondingly. Then
$\displaystyle\measuredangle ZVW+\measuredangle VWZ+\measuredangle WZV$ $\displaystyle\equiv\arg\tfrac{w-v}{z-v}+\arg\tfrac{z-w}{v-w}+\arg\tfrac{v-z}{w-z}\equiv\arg\tfrac{(w-v)\cdot(z-w)\cdot(v-z)}{(z-v)\cdot(v-w)\cdot(w-z)}\equiv\arg(-1)\equiv\pi.$
Exercise 15. Note that
$\measuredangle EOV=\measuredangle WOZ=\arg v\quad\text{and}\quad\frac{OV}{OE}=\frac{OZ}{OW}=|v|.$
Exercise 15. Find the inverse of each elementary transformation and use
Proposition 15.
Exercise 15. The Möbius transformation
$f(z)=\frac{(z_{1}-z_{\infty})\cdot(z-z_{0})}{(z_{1}-z_{0})\cdot(z-z_{\infty})}$
meets the conditions.
To show uniqueness, assume there is another Möbius transformation $g(z)$ which meets the conditions. Then the composition $h=g\circ f^{-1}$ is a Möbius transformation; set
$h(z)=\frac{a\cdot z+b}{c\cdot z+d}.$
Note that $h(\infty)=\infty$; therefore $c=0$. Further, $h(0)=0$ implies $b=0$. Finally, since $h(1)=1$ we get $\tfrac{a}{d}=1$. Therefore $h$ is the _identity_; i.e., $h(z)=z$ for any $z$. It follows that $g=f$.
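The explicit transformation above is easy to test numerically; a minimal Python sketch with arbitrarily chosen finite values:

```python
def three_point_mobius(z0, z1, zinf):
    # Moebius transformation sending z0 -> 0, z1 -> 1 and zinf -> infinity (distinct finite inputs)
    def f(z):
        return (z1 - zinf) * (z - z0) / ((z1 - z0) * (z - zinf))
    return f

f = three_point_mobius(1j, 2, -1 + 1j)
print(abs(f(1j)) < 1e-12, abs(f(2) - 1) < 1e-12)  # True True; z = -1+1j is the pole
```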
Exercise 15. Check the statement for each elementary transformation. Then
apply Proposition 15.
### Index
* $(u,v;w,z)$, 127
* $\cong$, 15
* $\infty$, 69
* $\parallel$, 45
* $\perp$, 37
* $\sim$, 46
* $d_{1}$, 12
* $d_{2}$, 12
* $d_{\infty}$, 12
* absolute, 88
* absolute plane, 79
* absolute value of complex number, 122
* acute
* acute angle, 37
* acute triangle, 48
* altitude, 53
* angle, 14
* acute angle, 37
* angle of parallelism, 97
* negative angle, 27
* obtuse angle, 37
* positive angle, 27
* right angle, 37
* straight angle, 24
* vertical angles, 25
* angle measure, 22
* hyperbolic angle measure, 89
* angle-side-angle congruence condition, 33
* arc, 65
* area, 86
* ASA congruence condition, 33
* asymptotically parallel lines, 97
* base of isosceles triangle, 34
* between, 24
* bijection, 13
* bisector, 41
* external bisector, 41
* center, 43
* centroid, 54
* chord, 43
* circle, 43
* circle arc, 64
* cline, 69
* complex conjugate, 122
* conformal factor, 102
* congruent triangles, 15
* cross-ratio, 68
* complex cross-ratio, 124, 127
* curvature, 86
* defect of triangle, 83
* diagonal
* diagonals of quadrilateral, 51
* diameter, 43
* direct motion, 40
* discrete metric, 11
* distance, 11
* distance-preserving map, 13
* elementary transformation, 125
* endpoint, 65
* equidistant, 100
* equivalence relation, 47
* Euclidean metric, 12
* Euclidean plane, 22
* Euclidean space, 107
* Euler’s formula, 123
* foot point, 38
* great circle, 108
* h-angle measure, 89
* h-circle, 92
* h-half-line, 88
* h-line, 88
* h-plane, 88
* h-radius, 92
* h-segment, 88
* half-plane, 29
* horocycle, 101
* hyperbolic angle measure, 89
* hyperbolic cosine, 117
* hyperbolic plane, 88
* hypotenuse, 48
* ideal point, 88
* imaginary complex number, 122
* imaginary line, 122
* incenter, 56
* incircle, 56
* indirect motion, 40
* inradius, 56
* inscribed triangle, 62
* intersecting lines, 45
* inverse, 13
* inversion, 67
* center of inversion, 67, 109
* circle of inversion, 67
* inversion in a sphere, 109
* sphere of inversion, 109
* inversive plane, 69
* inversive space, 109
* isometry, 13
* isosceles triangle, 34
* Klein model, 116
* leg, 48
* line, 14
* Möbius transformation, 125
* elementary transformation, 125
* Manhattan metric, 12
* maximum metric, 12
* metric, 11
* metric space, 11
* motion, 13
* neutral plane, 79
* obtuse
* obtuse angle, 37
* orthic triangle, 57
* orthocenter, 54
* parallel lines, 45
* ultra parallel lines, 97
* parallel translation, 125
* parallelogram, 51
* perpendicular, 37
* perpendicular bisector, 37
* perpendicular circles, 72
* point at infinity, 69
* plane
* hyperbolic plane, 88
* absolute plane, 79
* Euclidean plane, 22
* h-plane, 88
* inversive plane, 69
* neutral plane, 79
* plane in the space, 107
* Poincaré disk model, 88
* point, 11
* ideal point, 88
* polar coordinates, 124
* quadrilateral, 51
* inscribed quadrilateral, 63
* nondegenerate quadrilateral, 51
* radius, 43
* real complex number, 122
* real line, 12, 122
* reflection, 39
* rotational homothety, 125
* SAS condition, 33
* secant line, 43
* side
* side of quadrilateral, 51
* side of the triangle, 30
* side-angle-angle congruence condition, 81
* side-angle-side condition, 33
* side-side-side congruence condition, 34
* similar triangles, 46
* sphere, 108
* spherical distance, 108
* stereographic projection, 110
* tangent circles, 43
* tangent half-line, 65
* tangent line, 43
* triangle, 15
* congruent triangles, 15
* degenerate triangle, 25
* ideal triangle, 99
* orthic triangle, 57
* right triangle, 48
* similar triangles, 46
* unit complex number, 123
* vertex of the angle, 14
* vertical angles, 25
### References
* [1] Euclid’s Elements, a web version by David Joyce, available at aleph0.clarku.edu/~djoyce/java/elements/toc.html
* [2] Eugenio Beltrami, Teoria fondamentale degli spazii di curvatura costante, Annali. di Mat., ser II, 2 (1868), 232--255
* [3] Birkhoff, George David, A Set of postulates for plane geometry, based on scale and protractors, Annals of Mathematics 33 (1932), 329--345.
* [4] Marvin J. Greenberg Euclidean and Non-Euclidean Geometries: Development and History.
* [5] Lambert, Johann Heinrich, Theorie der Parallellinien, F. Engel, P. Stäckel (Eds.) (1786) Leipzig
* [6] Legendre, Adrien-Marie, Eléments de géométrie, 1794
* [7] , , , , . 25--28 (1829--1830 .).
* [8] Moise, Elementary Geometry From An Advanced Standpoint.
* [9] Saccheri, Giovanni Girolamo, Euclides ab omni nævo vindicatus, 1733
* [10] , ,̈ 7–9, .: , 1997.--352 .
* [11] Kiselev’s Geometry. Book I. Planimetry, by A. P. Kiselev Adapted from Russian by Alexander Givental ISBN 0-9779852-0-2 viii+240 pp.
1302.1666
Approximations of the tail index estimator of heavy-tailed distributions under
random censoring and application
Brahim Brahimi, Djamel Meraghni and Abdelhakim Necir (corresponding author)
Abstract
We make use of the empirical process theory to approximate the adapted Hill
estimator, for censored data, in terms of Gaussian processes. Then, we derive
its asymptotic normality, only under the usual second-order condition of
regular variation, with the same variance as that obtained by Einmahl et al.
(2008). The newly proposed Gaussian approximation agrees perfectly with the
asymptotic representation of the classical Hill estimator in the non-censoring
framework. Our results will be of great interest to establish the limit
distributions of many statistics in extreme value theory under random
censoring such as the estimators of tail indices, the actuarial risk measures
and the goodness-of-fit functionals for heavy-tailed distributions. As an
application, we establish the asymptotic normality of an estimator of the
excess-of-loss reinsurance premium.
Keywords: Empirical process; Gaussian approximation; Hill estimator; Limit
distribution; Random censoring; Reinsurance premium.
AMS 2010 Subject Classification: 62P05; 62H20; 91B26; 91B30.
## 1\. Introduction
For $n\geq 1,$ let $X_{1},X_{2},...,X_{n}$ be $n$ independent copies of a non-
negative random variable (rv) $X,$ defined over some probability space
$\left(\Omega,\mathcal{A},\mathbb{P}\right),$ with cumulative distribution
function (cdf) $F.\ $We assume that the distribution tail $1-F$ is regularly
varying at infinity, with index $\left(-1/\gamma_{1}\right),$ notation:
$1-F\in\mathcal{RV}_{\left(-1/\gamma_{1}\right)}.$ That is
$\lim_{t\rightarrow\infty}\frac{1-F\left(tx\right)}{1-F\left(t\right)}=x^{-1/\gamma_{1}},\text{
for any }x>0,$ (1.1)
where $\gamma_{1}>0,$ called shape parameter or tail index or extreme value
index (EVI), is a very crucial parameter in the analysis of extremes. It
governs the thickness of the distribution right tail: the heavier the tail,
the larger $\gamma_{1}.$ Its estimation has got a great deal of interest for
complete samples, as one might see in the textbook of Beirlant et al. (2004).
In this paper, we focus on the most celebrated estimator of $\gamma_{1},$ that
was proposed by Hill (1975):
$\widehat{\gamma}_{1}^{H}=\widehat{\gamma}_{1}^{H}\left(k\right):=\frac{1}{k}{\displaystyle\sum\limits_{i=1}^{k}}\log
X_{n-i+1,n}-\log X_{n-k,n},$
where $X_{1,n}\leq...\leq X_{n,n}$ are the order statistics pertaining to the
sample $\left(X_{1},...,X_{n}\right)$ and $k=k_{n}$ is an integer sequence
satisfying
$1<k<n,\text{ }k\rightarrow\infty\text{ and }k/n\rightarrow 0\text{ as
}n\rightarrow\infty.$ (1.2)
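For readers who wish to experiment, a minimal Python implementation of Hill's estimator is sketched below; the Pareto sample and the choice of $k$ are purely illustrative.

```python
import numpy as np

def hill_estimator(x, k):
    """Hill estimator of gamma_1 based on the k upper order statistics."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    assert 1 < k < n
    # (1/k) * sum_{i=1}^{k} log X_{n-i+1,n}  -  log X_{n-k,n}
    return np.mean(np.log(x[n - k:])) - np.log(x[n - k - 1])

rng = np.random.default_rng(1)
gamma1 = 0.5
sample = rng.pareto(1 / gamma1, size=5000) + 1.0  # standard Pareto tail with index 1/gamma1
print(hill_estimator(sample, k=200))              # should be close to gamma1 = 0.5
```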
The consistency of $\widehat{\gamma}_{1}^{H}$ was proved by Mason (1982) by
only assuming the regular variation condition $\left(\ref{first-
condition}\right)$ while its asymptotic normality was established under a
suitable extra assumption, known as the second-order regular variation
condition (see de Haan and Stadtmüller, 1996 and de Haan and Ferreira, 2006,
page 117).
In the analysis of lifetime, reliability or insurance data, the observations
are usually randomly censored. In other words, in many real situations the
variable of interest $X$ is not always available. An appropriate way to model
this matter, is to introduce a non-negative rv $Y,$ called censoring rv,
independent of $X$ and then to consider the rv $Z:=\min\left(X,Y\right)$ and
the indicator variable $\delta:=\mathbf{1}\left(X\leq Y\right),$ which
determines whether or not $X$ has been observed. The cdf’s of $Y$ and $Z$ will
be denoted by $G$ and $H$ respectively. The analysis of extreme values of
randomly censored data is a new research topic to which Reiss and Thomas
(1997) made a very brief reference, in Section 6.1, as a first step but with
no asymptotic results. Considering Hall’s model (Hall, 1982), Beirlant et al.
(2007) proposed estimators for the EVI and high quantiles and discussed their
asymptotic properties, when the data are censored by a deterministic
threshold. More recently, Einmahl et al. (2008) adapted various EVI estimators
to the case where data are censored, by a random threshold, and proposed a
unified method to establish their asymptotic normality by imposing some
assumptions that are rather unusual to the context of extreme value theory.
The obtained estimators are then used in the estimation of extreme quantiles
under random censorship. Gomes and Neves (2011) also made a contribution to
this field by providing a detailed simulation study and applying the
estimation procedures on some survival data sets.
We start by a reminder of the definition of the adapted Hill estimator, of the
tail index $\gamma_{1},$ under random censorship. The tail of the censoring
distribution is assumed to be regularly varying too, that is
$1-G\in\mathcal{RV}_{\left(-1/\gamma_{2}\right)},$ for some $\gamma_{2}>0.$ By
virtue of the independence of $X$ and $Y,$ we have
$1-H\left(x\right)=\left(1-F\left(x\right)\right)\left(1-G\left(x\right)\right)$
and therefore $1-H\in\mathcal{RV}_{\left(-1/\gamma\right)},$ with
$\gamma:=\gamma_{1}\gamma_{2}/\left(\gamma_{1}+\gamma_{2}\right).$ Let
$\left\\{\left(Z_{i},\delta_{i}\right),\text{ }1\leq i\leq n\right\\}$ be a
sample from the couple of rv’s $\left(Z,\delta\right)$ and $Z_{1,n}\leq...\leq
Z_{n,n}$ represent the order statistics pertaining to
$\left(Z_{1},...,Z_{n}\right).$ If we denote the concomitant of the $i$th
order statistic by $\delta_{\left[i:n\right]}$ (i.e.
$\delta_{\left[i:n\right]}=\delta_{j}$ if $Z_{i,n}=Z_{j}),$ then the adapted
Hill estimator of the tail index $\gamma_{1}$ is defined by
$\widehat{\gamma}_{1}^{\left(H,c\right)}:=\frac{\widehat{\gamma}^{H}}{\widehat{p}},$
(1.3)
where
$\widehat{\gamma}^{H}:=\frac{1}{k}\sum\limits_{i=1}^{k}\log Z_{n-i+1,n}-\log
Z_{n-k:n}$ (1.4)
and
$\widehat{p}:=\frac{1}{k}{\displaystyle\sum\limits_{i=1}^{k}}\delta_{\left[n-i+1:n\right]},$
(1.5)
with $k=k_{n}$ satisfying $\left(\ref{K}\right).$ Roughly speaking, the adapted Hill estimator is the quotient of the classical Hill estimator by the proportion of non-censored data.
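As an illustration of $(1.3)$–$(1.5)$, a minimal Python sketch of the adapted estimator is given below; the simulated censoring scheme and the choice of $k$ are for illustration only.

```python
import numpy as np

def adapted_hill(z, delta, k):
    """Adapted Hill estimator (1.3): Hill estimator of Z divided by the proportion
    of uncensored observations among the k upper order statistics."""
    order = np.argsort(z)
    z_sorted = np.asarray(z, dtype=float)[order]
    d_sorted = np.asarray(delta, dtype=float)[order]  # concomitants delta_[i:n]
    n = len(z_sorted)
    gamma_hat = np.mean(np.log(z_sorted[n - k:])) - np.log(z_sorted[n - k - 1])  # (1.4)
    p_hat = d_sorted[n - k:].mean()                                              # (1.5)
    return gamma_hat / p_hat

rng = np.random.default_rng(2)
n, gamma1, gamma2 = 10000, 0.6, 0.9
x = (1.0 - rng.random(n)) ** (-gamma1)   # Pareto-type X with tail index gamma1
y = (1.0 - rng.random(n)) ** (-gamma2)   # independent censoring variable Y with tail index gamma2
z, delta = np.minimum(x, y), (x <= y).astype(int)
print(adapted_hill(z, delta, k=300))     # should be roughly gamma1 = 0.6
```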
To derive the asymptotic normality of
$\widehat{\gamma}_{1}^{\left(H,c\right)},$ we will adopt a new approach which
is different from that of Einmahl et al. (2008). We notice that the asymptotic
normality of extreme value theory based estimators is achieved in the second-
order framework (see de Haan and Stadtmüller, 1996). Thus, it seems quite
natural to suppose that cdf’s $F,$ $G$ and $H$ satisfy the well-known second-
order condition of regular variation. That is, we assume that there exist a
constant $\tau_{j}<0$ and a function $A_{j},$ $j=1,2$ not changing sign near
infinity, such that for any $x>0$
$\begin{array}[c]{c}\underset{t\rightarrow\infty}{\lim}\dfrac{\overline{F}\left(tx\right)/\overline{F}\left(t\right)-x^{-1/\gamma_{1}}}{A_{1}\left(t\right)}=x^{-1/\gamma_{1}}\dfrac{x^{\tau_{1}}-1}{\tau_{1}},\vskip
6.0pt plus 2.0pt minus 2.0pt\\\
\underset{t\rightarrow\infty}{\lim}\dfrac{\overline{G}\left(tx\right)/\overline{G}\left(t\right)-x^{-1/\gamma_{2}}}{A_{2}\left(t\right)}=x^{-1/\gamma_{2}}\dfrac{x^{\tau_{2}}-1}{\tau_{2}},\end{array}$
(1.6)
where $\overline{S}\left(x\right):=S\left(\infty\right)-S\left(x\right),$ for
any $S.$ For convenience, the same condition on cdf $H$ will be expressed in
terms of its quantile function
$H^{-1}\left(s\right):=\inf\left\\{x:H\left(x\right)\geq s\right\\},$ $0<s<1.$
There exist a constant $\tau_{3}<0$ and a function $A_{3}$ not changing sign
near zero, such that for any $x>0$
$\underset{t\downarrow
0}{\lim}\dfrac{H^{-1}\left(1-tx\right)/H^{-1}\left(1-t\right)-x^{-\gamma}}{A_{3}\left(t\right)}=x^{-\gamma}\dfrac{x^{\tau_{3}}-1}{\tau_{3}}.$
(1.7)
Actually what interests us most is the Gaussian approximation to the
distribution of the adapted estimator
$\widehat{\gamma}_{1}^{\left(H,c\right)},$ similar to that obtained for Hill’s
estimator $\widehat{\gamma}_{1}^{H}$ in the case of complete data. Indeed, if
$\left(\ref{second-order}\right)$ holds for $F,$ then, for an integer sequence
$k$ satisfying $\left(\ref{K}\right)$ with
$\sqrt{n/k}A_{1}\left(n/k\right)\rightarrow 0,$ we have as
$n\rightarrow\infty,$
$\sqrt{k}\left(\widehat{\gamma}_{1}^{H}-\gamma_{1}\right)=\gamma_{1}\sqrt{\frac{n}{k}}\int_{0}^{1}s^{-1}\widetilde{B}_{n}\left(1-\frac{k}{n}s\right)ds-\gamma_{1}\sqrt{\frac{n}{k}}\widetilde{B}_{n}\left(1-\frac{k}{n}\right)+o_{p}\left(1\right),$
where $\left\\{\widetilde{B}_{n}\left(s\right);\text{ }0\leq s\leq 1\right\\}$
is a sequence of Brownian bridges (see for instance Csörgő and Mason, 1985 and
de Haan and Ferreira, 2006, page 163). In other words,
$\sqrt{k}\left(\widehat{\gamma}_{1}^{\left(H\right)}-\gamma_{1}\right)$
converges in distribution to a centred Gaussian rv with variance
$\gamma_{1}^{2}.$ The Gaussian approximation above enables to solve many
problems with regards to the asymptotic behavior of several statistics of
heavy-tailed distributions, such as the estimators of: the mean (Peng, 2001
and 2004; Brahimi et al., 2013), the excess-of-loss reinsurance premium (Necir
et al., 2007), the distortion risk measures (Necir and Meraghni, 2009 and
Brahimi et al., 2011), the Zenga index (Greselin et al., 2013) and the
goodness-of-fit functionals as well (Koning and Peng, 2008).
The rest of the paper is organized as follows. In Section 2, we state our main
result which consists in a Gaussian approximation to
$\widehat{\gamma}_{1}^{\left(H,c\right)}$ only by assuming the second-order
conditions of regular variation $\left(\ref{second-order}\right)$ and
$\left(\ref{second-order H}\right).$ More precisely, we will show that there
exists a sequence of Brownian bridges $\left\\{B_{n}\left(s\right);\text{
}0\leq s\leq 1\right\\}$ defined on
$\left(\Omega,\mathcal{A},\mathbb{P}\right),$ such that as
$n\rightarrow\infty,$
$\sqrt{k}\left(\widehat{\gamma}_{1}^{\left(H,c\right)}-\gamma_{1}\right)=\Psi\left(B_{n}\right)+o_{p}\left(1\right),$
for some functional $\Psi$ to be defined in such a way that
$\Psi\left(B_{n}\right)$ is normal with mean $0$ and variance
$p\gamma_{1}^{2}.$ Section 3 is devoted to an application of the main result
as we derive the asymptotic normality of an excess-of-loss reinsurance premium
estimator. The proofs are postponed to Section 4 and some results, that are
instrumental to our needs, are gathered in the Appendix.
## 2\. Main result
In addition to the Gaussian approximation of
$\sqrt{k}\left(\widehat{\gamma}_{1}^{\left(H,c\right)}-\gamma_{1}\right),$ our
main result (stated in Theorem 2.1) consists in the asymptotic
representations, with Gaussian processes, of two other useful statistics,
namely $\sqrt{k}\left(\widehat{p}-p\right)$ and
$\sqrt{k}\left(\frac{Z_{n-k:n}}{H^{-1}\left(1-k/n\right)}-1\right).$ The
functions defined below are crucial to our needs
$H^{0}\left(z\right):=\mathbb{P}\left(Z\leq
z,\delta=0\right)=\int_{0}^{z}\overline{F}\left(y\right)dG\left(y\right)$
(2.8)
and
$H^{1}\left(z\right):=\mathbb{P}\left(Z\leq
z,\delta=1\right)=\int_{0}^{z}\overline{G}\left(y\right)dF\left(y\right).$
(2.9)
Throughout the paper, we use the notations
$h=h_{n}:=H^{-1}\left(1-k/n\right),\text{
}\theta:=H^{1}\left(\infty\right)\text{ and }p=1-q:=\gamma/\gamma_{1},$
and, for two sequences of rv’s, we write
$V_{n}^{\left(1\right)}=o_{p}\left(\mathbf{V}_{n}^{\left(2\right)}\right)$
and$\ V_{n}^{\left(1\right)}\approx V_{n}^{\left(2\right)}$ to say that, as
$n\rightarrow\infty,$
$V_{n}^{\left(1\right)}/V_{n}^{\left(2\right)}\rightarrow 0$ in probability
and
$V_{n}^{\left(1\right)}=V_{n}^{\left(2\right)}\left(1+o_{p}\left(1\right)\right)$
respectively.
###### Theorem 2.1.
Assume that the second-order conditions $(\ref{second-order})$ and
$(\ref{second-order H})$ hold. Let $k=k_{n}$ be an integer sequence
satisfying, in addition to $(\ref{K}),$
$\sqrt{k}A_{j}\left(h\right)\rightarrow 0,$ for $j=1,2$ and
$\sqrt{k}A_{3}\left(k/n\right)\rightarrow\lambda<\infty$ as
$n\rightarrow\infty.$ Then there exists a sequence of Brownian bridges
$\left\\{B_{n}\left(s\right);\text{ }0\leq s\leq 1\right\\}$ such that, as
$n\rightarrow\infty,$
$\sqrt{k}\left(\frac{Z_{n-k:n}}{h}-1\right)=\gamma\sqrt{\frac{n}{k}}\mathbb{B}_{n}^{\ast}\left(\frac{k}{n}\right)+o_{p}\left(1\right),$
$\sqrt{k}\left(\widehat{p}-p\right)=\sqrt{\frac{n}{k}}\left(q\mathbb{B}_{n}\left(\frac{k}{n}\right)-p\widetilde{\mathbb{B}}_{n}\left(\frac{k}{n}\right)\right)+o_{p}\left(1\right)$
and
$\sqrt{k}\left(\widehat{\gamma}_{1}^{\left(H,c\right)}-\gamma_{1}\right)=\gamma_{1}\sqrt{\frac{n}{k}}\int_{0}^{1}s^{-1}\mathbb{B}_{n}^{\ast}\left(\frac{k}{n}s\right)ds-\frac{\gamma_{1}}{p}\sqrt{\frac{n}{k}}\mathbb{B}_{n}\left(\frac{k}{n}\right)+o_{p}\left(1\right),$
where
$\mathbb{B}_{n}\left(s\right):=B_{n}\left(\theta\right)-B_{n}\left(\theta-
ps\right),\text{\
}\widetilde{\mathbb{B}}_{n}\left(s\right):=-B_{n}\left(1-qs\right)$
and
$\mathbb{B}_{n}^{\ast}\left(s\right):=\mathbb{B}_{n}\left(s\right)+\widetilde{\mathbb{B}}_{n}\left(s\right),0<s<1,$
are sequences of centred Gaussian processes.
###### Corollary 2.1.
Under the assumptions of Theorem $\ref{Theorem1},$ we have
$\sqrt{k}\left(\widehat{\gamma}_{1}^{\left(H,c\right)}-\gamma_{1}\right)\overset{d}{\rightarrow}\mathcal{N}\left(0,p\gamma_{1}^{2}\right),\text{
as }n\rightarrow\infty.$
$\mathcal{N}\left(0,a^{2}\right)$ designates the centred normal distribution
with variance $a^{2}.$
To the best of our knowledge, this is the first time that
$\sqrt{k}\left(\widehat{\gamma}_{1}^{\left(H,c\right)}-\gamma_{1}\right)$ is
expressed in terms of Gaussian processes. This asymptotic representation will
be of great usefulness in a lot of applications of extreme value theory under
random censoring, as we will see in the following example.
## 3\. Application: Excess-of-loss reinsurance premium estimation
In this section, we apply Theorem 2.1 to derive the asymptotic normality of an
estimator of the excess-of-loss reinsurance premium obtained with censored
data. The choice of this example is motivated mainly by two reasons. The first
one is that the area of reinsurance is by far the most important field of
application of extreme value theory. The second is that data sets with
censored extreme observations often occur in insurance. The aim of
reinsurance, where emphasis lies on modelling extreme events, is to protect an
insurance company, called ceding company, against losses caused by excessively
large claims and/or a surprisingly high number of moderate claims. Nice
discussions on the use of extreme value theory in the actuarial world
(especially in the reinsurance industry) can be found, for instance, in
Embrechts et al. (1997), a major textbook on the subject, and Beirlant et al.
(1994).
Let $X_{1},...,X_{n}$ $\left(n\geq 1\right)$ be $n$ individual claim amounts
of an insured loss $X$ with finite mean. In the excess-of-loss reinsurance
treaty, the ceding company covers claims that do not exceed a (high) number
$R\geq 0,$ called retention level, while the reinsurer pays the part
$(X_{i}-R)_{+}:=\max\left(0,X_{i}-R\right)$ of each claim exceeding $R.$
Applying Wang’s premium calculation principle, with a distortion equal to the
identical function (Wang, 1996), to this reinsurance policy yields the
following expression for the net premium for the layer from $R$ to infinity
$\Pi(R):=\mathbf{E}\left[(X-R)_{+}\right]=\int_{R}^{\infty}\overline{F}\left(x\right)dx.$
Taking $h$ as a retention level, we have
$\Pi_{n}=\Pi(h)=h\overline{F}\left(h\right)\int_{1}^{\infty}\frac{\overline{F}\left(hx\right)}{\overline{F}\left(h\right)}dx.$
After noticing that the finite mean assumption yields that $\gamma_{1}<1,$ we
use the first-order regular variation condition $\left(\ref{first-
condition}\right)$ together with Potter’s inequalities, to get
$\Pi_{n}\sim\frac{\gamma_{1}}{1-\gamma_{1}}h\overline{F}\left(h\right),\text{
as }n\rightarrow\infty,\text{ }0<\gamma_{1}<1.$
Let
$F_{n}\left(x\right):=1-{\displaystyle\prod\limits_{i:\,Z_{i:n}\leq x}}\left[1-\dfrac{\delta_{\left[i:n\right]}}{n-i+1}\right]$
be the well-known Kaplan-Meier estimator (Kaplan and Meier, 1958) of cdf $F.$
Then, by replacing $\gamma_{1},$ $h$ and $\overline{F}\left(h\right)$ by their
respective estimates $\widehat{\gamma}_{1}^{\left(H,c\right)},$ $Z_{n-k:n}$
and
$1-F_{n}(Z_{n-k:n})={\textstyle\prod_{i=1}^{n-k}}\left(1-\delta_{\left[i:n\right]}/\left(n-i+1\right)\right),$
we define our estimator of $\Pi_{n}$ as follows
$\widehat{\Pi}_{n}:=\frac{\widehat{\gamma}_{1}^{\left(H,c\right)}}{1-\widehat{\gamma}_{1}^{\left(H,c\right)}}Z_{n-k:n}{\displaystyle\prod_{i=1}^{n-k}}\left(1-\frac{\delta_{\left[i:n\right]}}{n-i+1}\right).$
(3.10)
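A rough Python sketch of the estimator $(3.10)$ is given below; it is only an illustration, and the commented usage line assumes the adapted Hill sketch given in Section 1.

```python
import numpy as np

def premium_estimator(z, delta, k, gamma1_hat):
    """Estimator (3.10) of the excess-of-loss premium with retention level h = Z_{n-k:n};
    gamma1_hat (< 1) would typically be the adapted Hill estimator computed on the same data."""
    order = np.argsort(z)
    z_sorted = np.asarray(z, dtype=float)[order]
    d_sorted = np.asarray(delta, dtype=float)[order]
    n = len(z_sorted)
    retention = z_sorted[n - k - 1]  # Z_{n-k:n}
    # Kaplan-Meier survival at the retention level: prod_{i=1}^{n-k} (1 - delta_[i:n]/(n-i+1))
    i = np.arange(1, n - k + 1)
    km_survival = np.prod(1.0 - d_sorted[:n - k] / (n - i + 1))
    return gamma1_hat / (1.0 - gamma1_hat) * retention * km_survival

# e.g. premium_estimator(z, delta, k=300, gamma1_hat=adapted_hill(z, delta, 300))
```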
The asymptotic normality of $\widehat{\Pi}_{n}$ is established in the
following theorem.
###### Theorem 3.1.
Assume that the assumptions of Theorem 2.1 hold with $\gamma_{1}<1$ and that
both cdf’s $F$ and $G$ are absolutely continuous, then
$\displaystyle\frac{\sqrt{k}\left(\widehat{\Pi}_{n}-\Pi_{n}\right)}{h\overline{F}\left(h\right)}$
$\displaystyle=-\frac{p\gamma_{1}^{2}}{1-\gamma_{1}}\sqrt{\frac{n}{k}}\mathbb{B}_{n}^{\ast}\left(\frac{k}{n}\right)$
$\displaystyle+\frac{\gamma_{1}}{\left(1-\gamma_{1}\right)^{2}}\left\\{\sqrt{\frac{n}{k}}\int_{0}^{1}s^{-1}\mathbb{B}_{n}^{\ast}\left(\frac{k}{n}s\right)ds-p^{-1}\sqrt{\frac{n}{k}}\mathbb{B}_{n}\left(\frac{k}{n}\right)\right\\}+o_{p}\left(1\right),$
where $\mathbb{B}_{n}$ and $\mathbb{B}_{n}^{\ast}$ are those defined in
Theorem $\ref{Theorem1}.$
###### Corollary 3.1.
Under the assumptions of Theorem $\ref{Theorem2},$ we have
$\frac{\sqrt{k}\left(\widehat{\Pi}_{n}-\Pi_{n}\right)}{h\overline{F}\left(h\right)}\overset{d}{\rightarrow}\mathcal{N}\left(0,\sigma_{\Pi}^{2}\right),\text{
as }n\rightarrow\infty,$
where
$\sigma_{\Pi}^{2}:=\frac{p\gamma_{1}^{2}}{\left(1-\gamma_{1}\right)^{2}}\left[p\gamma_{1}^{2}+\frac{1}{\left(1-\gamma_{1}\right)^{2}}\right],\text{
for }\gamma_{1}<1.$
## 4\. Proofs
We begin by a brief introduction on some uniform empirical processes under
random censoring. The empirical counterparts of $H^{j}$ $\left(j=0,1\right)$
are defined, for $z\geq 0,$ by
$H_{n}^{j}\left(z\right):=\\#\left\\{i:1\leq i\leq n,\text{ }Z_{i}\leq
z,\delta_{i}=j\right\\}/n,\text{ }j=0,1.$
In the sequel, we will use the following two empirical processes
$\sqrt{n}\left(\overline{H}_{n}^{j}\left(z\right)-\overline{H}^{j}\left(z\right)\right),\text{
}j=0,1;\text{ }z>0,$
which may be represented, almost surely, by a uniform empirical process.
Indeed, let us define, for each $i=1,...,n,$ the following rv
$U_{i}:=\delta_{i}H^{1}\left(Z_{i}\right)+\left(1-\delta_{i}\right)\left(\theta+H^{0}\left(Z_{i}\right)\right).$
From Einmahl and Koning (1992), the rv’s $U_{1},...,U_{n}$ are iid
$(0,1)$-uniform. The empirical cdf and the uniform empirical process based
upon $U_{1},...,U_{n}$ are respectively denoted by
$\mathbb{U}_{n}\left(s\right):=\\#\left\\{i:1\leq i\leq n,\text{ }U_{i}\leq
s\right\\}/n\text{ and
}\alpha_{n}\left(s\right):=\sqrt{n}\left(\mathbb{U}_{n}\left(s\right)-s\right),\text{
}0\leq s\leq 1.$
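As a quick numerical sanity check of this construction (our own simulation, not part of the proof), one may verify that the $U_{i}$'s are indeed uniformly distributed. The Pareto margins below are an arbitrary choice; for this particular model $H^{1}\left(z\right)=p\left(1-z^{-1/\gamma}\right),$ $H^{0}\left(z\right)=q\left(1-z^{-1/\gamma}\right)$ and $\theta=p.$

```python
import numpy as np
from scipy.stats import kstest

# Illustration (ours): standard Pareto margins F(x) = 1 - x^(-1/g1) and
# G(y) = 1 - y^(-1/g2) for x, y >= 1, so that H^1(z) = p(1 - z^(-1/g)),
# H^0(z) = q(1 - z^(-1/g)) and theta = p, where 1/g = 1/g1 + 1/g2,
# p = g/g1, q = g/g2 (theta = p is specific to this model).
rng = np.random.default_rng(0)
n, g1, g2 = 10_000, 0.6, 0.9
x = rng.pareto(1 / g1, n) + 1.0               # X_i, heavy-tailed losses
y = rng.pareto(1 / g2, n) + 1.0               # Y_i, censoring variables
z, delta = np.minimum(x, y), (x <= y).astype(float)
g = 1.0 / (1 / g1 + 1 / g2)
p, q = g / g1, g / g2
H1 = p * (1.0 - z ** (-1 / g))
H0 = q * (1.0 - z ** (-1 / g))
u = delta * H1 + (1.0 - delta) * (p + H0)     # should be iid Uniform(0,1)
print(kstest(u, "uniform"))                   # a large p-value is expected
```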
Deheuvels and Einmahl (1996) state that almost surely
$H_{n}^{0}\left(z\right)=\mathbb{U}_{n}\left(H^{0}\left(z\right)+\theta\right)-\mathbb{U}_{n}\left(\theta\right),\text{
for }0<H^{0}\left(z\right)<1-\theta,$
and
$H_{n}^{1}\left(z\right)=\mathbb{U}_{n}\left(H^{1}\left(z\right)\right),\text{
for }0<H^{1}\left(z\right)<\theta.$
It is easy to verify that almost surely
$\sqrt{n}\left(\overline{H}_{n}^{1}\left(z\right)-\overline{H}^{1}\left(z\right)\right)=\alpha_{n}\left(\theta\right)-\alpha_{n}\left(\theta-\overline{H}^{1}\left(z\right)\right),\text{
for }0<\overline{H}^{1}\left(z\right)<\theta,$ (4.11)
and
$\sqrt{n}\left(\overline{H}_{n}^{0}\left(z\right)-\overline{H}^{0}\left(z\right)\right)=-\alpha_{n}\left(1-\overline{H}^{0}\left(z\right)\right),\text{
for }0<\overline{H}^{0}\left(z\right)<1-\theta.$ (4.12)
Our methodology strongly relies on the well-known Gaussian approximation given
by Csörgő et al. (1986): on the probability space
$\left(\Omega,\mathcal{A},\mathbb{P}\right),$ there exists a sequence of
Brownian bridges $\left\\{B_{n}\left(s\right);\text{ }0\leq s\leq 1\right\\}$
such that for every $0\leq\xi<1/4$
$\sup_{\frac{1}{n}\leq s\leq
1-\frac{1}{n}}\frac{\left|\alpha_{n}\left(s\right)-B_{n}\left(s\right)\right|}{\left(s\left(1-s\right)\right)^{1/2-\xi}}=O_{p}\left(n^{-\xi}\right),\text{
as }n\rightarrow\infty.$ (4.13)
The following processes will be crucial to our needs:
$\beta_{n}\left(z\right):=\sqrt{\frac{n}{k}}\left\\{\alpha_{n}\left(\theta\right)-\alpha_{n}\left(\theta-\overline{H}^{1}\left(zZ_{n-k:n}\right)\right)\right\\},\text{
for }0<\overline{H}^{1}\left(z\right)<\theta$ (4.14)
and
$\widetilde{\beta}_{n}\left(z\right):=-\sqrt{\frac{n}{k}}\alpha_{n}\left(1-\overline{H}^{0}\left(zZ_{n-k:n}\right)\right),\text{
for }0<\overline{H}^{0}\left(z\right)<1-\theta.$ (4.15)
### 4.1. Proof of Theorem 2.1
First, observe that
$\frac{Z_{n-k:n}}{h}=\frac{H^{-1}\left(H\left(Z_{n-k:n}\right)\right)}{H^{-1}\left(H_{n}\left(Z_{n-k:n}\right)\right)}.$
Let
$x_{n}:=\overline{H}\left(Z_{n-k:n}\right)/\overline{H}_{n}\left(Z_{n-k:n}\right)$
and $t_{n}:=\overline{H}_{n}\left(Z_{n-k:n}\right)=k/n.$ By using the second-
order regular variation condition $\left(\ref{second-order H}\right)$ we get
$\frac{H^{-1}\left(H\left(Z_{n-k:n}\right)\right)}{H^{-1}\left(H_{n}\left(Z_{n-k:n}\right)\right)}-x_{n}^{-\gamma}\approx
A_{3}\left(k/n\right)x_{n}^{-\gamma}\dfrac{x_{n}^{\tau_{3}}-1}{\tau_{3}}.$
Since $x_{n}\approx 1,$ it follows that
$x_{n}^{-\gamma}\dfrac{x_{n}^{\tau_{3}}-1}{\tau_{3}}$ tends in probability to
zero. This means that
$\frac{H^{-1}\left(H\left(Z_{n-k:n}\right)\right)}{H^{-1}\left(H_{n}\left(Z_{n-k:n}\right)\right)}=\left(\frac{\overline{H}\left(Z_{n-k:n}\right)}{\overline{H}_{n}\left(Z_{n-k:n}\right)}\right)^{-\gamma}+o_{p}\left(A_{3}\left(k/n\right)\right).$
Using the mean value theorem, we get
$\left(\frac{\overline{H}\left(Z_{n-k:n}\right)}{\overline{H}_{n}\left(Z_{n-k:n}\right)}\right)^{-\gamma}-1=-\gamma
c_{n}\left(\frac{\overline{H}\left(Z_{n-k:n}\right)}{\overline{H}_{n}\left(Z_{n-k:n}\right)}-1\right),$
where $c_{n}$ is a sequence of rv’s lying between $1$ and
$\left(\overline{H}\left(Z_{n-k:n}\right)/\overline{H}_{n}\left(Z_{n-k:n}\right)\right)^{-\gamma-1}.$
Since $c_{n}\approx 1,$ we have
$\left(\frac{\overline{H}\left(Z_{n-k:n}\right)}{\overline{H}_{n}\left(Z_{n-k:n}\right)}\right)^{-\gamma}-1\approx-\gamma\left(\frac{\overline{H}\left(Z_{n-k:n}\right)}{\overline{H}_{n}\left(Z_{n-k:n}\right)}-1\right).$
By assumption,
$\sqrt{k}A_{3}\left(k/n\right)\rightarrow\lambda<\infty;$ hence
$\sqrt{k}\left(\frac{Z_{n-k:n}}{h}-1\right)=-\gamma\sqrt{k}\left(\frac{\overline{H}\left(Z_{n-k:n}\right)}{\overline{H}_{n}\left(Z_{n-k:n}\right)}-1\right)+o_{p}\left(1\right).$
Since $\overline{H}_{n}\left(Z_{n-k:n}\right)=k/n,$ we have
$\sqrt{k}\left(\frac{Z_{n-k:n}}{h}-1\right)=\gamma\sqrt{k}\frac{n}{k}\left(\overline{H}_{n}\left(Z_{n-k:n}\right)-\overline{H}\left(Z_{n-k:n}\right)\right)+o_{p}\left(1\right),$
which may be decomposed into
$\gamma\sqrt{k}\frac{n}{k}\left(\left(\overline{H}_{n}^{1}\left(Z_{n-k:n}\right)-\overline{H}^{1}\left(Z_{n-k:n}\right)\right)+\left(\overline{H}_{n}^{0}\left(Z_{n-k:n}\right)-\overline{H}^{0}\left(Z_{n-k:n}\right)\right)\right)+o_{p}\left(1\right).$
Using $\left(\ref{betan}\right)$ and $\left(\ref{beta-tild}\right)$ with
$z=1,$ leads to
$\sqrt{k}\left(\frac{Z_{n-k:n}}{h}-1\right)=\gamma\left(\beta_{n}\left(1\right)+\widetilde{\beta}_{n}\left(1\right)\right)+o_{p}\left(1\right).$
(4.16)
Now, we apply assertions $\left(i\right)$ and $\left(ii\right)$ of Lemma 5.2
to complete the proof of the first result of the theorem.
For the second result of the theorem, observe that
$\widehat{p}=\frac{n}{k}\overline{H}_{n}^{1}\left(Z_{n-k:n}\right),$
then consider the following decomposition
$\displaystyle\widehat{p}-p$
$\displaystyle=\frac{n}{k}\left(\overline{H}_{n}^{1}\left(Z_{n-k:n}\right)-\overline{H}^{1}\left(Z_{n-k:n}\right)\right)$
(4.17)
$\displaystyle+\frac{n}{k}\left(\overline{H}^{1}\left(Z_{n-k:n}\right)-\overline{H}^{1}\left(h\right)\right)+\left(\frac{n}{k}\overline{H}^{1}\left(h\right)-p\right).$
Notice that from $\left(\ref{betan}\right),$ almost surely, we have
$\frac{n}{k}\left(\overline{H}_{n}^{1}\left(Z_{n-k:n}\right)-\overline{H}^{1}\left(Z_{n-k:n}\right)\right)=\frac{1}{\sqrt{k}}\beta_{n}\left(1\right).$
(4.18)
The second term on the right-hand side of $\left(\ref{phate-p}\right)$ may be
written as
$\frac{n}{k}\left(\overline{H}^{1}\left(Z_{n-k:n}\right)-\overline{H}^{1}\left(h\right)\right)=\frac{n}{k}\overline{H}^{1}\left(h\right)\left(\frac{\overline{H}^{1}\left(Z_{n-k:n}\right)}{\overline{H}^{1}\left(h\right)}-1\right).$
(4.19)
Making use of Lemma 5.1, with $z=1$ and $z=Z_{n-k:n}/h,$ we respectively get
as $n\rightarrow\infty$
$\frac{n}{k}\overline{H}^{1}\left(h\right)=p+O\left(A\left(h\right)\right)\text{
and
}\frac{n}{k}\overline{H}^{1}\left(Z_{n-k:n}\right)=p\left(\frac{Z_{n-k:n}}{h}\right)^{-1/\gamma}+O_{p}\left(A\left(h\right)\right),$
(4.20)
where $A\left(h\right),$ defined later on in Lemma 5.1, is a sequence tending
to zero as $n\rightarrow\infty.$ It follows that
$\frac{\overline{H}^{1}\left(Z_{n-k:n}\right)}{\overline{H}^{1}\left(h\right)}-1=\left(\frac{p}{p+O_{p}\left(A\left(h\right)\right)}\right)\left(\left(Z_{n-k:n}/h\right)^{-1/\gamma}-1\right)+\frac{O_{p}\left(A\left(h\right)\right)}{p+O_{p}\left(A\left(h\right)\right)}.$
More simply, since
$A\left(h\right)=o\left(1\right),$ we have
$\frac{p}{p+O_{p}\left(A\left(h\right)\right)}=1+o_{p}\left(1\right)\text{ and
}\frac{O_{p}\left(A\left(h\right)\right)}{p+O_{p}\left(A\left(h\right)\right)}=O_{p}\left(A\left(h\right)\right).$
Therefore
$\frac{\overline{H}^{1}\left(Z_{n-k:n}\right)}{\overline{H}^{1}\left(h\right)}-1=\left(1+o_{p}\left(1\right)\right)\left(\left(Z_{n-k:n}/h\right)^{-1/\gamma}-1\right)+O_{p}\left(A\left(h\right)\right).$
Recalling $\left(\ref{prod}\right)$ and using $\overline{H}^{1}\left(h\right)$
from $\left(\ref{H1-a}\right),$ we get
$\frac{n}{k}\left(\overline{H}^{1}\left(Z_{n-k:n}\right)-\overline{H}^{1}\left(h\right)\right)=p\left(\left(\frac{Z_{n-k:n}}{h}\right)^{-1/\gamma}-1\right)\left(1+o_{p}\left(1\right)\right)+O_{p}\left(A\left(h\right)\right).$
By applying the mean value theorem and using the fact that $Z_{n-k:n}/h\approx
1,$ we readily verify that
$\left(\frac{Z_{n-k:n}}{h}\right)^{-1/\gamma}-1\approx-\frac{1}{\gamma}\left(\frac{Z_{n-k:n}}{h}-1\right).$
Hence
$\frac{n}{k}\left(\overline{H}^{1}\left(Z_{n-k:n}\right)-\overline{H}^{1}\left(h\right)\right)=-\frac{p}{\gamma}\left(\frac{Z_{n-k:n}}{h}-1\right)\left(1+o\left(1\right)\right)+O_{p}\left(A\left(h\right)\right).$
(4.21)
From the assumptions on the functions $A_{1}$ and $A_{2},$ we have
$\sqrt{k}A\left(h\right)\rightarrow 0.$ By combining $\left(\ref{Zh}\right)$
and $\left(\ref{D1}\right),$ we obtain
$\sqrt{k}\frac{n}{k}\left(\overline{H}^{1}\left(Z_{n-k:n}\right)-\overline{H}^{1}\left(h\right)\right)=-p\left(\beta_{n}\left(1\right)+\widetilde{\beta}_{n}\left(1\right)\right)+o_{p}\left(1\right).$
(4.22)
For the third term in the right-hand side of $\left(\ref{phate-p}\right),$ we
use conditions $\left(\ref{second-order}\right),$ as in the proof of Lemma
5.1, to have
$\sqrt{k}\left(\frac{n}{k}\overline{H}^{1}\left(h\right)-p\right)\sim\frac{pq}{\gamma_{1}}\left(\frac{\sqrt{k}A_{1}\left(h\right)}{1-p\tau_{1}}+\frac{q\sqrt{k}A_{2}\left(h\right)}{1-q\tau_{2}}\right),$
(4.23)
which tends to $0$ as $n\rightarrow\infty$ because, by assumption,
$\sqrt{k}A_{j}\left(h\right)$ goes to $0,$ $j=1,2.$ Substituting results
$\left(\ref{beta-p}\right),$ $\left(\ref{Z}\right)$ and
$\left(\ref{mup}\right)$ in decomposition $\left(\ref{phate-p}\right)$ yields
$\sqrt{k}\left(\widehat{p}-p\right)=q\beta_{n}\left(1\right)-p\widetilde{\beta}_{n}\left(1\right)+o_{p}\left(1\right).$
(4.24)
The final form of the second result of the theorem is then obtained by
applying assertions $\left(i\right)$ and $\left(ii\right)$ of Lemma 5.2.
Finally, we focus on the third result of the theorem. It is clear that we have
the following decomposition
$\sqrt{k}\left(\widehat{\gamma}_{1}^{\left(H,c\right)}-\gamma_{1}\right)=\frac{1}{\widehat{p}}\sqrt{k}\left(\widehat{\gamma}^{H}-\gamma\right)-\frac{\gamma_{1}}{\widehat{p}}\sqrt{k}\left(\widehat{p}-p\right).$
(4.25)
Recall that one way to define Hill’s estimator $\widehat{\gamma}^{H}$ is to
use the limit
$\gamma=\lim_{t\rightarrow\infty}\int_{t}^{\infty}z^{-1}\overline{H}\left(z\right)/\overline{H}(t)dz.$
Then, by replacing $\overline{H}$ by $\overline{H}_{n}$ and letting
$t=Z_{n-k:n},$ we write
$\widehat{\gamma}^{H}=\frac{n}{k}\int_{Z_{n-k:n}}^{\infty}z^{-1}\overline{H}_{n}(z)dz.$
For details see, for instance, de Haan and Ferreira (2006, page 69). Let us
consider the following decomposition
$\hat{\gamma}^{H}-\gamma=T_{n1}+T_{n2}+T_{n3},$ where
$T_{n1}:=\frac{n}{k}\int_{Z_{n-k:n}}^{\infty}z^{-1}\left(\overline{H}_{n}^{0}(z)-\overline{H}^{0}(z)+\overline{H}_{n}^{1}(z)-\overline{H}^{1}(z)\right)dz,$
$T_{n2}:=\frac{n}{k}\int_{Z_{n-k:n}}^{h}z^{-1}\overline{H}\left(z\right)dz\text{
and
}T_{n3}:=\frac{n}{k}\int_{h}^{\infty}z^{-1}\overline{H}\left(z\right)dz-\gamma.$
We use the integral convention that $\int_{a}^{b}=\int_{\left[a,b\right)}$ as
integration is with respect to the measure induced by a right-continuous
function. Making a change of variables in the first term $T_{n1}$ and using
the uniform empirical representations of $\overline{H}_{n}^{0}$ and
$\overline{H}_{n}^{1},$ we get almost surely
$\sqrt{k}T_{n1}=\int_{1}^{\infty}z^{-1}\left(\beta_{n}\left(z\right)+\widetilde{\beta}_{n}\left(z\right)\right)dz.$
For the second term $T_{n2},$ we apply the mean value theorem to have
$T_{n2}=\frac{\overline{H}\left(z_{n}^{\ast}\right)}{z_{n}^{\ast}}\frac{n}{k}\left(h-Z_{n-k:n}\right),$
where $z_{n}^{\ast}$ is a sequence of rv's between $Z_{n-k:n}$ and $h.$ Since
$z_{n}^{\ast}\approx h,$ we have
$\overline{H}\left(z_{n}^{\ast}\right)\approx k/n,$ so that the right-hand
side of the previous equation is $\approx-\left(Z_{n-k:n}/h-1\right).$
Hence, from $\left(\ref{Zh}\right),$ we have
$\sqrt{k}T_{n2}=-\gamma\left(\beta_{n}\left(1\right)+\widetilde{\beta}_{n}\left(1\right)\right)+o_{p}\left(1\right).$
Finally, for $T_{n3},$ we use the second-order conditions $\left(\ref{second-
order}\right)$ to get
$\sqrt{k}T_{n3}\sim
p^{2}\frac{\sqrt{k}A_{1}\left(h\right)}{1-p\tau_{1}}+q^{2}\frac{\sqrt{k}A_{2}\left(h\right)}{1-q\tau_{2}}.$
(4.26)
Since, by assumption, $\sqrt{k}A_{j}\left(h\right)\rightarrow 0,$ $j=1,2,$ as
$n\rightarrow\infty,$ we have $\sqrt{k}T_{n3}\rightarrow 0.$ By arguments
similar to those above, we obtain
$\sqrt{k}\left(\hat{\gamma}^{H}-\gamma\right)=\int_{1}^{\infty}z^{-1}\left(\beta_{n}\left(z\right)+\widetilde{\beta}_{n}\left(z\right)\right)dz-\gamma\left(\beta_{n}\left(1\right)+\widetilde{\beta}_{n}\left(1\right)\right)+o_{p}\left(1\right).$
(4.27)
Combining $\left(\ref{p-hate}\right)$ and $\left(\ref{rephill}\right)$ yields
$\sqrt{k}\left(\widehat{\gamma}_{1}^{\left(H,c\right)}-\gamma_{1}\right)=\frac{1}{p}\int_{1}^{\infty}z^{-1}\left(\beta_{n}\left(z\right)+\widetilde{\beta}_{n}\left(z\right)\right)dz-\frac{\gamma}{p^{2}}\beta_{n}\left(1\right)+o_{p}\left(1\right).$
(4.28)
We complete the proof of the third result of the theorem by using assertions
$\left(i\right)$ and $\left(ii\right)$ of Lemma 5.2.
### 4.2. Proof of Corollary 2.1
From the third result of Theorem 2.1, we deduce that
$\sqrt{k}\left(\widehat{\gamma}_{1}^{\left(H,c\right)}-\gamma_{1}\right)$ is
asymptotically centred Gaussian with variance
$\sigma^{2}=\gamma_{1}^{2}\lim_{n\rightarrow\infty}\mathbf{E}\left[\sqrt{\frac{n}{k}}\int_{0}^{1}s^{-1}\mathbb{B}_{n}^{\ast}\left(psk/n\right)ds-\frac{1}{p}\sqrt{\frac{n}{k}}\mathbb{B}_{n}\left(k/n\right)\right]^{2}.$
We check that the processes $\mathbb{B}_{n}\left(s\right),$
$\widetilde{\mathbb{B}}_{n}\left(s\right)$ and
$\mathbb{B}_{n}^{\ast}\left(s\right)$ satisfy
$p^{-1}\mathbf{E}\left[\mathbb{B}_{n}\left(s\right)\mathbb{B}_{n}\left(t\right)\right]=\min\left(s,t\right)-pst,\text{
}q^{-1}\mathbf{E}\left[\widetilde{\mathbb{B}}_{n}\left(s\right)\widetilde{\mathbb{B}}_{n}\left(t\right)\right]=\min\left(s,t\right)-qst,$
and
$p^{-1}\mathbf{E}\left[\mathbb{B}_{n}\left(s\right)\mathbb{B}_{n}^{\ast}\left(t\right)\right]=\mathbf{E}\left[\mathbb{B}_{n}^{\ast}\left(s\right)\mathbb{B}_{n}^{\ast}\left(t\right)\right]=\min\left(s,t\right)-st.$
Then, by elementary calculation (we omit details), we get
$\sigma^{2}=p\gamma_{1}^{2}.$ $\Box$
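As an aside (a sketch of ours, not from the paper), the limiting variance $\sigma^{2}=p\gamma_{1}^{2}$ yields a simple Wald-type confidence interval for $\gamma_{1}$ once the unknowns $p$ and $\gamma_{1}$ are replaced by their estimates:

```python
import numpy as np
from scipy.stats import norm

def gamma1_confidence_interval(gamma1_hat, p_hat, k, level=0.95):
    """Sketch (ours) of a Wald-type interval for gamma_1, plugging the
    estimates of p and gamma_1 into sigma^2 = p * gamma_1^2 (Corollary 2.1)."""
    se = gamma1_hat * np.sqrt(p_hat / k)
    q = norm.ppf(0.5 + level / 2)
    return gamma1_hat - q * se, gamma1_hat + q * se
```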
### 4.3. Proof of Theorem 3.1
First, recall that
$\Pi_{n}=h\overline{F}\left(h\right)\int_{1}^{\infty}\frac{\overline{F}\left(hx\right)}{\overline{F}\left(h\right)}dx\text{
and
}\widehat{\Pi}_{n}=Z_{n-k:n}\left(1-F_{n}\left(Z_{n-k:n}\right)\right)\frac{\widehat{\gamma}_{1}^{\left(H,c\right)}}{1-\widehat{\gamma}_{1}^{\left(H,c\right)}}.$
Observe that we have the following decomposition
$\frac{\widehat{\Pi}_{n}-\Pi_{n}}{h\overline{F}\left(h\right)}=\sum_{i=1}^{6}S_{ni},$
where
$S_{n1}:=\frac{\widehat{\gamma}_{1}^{\left(H,c\right)}}{1-\widehat{\gamma}_{1}^{\left(H,c\right)}}\frac{Z_{n-k:n}}{h}\left\\{\frac{\left(1-F_{n}\left(Z_{n-k:n}\right)\right)}{\overline{F}\left(h\right)}-\frac{\overline{F}\left(Z_{n-k:n}\right)}{\overline{F}\left(h\right)}\right\\},$
$S_{n2}:=\frac{\widehat{\gamma}_{1}^{\left(H,c\right)}}{1-\widehat{\gamma}_{1}^{\left(H,c\right)}}\frac{Z_{n-k:n}}{h}\left\\{\frac{\overline{F}\left(Z_{n-k:n}\right)}{\overline{F}\left(h\right)}-\left(\frac{Z_{n-k:n}}{h}\right)^{-1/\gamma_{1}}\right\\},$
$S_{n3}:=\frac{\widehat{\gamma}_{1}^{\left(H,c\right)}}{1-\widehat{\gamma}_{1}^{\left(H,c\right)}}\frac{Z_{n-k:n}}{h}\left\\{\left(\frac{Z_{n-k:n}}{h}\right)^{-1/\gamma_{1}}-1\right\\},\text{
}S_{n4}:=\frac{\widehat{\gamma}_{1}^{\left(H,c\right)}}{1-\widehat{\gamma}_{1}^{\left(H,c\right)}}\left\\{\frac{Z_{n-k:n}}{h}-1\right\\},$
$S_{n5}:=\frac{\widehat{\gamma}_{1}^{\left(H,c\right)}}{1-\widehat{\gamma}_{1}^{\left(H,c\right)}}-\frac{\gamma_{1}}{1-\gamma_{1}}\text{
and
}S_{n6}:=\frac{\gamma_{1}}{1-\gamma_{1}}-\frac{\Pi_{n}}{h\overline{F}\left(h\right)}.$
Since $Z_{n-k:n}\approx h$ and
$\widehat{\gamma}_{1}^{\left(H,c\right)}\approx\gamma_{1},$ we have
$S_{n1}\approx-\frac{\gamma_{1}}{1-\gamma_{1}}\frac{F_{n}\left(Z_{n-k:n}\right)-F\left(Z_{n-k:n}\right)}{\overline{F}\left(Z_{n-k:n}\right)}.$
(4.29)
In view of Proposition 5 in Csörgő (1996), we have for any $x\leq Z_{n-k:n},$
$\displaystyle\frac{F_{n}\left(x\right)-F\left(x\right)}{\overline{F}\left(x\right)}$
$\displaystyle=\frac{H_{n}^{1}\left(x\right)-H^{1}\left(x\right)}{\overline{H}\left(x\right)}-\int_{0}^{x}\frac{H_{n}^{1}\left(z\right)-H^{1}\left(z\right)}{\overline{H}^{2}\left(z\right)}dH\left(z\right)$
(4.30)
$\displaystyle-\int_{0}^{x}\frac{\overline{H}_{n}\left(z\right)-\overline{H}\left(z\right)}{\overline{H}^{2}\left(z\right)}dH^{1}\left(z\right)+O_{p}\left(\frac{1}{k}\right).$
Notice that
$\sqrt{n}\left(\overline{H}_{n}\left(z\right)-\overline{H}\left(z\right)\right)=\sqrt{n}\left(\overline{H}_{n}^{1}\left(z\right)-\overline{H}^{1}\left(z\right)\right)+\sqrt{n}\left(\overline{H}_{n}^{0}\left(z\right)-\overline{H}^{0}\left(z\right)\right),$
(4.31)
and recall that from representations $\left(\ref{rep-H1}\right)$ and
$\left(\ref{rep-H0}\right),$ we have
$\sqrt{n}\left(H_{n}^{1}\left(x\right)-H^{1}\left(x\right)\right)=\alpha_{n}\left(\theta-\overline{H}^{1}\left(x\right)\right),$
$\sqrt{n}\left(\overline{H}_{n}^{1}\left(z\right)-\overline{H}^{1}\left(z\right)\right)=\alpha_{n}\left(\theta\right)-\alpha_{n}\left(\theta-\overline{H}^{1}\left(z\right)\right),$
and
$\sqrt{n}\left(\overline{H}_{n}^{0}\left(z\right)-\overline{H}^{0}\left(z\right)\right)=-\alpha_{n}\left(1-\overline{H}^{0}\left(z\right)\right).$
It follows, from $\left(\ref{Hn}\right),$ that
$\sqrt{n}\left(\overline{H}_{n}\left(z\right)-\overline{H}\left(z\right)\right)=\left(\alpha_{n}\left(\theta\right)-\alpha_{n}\left(\theta-\overline{H}^{1}\left(z\right)\right)\right)-\alpha_{n}\left(1-\overline{H}^{0}\left(z\right)\right).$
By using the above representations in $\left(\ref{ratio}\right),$ we obtain
$\displaystyle\sqrt{n}\frac{F_{n}\left(x\right)-F\left(x\right)}{\overline{F}\left(x\right)}$
$\displaystyle=\frac{\alpha_{n}\left(\theta-\overline{H}^{1}\left(x\right)\right)}{\overline{H}\left(x\right)}-\int_{0}^{x}\frac{\alpha_{n}\left(\theta-\overline{H}^{1}\left(z\right)\right)}{\overline{H}^{2}\left(z\right)}dH\left(z\right)$
$\displaystyle-\int_{0}^{x}\frac{\alpha_{n}\left(\theta\right)-\alpha_{n}\left(\theta-\overline{H}^{1}\left(z\right)\right)-\alpha_{n}\left(1-\overline{H}^{0}\left(z\right)\right)}{\overline{H}^{2}\left(z\right)}dH^{1}\left(z\right)+O_{p}\left(\frac{\sqrt{n}}{k}\right).$
By writing
$\alpha_{n}\left(\theta-\overline{H}^{1}\left(z\right)\right)=\alpha_{n}\left(\theta\right)-\left(\alpha_{n}\left(\theta\right)-\alpha_{n}\left(\theta-\overline{H}^{1}\left(z\right)\right)\right),$
it is easy to check that
$\int_{0}^{x}\frac{\alpha_{n}\left(\theta-\overline{H}^{1}\left(z\right)\right)}{\overline{H}^{2}\left(z\right)}dH\left(z\right)=\frac{\alpha_{n}\left(\theta\right)}{\overline{H}\left(x\right)}-\int_{0}^{x}\frac{\alpha_{n}\left(\theta\right)-\alpha_{n}\left(\theta-\overline{H}^{1}\left(z\right)\right)}{\overline{H}^{2}\left(z\right)}dH\left(z\right),$
and therefore
$\displaystyle\sqrt{n}\frac{F_{n}\left(x\right)-F\left(x\right)}{\overline{F}\left(x\right)}$
$\displaystyle=-\frac{\alpha_{n}\left(\theta\right)-\alpha_{n}\left(\theta-\overline{H}^{1}\left(x\right)\right)}{\overline{H}\left(x\right)}+\int_{0}^{x}\frac{\alpha_{n}\left(\theta\right)-\alpha_{n}\left(\theta-\overline{H}^{1}\left(z\right)\right)}{\overline{H}^{2}\left(z\right)}dH\left(z\right)$
$\displaystyle-\int_{0}^{x}\frac{\alpha_{n}\left(\theta\right)-\alpha_{n}\left(\theta-\overline{H}^{1}\left(z\right)\right)-\alpha_{n}\left(1-\overline{H}^{0}\left(z\right)\right)}{\overline{H}^{2}\left(z\right)}dH^{1}\left(z\right)+O_{p}\left(\frac{\sqrt{n}}{k}\right).$
Multiplying both sides of the previous equation by $\sqrt{k/n}$ and using the
Gaussian approximation $\left(\ref{approx}\right)$ at $x=Z_{n-k:n},$ we get
$\displaystyle\sqrt{k}\frac{F_{n}\left(Z_{n-k:n}\right)-F\left(Z_{n-k:n}\right)}{\overline{F}\left(Z_{n-k:n}\right)}$
$\displaystyle=-\sqrt{\frac{n}{k}}\mathbf{B}_{n}\left(Z_{n-k:n}\right)+\sqrt{\frac{k}{n}}\int_{0}^{Z_{n-k:n}}\frac{\mathbf{B}_{n}\left(z\right)}{\overline{H}^{2}\left(z\right)}dH\left(z\right)$
$\displaystyle-\sqrt{\frac{k}{n}}\int_{0}^{Z_{n-k:n}}\frac{\mathbf{B}_{n}^{\ast}\left(z\right)}{\overline{H}^{2}\left(z\right)}dH^{1}\left(z\right)+O_{p}\left(\frac{1}{\sqrt{k}}\right),$
where $\mathbf{B}_{n}\left(z\right)$ and $\mathbf{B}_{n}^{\ast}\left(z\right)$
are two Gaussian processes defined by
$\mathbf{B}_{n}\left(z\right):=B_{n}\left(\theta\right)-B_{n}\left(\theta-\overline{H}^{1}\left(z\right)\right)\text{
and
}\mathbf{B}_{n}^{\ast}\left(z\right):=\mathbf{B}_{n}\left(z\right)-B_{n}\left(1-\overline{H}^{0}\left(z\right)\right).$
(4.32)
The assertions of Lemma 5.3 and the fact that $1/\sqrt{k}\rightarrow 0$ yield
$\displaystyle\sqrt{k}\frac{F_{n}\left(Z_{n-k:n}\right)-F\left(Z_{n-k:n}\right)}{\overline{F}\left(Z_{n-k:n}\right)}$
(4.33)
$\displaystyle=-\sqrt{\frac{n}{k}}\mathbf{B}_{n}\left(Z_{n-k:n}\right)+\sqrt{\frac{k}{n}}\int_{0}^{h}\frac{\mathbf{B}_{n}\left(z\right)}{\overline{H}^{2}\left(z\right)}dH\left(z\right)-\sqrt{\frac{k}{n}}\int_{0}^{h}\frac{\mathbf{B}_{n}^{\ast}\left(z\right)}{\overline{H}^{2}\left(z\right)}dH^{1}\left(z\right)+o_{p}\left(1\right).$
Applying the results of Lemma 5.4 leads to
$\sqrt{k}\frac{F_{n}\left(Z_{n-k:n}\right)-F\left(Z_{n-k:n}\right)}{\overline{F}\left(Z_{n-k:n}\right)}=-p\sqrt{\frac{n}{k}}\mathbb{B}_{n}^{\ast}\left(\frac{k}{n}\right)+o_{p}\left(1\right),$
which in turn implies that
$\sqrt{k}S_{n1}\approx\frac{\gamma_{1}p}{1-\gamma_{1}}\sqrt{\frac{n}{k}}\mathbb{B}_{n}^{\ast}\left(\frac{k}{n}\right).$
In view of the second-order regular variation condition $\left(\ref{second-
order}\right)$ for $\overline{F},$ we have
$\sqrt{k}S_{n2}\approx\frac{\gamma_{1}}{1-\gamma_{1}}\sqrt{k}A_{1}\left(h\right),$
which, by assumption, tends to $0.$ As for the term $S_{n3},$ we use Taylor’s
expansion and the fact that $Z_{n-k:n}\approx h$ to get
$\sqrt{k}S_{n3}\approx-\frac{1}{1-\gamma_{1}}\sqrt{k}\left(\frac{Z_{n-k:n}}{h}-1\right).$
By using Theorem 2.1 we get
$\sqrt{k}S_{n3}\approx-\frac{\gamma}{1-\gamma_{1}}\sqrt{\frac{n}{k}}\mathbb{B}_{n}^{\ast}\left(\frac{k}{n}\right).$
Similar arguments, applied to $S_{n4},$ yield
$\sqrt{k}S_{n4}\approx-\frac{\gamma_{1}\gamma}{1-\gamma_{1}}\sqrt{\frac{n}{k}}\mathbb{B}_{n}^{\ast}\left(\frac{k}{n}\right).$
In view of the consistency of $\widehat{\gamma}_{1}^{\left(H,c\right)},$ it
is easy to verify that
$\sqrt{k}S_{n5}\approx\frac{1}{\left(1-\gamma_{1}\right)^{2}}\sqrt{k}\left(\widehat{\gamma}_{1}^{\left(H,c\right)}-\gamma_{1}\right).$
Once again by using Theorem $\ref{Theorem1},$ we get
$\sqrt{k}S_{n5}\approx\frac{\gamma_{1}}{\left(1-\gamma_{1}\right)^{2}}\left\\{\sqrt{\frac{n}{k}}\int_{0}^{1}s^{-1}\mathbb{B}_{n}^{\ast}\left(\frac{k}{n}s\right)ds-\frac{1}{p}\sqrt{\frac{n}{k}}\mathbb{B}_{n}\left(\frac{k}{n}\right)\right\\}.$
For the term $S_{n6},$ we write
$\sqrt{k}S_{n6}=-\sqrt{k}\int_{1}^{\infty}\left(\frac{\overline{F}\left(hx\right)}{\overline{F}\left(h\right)}-x^{-1/\gamma_{1}}\right)dx,$
and we apply the uniform inequality of regularly varying functions (see, e.g.,
Theorem B. 2.18 in de Haan and Ferreira, 2006, page 383) to show that
$\sqrt{k}S_{n6}\approx-\sqrt{k}A_{1}\left(h\right)\rightarrow 0,$ as
$n\rightarrow\infty.$ Finally, combining the above results on all six terms
$S_{ni}$ completes the proof.
### 4.4. Proof of Corollary 3.1
It is clear that
$\sqrt{k}\left(\widehat{\Pi}_{n}-\Pi_{n}\right)/\left(h\overline{F}\left(h\right)\right)$
is an asymptotically centred Gaussian rv. By using the covariance formulas and
after an elementary calculation, we show that its asymptotic variance equals
$\frac{p\gamma_{1}^{2}}{\left(1-\gamma_{1}\right)^{2}}\left[p\gamma_{1}^{2}+\frac{1}{\left(1-\gamma_{1}\right)^{2}}\right].$
## Concluding notes
The primary objective of the present work is to provide a Gaussian limiting
distribution for the estimator of the shape parameter of a heavy-tailed
distribution under random censorship. Our approach is based on the
approximation of the uniform empirical process by a sequence of Brownian
bridges. This Gaussian representation will be of great use for statistical
inference on quantities related to the tail index in the context of censored
data, such as high quantiles and risk measures. It is noteworthy that for
$p=1$ (which corresponds to the absence of censoring), our main result (the
third assertion of Theorem 2.1) agrees with the Gaussian approximation of the
classical Hill estimator given in Section 1. On the other hand, the variance
we obtain in Corollary 2.1 is the same as that given by Einmahl et al. (2008).
## 5\. Appendix
###### Lemma 5.1.
Assume that conditions $\left(\ref{second-order}\right)$ hold and let
$k:=k_{n}$ be an integer sequence satisfying $(\ref{K}).$ Then, for $z\geq 1,$
we have
$\frac{n}{k}\overline{H}^{1}\left(zh\right)=pz^{-1/\gamma}+O\left(A\left(h\right)\right),\text{
as }n\rightarrow\infty,$
where
$A\left(h\right):=A_{1}\left(h\right)+A_{2}\left(h\right)+A_{1}\left(h\right)A_{2}\left(h\right).$
###### Proof.
Let $z\geq 1$ and recall that
$\overline{H}^{1}\left(z\right)=-\int_{z}^{\infty}\overline{G}\left(x\right)d\overline{F}\left(x\right).$
It is clear that
$\overline{H}^{1}\left(zh\right)=-\int_{z}^{\infty}\overline{G}\left(xh\right)d\overline{F}\left(xh\right).$
Since
$\overline{H}\left(h\right)=\overline{G}\left(h\right)\overline{F}\left(h\right),$
we have
$\frac{\overline{H}^{1}\left(zh\right)}{\overline{H}\left(h\right)}=-\int_{z}^{\infty}\frac{\overline{G}\left(xh\right)}{\overline{G}\left(h\right)}d\frac{\overline{F}\left(xh\right)}{\overline{F}\left(h\right)}.$
It is easy to verify that
$\displaystyle\frac{\overline{H}^{1}\left(zh\right)}{\overline{H}\left(h\right)}$
$\displaystyle=-\int_{z}^{\infty}\left(\frac{\overline{G}\left(xh\right)}{\overline{G}\left(h\right)}-x^{-1/\gamma_{2}}\right)d\left(\frac{\overline{F}\left(xh\right)}{\overline{F}\left(h\right)}-x^{-1/\gamma_{1}}\right)$
$\displaystyle-\int_{z}^{\infty}\left(\frac{\overline{G}\left(xh\right)}{\overline{G}\left(h\right)}-x^{-1/\gamma_{2}}\right)dx^{-1/\gamma_{1}}-\int_{z}^{\infty}x^{-1/\gamma_{2}}d\left(\frac{\overline{F}\left(xh\right)}{\overline{F}\left(h\right)}-x^{-1/\gamma_{1}}\right)$
$\displaystyle-\int_{z}^{\infty}x^{-1/\gamma_{2}}dx^{-1/\gamma_{1}}.$
For the purpose of using the second-order regular variation conditions
$\left(\ref{second-order}\right),$ we write
$\displaystyle\frac{\overline{H}^{1}\left(zh\right)}{\overline{H}\left(h\right)}$
$\displaystyle=-A_{1}\left(h\right)A_{2}\left(h\right)\int_{z}^{\infty}\frac{\frac{\overline{G}\left(xh\right)}{\overline{G}\left(h\right)}-x^{-1/\gamma_{2}}}{A_{2}\left(h\right)}d\frac{\frac{\overline{F}\left(xh\right)}{\overline{F}\left(h\right)}-x^{-1/\gamma_{1}}}{A_{1}\left(h\right)}$
$\displaystyle-
A_{2}\left(h\right)\int_{z}^{\infty}\frac{\frac{\overline{G}\left(xh\right)}{\overline{G}\left(h\right)}-x^{-1/\gamma_{2}}}{A_{2}\left(h\right)}dx^{-1/\gamma_{1}}-A_{1}\left(h\right)\int_{z}^{\infty}x^{-1/\gamma_{2}}d\frac{\frac{\overline{F}\left(xh\right)}{\overline{F}\left(h\right)}-x^{-1/\gamma_{1}}}{A_{1}\left(h\right)}$
$\displaystyle-\int_{z}^{\infty}x^{-1/\gamma_{2}}dx^{-1/\gamma_{1}}.$
Next, we apply the uniform inequality of regularly varying functions (see,
e.g., Theorem B. 2.18 in de Haan and Ferreira, 2006, page 383). For all
$\epsilon,\omega>0,$ there exists $t_{1}$ such that for $hx\geq t_{1}:$
$\left|\frac{\frac{\overline{F}\left(xh\right)}{\overline{F}\left(h\right)}-x^{-1/\gamma_{1}}}{A_{1}\left(h\right)}-x^{-1/\gamma_{1}}\dfrac{x^{\tau_{1}}-1}{\tau_{1}}\right|\leq\epsilon
x^{-1/\gamma_{1}}\max\left(x^{\omega},x^{-\omega}\right).$
Likewise, there exists $t_{2}$ such that for $hx\geq t_{2}:$
$\left|\frac{\frac{\overline{G}\left(xh\right)}{\overline{G}\left(h\right)}-x^{-1/\gamma_{2}}}{A_{2}\left(h\right)}-x^{-1/\gamma_{2}}\dfrac{x^{\tau_{2}}-1}{\tau_{2}}\right|\leq\epsilon
x^{-1/\gamma_{2}}\max\left(x^{\omega},x^{-\omega}\right).$
Making use of the previous two inequalities and noting that
$\overline{H}\left(h\right)=k/n$ and
$-\int_{z}^{\infty}x^{-1/\gamma_{2}}dx^{-1/\gamma_{1}}=pz^{-1/\gamma}$
completes the proof. ∎
###### Lemma 5.2.
In addition to the assumptions of Lemma 5.1, suppose that both cdf’s $F$ and
$G$ are absolutely continuous. Then, for $z\geq 1,$ we have
$(i)\text{ }\beta_{n}\left(z\right)=\sqrt{\dfrac{n}{k}}\mathbb{B}_{n}\left(\dfrac{k}{n}z^{-\gamma}\right)+o_{p}\left(1\right),$
$(ii)\text{ }\widetilde{\beta}_{n}\left(z\right)=\sqrt{\dfrac{n}{k}}\widetilde{\mathbb{B}}_{n}\left(\dfrac{k}{n}z^{-\gamma}\right)+o_{p}\left(1\right),$
$(iii)\text{ }{\displaystyle\int_{1}^{\infty}}z^{-1}\left(\beta_{n}\left(z\right)+\widetilde{\beta}_{n}\left(z\right)\right)dz=\gamma\sqrt{\dfrac{n}{k}}\int_{0}^{1}s^{-1}\mathbb{B}_{n}^{\ast}\left(\dfrac{k}{n}s\right)ds+o_{p}\left(1\right).$
###### Proof.
Let us begin with assertion $\left(i\right).$ A straightforward application of
the weak approximation $\left(\ref{approx}\right)$ yields
$\beta_{n}\left(z\right)=\sqrt{\frac{n}{k}}\left\\{B_{n}\left(\theta\right)-B_{n}\left(\theta-\overline{H}^{1}\left(zZ_{n-k:n}\right)\right)\right\\}+o_{p}\left(1\right).$
Then we have to show that
$\sqrt{\frac{n}{k}}\left\\{B_{n}\left(\theta-\overline{H}^{1}\left(zZ_{n-k:n}\right)\right)-B_{n}\left(\theta-\frac{k}{n}z^{-\gamma}\right)\right\\}=o_{p}\left(1\right).$
Indeed, let $\left\\{W_{n}\left(t\right);0\leq t\leq 1\right\\}$ be a sequence
of Wiener processes defined on $\left(\Omega,\mathcal{A},\mathbb{P}\right)$ so
that
$\left\\{B_{n}\left(t\right);0\leq t\leq
1\right\\}\overset{d}{=}\left\\{W_{n}\left(t\right)-tW_{n}\left(1\right);0\leq
t\leq 1\right\\}.$ (5.34)
Then without loss of generality, we write
$\displaystyle\sqrt{\frac{n}{k}}\left\\{B_{n}\left(\theta-\overline{H}^{1}\left(zZ_{n-k:n}\right)\right)-B_{n}\left(\theta-\frac{k}{n}z^{-\gamma}\right)\right\\}$
$\displaystyle=\sqrt{\frac{n}{k}}\left\\{W_{n}\left(\theta-\overline{H}^{1}\left(zZ_{n-k:n}\right)\right)-W_{n}\left(\theta-\frac{k}{n}z^{-\gamma}\right)\right\\}$
$\displaystyle\ \ \ \ \ \ \ \ \ \ \ \ \ \ \
-\sqrt{\frac{n}{k}}\left(\frac{k}{n}z^{-\gamma}-\overline{H}^{1}\left(zZ_{n-k:n}\right)\right)W_{n}\left(1\right).$
Let $z\geq 1$ be fixed and recall that
$\overline{H}^{1}\left(zZ_{n-k:n}\right)\approx z^{-\gamma}k/n,$ then it is
easy to verify that the second term of the previous quantity tends to zero (in
probability) as $n\rightarrow\infty.\ $Next we show that the first one also
goes to zero in probability. For given $0<\eta<1$ and $0<\varepsilon<1$ small
enough, we have for all large $n$
$\mathbb{P}\left(\left|\frac{\overline{H}^{1}\left(zZ_{n-k:n}\right)}{z^{-\gamma}k/n}-1\right|>\eta^{2}\frac{\varepsilon^{2}}{4z^{\gamma}}\right)<\varepsilon/2.$
Observe now that
$\displaystyle\mathbb{P}\left(\sqrt{\frac{n}{k}}\left|W_{n}\left(\theta-\overline{H}^{1}\left(zZ_{n-k:n}\right)\right)-W_{n}\left(\theta-\frac{k}{n}z^{-\gamma}\right)\right|>\eta\right)$
$\displaystyle=\mathbb{P}\left(\sqrt{\frac{n}{k}}W_{n}\left(\left|\overline{H}^{1}\left(zZ_{n-k:n}\right)-\frac{k}{n}z^{-\gamma}\right|\right)>\eta\right)$
$\displaystyle\leq\mathbb{P}\left(\left|\frac{\overline{H}^{1}\left(zZ_{n-k:n}\right)}{z^{-\gamma}k/n}-1\right|>\eta^{2}\frac{\varepsilon^{2}}{4z^{\gamma}}\right)+\mathbb{P}\left(\sup_{0\leq
t\leq\frac{\varepsilon^{2}}{4}\frac{k}{n}}W_{n}\left(t\right)>\eta\sqrt{k/n}\right).$
It is clear that the first term of the latter expression tends to zero as
$n\rightarrow\infty.$ On the other hand, since
$\left\\{W_{n}\left(t\right);0\leq t\leq 1\right\\}$ is a martingale, the
classical Doob inequality gives, for any $u>0$ and $T>0,$
$\mathbb{P}\left(\sup_{0\leq t\leq
T}W_{n}\left(t\right)>u\right)\leq\mathbb{P}\left(\sup_{0\leq t\leq
T}\left|W_{n}\left(t\right)\right|>u\right)\leq\frac{\mathbf{E}\left|W_{n}\left(T\right)\right|}{u}\leq\frac{\sqrt{T}}{u}.$
Letting $T=\eta^{2}\frac{\varepsilon^{2}}{4}\frac{k}{n}$ and
$u=\eta\sqrt{k/n}$ yields
$\mathbb{P}\left(\sup_{0\leq
t\leq\eta^{2}\frac{\varepsilon^{2}}{4}\frac{k}{n}}W_{n}\left(t\right)>\eta\sqrt{k/n}\right)\leq\varepsilon/2,$
(5.35) which completes the proof of assertion $\left(i\right).$ The proof of
assertion $\left(ii\right)$ follows by similar arguments. For assertion
$(iii),$ let us write
$\displaystyle\int_{1}^{\infty}z^{-1}\left(\beta_{n}\left(z\right)+\widetilde{\beta}_{n}\left(z\right)\right)dz$
$\displaystyle=\sqrt{\frac{n}{k}}\int_{Z_{n-k:n}}^{Z_{n:n}}z^{-1}\left(\alpha_{n}\left(\theta\right)-\alpha_{n}\left(\theta-\overline{H}^{1}\left(z\right)\right)-\alpha_{n}\left(1-\overline{H}^{0}\left(z\right)\right)\right)dz,$
which may be decomposed into
$T_{n1}^{\left(1\right)}+T_{n1}^{\left(2\right)}+T_{n1}^{\left(3\right)}$
where
$T_{n1}^{\left(1\right)}:=\sqrt{\frac{n}{k}}\int_{h}^{H^{-1}\left(1-1/n\right)}z^{-1}\left(\alpha_{n}\left(\theta\right)-\alpha_{n}\left(\theta-\overline{H}^{1}\left(z\right)\right)-\alpha_{n}\left(1-\overline{H}^{0}\left(z\right)\right)\right)dz,$
$T_{n1}^{\left(2\right)}:=\sqrt{\frac{n}{k}}\int_{Z_{n-k:n}}^{h}z^{-1}\left(\alpha_{n}\left(\theta\right)-\alpha_{n}\left(\theta-\overline{H}^{1}\left(z\right)\right)-\alpha_{n}\left(1-\overline{H}^{0}\left(z\right)\right)\right)dz$
and
$T_{n1}^{\left(3\right)}:=\sqrt{\frac{n}{k}}\int_{H^{-1}\left(1-1/n\right)}^{Z_{n,n}}z^{-1}\left(\alpha_{n}\left(\theta\right)-\alpha_{n}\left(\theta-\overline{H}^{1}\left(z\right)\right)-\alpha_{n}\left(1-\overline{H}^{0}\left(z\right)\right)\right)dz.$
Once again by using approximation $\left(\ref{approx}\right),$ we get
$T_{n1}^{\left(1\right)}=\sqrt{\frac{n}{k}}\int_{1}^{\frac{H^{-1}\left(1-1/n\right)}{h}}z^{-1}\left(B_{n}\left(\theta\right)-B_{n}\left(\theta-\overline{H}^{1}\left(hz\right)\right)-B_{n}\left(1-\overline{H}^{0}\left(hz\right)\right)\right)dz+o_{p}\left(1\right).$
Since $H^{-1}\left(1-1/n\right)/h\rightarrow\infty,$ elementary
calculations show that the latter quantity equals (as $n\rightarrow\infty)$
$\sqrt{\frac{n}{k}}\int_{1}^{\infty}z^{-1}\left(B_{n}\left(\theta\right)-B_{n}\left(\theta-p\frac{k}{n}z^{-\gamma}\right)-B_{n}\left(1-q\frac{k}{n}z^{-\gamma}\right)\right)dz+o_{p}\left(1\right).$
By a change of variables and inverting the integration limits we end up with
$\gamma\sqrt{\frac{n}{k}}\int_{0}^{1}s^{-1}\left(B_{n}\left(\theta\right)-B_{n}\left(\theta-p\frac{k}{n}s\right)-B_{n}\left(1-q\frac{k}{n}s\right)\right)ds+o_{p}\left(1\right),$
which equals the right-hand side of assertion $\left(iii\right).$ We have to show
that both $T_{n1}^{\left(2\right)}$ and $T_{n1}^{\left(3\right)}$ tend to zero
in probability as $n\rightarrow\infty.$ Observe that
$\mathbb{P}\left(\left|T_{n1}^{\left(2\right)}\right|>\eta\right)\leq\mathbb{P}\left(I_{n}>\eta\right)+\mathbb{P}\left(\left|\frac{Z_{n-k:n}}{h}-1\right|>\varepsilon\right),$
where
$I_{n}:=\sqrt{\frac{n}{k}}\int_{h}^{\left(1+\varepsilon\right)h}z^{-1}\left|\alpha_{n}\left(\theta\right)-\alpha_{n}\left(\theta-\overline{H}^{1}\left(z\right)\right)-\alpha_{n}\left(1-\overline{H}^{0}\left(z\right)\right)\right|dz.$
We already have
$\mathbb{P}\left(\left|Z_{n-k:n}/h-1\right|>\varepsilon\right)\rightarrow 0;$
it remains to show that $\mathbb{P}\left(I_{n}>\eta\right)\rightarrow 0$ as
well. By applying approximation $\left(\ref{approx}\right),$ we get
$I_{n}=\widetilde{I}_{n}+o_{p}\left(1\right),$ where
$\widetilde{I}_{n}:=\sqrt{\frac{n}{k}}\int_{h}^{\left(1+\varepsilon\right)h}z^{-1}\left|B_{n}\left(\theta\right)-B_{n}\left(\theta-\overline{H}^{1}\left(z\right)\right)-B_{n}\left(1-\overline{H}^{0}\left(z\right)\right)\right|dz.$
Next we show that $\mathbb{P}\left(\widetilde{I}_{n}>\eta\right)\rightarrow
0.$ Setting
$B_{n}^{\ast}\left(z\right):=B_{n}\left(\theta\right)-B_{n}\left(\theta-\overline{H}^{1}\left(z\right)\right)-B_{n}\left(1-\overline{H}^{0}\left(z\right)\right),$
we have
$\mathbf{E}\left[B_{n}^{\ast}\left(x\right)B_{n}^{\ast}\left(y\right)\right]=\min\left(\overline{H}\left(x\right),\overline{H}\left(y\right)\right)-\overline{H}\left(x\right)\overline{H}\left(y\right),$
which implies that
$\mathbf{E}\left|B_{n}^{\ast}\left(z\right)\right|\leq\sqrt{\overline{H}\left(z\right)}$
and since $\overline{H}\left(zh\right)\sim\dfrac{k}{n}z^{-1/\gamma},$ we get
$\mathbf{E}\left|\widetilde{I}_{n}\right|\leq\sqrt{\frac{n}{k}}\int_{1}^{1+\varepsilon}z^{-1}\sqrt{\overline{H}\left(zh\right)}dz\sim
2\gamma\left(1-\left(1+\varepsilon\right)^{-1/2\gamma}\right),$
which tends to zero as $\varepsilon\downarrow 0;$ this means that
$\widetilde{I}_{n}\rightarrow 0$ in probability. By similar arguments we also
show that $T_{n1}^{\left(3\right)}\overset{\mathbb{P}}{\rightarrow}0;$ we
therefore omit the details. ∎
###### Lemma 5.3.
Under the assumptions of Lemma 5.1 we have
$\left(i\right)\text{ }\sqrt{\dfrac{k}{n}}{\displaystyle\int_{h}^{Z_{n-k:n}}}\dfrac{\mathbf{B}_{n}\left(z\right)}{\overline{H}^{2}\left(z\right)}dH\left(z\right)=o_{p}\left(1\right),$
$\left(ii\right)\text{ }\sqrt{\dfrac{k}{n}}{\displaystyle\int_{h}^{Z_{n-k:n}}}\dfrac{\mathbf{B}_{n}^{\ast}\left(z\right)}{\overline{H}^{2}\left(z\right)}dH^{1}\left(z\right)=o_{p}\left(1\right).$
###### Proof.
We begin by proving the first assertion. To this end let us fix $\upsilon>0$
and write
$\displaystyle\mathbb{P}\left(\left|\sqrt{\frac{k}{n}}\int_{h}^{Z_{n-k:n}}\mathbf{B}_{n}\left(z\right)\frac{dH\left(z\right)}{\overline{H}^{2}\left(z\right)}\right|>\upsilon\right)$
$\displaystyle\leq\mathbb{P}\left(\left|\frac{Z_{n-k:n}}{h}-1\right|>\upsilon\right)+\mathbb{P}\left(\left|\sqrt{\frac{k}{n}}\int_{h}^{\left(1+\upsilon\right)h}\mathbf{B}_{n}\left(z\right)\frac{dH\left(z\right)}{\overline{H}^{2}\left(z\right)}\right|>\upsilon\right).$
It is clear that the first term of the previous expression tends to zero as
$n\rightarrow\infty.$ Then we have to show that the second one goes to zero as
well. Indeed, observe that
$\mathbf{E}\left|\sqrt{\frac{k}{n}}\int_{h}^{\left(1+\upsilon\right)h}\mathbf{B}_{n}\left(z\right)\frac{dH\left(z\right)}{\overline{H}^{2}\left(z\right)}\right|\leq\sqrt{\frac{k}{n}}\int_{h}^{\left(1+\upsilon\right)h}\mathbf{E}\left|\mathbf{B}_{n}\left(z\right)\right|\frac{dH\left(z\right)}{\overline{H}^{2}\left(z\right)}.$
Since
$\mathbf{E}\left|\mathbf{B}_{n}\left(z\right)\right|\leq\sqrt{\overline{H}^{1}\left(z\right)},$
then the right-hand side of the latter expression is less than or equal to
$\sqrt{\frac{k}{n}}\int_{h}^{\left(1+\upsilon\right)h}\sqrt{\overline{H}^{1}\left(z\right)}\frac{dH\left(z\right)}{\overline{H}^{2}\left(z\right)}\leq\sqrt{\frac{k}{n}}\sqrt{\overline{H}^{1}\left(h\right)}\left[\frac{1}{\overline{H}\left(\left(1+\upsilon\right)h\right)}-\frac{1}{\overline{H}\left(h\right)}\right],$
which may be rewritten into
$\sqrt{\frac{\overline{H}^{1}\left(h\right)}{\overline{H}\left(h\right)}}\left[\frac{\overline{H}\left(h\right)}{\overline{H}\left(\left(1+\upsilon\right)h\right)}-1\right].$
Since $\overline{H}^{1}\left(h\right)\sim p\overline{H}\left(h\right)$ and
$\overline{H}\in\mathcal{RV}_{\left(-\gamma\right)},$ then the previous
quantity tends to
$p^{1/2}\left(\left(1+\upsilon\right)^{\gamma}-1\right)\text{ as
}n\rightarrow\infty.$
Since $\upsilon$ is arbitrary, it may be chosen small enough to make the
latter quantity arbitrarily small. By similar arguments we also show assertion
$\left(ii\right);$ we therefore omit the details. ∎
###### Lemma 5.4.
Under the assumptions of Lemma 5.1 we have, for $z\geq 1$
$\left(i\right)\text{ }\sqrt{\dfrac{k}{n}}{\displaystyle\int_{0}^{h}}\dfrac{\mathbf{B}_{n}\left(z\right)}{\overline{H}^{2}\left(z\right)}dH\left(z\right)=\sqrt{\dfrac{n}{k}}\mathbb{B}_{n}\left(\dfrac{k}{n}\right)+o_{p}\left(1\right),$
$\left(ii\right)\text{ }\sqrt{\dfrac{k}{n}}{\displaystyle\int_{0}^{h}}\dfrac{\mathbf{B}_{n}^{\ast}\left(z\right)}{\overline{H}^{2}\left(z\right)}dH^{1}\left(z\right)=p\sqrt{\dfrac{n}{k}}\mathbb{B}_{n}^{\ast}\left(\dfrac{k}{n}\right)+o_{p}\left(1\right),$
$\left(iii\right)\text{ }\sqrt{\dfrac{n}{k}}\mathbf{B}_{n}\left(Z_{n-k:n}\right)=\sqrt{\dfrac{n}{k}}\mathbb{B}_{n}\left(\dfrac{k}{n}\right)+o_{p}\left(1\right).$
###### Proof.
We only show assertion $\left(i\right),$ since $\left(ii\right)$ and
$\left(iii\right)$ follow by similar arguments. Observe that
$\int_{0}^{h}\frac{dH\left(z\right)}{\overline{H}^{2}\left(z\right)}=\frac{1}{\overline{H}\left(h\right)}-1,$
and that $\overline{H}\left(h\right)=k/n,$ so that
$\sqrt{\frac{n}{k}}\mathbf{B}_{n}\left(h\right)=\sqrt{\frac{k}{n}}\int_{0}^{h}\frac{\mathbf{B}_{n}\left(h\right)dH\left(z\right)}{\overline{H}^{2}\left(z\right)}+\sqrt{\frac{k}{n}}.$
Let us write
$\sqrt{\frac{k}{n}}\int_{0}^{h}\frac{\mathbf{B}_{n}\left(z\right)}{\overline{H}^{2}\left(z\right)}dH\left(z\right)-\sqrt{\frac{n}{k}}\mathbf{B}_{n}\left(h\right)=\sqrt{\frac{k}{n}}\int_{0}^{h}\frac{\mathbf{B}_{n}\left(z\right)-\mathbf{B}_{n}\left(h\right)}{\overline{H}^{2}\left(z\right)}dH\left(z\right)+\sqrt{\frac{k}{n}}.$
We have
$\sqrt{\frac{k}{n}}\int_{0}^{h}\frac{\mathbf{B}_{n}\left(z\right)-\mathbf{B}_{n}\left(h\right)}{\overline{H}^{2}\left(z\right)}dH\left(z\right)=\sqrt{\frac{k}{n}}\int_{0}^{h}\frac{B_{n}\left(\theta-\overline{H}^{1}\left(h\right)\right)-B_{n}\left(\theta-\overline{H}^{1}\left(z\right)\right)}{\overline{H}^{2}\left(z\right)}dH\left(z\right).$
It is clear that
$\displaystyle\sqrt{\frac{k}{n}}\int_{0}^{h}\frac{\mathbf{B}_{n}\left(z\right)-\mathbf{B}_{n}\left(h\right)}{\overline{H}^{2}\left(z\right)}dH\left(z\right)$
(5.36)
$\displaystyle=\sqrt{\frac{k}{n}}\int_{0}^{h}\frac{W_{n}\left(\theta-\overline{H}^{1}\left(h\right)\right)-W_{n}\left(\theta-\overline{H}^{1}\left(z\right)\right)}{\overline{H}^{2}\left(z\right)}dH\left(z\right)$
$\displaystyle\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \
-W_{n}\left(1\right)\sqrt{\frac{k}{n}}\int_{0}^{h}\frac{\overline{H}^{1}\left(z\right)-\overline{H}^{1}\left(h\right)}{\overline{H}^{2}\left(z\right)}dH\left(z\right),$
where $\left\\{W_{n}\left(t\right),0\leq t\leq 1\right\\}$ is the sequence of
Wiener processes defined in $\left(\ref{W}\right).$ Next we show that both
terms of the last expression tend to zero in probability. Indeed, it is easy
to verify
$\displaystyle\mathbf{E}\left|\sqrt{\frac{k}{n}}\int_{0}^{h}\frac{W_{n}\left(\theta-\overline{H}^{1}\left(h\right)\right)-W_{n}\left(\theta-\overline{H}^{1}\left(z\right)\right)}{\overline{H}^{2}\left(z\right)}dH\left(z\right)\right|$
$\displaystyle\leq\sqrt{\frac{k}{n}}\int_{0}^{h}\frac{\sqrt{\overline{H}^{1}\left(z\right)-\overline{H}^{1}\left(h\right)}}{\overline{H}^{2}\left(z\right)}dH\left(z\right).$
It is clear that
$\sqrt{\frac{k}{n}}\int_{0}^{h}\frac{\sqrt{\overline{H}^{1}\left(z\right)-\overline{H}^{1}\left(h\right)}}{\overline{H}^{2}\left(z\right)}dH\left(z\right)=\sqrt{\overline{H}\left(h\right)}{\displaystyle\int_{0}^{h}}\frac{\sqrt{\overline{H}^{1}\left(z\right)-\overline{H}^{1}\left(h\right)}}{\overline{H}^{2}\left(z\right)}dH\left(z\right).$
By elementary calculations using L’Hôpital’s rule, we infer that the latter
quantity tends to zero as $n\rightarrow\infty.$ Likewise, we also show that
$\sqrt{\frac{k}{n}}\int_{0}^{h}\frac{\overline{H}^{1}\left(z\right)-\overline{H}^{1}\left(h\right)}{\overline{H}^{2}\left(z\right)}dH\left(z\right)\rightarrow
0,\text{ as }n\rightarrow\infty,$
which implies that the right-hand side of equation $\left(\ref{diff}\right)$ goes
to zero in probability. It remains to check that
$\sqrt{\frac{n}{k}}\mathbf{B}_{n}\left(h\right)=\sqrt{\frac{n}{k}}\mathbb{B}_{n}\left(\frac{k}{n}\right)+o_{p}\left(1\right).$
Recalling
$\mathbf{B}_{n}\left(h\right)=B_{n}\left(\theta\right)-B_{n}\left(\theta-\overline{H}^{1}\left(h\right)\right)\text{
and
}\mathbb{B}_{n}\left(\frac{k}{n}\right)=B_{n}\left(\theta\right)-B_{n}\left(\theta-p\frac{k}{n}\right),$
we write
$\sqrt{\frac{n}{k}}\left(\mathbf{B}_{n}\left(h\right)-\mathbb{B}_{n}\left(\frac{k}{n}\right)\right)=\sqrt{\frac{n}{k}}\left(B_{n}\left(\theta-
pk/n\right)-B_{n}\left(\theta-\overline{H}^{1}\left(h\right)\right)\right).$
Then, we have to show that the latter tends to zero in probability. By
writing $B_{n}$ in terms of $W_{n}$ as above, it is easy to verify that
$\displaystyle\sqrt{\frac{n}{k}}\mathbf{E}\left|B_{n}\left(\theta-
pk/n\right)-B_{n}\left(\theta-\overline{H}^{1}\left(h\right)\right)\right|$
$\displaystyle\leq\sqrt{\frac{n}{k}}\sqrt{\overline{H}^{1}\left(h\right)-pk/n}+\sqrt{\frac{n}{k}}\left|\overline{H}^{1}\left(h\right)-pk/n\right|$
$\displaystyle=\sqrt{\frac{\overline{H}^{1}\left(h\right)}{k/n}-p}+\sqrt{\frac{k}{n}}\left|\frac{\overline{H}^{1}\left(h\right)}{k/n}-p\right|,$
which converges to zero as $n\rightarrow\infty,$ since
$\overline{H}^{1}\left(h\right)\approx pk/n.$ This completes the proof. ∎
## References
* Beirlant et al. (1994) Beirlant, J., Teugels, J. and Vynckier, P., 1994. Extremes in non-life insurance. In Extreme Value Theory and Applications (ed. J. Galambos), 489-510. Kluwer Academic Publishers.
* Beirlant et al. (2004) Beirlant, J., Goegebeur, Y., Segers, J. and Teugels, J., 2004. Statistics of Extremes - Theory and applications. Wiley.
* Beirlant et al. (2007) Beirlant, J., Guillou, A., Dierckx, G., Fils-Villetard, A., 2007. Estimation of the extreme value index and extreme quantiles under random censoring. Extremes. 10, no. 3, 151-174.
* Brahimi et al. (2011) Brahimi, B., Meraghni, D., Necir, A. and Zitikis, R., 2011. Estimating the distortion parameter of the proportional-hazard premium for heavy-tailed losses. Insurance Math. Econom. 49, no. 3, 325-334.
* Brahimi et al. (2013) Brahimi, B., Meraghni, D., Necir, A. and Yahia, D., 2013. A bias-reduced estimator for the mean of a heavy-tailed distribution with an infinite second moment. J. Statist. Plann. Inference 143, no. 6, 1064-1081.
* Csörgő and Mason (1985) Csörgő, S. and Mason, D.M., 1985. Central limit theorems for sums of extreme values. Math. Proc. Cambridge Philos. Soc. 98, no. 3, 547-558.
* Csörgő et al. (1986) Csörgő, M., Csörgő, S., Horváth, L. and Mason, D.M., 1986. Weighted empirical and quantile processes. Ann. Probab. 14, no. 1, 31-85.
* Csörgő (1996) Csörgő, S., 1996. Universal Gaussian approximations under random censorship. Ann. Statist. 24, no. 6, 2744-2778.
* Deheuvels and Einmahl (1996) Deheuvels, P. and Einmahl, J.H.J., 1996. On the strong limiting behavior of local functionals of empirical processes based upon censored data. Ann. Probab. 24, no. 1, 504-525.
* Einmahl et al. (2008) Einmahl, J.H.J., Fils-Villetard, A. and Guillou, A., 2008. Statistics of extremes under random censoring. Bernoulli. 14, no.1, 207-227.
* Einmahl and Koning (1992) Einmahl, J.H.J. and Koning, A.J., 1992\. Limit theorems for a general weighted process under random censoring. Canad. J. Statist. 20, no. 1, 77-89.
* Embrechts et al. (1997) Embrechts, P., Klüppelberg, C., Mikosch, T., 1997. Modelling Extremal Events for Insurance and Finance. Springer-Verlag, New York.
* Ghosh and Resnick (2010) Ghosh, S. and Resnick, S., 2010. A discussion on mean excess plots. Stochastic Process. Appl. 120, no 8, 1492-1517.
* Gomes and Neves (2011) Gomes, M.I. and Neves, M.M., 2011. Estimation of the extreme value index for randomly censored data. Biometrical Letters. 48, no.1, 1-22.
* Greselin et al. (2013) Greselin, F., Pasquazzi, L. and Zitikis, R., 2013. Heavy tailed capital incomes: Zenga index, statistical inference, and ECHP data analysis. Extremes, DOI 10.1007/s10687-013-0177-2.
* de Haan and Stadtmüller (1996) de Haan, L. and Stadtmüller, U., 1996. Generalized regular variation of second order. J. Australian Math. Soc. (Series A) 61, 381-395.
* de Haan and Peng (1998) de Haan, L. and Peng, L., 1998. Comparison of tail index estimators. Statist. Neerlandica. 52, no. 1. 60-70.
* de Haan and Ferreira (2006) de Haan, L. and Ferreira, A., 2006\. Extreme Value Theory: An Introduction. Springer.
* Hall (1982) Hall, P., 1982. On some simple estimates of an exponent of regular variation. Journal of the Royal Statistical Society. 44. 37-42.
* Hill (1975) Hill, B.M., 1975. A simple general approach to inference about the tail of a distribution. Ann. Statist. 3, no.5. 1163-1174.
* Kaplan and Meier (1958) Kaplan, E.L., Meier, P., 1958. Nonparametric estimation from incomplete observations. J. Amer. Statist. Assoc. 53, 457-481.
* Koning and Peng (2008) Koning, A.J. and Peng, L., 2008. Goodness-of-fit tests for a heavy tailed distribution. J. Statist. Plann. Inference. 138, no. 12, 3960-3981.
* Mason (1982) Mason, D.M., 1982. Laws of large numbers for sums of extreme values. Ann. Probab. 10. 756-764.
* Necir et al. (2007) Necir, A., Meraghni, D. and Meddi, F., 2007. Statistical estimate of the proportional hazard premium of loss. Scand. Actuar. J., no. 3, 147-161.
* Necir and Meraghni (2009) Necir, A. and Meraghni, D., 2009. Empirical estimation of the proportional hazard premium for heavy-tailed claim amounts. Insurance Math. Econom. 45, no. 1, 49-58.
* Peng (2001) Peng, L., 2001. Estimating the mean of a heavy tailed distribution. Statist. Probab. Lett. 52, no. 3, 255-264.
* Peng (2004) Peng, L., 2004. Empirical-likelihood-based confidence interval for the mean with a heavy-tailed distribution. Ann. Statist. 32, no. 3, 1192-1214.
* Reiss and Thomas (1997) Reiss, R.D. and Thomas, M., 1997. Statistical Analysis of Extreme Values with Applications to Insurance, Finance, Hydrology and Other Fields. Birkhäuser.
* Shorack and Wellner (1986) Shorack, G.R. and Wellner, J.A., 1986. Empirical Processes with Applications to Statistics. Wiley.
* Stute (1995) Stute, W., 1995. The central limit theorem under random censorship. Ann. Statist. 23, 422-439.
* Vandewalle and Beirlant (2006) Vandewalle, B. and Beirlant, J., 2006. On univariate extreme value statistics and the estimation of reinsurance premiums. Insurance Math. Econom. 38, no. 3, 441-459.
* Wang (1996) Wang, S.S., 1996. Premium Calculation by Transforming the Layer Premium Density. ASTIN Bulletin. 26, 71-92.
|
arxiv-papers
| 2013-02-07T08:12:07 |
2024-09-04T02:49:41.490162
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"authors": "Brahim Brahimi, Djamel Meraghni and Abdelhakim Necir",
"submitter": "Brahimi Brahim",
"url": "https://arxiv.org/abs/1302.1666"
}
|
1302.1916
|
Estimation of Distribution Overlap of Urn Models
Jerrad Hampton1, Manuel E. Lladser1,∗
1 Department of Applied Mathematics, University of Colorado, Boulder,
Colorado, United States of America
$\ast$ E-mail: [email protected]
## Abstract
A classical problem in statistics is estimating the expected coverage of a
sample, which has had applications in gene expression, microbial ecology,
optimization, and even numismatics. Here we consider a related extension of
this problem to random samples of two discrete distributions. Specifically, we
estimate what we call the dissimilarity probability of a sample, i.e., the
probability of a draw from one distribution not being observed in $k$ draws
from another distribution. We show our estimator of dissimilarity to be a
$U$-statistic and a uniformly minimum variance unbiased estimator of
dissimilarity over the largest appropriate range of $k$. Furthermore, despite
the non-Markovian nature of our estimator when applied sequentially over $k$,
we show it converges uniformly in probability to the dissimilarity parameter,
and we present criteria when it is approximately normally distributed and
admits a consistent jackknife estimator of its variance. As proof of concept,
we analyze V35 16S rRNA data to discern between various microbial
environments. Other potential applications concern any situation where
dissimilarity of two discrete distributions may be of interest. For instance,
in SELEX experiments, each urn could represent a random RNA pool and each draw
a possible solution to a particular binding site problem over that pool. The
dissimilarity of these pools is then related to the probability of finding
binding site solutions in one pool that are absent in the other.
## Introduction
An inescapable problem in microbial ecology is that a sample from an
environment typically does not observe all species present in that
environment. In [1], this problem has been recently linked to the concepts of
_coverage probability_ (i.e. the probability that a member from the
environment is represented in the sample) and the closely related _discovery_
or _unobserved probability_ (i.e. the probability that a previously unobserved
species is seen with another random observation from that environment). The
mathematical treatment of coverage is not limited, however, to microbial
ecology and has found applications in varied contexts, including gene
expression, microbial ecology, optimization, and even numismatics.
The point estimation of coverage and discovery probability seems to have been
first addressed by Turing and Good [2] to help decipher the Enigma Code, and
subsequent work has provided point predictors and prediction intervals for
these quantities under various assumptions [3, 4, 5, 1].
Following Robbins [6] and in more generality Starr [7], an unbiased estimator
of the expected discovery probability of a sample of size $n$ is
$\sum_{k=1}^{r}\frac{{r-1\choose k-1}}{{n+r\choose k}}\cdot N(k,n+r),$ (1)
where $N(k,n+r)$ is the number of species observed exactly $k$-times in a
sample with replacement of size $(n+r)$. Using the theory of U-statistics
developed by Halmos [8], Clayton and Frees [9] show that the above estimator
is the _uniformly minimum variance unbiased estimator_ (UMVUE) of the expected
discovery probability of a sample of size $n$ based on an enlarged sample of
size $(n+r)$.
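For concreteness, here is a short Python sketch of estimator (1) (the function name and interface are ours): it tabulates $N(k,n+r)$ from the enlarged sample and evaluates the sum.

```python
from collections import Counter
from math import comb

def discovery_probability_estimate(enlarged_sample, n):
    """Sketch (ours) of the unbiased estimator (1): `enlarged_sample` is a list
    of n + r observed species labels, and n is the nominal sample size."""
    r = len(enlarged_sample) - n
    # N(k, n+r): number of species observed exactly k times in the enlarged sample
    n_k = Counter(Counter(enlarged_sample).values())
    return sum(comb(r - 1, k - 1) / comb(n + r, k) * count
               for k, count in n_k.items() if k <= r)
```

For instance, with the enlarged sample `list("abracadabra")` (of size $11$) and $n=10$, so that $r=1$, the sum reduces to Robbins’ estimate $N(1,n+1)/(n+1)=2/11$.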
A quantity analogous to the discovery probability of a sample from a single
environment but in the context of two environments is _dissimilarity_ , which
we broadly define as the probability that a draw in one environment is not
represented in a random sample (of a given size) from a possibly different
environment. Estimating the dissimilarity of two microbial environments is
therefore closely related to the problem of assessing the species that are
unique to each environment, and the concept of dissimilarity may find
applications to measure sample quality and allocate additional sampling
resources, for example, for a more robust and reliable estimation of the
UniFrac distance [10, 11] between pairs of environments. Dissimilarity may
find applications in other and very different contexts. For instance, in SELEX
experiments [12]—a laboratory technique in which an initial pool of
synthesized random RNA sequences is repeatedly screened to yield a pool
containing only sequences with given biological functions—the dissimilarity of
two RNA pools corresponds to the probability of finding binding site solutions
in one pool that are absent in the other.
In this manuscript, we study an estimator of dissimilarity probability similar
to Robbins’ and Starr’s statistic for discovery probability. Our estimator is
optimal among the appropriate class of unbiased statistics, while being
approximately normally distributed in a general case. The variance of this
statistic is estimated using a consistent jackknife. As proof of concept, we
analyze samples of processed V35 16S rRNA data from the Human Microbiome
Project [13].
### Probabilistic Formulation and Inference Problem
To study dissimilarity probability, we use the mathematical model of a pair of
urns, where each urn has an unknown composition of balls of different colors,
and where there is no a priori knowledge of the contents of either urn.
Information concerning the urn composition is inferred from repeated draws
with replacement from that urn.
In what follows, $X_{1},X_{2},\ldots$ and $Y_{1},Y_{2},\ldots$ are independent
sequences of independent and identically distributed (i.i.d.) discrete random
variables with probability mass functions ${\mathbb{P}}_{x}$ and
${\mathbb{P}}_{y}$, respectively. Without loss of generality we assume that
${\mathbb{P}}_{x}$ and ${\mathbb{P}}_{y}$ are supported over possibly infinite
subsets of ${\mathbb{N}}=\\{1,2,3,\ldots\\}$, and think of outcomes from these
distributions as “colors”: i.e. we speak of color-$1$, color-$2$, etc. Let
$I_{x}$ denote the set of colors $i$ such that ${\mathbb{P}}_{x}(i)>0$, and
similarly define $I_{y}$. Under this perspective, $X_{k}$ denotes the color of
the $k$-th ball drawn with replacement from urn-$x$. Similarly, $Y_{k}$ is the
color of the $k$-th ball drawn with replacement from urn-$y$. Note that based
on our formulation, distinct draws are always independent.
The mathematical analysis that follows was motivated by the problem of
estimating the fraction of balls in urn-$x$ with a color that is absent in
urn-$y$. We can write this parameter as
$\theta_{x,y}(\infty):=\sum_{i\in(I_{x}\setminus
I_{y})}{\mathbb{P}}_{x}(i)=\lim_{k\to\infty}\theta_{x,y}(k),$ (2)
where
$\theta_{x,y}(k):=\sum_{i\in
I_{x}}{\mathbb{P}}_{x}(i)(1-{\mathbb{P}}_{y}(i))^{k}={\mathbb{P}}\left(X_{1}\notin\left\\{Y_{1},\ldots,Y_{k}\right\\}\right).$
(3)
The parameter $\theta_{x,y}(\infty)$ measures the proportion of urn-$x$ that
is not represented in urn-$y$. On the other hand, $\theta_{x,y}(k)$ is a measure of
the effectiveness of $k$-samples from urn-$y$ to determine uniqueness in
urn-$x$. This motivates us to refer to the quantity in (2) as the
_dissimilarity of urn- $x$ from urn-$y$_, and to the quantity in (3) as the
_average dissimilarity of urn- $x$ relative to $k$-draws from urn-$y$_. Note
that these parameters are in general asymmetric in the roles of the urns. In
what follows, urns-$x$ and -$y$ are assumed fixed, which motivates us to
remove subscripts and write $\theta(k)$ instead of $\theta_{x,y}(k)$.
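For concreteness, the following short sketch (in Python) evaluates (2) and (3) for a pair of urns with known composition; the dictionaries `P_x` and `P_y` are hypothetical toy examples and are not part of our data.

```python
# A minimal sketch of equations (2)-(3) for two toy urns with known
# compositions; P_x and P_y below are hypothetical examples.
P_x = {1: 0.5, 2: 0.3, 3: 0.2}   # urn-x: color -> probability
P_y = {1: 0.6, 2: 0.4}           # urn-y: color -> probability

def theta(k, P_x, P_y):
    """Average dissimilarity of urn-x relative to k draws from urn-y, eq. (3)."""
    return sum(p * (1.0 - P_y.get(color, 0.0)) ** k for color, p in P_x.items())

def theta_inf(P_x, P_y):
    """Dissimilarity of urn-x from urn-y, eq. (2)."""
    return sum(p for color, p in P_x.items() if color not in P_y)

print(theta(10, P_x, P_y))   # approaches theta_inf(P_x, P_y) = 0.2 as k grows
```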
Unfortunately, one cannot estimate unbiasedly the dissimilarity of one urn
from another based on finite samples, as stated in the following result. (See
the Materials and Methods section for the proofs of all of our results.)
###### Theorem 1.
(No unbiased estimator of dissimilarity.) There is no unbiased estimator of
$\theta(\infty)$ based on finite samples from two arbitrary urns-$x$ and -$y$.
Furthermore, estimating $\theta(\infty)$ accurately without further
assumptions on the compositions of urns-$x$ and -$y$ seems a difficult if not
impossible task. For instance, arbitrarily small perturbations of urn-$y$ are
likely to be unnoticed in a sample of a given size from this urn but may
drastically affect the dissimilarity of other urns from urn-$y$. To
demonstrate this idea, consider a parameter $0\leq\epsilon\leq 1$ and let
$\mathbb{P}_{x}(1):=1$, $\mathbb{P}_{y}(1):=\epsilon$ and
$\mathbb{P}_{y}(2):=(1-\epsilon)$. If $\epsilon=0$ then $\theta(\infty)=1$
while, for each $\epsilon>0$, $\theta(\infty)=0$.
In contrast with the above, for fixed $k$, $\theta(k)$ depends continuously on
$({\mathbb{P}}_{x},{\mathbb{P}}_{y})$ e.g. under the metric
$d\big{(}({\mathbb{P}}_{x},{\mathbb{P}}_{y}),({\mathbb{P}}_{x^{\prime}},{\mathbb{P}}_{y^{\prime}})\big{)}:=\|{\mathbb{P}}_{x}-{\mathbb{P}}_{x^{\prime}}\|+\|{\mathbb{P}}_{y}-{\mathbb{P}}_{y^{\prime}}\|,$
where $\|\nu\|:=\sup_{A\subset{\mathbb{N}}}|\nu(A)|=\sum_{i}|\nu(i)|/2$
denotes the total variation of a signed measure $\nu$ over ${\mathbb{N}}$ such
that $\nu({\mathbb{N}})=0$. This is the case because
$\displaystyle\left|\sum_{i}{\mathbb{P}}_{x}(i)(1-{\mathbb{P}}_{y}(i))^{k}-\sum_{i}{\mathbb{P}}_{x^{\prime}}(i)(1-{\mathbb{P}}_{y^{\prime}}(i))^{k}\right|$
$\displaystyle\leq$
$\displaystyle\sum_{i}|{\mathbb{P}}_{x}(i)-{\mathbb{P}}_{x^{\prime}}(i)|+k\sum_{i}|{\mathbb{P}}_{y}(i)-{\mathbb{P}}_{y^{\prime}}(i)|,$
$\displaystyle\leq$ $\displaystyle 2(k+1)\cdot
d\big{(}({\mathbb{P}}_{x},{\mathbb{P}}_{y}),({\mathbb{P}}_{x^{\prime}},{\mathbb{P}}_{y^{\prime}})\big{)}.$
The above implies that $\theta(k)$ is continuous with respect to any metric
equivalent to $d$. Many such metrics can be conceived. For instance, if
$({\mathbb{P}}_{x}^{m}\times{\mathbb{P}}_{y}^{n})$ denotes the probability
measure associated with $m$ samples with replacement from urn-$x$ that are
independent of $n$ samples with replacement from urn-$y$ then $\theta(k)$ is
also continuous with respect to any of the metrics
$d_{m,n}\big{(}({\mathbb{P}}_{x},{\mathbb{P}}_{y}),({\mathbb{P}}_{x^{\prime}},{\mathbb{P}}_{y^{\prime}})\big{)}:=\|({\mathbb{P}}_{x}^{m}\times{\mathbb{P}}_{y}^{n})-({\mathbb{P}}_{x^{\prime}}^{m}\times{\mathbb{P}}_{y^{\prime}}^{n})\|$,
with $m,n\geq 1$, because
$d\big{(}({\mathbb{P}}_{x},{\mathbb{P}}_{y}),({\mathbb{P}}_{x^{\prime}},{\mathbb{P}}_{y^{\prime}})\big{)}/2\leq
d_{m,n}\big{(}({\mathbb{P}}_{x},{\mathbb{P}}_{y}),({\mathbb{P}}_{x^{\prime}},{\mathbb{P}}_{y^{\prime}})\big{)}\leq\max\\{m,n\\}\cdot
d\big{(}({\mathbb{P}}_{x},{\mathbb{P}}_{y}),({\mathbb{P}}_{x^{\prime}},{\mathbb{P}}_{y^{\prime}})\big{)}.$
Because of the above considerations, we discourage the direct estimation of
$\theta(\infty)$ and focus on the problem of estimating $\theta(k)$
accurately.
## Results
Consider a finite number of draws with replacement $X_{1},\ldots,X_{n_{x}}$
and $Y_{1},\ldots,Y_{n_{y}}$, from urn-$x$ and urn-$y$, respectively, where
$n_{x},n_{y}\geq 1$ are assumed fixed. Using this data we can estimate
$\theta(k)$, for $k=1:n_{y}$, via the estimator:
$\hat{\theta}(k):=\frac{1}{n_{x}{n_{y}\choose
k}}\mathop{\sum}\limits_{j=0}^{n_{y}-k}{n_{y}-j\choose k}Q(j),$ (4)
where
$Q(j):=\left\\{\begin{array}[]{l}\hbox{number of indices $i=1:n_{x}$ such
that}\\\ \hbox{color $X_{i}$ occurs $j$-times in
$Y_{1},\ldots,Y_{n_{y}}$.}\end{array}\right.$ (5)
We refer to $Q(0),\ldots,Q(n_{y})$ as the $Q$-statistics summarizing the data
from both urns. Due to the well-known relation $\sum_{i=1}^{r}i=r(r+1)/2$, at
most $(1+\sqrt{2n_{y}})$ of these statistics are non-zero: if $r$ distinct
positive counts $j$ occur among the colors of $Y_{1},\ldots,Y_{n_{y}}$ then
$r(r+1)/2\leq n_{y}$. This sparsity may be exploited in the calculation of the
right-hand side of (4) over a large range of $k$’s.
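To illustrate how (4) and (5) may be computed from data, the following Python sketch builds the $Q$-statistics as a sparse table and evaluates $\hat{\theta}(k)$; the function names `q_statistics` and `theta_hat` are ours and purely illustrative.

```python
import math
from collections import Counter

def q_statistics(xs, ys):
    """Q(j) of eq. (5): number of draws X_i whose color occurs exactly j times in ys.
    Returned as a sparse Counter keyed by j, which exploits the sparsity noted above."""
    y_counts = Counter(ys)
    return Counter(y_counts.get(x, 0) for x in xs)

def theta_hat(k, xs, ys):
    """The estimator of theta(k) in eq. (4); math.comb(n_y - j, k) is zero when j > n_y - k."""
    n_x, n_y = len(xs), len(ys)
    q = q_statistics(xs, ys)
    total = sum(math.comb(n_y - j, k) * count for j, count in q.items())
    return total / (n_x * math.comb(n_y, k))
```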
Our statistic $\hat{\theta}(k)$ is the U-statistic associated with the
kernel $[\\![X_{1}\notin\\{Y_{1},\ldots,Y_{k}\\}]\\!]$, where
$[\\![\cdot]\\!]$ is used to denote the indicator function of the event within
the brackets (Iverson’s bracket notation). Following the approach by Halmos in
[8], we can show that this U-statistic is optimal amongst the unbiased
estimators of $\theta(k)$ for $k=1:n_{y}$. We note that no additional samples
from either urn are necessary to estimate $\theta(k)$ unbiasedly over this
range when $n_{x}\geq 1$. This contrasts with the estimator in equation (1),
which requires sample enlargement for unbiased estimation of discovery
probability of a sample of size $n$.
###### Theorem 2.
(Minimum variance unbiased estimator.) If $n_{x}\geq 1$ and $n_{y}\geq k$ then
$\hat{\theta}(k)$ is the unique uniformly minimum variance unbiased estimator
of $\theta(k)$. Further, no unbiased estimator of $\theta(k)$ exists for
$n_{x}=0$ or $n_{y}<k$.
Our next result shows that $\hat{\theta}(k)$ converges uniformly in
probability to $\theta(k)$ over the largest possible range where unbiased
estimation of the latter parameter is possible, despite the non-Markovian
nature of $\hat{\theta}(k)$ when applied sequentially over $k$. The result
asserts that $\hat{\theta}(k)$ is likely to be a good approximation of
$\theta(k)$, uniformly for $k=1:n_{y}$, when $n_{x}$ and $n_{y}$ are large.
The method of proof uses an approach by Hoeffding [14] for the exact
calculation of the variance of a $U$-statistic.
###### Theorem 3.
(Uniform convergence in probability.) Independently of how $n_{x}$ and $n_{y}$
tend to infinity, it follows for each $\epsilon>0$ that
$\mathop{\lim}\limits_{n_{x},n_{y}\rightarrow\infty}\mathbb{P}\left(\mathop{\max}\limits_{k=1:n_{y}}|\hat{\theta}(k)-\theta(k)|>\epsilon\right)=0.$
(6)
We may estimate the variance of $\hat{\theta}(k)$ for $k=1:n_{y}$ via a
leave-one-out (also called delete-$1$) jackknife estimator, following an
approach studied by Efron and Stein [15] and Shao and Wu [16].
To account for variability in the $x$-data through a leave-one-out jackknife
estimate, we require that $n_{x}\geq 2$ and let
$\displaystyle S^{2}_{x}(k)$ $\displaystyle:=$
$\displaystyle\frac{1}{n_{x}(n_{x}-1)}\sum\limits_{j=0}^{n_{y}}Q(j)\left(\frac{{n_{y}-j\choose k}}{{n_{y}\choose k}}-\hat{\theta}(k)\right)^{2},$ (7)
with the convention that ${n_{y}-j\choose k}:=0$ whenever $j>n_{y}-k$.
On the other hand, to account for variability in the $y$-data, consider for
$i\geq 1$ and $j\geq 0$ the statistics
$M(i,j):=\left\\{\begin{array}[]{l}\hbox{number of colors $c$\, such\, that\,
color\, $c$\, occurs\, exactly}\\\ \hbox{$i$-times in
$(X_{1},\ldots,X_{n_{x}})$ and $j$-times in
$(Y_{1},\ldots,Y_{n_{y}})$.}\end{array}\right.$ (8)
Clearly, $\sum_{i}i\,M(i,j)=Q(j)$; in particular, the $M$-statistics are a
refinement of the $Q$-statistics. Define $S^{2}_{y}(n_{y}):=0$ and, for
$k<n_{y}$, define
$\displaystyle S^{2}_{y}(k)$ $\displaystyle:=$
$\displaystyle\frac{n_{y}-1}{n_{y}}\mathop{\sum}\limits_{i=1}^{n_{x}}\mathop{\sum}\limits_{j=1}^{n_{y}-k}j\,M(i,j)\left(i(c_{j-1}(k)-c_{j}(k))+\hat{\theta}_{y}(k)-\hat{\theta}(k)\right)^{2},$
(9)
where
$\displaystyle c_{j}(k)$ $\displaystyle:=$
$\displaystyle\frac{{n_{y}-j-1\choose k}}{n_{x}{n_{y}-1\choose k}};$ (10)
$\displaystyle\hat{\theta}_{y}(k)$ $\displaystyle:=$
$\displaystyle\mathop{\sum}\limits_{j=0}^{n_{y}-k-1}c_{j}(k)\,Q(j).$ (11)
Our estimator of the variance of $\hat{\theta}(k)$ is obtained by summing the
variance attributable to the $x$-data and the $y$-data and is given by
$S^{2}(k):=S^{2}_{x}(k)+S^{2}_{y}(k),$ (12)
for $k=1:n_{y}$; in particular, $S(k)$ is our jackknife estimate of the
standard deviation of $\hat{\theta}(k)$.
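The following sketch computes $S^{2}(k)$ directly from the leave-one-out definitions used in the Materials and Methods section, rather than from the closed forms (7) and (9); it is meant only as a small, easy-to-check reference implementation and reuses the hypothetical `theta_hat` helper sketched earlier.

```python
def jackknife_variance(k, xs, ys):
    """Delete-1 jackknife estimate S^2(k) = S^2_x(k) + S^2_y(k), transcribed directly
    from the leave-one-out definitions.  Assumes xs and ys are lists with len(xs) >= 2
    and reuses the hypothetical theta_hat sketched above; it recomputes theta_hat for
    every deletion, so it is unoptimized and intended for small samples only."""
    n_x, n_y = len(xs), len(ys)
    th = theta_hat(k, xs, ys)
    # variability attributable to the x-data
    s2_x = (n_x - 1) / n_x * sum(
        (theta_hat(k, xs[:i] + xs[i + 1:], ys) - th) ** 2 for i in range(n_x))
    # variability attributable to the y-data; defined to be zero when k = n_y
    s2_y = 0.0
    if k < n_y:
        s2_y = (n_y - 1) / n_y * sum(
            (theta_hat(k, xs, ys[:r] + ys[r + 1:]) - th) ** 2 for r in range(n_y))
    return s2_x + s2_y
```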
To assess the quality of $S^{2}(k)$ as an estimate of the variance of
$\hat{\theta}(k)$ and the asymptotic distribution of the latter statistic, we
require a few assumptions that rule out degenerate cases. The following
conditions are used in the remaining theorems in this section:
1. (a)
$|I_{x}\cap I_{y}|<\infty$.
2. (b)
there are at least two colors in $(I_{x}\cap I_{y})$ that occur in different
proportions in urn-$y$; in particular, the conditional probability
${\mathbb{P}}_{y}(\cdot\mid I_{x}\cap I_{y})$ is not a uniform distribution.
3. (c)
urn-$x$ contains at least one color that is absent in urn-$y$; in particular,
$\theta(\infty)>0$.
4. (d)
$n_{x}$ and $n_{y}$ grow to infinity at a comparable rate i.e.
$n_{x}=\Theta(n_{y})$, which means that there exist finite constants
$c_{1},c_{2}>0$ such that $c_{1}n_{y}\leq n_{x}\leq c_{2}n_{y}$, as
$n_{x},n_{y}$ tend to infinity.
Conditions (a-c) imply that $\hat{\theta}(k)$ has a strictly positive variance
and that a projection random variable, intermediate between $\hat{\theta}(k)$
and $\theta(k)$, also has a strictly positive variance. The idea of projection
is motivated by the analysis of Grams and Serfling in [17].
Condition (d) is technical and only used to show that the result in Theorem 5
holds for the largest possible range of values of $k$, namely $k=1:n_{y}$.
See [18] for results with uniformity related to Theorem 4, as well as
uniformity results when condition (d) is not assumed.
Because the variance of $\hat{\theta}(k)$, from now on denoted
$\mathbb{V}(\hat{\theta}(k))$, and its estimate $S^{2}(k)$ tend to zero as
$n_{x}$ and $n_{y}$ increase, an unnormalized consistency statement would be
uninformative. Instead, we show that $S^{2}(k)$ is consistent relative to
$\mathbb{V}(\hat{\theta}(k))$, in the sense that their ratio converges to one,
as stated next.
###### Theorem 4.
(Asymptotic consistency of variance estimation.) If conditions (a)-(c) are
satisfied then, for each $k\geq 1$ and $\epsilon>0$, it holds that
$\displaystyle\mathop{\lim}\limits_{n_{x},n_{y}\rightarrow\infty}{\mathbb{P}}\left(\left|\frac{S^{2}(k)}{\mathbb{V}(\hat{\theta}(k))}-1\right|>\epsilon\right)=0.$
(13)
Finally, under conditions (a)-(d), we show that $\hat{\theta}(k)$ is
asymptotically normally distributed for all $k=1:n_{y}$, as $n_{x}$ and
$n_{y}$ increase at a comparable rate.
###### Theorem 5.
(Asymptotic normality.) Let $Z\sim\mathcal{N}(0,1)$ i.e. $Z$ has a standard
normal distribution. If conditions (a)-(d) are satisfied then
$\mathop{\lim}\limits_{n_{x},n_{y}\rightarrow\infty}\,\,\mathop{\max}\limits_{k=1:n_{y}}\left|{\mathbb{P}}\left(\frac{\hat{\theta}(k)-\theta(k)}{\sqrt{\mathbb{V}(\hat{\theta}(k))}}\leq
t\right)-{\mathbb{P}}(Z\leq t)\right|=0,$ (14)
for all real numbers $t$.
The non-trivial aspect of the above result is the asymptotic normality of
$\hat{\theta}(k)$ when $k=\Theta(n_{y})$, e.g. $\hat{\theta}(n_{y})$, as the
results we have found in the literature [14, 19, 20] only guarantee the
asymptotic normality of our estimator of $\theta(k)$ for fixed $k$. We note
that, due to Slutsky’s theorem [21], it follows from (13) and (14) that the
ratio
$\frac{\hat{\theta}(k)-\theta(k)}{S(k)}$
has, for fixed $k$, approximately a standard normal distribution when $n_{x}$
and $n_{y}$ are large and of a comparable order of magnitude.
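In practice this suggests the approximate interval $\hat{\theta}(k)\pm z\,S(k)$; a minimal sketch, reusing the hypothetical helpers sketched above, is the following.

```python
def approx_confidence_interval(k, xs, ys, z=1.96):
    """Approximate confidence interval for theta(k) obtained by combining (13) and
    (14) through Slutsky's theorem: theta_hat(k) +/- z * S(k).  A sketch only: it
    presumes n_x and n_y are large and of comparable order, and reuses the
    hypothetical theta_hat and jackknife_variance helpers sketched earlier."""
    th = theta_hat(k, xs, ys)
    s = jackknife_variance(k, xs, ys) ** 0.5
    return th - z * s, th + z * s
```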
## Discussion
As proof of concept, we use our estimators to analyze data from the Human
Microbiome Project (HMP) [13]. In particular, our samples are V35 16S rRNA
data, processed by Qiime into an operational taxonomic unit (OTU) count table
format (see File S1 in Supporting Information). Each of the $266$ samples
analyzed has more than $5000$ successfully identified bacteria (see File S2
in Supporting Information). We sort these samples by the body location
metadata describing the origin of the sample. This sorting yields the
assignments displayed in Table 1.
We present our estimates of $\hat{\theta}(n_{y})$ for all $266\cdot 265$
possible sample comparisons in Figure 1, i.e., we estimate the average
dissimilarity of sample-$x$ relative to the full sample-$y$. Due to (4),
observe that $\hat{\theta}(n_{y})=Q(0)/n_{x}$. At the given sample sizes, we
can differentiate four broad groups of environments: stool, vagina,
oral/throat and skin/nostril. We differentiate a larger proportion of
oral/throat bacteria found in stool than stool bacteria found in the
oral/throat environments. We may also differentiate the throat, gingival and
saliva samples, but cannot reliably differentiate between tongue and throat
samples or between the subgingival and supragingival plaques. On the other
hand, the stool samples have larger proportions of unique bacteria relative to
other stool samples of the same type, and vaginal samples also have this
property. In contrast, the skin/nostril samples have relatively few bacteria
that are not identified in other skin/nostril samples.
The above effects may be a property of the environments from which samples are
taken, or an effect of noise from inaccurate estimates due to sampling. To
rule out the latter interpretation, we show estimates of the standard deviation
of $\hat{\theta}(n_{y})$ based on the jackknife estimator $S^{2}(n_{y})$ from
(12) in Figure 2. As $S^{2}_{y}(n_{y})$ is zero, the error estimate is
given by $S_{x}(n_{y})$. We see from (7), with $k=n_{y}$, that
$S(n_{y})=\sqrt{\frac{\hat{\theta}(n_{y})\cdot(1-\hat{\theta}(n_{y}))}{n_{x}-1}}.$
Assuming a normal distribution and an accurate jackknife estimate of variance,
$\theta(n_{y})$ will be in the interval $\hat{\theta}(n_{y})\pm 0.01$ with at
least approximately 95% confidence, for any choice of sample comparisons in
our data; in particular, on a linear scale, we expect at least 95% of the
estimates in Figure 1 to be accurate in at least the first two digits.
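A minimal, self-contained sketch of this special case ($\hat{\theta}(n_{y})$ together with its closed-form jackknife standard error) is given below, assuming the samples are provided as lists of color labels.

```python
def theta_hat_full_and_se(xs, ys):
    """theta_hat(n_y) and its jackknife standard error in closed form:
    theta_hat(n_y) = Q(0)/n_x and S(n_y) = sqrt(theta_hat*(1 - theta_hat)/(n_x - 1)).
    Self-contained sketch; xs and ys are lists of color labels with len(xs) >= 2."""
    n_x = len(xs)
    y_colors = set(ys)
    th = sum(1 for x in xs if x not in y_colors) / n_x      # Q(0) / n_x
    se = (th * (1.0 - th) / (n_x - 1)) ** 0.5
    return th, se
```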
As we mentioned earlier, estimating $\theta(\infty)$ accurately is a difficult
problem. We end this section with two heuristics to assess how representative
$\hat{\theta}(n_{y})$ is of $\theta(\infty)$, when urn-$y$ has at least two
colors and at least one color in common with urn-$x$. First, observe that:
$\displaystyle\theta(k)$ $\displaystyle=\theta(\infty)+\sum_{i\in(I_{x}\cap
I_{y})}{\mathbb{P}}_{x}(i)(1-{\mathbb{P}}_{y}(i))^{k}.$ (15)
In particular, $\theta(k)$ is a strictly concave-up (convex) and monotonically
decreasing function of the real variable $k\geq 0$. Hence, if $\theta(n_{y})$
is close to the asymptotic value $\theta(\infty)$, then
$\theta(n_{y})-\theta(n_{y}-1)$ should be of small magnitude. We call the
latter quantity the _discrete derivative_ of $\theta(k)$ at $k=n_{y}$. Since we
may estimate the discrete derivative from our data, the following heuristic
arises: _relatively large values of
$|\hat{\theta}(n_{y})-\hat{\theta}(n_{y}-1)|$ are evidence that
$\hat{\theta}(n_{y})$ is not a good approximation of $\theta(\infty)$._
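One can check from (4) that the estimated discrete derivative reduces to $Q(1)/(n_{x}n_{y})$: only colors seen exactly once in the $y$-sample contribute. A self-contained sketch is the following.

```python
from collections import Counter

def discrete_derivative_hat(xs, ys):
    """Estimated discrete derivative |theta_hat(n_y) - theta_hat(n_y - 1)|.
    Expanding eq. (4) at k = n_y and k = n_y - 1 shows that it equals
    Q(1)/(n_x * n_y).  Self-contained sketch; xs and ys are lists of labels."""
    y_counts = Counter(ys)
    q1 = sum(1 for x in xs if y_counts.get(x, 0) == 1)      # Q(1)
    return q1 / (len(xs) * len(ys))
```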
Figure 3 shows the heat map of $|\hat{\theta}(n_{y})-\hat{\theta}(n_{y}-1)|$
for each pair of samples. These estimates are of order $10^{-5}$ for the
majority of the comparisons, and spike to $10^{-4}$ for several sample-$y$ of
varied environment types, when sample-$x$ is associated with a skin or vaginal
sample. In particular, further sampling effort from environments associated
with certain vaginal, oral or stool samples is likely to reveal bacteria
associated with broadly defined skin or vaginal environments.
Another heuristic may be more useful to assess how close $\hat{\theta}(n_{y})$
is to $\theta(\infty)$, particularly when the previous heuristic is
inconclusive. As motivation, observe that
$\theta(k)=\theta(\infty)+\Theta(\rho^{k})$, because of the identity in (15),
where
$\rho:=1-\min_{i\in(I_{x}\cap I_{y})}{\mathbb{P}}_{y}(i).$
Furthermore, $\log(\theta(k-1)-\theta(k))=k\log\rho+c+o(1)$, where $c$ is a
certain finite constant. We can justify this approximation only when
$\log(\theta(k-1)-\theta(k))$ is well approximated by a linear function of
$k$, in which case we let $\hat{\rho}$ denote the estimated value for $\rho$
obtained from the linear regression. Since
$0\leq\theta(n_{y})-\theta(\infty)\leq\rho^{n_{y}}$, the following more
precise heuristic comes to light: _$\hat{\theta}(n_{y})$ is a good
approximation of $\theta(\infty)$ if the linear regression of
$\log|\hat{\theta}(k-1)-\hat{\theta}(k)|$ for $k$ near $n_{y}$ gives a good
fit, $S(n_{y})$ is small relative to $\hat{\theta}(n_{y})$, and
$\hat{\rho}^{n_{y}}$ is also small._
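A sketch of this regression heuristic is given below, assuming the values $\hat{\theta}(k)$ have already been computed over the relevant range (for instance with the estimator sketched earlier); the function name `estimate_rho` is ours and purely illustrative.

```python
import math

def estimate_rho(theta_hat_by_k, k_lo, k_hi):
    """Least-squares fit of log(theta_hat(k-1) - theta_hat(k)) ~ k*log(rho) + c
    over k = k_lo..k_hi; returns (rho_hat, largest absolute residual).
    theta_hat_by_k maps k to theta_hat(k) and must cover k_lo - 1 .. k_hi."""
    pts = []
    for k in range(k_lo, k_hi + 1):
        diff = theta_hat_by_k[k - 1] - theta_hat_by_k[k]
        if diff > 0:                      # zero steps carry no information
            pts.append((k, math.log(diff)))
    if len(pts) < 2:
        raise ValueError("not enough strictly decreasing steps to fit a line")
    k_bar = sum(k for k, _ in pts) / len(pts)
    l_bar = sum(l for _, l in pts) / len(pts)
    slope = (sum((k - k_bar) * (l - l_bar) for k, l in pts)
             / sum((k - k_bar) ** 2 for k, _ in pts))
    intercept = l_bar - slope * k_bar
    residual = max(abs(l - (slope * k + intercept)) for k, l in pts)
    return math.exp(slope), residual
```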
To fix ideas we have applied the above heuristic to three pairs of samples:
$(255,176)$, $(200,139)$ and $(100,10)$, with each ordered pair denoting
urn-$x$ and urn-$y$, respectively. As seen in Table 2 for these three cases,
$\hat{\theta}(n_{y})$ is at least 14-times larger than $S(n_{y})$; in
particular, due to the asymptotic normality of $\hat{\theta}(n_{y})$, an
appropriate use of the heuristic is reduced to a good linear fit and a small
$\hat{\rho}^{n_{y}}$ value. In all three cases, $\hat{\rho}$ was computed from
the estimates $\hat{\theta}(k)$, with $k=5001:n_{y}$.
For the $(255,176)$-pair, $\hat{\rho}^{n_{y}}$ and the regression error,
measured as the largest absolute residual associated with the best linear fit,
are zero to machine precision, suggesting that $\hat{\theta}(n_{y})=0.9998$ is
a good approximation of $\theta(\infty)$. This is reinforced by the blue plot
in Figure 4. On the other hand, for the $(200,139)$-pair, the regression error
is small, suggesting that the linear approximation of
$\log(\hat{\theta}(k-1)-\hat{\theta}(k))$ is good for $k=5001:n_{y}$. However,
because $\hat{\rho}^{n_{y}}=0.9997$, we cannot guarantee that
$\hat{\theta}(n_{y})$ is a good approximation of $\theta(\infty)$. In fact, as
seen in the red-plot in Figure 4, $\hat{\theta}(k)$, with $k=1:n_{y}$, exposes
a steady and almost linear decay that suggests that $\theta(\infty)$ may be
much smaller than $\hat{\theta}(n_{y})$. Finally, for the $(100,10)$-pair, the
regression error is large and the heuristic is therefore inconclusive. As seen
in the green plot in Figure 4, the lack of fit indicates that the exponential
rate of decay of $\theta(k)$ to $\theta(\infty)$ has not yet been captured by
the data from these urns. Note that the heuristic based on the discrete
derivative shows no evidence that $\hat{\theta}(n_{y})$ is far from
$\theta(\infty)$.
## Materials and Methods
Here we prove the theorems given in the Results section. The key idea to prove
each theorem may be summarized as follows.
To show Theorem 1, we identify pairs of urns for which unbiased estimation of
$\theta(\infty)$ is impossible for any statistic. To show Theorem 2, we
exploit the diversity of possible urn distributions to show that there are
relatively few unbiased estimators of $\theta(k)$ and, in fact, there is a
single unbiased estimator $\hat{\theta}(k)$ that is symmetric on the data. The
uniqueness of the symmetric estimator is obtained via a completeness argument:
a symmetric statistic having expected value zero is shown to correspond to a
polynomial with identically zero coefficients, which themselves correspond to
values returned by the statistic when presented with specific data. The
symmetric estimator is a U-statistic in that it corresponds to an average of
unbiased estimates of $\theta(k)$, based on all possible sub-samples of size
$1$ and $k$ from the samples of urn-$x$ and -$y$, respectively. As any
asymmetric estimator has higher variance than a corresponding symmetric
estimator, the symmetric estimator must be the UMVUE.
To show Theorem 3 we use bounds on the variance of the U-statistic and show
that, uniformly for relatively small $k$, $\hat{\theta}(k)$ converges to
$\theta(k)$ in the $\mathcal{L}^{2}$-norm. In contrast, for relatively large
values of $k$, we exploit the monotonicity of $\theta(k)$ and
$\hat{\theta}(k)$ to show uniform convergence.
Finally, theorems 4 and 5 are shown using an approximation of
$\hat{\theta}(k)$ by sums of i.i.d. random variables, as well as results
concerning the variance of both $\hat{\theta}(k)$ and its approximation. In
particular, the approximation satisfies the hypotheses of the Central Limit
Theorem and Law of Large Numbers, which we use to transfer these results to
$\hat{\theta}(k)$.
In what follows, ${\mathcal{D}}$ denotes the set of all probability
distributions that are finitely supported over ${\mathbb{N}}$.
Proof of Theorem 1. Consider in ${\mathcal{D}}$ probability distributions of
the form ${\mathbb{P}}_{x}(1)=1$, ${\mathbb{P}}_{y}(1)=u$ and
${\mathbb{P}}_{y}(2)=(1-u)$, where $0\leq u\leq 1$ is a given parameter. Any
statistic $h(\cdot)$ which takes as input $n_{x}$ draws from urn-$x$ and
$n_{y}$ draws from urn-$y$ has that
${\mathbb{E}}(h(X_{1},\ldots,X_{n_{x}},Y_{1},\ldots,Y_{n_{y}}))$ is a
polynomial of degree at most $n_{y}$ in the variable $u$; in particular, it is
a continuous function of $u$ over the interval $[0,1]$. Since
$\theta(\infty)=[\\![u=0]\\!]$ has a discontinuity at $u=0$ over this
interval, there exists no estimator of $\theta(\infty)$ that is unbiased over
pairs of distributions in ${\mathcal{D}}$. $\Box$
We use lemmas 6-11 to first show Theorem 2. The method of proof of this
theorem follows an approach similar to the one used by Halmos [8] for single
distributions, which we extend here naturally to the setting of two
distributions.
Our next result implies that no uniformly unbiased estimator of $\theta(k)$ is
possible when using less than one sample from urn-$x$ and $k$ samples from
urn-$y$.
###### Lemma 6.
If $g(X_{1},\ldots,X_{m},Y_{1},\ldots,Y_{n})$ is unbiased for $\theta(k)$ for
all ${\mathbb{P}}_{x},{\mathbb{P}}_{y}\in{\mathcal{D}}$, then $m\geq 1$ and
$n\geq k$.
###### Proof.
Consider in ${\mathcal{D}}$ probability distributions of the form
${\mathbb{P}}_{x}(1)=u$, ${\mathbb{P}}_{x}(2)=(1-u)$, ${\mathbb{P}}_{y}(1)=v$
and ${\mathbb{P}}_{y}(2)=(1-v)$, where $0\leq u,v\leq 1$ are arbitrary real
numbers. Clearly, ${\mathbb{E}}[g(X_{1},\ldots,X_{m},Y_{1},\ldots,Y_{n})]$ is
a linear combination of polynomials of degree $m$ in $u$ and $n$ in $v$ and,
as a result, it is a polynomial of degree at most $m$ in $u$ and $n$ in $v$.
Since $\theta(k)=u(1-v)^{k}+(1-u)v^{k}$ has degree $1$ in $u$ and $k$ in $v$,
and $g(X_{1},\ldots,X_{m},Y_{1},\ldots,Y_{n})$ is unbiased for $\theta(k)$, we
conclude that $1\leq m$ and $k\leq n$. ∎
The form of $\hat{\theta}(k)$ given in equation (4) is convenient for
computation but, for mathematical analysis, we prefer its $U$-statistic form
associated with the kernel function
$(x,y_{1},\ldots,y_{k})\to[\\![x\notin\\{y_{1},\ldots,y_{k}\\}]\\!]$.
In what follows, $S_{k,n}$ denotes the set of all functions
$\sigma:\\{1,\ldots,k\\}\to\\{1,\ldots,n\\}$ that are one-to-one.
###### Lemma 7.
$\hat{\theta}(k)=\frac{1}{n_{x}|S_{k,n_{y}}|}\sum\limits_{i=1}^{n_{x}}\sum\limits_{\sigma\in
S_{k,n_{y}}}[\\![X_{i}\notin\\{Y_{\sigma(1)},\ldots,Y_{\sigma(k)}\\}]\\!],$
(16)
where $|S_{k,n_{y}}|=k!{n_{y}\choose k}$.
###### Proof.
Fix $1\leq i\leq n_{x}$ and suppose that color $X_{i}$ occurs $j$-times in
$Y_{1},\ldots,Y_{n_{y}}$. If $j>(n_{y}-k)$ then any sublist of size $k$ of
$Y_{1},\ldots,Y_{n_{y}}$ contains $X_{i}$, hence
$[\\![X_{i}\notin\\{Y_{\sigma(1)},\ldots,Y_{\sigma(k)}\\}]\\!]=0$, for all
$\sigma\in S_{k,n_{y}}$. On the other hand, if $j\leq(n_{y}-k)$ then
$\sum_{\sigma\in
S_{k,n_{y}}}[\\![X_{i}\notin\\{Y_{\sigma(1)},\ldots,Y_{\sigma(k)}\\}]\\!]=k!{n_{y}-j\choose
k}$. Since the rightmost sum only depends on the number of times that color
$X_{i}$ was observed in $Y_{1},\ldots,Y_{n_{y}}$, we may use the
$Q$-statistics defined in equation (5) to rewrite:
$\frac{1}{n_{x}|S_{k,n_{y}}|}\sum\limits_{i=1}^{n_{x}}\sum\limits_{\sigma\in
S_{k,n_{y}}}[\\![X_{i}\notin\\{Y_{\sigma(1)},\ldots,Y_{\sigma(k)}\\}]\\!]=\frac{1}{n_{x}{n_{y}\choose
k}}\sum_{j=0}^{n_{y}-k}{n_{y}-j\choose k}Q(j).$
The right-hand side above now corresponds to the definition of
$\hat{\theta}(k)$ given in equation (4). ∎
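For readers who wish to check Lemma 7 numerically on tiny samples, the following sketch compares a brute-force evaluation of the U-statistic form (16) against the $Q$-statistic form (4); it is illustrative only and exponential in $k$, and the function names are ours.

```python
import math
from collections import Counter
from itertools import permutations

def theta_hat_bruteforce(k, xs, ys):
    """Direct evaluation of the U-statistic form (16) over all injections in S_{k,n_y}."""
    sigmas = list(permutations(range(len(ys)), k))   # |S_{k,n_y}| = k! * C(n_y, k)
    hits = sum(x not in {ys[i] for i in sigma} for x in xs for sigma in sigmas)
    return hits / (len(xs) * len(sigmas))

def theta_hat_qform(k, xs, ys):
    """The Q-statistic form (4) of the same estimator."""
    n_x, n_y = len(xs), len(ys)
    y_counts = Counter(ys)
    q = Counter(y_counts.get(x, 0) for x in xs)
    return sum(math.comb(n_y - j, k) * c for j, c in q.items()) / (n_x * math.comb(n_y, k))

# On small inputs the two forms agree, e.g. with xs = [1, 2, 3], ys = [1, 1, 2, 4], k = 2.
```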
In what follows, we say that a function
$f:{\mathbb{N}}^{n_{x}+n_{y}}\to{\mathbb{R}}$ is _$(n_{x},n_{y})$ -symmetric_
when
$f(x_{1},\ldots,x_{n_{x}};y_{1},\ldots,y_{n_{y}})=f(x_{\sigma(1)},\ldots,x_{\sigma(n_{x})};y_{\sigma^{\prime}(1)},\ldots,y_{\sigma^{\prime}(n_{y})}),$
for all $x_{1},\ldots,x_{n_{x}},y_{1},\ldots,y_{n_{y}}\in{\mathbb{N}}$ and
permutations $\sigma$ and $\sigma^{\prime}$ of $1,\ldots,n_{x}$ and
$1,\ldots,n_{y}$, respectively. Alternatively, $f$ is
$(n_{x},n_{y})$-symmetric if and only if it may be regarded as a function of
$(x_{(1\ldots n_{x})},y_{(1\ldots n_{y})})$, where $x_{(1\ldots n_{x})}$ and
$y_{(1\ldots n_{y})}$ correspond to the order statistics
$x_{(1)},\ldots,x_{(n_{x})}$ and $y_{(1)},\ldots,y_{(n_{y})}$, respectively.
Accordingly, a statistic of $(X_{1},\ldots,X_{n_{x}},Y_{1},\ldots,Y_{n_{y}})$
is called _$(n_{x},n_{y})$ -symmetric_ when it may be represented in the form
$f(X_{1},\ldots,X_{n_{x}},Y_{1},\ldots,Y_{n_{y}})$, for some
$(n_{x},n_{y})$-symmetric function $f$. It is immediate from Lemma 7 that
$\hat{\theta}(k)$ is $(n_{x},n_{y})$-symmetric.
The next result asserts that the variance of any non-symmetric unbiased
estimator of $\theta(k)$ can be strictly reduced by passing to a corresponding
$(n_{x},n_{y})$-symmetric unbiased estimator. The proof is based on the
well-known fact that conditioning
preserves the mean of a statistic and cannot increase its variance.
###### Lemma 8.
An asymmetric unbiased estimator of $\theta(k)$ that is square-integrable has
a strictly larger variance than a corresponding $(n_{x},n_{y})$-symmetric
unbiased estimator.
###### Proof.
Let ${\mathcal{F}}$ denote the sigma-field generated by the random vector
$(X_{(1\ldots n_{x})};Y_{(1\ldots n_{y})})$ and suppose that the statistic
$T=f(X_{1},\ldots,X_{n_{x}},Y_{1},\ldots,Y_{n_{y}})$ is unbiased for
$\theta(k)$ and square-integrable. In particular,
$U={\mathbb{E}}[T\\!\mid\\!{\mathcal{F}}]$ is a well-defined statistic and
there is an $(n_{x},n_{y})$-symmetric function
$g:{\mathbb{N}}^{n_{x}+n_{y}}\to{\mathbb{R}}$ such that
$U=g(X_{1},\ldots,X_{n_{x}};Y_{1},\ldots,Y_{n_{y}})$. Clearly, $U$ is unbiased
for $\theta(k)$ and $(n_{x},n_{y})$-symmetric. Since
${\mathbb{E}}(T^{2})<+\infty$, Jensen’s inequality for conditional
expectations [22] implies that ${\mathbb{E}}(U^{2})\leq{\mathbb{E}}(T^{2})$,
with equality if and only if $T$ is $(n_{x},n_{y})$-symmetric. ∎
Since $\hat{\theta}(k)$ is $(n_{x},n_{y})$-symmetric and bounded, the above
lemma implies that if an UMVUE for $\theta(k)$ exists then it must be
$(n_{x},n_{y})$-symmetric. Next, we show that there is a unique symmetric and
unbiased estimator of $\theta(k)$, which immediately implies that
$\hat{\theta}(k)$ is the UMVUE.
In what follows, $k_{1},k_{2}\geq 0$ denote integers. We say that a polynomial
$Q(u_{1},\ldots,u_{m};v_{1},\ldots,v_{n})$ is _$(k_{1},k_{2})$ -homogeneous_
when it is a linear combination of polynomials of the form
$\prod_{i=1}^{m}u_{i}^{m_{i}}\prod_{j=1}^{n}v_{j}^{n_{j}}$, with
$\sum_{i=1}^{m}m_{i}=k_{1}$ and $\sum_{j=1}^{n}n_{j}=k_{2}$. Furthermore, we
say that $Q$ satisfies the _partial vanishing condition_ if
$Q(u_{1},\ldots,u_{m};v_{1},\ldots,v_{n})=0$ whenever
$u_{1},\ldots,u_{m},v_{1},\ldots,v_{n}\geq 0$, $\sum_{i=1}^{m}u_{i}=1$ and
$\sum_{i=1}^{n}v_{i}=1$.
The next lemma is an intermediate step to show that a
$(k_{1},k_{2})$-homogeneous polynomial which satisfies the partial vanishing
condition is the zero polynomial, which is shown in Lemma 10.
###### Lemma 9.
If $Q$ is a $(k_{1},k_{2})$-homogeneous polynomial in the real variables
$u_{1},\ldots,u_{m},v_{1},\ldots,v_{n}$, with $m,n\geq 1$, that satisfies the
partial vanishing condition, then $Q(u_{1},\ldots,u_{m};v_{1},\ldots,v_{n})=0$
whenever $u_{1},\ldots,u_{m},v_{1},\ldots,v_{n}\geq 0$,
$\sum_{i=1}^{m}u_{i}>0$ and $\sum_{i=1}^{n}v_{i}>0$.
###### Proof.
Fix $u_{1},\ldots,u_{m},v_{1},\ldots,v_{n}\geq 0$ such that
$\sum_{i=1}^{m}u_{i}>0$ and $\sum_{i=1}^{n}v_{i}>0$ and observe that
$Q(u_{1},\ldots,u_{m};v_{1},\ldots,v_{n}):=\left(\sum_{i=1}^{m}u_{i}\right)^{k_{1}}\left(\sum_{i=1}^{n}v_{i}\right)^{k_{2}}Q\left(\frac{u_{1}}{\sum_{i=1}^{m}u_{i}},\ldots,\frac{u_{m}}{\sum_{i=1}^{m}u_{i}};\frac{v_{1}}{\sum_{i=1}^{n}v_{i}},\ldots,\frac{v_{n}}{\sum_{i=1}^{n}v_{i}}\right),$
because $Q$ is a $(k_{1},k_{2})$-homogeneous polynomial. Notice now that the
right hand-side above is zero because $Q$ satisfies the partial vanishing
condition. ∎
###### Lemma 10.
Let $Q$ be a $(k_{1},k_{2})$-homogeneous polynomial in the real variables
$u_{1},\ldots,u_{m},v_{1},\ldots,v_{n}$, with $m,n\geq 1$. If $Q$ satisfies
the partial vanishing condition then $Q=0$ identically.
###### Proof.
We prove the lemma using structural induction on $(m,n)$ for all
$k_{1},k_{2}\geq 0$.
If $m=n=1$ then a $(k_{1},k_{2})$-homogeneous polynomial $Q(u_{1},v_{1})$ must
be of the form $cu^{k_{1}}_{1}v^{k_{2}}_{1}$, for an appropriate constant $c$.
As such a polynomial satisfies the partial-vanishing condition only when
$c=0$, the base case for induction is established.
Next, consider a $(k_{1},k_{2})$-homogeneous polynomial
$Q(u_{1},\ldots,u_{m};v_{1},\ldots,v_{n},v_{n+1})$, with $m,n\geq 1$, that
satisfies the partial vanishing condition, and let $d$ denote its degree with
respect to the variable $v_{n+1}$. In particular, there are polynomials
$Q_{0},\ldots,Q_{d}$ in the variables $u_{1},\ldots,u_{m},v_{1},\ldots,v_{n}$
such that
$Q(u_{1},\ldots,u_{m};v_{1},\ldots,v_{n},v_{n+1})=\sum_{i=0}^{d}Q_{i}(u_{1},\ldots,u_{m};v_{1},\ldots,v_{n})v_{n+1}^{i}.$
Now fix $u_{1},\ldots,u_{m},v_{1},\ldots,v_{n}\geq 0$ such that
$\sum_{i=1}^{m}u_{i}>0$ and $\sum_{i=1}^{n}v_{i}>0$. Because $Q$ satisfies the
partial vanishing condition, Lemma 9 implies that
$\sum_{i=0}^{d}Q_{i}(u_{1},\ldots,u_{m};v_{1},\ldots,v_{n})v_{n+1}^{i}=0$ for
all $v_{n+1}>0$. In particular, for each $i$,
$Q_{i}(u_{1},\ldots,u_{m};v_{1},\ldots,v_{n})=0$ whenever
$u_{1},\ldots,u_{m},v_{1},\ldots,v_{n}\geq 0$, $\sum_{i=1}^{m}u_{i}>0$ and
$\sum_{i=1}^{n}v_{i}>0$. Thus each $Q_{i}$ satisfies the partial vanishing
condition. Since $Q_{i}$ is a $(k_{1},k_{2}-i)$-homogeneous polynomial, the
inductive hypothesis implies that $Q_{i}=0$ identically and hence $Q=0$
identically. The same argument shows that if
$Q(u_{1},\ldots,u_{m},u_{m+1};v_{1},\ldots,v_{n})$, with $m,n\geq 1$, is a
$(k_{1},k_{2})$-homogeneous polynomial that satisfies the partial vanishing
condition then $Q=0$ identically, completing the inductive proof of the lemma.
∎
Our final result before proving Theorem 2 implies that $\theta(k)$ cannot admit
more than one symmetric and unbiased estimator. Its proof depends on the
variety of distributions in ${\mathcal{D}}$, and uses the requirement that our
estimator must be unbiased for any pair of distributions chosen from
${\mathcal{D}}$.
###### Lemma 11.
If $f$ is an $(n_{x},n_{y})$-symmetric function such that
${\mathbb{E}}[f(X_{1},\ldots,X_{n_{x}},Y_{1},\ldots,Y_{n_{y}})]=0$, for all
${\mathbb{P}}_{x},{\mathbb{P}}_{y}\in{\mathcal{D}}$, then $f=0$ identically.
###### Proof.
Consider a point
$\vec{z}=(x_{1},\ldots,x_{n_{x}},y_{1},\ldots,y_{n_{y}})\in\mathbb{N}^{n_{x}+n_{y}}$
and define $m_{1}$ and $m_{2}$ as the cardinalities of the sets
$\\{x_{1},\ldots,x_{n_{x}}\\}$ and $\\{y_{1},\ldots,y_{n_{y}}\\}$,
respectively. Furthermore, let $x^{\prime}_{1},\ldots,x^{\prime}_{m_{1}}$
denote the distinct elements in the set $\\{x_{1},\ldots,x_{n_{x}}\\}$ and
define $m_{1,i}$ to be the number of times that $x^{\prime}_{i}$ appears in
this set. Furthermore, let ${\mathbb{P}}_{x}\in{\mathcal{D}}$ be a probability
distribution such that
${\mathbb{P}}_{x}(\\{x_{1}^{\prime},\ldots,x_{m_{1}}^{\prime}\\})=1$ and
define $p_{1,i}:={\mathbb{P}}_{x}(x_{i}^{\prime})$. In a completely analogous
manner define $y^{\prime}_{1},\ldots,y^{\prime}_{m_{2}}$, $m_{2,j}$,
${\mathbb{P}}_{y}$ and $p_{2,j}$.
Notice that ${\mathbb{E}}[f(\vec{Z})]$, with
$\vec{Z}:=(X_{1},\ldots,X_{n_{x}},Y_{1},\ldots,Y_{n_{y}})$, is a polynomial in
the real variables $p_{1,1},\ldots,p_{1,m_{1}},p_{2,1},\ldots,p_{2,m_{2}}$ that
satisfies the hypotheses of Lemma 10; in particular, this polynomial is
identically zero. However, because $f$ is $(n_{x},n_{y})$-symmetric, the
coefficient of
$\mathop{\prod}_{i=1}^{2}\mathop{\prod}_{j=1}^{m_{i}}p^{m_{i,j}}_{i,j}$ in
${\mathbb{E}}[f(\vec{Z})]$ is
$f(\vec{z})\,{n_{x}\choose m_{1,1};\ldots;m_{1,m_{1}}}\,{n_{y}\choose
m_{2,1};\ldots;m_{2,m_{2}}},$
implying that $f(\vec{z})=0$. ∎
Proof of Theorem 2. From Lemma 8, as we mentioned already, if the UMVUE for
$\theta(k)$ exists then it must be $(n_{x},n_{y})$-symmetric. Suppose there
are two $(n_{x},n_{y})$-symmetric functions such that
$f(X_{1},\ldots,X_{n_{x}};Y_{1},\ldots,Y_{n_{y}})$ and
$g(X_{1},\ldots,X_{n_{x}};Y_{1},\ldots,Y_{n_{y}})$ are unbiased for
$\theta(k)$. Applying Lemma 11 to $(f-g)$ shows that $f=g$, and $\theta(k)$
admits therefore a unique symmetric and unbiased estimator. From Lemma 7,
$\hat{\theta}(k)$ is $(n_{x},n_{y})$-symmetric and unbiased for $\theta(k)$
hence it is the UMVUE for $\theta(k)$. From Lemma 6, it follows that no
unbiased estimator of $\theta(k)$ exists for $n_{x}=0$ or $n_{y}<k$. $\Box$
Our next goal is to show Theorem 3, for which we first prove lemmas 12-13. We
note that the latter lemma applies in a much more general context than our
treatment of dissimilarity.
###### Lemma 12.
If, for each $n\geq 1$, $k_{n}\geq 1$ is an integer such that $k_{n}^{2}=o(n)$
then
$\displaystyle\frac{{k\choose 1}{n-k\choose k-1}}{{n\choose k}}$
$\displaystyle=\frac{k^{2}}{n}+O\left(\frac{k^{4}}{n^{2}}\right);$ (17)
$\displaystyle\sum\limits_{j=2}^{k}\frac{{k\choose j}{n-k\choose
k-j}}{{n\choose k}}$ $\displaystyle=O\left(\frac{k^{4}}{n^{2}}\right);$ (18)
uniformly for $k=1:k_{n}$ as $n\to\infty$.
###### Proof.
First observe that for all $n$ sufficiently large and $k=1:k_{n}$, it applies
that
$\displaystyle\frac{{k\choose 1}{n-k\choose k-1}}{{n\choose
k}}=\frac{k^{2}}{n}\mathop{\prod}\limits_{i=0}^{k-2}\left(1-\frac{k-1}{n-1-i}\right)=\frac{k^{2}}{n}\exp\left\\{\sum_{i=0}^{k-2}\log\left(1-\frac{k-1}{n-1-i}\right)\right\\}.$
Note that $-x/(1-x)\leq\log(1-x)\leq-x$, for all $0\leq x<1$. As a result, we
may bound the exponential factor on the right-hand side above as follows:
$e^{-(k-1)^{2}/(n-2k+2)}\leq\exp\left\\{\sum_{i=0}^{k-2}\log\left(1-\frac{k-1}{n-1-i}\right)\right\\}\leq
e^{-(k-1)^{2}/(n-1)}.$
Since $e^{-(k-1)^{2}/(n-2k+2)}=1+O(k^{2}/n)$ and
$e^{-(k-1)^{2}/(n-1)}=1+O(k^{2}/n)$, uniformly for all $k=1:k_{n}$ as
$n\to\infty$, (17) follows.
To show (18), first note the combinatorial identity
$\mathop{\sum}\limits_{j=0}^{k}\frac{{k\choose j}{n-k\choose k-j}}{{n\choose
k}}=1.$ (19)
Proceeding in an analogous manner as we did to show (17), we see now that the
term associated with the index $j=0$ in the above summation satisfies that
$e^{-k^{2}/(n-2k+1)}\leq\frac{{k\choose 0}{n-k\choose k}}{{n\choose k}}\leq
e^{-k^{2}/n},$
for all $n$ sufficiently large and $k=1:k_{n}$. Since
$e^{-k^{2}/(n-2k+1)}=1-k^{2}/n+O(k^{4}/n^{2})$ and
$e^{-k^{2}/n}=1-k^{2}/n+O(k^{4}/n^{2})$, the above inequalities together with
(17) and (19) establish (18). ∎
###### Lemma 13.
Define $\lambda(k):=\mathbb{E}(h(X_{1},Y_{1},\ldots,Y_{k}))$, where
$h(x_{1},y_{1},\ldots,y_{k})$ is a bounded $(1,k)$-symmetric function, and let
$\hat{\lambda}(k)=\hat{\lambda}_{n_{x},n_{y}}(k):=\frac{1}{n_{x}|S_{k,n_{y}}|}\sum_{i=1}^{n_{x}}\sum_{\sigma\in
S_{k,n_{y}}}h(X_{i},Y_{\sigma(1)},\ldots,Y_{\sigma(k)})$
be the U-statistic of $\lambda(k)$ associated with $n_{x}$ draws from urn-$x$
and $n_{y}$ draws from urn-$y$; in particular,
$\mathbb{E}(\hat{\lambda}(k))=\lambda(k)$. Furthermore, assume that
1. (i)
$0\leq h\leq 1$,
2. (ii)
there is a function $f:I_{x}\to[0,1]$ such that
$\mathop{\lim}\limits_{k\rightarrow\infty}h(X_{1},Y_{1},\ldots,Y_{k})\mathop{=}\limits^{\mbox{a.s}}f(X_{1})$,
3. (iii)
$\hat{\lambda}(k)\geq\hat{\lambda}(k+1)$; in particular,
$\lambda(k)\geq\lambda(k+1)$.
Under the above assumptions, it follows that
$\displaystyle\mathop{\lim}\limits_{n_{x},n_{y}\rightarrow\infty}\mathbb{E}\left(\mathop{\max}\limits_{k=1:n_{y}}\left|\hat{\lambda}(k)-\lambda(k)\right|^{2}\right)$
$\displaystyle=0.$
###### Proof.
Define $k_{n}:=1+\min\big{\\{}\lfloor
n_{x}^{1/2}\rfloor,\lfloor\log(n_{y})\rfloor\big{\\}}$; in particular, $1\leq
k_{n}\leq n_{y}$ and $k_{n}\to\infty$, $k_{n}=o(n_{x})$ and
$k_{n}^{p}=o(n_{y})$, for any $p>0$, as $n_{x},n_{y}\to\infty$. The proof of
the lemma reduces to showing that
$\displaystyle\mathop{\lim}\limits_{n_{x},n_{y}\rightarrow\infty}\mathbb{E}\left(\mathop{\max}\limits_{k=k_{n}:n_{y}}|\hat{\lambda}(k)-\lambda(k)|^{2}\right)$
$\displaystyle=0;$ (20)
$\displaystyle\mathop{\lim}\limits_{n_{x},n_{y}\rightarrow\infty}\mathbb{E}\left(\mathop{\max}\limits_{k=1:k_{n}}|\hat{\lambda}(k)-\lambda(k)|^{2}\right)$
$\displaystyle=0.$ (21)
Next, we compute the variance of $\hat{\lambda}(k)$ following an approach
similar to Hoeffding [14]. Because $h$ is $(1,k)$-symmetric, a tedious yet
standard calculation shows that
$\displaystyle\mathbb{V}\big{(}\hat{\lambda}(k)\big{)}$
$\displaystyle=\frac{\mathop{\sum}\limits_{j=0}^{k}{k\choose j}{n_{y}-k\choose
k-j}((n_{x}-1)\xi_{0,j}(k)+\xi_{1,j}(k))}{n_{x}{n_{y}\choose k}},$ (22)
where
$\displaystyle\xi_{0,j}(k)$ $\displaystyle:=$
$\displaystyle\mathbb{V}\big{(}\mathbb{E}(h(X_{1},Y_{1},\ldots,Y_{k})|Y_{1},\ldots,Y_{j})\big{)};$
(23) $\displaystyle\xi_{1,j}(k)$ $\displaystyle:=$
$\displaystyle\mathbb{V}\big{(}\mathbb{E}(h(X_{1},Y_{1},\ldots,Y_{k})|X_{1},Y_{1},\ldots,Y_{j})\big{)}.$
(24)
Clearly, $\xi_{1,j}(k)\leq 1$. On the other hand, if $W$ is any random
variable with finite expectation and $\mathcal{F}_{1}\subset\mathcal{F}_{2}$
are sigma-fields then
$\mathbb{V}({\mathbb{E}}(W|\mathcal{F}_{1}))\leq\mathbb{V}({\mathbb{E}}(W|\mathcal{F}_{2}))$,
due to well-known properties of conditional expectations [22]. In particular,
for each $0\leq j\leq k$, we have that
$\xi_{0,j}(k)\leq\xi_{0,k}(k),\hbox{ and }\xi_{1,j}(k)\leq\xi_{1,k}(k).$ (25)
Consequently, (22) implies that
$\mathbb{V}\big{(}\hat{\lambda}(k)\big{)}\leq\xi_{0,k}(k)+\frac{1}{n_{x}}.$
(26)
We claim that
$\lim_{k\to\infty}\xi_{0,k}(k)=0.$ (27)
Indeed, using an argument similar to the one above, we find that
$\displaystyle\xi_{0,k}(k)$ $\displaystyle=$
$\displaystyle\mathbb{V}\Big{(}{\mathbb{E}}\big{(}h(X_{1},Y_{1},\ldots,Y_{k})-f(X_{1})\mid
Y_{1},\ldots,Y_{k}\big{)}\Big{)},$ $\displaystyle\leq$
$\displaystyle\mathbb{V}\Big{(}{\mathbb{E}}\big{(}h(X_{1},Y_{1},\ldots,Y_{k})-f(X_{1})\mid
Y_{1},\ldots,Y_{k},X_{1}\big{)}\Big{)},$ $\displaystyle=$
$\displaystyle\mathbb{V}\big{(}h(X_{1},Y_{1},\ldots,Y_{k})-f(X_{1})\big{)}.$
Due to assumptions (i)-(ii) and the Bounded Convergence Theorem, the
right-hand side above tends to $0$ as $k\to\infty$, and the claim follows.
It follows from (26) and (27) that
$\lim_{k\to\infty}\mathbb{V}\big{(}\hat{\lambda}(k)\big{)}=0.$
Finally, because of assumption (iii),
$\displaystyle\mathbb{E}\left(\mathop{\max}\limits_{k=k_{n}:n_{y}}|\hat{\lambda}(k)-\lambda(k)|^{2}\right)\leq
2|{\lambda}(k_{n})-\lambda(n_{y})|^{2}+2\mathbb{V}(\hat{\lambda}(k_{n}))+2\mathbb{V}(\hat{\lambda}(n_{y})).$
Since each term on the right-hand side above tends to zero as
$n_{x},n_{y}\rightarrow\infty$, (20) follows.
We now show (21). As $\xi_{i,j}(k)\leq 1$ and $\xi_{0,0}(k)=0$, it follows by
(22) and Lemma 12 that
$\displaystyle\mathbb{V}\left(\hat{\lambda}(k)\right)$ $\displaystyle\leq
1-\frac{{n_{y}-k\choose k}}{{n_{y}\choose
k}}+\frac{1}{n_{x}}=\frac{1}{n_{x}}+\sum_{j=1}^{k}\frac{{k\choose
j}{n_{y}-k\choose k-j}}{{n_{y}\choose
k}}=\frac{1}{n_{x}}+\frac{k^{2}}{n_{y}}+O\left(\frac{k^{4}}{n_{y}^{2}}\right),$
uniformly for $k=1:k_{n}$ as $n_{x},n_{y}\to\infty$. In particular,
$\displaystyle\mathbb{E}\left(\mathop{\max}\limits_{k=1:k_{n}}|\hat{\lambda}(k)-\lambda(k)|^{2}\right)$
$\displaystyle\leq\mathop{\sum}\limits_{k=1}^{k_{n}}\mathbb{V}(\hat{\lambda}(k))\leq\frac{k_{n}}{n_{x}}+\frac{k_{n}^{3}}{n_{y}}+O\left(\frac{k_{n}^{5}}{n_{y}^{2}}\right).$
Due to the definition of the coefficients $k_{n}$, the right-hand side above
tends to zero, and (21) follows. ∎
Proof of Theorem 3. Note that
$\theta(k)\\!=\\!\mathbb{E}(h(X_{1},Y_{1},\ldots,Y_{k}))$, with
$h(x_{1},y_{1},\ldots,y_{k})\\!:=\\![\\![x_{1}\\!\notin\\!\\{y_{1},\ldots,y_{k}\\}]\\!]$.
We show that the kernel function $h$ and the U-statistics $\hat{\theta}(k)$
satisfy the hypotheses of Lemma 13. From this the theorem is immediate because
$\mathcal{L}^{2}$-convergence implies convergence in probability.
Clearly $h$ is $(1,k)$-symmetric and $0\leq h\leq 1$, which shows assumption
(i) in Lemma 13. On the other hand, due to the Law of Large Numbers,
$\lim_{k\to\infty}h(X_{1},Y_{1},\ldots,Y_{k})=[\\![X_{1}\notin I_{y}]\\!]$
almost surely, from which assumption (ii) in the lemma also follows.
Finally, to show assumption (iii), recall that $S_{k,n}$ is the set of one-to-
one functions from $\\{1,\ldots,k\\}$ into $\\{1,\ldots,n\\}$; in particular,
$|S_{k+1,n_{y}}|=(n_{y}-k)\cdot|S_{k,n_{y}}|$. Now note that each
$\sigma\in S_{k,n_{y}}$ extends to an element of $S_{k+1,n_{y}}$ through
exactly $(n_{y}-k)$ choices of $\sigma(k+1)$ outside the set
$\\{\sigma(1),\ldots,\sigma(k)\\}$. Because
$[\\![X_{1}\notin\\{Y_{\sigma(1)},\ldots,Y_{\sigma(k)}\\}]\\!]\geq[\\![X_{1}\notin\\{Y_{\sigma(1)},\ldots,Y_{\sigma(k+1)}\\}]\\!]$,
it follows that $\hat{\theta}(k)\geq\hat{\theta}(k+1)$ for all
$k=1:(n_{y}-1)$. This shows condition (iii) in Lemma 13, and Theorem 3
follows. $\Box$
Proof of equation (7). The jackknife estimate of the variance of
$\hat{\theta}(k)$ obtained from removing a single $x$-data is, by definition,
the quantity
$\displaystyle S^{2}_{x}(k)$
$\displaystyle:=\frac{n_{x}-1}{n_{x}}\sum_{i=1}^{n_{x}}\left(\frac{1}{(n_{x}-1)|S_{k,n_{y}}|}\sum\limits_{j\neq
i}\sum\limits_{\sigma\in
S_{k,n_{y}}}[\\![X_{j}\notin\\{Y_{\sigma(1)},\ldots,Y_{\sigma(k)}\\}]\\!]-\hat{\theta}(k)\right)^{2}.$
(28)
Note that removing a draw from the $x$-data whose color is counted in $Q(j)$
decrements this quantity by one unit. Let $Q_{i}$ denote the
$Q$-statistics associated with the data when observation $X_{i}$ from urn-$x$
is removed from the sample. Note that as each draw from urn-$x$ contributes to
exactly one $Q(j)$, $Q_{i}(j)=Q(j)$ for all $j$ except for some
$j_{i}^{\star}$ where $Q_{i}(j_{i}^{\star})=Q(j_{i}^{\star})-1$. We have
therefore that
$\displaystyle S^{2}_{x}(k)$
$\displaystyle=\frac{n_{x}-1}{n_{x}}\sum_{i=1}^{n_{x}}\left(\sum\limits_{j=0}^{n_{y}-k}\frac{{n_{y}-j\choose k}Q_{i}(j)}{(n_{x}-1){n_{y}\choose k}}-\sum\limits_{j=0}^{n_{y}-k}\frac{{n_{y}-j\choose k}Q(j)}{n_{x}{n_{y}\choose k}}\right)^{2},$
$\displaystyle=\frac{n_{x}-1}{n_{x}}\sum_{i=1}^{n_{x}}\left(\sum\limits_{j=0}^{n_{y}-k}\frac{{n_{y}-j\choose k}Q(j)}{n_{x}(n_{x}-1){n_{y}\choose k}}-\frac{{n_{y}-j_{i}^{\star}\choose k}}{(n_{x}-1){n_{y}\choose k}}\right)^{2},$
$\displaystyle=\frac{1}{n_{x}(n_{x}-1)}\sum_{i=1}^{n_{x}}\left(\hat{\theta}(k)-\frac{{n_{y}-j_{i}^{\star}\choose k}}{{n_{y}\choose k}}\right)^{2},$
where ${n_{y}-j_{i}^{\star}\choose k}:=0$ whenever $j_{i}^{\star}>n_{y}-k$.
Since there are $Q(j)$ draws from urn-$x$ whose color is counted in $Q(j)$, the
above sum may now be rewritten in the form given in (7). $\Box$
Proof of equation (9). Similarly, $S^{2}_{y}(k)$ corresponds to the jackknife
summed over each possible deletion of a single $y$-data, which is more
precisely given by
$\displaystyle S^{2}_{y}(k)$
$\displaystyle=\frac{n_{y}-1}{n_{y}}\sum_{r=1}^{n_{y}}\left(\frac{1}{n_{x}|S_{r}|}\sum\limits_{i=1}^{n_{x}}\sum\limits_{\sigma\in
S_{r}}[\\![X_{i}\notin\\{Y_{\sigma(1)},\ldots,Y_{\sigma(k)}\\}]\\!]-\hat{\theta}(k)\right)^{2},$
(29)
where $S_{r}$ is the set of one-to-one functions from $\\{1,\ldots,k\\}$ into
$\\{1,\ldots,n_{y}\\}\setminus\\{r\\}$.
Recall that $M(i,j)$ is the number of colors seen $i$ times in draws from
urn-$x$ and $j$ times in draws from urn-$y$, giving that
$\mathop{\sum}_{i}iM(i,j)=Q(j)$.
Fix $1\leq r\leq n_{y}$ and suppose that $Y_{r}$ is of a color that
contributes to $M(i_{r}^{\star},j_{r}^{\star})$, for some
$i_{r}^{\star},j_{r}^{\star}.$ Removing $Y_{r}$ from the data decrements
$M(i_{r}^{\star},j_{r}^{\star})$ and increments
$M(i_{r}^{\star},j_{r}^{\star}-1)$ by one unit. Proceeding similarly as in the
case for $S^{2}_{x}(k)$, if $M_{r}$ is used to denote the $M$-statistics when
observation $Y_{r}$ is removed from sample-$y$, then
$\displaystyle S^{2}_{y}(k)$
$\displaystyle=\frac{n_{y}-1}{n_{y}}\sum_{r=1}^{n_{y}}\left(\sum\limits_{j=0}^{n_{y}-k-1}\frac{{n_{y}-j-1\choose k}}{n_{x}{n_{y}-1\choose k}}\sum\limits_{i=1}^{n_{x}}iM_{r}(i,j)-\hat{\theta}(k)\right)^{2},$
$\displaystyle=\frac{n_{y}-1}{n_{y}}\sum_{r=1}^{n_{y}}\left(i_{r}^{\star}\frac{{n_{y}-j_{r}^{\star}\choose k}}{n_{x}{n_{y}-1\choose k}}-i_{r}^{\star}\frac{{n_{y}-j_{r}^{\star}-1\choose k}}{n_{x}{n_{y}-1\choose k}}+\sum_{j=0}^{n_{y}-k-1}\frac{{n_{y}-j-1\choose k}}{n_{x}{n_{y}-1\choose k}}Q(j)-\hat{\theta}(k)\right)^{2},$
$\displaystyle=\frac{n_{y}-1}{n_{y}}\sum_{r=1}^{n_{y}}\left(i_{r}^{\star}\left(c_{j_{r}^{\star}-1}(k)-c_{j_{r}^{\star}}(k)\right)+\hat{\theta}_{y}(k)-\hat{\theta}(k)\right)^{2},$
where $c_{j}(k)$ and $\hat{\theta}_{y}(k)$ are as defined in (10) and (11).
Noting that each color counted by $M(i,j)$ accounts for exactly $j$ draws from
urn-$y$, the form in (9) follows. $\Box$
In what follows, we specialize the coefficients in (23) and (24) to the kernel
function of dissimilarity,
$h(x_{1},y_{1},\ldots,y_{k}):=[\\![x_{1}\notin\\{y_{1},\ldots,y_{k}\\}]\\!]$.
From now on, for each $j\geq 0$ and $k\geq 1$, define
$\displaystyle\xi_{0,j}(k)$
$\displaystyle:=\mathbb{V}({\mathbb{P}}(X_{1}\notin\\{Y_{1},\ldots,Y_{k}\\}|Y_{1},\ldots,Y_{j}));$
(30) $\displaystyle\xi_{1,j}(k)$
$\displaystyle:=\mathbb{V}({\mathbb{P}}(X_{1}\notin\\{Y_{1},\ldots,Y_{k}\\}|X_{1},Y_{1},\ldots,Y_{j})).$
(31)
Above it is understood that the sigma-field generated by
$(Y_{1},\ldots,Y_{j})$ when $j=0$ is $\\{\emptyset,\Omega\\}$; in particular,
$\xi_{0,0}(k)=0$, for all $k\geq 1$.
The following asymptotic properties of $\xi_{i,j}(k)$ are useful in the
remaining proofs.
###### Lemma 14.
Assume that conditions (a)-(c) are satisfied and define
$c:=\min_{i\in(I_{x}\cap I_{y})}{\mathbb{P}}_{y}(i)$. It follows that $0<c<1$
and $0<\theta(\infty)<1$. Furthermore
$\displaystyle\xi_{1,k}(k)-\xi_{1,0}(k)$
$\displaystyle=\Theta\big{(}(1-c)^{k}\big{)};$ (32)
$\displaystyle\xi_{1,0}(k)$
$\displaystyle=\theta(\infty)\,\big{(}1-\theta(\infty)\big{)}+\Theta\big{(}(1-c)^{k}\big{)};$
(33) $\displaystyle\xi_{0,k}(k)$ $\displaystyle=O\big{(}(1-c)^{k}\big{)}.$
(34)
###### Proof.
Observe that conditions (a)-(b) imply that $0<c<1$. In addition, condition (b)
implies that $\theta(\infty)<1$, whereas condition (c) implies that
$\theta(\infty)>0$.
Next, consider the set
$I^{\star}:=\big{\\{}i\in(I_{x}\cap I_{y})\hbox{ such that
}{\mathbb{P}}_{y}(i)=c\big{\\}},$
i.e. $I^{\star}$ is the set of rarest colors in urn-$y$ which are also in
urn-$x$. Also note that
$\theta(k)=\theta(\infty)+\sum_{i\in(I_{x}\cap
I_{y})}{\mathbb{P}}_{x}(i)\big{(}1-{\mathbb{P}}_{y}(i)\big{)}^{k}.$ (35)
As an intermediate step before showing (32), we prove that
$\xi_{1,j}(k)=\theta(2k-j)-\theta^{2}(k).$ (36)
For this, first observe that
${\mathbb{P}}(X_{1}\notin\\{Y_{1},\ldots,Y_{k}\\}\mid
X_{1},Y_{1},\ldots,Y_{j})=[\\![X_{1}\notin\\{Y_{1},\ldots,Y_{j}\\}]\\!](1-{\mathbb{P}}_{y}(X_{1}))^{k-j}.$
Hence
$\displaystyle{\mathbb{E}}\big{(}{\mathbb{P}}(X_{1}\notin\\{Y_{1},\ldots,Y_{k}\\}\mid
X_{1},Y_{1},\ldots,Y_{j})^{2}\big{)}$
$\displaystyle={\mathbb{E}}\big{(}[\\![X_{1}\notin\\{Y_{1},\ldots,Y_{j}\\}]\\!](1-{\mathbb{P}}_{y}(X_{1}))^{2k-2j}\big{)},$
$\displaystyle={\mathbb{E}}\big{(}{\mathbb{P}}(X_{1}\notin\\{Y_{1},\ldots,Y_{2k-j}\\}\mid
X_{1},Y_{1},\ldots,Y_{j})\big{)},$ $\displaystyle=\theta(2k-j),$
from which (36) now easily follows.
To show (32) note that (36) implies
$\displaystyle\xi_{1,k}(k)-\xi_{1,0}(k)$ $\displaystyle=\theta(k)-\theta(2k),$
$\displaystyle=\mathop{\sum}\limits_{i\in
I_{x}}{\mathbb{P}}_{x}(i)(1-{\mathbb{P}}_{y}(i))^{k}(1-(1-{\mathbb{P}}_{y}(i))^{k}),$
$\displaystyle=\mathop{\sum}\limits_{i\in(I_{x}\cap
I_{y})}{\mathbb{P}}_{x}(i)(1-{\mathbb{P}}_{y}(i))^{k}(1-(1-{\mathbb{P}}_{y}(i))^{k}),$
$\displaystyle=(1-c)^{k}\mathop{\sum}\limits_{i\in
I^{\star}}{\mathbb{P}}_{x}(i)+o\left((1-c)^{k}\right),$
which establishes (32).
Now note that
$\displaystyle\xi_{1,0}(k)$ $\displaystyle=$
$\displaystyle\theta(2k)-\theta^{2}(k),$ $\displaystyle=$
$\displaystyle\mathop{\sum}\limits_{i\in
I_{x}}{\mathbb{P}}_{x}(i)\big{(}1-{\mathbb{P}}_{y}(i)\big{)}^{2k}-\left(\mathop{\sum}\limits_{i\in
I_{x}}{\mathbb{P}}_{x}(i)\big{(}1-{\mathbb{P}}_{y}(i)\big{)}^{k}\right)^{2},$
$\displaystyle=$
$\displaystyle\theta(\infty)-\theta^{2}(\infty)-2\theta(\infty)(1-c)^{k}\left(\mathop{\sum}\limits_{i\in
I^{\star}}{\mathbb{P}}_{x}(i)\right)+o\big{(}(1-c)^{k}\big{)},$
which establishes (33).
Next we show (34), which we note gives more precise information than (27).
Consider the random variable $T$ defined as the smallest $n\geq 1$ such that
$(I_{x}\cap I_{y})\subset\\{Y_{1},\ldots,Y_{n}\\}$. We may bound the
probability of $T$ being large by ${\mathbb{P}}(T>k)\leq n(1-c)^{k}$, where
$n:=|I_{x}\cap I_{y}|$ is finite because of condition (a). On the other hand,
note that
${\mathbb{P}}\big{(}X_{1}\notin\\{Y_{1},\ldots,Y_{k}\\}\mid
Y_{1},\ldots,Y_{k}\big{)}=1-{\mathbb{P}}_{x}(\\{Y_{1},\ldots,Y_{k}\\}).$
Define $W_{k}:=1-{\mathbb{P}}_{x}(\\{Y_{1},\ldots,Y_{k}\\})-\theta(k)$ and
observe that, over the event $T\leq k$, $W_{k}=\theta(\infty)-\theta(k)$.
Since $|W_{k}|\leq 1$, we obtain that
$\displaystyle\xi_{0,k}(k)$ $\displaystyle=$
$\displaystyle\mathbb{E}\Big{(}\mathbb{E}\big{(}W_{k}^{2}\mid
T>k\big{)}\Big{)}\cdot{\mathbb{P}}(T>k)+\mathbb{E}\Big{(}\mathbb{E}\big{(}W_{k}^{2}\mid
T\leq k\big{)}\Big{)}\cdot{\mathbb{P}}(T\leq k),$ $\displaystyle\leq$
$\displaystyle{\mathbb{P}}(T>k)+\mathbb{E}\Big{(}\mathbb{E}\big{(}W_{k}^{2}\mid
T\leq k\big{)}\Big{)},$ $\displaystyle\leq$ $\displaystyle
n(1-c)^{k}+\big{(}\theta(\infty)-\theta(k)\big{)}^{2}.$
The bound in (34) is now a direct consequence of (35). ∎
Our next goal is to show Theorems 4 and 5. To do so we rely on the method of
projection by Grams and Serfling [17]. This approach approximates
$\hat{\theta}(k)$ by the random variable
$\displaystyle\hat{\theta}_{P}(k)$
$\displaystyle:=\theta(k)+\mathop{\sum}\limits_{i=1}^{n_{x}}({\mathbb{E}}(\hat{\theta}(k)|X_{i})-\theta(k))+\mathop{\sum}\limits_{j=1}^{n_{y}}({\mathbb{E}}(\hat{\theta}(k)|Y_{j})-\theta(k)).$
The projection is the best approximation in terms of mean squared error to
$\hat{\theta}(k)$ that is a linear combination of individual functions of each
datapoint.
Under the stated conditions, $\hat{\theta}_{P}(k)$ is the sum of two
independent sums of non-degenerate i.i.d. random variables and therefore
satisfies the hypotheses of the classical central limit theorem. The variance
of the projection is easier to analyze and estimate than the $U$-statistic
directly, which is relevant in establishing consistency for the jackknife
estimation of variance.
Let
$\displaystyle R(k)$ $\displaystyle:=\hat{\theta}(k)-\hat{\theta}_{P}(k),$
be the remainder of $\hat{\theta}(k)$ that is not accounted for by its
projection. When $R(k)$ is small relative to $\hat{\theta}_{P}(k)$,
$\hat{\theta}(k)$ is mostly explained by $\hat{\theta}_{P}(k)$ in relative
terms.
The next lemma summarizes results about the asymptotic properties of $R(k)$,
particularly with relation to the scale of $\hat{\theta}_{P}(k)$ as given by
its variance.
###### Lemma 15.
We have that
$\displaystyle\mathbb{V}(\hat{\theta}_{P}(k))$
$\displaystyle=n_{x}^{-1}\xi_{1,0}(k)+k^{2}n_{y}^{-1}\xi_{0,1}(k),$ (37)
$\displaystyle{\mathbb{E}}(R^{2}(k))$
$\displaystyle=\mathbb{V}(\hat{\theta}(k))-\mathbb{V}(\hat{\theta}_{P}(k)).$
(38)
Under assumptions (a)-(c), for a fixed $k\geq 1$, we have that
$\displaystyle\mathop{\lim}\limits_{n_{x},n_{y}\rightarrow\infty}$
$\displaystyle\left|\frac{\mathbb{V}(\hat{\theta}(k))}{\mathbb{V}(\hat{\theta}_{P}(k))}-1\right|=0.$
(39)
Furthermore, under assumptions (a)-(d) we have that
$\displaystyle\mathop{\lim}\limits_{n_{x},n_{y}\rightarrow\infty}\max_{k=1:n_{y}}$
$\displaystyle\left|\frac{\mathbb{V}(\hat{\theta}(k))}{\mathbb{V}(\hat{\theta}_{P}(k))}-1\right|=0;$
(40)
$\displaystyle\mathop{\lim}\limits_{n_{x},n_{y}\rightarrow\infty}\max_{k=1:n_{y}}$
$\displaystyle\mathbb{P}\left(\left|\frac{\hat{\theta}(k)-\hat{\theta}_{P}(k)}{\sqrt{\mathbb{V}(\hat{\theta}_{P}(k))}}\right|>\epsilon\right)=0;$
(41)
for all $\epsilon>0$.
###### Proof.
A direct calculation from the form given in (16) gives that
$\displaystyle{\mathbb{E}}(\hat{\theta}(k)|X_{i})$
$\displaystyle=n_{x}^{-1}{\mathbb{P}}(X_{i}\notin\\{Y_{1},\ldots,Y_{k}\\}|X_{i})+\left(1-n_{x}^{-1}\right)\theta(k);$
(42) $\displaystyle\mathbb{V}({\mathbb{E}}(\hat{\theta}(k)|X_{i}))$
$\displaystyle=\frac{\xi_{1,0}(k)}{n_{x}^{2}};$
$\displaystyle{\mathbb{E}}(\hat{\theta}(k)|Y_{i})$
$\displaystyle=kn_{y}^{-1}{\mathbb{P}}(X_{i}\notin\\{Y_{i},\ldots,Y_{k+i-1}\\}|Y_{i})+\left(1-kn_{y}^{-1}\right)\theta(k);$
(43) $\displaystyle\mathbb{V}({\mathbb{E}}(\hat{\theta}(k)|Y_{i}))$
$\displaystyle=\frac{k^{2}\xi_{0,1}(k)}{n_{y}^{2}}.$
As
$\mathbb{V}(\hat{\theta}_{P}(k))=n_{x}\mathbb{V}({\mathbb{E}}(\hat{\theta}(k)|X_{i}))+n_{y}\mathbb{V}({\mathbb{E}}(\hat{\theta}(k)|Y_{i}))$,
(37) follows.
To show (38), first observe that
$\mathbb{E}(R^{2}(k))=\mathbb{V}(R(k))=\mathbb{V}(\hat{\theta}(k))+\mathbb{V}(\hat{\theta}_{P}(k))-2\,\mbox{Cov}(\hat{\theta}(k),\hat{\theta}_{P}(k)).$
(44)
Next, using the definition of the projection, we obtain that
$\displaystyle\mbox{Cov}(\hat{\theta}(k),\hat{\theta}_{P}(k))$
$\displaystyle=$
$\displaystyle\sum_{i=1}^{n_{x}}\mbox{Cov}(\hat{\theta}(k),\mathbb{E}(\hat{\theta}(k)|X_{i}))+\sum_{j=1}^{n_{y}}\mbox{Cov}(\hat{\theta}(k),\mathbb{E}(\hat{\theta}(k)|Y_{j})),$
$\displaystyle=$
$\displaystyle\sum_{i=1}^{n_{x}}\mathbb{V}(\mathbb{E}(\hat{\theta}(k)|X_{i}))+\sum_{j=1}^{n_{y}}\mathbb{V}(\mathbb{E}(\hat{\theta}(k)|Y_{j})),$
$\displaystyle=$
$\displaystyle\mathbb{V}\left(\sum_{i=1}^{n_{x}}\mathbb{E}(\hat{\theta}(k)|X_{i})+\sum_{j=1}^{n_{y}}\mathbb{E}(\hat{\theta}(k)|Y_{j})\right),$
$\displaystyle=$ $\displaystyle\mathbb{V}(\hat{\theta}_{P}(k)),$
from which (38) follows, due to the identity in (44). Note that the last
identity implies that $\hat{\theta}_{P}(k)$ and $R(k)$ are uncorrelated.
Before continuing, we note that (41) is a direct consequence of (38), (40) and
Chebyshev’s inequality [22]. To complete the proof of the lemma, it therefore
remains to show (39) under conditions (a)-(c) and (40) under conditions
(a)-(d). Indeed, if $b>1$ and we let
$k_{n}=\log_{b}(n_{y})$ then due to the identities in (22) and (37) and Lemma
12, we obtain under (a)-(c) that
$\mathbb{V}(\hat{\theta}(k))=\mathbb{V}(\hat{\theta}_{P}(k))+O\left(\frac{k^{2}}{n_{x}n_{y}}+\frac{k^{4}}{n_{y}^{2}}\right),$
uniformly for all $k=1:k_{n}$, as $n_{x},n_{y}\to\infty$. Since
$\mathbb{V}(\hat{\theta}_{P}(k))>0$ for all $k\geq 1$, we have thus shown
(39). Furthermore, note that $\xi_{1,0}(k)>0$ for all $k\geq 1$; in
particular, due to (33) and conditions (a)-(d), we can assert that
$\inf_{k\geq 1}\xi_{1,0}(k)>0$. Since $\xi_{0,1}(k)\geq 0$, the above identity
together with the one in (37) let us conclude that
$\max_{k=1:k_{n}}\left|\frac{\mathbb{V}(\hat{\theta}(k))}{\mathbb{V}(\hat{\theta}_{P}(k))}-1\right|=O\left(\frac{k_{n}^{2}}{n_{y}}+\frac{n_{x}k_{n}^{4}}{n_{y}^{2}}\right),$
as $n_{x},n_{y}\to\infty$. Because of condition (d), the big-O term above
tends to $0$. As a result:
$\lim_{n_{x},n_{y}\to\infty}\,\max_{k=1:k_{n}}\left|\frac{\mathbb{V}(\hat{\theta}(k))}{\mathbb{V}(\hat{\theta}_{P}(k))}-1\right|=0.$
(45)
On the other hand, (38) implies that
$\mathbb{V}(\hat{\theta}(k))\geq\mathbb{V}(\hat{\theta}_{P}(k))$. Hence, using
(19) and (25) to bound from above the variance of the U-statistic, we obtain:
$1\leq\frac{\mathbb{V}(\hat{\theta}(k))}{\mathbb{V}(\hat{\theta}_{P}(k))}\leq\frac{\xi_{0,k}(k)+\xi_{1,k}(k)/n_{x}}{k^{2}\xi_{0,1}(k)/n_{y}+\xi_{1,0}(k)/n_{x}}\leq
n_{x}\,\frac{\xi_{0,k}(k)}{\xi_{1,0}(k)}+\frac{\xi_{1,k}(k)}{\xi_{1,0}(k)}=1+n_{x}\cdot
O\big{(}(1-c)^{k}\big{)},$
as $k\to\infty$, where for the last identity we have used (32) and (34). Since
$n_{x}=\Theta(n_{y})$, it follows from the above identity that
$\max_{k=k_{n}:n_{y}}\left|\frac{\mathbb{V}(\hat{\theta}(k))}{\mathbb{V}(\hat{\theta}_{P}(k))}-1\right|=O\big{(}n_{y}(1-c)^{k_{n}}\big{)}=O\big{(}n_{y}^{1+\log_{b}(1-c)}\big{)}.$
In particular, if the base-$b$ in the logarithm is selected to satisfy that
$1<b<1/(1-c)$, then
$\lim_{n_{x},n_{y}\to\infty}\,\max_{k=k_{n}:n_{y}}\left|\frac{\mathbb{V}(\hat{\theta}(k))}{\mathbb{V}(\hat{\theta}_{P}(k))}-1\right|=0.$
(46)
The identities in equations (45) and (46) show (40), which completes the proof
of the lemma. ∎
Proof of Theorem 5. For a fixed $k$, note that $\hat{\theta}_{P}(k)$ is the
sum of two independent sums of non-degenerate i.i.d. random variables and
thus,
$(\hat{\theta}_{P}(k)-\theta(k))/\sqrt{\mathbb{V}(\hat{\theta}_{P}(k))}$
is asymptotically a standard Normal random variable as $n_{x},n_{y}\to\infty$
by the classical Central Limit Theorem. We would like to show however that
this convergence also applies if we let $k$ vary with $n_{x}$ and $n_{y}$. We
do so using the Berry-Esseen inequality [23]. Motivated by this we define the
random variables
$\displaystyle X_{i}^{\prime}(k)$
$\displaystyle:=\frac{{\mathbb{E}}(\hat{\theta}(k)|X_{i})-\theta(k)}{\sqrt{\mathbb{V}(\hat{\theta}_{P}(k))}};$
$\displaystyle Y_{j}^{\prime}(k)$
$\displaystyle:=\frac{{\mathbb{E}}(\hat{\theta}(k)|Y_{j})-\theta(k)}{\sqrt{\mathbb{V}(\hat{\theta}_{P}(k))}}.$
Note that ${\mathbb{E}}(X_{i}^{\prime}(k))={\mathbb{E}}(Y_{j}^{\prime}(k))=0$,
and that
$\displaystyle\mathop{\sum}\limits_{i=1}^{n_{x}}{\mathbb{E}}(|X_{i}^{\prime}(k)|^{2})+\mathop{\sum}\limits_{j=1}^{n_{y}}{\mathbb{E}}(|Y_{j}^{\prime}(k)|^{2})=1.$
We need to show that
$\mathop{\sum}\limits_{i=1}^{n_{x}}{\mathbb{E}}(|X_{i}^{\prime}(k)|^{3})+\mathop{\sum}\limits_{j=1}^{n_{y}}{\mathbb{E}}(|Y_{j}^{\prime}(k)|^{3})=o(1),$
(47)
uniformly for $k=1:n_{y}$, as $n_{x},n_{y}\to\infty$.
Note that from (42) and (43),
$\displaystyle\big{|}{\mathbb{E}}(\hat{\theta}(k)|X_{i})-\theta(k)\big{|}^{3}$
$\displaystyle=\frac{|{\mathbb{P}}(X_{i}\notin\\{Y_{1},\ldots,Y_{k}\\}|X_{i})-\theta(k)|^{3}}{n^{3}_{x}};$
$\displaystyle\big{|}{\mathbb{E}}(\hat{\theta}(k)|Y_{i})-\theta(k)\big{|}^{3}$
$\displaystyle=\frac{k^{3}|{\mathbb{P}}(X_{1}\notin\\{Y_{i},\ldots,Y_{i+k-1}\\}|Y_{i})-\theta(k)|^{3}}{n^{3}_{y}}.$
Let
$\displaystyle\eta_{1,0}(k)$
$\displaystyle:={\mathbb{E}}|{\mathbb{P}}(X_{i}\notin\\{Y_{1},\ldots,Y_{k}\\}|X_{i})-\theta(k)|^{3};$
$\displaystyle\eta_{0,1}(k)$
$\displaystyle:={\mathbb{E}}|{\mathbb{P}}(X_{1}\notin\\{Y_{i},\ldots,Y_{i+k-1}\\}|Y_{i})-\theta(k)|^{3}.$
It follows from (37) that
$\displaystyle\mathop{\sum}\limits_{i=1}^{n_{x}}{\mathbb{E}}(|X_{i}^{\prime}(k)|^{3})+\mathop{\sum}\limits_{j=1}^{n_{y}}{\mathbb{E}}(|Y_{j}^{\prime}(k)|^{3})$
$\displaystyle=\frac{\eta_{1,0}(k)/n^{2}_{x}+k^{3}\eta_{0,1}(k)/n^{2}_{y}}{\big{(}\mathbb{V}(\hat{\theta}_{P}(k))\big{)}^{3/2}}=\frac{\eta_{1,0}(k)/n^{2}_{x}+k^{3}\eta_{0,1}(k)/n^{2}_{y}}{\big{(}\xi_{1,0}(k)/n_{x}+k^{2}\xi_{0,1}(k)/n_{y}\big{)}^{3/2}}.$
But note that $0\leq\eta_{0,1}(k)\leq\xi_{0,1}(k)$. Since, according to Lemma
14, $\xi_{0,1}(k)$ decreases exponentially fast, we obtain
$\displaystyle k^{3}\eta_{0,1}(k)/n_{y}^{2}$ $\displaystyle=O(n_{y}^{-2}),$
uniformly for all $k=1:n_{y}$, as $n_{y}\rightarrow\infty$. On the other hand,
$0\leq\eta_{1,0}(k)\leq\xi_{1,0}(k)\leq 1$. Furthermore, (33) implies that
$\inf_{k\geq 1}\xi_{1,0}(k)>0$. Since $n_{x}=\Theta(n_{y})$, for some finite
constant $C>0$ we find that
$\displaystyle\mathop{\sum}\limits_{i=1}^{n_{x}}{\mathbb{E}}(|X_{i}^{\prime}(k)|^{3})+\mathop{\sum}\limits_{j=1}^{n_{y}}{\mathbb{E}}(|Y_{j}^{\prime}(k)|^{3})$
$\displaystyle\leq
C\frac{1/n^{2}_{x}+1/n^{2}_{y}}{\left(\mathop{\inf}\limits_{k\geq
1}\xi_{1,0}(k)/n_{x}\right)^{3/2}}=O\left(\frac{1}{\sqrt{n_{x}}}\right),$
which shows (47).
The above establishes convergence in distribution of
$(\hat{\theta}_{P}(k)-\theta(k))/\sqrt{\mathbb{V}(\hat{\theta}_{P}(k))}$ to a
standard normal random variable uniformly for $k=1:n_{y}$, as
$n_{x},n_{y}\to\infty$. The end of the proof is an adaptation of the proof of
Slutsky’s Theorem [21]. Indeed, note that
$\displaystyle{\mathbb{P}}\left(\frac{\hat{\theta}(k)-\theta(k)}{\sqrt{\mathbb{V}(\hat{\theta}(k))}}\leq
t\right)={\mathbb{P}}\left(\frac{\hat{\theta}_{P}(k)-\theta(k)}{\sqrt{\mathbb{V}(\hat{\theta}_{P}(k))}}+\frac{\hat{\theta}(k)-\hat{\theta}_{P}(k)}{\sqrt{\mathbb{V}(\hat{\theta}_{P}(k))}}\leq
t\sqrt{\frac{\mathbb{V}(\hat{\theta}(k))}{\mathbb{V}(\hat{\theta}_{P}(k))}}\right).$
(48)
From this identity, it follows for any fixed $\epsilon>0$ that
${\mathbb{P}}\left(\frac{\hat{\theta}(k)-\theta(k)}{\sqrt{\mathbb{V}(\hat{\theta}(k))}}\leq
t\right)\leq{\mathbb{P}}\left(\frac{\hat{\theta}_{P}(k)-\theta(k)}{\sqrt{\mathbb{V}(\hat{\theta}_{P}(k))}}\leq
t\sqrt{\frac{\mathbb{V}(\hat{\theta}(k))}{\mathbb{V}(\hat{\theta}_{P}(k))}}+\epsilon\right)+{\mathbb{P}}\left(\left|\frac{\hat{\theta}(k)-\hat{\theta}_{P}(k)}{\sqrt{\mathbb{V}(\hat{\theta}_{P}(k))}}\right|\geq\epsilon\right).$
The first term on the right-hand side of the above inequality can be made as
close to ${\mathbb{P}}[Z\leq t+\epsilon]$ as wanted, uniformly for
$k=1:n_{y}$, as $n_{x},n_{y}\to\infty$, because of (40). On the other hand,
the second term tends to $0$ uniformly for $k=1:n_{y}$ because of (41).
Letting $\epsilon\to 0^{+}$, shows that
$\limsup_{n_{x},n_{y}\to\infty}\,\max_{k=1:n_{y}}{\mathbb{P}}\left(\frac{\hat{\theta}(k)-\theta(k)}{\sqrt{\mathbb{V}(\hat{\theta}(k))}}\leq
t\right)\leq{\mathbb{P}}[Z\leq t].$
Similarly, using (48), we have:
${\mathbb{P}}\left(\frac{\hat{\theta}(k)-\theta(k)}{\sqrt{\mathbb{V}(\hat{\theta}(k))}}\leq
t\right)\geq{\mathbb{P}}\left(\frac{\hat{\theta}_{P}(k)-\theta(k)}{\sqrt{\mathbb{V}(\hat{\theta}_{P}(k))}}\leq
t\sqrt{\frac{\mathbb{V}(\hat{\theta}(k))}{\mathbb{V}(\hat{\theta}_{P}(k))}}-\epsilon\right)-{\mathbb{P}}\left(\left|\frac{\hat{\theta}(k)-\hat{\theta}_{P}(k)}{\sqrt{\mathbb{V}(\hat{\theta}_{P}(k))}}\right|\geq\epsilon\right),$
and a similar argument as before shows now that
$\liminf_{n_{x},n_{y}\to\infty}\,\max_{k=1:n_{y}}{\mathbb{P}}\left(\frac{\hat{\theta}(k)-\theta(k)}{\sqrt{\mathbb{V}(\hat{\theta}(k))}}\leq
t\right)\geq{\mathbb{P}}[Z\leq t],$
which completes the proof of the theorem. $\Box$
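As an illustration only, the uniform normality asserted by the theorem can be eyeballed on the toy simulation from the sketch following the previous lemma (the arrays th and theta defined there are reused; this is a hypothetical setup, not the HMP data):

```python
from scipy.stats import norm

# th and theta come from the toy simulation in the earlier sketch.
z = (th - theta) / th.std(ddof=1)
for t in (-1.0, 0.0, 1.0):
    print(f"P(Z <= {t:+.1f}):  empirical {np.mean(z <= t):.3f}   normal {norm.cdf(t):.3f}")
```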
We finally show Theorem 4, for which we first show the following result.
###### Lemma 16.
Let $S^{i}_{k,n}$ be the set of one-to-one functions from $\\{1,\ldots,k\\}$
into $\\{1,\ldots,n\\}/\\{i\\}$. Consider the kernel
$h(x_{1},y_{1},\ldots,y_{k}):=[\\![x_{1}\notin\\{y_{1},\ldots,y_{k}\\}]\\!]$,
and define
$\displaystyle\hat{\theta}_{x}^{i}(k)$
$\displaystyle:=\frac{1}{|S_{k,n_{y}}|}\mathop{\sum}\limits_{\sigma\in
S_{k,n_{y}}}\frac{1}{n_{x}-1}\mathop{\sum}\limits_{j=1,j\neq
i}^{n_{x}}h(X_{j},Y_{\sigma(1)},\ldots,Y_{\sigma(k)});$ (49)
$\displaystyle\hat{\theta}_{x}^{i\prime}(k)$
$\displaystyle:=\frac{1}{|S_{k,n_{y}}|}\mathop{\sum}\limits_{\sigma\in
S_{k,n_{y}}}h(X_{i},Y_{\sigma(1)},\ldots,Y_{\sigma(k)});$ (50)
$\displaystyle\hat{\theta}_{y}^{i}(k)$
$\displaystyle:=\frac{1}{|S^{i}_{k,n_{y}}|}\mathop{\sum}\limits_{\sigma\in
S^{i}_{k,n_{y}}}\frac{1}{n_{x}}\mathop{\sum}\limits_{j=1}^{n_{x}}h(X_{j},Y_{\sigma(1)},\ldots,Y_{\sigma(k)});$
(51) $\displaystyle\hat{\theta}_{y}^{i\prime}(k)$
$\displaystyle:=\frac{1}{|S^{i}_{k-1,n_{y}}|}\mathop{\sum}\limits_{\sigma\in
S^{i}_{k-1,n_{y}}}\frac{1}{n_{x}}\mathop{\sum}\limits_{j=1}^{n_{x}}h(X_{j},Y_{i},Y_{\sigma(1)},\ldots,Y_{\sigma(k-1)}).$
(52)
Then, for each $k\geq 1$ and $\epsilon>0$,
$\displaystyle\mathop{\lim}\limits_{n_{x},n_{y}\to\infty}{\mathbb{P}}\left(\left|\mathop{\sum}\limits_{i=1}^{n_{x}}\frac{\left(\hat{\theta}_{x}^{i}(k)-\hat{\theta}_{x}^{i\prime}(k)\right)^{2}}{n_{x}}-\xi_{1,0}(k)\right|>\epsilon\right)$
$\displaystyle=0;$ (53)
$\displaystyle\mathop{\lim}\limits_{n_{x},n_{y}\to\infty}{\mathbb{P}}\left(\left|\mathop{\sum}\limits_{i=1}^{n_{y}}\frac{\left(\hat{\theta}_{y}^{i}(k)-\hat{\theta}_{y}^{i\prime}(k)\right)^{2}}{n_{y}}-\xi_{0,1}(k)\right|>\epsilon\right)$
$\displaystyle=0.$ (54)
###### Proof.
Fix $k\geq 1$. We first use a result by Sen [24] to show that, for each $i\geq
1$:
$\displaystyle\mathop{\lim}\limits_{n_{x},n_{y}\rightarrow\infty}\hat{\theta}_{x}^{i}(k)$
$\displaystyle=\theta(k);$ (55)
$\displaystyle\mathop{\lim}\limits_{n_{x},n_{y}\rightarrow\infty}\hat{\theta}_{x}^{i\prime}(k)$
$\displaystyle=\mathbb{E}(h(X_{i},Y_{1},\ldots,Y_{k})|X_{i});$ (56)
$\displaystyle\mathop{\lim}\limits_{n_{x},n_{y}\rightarrow\infty}\hat{\theta}_{y}^{i}(k)$
$\displaystyle=\theta(k);$ (57)
$\displaystyle\mathop{\lim}\limits_{n_{x},n_{y}\rightarrow\infty}\hat{\theta}_{y}^{i\prime}(k)$
$\displaystyle=\mathbb{E}(h(X_{1},Y_{i},\ldots,Y_{i+k-1})|Y_{i});$ (58)
in an almost sure sense. Indeed, assume without loss of generality that $i=1$.
As the kernel functions found in (49) and (51) are bounded, the hypotheses of
Theorem 1 in [24] are satisfied, from which (55) and (57) are immediate.
Similarly, because $X_{1}$ and $Y_{1}$ are discrete random variables, (56) and
(58) also follow from [24] .
Define
$\displaystyle U(k)$
$\displaystyle:=\mathop{\sum}\limits_{i=1}^{n_{x}}\frac{(\hat{\theta}_{x}^{i}(k)-\hat{\theta}_{x}^{i\prime}(k))^{2}}{n_{x}};$
(59) $\displaystyle V(k)$
$\displaystyle:=\mathop{\sum}\limits_{i=1}^{n_{y}}\frac{(\hat{\theta}_{y}^{i}(k)-\hat{\theta}_{y}^{i\prime}(k))^{2}}{n_{y}};$
(60)
and observe that
$\displaystyle\mathbb{V}\left(U(k)\right)$
$\displaystyle=\frac{\mathbb{V}\left((\hat{\theta}_{x}^{1}(k)-\hat{\theta}_{x}^{1\prime}(k))^{2}\right)}{n_{x}}+\frac{n_{x}-1}{n_{x}}\hbox{Cov}\left((\hat{\theta}_{x}^{1}(k)-\hat{\theta}_{x}^{1\prime}(k))^{2},(\hat{\theta}_{x}^{2}(k)-\hat{\theta}_{x}^{2\prime}(k))^{2}\right);$
$\displaystyle\mathbb{V}\left(V(k)\right)$
$\displaystyle=\frac{\mathbb{V}\left((\hat{\theta}_{y}^{1}(k)-\hat{\theta}_{y}^{1\prime}(k))^{2}\right)}{n_{y}}+\frac{n_{y}-1}{n_{y}}\hbox{Cov}\left((\hat{\theta}_{y}^{1}(k)-\hat{\theta}_{y}^{1\prime}(k))^{2},(\hat{\theta}_{y}^{2}(k)-\hat{\theta}_{y}^{2\prime}(k))^{2}\right).$
Furthermore, due to (55)-(58), we have that
$\displaystyle\lim_{n_{x},n_{y}\to\infty}(\hat{\theta}_{x}^{i}(k)-\hat{\theta}_{x}^{i\prime}(k))^{2}$
$\displaystyle=$
$\displaystyle(\theta(k)-\mathbb{E}(h(X_{i},Y_{1},\ldots,Y_{k})|X_{i}))^{2};$
(61)
$\displaystyle\lim_{n_{x},n_{y}\to\infty}(\hat{\theta}_{y}^{i}(k)-\hat{\theta}_{y}^{i\prime}(k))^{2}$
$\displaystyle=$
$\displaystyle(\theta(k)-\mathbb{E}(h(X_{1},Y_{i},\ldots,Y_{i+k-1})|Y_{i}))^{2}.$
(62)
But note that, for $i\neq j$,
$(\theta(k)-\mathbb{E}(h(X_{i},Y_{1},\ldots,Y_{k})|X_{i}))^{2}$ and
$(\theta(k)-\mathbb{E}(h(X_{j},Y_{1},\ldots,Y_{k})|X_{j}))^{2}$ are
independent and hence uncorrelated. Similarly, the random variables
$(\theta(k)-\mathbb{E}(h(X_{1},Y_{i},\ldots,Y_{i+k-1})|Y_{i}))^{2}$ and
$(\theta(k)-\mathbb{E}(h(X_{1},Y_{j},\ldots,Y_{j+k-1})|Y_{j}))^{2}$ are
independent. Since
$|\hat{\theta}_{x}^{i}(k)-\hat{\theta}_{x}^{i\prime}(k)|\leq 1$ and
$|\hat{\theta}_{y}^{i}(k)-\hat{\theta}_{y}^{i\prime}(k)|\leq 1$, it follows
from (61) and (62), and the Bounded Convergence Theorem [22] that
$\displaystyle\mathbb{V}\left(U(k)\right)$ $\displaystyle=o(1);$ (63)
$\displaystyle\mathbb{V}\left(V(k)\right)$ $\displaystyle=o(1);$ (64)
as $n_{x},n_{y}\to\infty$.
Finally, by (30) and (31) it follows that
$\displaystyle\xi_{1,0}(k)$
$\displaystyle={\mathbb{E}}\left(\theta(k)-\mathbb{E}(h(X_{1},Y_{1},\ldots,Y_{k})|X_{1})\right)^{2};$
$\displaystyle\xi_{0,1}(k)$
$\displaystyle={\mathbb{E}}\left(\theta(k)-\mathbb{E}(h(X_{1},Y_{1},\ldots,Y_{k})|Y_{1})\right)^{2}.$
In particular, again by the Bounded Convergence Theorem, we have that
$\lim_{n_{x},n_{y}\to\infty}\mathbb{E}\left(U(k)\right)=\xi_{1,0}(k)$ and
$\lim_{n_{x},n_{y}\to\infty}\mathbb{E}\left(V(k)\right)=\xi_{0,1}(k)$. Since
$\displaystyle U(k)-\xi_{1,0}(k)$
$\displaystyle=\big{(}\mathbb{E}(U(k))-\xi_{1,0}(k)\big{)}+\big{(}U(k)-\mathbb{E}(U(k))\big{)};$
$\displaystyle V(k)-\xi_{0,1}(k)$
$\displaystyle=\big{(}\mathbb{E}(V(k))-\xi_{0,1}(k)\big{)}+\big{(}V(k)-\mathbb{E}(V(k))\big{)};$
the lemma is now a direct consequence of (63) and (64), and Theorem 1.5.4 of
Durrett [22]. ∎
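Because the kernel is an indicator, the averages over injections in (49)-(52) admit closed forms: averaging $h$ over all of $S_{k,n_{y}}$ (or $S^{i}_{k,n_{y}}$) equals the fraction of $k$-subsets of the relevant index set that avoid every occurrence of the value in question. The Python sketch below evaluates (49)-(52) this way; the helper names are illustrative, and the binomial-coefficient shortcut is an implementation choice assumed here, not code from the paper.

```python
import numpy as np
from math import comb

def avoid_frac(m, n, kk):
    """Fraction of kk-subsets of n items that avoid a marked subset of size m."""
    return comb(n - m, kk) / comb(n, kk)

def delete_one_quantities(X, Y, k):
    """Closed-form evaluation of the quantities (49)-(52)."""
    n_x, n_y = len(X), len(Y)
    m = np.array([np.sum(Y == x) for x in X])          # multiplicity of X_j among the Y's
    g = np.array([avoid_frac(mj, n_y, k) for mj in m])
    tx_prime = g                                       # (50): subset average with X_i fixed
    tx = (g.sum() - g) / (n_x - 1)                     # (49): X_i left out of the outer mean
    ty, ty_prime = np.empty(n_y), np.empty(n_y)
    for i in range(n_y):
        m_i = m - (X == Y[i])                          # multiplicities once Y_i is removed
        ty[i] = np.mean([avoid_frac(mi, n_y - 1, k) for mi in m_i])              # (51)
        ty_prime[i] = np.mean([(x != Y[i]) * avoid_frac(mi, n_y - 1, k - 1)
                               for x, mi in zip(X, m_i)])                        # (52)
    return tx, tx_prime, ty, ty_prime

# U(k) and V(k) of (59)-(60) are then simply
#   U = np.mean((tx - tx_prime) ** 2)   and   V = np.mean((ty - ty_prime) ** 2),
# which by the lemma converge in probability to xi_{1,0}(k) and xi_{0,1}(k).
```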
Proof of Theorem 4. Fix $k\geq 1$. Using (16) we have that
$\displaystyle\hat{\theta}(k)$
$\displaystyle=\left(1-\frac{1}{n_{x}}\right)\hat{\theta}_{x}^{i}(k)+\frac{1}{n_{x}}\hat{\theta}_{x}^{i\prime}(k);$
$\displaystyle\hat{\theta}_{x}^{i}(k)-\hat{\theta}(k)$
$\displaystyle=\frac{1}{n_{x}}\left(\hat{\theta}_{x}^{i}(k)-\hat{\theta}_{x}^{i\prime}(k)\right);$
$\displaystyle\hat{\theta}(k)$
$\displaystyle=\left(1-\frac{k}{n_{y}}\right)\hat{\theta}_{y}^{i}(k)+\frac{k}{n_{y}}\hat{\theta}_{y}^{i\prime}(k);$
$\displaystyle\hat{\theta}_{y}^{i}(k)-\hat{\theta}(k)$
$\displaystyle=\frac{k}{n_{y}}\left(\hat{\theta}_{y}^{i}(k)-\hat{\theta}_{y}^{i\prime}(k)\right).$
It follows by (28) and (29) that
$\displaystyle S^{2}_{x}(k)$
$\displaystyle=\frac{n_{x}-1}{n_{x}}\cdot\frac{U(k)}{n_{x}},$ (65)
$\displaystyle S^{2}_{y}(k)$
$\displaystyle=\frac{n_{y}-1}{n_{y}}\cdot\frac{k^{2}V(k)}{n_{y}},$ (66)
where $U(k)$ and $V(k)$ are as in (59) and (60), respectively. Furthermore,
observe that
$\displaystyle S^{2}(k)$
$\displaystyle=S_{x}^{2}(k)+S_{y}^{2}(k)=\frac{n_{x}-1}{n_{x}}\cdot\frac{U(k)}{n_{x}}+\frac{n_{y}-1}{n_{y}}\cdot\frac{k^{2}V(k)}{n_{y}}.$
In particular, due to (37), we obtain that
$\left|\frac{S^{2}(k)}{\mathbb{V}(\hat{\theta}_{P}(k))}-1\right|\leq\frac{|U(k)-\xi_{1,0}(k)|}{\xi_{1,0}(k)}+\frac{|V(k)-\xi_{0,1}(k)|}{\xi_{0,1}(k)}+\frac{|U(k)|}{n_{x}\,\xi_{1,0}(k)}+\frac{|V(k)|}{n_{y}\,\xi_{0,1}(k)}.$
By Lemma 16, $U(k)$ converges in probability to $\xi_{1,0}(k)$, while
similarly $V(k)$ converges in probability to $\xi_{0,1}(k)$; in particular,
the first two terms on the right-hand side of the inequality converge to $0$
in probability. Since $|U(k)|\leq 1$ and $|V(k)|\leq 1$, the same can be said
about the last two terms of the inequality. Consequently,
$S^{2}(k)/\mathbb{V}(\hat{\theta}_{P}(k))$ converges to $1$ in probability, as
$n_{x},n_{y}\to\infty$. As stated in (39), however, conditions (a)-(c) imply
that $\mathbb{V}(\hat{\theta}_{P}(k))$ and $\mathbb{V}(\hat{\theta}(k))$ are
asymptotically equivalent as $n_{x},n_{y}\to\infty$, from which the theorem
follows. $\Box$
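A minimal sketch of the resulting plug-in standard error, reusing the hypothetical delete_one_quantities helper from the sketch following Lemma 16; the normalizing factors follow (65) and (66):

```python
import numpy as np

def S2(X, Y, k):
    """Variance estimate S^2(k) = S_x^2(k) + S_y^2(k) from (65)-(66)."""
    n_x, n_y = len(X), len(Y)
    tx, tx_p, ty, ty_p = delete_one_quantities(X, Y, k)
    U = np.mean((tx - tx_p) ** 2)                      # estimates xi_{1,0}(k)
    V = np.mean((ty - ty_p) ** 2)                      # estimates xi_{0,1}(k)
    return (n_x - 1) / n_x * U / n_x + (n_y - 1) / n_y * k ** 2 * V / n_y
```

Combined with Theorem 5, this suggests approximate intervals of the form $\hat{\theta}(k)\pm z_{\alpha/2}\,S(k)$.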
## Acknowledgments
We thank Rob Knight for insightful discussions and comments about this
manuscript, and Antonio Gonzalez for providing processed OTU tables from the
Human Microbiome Project.
## References
* 1. Lladser ME, Gouet R, Reeder J (2011) Extrapolation of urn models via poissonization: Accurate measurements of the microbial unknown. PLoS ONE 6: e21105.
* 2. Good IJ (1953) The population frequencies of species and the estimation of population parameters. Biometrika 40: 237-264.
* 3. Esty WW (1983) A Normal limit law for a nonparametric estimator of the coverage of a sample. Ann Stat 11: 905-912.
* 4. Mao CX (2004) Predicting the conditional probability of finding a new class. J Am Stat Assoc 99: 1108-1118.
* 5. Lijoi A, Mena RH, Prünster I (2007) Bayesian nonparametric estimation of the probability of discovering new species. Biometrika 94: 769-786.
* 6. Robbins HE (1968) Estimating the total probability of the unobserved outcomes of an experiment. Ann Math Statist 39: 256-257.
* 7. Starr N (1979) Linear estimation of discovering a new species. Ann Stat 7: 644-652.
* 8. Halmos PR (1946) The theory of unbiased estimation. Ann Math Statist 17: 34-43.
* 9. Clayton MK, Frees EM (1987) Linear estimation of discovering a new species. J Am Stat Assoc 82: 305-311.
* 10. Lozupone C, Knight R (2005) UniFrac: a new phylogenetic method for comparing microbial communities. Appl Environ Microbiol 71: 8228-8235.
* 11. Lozupone C, Lladser ME, Knights D, Stombaugh J, Knight R (2011) UniFrac: An effective distance metric for microbial community comparison. ISME J 5: 169-172.
* 12. Tuerk C, Gold L (1990) Systematic evolution of ligands by exponential enrichment: RNA ligands to bacteriophage T4 DNA polymerase. Science 249: 505-510.
* 13. Huttenhower C, Gevers D, Sathirapongsasuti JF, Segata N, Earl AM, et al. (2012) Structure, function and diversity of the healthy human microbiome. Nature 486: 207–214.
* 14. Hoeffding W (1948) A class of statistics with asymptotically normal distribution. Ann Math Statist 19: 293-325.
* 15. Efron B, Stein C (1981) The jackknife estimate of variance. Ann Stat 9: 586-596.
* 16. Shao J, Wu C (1989) A general theory for jackknife variance estimation. Ann Stat 17: 1176-1197.
* 17. Grams WF, Serfling RJ (1973) Convergence rate for U-statistics and related statistics. Ann Stat 1: 153-160.
* 18. Hampton JD (2012) Dissimilarity and Optimal Sampling in Urn Ensemble Model. Ph.D. thesis, University of Colorado, Boulder, Colorado.
* 19. Ahmad AA (1980) On the Berry-Esseen theorem for random U-statistics. Ann Stat 8: 1395-1398.
* 20. Callaert H, Janssen P (1978) The Berry-Esseen theorem for U-statistics. Ann Stat 6: 417-421.
* 21. Slutsky E (1925) Über stochastische asymptoten und grenzwerte. Metron 5: 3-89.
* 22. Durrett R (2010) Probability: Theory and Examples. Cambridge Series in Statistical and Probabilistic Mathematics. Cambridge University Press. URL http://books.google.com/books?id=evbGTPhuvSoC.
* 23. Shevtsova IG (2010) An improvement of convergence rate estimates in the Lyapunov theorem. Doklady Mathematics 82: 862-864.
* 24. Sen PK (1977) Almost sure convergence of generalized U-statistics. Ann Probab 5: 287-290.
## Figure Legends
Figure 1: Dissimilarity estimates. Heat map of $\hat{\theta}(n_{y})$ sorted by site location metadata. Here, the $x$-axis denotes the sample from the environment corresponding to urn-$x$, and similarly for the $y$-axis. The entries on the diagonal are set to zero.
Figure 2: Error estimates. Heat map of $S(n_{y})$ sorted by site location metadata. Here, the $x$-axis also denotes the sample from the environment corresponding to urn-$x$, and similarly for the $y$-axis, and the entries on the diagonal are again set to zero.
Figure 3: Discrete derivative estimates. Heat map of $|\hat{\theta}(n_{y})-\hat{\theta}(n_{y}-1)|$, sorted by site location metadata, following the same conventions as in the previous figures.
Figure 4: Sequential estimation. Plots of $\hat{\theta}(k)$, with $k=1:n_{y}$, for three pairs of samples of the HMP data.
## Tables
Table 1: HMP data. Summary of V35 16S rRNA data processed by Qiime into an OTU table.
Body Supersite | Body Subsite | Assigned Labels
---|---|---
Airways | Anterior Nares | 1-5
| Throat | 6-17
Gastrointestinal Tract | Stool | 18-47
Oral | Attached/Keratinized Gingiva | 48-59
| Buccal Mucosa | 60-76
| Hard Palate | 77-90
| Palatine Tonsils | 91-112
| Saliva | 113-122
| Subgingival Plaque | 123-144
| Supragingival Plaque | 145-167
| Tongue Dorsum | 168-191
Skin | Left Antecubital Fossa | 192-195
| Left Retroauricular Crease | 196-217
| Right Antecubital Fossa | 218-222
| Right Retroauricular Crease | 223-242
Urogenital Tract | Mid Vagina | 243-248
| Posterior Fornix | 249-259
| Vaginal Introitus | 260-266
Table 2: Sample comparisons. Summary of estimates for three pairs of samples of the HMP data.
Urn-$x$ | Urn-$y$ | $n_{x}$ | $n_{y}$ | $\hat{\theta}(n_{y})$ | $\hat{\theta}(n_{y})-\hat{\theta}(n_{y}-1)$ | Regression Error | $S(n_{y})$ | $\hat{\rho}^{n_{y}}$
---|---|---|---|---|---|---|---|---
255 | 176 | 5054 | 6782 | 0.9998 | 0.0 | 0.0 | 1.9892$\times 10^{-4}$ | 0.0
200 | 139 | 12747 | 5739 | 0.0499 | -1.6533$\times 10^{-4}$ | 6.8306$\times 10^{-6}$ | 1.9286$\times 10^{-3}$ | 0.9997
100 | 10 | 6206 | 8655 | 0.0324 | -2.9416$\times 10^{-6}$ | 0.0438 | 2.2477$\times 10^{-3}$ | 0.5130
## Supporting Information Legends
File S1. Summary Metadata related to Table 1 (tab-delimited text file).
File S2. OTU table related to Table 2 and Figures 1-4 (tab-delimited text file).
|
arxiv-papers
| 2013-02-08T00:16:47 |
2024-09-04T02:49:41.508669
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Jerrad Hampton and Manuel E. Lladser",
"submitter": "Manuel Lladser",
"url": "https://arxiv.org/abs/1302.1916"
}
|
1302.2065
|
# An Empirical Determination of the EBL and the Gamma-ray Opacity of the
Universe
Floyd W. Stecker NASA Goddard Space Flight Center, Greenbelt, MD 20771, USA
Department of Physics and Astronomy, University of California, Los Angeles
###### Abstract
I present the results of a new approach to the intensity and photon density
spectrum of the intergalactic background light as a function of redshift using
observational data obtained in many different wavelength bands from local to
deep galaxy surveys. This enables an empirical determination of both the EBL
and its observationally based uncertainties. Using these results one can place
68% confidence upper and lower limits on the opacity of the universe to
$\gamma$-rays, free of the theoretical assumptions that were needed for past
calculations. I compare our results with measurements of the extragalactic
background light, upper limits obtained from observations made by the Fermi
Gamma-ray Space Telescope, and new observationally based results from Fermi
and H.E.S.S. using recent analyses of blazar spectra.
## I INTRODUCTION
Past work on estimating the spectral and redshift characteristics of the
intergalactic photon density, generically referred to as the EBL, has
depended on various assumptions as to the evolution of stellar populations and
dust absorption in galaxies. A detailed review of the problem has been given
by Dwek & Krennrich dwe12 . There have also been attempts to probe the EBL
from studies of blazar spectra ack12 ; abr12 , an approach originally suggested
by Stecker, De Jager & Salamon sds92 . In this paper, I present the results of
a new, fully empirical approach to calculating the EBL, and subsequently, the
$\gamma$-ray opacity of the Universe from pair production interactions of
$\gamma$-rays with the EBL. This approach of M.A. Malkan, S.T. Scully and
myself, hitherto unattainable, is now enabled by very recent data from deep
galaxy surveys spanning the electromagnetic spectrum from millimeter to UV
wavelengths, which provide galaxy luminosity functions for redshifts up
to $z=8$. This new approach, in addition to being an alternative to the
approaches mentioned above, is totally model independent; it does not make
assumptions as to galaxy dust or star formation characteristics or as to
blazar emission models. It is also an approach uniquely capable of delineating
empirically based uncertainties on the determination of the EBL. More details
of the work presented here have now been published sms12 .(See also Ref. hk12
.)
## II GALAXY EMISSIVITIES AND LUMINOSITY DENSITIES
The observationally determined co-moving radiation energy density
$u_{\nu}(z)$ is derived from the co-moving specific emissivity
${\cal E}_{\nu}(z)$, which, in turn, is derived from the observed galaxy luminosity
function (LF) at redshift $z$. The galaxy luminosity function,
$\Phi_{\nu}(L)$, is defined as the distribution function of galaxy
luminosities at a specific frequency or wavelength. The specific emissivity at
frequency $\nu$ and redshift $z$ (also referred to in the literature as the
luminosity density, $\rho_{L_{\nu}}$), is the luminosity-weighted integral over
the luminosity function
${\cal E}_{\nu}(z)=\int_{L_{min}}^{L_{max}}L_{\nu}\,\Phi(L_{\nu};z)\,dL_{\nu}$ (1)
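The paper works directly from published luminosity densities rather than re-integrating luminosity functions, but for concreteness the following Python sketch shows how equation (1) would be evaluated for an assumed Schechter form $\Phi(L)\,dL=\phi^{*}(L/L^{*})^{\alpha}e^{-L/L^{*}}\,dL/L^{*}$; all parameter values are placeholders, not fitted values from any survey.

```python
import numpy as np
from scipy.integrate import quad

def schechter(L, phi_star, L_star, alpha):
    """Schechter luminosity function Phi(L) per unit luminosity (assumed form)."""
    x = L / L_star
    return (phi_star / L_star) * x ** alpha * np.exp(-x)

def luminosity_density(phi_star, L_star, alpha, L_min, L_max):
    """Specific emissivity of eq. (1): integral of L * Phi(L) dL, done in ln L."""
    def integrand(lnL):
        L = np.exp(lnL)
        return L * L * schechter(L, phi_star, L_star, alpha)  # extra L is the Jacobian dL = L d(lnL)
    value, _ = quad(integrand, np.log(L_min), np.log(L_max))
    return value

# Placeholder parameters (illustrative only):
rho = luminosity_density(phi_star=1.6e-3, L_star=1.0e10, alpha=-1.2,
                         L_min=1.0e7, L_max=1.0e13)
print(f"luminosity density ~ {rho:.3e} (in whatever units L and phi* carry)")
```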
In view of the well known difficulties in predicting galaxy LFs based on
galaxy formation models mar07 , our working philosophy was to depend only on
observational determinations of galaxy LFs in deriving specific emissivities.
There are many references in the literature where the LF is given and fit to
Schechter parameters, but where $\rho_{L_{\nu}}$ is not given. In those cases,
we could not determine the covariance of the errors in the Schechter
parameters used to determine the dominant statistical errors in their
analyses. Thus, we could not ourselves accurately determine the error on the
emissivity from equation (1). We therefore chose to use only the papers that
gave values for $\rho_{L_{\nu}}(z)={\cal E}_{\nu}(z)$ with errors. We did not
consider cosmic variance, but this uncertainty should be minimized since we
used data from many surveys.
The co-moving radiation energy density $u_{\nu}(z)$ is the time integral of
the co-moving specific emissivity ${\cal E}_{\nu}(z)$,
$u_{\nu}(z)=\int_{z}^{z_{\rm max}}dz^{\prime}\,{\cal
E}_{\nu^{\prime}}(z^{\prime})\frac{dt}{dz}(z^{\prime})e^{-\tau_{\rm
eff}(\nu,z,z^{\prime})},$ (2)
where $\nu^{\prime}=\nu(1+z^{\prime})/(1+z)$ and $z_{\rm max}$ is the redshift
corresponding to initial galaxy formation sal98 , and
$\frac{dt}{dz}{(z)}={[H_{0}(1+z)\sqrt{\Omega_{\Lambda}+\Omega_{m}(1+z)^{3}}}]^{-1},$
(3)
with $\Omega_{\Lambda}=0.72$ and $\Omega_{m}=0.28$.
The opacity factor for frequencies below the Lyman limit is dominated by dust
extinction. Since we were using actual observations of galaxies rather than
models, dust absorption is implicitly included. The remaining opacity
$\tau_{\nu}$ refers to the extinction of ionizing photons with frequencies
above the rest frame Lyman limit of $\nu_{LyL}\equiv 3.29\times 10^{15}$ Hz by
interstellar and intergalactic hydrogen and helium. It has been shown that
this opacity is very high, corresponding to the expectation of a very small
fraction of ionizing radiation in intergalactic space compared with radiation
below the Lyman limit lyt95 ; sal98 . In fact, the Lyman limit cutoff is used
as a tool: when galaxies disappear in a filter at a given waveband
(e.g., “$U$-dropouts”, “$V$-dropouts”), this indicates the redshift of
the Lyman limit.
We have therefore replaced equation (2) with the following expression
$u_{\nu}(z)=\int_{z}^{z_{\rm max}}dz^{\prime}\,{\cal
E}_{\nu^{\prime}}(z^{\prime})\frac{dt}{dz}(z^{\prime}){{\cal
H}(\nu(z^{\prime})-\nu^{\prime}_{LyL})},$ (4)
where ${\cal H}(x)$ is the Heaviside step function.
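A minimal numerical sketch of equations (3) and (4), assuming a toy emissivity and an assumed value of $H_{0}$ (the text fixes only $\Omega_{\Lambda}$ and $\Omega_{m}$); the Heaviside factor is implemented as a cutoff that removes emitted-frame frequencies at or above the Lyman limit, as described in the preceding paragraph. Function names and the toy emissivity are illustrative only.

```python
import numpy as np
from scipy.integrate import quad

H0 = 70.0 * 3.24078e-20        # assumed H0 ~ 70 km/s/Mpc, converted to 1/s
Omega_L, Omega_m = 0.72, 0.28  # values quoted in the text
NU_LYL = 3.29e15               # Lyman-limit frequency in Hz

def dt_dz(z):
    """|dt/dz| of eq. (3)."""
    return 1.0 / (H0 * (1.0 + z) * np.sqrt(Omega_L + Omega_m * (1.0 + z) ** 3))

def u_nu(nu, z, emissivity, z_max=8.0):
    """Co-moving energy density of eq. (4) for a given specific emissivity E_nu(z)."""
    def integrand(zp):
        nup = nu * (1.0 + zp) / (1.0 + z)
        if nup >= NU_LYL:                      # Heaviside cutoff above the Lyman limit
            return 0.0
        return emissivity(nup, zp) * dt_dz(zp)
    value, _ = quad(integrand, z, z_max, limit=200)
    return value

# Toy emissivity in arbitrary units, purely for illustration:
toy_E = lambda nu, z: 1.0e-40 * (1.0 + z) ** 3 * np.exp(-nu / 1.0e15)
print(u_nu(5.0e14, 0.0, toy_E))
```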
## III EMPIRICAL SPECIFIC EMISSIVITIES
We have used the results of many galaxy surveys to compile a set of luminosity
densities (LDs), $\rho_{L_{\nu}}(z)={\cal E}_{\nu}(z)$, at all observed
redshifts, and at rest-frame wavelengths from the far-ultraviolet, FUV = 150
nm to the $I$ band, $I$ = 800 nm. The LDs were obtained with a wide variety of
instruments in many different deep fields sms12 . Figure 1 shows the redshift
evolution of the luminosity ${\cal E}_{\nu}(z)$ for the various wavebands
based on those published in the literature. In order to determine the redshift
evolution of the LD in each of the bands out to a redshift of $\sim$ 8 where
only UV data are available, we utilized observed color relations to transform
data from other bands. We have chosen to include all data possible at $z>1.5$
in order to fill in the observational gaps for various wavebands, mostly at
higher redshifts. We used the redshift-dependent observations of average
galaxy colors where appropriate in our analysis. In the redshift ranges where
they overlap, the colored (observational) data points shown in Figure 1 for
the various wavelength bands agree quite well (within the uncertainties) with
the black data points that were extrapolated from the shorter wavelength bands
using our color relations. The observationally determined LDs, combined with
the color relations, extend our coverage of galaxy photon production from the
FUV to NIR wavelengths in the galaxy rest frame. Our final results are not
very sensitive to errors in our average color relations because the
interpolations that we made were only over very small fractional wavelength
intervals, $\Delta\lambda(z)$. We have directly tested this by using numerical
trial runs.
Figure 1: The observed specific emissivities in our standard astronomical
fiducial wavebands. The lower right panel shows all of the observational data
used. In the other panels, non-band data have been shifted using observed
color relations in order to fully determine the specific emissivities in each
waveband. The symbol designations are FUV: black filled circles, NUV: magenta
open circles, $U$: green filled squares, $B$: blue open squares, $V$: brown
filled triangles, $R$: orange open triangles, $I$: yellow open diamonds. Grey
shading: derived 68% confidence bands.
## IV THE PHOTON DENSITIES WITH EMPIRICAL UNCERTAINTIES
The 68% confidence band upper and lower limits of the EBL were determined from
the observational data on $\rho_{L_{\nu}}$. We made no assumptions about
luminosity density evolution. We derived a luminosity confidence band in each
waveband by using a robust rational fitting function characterized by
$\rho_{L_{\nu}}={\cal E}_{\nu}(z)={{ax+b}\over{cx^{2}+dx+e}}$ (5)
where $x=\log(1+z)$ and $a$, $b$, $c$, $d$, and $e$ are free parameters.
The 68% confidence band was then computed from Monte Carlo simulation by
finding $10^{5}$ realizations of the data and then fitting the function to the
form given by eq. (5). In order to best represent the tolerated confidence
band, particularly at the highest redshifts, we chose to equally weight all
FUV points in excess of a redshift of 2. Our goal was not to find the best fit
to the data, but rather to find the limits tolerated by the current
observational data. In order to perform the Monte Carlo analysis of the
fitting function, a likelihood was determined at each redshift given the
existing data. The shape of this function was taken to be Gaussian (or the sum
of Gaussians where multiple points exist) for symmetric errors quoted in the
literature. Where symmetric errors are not quoted it is impossible to know
what the actual shape of the likelihood function is. We have chosen to utilize
a skew normal distribution to model asymmetric errors. This assumption has
very little impact on the determination of the confidence bands. The resulting
bands are shown along with the luminosity density data in Figure 1.
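A schematic Python version of this procedure (with made-up luminosity-density points, symmetric Gaussian errors only, and far fewer than the $10^{5}$ realizations used in the paper; the base of the logarithm in $x=\log(1+z)$ is taken to be 10 here, which is an assumption):

```python
import numpy as np
from scipy.optimize import curve_fit

def rational(x, a, b, c, d, e):
    """Fitting function of eq. (5)."""
    return (a * x + b) / (c * x ** 2 + d * x + e)

# Placeholder luminosity-density data: redshifts, values and 1-sigma errors.
z = np.array([0.1, 0.3, 0.7, 1.0, 1.5, 2.0, 3.0, 4.0, 5.0])
rho = np.array([2.1, 2.6, 3.4, 3.8, 3.9, 3.6, 2.8, 2.0, 1.4])   # arbitrary units
err = 0.15 * rho
x = np.log10(1.0 + z)

rng = np.random.default_rng(1)
n_mc = 2000
grid = np.linspace(x.min(), x.max(), 100)
curves = []
for _ in range(n_mc):
    fake = rng.normal(rho, err)                   # one Gaussian realization of the data
    try:
        popt, _ = curve_fit(rational, x, fake, p0=[1.0, 1.0, 1.0, 1.0, 1.0], maxfev=5000)
    except RuntimeError:
        continue                                  # skip realizations that fail to converge
    curves.append(rational(grid, *popt))
curves = np.array(curves)
lo, hi = np.percentile(curves, [16, 84], axis=0)  # pointwise 68% confidence band
```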
With the confidence bands established, we took the upper and lower limits of
the bands to be our high and low EBL constraints respectively. We then
interpolated each of these cases separately between the various wavebands to
find the upper and lower limit rest frame LDs. The calculation was extended to
the Lyman limit using the derivative derived from our color relationship
between the near and far UV bands. The co-moving radiation energy density was
then determined from equation (4). This result was used as input for the
determination of the optical depth of the universe to $\gamma$-rays . Our
resultant $z=0$ EBL is shown in Figure 2 and compared with the present data,
as discussed in Ref. sms12 .
Figure 2: Our empirically-based determination of the EBL together with lower
limits and data as described in Ref. sms12 .
## V THE OPTICAL DEPTH FROM $\gamma-\gamma$ INTERACTIONS WITH UV-IR PHOTONS
The photon density
$n(\epsilon,z)=u(\epsilon,z)/\epsilon\ \ ,$ (6)
with $\epsilon=h\nu$, $h$ being Planck’s constant, was calculated using
equation (2).
The cross section for photon-photon scattering to electron-positron pairs can
be calculated using quantum electrodynamics bre34 . The threshold for this
interaction follows from the frame invariance of the square of the total
four-momentum, which at threshold equals the square of the energy required to
produce twice the electron rest mass in the c.m.s.,
$s=2\epsilon E_{\gamma}(1-\cos\theta)=4m_{e}^{2}c^{4}$ (7)
This invariance is known to hold to within one part in $10^{15}$ ste01 ; jac04
. With the co-moving energy density $u_{\nu}(z)$ evaluated, the optical depth
for $\gamma$-rays owing to electron-positron pair production interactions with
photons of the stellar radiation background can be determined from the
expression sds92
$\displaystyle\tau(E_{0},z_{e})$ $\displaystyle=$ $\displaystyle
c\int_{0}^{z_{e}}dz\,\frac{dt}{dz}\int_{0}^{2}dx\,\frac{x}{2}\int_{0}^{\infty}d\nu\,(1+z)^{3}$
$\displaystyle\times\
\left[\frac{u_{\nu}(z)}{h\nu}\right]\sigma_{\gamma\gamma}[s=2E_{0}h\nu
x(1+z)],$
where $E_{0}$ is the observed $\gamma$-ray energy at redshift zero, $\nu$ is
the frequency at redshift $z$, $z_{e}$ is the redshift of the $\gamma$-ray
source at emission, $x=(1-\cos\theta)$,
$\theta$ being the angle between the $\gamma$-ray and the soft background
photon.
The pair production cross section $\sigma_{\gamma\gamma}$ is zero for center-
of-mass energy $\sqrt{s}<2m_{e}c^{2}$, $m_{e}$ being the electron mass. Above
this threshold, the pair production cross section is given by
$\displaystyle\sigma_{\gamma\gamma}(s)$ $\displaystyle=$
$\displaystyle\frac{3}{16}\sigma_{\rm T}(1-\beta^{2})$ $\displaystyle\times\
\left[2\beta(\beta^{2}-2)+(3-\beta^{4})\ln\left(\frac{1+\beta}{1-\beta}\right)\right],$
where $\sigma_{T}$ is the Thomson scattering cross section and
$\beta=(1-4m_{e}^{2}c^{4}/s)^{1/2}$ jau55 .
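For reference, a short Python sketch of this cross section (constants and the grid are illustrative; energies are in eV so that the threshold is at $\sqrt{s}=2m_{e}c^{2}\approx 1.022$ MeV):

```python
import numpy as np

SIGMA_T = 6.6524587e-25      # Thomson cross section in cm^2
M_E_C2 = 0.510998950e6       # electron rest energy in eV

def sigma_gg(s):
    """Pair-production cross section quoted above, for s in eV^2 (zero below threshold)."""
    s = np.asarray(s, dtype=float)
    beta2 = 1.0 - 4.0 * M_E_C2 ** 2 / s
    out = np.zeros_like(s)
    ok = beta2 > 0.0
    b = np.sqrt(beta2[ok])
    out[ok] = (3.0 / 16.0) * SIGMA_T * (1.0 - b ** 2) * (
        2.0 * b * (b ** 2 - 2.0) + (3.0 - b ** 4) * np.log((1.0 + b) / (1.0 - b)))
    return out

# The cross section peaks not far above threshold:
s_grid = (2.0 * M_E_C2) ** 2 * np.linspace(1.001, 20.0, 2000)
print("peak at s / s_threshold ~", s_grid[np.argmax(sigma_gg(s_grid))] / (2.0 * M_E_C2) ** 2)
```

Evaluating the triple integral for $\tau(E_{0},z_{e})$ is then a matter of nesting this cross section with $u_{\nu}(z)$ and $dt/dz$ over $\nu$, $x$ and $z$.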
It follows from equation (7) that the pair-production threshold corresponds to
a wavelength $\lambda=4.75\ \mu{\rm m}\cdot E_{\gamma}({\rm TeV})$. The maximum
$\lambda$ that we consider here is in the rest frame I band at 800 nm at
redshift $z$, and we observe $E_{\gamma}$ at redshift 0, so that its energy at
interaction in the rest frame is $(1+z)E_{\gamma}$; we therefore get a
conservative upper limit on $E_{\gamma}$ of $\sim 200(1+z)^{-1}$ GeV as the
maximum $\gamma$-ray energy affected by the photon range considered here.
Allowing for a small error, our opacities are good to $\sim 250(1+z)^{-1}$
GeV. The 68% opacity ranges for $z=0.1,0.5,1,3~{}$and $5$ are plotted in
Figure 3.
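As a quick back-of-envelope check of the quoted limit (a verification, not an additional result): with $\lambda_{\max}=0.8\ \mu$m, equation (7) gives $E_{\gamma,\max}\simeq(0.8\ \mu{\rm m})/(4.75\ \mu{\rm m\ TeV^{-1}})\,(1+z)^{-1}\approx 0.17\,(1+z)^{-1}$ TeV $\approx 170\,(1+z)^{-1}$ GeV, consistent with the conservative $\sim 200$ to $250\,(1+z)^{-1}$ GeV range quoted above.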
The widths of the grey uncertainty ranges in the LDs shown in Figure 1
increase towards higher redshifts, especially at the longest rest wavelengths.
This reflects the decreasing amount of long-wavelength data and the
corresponding increase in uncertainties about the galaxies in those regimes.
However, these uncertainties do not greatly influence the opacity
calculations. Because of the short time interval of the emission from galaxies
at high redshifts their photons do not contribute greatly to the opacity at
lower redshifts. Figure 3 shows that the opacities determined for redshifts of
3 and 5 overlap within the uncertainties.
Figure 3: The empirically determined opacities for redshifts of 0.1, 0.5, 1,
3, 5. The dashed lines are for $\tau=1$ and $\tau=3$ sms12 .
## VI Results and Implications
We have determined the EBL using local and deep galaxy survey data, together
with observationally produced uncertainties, for wavelengths from 150 nm to
800 nm and redshifts out to $z>5$. We have presented our results in terms of
68% confidence band upper and lower limits. In Figure 2, we compare our $z=0$
result with both published and preliminary measurements and limits. As
expected, our $z=0$ (EBL) 68% lower limits as shown in Figure 2 are higher
than those obtained by galaxy counts alone, since the EBL from galaxies is not
completely resolved.
Figure 4 shows our 68% confidence band for $\tau=1$ on an energy-redshift plot
fs70 compared with the Fermi data on the highest energy photons from
extragalactic sources at various redshifts as given in Ref. abd10 . It can be
seen that none of the photons from these sources would be expected to be
significantly annihilated by pair production interactions with the EBL. This
point is brought out further in Figure 5. This figure compares the 68%
confidence band of our opacity results with the 95% confidence upper limits on
the opacity derived for specific blazars abd10 .
In a recent publication, the Fermi Collaboration has probed for the imprint of
the intergalactic radiation fields (the EBL) in the $\gamma$-ray spectra of
blazars ack12 , an approach originally suggested in Ref. sds92 . Their result
appears to be consistent with our results near the low opacity end of our
uncertainty range. The H.E.S.S. group abr12 has also looked for such an
effect in the spectra of bright blazars at energies above 100 GeV. It follows
from eq. (7) that such air Čerenkov telescope studies are sensitive only to
interactions of $\gamma$-rays with infrared photons. The H.E.S.S. group has
recently obtained a value for the $z=0$ EBL of $15\pm 2_{stat}\pm 3_{sys}$ nW
m$^{-2}$ sr$^{-1}$ at a wavelength of 1.4 $\mu$m. This compares to our value of
$17.5\pm 4.9$ nW m$^{-2}$ sr$^{-1}$ at 0.9 $\mu$m.
My colleagues M.A. Malkan and S.T. Scully and I are presently continuing our
studies of the photon density spectrum as a function of redshift into the
infrared range using surveys from Hubble, Spitzer, Herschel and other sources.
With such studies, together with ongoing complementary $\gamma$-ray
observations of extragalactic sources with Fermi and future observations using
the Čerenkov Telescope Array, which will be sensitive to energies above 10 GeV
cta10 , one can look forward to obtaining a better understanding of both the
EBL and other potential aspects of $\gamma$-ray propagation in the Universe,
such as those explored in Refs. dea09 – ess12 .
Figure 4: An energy-redshift plot of the $\gamma$-ray horizon showing our
uncertainty band results sms12 compared with the Fermi plot of their highest
energy photons from FSRQs (red), BL Lacs (black) and GRBs (blue) vs. redshift
abd10 . Figure 5: Our opacity results for the redshifts of the blazars
indicated sms12 compared with 95% confidence opacity upper limits (red
arrows) and 99% confidence limits (blue arrows) as given by the Fermi analysis
abd10 .
### Results Online
Our results in numerical form are available at the following link:
http://csma31.csm.jmu.edu/physics/scully/opacities.html
###### Acknowledgements.
F.W.S., M.A. Malkan and S.T. Scully were partially supported by a Fermi Cycle
4 Guest Investigator grant.
## References
* (1) E. Dwek and F. Krennrich, arXiv:1209.4661 (2012)
* (2) M. Ackermann, et al., Science 338, 1190 (2012)
* (3) A. Abramowski et al. arXiv:1212.3409 (2012)
* (4) F.W. Stecker, O.C. De Jager and M.H. Salamon, ApJ 390, L49 (1992)
* (5) F.W. Stecker, M.A. Malkan and S.T. Scully, ApJ 761, 128 (2012)
* (6) K. Helgason and H. Kashlinsky, ApJ 758, L13 (2012)
* (7) D. Marchesini and P. Van Dokkum, ApJ, 663, L89 (2007)
* (8) M.H. Salamon and F.W. Stecker, ApJ, 493, 547 (1998)
* (9) C. Leitherer et al., ApJ, 454, L19 (1995)
* (10) G. Breit and J.A. Wheeler, Phys. Rev., 46, 1087 (1934)
* (11) J.M. Jauch and F. Rohrlich, The Theory of Photons and Electrons (Cambridge, MA:Addison-Wesley) (1955)
* (12) F.W. Stecker and S.L. Glashow, Astropart. Phys., 16, 97 (2001)
* (13) T. Jacobson, S. Liberati, D. Mattingly and F.W. Stecker, Phys. Rev. Letters 93, 021101 (2004)
* (14) G.G. Fazio and F.W. Stecker, Nature 226, 135 (1970)
* (15) A. Abdo et al. ApJ, 723, 1082 (2010)
* (16) The CTA Consortium, arXiv:1008.3703 (2010)
* (17) A. De Angelis et al. Mon. Not. Royal Astr. Soc. 394, L21 (2009)
* (18) W. Essey et al. Phys. Rev. Letters 104, 141102 (2010)
* (19) W. Essey and A. Kusenko, ApJ 751, L11 (2012)
|
arxiv-papers
| 2013-02-08T15:33:08 |
2024-09-04T02:49:41.523042
|
{
"license": "Public Domain",
"authors": "Floyd W. Stecker",
"submitter": "Floyd Stecker",
"url": "https://arxiv.org/abs/1302.2065"
}
|
1302.2139
|
$\mathbf{2000}$ Mathematics Subject Classification: 53C15, 53C25.
Key words and phrases : Sasakian manifold, locally $\phi$-symmetric,
$\phi$-semisymmetric, Ricci $\phi$-semisymmetric, projectively
$\phi$-semisymmetric, conformally $\phi$-semisymmetric, manifold of constant
curvature, $B$-tensor, $B$-$\phi$-semisymmetric.
# ON LOCALLY $\phi$-SEMISYMMETRIC SASAKIAN MANIFOLDS
Absos Ali Shaikh and Helaluddin Ahmad
Department of Mathematics,
University of Burdwan, Golapbag,
Burdwan-713104,
West Bengal, India [email protected], [email protected]
###### Abstract.
Generalizing the notion of local $\phi$-symmetry of Takahashi [20], in the
present paper, we introduce the notion of local $\phi$-semisymmetry of a
Sasakian manifold along with its proper existence and characterization. We
also study the notion of local Ricci (resp., projective, conformal)
$\phi$-semisymmetry of a Sasakian manifold and obtain its characterization. It
is shown that the local $\phi$-semisymmetry, local projective
$\phi$-semisymmetry and local concircular $\phi$-semisymmetry are equivalent.
It is also shown that local conformal $\phi$-semisymmetry and local
conharmonical $\phi$-semisymmetry are equivalent.
## 1\. Introduction
Let $M$ be an $n$-dimensional, $n\geq 3$, connected smooth Riemannian manifold
endowed with the Riemannian metric $g$. Let $\nabla$, $R$, $S$ and $r$ be the
Levi-Civita connection, curvature tensor, Ricci tensor and the scalar
curvature of $M$ respectively. The manifold $M$ is called locally symmetric
due to Cartan ([2], [3]) if the local geodesic symmetry at $p\in M$ is an
isometry, which is equivalent to the fact that $\nabla R=0$. Generalizing the
concept of local symmetry, the notion of semisymmetry was introduced by Cartan
[4] and fully classified by Szabó ([17], [18], [19]). The manifold $M$ is said
to be semisymmetric if
$(R(U,V).R)(X,Y)Z=0$
for all vector fields $X$, $Y$, $Z$, $U$, $V$ on $M$, where $R(U,V)$ is
considered as the derivation of the tensor algebra at each point of $M$. Every
locally symmetric manifold is semisymmetric but the converse is not true, in
general. However, the converse is true only for $n=3$. As a weaker version of
local symmetry, in 1977 Takahashi [20] introduced the notion of local
$\phi$-symmetry on a Sasakian manifold. A Sasakian manifold is said to be
locally $\phi$-symmetric if
$\phi^{2}((\nabla_{W}R)(X,Y)Z)=0$
for all horizontal vector fields $X$, $Y$, $Z$, $W$ on $M$, where $\phi$ is
the structure tensor of the manifold $M$. The concept of local $\phi$-symmetry
on various structures and their generalizations or extensions are studied in
[6], [8], [9], [10], [11], [12], [13], [14], [15]. By extending the notion of
semisymmetry and generalizing the concept of local $\phi$-symmetry of
Takahashi [20], in the present paper, we introduce the notion of local
$\phi$-semisymmetry on a Sasakian manifold. A Sasakian manifold $M$, $n\geq
3$, is said to be locally $\phi$-semisymmetric if
$\phi^{2}((R(U,V).R)(X,Y)Z)=0$
for all horizontal vector fields $X$, $Y$, $Z$, $U$, $V$ on $M$. We note that
every locally $\phi$-symmetric as well as semisymmetric Sasakian manifold is
locally $\phi$-semisymmetric but not conversely. The object of the present
paper is to study the geometric properties of a locally $\phi$-semisymmetric
Sasakian manifold along with its proper existence and characterization. The
paper is organized as follows. Section 2 deals with the rudiments of Sasakian
manifolds. By extending the definition of local $\phi$-symmetry, in Section 3,
we derive the defining condition of a locally $\phi$-semisymmetric Sasakian
manifold and prove that a Sasakian manifold is locally $\phi$-semisymmetric
if and only if each Kählerian manifold, which is a base space of a local
fibering, is Hermitian locally semisymmetric. We cite an example of a locally
$\phi$-semisymmetric Sasakian manifold which is not locally $\phi$-symmetric.
We also obtain a characterization of locally $\phi$-semisymmetric Sasakian
manifold by considering the horizontal vector fields. Section 4 is devoted to
the characterization of locally $\phi$-semisymmetric Sasakian manifold for
arbitrary vector fields. As the generalization of Ricci (resp., projectively,
conformally) semisymmetric Sasakian manifold, in the last section, we
introduce the notion of locally Ricci (resp., projectively, conformally)
$\phi$-semisymmetric Sasakian manifold and obtain the characterization of such
notions. Recently Shaikh and Kundu [16] defined a generalized curvature
tensor, called $B$-tensor, by the linear combination of $R$, $S$ and $g$ which
includes various curvature tensors as particular cases. We study the
characterization of locally $B$-$\phi$-semisymmetric Sasakian manifolds. It is
shown that local $\phi$-semisymmetry, local projective $\phi$-semisymmetry and
local concircular $\phi$-semisymmetry are equivalent and hence they are of the
same characterization. Also it is proved that local conformal
$\phi$-semisymmetry and local conharmonical $\phi$-semisymmetry are
equivalent. Finally, we conclude that the study of local $\phi$-semisymmetry
and local conformal $\phi$-semisymmetry are meaningful as they are not
equivalent. However, the study of local $\phi$-semisymmetry with any other
generalized curvature tensor of type (1,3) (which are the linear combination
of $R$, $S$ and $g$) is either meaningless or redundant due to their
equivalency.
## 2\. Sasakian manifolds
An $n(=2m+1,m\geq 1)$-dimensional $C^{\infty}$ manifold $M$ is said to be a
contact manifold if it carries a global 1-form $\eta$ such that
$\eta\wedge(d\eta)^{m}\neq 0$ everywhere on $M$. Given a contact form $\eta$,
it is well-known that there exists a unique vector field $\xi$, called the
characteristic vector field of $\eta$, satisfying $\eta(\xi)=1$ and
$d\eta(X,\xi)=0$ for any vector field $X$ on $M$. A Riemannian metric $g$ is
said to be an associated metric if there exists a tensor field $\phi$ of type
(1,1) such that
(2.1) $\phi^{2}=-I+\eta\otimes\xi,\ \ \eta(\cdot)=g(\cdot,\xi),\ \ \
d\eta(\cdot,\cdot)=g(\cdot,\phi\cdot).$
Then the structure $(\phi,\xi,\eta,g)$ on $M$ is called a contact metric
structure and the manifold $M$ equipped with such a structure is called a
contact metric manifold [1].
From (2.1) it is easy to check that the following holds:
(2.2) $\displaystyle\phi\xi=0,\ \ \ \eta\circ\phi=0,\ \ \
g(\phi\cdot,\cdot)=-g(\cdot,\phi\cdot),$ (2.3) $\displaystyle
g(\phi\cdot,\phi\cdot)=g(\cdot,\cdot)-\eta\otimes\eta.$
Given a contact metric manifold $M$, there is a (1,1) tensor field $h$ given
by $h=\frac{1}{2}\pounds_{\xi}\phi$, where $\pounds$ denotes the operator of
Lie differentiation. Then $h$ is symmetric. The vector field $\xi$ is a
Killing vector field with respect to $g$ if and only if $h=0$. A contact
metric manifold $M$ for which $\xi$ is a Killing vector is said to be a
$K$-contact manifold. A contact structure on $M$ gives rise to an almost
complex structure $J$ on the product $M\times\mathbb{R}$ defined by
$J\Big{(}X,f\frac{d}{dt}\Big{)}=\Big{(}\phi
X-f\xi,\eta(X)\frac{d}{dt}\Big{)},$
where $f$ is a real valued function. If $J$ is integrable, then the structure
is said to be normal and the manifold $M$ is a Sasakian manifold. Equivalently, a
contact metric manifold is Sasakian if and only if
(2.4) $R(X,Y)\xi=\eta(Y)X-\eta(X)Y$
holds for all $X$, $Y$ on $M$.
In an $n$-dimensional Sasakian manifold $M$ the following relations hold ([1],
[22]):
(2.5) $\displaystyle
R(\xi,X)Y=(\nabla_{X}\phi)(Y)=g(X,Y)\xi-\eta(Y)X=-R(X,\xi)Y,$ (2.6)
$\displaystyle\nabla_{X}\xi=-\phi X,\ \ \ (\nabla_{X}\eta)(Y)=g(X,\phi Y),$
(2.7) $\displaystyle\eta(R(X,Y)Z)=g(Y,Z)\eta(X)-g(X,Z)\eta(Y),$ (2.8)
$\displaystyle(\nabla_{W}R)(X,Y)\xi=g(W,\phi Y)X-g(W,\phi X)Y+R(X,Y)\phi W,$
(2.9) $\displaystyle(\nabla_{W}R)(X,\xi)Z=g(X,Z)\phi W-g(Z,\phi W)X+R(X,\phi
W)Z,$ (2.10) $\displaystyle S(X,\xi)=(n-1)\eta(X),\ \ S(\xi,\xi)=(n-1)$
for all vector fields $X$, $Y$, $Z$ and $W$ on $M$. In a Sasakian manifold,
for any $X$, $Y$, $Z$ on $M$, we also have [21]
(2.11) $R(X,Y)\phi W=g(W,\phi X)Y-g(W,Y)\phi X-g(W,\phi Y)X+g(W,X)\phi Y+\phi
R(X,Y)W.$
From (2.8) and (2.11), it follows that
(2.12) $(\nabla_{W}R)(X,Y)\xi=g(W,X)\phi Y-g(W,Y)\phi X+\phi R(X,Y)W.$
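For completeness, (2.12) is obtained by substituting (2.11) into the last term of (2.8): the terms $g(W,\phi Y)X$ and $-g(W,\phi X)Y$ coming from (2.8) cancel against the terms $-g(W,\phi Y)X$ and $g(W,\phi X)Y$ coming from (2.11), and what remains is exactly $g(W,X)\phi Y-g(W,Y)\phi X+\phi R(X,Y)W$.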
## 3\. Locally $\phi$-Semisymmetric Sasakian Manifolds
Let $M$ be an $n(=2m+1,m\geq 1)$-dimensional Sasakian manifold endowed with
the structure $(\phi,\xi,\eta,g)$. Let $\tilde{U}$ be an open neighbourhood of
$x\in M$ such that the induced Sasakian structure on $\tilde{U}$, denoted by
the same letters, is regular. Let $\pi:\tilde{U}\rightarrow N=\tilde{U}/\xi$
be a (local) fibering and let $(J,\bar{g})$ be the induced Kählerian structure
on $N$ [7]. Let $R$ and $\bar{R}$ be the curvature tensors constructed by $g$
and $\bar{g}$ respectively. For a vector field $\bar{X}$ on $N$, we denote its
horizontal lift (with respect to the connection form $\eta$) by $\bar{X}^{*}$.
Then we have, for any vector fields $\bar{X}$, $\bar{Y}$ and $\bar{Z}$ on $N$,
(3.1)
$(\bar{\nabla}_{\bar{X}}{\bar{Y}})^{*}=\nabla_{{\bar{X}}^{*}}{\bar{Y}}^{*}-\eta(\nabla_{{\bar{X}}^{*}}{\bar{Y}}^{*})\xi,$
(3.2)
$(\bar{R}(\bar{X},\bar{Y})\bar{Z})^{*}=R(\bar{X}^{*},\bar{Y}^{*})\bar{Z}^{*}+g(\phi{\bar{Y}^{*}},\bar{Z}^{*})\phi{\bar{X}^{*}}-g(\phi{\bar{X}^{*}},\bar{Z}^{*})\phi{\bar{Y}^{*}}-2g(\phi{\bar{X}^{*}},\bar{Y}^{*})\phi{\bar{Z}^{*}},$
(3.3)
$(({\bar{\nabla}}_{\bar{V}}\bar{R})(\bar{X},\bar{Y})\bar{Z})^{*}=-\phi^{2}[(\nabla_{\bar{V}^{*}}R)(\bar{X}^{*},\bar{Y}^{*})\bar{Z}^{*}]$
where $\bar{\nabla}$ is the Levi-Civita connection for $\bar{g}$. The
relations (3.1) and (3.2) are due to Ogiue [7] and the relation (3.3) is due
to Takahashi [20].
Making use of (2.1), (2.4)-(2.11) and (3.1)-(3.3), we get by straightforward
calculation
(3.4)
$((\bar{R}(\bar{U},\bar{V})\cdot\bar{R})(\bar{X},\bar{Y})\bar{Z})^{*}=-\phi^{2}[(R(\bar{U}^{*},\bar{V}^{*})\cdot
R)(\bar{X}^{*},\bar{Y}^{*})\bar{Z}^{*}]$
for any vector fields $\bar{X},\bar{Y},\bar{Z},\bar{U}$ and $\bar{V}$ on $N$,
where $R(U,V)$ is considered as the derivation of the tensor algebra at each
point of $N$. Hence from (3.4) it is natural to define the following:
###### Definition 3.1.
A Sasakian manifold is said to be locally $\phi$-semisymmetric if
(3.5) $\phi^{2}[(R(U,V)\cdot R)(X,Y)Z]=0$
for any horizontal vector fields $X,Y,Z,U$ and $V$ on $M$, where a horizontal
vector is a vector which is horizontal with respect to the connection form
$\eta$ of the local fibering, that is, orthogonal to $\xi$.
Thus from (3.4) and (3.5), we can state the following:
###### Theorem 3.1.
A Sasakian manifold is locally $\phi$-semisymmetric if and only if each
Kählerian manifold, which is a base space of a local fibering, is a Hermitian
locally semisymmetric space.
Example 3.1. Suppose a Sasakian manifold is not of constant $\phi$-sectional
curvature. Then the Kählerian base manifold is not of constant sectional
curvature.
Now let $R(X,\phi X,Y,\phi Y)=f\in C^{\infty}(M)$. Then $(\nabla_{V}R)(X,\phi
X,Y,\phi Y)=(Vf)\neq 0$, i.e. the Kählerian manifold is not Hermitian locally
symmetric and therefore the Sasakian manifold is not locally $\phi$-symmetric.
Now $(\nabla_{U}\nabla_{V}R)(X,\phi X,Y,\phi Y)=U(Vf)\neq 0$, which implies
that $(R(U,V)\cdot R)(X,\phi X,Y,\phi Y)=0$, i.e. the Kählerian manifold is
Hermitian locally semisymmetric.
Hence the Sasakian manifold is locally $\phi$-semisymmetric but not locally
$\phi$-symmetric.
First we suppose that $M$ is a Sasakian manifold such that
(3.6) $\phi^{2}[(R(U,V)\cdot R)(X,Y)\xi]=0$
for any horizontal vector fields $X,Y,U$ and $V$ on $M$.
Differentiating (2.12) covariantly with respect to a horizontal vector field
$U$ we get
(3.7) $\displaystyle(\nabla_{U}\nabla_{V}R)(X,Y)\xi$ $\displaystyle=$
$\displaystyle\\{g(Y,U)g(X,V)-g(X,U)g(Y,V)-R(X,Y,U,V)\\}\xi$ $\displaystyle+$
$\displaystyle\phi((\nabla_{U}R)(X,Y)V).$
Alternating $U$ and $V$ on (3.7) we get
(3.8) $\displaystyle(\nabla_{V}\nabla_{U}R)(X,Y)\xi$ $\displaystyle=$
$\displaystyle\\{g(Y,V)g(X,U)-g(X,V)g(Y,U)-R(X,Y,V,U)\\}\xi$ $\displaystyle+$
$\displaystyle\phi((\nabla_{V}R)(X,Y)U).$
From (3.7) and (3.8) it follows that
(3.9) $\displaystyle(R(U,V)\cdot R)(X,Y)\xi$ $\displaystyle=$ $\displaystyle
2\\{g(Y,U)g(X,V)-g(X,U)g(Y,V)-R(X,Y,U,V)\\}\xi$ $\displaystyle+$
$\displaystyle\phi\\{(\nabla_{U}R)(X,Y)V-(\nabla_{V}R)(X,Y)U\\}.$
Again from (3.6) we have
(3.10) $(R(U,V)\cdot R)(X,Y)\xi=0.$
From (3.9) and (3.10) we have
(3.11) $\displaystyle 2\\{g(Y,U)g(X,V)$ $\displaystyle-$ $\displaystyle
g(X,U)g(Y,V)-R(X,Y,U,V)\\}\xi$ $\displaystyle+$
$\displaystyle\phi\\{(\nabla_{U}R)(X,Y)V-(\nabla_{V}R)(X,Y)U\\}=0.$
Applying $\phi$ on (3.11) and using (2.11), (2.12) and (2.2) we get
(3.12) $(\nabla_{U}R)(X,Y)V-(\nabla_{V}R)(X,Y)U=0.$
In view of (3.12), (3.11) yields
(3.13) $R(X,Y,U,V)=g(Y,U)g(X,V)-g(X,U)g(Y,V)$
for any horizontal vector fields $X,Y,U$ and $V$ on $M$. Hence $M$ is of
constant $\phi$-holomorphic sectional curvature 1 and hence of constant
curvature 1. This leads to the following:
###### Theorem 3.2.
If a Sasakian manifold $M$ satisfies the condition $\phi^{2}[(R(U,V)\cdot
R)(X,Y)\xi]=0$ for all horizontal vector fields $X,Y,U$ and $V$ on $M$, then
it is a manifold of constant curvature 1.
Now we consider a locally $\phi$-semisymmetric Sasakian manifold. Then from
(3.5) we have
$(R(U,V)\cdot R)(X,Y)Z=g((R(U,V)\cdot R)(X,Y)Z,\xi)\xi,$
from which we get
(3.14) $(R(U,V)\cdot R)(X,Y)Z=-g((R(U,V)\cdot R)(X,Y)\xi,Z)\xi$
for all horizontal vector fields $X,Y,Z,U$ and $V$ on $M$.
In view of (3.9), it follows from (3.14) that
(3.15) $(R(U,V)\cdot R)(X,Y)Z=[(\nabla_{U}R)(X,Y,V,\phi
Z)-(\nabla_{V}R)(X,Y,U,\phi Z)]\xi.$
Now differentiating (2.11) covariantly with respect to a horizontal vector
field $V$, we obtain
(3.16) $\displaystyle(\nabla_{V}R)(X,Y)\phi Z$ $\displaystyle=$
$\displaystyle[R(X,Y,Z,V)-\\{g(Y,Z)g(X,V)+g(X,Z)g(Y,V)]\xi$ $\displaystyle+$
$\displaystyle\phi((\nabla_{V}R)(X,Y)Z).$
Taking inner product of (3.16) with a horizontal vector field $U$, we obtain
(3.17) $g((\nabla_{V}R)(X,Y)\phi Z,U)=-g((\nabla_{V}R)(X,Y)Z,\phi U).$
Using (3.17) in (3.15) we get
(3.18) $(R(U,V)\cdot R)(X,Y)Z=[(\nabla_{U}R)(X,Y,Z,\phi
V)-(\nabla_{V}R)(X,Y,Z,\phi U)]\xi$
for any horizontal vector fields $X,Y,Z,U$ and $V$ on $M$. Hence we can state
the following:
###### Theorem 3.3.
A necessary and sufficient condition for a Sasakian manifold $M$ to be
locally $\phi$-semisymmetric is that it satisfies the relation (3.18) for all
horizontal vector fields on $M$.
## 4\. Characterization of a Locally $\phi$-Semisymmetric Sasakian Manifold
In this section we investigate the condition of local $\phi$-semisymmetry of a
Sasakian manifold for arbitrary vector fields on $M$. To find the condition we
need the following lemmas.
###### Lemma 4.1.
[20] For any horizontal vector fields $X,Y$ and $Z$ on $M$, we get
(4.1) $(\nabla_{\xi}R)(X,Y)Z=0.$
Now Lemma 4.1, (2.9) and (2.12) together imply the following:
###### Lemma 4.2.
[20] For any vector fields $X,Y,Z,V$ on $M$, we get
(4.2) $\displaystyle(\nabla_{\phi^{2}V}R)(\phi^{2}X,\phi^{2}Y)\phi^{2}Z$
$\displaystyle=$ $\displaystyle(\nabla_{V}R)(X,Y)Z$ $\displaystyle+$
$\displaystyle\eta(X)\\{g(Y,Z)\phi V-g(\phi V,Z)Y+R(Y,\phi V)Z\\}$
$\displaystyle-$ $\displaystyle\eta(Y)\\{g(X,Z)\phi V-g(\phi V,Z)X+R(X,\phi
V)Z\\}$ $\displaystyle-$ $\displaystyle\eta(Z)\\{g(X,V)\phi Y-g(Y,V)\phi
X+\phi R(X,Y)V\\}.$
Now let $X,Y,Z,U,V$ be arbitrary vector fields on $M$. We now compute
$(R(\phi^{2}U,\phi^{2}V)\cdot R)(\phi^{2}X,\phi^{2}Y)\phi^{2}Z$ in two
different ways. Firstly, from (3.18), (2.1) and (4.2) we get
(4.3) $\displaystyle(R(\phi^{2}U,\phi^{2}V)\cdot
R)(\phi^{2}X,\phi^{2}Y)\phi^{2}Z=[\\{(\nabla_{V}R)(X,Y,Z,\phi
U)-(\nabla_{U}R)(X,Y,Z,\phi V)\\}$ $\displaystyle+\eta(X)\\{g(Y,\phi V)g(\phi
U,Z)-g(Y,\phi U)g(\phi V,Z)-R(Y,Z,\phi U,\phi V)\\}$
$\displaystyle-\eta(Y)\\{g(X,\phi V)g(\phi U,Z)-g(X,\phi U)g(\phi
V,Z)-R(X,Z,\phi U,\phi V)\\}$
$\displaystyle-2\eta(Z)\\{g(X,V)g(U,Y)-g(X,U)g(V,Y)-R(X,Y,U,V)\\}]\xi.$
Again using (2.11) in (4.3), we obtain
(4.4) $\displaystyle(R(\phi^{2}U,\phi^{2}V)\cdot
R)(\phi^{2}X,\phi^{2}Y)\phi^{2}Z=[\\{(\nabla_{V}R)(X,Y,Z,\phi
U)-(\nabla_{U}R)(X,Y,Z,\phi V)\\}$
$\displaystyle-\eta(X)H(Y,Z,U,V)+\eta(Y)H(X,Z,U,V)+2\eta(Z)H(X,Y,U,V)]\xi$
where $H(X,Y,Z,U)=g(\mathcal{H}(X,Y)Z,U)$ and the tensor field $\mathcal{H}$
of type (1,3) is given by
(4.5) $\mathcal{H}(X,Y)Z=R(X,Y)Z-g(Y,Z)X+g(X,Z)Y$
for all vector fields $X,Y,Z$ on $M$. Secondly, we have
(4.6) $\displaystyle(R(\phi^{2}U,\phi^{2}V)\cdot
R)(\phi^{2}X,\phi^{2}Y)\phi^{2}Z=R(\phi^{2}U,\phi^{2}V)R(\phi^{2}X,\phi^{2}Y)\phi^{2}Z$
$\displaystyle-R(R(\phi^{2}U,\phi^{2}V)\phi^{2}X,\phi^{2}Y)\phi^{2}Z-R(\phi^{2}X,R(\phi^{2}U,\phi^{2}V)\phi^{2}Y)\phi^{2}Z$
$\displaystyle-R(\phi^{2}X,\phi^{2}Y)R(\phi^{2}U,\phi^{2}V)\phi^{2}Z.$
By straightforward calculation, from (4.6) we get
(4.7) $\displaystyle(R(\phi^{2}U,\phi^{2}V)\cdot R)(\phi^{2}X,\phi^{2}Y)\phi^{2}Z$
$\displaystyle=$ $\displaystyle-(R(U,V)\cdot R)(X,Y)Z$ $\displaystyle+$
$\displaystyle\eta(U)\big{[}H(X,Y,Z,V)\xi+\eta(X)\mathcal{H}(V,Y)Z$
$\displaystyle+$
$\displaystyle\eta(Y)\mathcal{H}(X,V)Z+\eta(Z)\mathcal{H}(X,Y)V\big{]}$
$\displaystyle-$
$\displaystyle\eta(V)\big{[}H(X,Y,Z,U)\xi+\eta(X)\mathcal{H}(U,Y)Z$
$\displaystyle+$
$\displaystyle\eta(Y)\mathcal{H}(X,U)Z+\eta(Z)\mathcal{H}(X,Y)U\big{]}.$
From (4.4) and (4.7) it follows that
(4.8) $\displaystyle(R(U,V)\cdot R)(X,Y)Z$ $\displaystyle=$
$\displaystyle\big{[}\\{(\nabla_{U}R)(X,Y,Z,\phi V)-(\nabla_{V}R)(X,Y,Z,\phi
U)\\}$ $\displaystyle+$
$\displaystyle\eta(X)H(Y,Z,U,V)-\eta(Y)H(X,Z,U,V)-2\eta(Z)H(X,Y,U,V)$
$\displaystyle+$ $\displaystyle\eta(U)H(X,Y,Z,V)-\eta(V)H(X,Y,Z,U)\big{]}\xi$
$\displaystyle+$
$\displaystyle\eta(U)\big{[}\eta(X)\mathcal{H}(V,Y)Z+\eta(Y)\mathcal{H}(X,V)Z+\eta(Z)\mathcal{H}(X,Y)V\big{]}$
$\displaystyle-$
$\displaystyle\eta(V)\big{[}\eta(X)\mathcal{H}(U,Y)Z+\eta(Y)\mathcal{H}(X,U)Z+\eta(Z)\mathcal{H}(X,Y)U\big{]}.$
Thus in a locally $\phi$-semisymmetric Sasakian manifold, the relation (4.8)
holds for any arbitrary vector fields $X,Y,Z,U$ and $V$ on $M$. Next, if the
relation (4.8) holds in a Sasakian manifold, then for any horizontal vector
fields $X,Y,Z,U$ and $V$ on $M$, we get the relation (3.18) and hence the
manifold is locally $\phi$-semisymmetric. Thus we can state the following:
###### Theorem 4.1.
A Sasakian manifold $M$ is locally $\phi$-semisymmetric if and only if the
relation (4.8) holds for any arbitrary vector fields $X,Y,Z,U$ and $V$ on $M$.
###### Corollary 4.1.
[21] A semisymmetric Sasakian manifold is a manifold of constant curvature 1.
## 5\. Locally Ricci (resp., Projectively, Conformally) $\phi$-Semisymmetric
Sasakian Manifolds
###### Definition 5.1.
A Sasakian manifold $M$ is said to be locally Ricci $\phi$-semisymmetric if
the relation
(5.1) $\phi^{2}[(R(U,V)\cdot Q)(X)]=0$
holds for all horizontal vector fields $X,Y,Z,U$ and $V$ on $M$, $Q$ being the
Ricci operator of the manifold.
We know that
(5.2) $(R(U,V)\cdot Q)(X)=R(U,V)QX-QR(U,V)X.$
Applying $\phi^{2}$ on both sides of (5.2) we get
(5.3) $\phi^{2}[(R(U,V)\cdot Q)(X)]=-(R(U,V)\cdot Q)(X)$
for all horizontal vector fields $U,V$ and $X$ on $M$. This leads to the
following:
###### Theorem 5.1.
A Sasakian manifold $M$ is locally Ricci $\phi$-semisymmetric if and only if
$(R(U,V)\cdot Q)(X)=0$ for all horizontal vector fields $U$, $V$ and $X$ on
$M$.
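For clarity, here is a minimal verification of (5.3), assuming only the standard Sasakian identities $\phi^{2}X=-X+\eta(X)\xi$, $R(X,Y)\xi=\eta(Y)X-\eta(X)Y$ and $S(X,\xi)=(n-1)\eta(X)$ (presumably among the relations (2.1)–(2.11), which are not reproduced in this part of the paper). For horizontal vector fields $U$, $V$ and $X$ we have
$\eta(R(U,V)QX)=-g(R(U,V)\xi,QX)=-g(\eta(V)U-\eta(U)V,QX)=0,$
and likewise $\eta(QR(U,V)X)=S(R(U,V)X,\xi)=(n-1)\eta(R(U,V)X)=0$, so that
$\phi^{2}[(R(U,V)\cdot Q)(X)]=-(R(U,V)\cdot Q)(X)+\eta((R(U,V)\cdot Q)(X))\xi=-(R(U,V)\cdot Q)(X),$
which is exactly (5.3).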
Now let $M$ be a locally $\phi$-semisymmetric Sasakian manifold. Then the
relation (3.18) holds on $M$. Taking inner product of (3.18) with a horizontal
vector field $W$ and then contracting over $X$ and $W$, we get $(R(U,V)\cdot
S)(Y,Z)=0$ from which it follows that $(R(U,V)\cdot Q)(Y)=0$ for all
horizontal vector fields $U$, $V$ and $Y$ on $M$. Thus in view of the Theorem
5.1, we can state the following:
###### Theorem 5.2.
A locally $\phi$-semisymmetric Sasakian manifold $M$ is locally Ricci
$\phi$-semisymmetric.
Now let $U$, $V$ and $X$ be arbitrary vector fields on a Sasakian manifold
$M$. Then in view of (2.1), (2.4), (2.5) and (2.10), (5.2) yields
$\displaystyle(R(\phi^{2}U,\phi^{2}V)\cdot Q)(\phi^{2}X)$ $\displaystyle=$
$\displaystyle-(R(U,V)\cdot Q)(X)+\\{E(X,V)\eta(U)-E(X,U)\eta(V)\\}\xi$
$\displaystyle-$
$\displaystyle\eta(X)\\{\eta(V)\mathcal{E}U-\eta(U)\mathcal{E}V\\}$
where $g(\mathcal{E}X,Y)=E(X,Y)$ and $E$ is given by
(5.5) $E(X,Y)=S(X,Y)-(n-1)g(X,Y).$
Since $\phi^{2}U$, $\phi^{2}V$ and $\phi^{2}X$ are orthogonal to $\xi$, in a
locally Ricci $\phi$-semisymmetric Sasakian manifold $M$, from (5) we have
$\displaystyle(R(U,V)\cdot Q)(X)$ $\displaystyle=$
$\displaystyle\\{E(X,V)\eta(U)-E(X,U)\eta(V)\\}\xi$ $\displaystyle-$
$\displaystyle\eta(X)\\{\eta(V)\mathcal{E}U-\eta(U)\mathcal{E}V\\}.$
Thus in a locally Ricci $\phi$-semisymmetric Sasakian manifold $M$ the
relation (5) holds for any arbitrary vector fields $U$, $V$ and $X$ on $M$.
Next, if the relation (5) holds in a Sasakian manifold $M$, then for all
horizontal vector fields $U$, $V$ and $X$, we have $(R(U,V)\cdot Q)(X)=0$ and
hence $M$ is locally Ricci $\phi$-semisymmetric. Thus we can state the
following:
###### Theorem 5.3.
A Sasakian manifold $M$ is locally Ricci $\phi$-semisymmetric if and only if
the relation (5) holds for any arbitrary vector fields $U$, $V$ and $X$ on
$M$.
###### Corollary 5.1.
[21] A Ricci semisymmetric Sasakian manifold is an Einstein manifold.
###### Definition 5.2.
A Sasakian manifold $M$ is said to be locally projectively (resp. conformally) $\phi$-semisymmetric if the relation
(5.7) $\phi^{2}[(R(U,V)\cdot P)(X,Y)Z]\ \big{(}\textnormal{resp.}\
\phi^{2}[(R(U,V)\cdot C)(X,Y)Z]\big{)}=0$
holds for all horizontal vector fields $X,Y,Z,U$ and $V$ on $M$, $P$ (resp.
$C$) being the projective (resp. conformal) curvature tensor of the manifold.
A projective transformation maps geodesics into geodesics [23], and the invariant of such a transformation, the Weyl projective curvature tensor $P$ of type (1,3), is given by [23]
(5.8) $P(X,Y)Z=R(X,Y)Z-\frac{1}{n-1}\big{[}S(Y,Z)X-S(X,Z)Y\big{]}.$
A conformal transformation is an angle-preserving mapping, and the invariant of such a transformation, the Weyl conformal curvature tensor $C$ of type (1,3) on a Riemannian manifold $M$, $n>3$, is given by [23]
$\displaystyle C(X,Y)Z$ $\displaystyle=$ $\displaystyle
R(X,Y)Z-\frac{1}{n-2}\\{S(Y,Z)X-S(X,Z)Y+g(Y,Z)QX$ $\displaystyle-$
$\displaystyle g(X,Z)QY\\}+\frac{r}{(n-1)(n-2)}\\{g(Y,Z)X-g(X,Z)Y\\}.$
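As a quick check of these formulas (a standard computation, included here only for orientation), both invariants vanish on a manifold of constant curvature $c$: there $R(X,Y)Z=c\\{g(Y,Z)X-g(X,Z)Y\\}$, $S=c(n-1)g$, $Q=c(n-1)I$ and $r=cn(n-1)$, so
$P(X,Y)Z=\Big(c-\frac{c(n-1)}{n-1}\Big)\\{g(Y,Z)X-g(X,Z)Y\\}=0,\qquad C(X,Y)Z=c\Big(1-\frac{2(n-1)}{n-2}+\frac{n}{n-2}\Big)\\{g(Y,Z)X-g(X,Z)Y\\}=0.$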
From (5.8) we get
$\displaystyle\ \ \ \ \ \ (R(U,V)\cdot P)(X,Y)Z$ $\displaystyle=(R(U,V)\cdot
R)(X,Y)Z-\frac{1}{n-1}\left[(R(U,V)\cdot S)(Y,Z)X\right.$
$\displaystyle\left.-(R(U,V)\cdot S)(X,Z)Y\right].$
Applying $\phi^{2}$ on both sides of (5) and using (3) we obtain
$\displaystyle\ \ \ \ \ \ \ \phi^{2}[(R(U,V)\cdot P)(X,Y)Z]$
$\displaystyle=-(R(U,V)\cdot R)(X,Y)Z$
$\displaystyle+\left[(\nabla_{U}R)(X,Y,Z,\phi V)-(\nabla_{V}R)(X,Y,Z,\phi
U)\right]\xi$ $\displaystyle+\frac{1}{n-1}\left[(R(U,V)\cdot
S)(Y,Z)X-(R(U,V)\cdot S)(X,Z)Y\right]$
for all horizontal vector fields $X,Y,Z,U$ and $V$ on $M$.
Now we suppose that $M$ is a locally projectively $\phi$-semisymmetric
Sasakian manifold. Then from (5) we obtain
$\displaystyle(R(U,V)\cdot R)(X,Y)Z$
$\displaystyle=\left[(\nabla_{U}R)(X,Y,Z,\phi V)-(\nabla_{V}R)(X,Y,Z,\phi
U)\right]\xi$ $\displaystyle+\frac{1}{n-1}\left[(R(U,V)\cdot
S)(Y,Z)X-(R(U,V)\cdot S)(X,Z)Y\right]$
for all horizontal vector fields $X,Y,Z,U$ and $V$ on $M$. Taking inner
product of (5) with a horizontal vector field $W$ and then contracting over
$X$ and $Z$, we get
(5.13) $(R(U,V)\cdot S)(Y,W)=0$
for all horizontal vector fields $U,V,Y$ and $W$ on $M$ and hence by Theorem
5.1, it follows that the manifold $M$ is locally Ricci $\phi$-semisymmetric.
Using (5.13) in (5), it follows that the manifold $M$ is locally
$\phi$-semisymmetric.
Next, we suppose that $M$ is a locally $\phi$-semisymmetric Sasakian manifold.
Then the relation (3.18) holds on $M$. Taking inner product of (3.18) with a
horizontal vector field $W$ and then contracting over $X$ and $W$, we get
$(R(U,V)\cdot S)(Y,Z)=0$ for all horizontal vector fields $U,V,Y$ and $Z$ on
$M$ and hence from (5) it follows that the manifold $M$ is locally
projectively $\phi$-semisymmetric. This leads to the following:
###### Theorem 5.4.
A locally projectively $\phi$-semisymmetric Sasakian manifold $M$ is locally
$\phi$-semi-symmetric and vice versa.
Now from (5) we get
$\displaystyle(R(U,V)\cdot C)(X,Y)Z$ $\displaystyle=$
$\displaystyle(R(U,V)\cdot R)(X,Y)Z$ $\displaystyle-$
$\displaystyle\frac{1}{n-2}[(R(U,V)\cdot S)(Y,Z)X-(R(U,V)\cdot S)(X,Z)Y$
$\displaystyle+$ $\displaystyle g(Y,Z)(R(U,V)\cdot Q)(X)-g(X,Z)(R(U,V)\cdot
Q)(Y)].$
Applying $\phi^{2}$ on both sides of (5) and using (3) and (5.3) we obtain
$\displaystyle\ \ \ \ \phi^{2}[(R(U,V)\cdot C)(X,Y)Z]$ $\displaystyle=$
$\displaystyle-(R(U,V)\cdot R)(X,Y)Z$ $\displaystyle+$
$\displaystyle[(\nabla_{U}R)(X,Y,Z,\phi V)-(\nabla_{V}R)(X,Y,Z,\phi U)]\xi$
$\displaystyle+$ $\displaystyle\frac{1}{n-2}\left[(R(U,V)\cdot
S)(Y,Z)X-(R(U,V)\cdot S)(X,Z)Y\right.$ $\displaystyle+$
$\displaystyle\left.g(Y,Z)(R(U,V)\cdot Q)(X)-g(X,Z)(R(U,V)\cdot Q)(Y)\right]$
for all horizontal vector fields $X,Y,Z,U$ and $V$ on $M$. This leads to the
following:
###### Theorem 5.5.
A Sasakian manifold $M$ is locally conformally $\phi$-semisymmetric if and
only if the relation
$\displaystyle(R(U,V)\cdot R)(X,Y)Z$ $\displaystyle=$
$\displaystyle\left[(\nabla_{U}R)(X,Y,Z,\phi V)-(\nabla_{V}R)(X,Y,Z,\phi
U)\right]\xi$ $\displaystyle+$ $\displaystyle\frac{1}{n-2}\left[(R(U,V)\cdot
S)(Y,Z)X-(R(U,V)\cdot S)(X,Z)Y\right.$ $\displaystyle+$
$\displaystyle\left.g(Y,Z)(R(U,V)\cdot Q)(X)-g(X,Z)(R(U,V)\cdot Q)(Y)\right]$
holds for all horizontal vector fields $X,Y,Z,U$ and $V$ on $M$.
Let $M$ be a locally $\phi$-semisymmetric Sasakian manifold. Then $M$ is
locally Ricci $\phi$-semi-symmetric and thus in view of (3.18), it follows
from (5) that $\phi^{2}[(R(U,V)\cdot C)(X,Y)Z]=0$ for all horizontal vector
fields $X,Y,Z,U$ and $V$ on $M$. Hence the manifold $M$ is locally conformally
$\phi$-semisymmetric.
Conversely, suppose that $M$ is a locally conformally $\phi$-semisymmetric Sasakian manifold. If $M$ is locally $\phi$-semisymmetric, then from (5.5) it follows that $(R(U,V)\cdot S)(Y,Z)=0$, which implies that $(R(U,V)\cdot Q)(Y)=0$ for all horizontal vector fields $U$, $V$ and $Y$ on $M$, and hence by Theorem 5.1 the manifold $M$ is locally Ricci $\phi$-semisymmetric. Again, if $M$ is locally Ricci $\phi$-semisymmetric, then $(R(U,V)\cdot Q)(Y)=0$ for all horizontal vector fields $U$, $V$ and $Y$ on $M$, and hence by Theorem 3.3 it follows from (5.5) that the manifold $M$ is locally $\phi$-semisymmetric. This leads to the following:
###### Theorem 5.6.
A locally $\phi$-semisymmetric Sasakian manifold $M$ is locally conformally
$\phi$-semisymmetric. The converse is true if and only if the manifold $M$ is
locally Ricci $\phi$-semi-symmetric.
Now let $X,Y,Z,U$ and $V$ be any arbitrary vector fields on a Sasakian
manifold $M$. Then using (2.1), (2.10), (4.2) and (5.5) we obtain
$\displaystyle(R(\phi^{2}U,\phi^{2}V)\cdot R)(\phi^{2}X,\phi^{2}Y)\phi^{2}Z$
$\displaystyle=$ $\displaystyle[\\{(\nabla_{V}R)(X,Y,Z,\phi
U)-(\nabla_{U}R)(X,Y,Z,\phi V)\\}$ $\displaystyle-$
$\displaystyle\eta(X)H(Y,Z,U,V)+\eta(Y)H(X,Z,U,V)$ $\displaystyle+$
$\displaystyle 2\eta(Z)H(X,Y,U,V)]\xi$ $\displaystyle-$
$\displaystyle\frac{1}{n-2}\big{[}\\{(R(U,V)\cdot S)(Y,Z)X-(R(U,V)\cdot
S)(X,Z)Y\\}$ $\displaystyle-$ $\displaystyle\big{\\{}(R(U,V)\cdot
S)(Y,Z)\eta(X)-(R(U,V)\cdot S)(X,Z)\eta(Y)\big{\\}}\xi$ $\displaystyle-$
$\displaystyle\big{\\{}E(V,Z)\eta(U)-E(U,Z)\eta(V)\big{\\}}\\{\eta(Y)X-\eta(X)Y\\}$
$\displaystyle+$
$\displaystyle\big{\\{}E(Y,U)X-E(X,U)Y\big{\\}}\eta(Z)\eta(V)$
$\displaystyle-$
$\displaystyle\big{\\{}E(Y,V)X-E(X,V)Y\big{\\}}\eta(Z)\eta(U)\big{]}-\frac{1}{n-2}$
$\displaystyle\big{[}\\{g(Y,Z)(R(U,V).Q)(X)-g(X,Z)(R(U,V).Q)(Y)\\}$
$\displaystyle-$
$\displaystyle\\{\eta(Y)(R(U,V).Q)(X)-\eta(X)(R(U,V).Q)(Y)\\}\eta(Z)$
$\displaystyle+$
$\displaystyle\\{g(Y,Z)\eta(X)-g(X,Z)\eta(Y)\\}\\{\eta(V)\mathcal{E}U-\eta(U)\mathcal{E}V\\}$
$\displaystyle-$
$\displaystyle\big{\\{}E(X,V)\eta(U)-E(X,U)\eta(V)\big{\\}}g(Y,Z)\xi$
$\displaystyle+$
$\displaystyle\big{\\{}E(Y,V)\eta(U)-E(Y,U)\eta(V)\big{\\}}g(X,Z)\xi\big{]}$
where $g(\mathcal{E}U,V)=E(U,V)$ and $E$ is given by (5.5).
From (5) and (4) it follows that
$\displaystyle\ \ \ \ \ \ \ (R(U,V)\cdot R)(X,Y)Z$ $\displaystyle=$
$\displaystyle\big{[}\\{(\nabla_{U}R)(X,Y,Z,\phi V)-(\nabla_{V}R)(X,Y,Z,\phi
U)\\}$ $\displaystyle+$ $\displaystyle\eta(X)H(Y,Z,U,V)-\eta(Y)H(X,Z,U,V)$
$\displaystyle-$ $\displaystyle 2\eta(Z)H(X,Y,U,V)+\eta(U)H(X,Y,Z,V)$
$\displaystyle-$
$\displaystyle\eta(V)H(X,Y,Z,U)\big{]}\xi+\eta(U)\big{[}\eta(X)\mathcal{H}(V,Y)Z$
$\displaystyle+$
$\displaystyle\eta(Y)\mathcal{H}(X,V)Z+\eta(Z)\mathcal{H}(X,Y)V\big{]}$
$\displaystyle-$
$\displaystyle\eta(V)\big{[}\eta(X)\mathcal{H}(U,Y)Z+\eta(Y)\mathcal{H}(X,U)Z+\eta(Z)\mathcal{H}(X,Y)U\big{]}$
$\displaystyle+$ $\displaystyle\frac{1}{n-2}\big{[}\\{(R(U,V)\cdot
S)(Y,Z)X-(R(U,V)\cdot S)(X,Z)Y\\}$ $\displaystyle-$
$\displaystyle\big{\\{}(R(U,V)\cdot S)(Y,Z)\eta(X)-(R(U,V)\cdot
S)(X,Z)\eta(Y)\big{\\}}\xi$ $\displaystyle-$
$\displaystyle\big{\\{}E(V,Z)\eta(U)-E(U,Z)\eta(V)\big{\\}}\\{\eta(Y)X-\eta(X)Y\\}$
$\displaystyle+$
$\displaystyle\big{\\{}E(Y,U)X-E(X,U)Y\big{\\}}\eta(Z)\eta(V)$
$\displaystyle-$
$\displaystyle\big{\\{}E(Y,V)X-E(X,V)Y\big{\\}}\eta(Z)\eta(U)\big{]}$
$\displaystyle+$
$\displaystyle\frac{1}{n-2}\big{[}\\{g(Y,Z)(R(U,V).Q)(X)-g(X,Z)(R(U,V).Q)(Y)\\}$
$\displaystyle-$
$\displaystyle\\{\eta(Y)(R(U,V).Q)(X)-\eta(X)(R(U,V).Q)(Y)\\}\eta(Z)$
$\displaystyle+$
$\displaystyle\\{g(Y,Z)\eta(X)-g(X,Z)\eta(Y)\\}\\{\eta(V)\mathcal{E}U-\eta(U)\mathcal{E}V\\}$
$\displaystyle-$
$\displaystyle\big{\\{}E(X,V)\eta(U)-E(X,U)\eta(V)\big{\\}}g(Y,Z)\xi$
$\displaystyle+$
$\displaystyle\big{\\{}E(Y,V)\eta(U)-E(Y,U)\eta(V)\big{\\}}g(X,Z)\xi\big{]}$
where $H(X,Y,Z,U)=g(\mathcal{H}(X,Y)Z,U)$ and $g(\mathcal{E}U,V)=E(U,V)$,
$\mathcal{H}$ and $E$ are given by (4.5) and (5.5) respectively. Thus in a
locally conformally $\phi$-semisymmetric Sasakian manifold $M$ the relation
(5) holds for any arbitrary vector fields $X,Y,Z,U$ and $V$ on $M$. Next, if
the relation (5) holds in a Sasakian manifold $M$, then for all horizontal
vector fields $X,Y,Z,U$ and $V$ on $M$, we have (5.5), that is, the manifold
is locally conformally $\phi$-semisymmetric. This leads to the following:
###### Theorem 5.7.
A Sasakian manifold $M$ is locally conformally $\phi$-semisymmetric if and
only if the relation (5) holds for any arbitrary vector fields $X,Y,Z,U$ and
$V$ on $M$.
###### Corollary 5.2.
[5] A conformally semisymmetric Sasakian manifold is a manifold of constant
curvature 1.
###### Remark 5.1.
Since the skew-symmetric operator $R(X,Y)$ and the structure tensor $\phi$ of a Sasakian manifold both commute with contraction, it follows from Theorem 6.6(ii) of Shaikh and Kundu [16] that the conclusions of Theorems 5.5, 5.6 and 5.7 also hold for locally conharmonically $\phi$-semisymmetric Sasakian manifolds.
Again, as a linear combination of $R$, $S$ and $g$, Shaikh and Kundu [16] defined a generalized curvature tensor $B$ of type (1,3) (see equation (2.1) of [16]), called the $B$-tensor, which includes various curvature tensors as particular cases. Shaikh and Kundu (see equation (5.5) of [16]) showed that this $B$-tensor takes the following form:
$\displaystyle B(X,Y)Z$ $\displaystyle=$ $\displaystyle
b_{0}R(X,Y)Z+b_{1}\\{S(Y,Z)X-S(X,Z)Y$ $\displaystyle+$ $\displaystyle
g(Y,Z)QX-g(X,Z)QY\\}+b_{2}r\\{g(Y,Z)X-g(X,Z)Y\\}$
where $b_{0}$, $b_{1}$ and $b_{2}$ are scalars.
We note that if
(a) $b_{0}=1$, $b_{1}=0$ and $b_{2}=-\frac{1}{n(n-1)}$;
(b) $b_{0}=1$, $b_{1}=-\frac{1}{(n-2)}$ and $b_{2}=\frac{1}{(n-1)(n-2)}$;
(c) $b_{0}=1$, $b_{1}=-\frac{1}{(n-2)}$ and $b_{2}=0$;
and (d) $b_{2}=-\frac{1}{n}\big{(}\frac{b_{0}}{n-1}+2b_{1}\big{)}$,
then from (5) it follows that the $B$-tensor reduces to the (a) concircular, (b) conformal, (c) conharmonic and (d) quasi-conformal curvature tensor, respectively; case (b) is verified below. For details about the $B$-tensor we refer the reader to Shaikh and Kundu [16] and the references therein.
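For instance, in case (b), substituting $b_{0}=1$, $b_{1}=-\frac{1}{n-2}$ and $b_{2}=\frac{1}{(n-1)(n-2)}$ into the expression for $B$ gives
$B(X,Y)Z=R(X,Y)Z-\frac{1}{n-2}\\{S(Y,Z)X-S(X,Z)Y+g(Y,Z)QX-g(X,Z)QY\\}+\frac{r}{(n-1)(n-2)}\\{g(Y,Z)X-g(X,Z)Y\\},$
which is exactly the conformal curvature tensor $C(X,Y)Z$ displayed earlier in this section.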
###### Definition 5.3.
A Sasakian manifold $M$ is said to be locally $B$-$\phi$-semisymmetric if
the relation
(5.20) $\phi^{2}[(R(U,V)\cdot B)(X,Y)Z]=0$
holds for all horizontal vector fields $X,Y,Z,U$ and $V$ on $M$, $B$ being the
generalized curvature tensor of the manifold.
From (5) we get
$\displaystyle\ \ \ \ \ \ (R(U,V)\cdot B)(X,Y)Z$
$\displaystyle=b_{0}(R(U,V)\cdot R)(X,Y)Z+b_{1}\left[(R(U,V)\cdot
S)(Y,Z)X\right.$ $\displaystyle\left.-(R(U,V)\cdot S)(X,Z)Y+g(Y,Z)(R(U,V)\cdot
Q)(X)\right.$ $\displaystyle\left.-g(X,Z)(R(U,V)\cdot Q)(Y)\right].$
Applying $\phi^{2}$ on both sides of (5) and using (3) and (5.3) we obtain
$\displaystyle\ \ \ \ \ \ \phi^{2}[(R(U,V)\cdot B)(X,Y)Z]$ $\displaystyle=$
$\displaystyle-b_{0}[(R(U,V)\cdot R)(X,Y)Z$ $\displaystyle-$
$\displaystyle\left\\{(\nabla_{U}R)(X,Y,Z,\phi V)-(\nabla_{V}R)(X,Y,Z,\phi
U)\right\\}\xi]$ $\displaystyle-$ $\displaystyle b_{1}\left[(R(U,V)\cdot
S)(Y,Z)X-(R(U,V)\cdot S)(X,Z)Y\right.$ $\displaystyle+$
$\displaystyle\left.g(Y,Z)(R(U,V)\cdot Q)(X)-g(X,Z)(R(U,V)\cdot Q)(Y)\right]$
for all horizontal vector fields $X,Y,Z,U$ and $V$ on $M$. This leads to the
following:
###### Theorem 5.8.
A Sasakian manifold $M$ is locally $B$-$\phi$-semisymmetric if and only if
$\displaystyle(R(U,V)\cdot R)(X,Y)Z$ $\displaystyle=$
$\displaystyle\left[(\nabla_{U}R)(X,Y,Z,\phi V)-(\nabla_{V}R)(X,Y,Z,\phi
U)\right]\xi$ $\displaystyle-$
$\displaystyle\frac{b_{1}}{b_{0}}\left[(R(U,V)\cdot S)(Y,Z)X-(R(U,V)\cdot
S)(X,Z)Y\right.$ $\displaystyle\left.+g(Y,Z)(R(U,V)\cdot
Q)(X)-g(X,Z)(R(U,V)\cdot Q)(Y)\right]$
for all horizontal vector fields $X,Y,Z,U$ and $V$ on $M$, provided $b_{0}\neq
0$.
Now taking inner product of (5.8) with a horizontal vector field $W$ and then
contracting over $X$ and $W$, we get
(5.24) $\\{b_{0}+(n-2)b_{1}\\}(R(U,V)\cdot S)(Y,Z)=0$
for all horizontal vector fields $X,Y,Z,U$ and $V$ on $M$.
From (5.24) the following two cases arise:
Case-I: If $b_{0}+(n-2)b_{1}\neq 0$, then from (5.24) we have
(5.25) $(R(U,V)\cdot S)(Y,Z)=0$
for all horizontal vector fields $X,Y,Z,U$ and $V$ on $M$, from which it
follows that $(R(U,V)\cdot Q)(Y)=0$ for all horizontal vector fields $U$, $V$
and $Y$ on $M$. This leads to the following:
###### Theorem 5.9.
A locally $B$-$\phi$-semisymmetric Sasakian manifold $M$ is locally Ricci
$\phi$-semi-symmetric provided that $b_{0}+(n-2)b_{1}\neq 0$.
###### Corollary 5.3.
A locally concircularly $\phi$-semisymmetric Sasakian manifold $M$ is locally
Ricci $\phi$-semisymmetric.
###### Corollary 5.4.
A locally quasi-conformally $\phi$-semisymmetric Sasakian manifold $M$ is
locally Ricci $\phi$-semisymmetric provided that $b_{0}+(n-2)b_{1}\neq 0$.
Now if $b_{0}+(n-2)b_{1}\neq 0$, then in view of (5.25), (5.8) takes the form
(3.18) for all horizontal vector fields $X,Y,Z,U$ and $V$ on $M$ and hence the
manifold $M$ is locally $\phi$-semisymmetric. Again, if we consider the
manifold $M$ as locally $\phi$-semisymmetric, then the relation (3.18) holds
on $M$. Taking inner product of (3.18) with a horizontal vector field $W$ and
then contracting over $X$ and $W$, we get $(R(U,V)\cdot S)(Y,Z)=0$ for all
horizontal vector fields $U,V,Y$ and $Z$ on $M$ and hence from (5) it follows
that the manifold $M$ is locally $B$-$\phi$-semisymmetric. Thus we can state
the following:
###### Theorem 5.10.
In a Sasakian manifold $M$, local $B$-$\phi$-semisymmetry and local
$\phi$-semisymmetry are equivalent provided that $b_{0}+(n-2)b_{1}\neq 0$.
###### Corollary 5.5.
In a Sasakian manifold $M$, local concircular $\phi$-semisymmetry and local
$\phi$-semisymmetry are equivalent.
###### Corollary 5.6.
In a Sasakian manifold $M$, local quasi-conformal $\phi$-semisymmetry and
local $\phi$-semisymmetry are equivalent provided that $b_{0}+(n-2)b_{1}\neq
0$.
###### Remark 5.2.
Since the skew-symmetric operator $R(X,Y)$ and the structure tensor $\phi$ of a Sasakian manifold both commute with contraction, it follows from Theorem 6.6(i) of Shaikh and Kundu [16] that the conclusions of Corollary 5.3 and Corollary 5.5 also hold for locally projectively $\phi$-semisymmetric Sasakian manifolds, since contraction of the projective curvature tensor gives rise to the Ricci operator even though the projective curvature tensor is not a generalized curvature tensor.
Case-II: If $b_{0}+(n-2)b_{1}=0$, then from (5) we have
$\displaystyle\ \ \ \ \ \ \phi^{2}[(R(U,V)\cdot B)(X,Y)Z]$ $\displaystyle=$
$\displaystyle-b_{0}[(R(U,V)\cdot R)(X,Y)Z$ $\displaystyle-$
$\displaystyle\left\\{(\nabla_{U}R)(X,Y,Z,\phi V)-(\nabla_{V}R)(X,Y,Z,\phi
U)\right\\}\xi]$ $\displaystyle+$
$\displaystyle\frac{b_{0}}{n-2}\left[(R(U,V)\cdot S)(Y,Z)X-(R(U,V)\cdot
S)(X,Z)Y\right.$ $\displaystyle+$ $\displaystyle\left.g(Y,Z)(R(U,V)\cdot
Q)(X)-g(X,Z)(R(U,V)\cdot Q)(Y)\right]$
for all horizontal vector fields $X,Y,Z,U$ and $V$ on $M$. This leads to the
following:
###### Theorem 5.11.
A Sasakian manifold $M$ is locally $B$-$\phi$-semisymmetric if and only if
$\displaystyle(R(U,V)\cdot R)(X,Y)Z$ $\displaystyle=$
$\displaystyle\left[(\nabla_{U}R)(X,Y,Z,\phi V)-(\nabla_{V}R)(X,Y,Z,\phi
U)\right]\xi$ $\displaystyle+$ $\displaystyle\frac{1}{n-2}\left[(R(U,V)\cdot
S)(Y,Z)X-(R(U,V)\cdot S)(X,Z)Y\right.$ $\displaystyle\left.+g(Y,Z)(R(U,V)\cdot
Q)(X)-g(X,Z)(R(U,V)\cdot Q)(Y)\right]$
for all horizontal vector fields $X,Y,Z,U$ and $V$ on $M$ provided that
$b_{0}+(n-2)b_{1}=0$.
###### Corollary 5.7.
A Sasakian manifold $M$ is locally conformally (resp. conharmonically)
$\phi$-semisymmetric if and only if the relation (5.11) holds.
###### Corollary 5.8.
A Sasakian manifold $M$ is locally quasi-conformally $\phi$-semisymmetric if
and only if the relation (5.11) holds provided that $b_{0}+(n-2)b_{1}=0$.
Let $M$ be a locally $\phi$-semisymmetric Sasakian manifold. Then $M$ is
locally Ricci $\phi$-semi-symmetric and thus in view of (3.18), it follows
from (5) that $\phi^{2}[(R(U,V)\cdot B)(X,Y)Z]=0$ for all horizontal vector
fields $X,Y,Z,U$ and $V$ on $M$. Hence the manifold $M$ is locally
$B$-$\phi$-semisymmetric.
Again, we consider $M$ as the locally $B$-$\phi$-semisymmetric Sasakian
manifold. If $b_{0}+(n-2)b_{1}\neq 0$, then $M$ is locally
$\phi$-semisymmetric. So we suppose that $b_{0}+(n-2)b_{1}=0$. If $M$ is
locally $\phi$-semisymmetric, then from (5.11) it follows that $(R(U,V)\cdot
S)(Y,Z)=0$, which implies that $(R(U,V)\cdot Q)(Y)=0$ for all horizontal
vector fields $U$, $V$ and $Y$ on $M$. Thus in view of Theorem 5.1, the
manifold $M$ is locally Ricci $\phi$-semisymmetric. Again, if $M$ is locally
Ricci $\phi$-semisymmetric, then $(R(U,V)\cdot Q)(Y)=0$ for all horizontal
vector fields $U$, $V$ and $Y$ on $M$. Thus in view of Theorem 3.3, it follows
from (5.11) that the manifold $M$ is locally $\phi$-semisymmetric. This leads
to the following:
###### Theorem 5.12.
A locally $\phi$-semisymmetric Sasakian manifold $M$ is locally
$B$-$\phi$-semisymmetric. The converse is true for $b_{0}+(n-2)b_{1}=0$ if and
only if the manifold $M$ is locally Ricci $\phi$-semisymmetric.
If $X,Y,Z,U$ and $V$ are arbitrary vector fields on $M$, then, proceeding similarly to the case of the conformal curvature tensor, it is easy to check that (5) holds for $b_{0}+(n-2)b_{1}=0$. Hence we can state the following:
###### Theorem 5.13.
A Sasakian manifold $M$ is locally $B$-$\phi$-semisymmetric if and only if the
relation (5) holds for any arbitrary vector fields $X,Y,Z,U$ and $V$ on $M$
provided that $b_{0}+(n-2)b_{1}=0$.
###### Corollary 5.9.
A Sasakian manifold $M$ is locally conformally (resp. conharmonically)
$\phi$-semisymmetric if and only if the relation (5) holds for any arbitrary
vector fields $X,Y,Z,U$ and $V$ on $M$.
###### Corollary 5.10.
A Sasakian manifold $M$ is locally quasi-conformally $\phi$-semisymmetric if
and only if the relation (5) holds for any arbitrary vector fields $X,Y,Z,U$
and $V$ on $M$ provided that $b_{0}+(n-2)b_{1}=0$.
Conclusion. From the above discussion and results, we conclude that the study of local $\phi$-semisymmetry is meaningful as a generalized notion of local $\phi$-symmetry and semisymmetry. From Theorem 6.6 and Corollary 6.2 of Shaikh and Kundu [16] we also conclude that the same characterization of local $\phi$-semisymmetry of a Sasakian manifold holds for locally projectively $\phi$-semisymmetric and locally concircularly $\phi$-semisymmetric Sasakian manifolds, since contraction of the projective or concircular curvature tensor gives rise to the Ricci operator. Also, from Theorem 6.6 and Corollary 6.2 of Shaikh and Kundu [16] we conclude that local conformal $\phi$-semisymmetry and local conharmonic $\phi$-semisymmetry on a Sasakian manifold are equivalent. However, the study of local $\phi$-semisymmetry and local conformal $\phi$-semisymmetry is meaningful, as these two notions are not equivalent. Finally, we conclude that the study of local $\phi$-semisymmetry on a Sasakian manifold with respect to any other generalized curvature tensor of type (1,3) (a linear combination of $R$, $S$ and $g$) is either meaningless or redundant due to their equivalence.
## References
* [1] Blair, D. E., _Contact manifolds in Riemannian geometry_ , Lecture Notes in Math. 509, Springer-Verlag, 1976.
* [2] Cartan, É., _Sur une classe remarquable d'espaces de Riemann, I_ , Bull. de la Soc. Math. de France, 54 (1926), 214-216.
* [3] Cartan, É., _Sur une classe remarquable d'espaces de Riemann, II_ , Bull. de la Soc. Math. de France, 55 (1927), 114-134.
* [4] Cartan, É., _Leçons sur la géométrie des espaces de Riemann_ , 2nd ed., Paris, 1946.
* [5] Chaki, M. C. and Tarafdar, M., _On a type of Sasakian manifold_ , Soochow J. Math., 16 (1990), 23-28.
* [6] De, U. C., Shaikh, A. A. and Biswas, S., _On $\phi$-recurrent Sasakian manifolds_, Novi Sad J. of Math., 33(2) (2003), 43-48.
* [7] Ogiue, K., _On fiberings of almost contact manifolds_ , Kodai Math. Sem. Rep., 17 (1965), 53-62.
* [8] Shaikh, A. A. and Baishya K. K., _On $\phi$-symmetric LP-Sasakian manifolds_, Yokohama Math. J., 52 (2005), 97-112.
* [9] Shaikh A. A., Baishya, K. K. and Eyasmin, S., _On $\phi$-recurrent generalized $(k,\mu)$-contact metric manifolds_, Lobachevski J. Math., 27 (2007), 3-13.
* [10] Shaikh A. A., Baishya, K. K. and Eyasmin, S., _On the existence of some types of LP-Sasakian manifolds_ , Commun. Korean Math. Soc., 23(1) (2008), 1-16.
* [11] Shaikh A. A., Basu, T. and Eyasmin, S., _On locally $\phi$-symmetric $(LCS)_{n}$-manifolds_, Int. J. of Pure and Appl. Math., 41(8) (2007), 1161-1170.
* [12] Shaikh A. A., Basu, T. and Eyasmin, S., _On the existence of $\phi$-recurrent $(LCS)_{n}$-manifolds_, Extracta Mathematica, 23(1) (2008), 71-83.
* [13] Shaikh, A. A. and De, U. C., _On 3-dimensional LP-Sasakian manifolds_ , Soochow J. Math., 26(4) (2000), 359-368.
* [14] Shaikh, A. A. and Hui, S. K., _On locally $\phi$-symmetric $\beta$-Kenmotsu manifolds_, Extracta Mathematica, 24(3) (2010), 301-316.
* [15] Shaikh, A. A. and Hui, S. K., _On extended $\phi$-recurrent $\beta$-Kenmotsu manifolds_, Publi. de l’ Inst. Math., Nouvelle serie, 89(103) (2011), 77-88.
* [16] Shaikh, A. A. and Kundu, H., _On equivalency of various geometric structures_ , arXiv: 1301.7214v2 [math.DG] 7 Feb 2013.
* [17] Szabó, Z. I., _Structure theorems on Riemannian spaces satisfying $R(X,Y)\cdot R=0$, I_, The local version, J. Diff. Geom., 17 (1982), 531-582.
* [18] Szabó, Z. I., _Structure theorems on Riemannian spaces satisfying $R(X,Y)\cdot R=0$, II_, Global version, Geom. Dedicata, 19 (1983), 65-108.
* [19] Szabó, Z. I., _Classification and construction of complete hypersurfaces satisfying $R(X,Y)\cdot R=0$_, Acta. Sci. Math., 47 (1984), 321-348.
* [20] Takahashi, T., _Sasakian $\phi$-symmetric spaces_, Tohoku Math. J. , 29 (1977), 91-113.
* [21] Tanno, S., _Isometric immersions of Sasakian manifold in spheres_ , Kodai Math. Sem. Rep., 21 (1969), 448-458.
* [22] Yano, K. and Kon, M., _Structures on manifolds_ , World Scientific Publ., Singapore, 1984.
* [23] Weyl, H., _Reine Infinitesimalgeometrie_ , Math. Zeitschrift, 2 (1918), 384-411.
|
arxiv-papers
| 2013-02-08T20:43:41 |
2024-09-04T02:49:41.530101
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"authors": "Absos Ali Shaikh, Chandan Kumar Mondal and Helaluddin Ahmad",
"submitter": "Absos Ali Shaikh Absos",
"url": "https://arxiv.org/abs/1302.2139"
}
|
1302.2189
|
# Periods of Jacobi forms and Hecke operator
YoungJu Choie Department of Mathematics and PMI
Pohang University of Science and Technology
Pohang, 790–784, Korea [email protected] and Seokho Jin Department of
Mathematics and PMI
Pohang University of Science and Technology
Pohang, 790–784, Korea [email protected]
###### Abstract.
A Hecke action on the space of periods of cusp forms, compatible with that on the space of cusp forms, was first computed using continued fractions [19], and an explicit algebraic formula for Hecke operators acting on the space of period functions of modular forms was derived by studying rational period functions [8]. As an application, an elementary proof of the Eichler-Selberg trace formula was derived [26]. A similar modification has been applied to the period space of Maass cusp forms with spectral parameter $s$ [21, 22, 20]. In this paper we study the space of period functions of Jacobi forms by means of Jacobi integrals and give an explicit description of the Hecke operators acting on this space. The Jacobi Eisenstein series $E_{2,1}(\tau,z)$ of weight $2$ and index $1$ is discussed as an example. Periods of Jacobi integrals already appeared in disguised form in the work of Zwegers on the Mordell integral coming from Lerch sums [27], and mock Jacobi forms are typical examples of Jacobi integrals [9].
Keywords: Jacobi form, Hecke operator, period
1991 Mathematics Subject Classification: 11F50, 11F37, 11F67
This work was partially supported by NRF 2012047640, NRF 2011-0008928 and NRF
2008-0061325
## 1\. Introduction
Period functions of modular forms have played an important role in understanding the arithmetic of cusp forms [17]. Manin [19] studied a Hecke action on the space of periods of cusp forms, compatible with that on the space of cusp forms, in terms of continued fractions. Later, an explicit algebraic description of a Hecke operator on the space of period functions of modular forms was given by studying rational period functions [8]. As an application, a new elementary proof of the Eichler-Selberg trace formula was derived [26]. Moreover, various modifications of period theory have also been developed. The notion of rational period functions was introduced and completely classified in the case of cofinite subgroups of $SL_{2}(\mathbb{Z})$ [14, 16, 15, 1]. Period functions were attached bijectively to Maass cusp forms according to the spectral parameter $s$, and their cohomological counterpart was described (see [18, 2]). A similar modification of Hecke operators on the period space has also been applied to that of Maass cusp forms with spectral parameter $s$ [21, 22, 20].
Historically, Eichler [10] and Shimura [23] discovered an isomorphism between a space of cusp forms and an Eichler cohomology group by attaching period polynomials to cusp forms. The notion of an Eichler integral of arbitrary real weight with multiplier system was introduced, and it was shown that there is an isomorphism between the space of modular forms and an Eichler cohomology group [14]. Along this line, a notion of Jacobi integral has recently been introduced [7], and it was shown that there is also an isomorphism between the space of Jacobi cusp forms and the corresponding Eichler cohomology group [3]. Mock Jacobi forms [9] are typical examples of Jacobi integrals.
The main purpose of this paper is to give an explicit algebraic description of the Hecke operators on the period functions attached to Jacobi integrals. This is an analogue, for Jacobi forms, of the result of [8].
This paper is organized as follows: in section $2$ we state the main results, and section $3$ discusses the Jacobi Eisenstein series $E_{2,1}(\tau,z)$ as an example of a Jacobi integral with period functions. Basic definitions and notations are given in section $4$, and section $5$ gives proofs of the main theorems by introducing various properties which are modified from the results on rational period functions [8].
## 2\. Statement of Main Results
Let $f$ be an element of $J_{k,m}^{\int}(\Gamma(1)),$ that is, $f$ is a real
analytic function $f:\mathbb{H}\times\mathbb{C}\rightarrow\mathbb{C}$
satisfying a certain growth condition with the following functional equation,
(2.1)
$(f|_{k,m}\gamma)(\tau,z)=f(\tau,z)+P_{\gamma}(\tau,z),\forall\gamma\in\Gamma(1)^{J},$
with $P_{\gamma}\in\mathcal{P}_{m},$ where $\mathcal{P}_{m}$ is a set of
holomorphic functions
(2.2)
$\mathcal{P}_{m}=\\{g:\mathcal{H}\times\mathbb{C}\rightarrow\mathbb{C}|\,|g(\tau,z)|<K(|\tau|^{\rho}+v^{-\sigma})e^{2\pi
m\frac{y^{2}}{v}},\ \mbox{ for some $K,\rho,\sigma>0$}\\}$
( $v=Im(\tau)$ and $y=Im(z)$ ).
$P_{\gamma}$ in (2.1) is called a period function of $f.$ If
$P_{\gamma}(\tau,z)=0$ for all $\gamma\in\Gamma(1)^{J},$ then $f$ is a usual Jacobi form (see [11]). It turns out that each element of the following set
(2.3)
$\mathcal{P}er_{k,m}:=\\{P:\mathcal{H}\times\mathbb{C}\rightarrow\mathbb{C}\,|\,\sum_{j=0}^{3}P|_{k,m}T^{j}=\sum_{j=0}^{5}P|_{k,m}U^{j}=0\\}\
$
( $T=[\left(\begin{smallmatrix}0&-1\\\
1&0\end{smallmatrix}\right),(1,0)],S=[\left(\begin{smallmatrix}1&1\\\
0&1\end{smallmatrix}\right),(0,0)],$ $U=ST$)
generates a system of period functions
$\\{P_{\gamma}|\gamma\in\Gamma(1)^{J}\\}$ of $f\in J_{k,m}^{\int}(\Gamma(1)).$
We also consider the following subspace:
(2.4) $EJ^{\int}_{k,m}(\Gamma(1)):=\\{f\in
J_{k,m}^{\int}(\Gamma(1))|\,f|_{k,m}[I,(1,0)]=f\\}.$
It turns out that the following set
(2.5)
$\mathcal{EP}er_{k,m}:=\\{P:\mathcal{H}\times\mathbb{C}\rightarrow\mathbb{C}\,|\,\sum_{j=0}^{3}P|_{k,m}T^{j}=\sum_{j=0}^{5}P|_{k,m}U^{j}=P-P|_{k,m}[-I,(1,0)]=0\\}.$
is a generating set of a system of period functions
$\\{P_{\gamma}|\,\gamma\in\Gamma(1)^{J}\\}$ of $f\in
EJ_{k,m}^{\int}(\Gamma(1)).$
For each positive integer $n,$ define two Hecke operators
$\mathcal{V}_{n}^{\infty}$ and $\mathcal{T}_{n}^{\infty}$ by
(2.6)
$\left(f|_{k,m}\mathcal{V}_{n}^{\infty}\right)(\tau,z):=n^{k-1}\sum_{ad=n,a>0\atop
b\;(\text{mod}\;d)}d^{-k}f\left(\frac{a\tau+b}{d},az\right)$
(2.7)
$(f|_{k,m}\mathcal{T}_{n}^{\infty})(\tau,z):=n^{k-4}\sum_{\begin{array}[]{cc}ad=n^{2},\gcd(a,b,d)=\square\atop
a,d>0,b\;(\text{mod}\;d),\small{X,Y\in\mathbb{Z}/n\mathbb{Z}}\end{array}}(f|_{k,m}[\left(\begin{smallmatrix}a&b\\\
0&d\end{smallmatrix}\right),(X,Y)])(\tau,z),$
Then
###### Theorem 2.1.
1. (1)
$\left(f|_{k,m}\mathcal{V}_{n}^{\infty}\right)\in J_{k,mn}^{\int}(\Gamma(1))$
if $f\in J_{k,m}^{\int}(\Gamma(1)).$
2. (2)
Let $f|_{k,m}T=f+P_{T}$ and
$(f|_{k,m}\mathcal{V}_{n}^{\infty})|_{k,mn}T=(f|_{k,m}\mathcal{V}_{n}^{\infty})+\hat{P_{T}}.$
Then
$\hat{P_{T}}=n^{\frac{k}{2}-1}\biggl{(}{P}_{T}|_{k,m}\biggl{[}\Bigl{(}\begin{matrix}\frac{1}{\sqrt{n}}&0\\\
0&\frac{1}{\sqrt{n}}\end{matrix}\Bigr{)},(0,0)\biggr{]}\biggr{)}|_{k,m}\tilde{\mathcal{V}}_{n},$
where
$\displaystyle\tilde{\mathcal{V}}_{n}$ $\displaystyle=$
$\displaystyle\sum_{ad-bc=n\atop
a>c>0,d>-b>0}\biggl{\\{}\biggl{[}\Bigl{(}\begin{matrix}a&b\\\
c&d\end{matrix}\Bigr{)},(0,0)\biggr{]}+\biggl{[}\Bigl{(}\begin{matrix}a&-b\\\
-c&d\end{matrix}\Bigr{)},(0,0)\biggr{]}\biggr{\\}}$
$\displaystyle+\sum_{ad=n\atop-\frac{1}{2}d<b\leq\frac{1}{2}d}\biggl{[}\Bigl{(}\begin{matrix}a&b\\\
0&d\end{matrix}\Bigr{)},(0,0)\biggr{]}+\sum_{ad=n\atop-\frac{1}{2}a<c\leq\frac{1}{2}a,c\neq
0}\biggl{[}\Bigl{(}\begin{matrix}a&0\\\
c&d\end{matrix}\Bigr{)},(0,0)\biggr{]}.$
###### Theorem 2.2.
1. (1)
If $f\in EJ_{k,m}^{\int}(\Gamma(1)),$ then
$f|_{k,m}\mathcal{T}_{n}^{\infty}\in EJ_{k,m}^{\int}(\Gamma(1)).$
2. (2)
Let $f|_{k,m}T=f+P_{T}$ and
$(f|_{k,m}\mathcal{T}_{n}^{\infty})|_{k,m}T=(f|_{k,m}\mathcal{T}_{n}^{\infty})+\tilde{P_{T}}.$
Then
$\tilde{P_{T}}=n^{k-4}P_{T}|_{k,m}\tilde{\mathcal{T}_{n}},$
where
$\displaystyle\tilde{\mathcal{T}}_{n}=\sum_{\tiny{\begin{array}[]{ccc}ad-
bc=n^{2},gcd(a,b,c,d)=\square\\\ a>c>0,d>-b>0\\\
X,Y\in{\mathbb{Z}}/n{\mathbb{Z}}\end{array}}}\left\\{[\left(\begin{smallmatrix}a&b\\\
c&d\end{smallmatrix}\right),(X,Y)]+[\left(\begin{smallmatrix}a&-b\\\
-c&d\end{smallmatrix}\right),(X,Y)]\right\\}$
$\displaystyle+\sum_{\tiny{\begin{array}[]{ccc}ad=n^{2},gcd(a,b,d)=\square\\\
-\frac{1}{2}d<b\leq\frac{1}{2}d\\\
X,Y\in{\mathbb{Z}}/n{\mathbb{Z}}\end{array}}}[\left(\begin{smallmatrix}a&b\\\
0&d\end{smallmatrix}\right),(X,Y)]+\sum_{\tiny{\begin{array}[]{ccc}ad=n^{2},gcd(a,c,d)=\square\\\
-\frac{1}{2}a<c\leq\frac{1}{2}a,c\neq 0\\\
X,Y\in{\mathbb{Z}}/n{\mathbb{Z}}\end{array}}}[\left(\begin{smallmatrix}a&0\\\
c&d\end{smallmatrix}\right),(X,Y)].$
###### Remark 2.3.
1. (1)
There are Hecke operators acting on the space of Jacobi forms [11]. One needs
to choose a special set of representatives to apply Hecke operators to Jacobi
integral $f$ since $f$ is not $\Gamma(1)^{J}$-invariant.
2. (2)
Note that $\mathcal{T}_{n}^{\infty}$ acts only on the subspace
$EJ_{k,m}^{\int}(\Gamma(1))$ (see section 5 for details).
3. (3)
It is shown that there is an isomorphism between the space of Jacobi cusp
forms and an Eichler cohomology group with some coefficient module (there denoted
by $\mathcal{P}_{\mathcal{M}}^{e}$) corresponding to period functions for
$f\in EJ_{k,m}^{\int}(\Gamma(1)).$ Hence the Hecke operator
$\tilde{\mathcal{T}_{n}}$ can also be regarded as a Hecke operator on the
Eichler cohomology group.
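For orientation, we record a specialization of (2.6) (a direct evaluation, stated here for the reader's convenience): if $n=p$ is a prime, the only divisor pairs are $(a,d)=(p,1)$ and $(1,p)$, so (2.6) becomes
$\left(f|_{k,m}\mathcal{V}_{p}^{\infty}\right)(\tau,z)=p^{k-1}f(p\tau,pz)+\frac{1}{p}\sum_{b\;(\text{mod}\;p)}f\left(\frac{\tau+b}{p},z\right),$
which is the familiar shape of the index-raising operator $V_{p}$ on Jacobi forms (cf. [11]).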
## 3\. Example of Jacobi Integral
It is well known that the Eisenstein series $E_{2}(\tau):=1-24\sum_{n\geq 1}(\sum_{0<d|n}d)q^{n}$ is not a modular form on $\Gamma(1)$, but it is a modular integral (see [16]). Similarly, the Jacobi Eisenstein series $E_{2,1}(\tau,z)$ is not a Jacobi form on $\Gamma(1)$, but it is a Jacobi integral, as we explain below.
The following Jacobi-Eisenstein series was studied in [5]:
$E_{2,1}(\tau,z)=-12\sum_{n,r\in\mathbb{Z},r^{2}\leq
4n}H(4n-r^{2})q^{n}\zeta^{r},q=e^{2\pi i\tau},\zeta=e^{2\pi iz},$
where $H(n)$ denotes the Hurwitz class number of quadratic forms of discriminant $-n$, with $H(0)=-\frac{1}{12}.$
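For illustration, using the standard Hurwitz class number values $H(3)=\frac{1}{3}$ and $H(4)=\frac{1}{2}$ (quoted here for convenience, not taken from [5]), the expansion begins
$E_{2,1}(\tau,z)=1+\left(\zeta^{2}-4\zeta-6-4\zeta^{-1}+\zeta^{-2}\right)q+\cdots.$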
The theta decomposition of $E_{2,1}(\tau,z)$ is
$E_{2,1}(\tau,z)=-12\cdot{\binom{\mathcal{H}_{0}(\tau)}{\mathcal{H}_{1}(\tau)}}^{t}\cdot\binom{\vartheta_{1,0}(\tau,z)}{\vartheta_{1,1}(\tau,z)},$
where
$\mathcal{H}_{\mu}(\tau):=\sum_{N\geq
0,N\equiv-\mu^{2}\;(\text{mod}\;4)}H(N)q^{\frac{N}{4}},\
\vartheta_{1,\mu}(\tau,z)=\sum_{r\in\mathbb{Z}\atop
r\equiv\mu\;(\text{mod}\;2)}q^{\frac{r^{2}}{4}}\zeta^{r}.$
###### Remark 3.1.
1. (1)
$E_{2,1}(\tau,z)$ was defined in [5] as the holomorphic part of
$E_{2,1}^{*}(\tau,z;s)=\frac{1}{2}\sum_{c,d\in\mathbb{Z},gcd(c,d)=1}\sum_{\lambda\in\mathbb{Z}}\frac{e^{2\pi
i(\lambda^{2}\frac{a\tau+b}{c\tau+d}+2\lambda\frac{z}{c\tau+d}-\frac{cz^{2}}{c\tau+d})}}{(c\tau+d)^{2}|c\tau+d|^{2s}}$
at $s=0$.
2. (2)
A correspondence among $E_{2}(\tau),E_{2,1}(\tau,z)$ and
$\mathcal{H}_{\frac{3}{2}}(\tau):=\sum_{n\geq 0}H(n)q^{n}$ was studied in [5].
3. (3)
$\mathcal{H}_{\frac{3}{2}}(\tau)=\mathcal{H}_{0}(4\tau)+\mathcal{H}_{1}(4\tau)$
since $H(N)=0$ unless $N\equiv 0,3\;(\text{mod}\;4)$.
The following function $\mathcal{F}$ transforms like a modular form of weight
$\frac{3}{2}$ on the group $\Gamma_{0}(4)$ (see [13], pp. 91-92):
$\mathcal{F}(\tau)=\mathcal{H}_{\frac{3}{2}}(\tau)+v^{-\frac{1}{2}}\sum_{\ell=-\infty}^{\infty}\beta(4\pi
l^{2}v)q^{-\ell^{2}},$
where
$\beta(x):=\frac{1}{16\pi}\int_{1}^{\infty}u^{-\frac{3}{2}}e^{-xu}\,du\ (x\geq 0)$ and
$\displaystyle\binom{v^{-\frac{1}{2}}\sum_{l\equiv
0\;(\text{mod}\;2)}\beta(\pi
l^{2}v)q^{-\frac{l^{2}}{4}}}{v^{-\frac{1}{2}}\sum_{l\equiv
1\;(\text{mod}\;2)}\beta(\pi
l^{2}v)q^{-\frac{l^{2}}{4}}}=\frac{1+i}{16\pi}\binom{\int_{-\bar{\tau}}^{i\infty}(t+\tau)^{-\frac{3}{2}}\vartheta_{1,0}(t,0)dt}{\int_{-\bar{\tau}}^{i\infty}(t+\tau)^{-\frac{3}{2}}\vartheta_{1,1}(t,0)dt}.$
Consider a function
$\varphi(\tau,z):=\mathcal{F}_{0}(\tau)\vartheta_{1,0}(\tau,z)+\mathcal{F}_{1}(\tau)\vartheta_{1,1}(\tau,z)=\binom{\mathcal{F}_{0}(\tau)}{\mathcal{F}_{1}(\tau)}^{t}\binom{\vartheta_{1,0}(\tau,z)}{\vartheta_{1,1}(\tau,z)},$
with
$\displaystyle\mathcal{F}_{\mu}(\tau)=\mathcal{H}_{\mu}(\tau)+v^{-\frac{1}{2}}\sum_{\ell\equiv\mu\;(\text{mod}\;2)}\beta(\pi\ell^{2}v)q^{-\frac{\ell^{2}}{4}},\mu=0,1.$
It is easy to check that $\varphi(\tau,z)$ transforms like a Jacobi form for
$\Gamma(1)^{J}$ of weight $2$ and index $1$ and so
$\varphi\big{|}_{2,1}T=\varphi.$ On the other hand, following the computation
in [13], page 92 we see
$\biggl{\\{}\frac{1+i}{16\pi}\binom{\int_{-\bar{\tau}}^{i\infty}(t+\tau)^{-\frac{3}{2}}\vartheta_{1,0}(t,0)dt}{\int_{-\bar{\tau}}^{i\infty}(t+\tau)^{-\frac{3}{2}}\vartheta_{1,1}(t,0)dt}^{t}\cdot\binom{\vartheta_{1,0}}{\vartheta_{1,1}}\bigg{|}_{2,1}(T-E)\biggr{\\}}(\tau,z)$
$=\frac{1+i}{16}\binom{\int_{i\infty}^{0}(\tau+w)^{-\frac{3}{2}}\vartheta_{1,0}(w,0)dw}{\int_{i\infty}^{0}(\tau+w)^{-\frac{3}{2}}\vartheta_{1,1}(w,0)dw}^{t}\binom{\vartheta_{1,0}(\tau,z)}{\vartheta_{1,1}(\tau,z)},$
using the transformation formula (see, for example, [6]) of the theta series $\vartheta_{1,\mu}(\tau,z).$ So we conclude that
$(E_{2,1}|_{2,1}T)(\tau,z)=E_{2,1}(\tau,z)+P_{E_{2,1},T}(\tau,z),$
where
$P_{E_{2,1},T}(\tau,z)=\frac{1+i}{16}\binom{\int^{i\infty}_{0}(\tau+w)^{-\frac{3}{2}}\vartheta_{1,0}(w,0)dw}{\int^{i\infty}_{0}(\tau+w)^{-\frac{3}{2}}\vartheta_{1,1}(w,0)dw}^{t}\cdot\binom{\vartheta_{1,0}(\tau,z)}{\vartheta_{1,1}(\tau,z)}.$
Next, for each prime $p,$ let
$(E_{2}|_{2}\mathcal{T}_{p,2}^{\infty})(\tau):=p\sum_{ad=p,d>0\atop
b\;(\text{mod}\;d)}d^{-4}E_{2}(\frac{a\tau+b}{d}),$
$(\mathcal{H}_{\frac{3}{2}}|_{\frac{3}{2}}\mathcal{T}_{p,\frac{3}{2}}^{\infty})(\tau):=\sum_{N\equiv
0,3\;(\text{mod}\;4)}\left(H(Np^{2})+\left(\frac{-N}{p}\right)H(N)+pH\left(\frac{N}{p^{2}}\right)\right)q^{N},$
and
$(E_{2,1}|_{2,1}\mathcal{T}_{p}^{\infty})(\tau,z):=p^{-2}\sum_{ad=p^{2},a,d>0\atop
b\;(\text{mod}\;d),\gcd{(a,b,d)}=\square}\sum_{\lambda,\mu\in\mathbb{Z}/p\mathbb{Z}}(E_{2,1}|_{2,1}[\bigl{(}\begin{smallmatrix}a&b\\\
0&d\end{smallmatrix}\bigr{)},(\lambda,\mu)])(\tau,z).$
It can be directly checked that the following diagram commutes:
where $\varphi(\sum_{n\geq
0}c(n)q^{n}):=1-\frac{24}{L(0,\left(\frac{D}{\cdot}\right))}\sum_{n\geq
1}\sum_{d|n}\left(\frac{D}{d}\right)c\left(\frac{n^{2}}{d^{2}}|D|\right)q^{n}$
with a fundamental discriminant $D$ and $\psi(\sum_{n\geq
0}c(n)q^{n}):=-12\sum_{n,r\in\mathbb{Z},r^{2}\leq
4n}c(4n-r^{2})q^{n}\zeta^{r}$ (see [5], Theorem 3.2).
###### Remark 3.2.
There is a one-to-one correspondence, which is Hecke equivariant, among the
space of modular forms of weight $2k-2$($k$ even) on $\Gamma(1)$, the Kohnen
plus space of weight $k-\frac{1}{2}$ on $\Gamma_{0}(4)$ and the space of
Jacobi forms of weight $k$ and index $1$ on $\Gamma(1)^{J}.$ The above diagram shows that this correspondence extends to the more general case of $E_{2}(\tau)$, $\mathcal{H}_{\frac{3}{2}}(\tau)$ and $E_{2,1}(\tau,z).$
The above commutative diagram implies that
$E_{2,1}|_{2,1}\mathcal{T}_{p}^{\infty}=(p+1)E_{2,1}$ since
$E_{2}|_{2}\mathcal{T}_{p,2}^{\infty}=(p+1)E_{2}$ (also see Knopp[16]). In
summary we have shown the following:
###### Proposition 3.3.
1. (1)
$(E_{2,1}|_{2,1}T)(\tau,z)=E_{2,1}(\tau,z)+P_{E_{2,1},T}(\tau,z),$ where
$P_{E_{2,1},T}(\tau,z)=\frac{1+i}{16}\binom{\int^{i\infty}_{0}(\tau+w)^{-\frac{3}{2}}\vartheta_{1,0}(w,0)dw}{\int^{i\infty}_{0}(\tau+w)^{-\frac{3}{2}}\vartheta_{1,1}(w,0)dw}^{t}\cdot\binom{\vartheta_{1,0}(\tau,z)}{\vartheta_{1,1}(\tau,z)}.$
2. (2)
For each prime $p$, let
$(E_{2,1}|_{2,1}\mathcal{T}_{p}^{\infty}|_{2,1}T)(\tau,z)=E_{2,1}(\tau,z)+\tilde{P}_{E_{2,1},T}(\tau,z).$
Then
$\tilde{P}_{E_{2,1},T}(\tau,z)=p^{-2}P_{E_{2,1},T}|_{2,1}\tilde{\mathcal{T}}_{p}=(p+1)\cdot
P_{E_{2,1},T}(\tau,z).$
## 4\. Definitions and Notations
Let $\mathcal{H}$ be the usual complex upper half plane,
$\tau\in\mathcal{H},z\in\mathbb{C}$ and
$\tau=u+iv,z=x+iy,u,v,x,y\in\mathbb{R}.$ Take $k,m\in\mathbb{Z}.$ Let
$\Gamma(1)^{J}:=\Gamma(1)\ltimes\mathbb{Z}^{2}=\\{[M,(\lambda,\mu)]|M\in\Gamma(1),\lambda,\mu\in\mathbb{Z}\\},(\Gamma(1)=SL_{2}(\mathbb{Z}))$
be the full Jacobi group with a group law
$[M_{1},(\lambda_{1},\mu_{1})][M_{2},(\lambda_{2},\mu_{2})]=[M_{1}M_{2},(\lambda_{1},\mu_{1})M_{2}+(\lambda_{2},\mu_{2})].$
Let us introduce the following elements in $\Gamma(1)^{J}$:
$S=[\left(\begin{smallmatrix}1&1\\\
0&1\end{smallmatrix}\right),(0,0)],T=[\left(\begin{smallmatrix}0&-1\\\
1&0\end{smallmatrix}\right),(1,0)],T_{0}=[\left(\begin{smallmatrix}0&-1\\\
1&0\end{smallmatrix}\right),(0,0)],U=ST=[\left(\begin{smallmatrix}1&-1\\\
1&0\end{smallmatrix}\right),(1,0)],$
$I_{2}=[I,(1,0)],I_{1}=[I,(0,1)],E=[I,(0,0)].$
It is known [4] that $\Gamma(1)^{J}$ is generated by $S$ and $T$. Also, $\Gamma(1)^{J}$ is generated by $T$ and $U$, and these generators satisfy the relations $T^{4}=U^{6}=E,$ $UT^{2}=T^{2}U[I,(-1,0)]=T^{2}[I,(0,-1)]U=[I,(0,1)]T^{2}U,$ which are the defining relations for $\Gamma(1)^{J}.$
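These relations can be checked mechanically from the group law stated above. The following minimal Python sketch (written for this exposition; the names are ours and not taken from [4]) encodes an element of $\Gamma(1)^{J}$ as a pair (matrix, row vector) with multiplication $[M_{1},X_{1}][M_{2},X_{2}]=[M_{1}M_{2},X_{1}M_{2}+X_{2}]$ and verifies $T^{4}=U^{6}=E$ and $UT^{2}=T^{2}U[I,(-1,0)]$:

```python
# Minimal check of the Jacobi-group relations stated above,
# using the semidirect-product law [M1,X1][M2,X2] = [M1*M2, X1*M2 + X2].

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def vec_mat(x, B):
    return [sum(x[k] * B[k][j] for k in range(2)) for j in range(2)]

def mul(g, h):
    (M1, X1), (M2, X2) = g, h
    return (mat_mul(M1, M2), [a + b for a, b in zip(vec_mat(X1, M2), X2)])

def power(g, n):
    result = E
    for _ in range(n):
        result = mul(result, g)
    return result

I = [[1, 0], [0, 1]]
E = (I, [0, 0])                      # identity element
S = ([[1, 1], [0, 1]], [0, 0])
T = ([[0, -1], [1, 0]], [1, 0])
U = mul(S, T)                        # U = ST = [(1,-1;1,0),(1,0)]

assert power(T, 4) == E
assert power(U, 6) == E
assert mul(U, power(T, 2)) == mul(mul(power(T, 2), U), (I, [-1, 0]))
print("T^4 = U^6 = E and UT^2 = T^2 U [I,(-1,0)] hold.")
```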
Also, let $G^{J}$ be the set of triples $[M,X,\zeta]$ $(M\in
SL_{2}(\mathbb{R}),X\in\mathbb{R}^{2},\zeta\in\mathbb{C},|\zeta|=1)$. Then
$G^{J}$ is a group via
$[M,X,\zeta][M^{\prime},X^{\prime},\zeta^{\prime}]=[MM^{\prime},XM^{\prime}+X^{\prime},\zeta\zeta^{\prime}\cdot
e^{2\pi i\det\binom{XM^{\prime}}{X^{\prime}}}],$
acting on $\mathcal{H}\times\mathbb{C}$ as
$\gamma(\tau,z)=\left(\frac{a\tau+b}{c\tau+d},\frac{z+\lambda\tau+\mu}{c\tau+d}\right),\gamma=[\left(\begin{smallmatrix}a&b\\\
c&d\end{smallmatrix}\right),(\lambda,\mu),\zeta],$
which defines a usual slash operator on a function
$f:\mathcal{H}\times\mathbb{C}\rightarrow\mathbb{C}$ defined by :
$(f|_{k,m}\gamma)(\tau,z):=j_{k,m}(\gamma,(\tau,z))f(\gamma(\tau,z)),$
with $j_{k,m}(\gamma,(\tau,z)):=\zeta^{m}(c\tau+d)^{-k}e^{2\pi
im(-\frac{cz^{2}}{c\tau+d}+\lambda^{2}\tau+2\lambda z+\lambda\mu)}.$ Further
let
$f|_{k,m}[\bigl{(}\begin{smallmatrix}a&b\\\
c&d\end{smallmatrix}\bigr{)},(X,Y)]:=f|_{k,m}[\bigl{(}\begin{smallmatrix}a/\sqrt{\ell}&b/\sqrt{\ell}\\\
c/\sqrt{\ell}&d/\sqrt{\ell}\end{smallmatrix}\bigr{)},(X,Y),1],\mbox{if $ad-
bc=\ell>0$ }.$
We will omit $\zeta$ if $\zeta=1$.
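To make the slash action concrete, a direct evaluation of the formulas above on the elements introduced earlier gives
$(f|_{k,m}S)(\tau,z)=f(\tau+1,z),\qquad(f|_{k,m}I_{1})(\tau,z)=f(\tau,z+1),$
$(f|_{k,m}I_{2})(\tau,z)=e^{2\pi im(\tau+2z)}f(\tau,z+\tau),\qquad(f|_{k,m}T_{0})(\tau,z)=\tau^{-k}e^{-2\pi im\frac{z^{2}}{\tau}}f\left(-\frac{1}{\tau},\frac{z}{\tau}\right).$
In particular, the periodicity condition $f|_{k,m}S=f|_{k,m}I_{1}=f$ appearing after Definition 4.1 simply says that $f$ is $1$-periodic in both $\tau$ and $z$.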
###### Definition 4.1.
(Jacobi Integral)
1. (1)
A real analytic periodic (in both variables) function
$f:\mathcal{H}\times\mathbb{C}\rightarrow\mathbb{C}$ is called a Jacobi
Integral of weight $k\in\mathbb{Z}$ and index $m\in\mathbb{Z}$ with period
functions $P_{\gamma}$ on $\Gamma(1)^{J}$ if it satisfies the following
relations:
1. (i)
For all $\gamma\in\Gamma(1)^{J}$
(4.1)
$(f|_{k,m}\gamma)(\tau,z)=f(\tau,z)+P_{\gamma}(\tau,z),\quad P_{\gamma}\in\mathcal{P}_{m}.$
2. (ii)
It satisfies a growth condition, when $v,y\rightarrow\infty,$
$|f(\tau,z)|v^{-\frac{k}{2}}e^{-2\pi m\frac{y^{2}}{v}}\rightarrow 0.$
The space of Jacobi integrals forms a vector space over $\mathbb{C}$ and we
denote it by $J_{k,m}^{\int}(\Gamma(1)^{J}).$
The periodicity condition on $f$ is equivalent to saying that
$f|_{k,m}S=f|_{k,m}I_{1}=f$ so that $P_{S}(\tau,z)=P_{I_{1}}(\tau,z)=0.$ A set
of period functions
(4.2) $\\{P_{\gamma}|\gamma\in\Gamma(1)^{J}\\}\mbox{\, of $f\in
J_{k,m}^{\int}(\Gamma(1))$}$
satisfies the following consistency condition:
$P_{\gamma_{1}\gamma_{2}}=P_{\gamma_{1}}|_{k,m}\gamma_{2}+P_{\gamma_{2}},\
\text{for all}\ \gamma_{1},\gamma_{2}\in\Gamma(1)^{J}.$
So using the relations $T^{4}=U^{6}=E,$ it is easy to see that $P_{T}$
satisfies
$\sum_{j=0}^{3}P|_{k,m}T^{j}=\sum_{j=0}^{5}P|_{k,m}U^{j}=0$
so that $P_{T}$ belongs to $\mathcal{P}er_{k,m}.$ In fact it is shown in [7] that
$\mathcal{P}er_{k,m}$ generates the space of period functions
$\\{P_{\gamma}|\gamma\in\Gamma(1)^{J}\\}$ of $f\in J_{k,m}^{\int}(\Gamma(1)).$
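For example, iterating the consistency condition with $\gamma_{1}=\gamma_{2}=T$ (together with $P_{E}=0$) gives the first of these relations explicitly:
$P_{T^{2}}=P_{T}|_{k,m}T+P_{T},\qquad P_{T^{3}}=P_{T}|_{k,m}T^{2}+P_{T}|_{k,m}T+P_{T},\qquad 0=P_{E}=P_{T^{4}}=\sum_{j=0}^{3}P_{T}|_{k,m}T^{j},$
and since $P_{U}=P_{ST}=P_{S}|_{k,m}T+P_{T}=P_{T}$ (recall $P_{S}=0$), the relation $\sum_{j=0}^{5}P_{T}|_{k,m}U^{j}=0$ follows in the same way from $U^{6}=E$.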
Next we consider a subspace
$EJ_{k,m}^{\int}(\Gamma(1))=\\{g\in
J_{k,m}^{\int}(\Gamma(1))|\,g|_{k,m}I_{2}=g\\}.$
By a similar method to that used for $\mathcal{P}er_{k,m}$, we can show that a set of period functions $\\{P_{\gamma}|\gamma\in\Gamma(1)^{J}\\}$ of $g$ is spanned by $\mathcal{EP}er_{k,m}.$
## 5\. Hecke Operators on the space of Jacobi Integral
We first prove Theorem 2.2 and only briefly sketch the proof of Theorem 2.1, since it is similar but simpler.
### 5.1. Proof of Theorem 2.2
It is immediate to see $f|_{k,m}\mathcal{T}_{n}^{\infty}\in
EJ_{k,m}^{\int}(\Gamma(1))$ if $f\in EJ_{k,m}^{\int}(\Gamma(1))$ by checking
the following:
$f|_{k,m}[\bigl{(}\begin{smallmatrix}a/n&(b+d)/n\\\
0&d/n\end{smallmatrix}\bigr{)},(X,Y)]=f|_{k,m}[\bigl{(}\begin{smallmatrix}1&1\\\
0&1\end{smallmatrix}\bigr{)},(0,0)]\cdot[\bigl{(}\begin{smallmatrix}a/n&b/n\\\
0&d/n\end{smallmatrix}\bigr{)},(X,Y)],$
$f|_{k,m}[\bigl{(}\begin{smallmatrix}a/n&b/n\\\
0&d/n\end{smallmatrix}\bigr{)},(X+n,Y)]=f|_{k,m}[I,(0,-b)]\cdot[I,(d,0)]\cdot[\bigl{(}\begin{smallmatrix}a/n&b/n\\\
0&d/n\end{smallmatrix}\bigr{)},(X,Y)],$
$f|_{k,m}[\bigl{(}\begin{smallmatrix}a/n&b/n\\\
0&d/n\end{smallmatrix}\bigr{)},(X,Y+n)]=f|_{k,m}[I,(0,a)]\cdot[\bigl{(}\begin{smallmatrix}a/n&b/n\\\
0&d/n\end{smallmatrix}\bigr{)},(X,Y)].$
To prove (2) in Theorem 2.2, first we need to prove a couple of propositions
and lemmata. For each integer $n>0$ let
${\mathcal{M}}_{n}^{J}:=\\{\gamma=[\gamma_{0},(X,Y)]\,|\,\gamma_{0}\in\frac{1}{n}(M_{2}({\mathbb{Z}})/\\{\pm
1\\}),X,Y\in(\frac{\mathbb{Z}}{n})/n\mathbb{Z},det(\gamma_{0})=1\\}.$ Write
${\mathcal{M}}^{J}_{+}:=\cup{\mathcal{M}}_{n}^{J}$ and
$\mathcal{R}_{n}^{J}:={\mathbb{Z}}[{\mathcal{M}}_{n}^{J}]$ and
$\mathcal{R}_{+}^{J}:={\mathbb{Z}}[\mathcal{M}_{+}^{J}]=\oplus_{n}{\mathcal{R}}_{n}^{J},$
for the sets of finite integral linear combinations of elements of
${\mathcal{M}}_{n}^{J}$ and $\mathcal{M}_{+}^{J},$ respectively. Then
$\mathcal{R}_{+}^{J}$ is a (non-commutative) ring with unity and is
“multiplicatively graded” in the sense that
$\mathcal{R}_{n}^{J}\mathcal{R}_{m}^{J}\subset\mathcal{R}_{mn}^{J}$ for all
$m,n>0$; in particular, each $\mathcal{R}_{n}^{J}$ is a left and right module
over the group ring $\mathcal{R}_{1}^{J}={\mathbb{Z}}[\Gamma(1)^{J}]$ of
$\Gamma(1)^{J}$. Denote by $\mathcal{J}$ the right ideal
$(1+T+T^{2}+T^{3})\mathcal{R}_{1}^{J}+(1+U+U^{2}+U^{3}+U^{4}+U^{5})\mathcal{R}_{1}^{J}$
of $\mathcal{R}_{1}^{J}$. Finally, denote by $\mathcal{M}_{n,1}^{J}$ the set $\\{\gamma\in\mathcal{M}_{n}^{J}\,|\,X,Y\in\mathbb{Z}/n\mathbb{Z}\\}$. Then
###### Proposition 5.1.
Let $\hat{\mathcal{T}}_{n}^{\infty}$ be $n^{-k+4}\mathcal{T}_{n}^{\infty}$.
1. (1)
For each integer $n\geq 1,$ $\hat{\mathcal{T}}_{n}^{\infty}(S-I)\equiv
0,\,\hat{\mathcal{T}}_{n}^{\infty}(I_{1}-I)\equiv
0,\hat{\mathcal{T}}_{n}^{\infty}(I_{2}-I)\equiv
0\;(\text{mod}\;(S-I){\mathcal{R}}_{n}^{J}+(I_{1}-I){\mathcal{R}}_{n}^{J}+(I_{2}-I){\mathcal{R}}_{n}^{J})$
and
$\hat{\mathcal{T}}_{n}^{\infty}(T-I)\equiv(T-I)\tilde{\mathcal{T}_{n}}\pmod{(S-I){\mathcal{R}}_{n}^{J}+(I_{1}-I){\mathcal{R}}_{n}^{J}+(I_{2}-I){\mathcal{R}}_{n}^{J}}$
for a certain element $\tilde{\mathcal{T}_{n}}\in{\mathcal{R}}_{n}^{J}.$
2. (2)
This element is unique modulo $\mathcal{J}{\mathcal{R}}_{n}^{J}.$
3. (3)
If $n$ and $n^{\prime}$ are positive integers coprime to $m$ then
$\tilde{\mathcal{T}}_{n},\tilde{\mathcal{T}}_{n^{\prime}}\in{\mathcal{R}}_{+}^{J}$
satisfy the product formula
$\tilde{\mathcal{T}}_{n}\cdot\tilde{\mathcal{T}}_{n^{\prime}}=\sum_{d|\gcd{(n,n^{\prime})}}d^{2k-3}\tilde{\mathcal{T}}_{\frac{nn^{\prime}}{d^{2}}}\pmod{(S-I){\mathcal{R}}_{nn^{\prime}}^{J}+(I_{1}-I){\mathcal{R}}_{nn^{\prime}}^{J}+(I_{2}-I){\mathcal{R}}_{nn^{\prime}}^{J}}.$
The proof of Proposition 5.1 is similar to that in [8], where the explicit Hecke operators on the space of rational period functions associated to elliptic modular forms were computed.
Proof of Proposition 5.1 $(1)$ For the assertion
$\hat{\mathcal{T}}_{n}^{\infty}(S-I)\equiv 0$ we compute as in [8]. To claim
$\hat{\mathcal{T}}_{n}^{\infty}(I_{1}-I)\equiv 0$ check that
$\sum_{\begin{array}[]{cc}ad=n^{2},\gcd(a,b,d)=\square\atop
a,d>0,b\;(\text{mod}\;d),X,Y\in\mathbb{Z}/n\mathbb{Z}\end{array}}[\bigl{(}\begin{smallmatrix}a/n&b/n\\\
0&d/n\end{smallmatrix}\bigr{)},(X,Y)][\bigl{(}\begin{smallmatrix}1&0\\\
0&1\end{smallmatrix}\bigr{)},(0,1)]$
$=\sum_{\begin{array}[]{cc}ad=n^{2},\gcd(a,b,d)=\square\atop
a,d>0,b\;(\text{mod}\;d),X,Y\in\mathbb{Z}/n\mathbb{Z}\end{array}}[\bigl{(}\begin{smallmatrix}a/n&b/n\\\
0&d/n\end{smallmatrix}\bigr{)},(X,Y+1)]$
$\equiv\sum_{\begin{array}[]{cc}ad=n^{2},\gcd(a,b,d)=\square\atop
a,d>0,b\;(\text{mod}\;d),X,Y\in\mathbb{Z}/n\mathbb{Z}\end{array}}[\bigl{(}\begin{smallmatrix}a/n&b/n\\\
0&d/n\end{smallmatrix}\bigr{)},(X,Y)].$
For the assertion $\hat{\mathcal{T}}_{n}^{\infty}(I_{2}-1)\equiv 0$ check that
$\sum_{\begin{array}[]{cc}ad=n^{2},\gcd(a,b,d)=\square\atop
a,d>0,b\;(\text{mod}\;d),X,Y\in\mathbb{Z}/n\mathbb{Z}\end{array}}[\bigl{(}\begin{smallmatrix}a/n&b/n\\\
0&d/n\end{smallmatrix}\bigr{)},(X,Y)][\bigl{(}\begin{smallmatrix}1&0\\\
0&1\end{smallmatrix}\bigr{)},(1,0)]$
$=\sum_{\begin{array}[]{cc}ad=n^{2},\gcd(a,b,d)=\square\atop
a,d>0,b\;(\text{mod}\;d),X,Y\in\mathbb{Z}/n\mathbb{Z}\end{array}}[\bigl{(}\begin{smallmatrix}a/n&b/n\\\
0&d/n\end{smallmatrix}\bigr{)},(X+1,Y)]$
$\equiv\sum_{\begin{array}[]{cc}ad=n^{2},\gcd(a,b,d)=\square\atop
a,d>0,b\;(\text{mod}\;d),X,Y\in\mathbb{Z}/n\mathbb{Z}\end{array}}[\bigl{(}\begin{smallmatrix}a/n&b/n\\\
0&d/n\end{smallmatrix}\bigr{)},(X,Y)].$
To claim
$\hat{\mathcal{T}}_{n}^{\infty}(T-I)\equiv(T-I)\tilde{\mathcal{T}_{n}}\pmod{(S-I){\mathcal{R}}_{n}^{J}+(I_{1}-I){\mathcal{R}}_{n}^{J}+(I_{2}-I){\mathcal{R}}_{n}^{J}}$
we need the following lemma.
###### Lemma 5.2.
Take any $\gamma\in\Gamma(1)^{J}.$ Then
$\gamma-I\in(T-I){\mathcal{R}}_{1}^{J}+(S-I){\mathcal{R}}_{1}^{J}.$
Proof of Lemma 5.2: The lemma follows by induction on the word length. Assume the claim holds for some $\gamma\in\Gamma(1)^{J}.$ Note that
$T\gamma-I=(T-I)\gamma+(\gamma-I),$
$S\gamma-I=(S-I)\gamma+(\gamma-I),\quad S^{-1}\gamma-I=(S-I)(-S^{-1}\gamma)+(\gamma-I)$
also belong to $(T-I){\mathcal{R}}_{1}^{J}+(S-I){\mathcal{R}}_{1}^{J}$ (the case of $T^{-1}\gamma$ is covered since $T^{-1}=T^{3}$). Since $S$ and $T$ generate $\Gamma(1)^{J},$ the lemma follows. ∎
Now write $\hat{\mathcal{T}}_{n}^{\infty}$ as $\sum M_{i}$. Note that
$[\bigl{(}\begin{smallmatrix}a/n&b/n\\\
0&d/n\end{smallmatrix}\bigr{)},(X,Y)]T=[\bigl{(}\begin{smallmatrix}b/(b,d)&\beta\\\
d/(b,d)&\delta\end{smallmatrix}\bigr{)},(0,0)][\bigl{(}\begin{smallmatrix}(b,d)/n&-a\delta/n\\\
0&n/(b,d)\end{smallmatrix}\bigr{)},(Y+1,-X)]$
for some $\beta,\delta\in\mathbb{Z}$ such that
$\frac{b}{(b,d)}\delta-\frac{d}{(b,d)}\beta=1$. Hence for each index $i$ we
can choose index $i^{\prime}$ such that $M_{i}T=\gamma_{i}M_{i^{\prime}}$ for
some $\gamma_{i}\in\Gamma(1)^{J}$, so that the set of indices $i^{\prime}$ is equal to the set of indices $i$. Then
$\hat{\mathcal{T}}_{n}^{\infty}(T-1)=\sum\gamma_{i}M_{i^{\prime}}-M_{i}=\sum_{i}(\gamma_{i}-1)M_{i^{\prime}}$,
and this belongs to $(T-1)\mathcal{R}_{n}^{J}+(S-1)\mathcal{R}_{n}^{J}$ by
Lemma 5.2.
$(2)$ To characterize the elements of $\mathcal{J}{\mathcal{R}}_{n}^{J}$, consider an “acyclicity” condition, which was introduced in [8] in the case of
the modular group: suppose $V$ is an abelian group on which $\Gamma(1)^{J}$
acts on the left. Then $V$ is a left ${\mathcal{R}}_{1}^{J}$-module. For
$X\in{\mathcal{R}}_{1}^{J}$ let $Ker(X):=\\{v\in
V\,|\,Xv=0\\},Im(X):=\\{Xv\,|\,v\in V\\}.$ We call $V$ acyclic if
$Ker(1+T+T^{2}+T^{3})\cap Ker(I+U+\cdots+U^{5})=\\{0\\},$
$Ker(I-T)=Im(I+T+T^{2}+T^{3}),$
and
$Ker(I-U)=Im(I+U+U^{2}+U^{3}+U^{4}+U^{5}).$
Then the following holds:
###### Lemma 5.3.
${\mathcal{R}}_{n}^{J}$ is an acyclic ${\mathcal{R}}_{1}^{J}$-module for all
$n$.
Proof of Lemma 5.3 First we claim that $Ker(1+T+T^{2}+T^{3})\cap
Ker(I+U+\cdots+U^{5})=\\{0\\}:$ let $X=\sum
n_{\gamma}\gamma(n_{\gamma}\in{\mathbb{Z}},\gamma\in{\mathcal{M}}_{n}^{J})$ be
an element of ${\mathcal{R}}_{n}^{J}.$ Suppose that $X\in
Ker(1+T+T^{2}+T^{3})\cap Ker(I+U+\cdots+U^{5}).$ Take any
$r(\tau)=\frac{1}{\tau-a},$ where $a\in\mathbb{C}$ is not rational or
quadratic and let $q(\tau,z):=(r|_{k,m}X)(\tau,z).$ Then $q(\tau,z)$ behaves
somewhat like the rational period functions in [8], i.e., it has a finite number of singularities as a function of $\tau$ (when $z$ is fixed), and these singularities are rational or real quadratic. But it can be seen that for some $z_{0}$, $q(\tau,z_{0})$ has a singularity at some point $\tau=M^{-1}a$ with $n_{M}\neq 0$, contradicting the fact that $q(\tau,z_{0})$ has singularities only at rational or real quadratic points. On the other
hand, if $X$ is left invariant under $T$, then
$n_{M}=n_{TM}=n_{T^{2}M}=n_{T^{3}M}$ for all $M$, and since $M,TM,T^{2}M,$ and
$T^{3}M$ are distinct, this means that $X$ can be written as an integral
linear combination of elements
$M+TM+T^{2}M+T^{3}M=(1+T+T^{2}+T^{3})M\in\mathcal{R}_{n}^{J}$. Similarly,
$X=UX$ implies $n_{M}=n_{UM}=\cdots=n_{U^{5}M}$ for all $M$ and hence
$X\in(1+U+\cdots+U^{5})\mathcal{R}_{n}^{J}$. This proves the second and third hypotheses in the definition of acyclicity.
∎
###### Lemma 5.4.
If $V$ is an acyclic $\Gamma(1)^{J}$-module and $v\in V,$ then
$(I-T)v\in(I-S)V\Leftrightarrow
v\in(1+T+T^{2}+T^{3})V+(I+U+U^{2}+U^{3}+U^{4}+U^{5})V=\mathcal{J}V.$
Proof of Lemma 5.4 The direction “$\Leftarrow$” is true for any
$\Gamma(1)^{J}$-module, since
$v=(1+T+T^{2}+T^{3})x+(I+U+U^{2}+U^{3}+U^{4}+U^{5})y$ implies
$(I-T)v=(1-S^{-1}U)(I+U+\cdots+U^{5})y=(S-1)S^{-1}(I+U+\cdots+U^{5})y.$
Conversely, assume that $V$ is acyclic and $(1-T)v=(1-S)w$ for some $w\in V$.
Then
$(1-T)(v-(1+T+T^{2})w)=(1-T)v-(1-T^{3})w=(T^{3}-S)w=(1-U)T^{3}w.$
This element must vanish since $Im(1-T)\cap Im(1-U)\subset Ker(1+T+T^{2}+T^{3})\cap Ker(I+U+\cdots+U^{5})=\\{0\\}$. But then
$v-(1+T+T^{2})w\in Im((1+T+T^{2}+T^{3}))$ and $T^{3}w\in
Im((I+U+\cdots+U^{5}))$ by the second hypothesis in the definition of
acyclicity, so
$v=(v-(1+T+T^{2})w)+(1+T+T^{2}+T^{3})w-T^{3}w\in(1+T+T^{2}+T^{3})V+(I+U+\cdots+U^{5})V$.
∎
Lemmas 5.3 and 5.4 give a characterization of $\mathcal{J}\mathcal{R}_{n}^{J}$
as $\\{X\in\mathcal{R}_{n}^{J}|\ (1-T)X\in(1-S)\mathcal{R}_{n}^{J}\\}$.
The uniqueness of $\tilde{\mathcal{T}}_{n}$ modulo
$\mathcal{J}\mathcal{R}_{n}^{J}$ follows immediately from this
characterization and the definition of $\tilde{\mathcal{T}}_{n}$.
$(3)$ Finally,
$\displaystyle(T-1)(\tilde{\mathcal{T}}_{n}\cdot\tilde{\mathcal{T}}_{n^{\prime}}-\sum_{d|\gcd{(n,n^{\prime})}}d^{2k-3}\tilde{\mathcal{T}}_{\frac{nn^{\prime}}{d^{2}}})$
$\displaystyle\equiv$
$\displaystyle\hat{\mathcal{T}}_{n}^{\infty}(T-1)\tilde{\mathcal{T}}_{n^{\prime}}-\sum_{d|\gcd{(n,n^{\prime})}}d^{2k-3}(T-1)\tilde{\mathcal{T}}_{\frac{nn^{\prime}}{d^{2}}}$
$\displaystyle\equiv$
$\displaystyle\hat{\mathcal{T}}_{n}^{\infty}[\hat{\mathcal{T}}_{n^{\prime}}^{\infty}(T-1)-(S-1)X_{n^{\prime}}-(I_{1}-1)Y_{n^{\prime}}-(I_{2}-1)Z_{n^{\prime}}]-\sum_{d|\gcd{(n,n^{\prime})}}d^{2k-3}\hat{\mathcal{T}}_{\frac{nn^{\prime}}{d^{2}}}^{\infty}(T-1)$
$\displaystyle\equiv$
$\displaystyle(\hat{\mathcal{T}}_{n}^{\infty}\cdot\hat{\mathcal{T}}_{n^{\prime}}^{\infty}-\sum_{d|\gcd{(n,n^{\prime})}}d^{2k-3}\hat{\mathcal{T}}_{\frac{nn^{\prime}}{d^{2}}}^{\infty})(T-1)\;(\text{mod}\;(S-1)\mathcal{R}_{n}^{J}+(I_{1}-1)\mathcal{R}_{n}^{J}+(I_{2}-1)\mathcal{R}_{n}^{J}),$
and
$\hat{\mathcal{T}}_{n}^{\infty}\cdot\hat{\mathcal{T}}_{n^{\prime}}^{\infty}-\sum_{d|\gcd{(n,n^{\prime})}}d^{2k-3}\hat{\mathcal{T}}_{\frac{nn^{\prime}}{d^{2}}}^{\infty}\equiv
0\;(\text{mod}\;(S-1)\mathcal{R}_{n}^{J}+(I_{1}-1)\mathcal{R}_{n}^{J}+(I_{2}-1)\mathcal{R}_{n}^{J})$
by the usual calculation for the commutation properties of Hecke operators.
This completes the proof of Proposition 5.1.
Now we are ready to prove Theorem 2.2-(2):
### 5.2. Proof of Theorem 2.2-(2)
For any $(X,Y)\in{\mathbb{Z}}^{2}/n{\mathbb{Z}}^{2},$ the maps
$A=[\left(\begin{smallmatrix}a&b\\\
c&d\end{smallmatrix}\right),(X,Y)]\rightarrow
I_{1}^{-1}TS^{-[\frac{a}{c}]}A=\left[\left(\begin{smallmatrix}c&d\\\
-a+c[\frac{a}{c}]&-b+d[\frac{a}{c}]\end{smallmatrix}\right),(X,Y)\right]$
$B=[\left(\begin{smallmatrix}a&b\\\
c&d\end{smallmatrix}\right),(X,Y)]\rightarrow
I_{1}^{-1}S^{[\frac{d}{b}]}TB=\left[\left(\begin{smallmatrix}-c+a[\frac{d}{b}]&-d+b[\frac{d}{b}]\\\
a&b\end{smallmatrix}\right),(X,Y)\right],$
where $[\,]$ denotes the integral part, give mutually inverse bijections between
the sets
$\mathcal{A}_{n}:=\\{[\left(\begin{smallmatrix}a&b\\\
c&d\end{smallmatrix}\right),(X,Y)]\in{\mathcal{M}}_{n,1}^{J}|a>c>0,d>-b\geq
0,b=0\Rightarrow a\geq 2c,\gcd(a,b,c,d)=\square\\}$
and
$\mathcal{B}_{n}=\\{[\left(\begin{smallmatrix}a&b\\\
c&d\end{smallmatrix}\right),(X,Y)]\in{\mathcal{M}}_{n,1}^{J}\,|\,a>-c\geq
0,d>b>0,c=0\Rightarrow d\geq 2b,\gcd(a,b,c,d)=\square\\}.$
Note that
$\sum_{\tiny{\begin{array}[]{cc}ad-bc=n^{2},gcd(a,b,c,d)=\square\\\
a>c>0,d>-b>0\\\
X,Y\in\mathbb{Z}/n\mathbb{Z}\end{array}}}\left\\{[\left(\begin{smallmatrix}a&b\\\
c&d\end{smallmatrix}\right),(X,Y)]-T[\left(\begin{smallmatrix}a&-b\\\
-c&d\end{smallmatrix}\right),(X,Y)]\right\\}$
$\equiv\sum_{\tiny{\begin{array}[]{cc}ad=n^{2},gcd(a,b,d)=\square\\\
\frac{1}{2}d\geq b>0\\\
X,Y\in\mathbb{Z}/n\mathbb{Z}\end{array}}}[\left(\begin{smallmatrix}0&-d\\\
a&b\end{smallmatrix}\right),(X,Y)]-\sum_{\tiny{\begin{array}[]{cc}ad=n^{2},gcd(a,c,d)=\square\\\
\frac{1}{2}a\geq c>0\\\
X,Y\in\mathbb{Z}/n\mathbb{Z}\end{array}}}[\left(\begin{smallmatrix}a&0\\\
c&d\end{smallmatrix}\right),(X,Y)].$
Conjugating this equation by $\alpha:=[\bigl{(}\begin{smallmatrix}-1&0\\\
0&1\end{smallmatrix}\bigr{)},(0,0)]$ changes the sign of all the off-diagonal
coefficients of each $2\times 2$ matrix and of each $X$, and preserves the
property “$\equiv$”, since $\alpha
S\alpha^{-1}=[\bigl{(}\begin{smallmatrix}1&-1\\\
0&1\end{smallmatrix}\bigr{)},(0,0)],\alpha I_{1}\alpha^{-1}=I_{1},$ and
$\alpha I_{2}\alpha^{-1}=[\bigl{(}\begin{smallmatrix}1&0\\\
0&1\end{smallmatrix}\bigr{)},(-1,0)]$. Also, since $X$ ranges freely over
$\mathbb{Z}/n\mathbb{Z}$, we may replace $-X$ by $X$. Adding the
result to the original equation, we get
$(I-T)\sum_{\tiny{\begin{array}[]{cc}ad-bc=n^{2},gcd(a,b,c,d)=\square\\\
a>c>0,d>-b>0\\\
X,Y\in\mathbb{Z}/n\mathbb{Z}\end{array}}}[(\left(\begin{smallmatrix}a&b\\\
c&d\end{smallmatrix}\right)+\left(\begin{smallmatrix}a&-b\\\
-c&d\end{smallmatrix}\right)),(X,Y)]$
$\equiv\sum_{\tiny{\begin{array}[]{cc}ad=n^{2},gcd(a,b,d)=\square\\\
0<|b|\leq\frac{1}{2}d\\\
X,Y\in\mathbb{Z}/n\mathbb{Z}\end{array}}}[(\left(\begin{smallmatrix}0&-d\\\
a&b\end{smallmatrix}\right)-\left(\begin{smallmatrix}d&0\\\
b&a\end{smallmatrix}\right)),(X,Y)].$
Hence,
$(I-T)\tilde{\mathcal{T}}_{n}\equiv\sum_{\tiny{\begin{array}[]{cc}ad=n^{2},gcd(a,b,d)=\square\\\
-\frac{1}{2}d<b\leq\frac{1}{2}d\\\
X,Y\in{\mathbb{Z}}/n{\mathbb{Z}}\end{array}}}[\left(\begin{smallmatrix}a&b\\\
0&d\end{smallmatrix}\right),(X,Y)](I-T)$
$+\sum_{\tiny{\begin{array}[]{cc}ad=n^{2},gcd(a,d)=\square\\\ a,d>0,d\
\mbox{even}\\\
X,Y\in{\mathbb{Z}}/n{\mathbb{Z}}\end{array}}}[(\left(\begin{smallmatrix}0&-d\\\
a&-\frac{1}{2}d\end{smallmatrix}\right)-\left(\begin{smallmatrix}d&0\\\
-\frac{1}{2}d&a\end{smallmatrix}\right)),(X,Y)].$
The first sum on the right is $\equiv\hat{\mathcal{T}}_{n}^{\infty}(I-T),$
while the second equals
$\sum_{\tiny{\begin{array}[]{cc}\alpha\beta=\frac{n^{2}}{2},gcd(\alpha,\beta)=\square\\\
\alpha,\beta>0\\\
X,Y\in\mathbb{Z}/n\mathbb{Z}\end{array}}}[(\left(\begin{smallmatrix}0&2\beta\\\
-\alpha&\beta\end{smallmatrix}\right)-\left(\begin{smallmatrix}2\alpha&0\\\
-\alpha&\beta\end{smallmatrix}\right)),(X,Y)]$
$=\sum_{\tiny{\begin{array}[]{cc}\alpha\beta=\frac{n^{2}}{2},gcd(\alpha,\beta)=\square\\\
\alpha,\beta>0\\\
X,Y\in\mathbb{Z}/n\mathbb{Z}\end{array}}}(S^{2}-I)[\left(\begin{smallmatrix}2\alpha&0\\\
-\alpha&\beta\end{smallmatrix}\right),(X,Y)]\equiv 0.$
∎
### 5.3. Proof of Theorem 2.1-(1)
To see $f|_{k,m}\mathcal{V}_{n}^{\infty}\in J_{k,m}^{\int}(\Gamma(1))$ if
$f\in J_{k,m}^{\int}(\Gamma(1))$, note that
$\bigl{(}(f|_{k,m}\mathcal{V}_{n}^{\infty})\big{|}_{k,mn}\left[\bigl{(}\begin{smallmatrix}\alpha&\beta\\\
\gamma&\delta\end{smallmatrix}\bigr{)},(\lambda,\mu)\right]\bigr{)}(\tau,z)$
$=n^{\frac{k}{2}-1}\biggl{\\{}\sum_{ad=n\atop
b\;(\text{mod}\;d)}\biggl{(}f|_{k,m}\biggl{[}\Bigl{(}\begin{matrix}\frac{1}{\sqrt{n}}&0\\\
0&\frac{1}{\sqrt{n}}\end{matrix}\Bigr{)},(0,0)\biggr{]}\biggr{)}|_{k,m}\left[\bigl{(}\begin{smallmatrix}a/\sqrt{n}&b/\sqrt{n}\\\
c/\sqrt{n}&d/\sqrt{n}\end{smallmatrix}\bigr{)},(0,0)\right]\left[\bigl{(}\begin{smallmatrix}\alpha&\beta\\\
\gamma&\delta\end{smallmatrix}\bigr{)},(\sqrt{n}\lambda,\sqrt{n}\mu)\right]\biggr{\\}},$
$\mbox{ and
}\,\bigl{(}(f|_{k,m}\left[\bigl{(}\begin{smallmatrix}\alpha&\beta\\\
\gamma&\delta\end{smallmatrix}\bigr{)},(\lambda,\mu)\right])\big{|}_{k,m}\mathcal{V}_{n}^{\infty}\bigr{)}(\tau,z)$
$=n^{\frac{k}{2}-1}\biggl{\\{}\sum_{ad=n\atop
b\;(\text{mod}\;d)}\biggl{(}f|_{k,m}\biggl{[}\Bigl{(}\begin{matrix}\frac{1}{\sqrt{n}}&0\\\
0&\frac{1}{\sqrt{n}}\end{matrix}\Bigr{)},(0,0)\biggr{]}\biggr{)}|_{k,m}\left[\bigl{(}\begin{smallmatrix}\alpha&\beta\\\
\gamma&\delta\end{smallmatrix}\bigr{)},(\lambda,\mu)\right]\left[\bigl{(}\begin{smallmatrix}a/\sqrt{n}&b/\sqrt{n}\\\
c/\sqrt{n}&d/\sqrt{n}\end{smallmatrix}\bigr{)},(0,0)\right]\biggr{\\}}.$
So applying the following equalities
$\sum_{ad=l\atop
b\;(\text{mod}\;d)}\left[\bigl{(}\begin{smallmatrix}a/\sqrt{l}&b/\sqrt{l}\\\
0&d/\sqrt{l}\end{smallmatrix}\bigr{)},(0,0)\right]\left[\bigl{(}\begin{smallmatrix}1&1\\\
0&1\end{smallmatrix}\bigr{)},(0,0)\right]=\sum_{ad=l\atop
b\;(\text{mod}\;d)}\left[\bigl{(}\begin{smallmatrix}1&1\\\
0&1\end{smallmatrix}\bigr{)},(0,0)\right]\left[\bigl{(}\begin{smallmatrix}a/\sqrt{l}&b/\sqrt{l}\\\
0&d/\sqrt{l}\end{smallmatrix}\bigr{)},(0,0)\right],$ $\sum_{ad=l\atop
b\;(\text{mod}\;d)}\left[\bigl{(}\begin{smallmatrix}a/\sqrt{l}&b/\sqrt{l}\\\
0&d/\sqrt{l}\end{smallmatrix}\bigr{)},(0,0)\right]\left[\bigl{(}\begin{smallmatrix}1&0\\\
0&1\end{smallmatrix}\bigr{)},(0,\sqrt{l})\right]=\sum_{ad=l\atop
b\;(\text{mod}\;d)}\left[\bigl{(}\begin{smallmatrix}1&0\\\
0&1\end{smallmatrix}\bigr{)},(0,a)\right]\left[\bigl{(}\begin{smallmatrix}a/\sqrt{l}&b/\sqrt{l}\\\
0&d/\sqrt{l}\end{smallmatrix}\bigr{)},(0,0)\right]$
to $\biggl{(}f|_{k,m}\biggl{[}\Bigl{(}\begin{matrix}\frac{1}{\sqrt{n}}&0\\\
0&\frac{1}{\sqrt{n}}\end{matrix}\Bigr{)},(0,0)\biggr{]}\biggr{)}(\tau,z)$ we
conclude our claim.
### 5.4. Proof of Theorem 2.1-(2)
By following a similar approach to the proof of Theorem 2.2-(2) we find an explicit
formula for $\tilde{\mathcal{V}}_{n}$; the detailed proof is omitted.
## References
* [1] A. Ash, Parabolic cohomology of arithmetic subgroups of $SL(2,\mathbb{Z})$ with coefficients in the field of rational functions on the Riemann sphere, Amer. J. Math. 111 (1989), no. 1, 35–51.
* [2] R. Bruggeman, J. Lewis and D. Zagier, Period functions for Maass wave forms II: cohomology, preprint (2012).
* [3] D. Choi and S. Lim, The Eichler cohomology theorem for Jacobi forms, arXiv:1211.2988v2 (2013).
* [4] Y. Choie, A short note on the full Jacobi group, Proc. Amer. Math. Soc. 123 (1995), no. 9, 2625–2628.
* [5] Y. Choie, Correspondence among Eisenstein series $E_{2,1}(\tau,z),H_{\frac{3}{2}}(\tau)$ and $E_{2}(\tau),$ Manuscripta Math. 93 (1997), no. 2, 177–187.
* [6] Y. Choie, Half integral weight Jacobi forms and periods of modular forms, Manuscripta Math. 104 (2001), no. 1, 123–133.
* [7] Y. Choie and S. Lim, Eichler integrals, period relations and Jacobi forms, Math. Z. 271 (2012), no. 3-4, 639–661.
* [8] Y. Choie and D. Zagier, Rational period functions for $PSL(2,{\mathbb{Z}}),$ A tribute to Emil Grosswald: number theory and related analysis, 89–108, Contemp. Math., 143, Amer. Math. Soc., Providence, RI, 1993.
* [9] A. Dabholkar, S. Murthy and D. Zagier, Quantum black holes, wall crossing, and mock modular forms, arXiv:1208.4074 (2012).
* [10] M. Eichler, Eine Verallgemeinerung der Abelschen Integrale, Math. Z. $67$ (1957), 267–298.
* [11] M. Eichler and D. Zagier, The theory of Jacobi forms, Progress in Mathematics, $55$. Birkhäuser Boston, Inc., Boston, MA, $1985$. v+$148$ pp.
* [12] J. Hilgert, D. Mayer and H. Movasati, Transfer operators for $\Gamma_{0}(N)$ and the Hecke operators for the period functions of $PSL(2,\mathbb{Z})$, Math. Proc. Cambridge Philos. Soc. 139 (2005), no. 1, 81–116.
* [13] F. Hirzebruch and D. Zagier, Intersection numbers of curves and Hilbert modular surfaces and modular forms of Nebentypus, Invent. Math. 36 (1976), 57–113.
* [14] M. Knopp, Some new results on the Eichler cohomology of automorphic forms, Bull. Amer. Math. Soc. 80 (1974), 607–632.
* [15] M. Knopp, Rational period functions of the modular group, With an appendix by Georges Grinstein, Duke Math. J. 45 (1978), no. 1, 47–62.
* [16] M. Knopp, Recent developments in the theory of rational period functions, Number theory (New York, 1985/1988), 111–122, Lecture Notes in Math., 1383, Springer, Berlin, 1989.
* [17] W. Kohnen and D. Zagier, Modular forms with rational periods, Modular forms (Durham, 1983), 197–249, Ellis Horwood Ser. Math. Appl.: Statist. Oper. Res., Horwood, Chichester, 1984.
* [18] J. Lewis and D. Zagier, Period functions for Maass wave forms I, Ann. of Math. (2) 153 (2001), no. 1, 191–258.
* [19] Y. Manin, Periods of parabolic forms and $p$-adic Hecke series, Mat. Sb. 21 (1973), 371–393.
* [20] Y. Manin, Remarks on modular symbols for Maass wave forms, Algebra $\&$ Number Theory 4 (2010), no. 8, 1091–1114.
* [21] T. Mühlenbruch, Hecke operators on period functions for the full modular group, Int. Math. Res. Not. 2004, no. 77, 4127–4145.
* [22] T. Mühlenbruch, Hecke operators on period functions for $\Gamma_{0}(N)$, J. Number Theory 118 (2006), no. 2, 208–235.
* [23] G. Shimura, Sur les intégrales attachées aux formes automorphes, J. Math. Soc. Japan 11 (1959) 291–311.
* [24] G. Shimura, On modular forms of half integral weight, Ann. of Math. (2) 97 (1973), 440–481.
* [25] D. Zagier, Hecke operators and periods of modular forms, Festschrift in honor of I. I. Piatetski-Shapiro on the occasion of his sixtieth birthday, Part II (Ramat Aviv, 1989), 321–336, Israel Math. Conf. Proc., 3, Weizmann, Jerusalem, 1990.
* [26] D. Zagier, Periods of modular forms, traces of Hecke operators and multiple zeta values, Research into automorphic forms and $L$-functions (Japanese) (Kyoto, 1992), Sūrikaisekikenkyūsho Kōkyūroku No. 843 (1993), 162–170.
* [27] S. Zwegers, Mock theta functions, Utrecht PhD thesis (2002).
|
arxiv-papers
| 2013-02-09T03:36:45 |
2024-09-04T02:49:41.538279
|
{
"license": "Public Domain",
"authors": "Youngju Choie and Seokho Jin",
"submitter": "Seokho Jin",
"url": "https://arxiv.org/abs/1302.2189"
}
|
1302.2224
|
# Computer-Aided Derivation of Multi-scale Models: A Rewriting Framework
††thanks: This work is partially supported by the European Territorial
Cooperation Programme INTERREG IV A France-Switzerland 2007-2013.
Bin Yang
Department of Applied Mathematics
Northwestern Polytechnical University
710129 Xi’an Shaanxi, China
University of Franche-Comté,
26 chemin de l’Epitaphe, 25030 Besançon Cedex, France
[email protected]
Walid Belkhir
INRIA Nancy - Grand Est,
CASSIS project,
54600 Villers-lès-Nancy, France
[email protected] Michel Lenczner
FEMTO-ST, Département Temps-Fréquence,
University of Franche-Comté,
26 chemin de l’Epitaphe, 25030 Besançon Cedex, France
[email protected]
###### Abstract
We introduce a framework for computer-aided derivation of multi-scale models.
It relies on a combination of an asymptotic method used in the field of
partial differential equations with term rewriting techniques coming from
computer science. In our approach, a multi-scale model derivation is
characterized by the features taken into account in the asymptotic analysis.
Its formulation consists in a derivation of a reference model associated with an
elementary nominal model, and in a set of transformations to apply to this
proof until it takes into account the desired features. In addition to the
reference model proof, the framework includes first order rewriting principles
designed for asymptotic model derivations, and second order rewriting
principles dedicated to transformations of model derivations. We apply the
method to generate a family of homogenized models for second order elliptic
equations with periodic coefficients posed in multi-dimensional
domains, possibly with multi-domains and/or thin domains.
## 1 Introduction
There is a vast literature on multi-scale methods for partial differential
equations both in applied mathematics and in many modeling areas. Among all
developed methods, asymptotic methods occupy a special place because they have
rigorous mathematical foundations and can lead to error estimates based on the
small parameters involved in the approach. This is a valuable aspect from the
model reliability point of view. They have been applied when a physical
problem depends on one or more small parameters which can be some coefficients
or can be related to the geometry. Their principle is to identify the
asymptotic model obtained when the parameters tend to zero. For instance, this
method applies in periodic homogenization, i.e. to systems consisting of a
large number of periodic cells, the small parameter being the ratio of the
cell size over the size of the complete system, see for instance [BLP78, CD99,
JZKO94]. Another well-developed case is when parts of a system are thin, e.g.
thin plates as in [Cia], that is to say that some of their dimensions are
small compared to others. A third kind of use is that of strongly
heterogeneous systems e.g. [BB02], i.e. when equation coefficients are much
smaller in some parts of a system than in others. These three cases can be
combined in many ways leading to a broad variety of configurations and models.
In addition, it is possible to take into account several nested scales, and the
asymptotic characteristics can be different at each scale: thin structures at
one scale, periodic structures at another, etc. It is also possible to cover
cases where the asymptotic phenomena happen only in certain regions or even
are localized to the boundary. Moreover, different physical phenomena can be
taken into account: heat transfer, solid deformations, fluid flow, fluid-
structure interaction or electromagnetics. In each model, the coefficients can
be random or deterministic. Finally, different operating regimes can be
considered, such as the static or the dynamic regimes, or the associated spectral
problems. Today, there exists a vast literature covering an impressive variety
of configurations.
Asymptotic methods, considered as model reduction techniques, are very useful
for complex system simulation and are of great interest in the software design
community. They enjoy a number of advantages. The resulting models are
generally much faster (often by several order of magnitude – depending on the
kind of model simplification –) to simulate than the original one and are
fully parameterized. In addition, they do not require any long numerical
calculation for building them, so they can be inserted into identification and
optimization loops of a design process. Finally, they are of general use and
they can be rigorously applied whenever a model depends on one or several
small parameters and the error between their solution and the nominal model
solution can be estimated.
Despite these advantages, we observe that asymptotic modeling techniques
have scarcely been transferred to general industrial simulation software,
while numerical techniques, such as the Finite Element Method, have
been fully integrated into many design tools. The main limiting factor for
their dissemination is that each new problem requires new, long hand-made
calculations that may be based on a relatively large variety of techniques. In
the literature, each published paper focuses on a special case regarding
geometry or physics, and no work is oriented toward a more
general picture. Moreover, even if a large number of models combining various
features have already been derived, the set of already addressed problems
represents only a tiny fraction of those that could be derived from all
possible feature combinations using existing techniques.
Coming to this conclusion, we believe that what prevents the use of asymptotic
methods by non-specialists can be formulated as a scientific problem that
deserves to be posed. It is precisely the issue that we discuss in this paper.
We would like to establish a mathematical framework for combining asymptotic
methods of different natures and thus for producing a wide variety of models.
This would allow the derivation of complex asymptotic models to be carried out by
computers. In this paper, we present the first elements of a solution by combining
principles of asymptotic model derivation with rewriting methods coming
from computer science.
In computer science, equational reasoning is usually described by rewrite
rules; see [BN98] for a classical reference. A rewrite rule $t\rightarrow u$
states that every occurrence of an instance of $t$ in a term can be replaced
with the corresponding instance of $u$. Doing so, a proof based on a sequence
of equality transformations is reduced to a series of rewrite rule
applications. Rules can carry additional conditions and can be combined into
strategies, which specify where and when to apply them; see for
instance [Ter03, CK01, CFK05, BKKR01, CKLW03].
The method developed in this paper is guided by the idea of deriving models by
generalization. For this purpose, it introduces a reference model with its
derivation and a way to generate generalizations to cover more cases. The
level of detail in the representation of mathematical objects should be
carefully chosen. On the one hand it should be precise enough to cover a
fairly wide range of models, and on the other hand calculations should be
reasonably sized. The way the generalizations are made is important so that
they can be formulated in a single framework.
In this paper, we select as reference problem that of the periodic
homogenization of a scalar second order elliptic equation posed in a one-
dimensional domain with Dirichlet boundary conditions. Its derivation is
based on the use of the two-scale transform operator introduced in [ADH90],
and reused in [BLM96]. We note that homogenization of various problems using
this transformation was performed according to different techniques in [Len97,
Len06, LS07, CD00, CDG02, CDG08]. Here, we follow that of [LS07], so a number
of basic properties coming from this paper are stated and considered as the
building blocks of the proofs. The complete derivation of the model is
organized into seven lemmas and whose proof is performed by a sequence of
applications of these properties. Their generalization to another problem
requires generalization of certain properties, which is assumed to be made
independently. It may also require changes in the path of the proof, and even
adding new lemmas. The mathematical concepts are common in the field of
partial differential equations: geometric domains, variables defined on these
domains, functions of several variables, operators (e.g. derivatives,
integrals, two-scale transform, etc.). Finally, the proofs of the lemmas are
designed to be realizable by rewriting.
Then, we present a computational framework based on the theory of rewriting
to express the above method. Each property is expressed as a rewrite rule that
can be conditional, so that it can be applied or not according to a given
logical formula. A step in a lemma proof is realized by a strategy that
expresses how the rule applies. The complete proof of a lemma is then a
sequence of such strategies. The strategies we use have been developed in
previous work [BGL] and are implemented in Maple®; here we provide their
formalization. To allow the successful
application of rewriting strategies to an expression that contains associative
and/or commutative operations, such as $+,*,\cup,\cap$, etc, we use the
concept of rewriting modulo an equational theory [BN98, §11]. Without such
a concept, one would need to duplicate the rewriting rules.
In this work, rewriting operates on expressions whose level of abstraction
accurately reflects the mathematical framework. Concrete descriptions of
geometric domains, functions or operators are not provided. Their description
follows a grammar that has been defined so that they carry enough
information for the design of the rewriting rules and the strategies.
In some conditions of rewriting rules, the set of variables on which an
expression depends is required. This is for example the case for the linearity
property of the integral. Rather than introducing a typing system, which would
be cumbersome and restrictive, we introduced a specific functionality in the
form of a $\lambda$-term (i.e. a program). The strategy language allows
this use. Put together, these concepts express a lemma proof as a
strategy, i.e. a first order strategy, and therefore provide a framework of
symbolic computation. The concept of generalization of a proof is introduced
as second order rewrite strategies, made with second order rewriting rules,
operating on first order strategies. They can transform first order rewrite
rules and strategies and, where appropriate, remove or add new ones. This
framework has been implemented in the software
Maple®. We present its application to the complete proof of the
reference problem and also to the generalizations of the first lemma, by
applying second order strategies, to multi-dimensional geometrical domains,
multi-dimensional thin domains and multi-domains.
The paper is organized as follows. Section 2 is devoted to all mathematical
aspects. This includes all definitions and properties, the lemmas and their
proof. The principles of rewrite rules and strategies are formulated in
Section 3. Section 4 is devoted to the theoretical framework that allows to
derive a model and its generalizations. Implementation results are described
in Section 6.
## 2 Skeleton of two-scale modeling
We recall the framework of the two-scale convergence as presented in [LS07],
and the proof of the reference model, whose implementation and extension in
the form of symbolic computation algorithms are discussed in Section 6. The
presentation is divided into three subsections. The first one is devoted to
basic definitions and properties, stated as Propositions. The latter are
admitted without proof because they are assumed to be prerequisites, or
building blocks, in the proofs. They are used as elementary steps in the two
other sections detailing the proof of the convergence of the two-scale
transform of a derivative, and the homogenized model derivation. The main
statements of these two subsections are also stated as Propositions and their
proofs are split into numbered blocks called lemmas. Each lemma is decomposed
into steps referring to the definitions and propositions. All components of the
reference model derivation, namely the definitions, the propositions, the
lemmas and the proof steps, are designed so as to be easily implemented and
also to be generalized to more complex models. We note that a number of
elementary properties are used in the proof but are not explicitly stated nor
cited.
### 2.1 Notations, Definitions and Propositions
Note that the functional framework used in this section is not as precise as
it should be for a usual mathematical work. The reason is that the functional
analysis is not covered by our symbolic computation. So, precise mathematical
statements and justifications are not in the focus of this work.
In the sequel, $A\subset\mathbb{R}^{n}$ is a bounded open set, with measure
$|A|,$ having a “sufficiently” regular boundary $\partial A$ and with unit
outward normal denoted by $n_{\partial A}$. We shall use the set $L^{1}(A)$ of
integrable functions and the set $L^{p}(A)$, for any $p>0$, of functions $f$
such that $|f|^{p}\in L^{1}(A),$ with norm $||v||_{L^{p}(A)}=(\int_{A}|v|^{p}$
$dx)^{1/p}.$ The Sobolev space $H^{1}(A)$ is the set of functions $f\in
L^{2}(A)$ whose gradient $\nabla f\in L^{2}(A)^{n}.$ The set of $p$ times
differentiable functions on $A$ is denoted by $\mathcal{C}^{p}(A)$, where $p$
can be any integer or $\infty$. Its subset $\mathcal{C}_{0}^{p}(A)$ is
composed of functions whose partial derivatives up to order $p$ vanish on the
boundary $\partial A$ of $A$. For any integers $p$ and $q,$
$\mathcal{C}^{q}(A)\subset L^{p}(A)$. When
$A=(0,a_{1})\times...\times(0,a_{n})$ is a cuboid (or rectangular
parallelepiped) we say that a function $v$ defined in $\mathbb{R}^{n}$ is
$A$-periodic if for any $\ell\in\mathbb{Z}^{n},$
$v(y+\sum_{i=1}^{n}\ell_{i}a_{i}e_{i})=v(y)$ where $e_{i}$ is the $i^{th}$
vector of the canonical basis of $\mathbb{R}^{n}$. The set of $A$-periodic
functions which are $\mathcal{C}^{\infty}$ is denoted by
$\mathcal{C}_{\sharp}^{\infty}(A)$ and the set of those which are in $H^{1}(A)$ is
denoted by $H_{\sharp}^{1}(A)$. The operator $tr$ (the trace operator) can be
defined as the restriction operator from functions defined on the closure of
$A$ to functions defined on its boundary $\partial A$. Finally, we say that a
sequence $(u^{\varepsilon})_{\varepsilon>0}\in L^{2}(A)$ converges strongly in
$L^{2}(A)$ towards $u^{0}\in L^{2}(A)$ when $\varepsilon$ tends to zero if
$\lim_{\varepsilon\rightarrow 0}||u^{\varepsilon}-u^{0}||_{L^{2}(A)}=0$. The
convergence is said to be weak if $\lim_{\varepsilon\rightarrow
0}\int_{A}(u^{\varepsilon}-u^{0})v$ $dx=0$ for all $v\in L^{2}(A)$. We write
$u^{\varepsilon}=u^{0}+O_{s}(\varepsilon)$ (respectively
$O_{w}(\varepsilon)$), where $O_{s}(\varepsilon)$ (respectively
$O_{w}(\varepsilon)$) represents a sequence tending to zero strongly
(respectively weakly) in $L^{2}(A)$. Moreover, the simple notation
$O(\varepsilon)$ refers to a sequence of numbers which simply tends to zero.
We do not detail the related usual computation rules.
###### Proposition 1
[Interpretation of a weak equality] For $u\in L^{2}(A)$ and for any
$v\in\mathcal{C}_{0}^{\infty}(A)$,
$\text{if }\int_{A}u(x)\text{ }v(x)\text{ }dx=0\text{ then }u=0$
in the sense of $L^{2}(A)$ functions.
###### Proposition 2
[Interpretation of a periodic boundary condition] For $u\in H^{1}(A)$ and for
any $v\in\mathcal{C}_{\sharp}^{\infty}\left(A\right)$,
$\text{if }\int_{\partial A}u(x)\ v(x)\ n_{\partial A}(x)\ dx=0\text{ then
}u\in H_{\sharp}^{1}\left(A\right).$
In the remainder of this section, only the dimension $n=1$ is considered, the
general definitions being used for the generalizations discussed in Section 6.
###### Notation 3
[Physical and microscopic Domains] We consider an interval
$\Omega=\bigcup\limits_{c=1}^{N(\varepsilon)}\Omega_{c}^{1,\varepsilon}\subset\mathbb{R}$
divided into $N(\varepsilon)$ periodic cells (or intervals)
$\Omega_{c}^{1,\varepsilon}$, of size $\varepsilon>0$, indexed by $c$, and
with center $x_{c}.$ The translation and magnification
$(\Omega_{c}^{1,\varepsilon}-x_{c})/\varepsilon$ is called the unit cell and
is denoted by $\Omega^{1}$. The variables in $\Omega$ and in $\Omega^{1}$ are
denoted by $x^{\varepsilon}$ and $x^{1}.$
The two-scale transform $T$ is an operator mapping functions defined in the
physical domain $\Omega$ to functions defined in the two-scale domain
$\Omega^{\sharp}\times\Omega^{1}$ where for the reference model
$\Omega^{\sharp}=\Omega$. In the following, we shall denote by $\Gamma,$
$\Gamma^{\sharp}$ and $\Gamma^{1}$ the boundaries of $\Omega,$
$\Omega^{\sharp}$ and $\Omega^{1}$.
###### Definition 4
[Two-Scale Transform] The two-scale transform $T$ is the linear operator
defined by
$(Tu)(x_{c},x^{1})=u(x_{c}+\varepsilon x^{1})$ (1)
and then by extension $T(u)(x^{\sharp},x^{1})=u(x_{c}+\varepsilon x^{1})$ for
all $x^{\sharp}\in\Omega_{c}^{1,\varepsilon}$ and each $c$ in
$1,..,N(\varepsilon).$
###### Notation 5
[Measure of Domains] $\kappa^{0}=\frac{1}{|\Omega|}$ and
$\kappa^{1}=\frac{1}{|\Omega^{\sharp}\times\Omega^{1}|}.$
The operator $T$ enjoys the following properties.
###### Proposition 6
[Product Rule] For two functions $u,$ $v$ defined in $\Omega,$
$T(uv)=(Tu)(Tv).$ (2)
###### Proposition 7
[Derivative Rule] If $u$ and its derivative are defined in $\Omega$ then
$T\left(\frac{du}{dx}\right)=\frac{1}{\varepsilon}\frac{\partial(Tu)}{\partial
x^{1}}.$ (3)
###### Proposition 8
[Integral Rule] If a function $u\in L^{1}(\Omega)$ then $Tu\in
L^{1}(\Omega^{\sharp}\times\Omega^{1})$ and
$\kappa^{0}\int_{\Omega}u\text{
}dx=\kappa^{1}\int_{\Omega^{\sharp}\times\Omega^{1}}(Tu)\text{
}dx^{\sharp}dx^{1}.$ (4)
The next two properties are corollaries of the previous ones.
###### Proposition 9
[Inner Product Rule] For two functions $u,$ $v\in L^{2}(\Omega),$
$\kappa^{0}\int_{\Omega}u\text{ }v\text{
}dx=\kappa^{1}\int_{\Omega^{\sharp}\times\Omega^{1}}(Tu)\text{ }(Tv)\text{
}dx^{\sharp}dx^{1}.$ (5)
###### Proposition 10
[Norm Rule] For a function $u\in L^{2}(\Omega),$
$\kappa^{0}\left\|u\right\|_{L^{2}(\Omega)}^{2}=\kappa^{1}\left\|Tu\right\|_{L^{2}(\Omega^{\sharp}\times\Omega^{1})}^{2}.$
(6)
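The two-scale transform and the integral rule (4) have a simple discrete
counterpart that may help fix ideas. The following sketch is our own
illustration (it is not part of the reference derivation) and assumes that $u$
is sampled at equispaced points of $\Omega=(0,1)$ with the same number of
samples in each cell; the names `N`, `P` and `Tu` are ours.

```python
# A discrete illustration (ours) of the two-scale transform of Definition 4,
# assuming u is sampled at P equispaced points in each of the N cells of
# Omega = (0, 1); the names N, P, Tu are ours.
import numpy as np

N, P = 8, 16                                  # number of cells, samples per cell
eps = 1.0 / N                                 # cell size
x = (np.arange(N * P) + 0.5) / (N * P)        # sample points in Omega
u = np.sin(2 * np.pi * x) * np.cos(2 * np.pi * x / eps)   # an oscillating field

# (T u)(x_c, x^1) = u(x_c + eps x^1): unfolding the samples cell by cell gives
# a two-dimensional array indexed by (cell c, microscopic sample point).
Tu = u.reshape(N, P)

# Discrete analogue of the integral rule (4): the mean of u over Omega equals
# the mean of T u over Omega^sharp x Omega^1, since reshaping only regroups
# the same sample values.
assert np.isclose(u.mean(), Tu.mean())
```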
###### Definition 11
[Two-Scale Convergence] A sequence $u^{\varepsilon}\in L^{2}(\Omega)$ is said
to be two-scale strongly (respect. weakly) convergent in
$L^{2}(\Omega^{\sharp}\times\Omega^{1})$ to a limit $u^{0}(x^{\sharp},x^{1})$
if $Tu^{\varepsilon}$ is strongly (respect. weakly) convergent towards $u^{0}$
in $L^{2}(\Omega^{\sharp}\times\Omega^{1}).$
###### Definition 12
[Adjoint or Dual of T] As $T$ is a linear operator from $L^{2}(\Omega)$ to
$L^{2}(\Omega^{\sharp}\times\Omega^{1}),$ its adjoint $T^{\ast}$ is a linear
operator from $L^{2}(\Omega^{\sharp}\times\Omega^{1})$ to $L^{2}(\Omega)$
defined by
$\kappa^{0}\int_{\Omega}T^{\ast}v\text{ }u\text{
}dx=\kappa^{1}\int_{\Omega^{\sharp}\times\Omega^{1}}v\text{ }Tu\text{
}dx^{\sharp}dx^{1}.$ (7)
The expression of $T^{\ast}$ can be made explicit; it maps regular functions in
$\Omega^{\sharp}\times\Omega^{1}$ to piecewise-constant functions in $\Omega$.
The next definition introduces an operator used as a smooth approximation of
$T^{\ast}$.
###### Definition 13
[Regularization of T∗] The operator $B$ is the linear continuous operator
defined from $L^{2}(\Omega^{\sharp}\times\Omega^{1})$ to $L^{2}(\Omega)$ by
$(Bv)(x)=v(x,\frac{x}{\varepsilon}).$ (8)
The nullity condition of a function $v(x^{\sharp},x^{1})$ on the boundary
$\partial\Omega^{\sharp}\times\Omega^{1}$ is transferred to the range $Bv$ as
follows.
###### Proposition 14
[Boundary Conditions of Bv] If
$v\in\mathcal{C}_{0}^{\infty}(\Omega^{\sharp};\mathcal{C}^{\infty}(\Omega^{1}))$
then $Bv\in\mathcal{C}_{0}^{\infty}(\Omega)$.
###### Proposition 15
[Derivation Rule for B] If $v$ and its partial derivatives are defined on
$\Omega^{\sharp}\times\Omega^{1}$ then
$\frac{d(Bv)}{dx}=B(\frac{\partial v}{\partial
x^{\sharp}})+\varepsilon^{-1}B(\frac{\partial v}{\partial x^{1}}).$ (9)
The next proposition states that the operator $B$ is actually an approximation
of the operator $T^{\ast}$ for $\Omega^{1}$-periodic functions.
###### Proposition 16
[Approximation between T∗ and B] If $v(x^{\sharp},x^{1})$ is continuous,
continuously differentiable in $x^{\sharp}$ and $\Omega^{1}$-periodic in
$x^{1}$ then
$T^{\ast}v=Bv-\varepsilon B(x^{1}\frac{\partial v}{\partial
x^{\sharp}})+\varepsilon O_{s}(\varepsilon).$ (10)
Conversely,
$Bv=T^{\ast}(v)+\varepsilon T^{\ast}(x^{1}\frac{\partial v}{\partial
x^{\sharp}})+\varepsilon O_{s}(\varepsilon).$ (11)
Next, the formula of integration by parts is stated in a form compatible with
the Green formula used in some extensions. The boundary $\Gamma$ is composed
of the two end points of the interval $\Omega$, and the unit outward normal
$n_{\Gamma}$ defined on $\Gamma$ is equal to $-1$ and $+1$ at the left- and
right-endpoints respectively.
###### Proposition 17
[Green Rule] If $u$, $v\in H^{1}(\Omega)$ then the traces of $u$ and $v$ on
$\Gamma$ are well defined and
$\int_{\Omega}u\frac{dv}{dx}\text{ }dx=\int_{\Gamma}tr(u)\text{ }tr(v)\text{
}n_{\Gamma}\text{ }ds(x)-\int_{\Omega}v\frac{du}{dx}\text{ }dx.$ (12)
The last proposition is stated as a building block of the homogenized model
derivation.
###### Proposition 18
[The linear operator associated to the Microscopic problem] For
$\mu\in\mathbb{R}$, there exist solutions $\theta^{\mu}\in H_{\sharp}^{1}(\Omega^{1})$
of the linear weak formulation
$\int_{\Omega^{1}}a^{0}\frac{\partial\theta^{\mu}}{\partial
x^{1}}\frac{\partial w}{\partial x^{1}}\text{
}dx^{1}=-\mu\int_{\Omega^{1}}a^{0}\frac{\partial w}{\partial x^{1}}\text{
}dx^{1}\text{ for all }w\in\mathcal{C}_{\sharp}^{\infty}(\Omega^{1}),$ (13)
and $\frac{\partial\theta^{\mu}}{\partial x^{1}}$ is unique. Since the mapping
$\mu\mapsto\dfrac{\partial\theta^{\mu}}{\partial x^{1}}$ from $\mathbb{R}$ to
$L^{2}(\Omega^{1})$ is linear, it follows that
$\dfrac{\partial\theta^{\mu}}{\partial
x^{1}}=\mu\dfrac{\partial\theta^{1}}{\partial x^{1}}.$ (14)
Moreover, this relation can be extended to any $\mu\in
L^{2}(\Omega^{\sharp})$.
### 2.2 Two-Scale Approximation of a Derivative
Here we detail the reference computation of the weak two-scale limit
$\eta=\lim_{\varepsilon\rightarrow 0}T(\frac{du^{\varepsilon}}{dx})$ in
$L^{2}(\Omega^{\sharp}\times\Omega^{1})$ when
$\left\|u^{\varepsilon}\right\|_{L^{2}(\Omega)}\text{ and
}\left\|\frac{du^{\varepsilon}}{dx}\right\|_{L^{2}(\Omega)}\leq C,$ (15)
$C$ being a constant independent of $\varepsilon$. To simplify the proof, we
further assume that there exist $u^{0}$, $u^{1}\in
L^{2}(\Omega^{\sharp}\times\Omega^{1})$ such that
$T(u^{\varepsilon})=u^{0}+\varepsilon u^{1}+\varepsilon
O_{w}(\varepsilon)\text{,}$
i.e.
$\int_{\Omega^{\sharp}\times\Omega^{1}}(T(u^{\varepsilon})-u^{0}-\varepsilon
u^{1})v\text{ }dx^{\sharp}dx^{1}=\varepsilon O(\varepsilon)\text{ for all
}v\in L^{2}(\Omega^{\sharp}\times\Omega^{1}).$ (16)
We note that Assumption (16) is not necessary; it is introduced to simplify
the proof since it avoids some non-equational steps. The statement proved in
the remainder of this subsection is the following.
###### Proposition 19
[Two-scale Limit of a Derivative] If $u^{\varepsilon}$ is a sequence bounded
as in (15) and satisfying (16), then $u^{0}$ is independent of $x^{1},$
$\tilde{u}^{1}=u^{1}-x^{1}\partial_{x^{\sharp}}u^{0}$ (17)
defined in $\Omega^{\sharp}\times\Omega^{1}$ is $\Omega^{1}$-periodic and
$\eta=\frac{\partial u^{0}}{\partial
x^{\sharp}}+\frac{\partial\tilde{u}^{1}}{\partial x^{1}}.$ (18)
Moreover, if $u^{\varepsilon}=0$ on $\Gamma$ then $u^{0}=0$ on
$\Gamma^{\sharp}.$
The proof is split into four Lemmas corresponding to the first four blocks
discussed in Section 6, the other three being detailed in subsection 2.3.
###### Lemma 20
[First Block: Constraint on $u^{0}$] $u^{0}$ is independent of $x^{1}$.
Proof. We introduce
$\Psi=\varepsilon\kappa^{0}{\int_{\Omega}\frac{du^{\varepsilon}}{dx}Bv}\text{
}{dx}$
with
$v\in\mathcal{C}_{0}^{\infty}(\Omega^{\sharp};\mathcal{C}_{0}^{\infty}(\Omega^{1}))$.
From the Cauchy-Schwarz inequality and (15), $\lim_{\varepsilon\rightarrow
0}\Psi=0$.
* •
Step 1. The Green formula (12) and Proposition 14 $\Longrightarrow$
$\Psi={-\varepsilon\kappa^{0}\int_{\Omega}u^{\varepsilon}\frac{d(Bv)}{dx}\text{
}dx.}$
* •
Step 2. Proposition 15 $\Longrightarrow$
$\Psi=\kappa^{0}{\int_{\Omega}u^{\varepsilon}B(\frac{\partial v}{\partial
x^{1}})\text{ }dx+O(\varepsilon).}$
* •
Step 3. Proposition 16 $\Longrightarrow$
$\Psi=\kappa^{0}{\int_{\Omega}u^{\varepsilon}T^{\ast}(\frac{\partial
v}{\partial x^{1}})}\text{ }{\text{ }dx+O(\varepsilon).}$
* •
Step 4. Definition 12 $\Longrightarrow$
$\Psi=\kappa^{1}{\int_{\Omega^{\sharp}\times\Omega^{1}}T(u^{\varepsilon})\frac{\partial
v}{\partial x^{1}}\text{ }dx+O(\varepsilon).}$
* •
Step 5. Assumption (16) and passing to the limit when $\varepsilon\rightarrow
0$ $\Longrightarrow$
$\kappa^{1}{\int_{\Omega^{\sharp}\times\Omega^{1}}u^{0}\frac{\partial
v}{\partial x^{1}}\text{ }dx=0}.$
* •
Step 6. The Green formula (12) and $v=0$ on $\Omega^{\sharp}\times\Gamma^{1}$
$\Longrightarrow$
$\kappa^{1}{\int_{\Omega^{\sharp}\times\Omega^{1}}\frac{\partial
u^{0}}{\partial x^{1}}}\text{ }{v\text{ }dx=0}.$
* •
Step 7. Proposition 1 $\Longrightarrow$
$\dfrac{\partial u^{0}}{\partial x^{1}}=0.$
###### Lemma 21
[Second Block: Two-Scale Limit of the Derivative] $\eta=\frac{\partial
u^{1}}{\partial x^{1}}.$
Proof. We choose
$v\in\mathcal{C}_{0}^{\infty}(\Omega^{\sharp};\mathcal{C}_{0}^{\infty}(\Omega^{1}))$
in
$\Psi=\kappa^{1}\int_{\Omega^{\sharp}\times\Omega^{1}}T(\frac{du^{\varepsilon}}{dx})v\text{
}dx^{\sharp}dx^{1}.$ (19)
* •
Step 1. Definition 12 $\Longrightarrow$
$\Psi=\kappa^{0}\int_{\Omega}\frac{du^{\varepsilon}}{dx}T^{\ast}v\text{ }dx.$
* •
Step 2. Proposition 16 (to approximate $T^{\ast}$ by $B$), the Green formula
(12), the linearity of integrals, and again Proposition 16 (to approximate $B$
by $T^{\ast}$) $\Longrightarrow$
$\Psi=-\kappa^{0}\int_{\Omega}u^{\varepsilon}T^{\ast}(\frac{\partial
v}{\partial x^{\sharp}})\text{
}dx-\frac{\kappa^{0}}{\varepsilon}\int_{\Omega}u^{\varepsilon}T^{\ast}(\frac{\partial
v}{\partial x^{1}})\text{
}dx-\kappa^{0}\int_{\Omega}u^{\varepsilon}T^{\ast}(\frac{\partial^{2}v}{\partial
x^{1}\partial x^{\sharp}}x^{1})\text{ }dx+O(\varepsilon).$
* •
Step 3. Definition 12 $\Longrightarrow$
$\displaystyle\Psi$ $\displaystyle=$
$\displaystyle-\kappa^{1}\int_{\Omega^{\sharp}\times\Omega^{1}}T(u^{\varepsilon})\frac{\partial
v}{\partial x^{\sharp}}\text{
}dx^{\sharp}dx^{1}-\frac{\kappa^{1}}{\varepsilon}\int_{\Omega^{\sharp}\times\Omega^{1}}T(u^{\varepsilon})\frac{\partial
v}{\partial x^{1}}\text{ }dx^{\sharp}dx^{1}$
$\displaystyle-\kappa^{1}\int_{\Omega^{\sharp}\times\Omega^{1}}T(u^{\varepsilon})x^{1}\frac{\partial^{2}v}{\partial
x^{1}\partial x^{\sharp}}\text{ }dx^{\sharp}dx^{1}+O(\varepsilon).$
* •
Step 4. Assumption (16) $\Longrightarrow$
$\displaystyle\Psi$ $\displaystyle=$
$\displaystyle-\kappa^{1}\int_{\Omega^{\sharp}\times\Omega^{1}}u^{0}\frac{\partial
v}{\partial x^{\sharp}}\text{
}dx^{\sharp}dx^{1}-\frac{\kappa^{1}}{\varepsilon}\int_{\Omega^{\sharp}\times\Omega^{1}}u^{0}\frac{\partial
v}{\partial x^{1}}\text{
}dx^{\sharp}dx^{1}-\kappa^{1}\int_{\Omega^{\sharp}\times\Omega^{1}}u^{1}\frac{\partial
v}{\partial x^{1}}\text{ }dx^{\sharp}dx^{1}$
$\displaystyle-\kappa^{1}\int_{\Omega^{\sharp}\times\Omega^{1}}u^{0}\frac{\partial^{2}v}{\partial
x^{1}\partial x^{\sharp}}x^{1}+O(\varepsilon).$
* •
Step 5. The Green formula (12), Lemma 20, and passing to the limit when
$\varepsilon\rightarrow 0$ $\Longrightarrow$
$\kappa^{1}\int_{\Omega^{\sharp}\times\Omega^{1}}\eta\text{ }v\text{
}dx^{\sharp}dx^{1}=\kappa^{1}\int_{\Omega^{\sharp}\times\Omega^{1}}\frac{\partial
u^{1}}{\partial x^{1}}v\text{ }dx^{\sharp}dx^{1}.$
* •
Step 6. Proposition 1 $\Longrightarrow$
$\eta=\frac{\partial u^{1}}{\partial x^{1}}.$
###### Lemma 22
[Third Block: Microscopic Boundary Condition] $\tilde{u}^{1}$ is
$\Omega^{1}$-periodic.
Proof. In (19), we choose
$v\in\mathcal{C}_{0}^{\infty}(\Omega^{\sharp};\mathcal{C}_{\sharp}^{\infty}(\Omega^{1}))$.
* •
Step 1. The steps 1-5 of the second block $\Longrightarrow$
$\kappa^{1}\int_{\Omega^{\sharp}\times\Omega^{1}}\eta v\text{
}dx^{\sharp}dx^{1}-\kappa^{1}\int_{\Omega^{\sharp}\times\Gamma^{1}}(u^{1}-x^{1}\frac{\partial
u^{0}}{\partial x^{\sharp}})v\text{ }n_{\Gamma^{1}}\text{
}dx^{\sharp}dx^{1}-\kappa^{1}\int_{\Omega^{\sharp}\times\Omega^{1}}\frac{\partial
u^{1}}{\partial x^{1}}v\text{ }dx^{\sharp}dx^{1}=0.$
* •
Step 2. Lemma 21 $\Longrightarrow$
$\int_{\Omega^{\sharp}\times\Gamma^{1}}(u^{1}-x^{1}\frac{\partial
u^{0}}{\partial x^{\sharp}})v\text{ }n_{\Gamma^{1}}\text{
}dx^{\sharp}ds(x^{1})=0.$ (20)
* •
Step 3. Definition (17) of $\tilde{u}^{1}$ and Proposition 2 $\Longrightarrow$
$\tilde{u}^{1}\text{ is }\Omega^{1}\text{-periodic.}$ (21)
###### Lemma 23
[Fourth Block: Macroscopic Boundary Condition] $u^{0}$ vanishes on
$\Gamma^{\sharp}$.
Proof. We choose $v\in\mathcal{C}_{0}^{\infty}(\Omega^{\sharp})$,
* •
Step 1. The steps 1-5 of the second block and $u^{\varepsilon}=0$ on $\Gamma$
$\Longrightarrow$
$\int_{\Gamma^{\sharp}\times\Omega^{1}}u^{0}v\text{ }n_{\Gamma^{\sharp}}\text{
}ds(x^{\sharp})dx^{1}=0.$
* •
Step 2. Proposition 1 $\Longrightarrow$
$u^{0}=0\text{ on }\Gamma^{\sharp}.$
### 2.3 Homogenized Model Derivation
Here we provide the reference proof of the homogenized model derivation. It
uses Proposition 19 as an intermediary result. Let $u^{\varepsilon}$ be the
solution of a linear boundary value problem posed in $\Omega,$
$\left\\{\begin{array}[]{l}-\dfrac{d}{dx}(a^{\varepsilon}(x)\dfrac{du^{\varepsilon}(x)}{dx})=f\text{
in }\Omega\\\ u^{\varepsilon}=0\text{ on }\Gamma,\end{array}\right.$ (22)
where the right-hand side $f\in L^{2}(\Omega),$ the coefficient
$a^{\varepsilon}\in\mathcal{C}^{\infty}(\Omega)$ is
$\varepsilon\Omega^{1}$-periodic, and there exist two positive constants
$\alpha$ and $\beta$ independent of $\varepsilon$ such that
$0<\alpha\leq a^{\varepsilon}(x)\leq\beta.$ (23)
The weak formulation is obtained by multiplication of the differential
equation by a test function $v\in\mathcal{C}_{0}^{\infty}(\Omega)$ and
application of the Green formula,
$\kappa^{0}\int_{\Omega}a^{\varepsilon}(x)\frac{du^{\varepsilon}}{dx}\frac{dv}{dx}\text{
}dx=\kappa^{0}\int_{\Omega}f(x)v(x)\text{ }dx.$ (24)
It is known that its unique solution $u^{\varepsilon}$ is bounded as in (15).
Moreover, we assume that for some functions $a^{0}(x^{1})$ and
$f^{0}(x^{\sharp}),$
$T(a^{\varepsilon})=a^{0}\text{ and
}T(f)=f^{0}(x^{\sharp})+O_{w}(\varepsilon).$ (25)
The next proposition states the homogenized model and is the main result of
the reference proof. For $\theta^{1}$ a solution to the microscopic problem
(13) with $\mu=1,$ the homogenized coefficient and right-hand side are defined
by
$a^{H}=\int_{\Omega^{1}}a^{0}\left(1+\frac{\partial\theta^{1}}{\partial
x^{1}}\right)^{2}\text{ }dx^{1}\text{ and }f^{H}=\int_{\Omega^{1}}f^{0}\text{
}dx^{1}.$ (26)
###### Proposition 24
[Homogenized Model] The limit $u^{0}$ is solution to the weak formulation
$\int_{\Omega^{\sharp}}a^{H}\frac{du^{0}}{dx^{\sharp}}\frac{dv^{0}}{dx^{\sharp}}\text{
}dx^{\sharp}=\int_{\Omega^{\sharp}}f^{H}v^{0}\text{ }dx^{\sharp}$ (27)
for all $v^{0}\in\mathcal{C}_{0}^{\infty}(\Omega^{\sharp}).$
The proof is split into three lemmas.
###### Lemma 25
[Fifth Block: Two-Scale Model] The couple $(u^{0},\widetilde{u}^{1})$ is
solution to the two-scale weak formulation
$\int_{\Omega^{\sharp}\times\Omega^{1}}a^{0}\left(\frac{\partial
u^{0}}{\partial x^{\sharp}}+\frac{\partial\widetilde{u}^{1}}{\partial
x^{1}}\right)\left(\frac{\partial v^{0}}{\partial x^{\sharp}}+\frac{\partial
v^{1}}{\partial x^{1}}\right)\text{
}dx^{\sharp}dx^{1}=\int_{\Omega^{\sharp}\times\Omega^{1}}f^{0}v^{0}\text{
}dx^{\sharp}dx^{1}$ (28)
for any $v^{0}\in\mathcal{C}_{0}^{\infty}(\Omega^{\sharp})$ and
$v^{1}\in\mathcal{C}_{0}^{\infty}(\Omega^{\sharp},C_{\sharp}^{\infty}(\Omega^{1})).$
Proof. We choose the test functions
$v^{0}\in\mathcal{C}_{0}^{\infty}(\Omega^{\sharp})$,
$v^{1}\in\mathcal{C}_{0}^{\infty}(\Omega^{\sharp},C_{\sharp}^{\infty}(\Omega^{1}))$.
* •
Step 1 Posing $v=B(v^{0}+\varepsilon v^{1})$ in (24) and Proposition 14
$\Longrightarrow$
$Bv\in\mathcal{C}_{0}^{\infty}(\Omega)\text{ and
}\kappa^{0}\int_{\Omega}a^{\varepsilon}\frac{du^{\varepsilon}}{dx}\frac{dB(v^{0}+\varepsilon
v^{1})}{dx}\text{ }dx=\kappa^{0}\int_{\Omega}f\text{ }B(v^{0}+\varepsilon
v^{1})\text{ }dx.$
* •
Step 2 Propositions 15 and 16 $\Longrightarrow$
$\kappa^{0}\int_{\Omega}a^{\varepsilon}\frac{du^{\varepsilon}}{dx}T^{\ast}\left(\frac{\partial
v^{0}}{\partial x^{\sharp}}+\frac{\partial v^{1}}{\partial
x^{1}}\right)dx=\kappa^{0}\int_{\Omega}f\text{
}T^{\ast}(v^{0})dx+O(\varepsilon).$
* •
Step 3 Definition 12 and Proposition 6 $\Longrightarrow$
$\kappa^{1}\int_{\Omega^{\sharp}\times\Omega^{1}}T(a^{\varepsilon})T(\frac{du^{\varepsilon}}{dx})\left(\frac{\partial
v^{0}}{\partial x^{\sharp}}+\frac{\partial v^{1}}{\partial x^{1}}\right)\text{
}dx^{\sharp}dx^{1}=\kappa^{1}\int_{\Omega^{\sharp}\times\Omega^{1}}T(f)\text{
}v^{0}\text{ }dx^{\sharp}dx^{1}+O(\varepsilon).$ (29)
* •
Step 4 Definitions (25), Proposition 19, and passing to the limit when
$\varepsilon\rightarrow 0$ $\Longrightarrow$
$\int_{\Omega^{\sharp}\times\Omega^{1}}a^{0}\left(\frac{\partial
u^{0}}{\partial x^{\sharp}}+\frac{\partial\widetilde{u}^{1}}{\partial
x^{1}}\right)\left(\frac{\partial v^{0}}{\partial x^{\sharp}}+\frac{\partial
v^{1}}{\partial x^{1}}\right)\text{
}dx^{\sharp}dx^{1}=\int_{\Omega^{\sharp}\times\Omega^{1}}f^{0}v^{0}\text{
}dx^{\sharp}dx^{1}$
which is the expected result.
###### Lemma 26
[Sixth Block: Microscopic Problem] $\widetilde{u}^{1}$ is solution to (13)
with $\mu=\dfrac{\partial u^{0}}{\partial x^{\sharp}}$ and
$\frac{\partial\widetilde{u}^{1}}{\partial x^{1}}=\dfrac{\partial
u^{0}}{\partial x^{\sharp}}\frac{\partial\theta^{1}}{\partial x^{1}}.$
Proof. We choose $v^{0}=0$ and
$v^{1}(x^{\sharp},x^{1})=w^{1}(x^{1})\varphi(x^{\sharp})$ in (28) with
$\varphi\in\mathcal{C}^{\infty}(\Omega^{\sharp})$ and
$w^{1}\in\mathcal{C}_{\sharp}^{\infty}(\Omega^{1})$.
* •
Step 1 Proposition 1, Lemma 20, and the linearity of the integral
$\Longrightarrow$
$\int_{\Omega^{1}}a^{0}\frac{\partial\widetilde{u}^{1}}{\partial
x^{1}}\frac{\partial w^{1}}{\partial x^{1}}\text{ }dx^{1}=-\frac{\partial
u^{0}}{\partial x^{\sharp}}\int_{\Omega^{1}}a^{0}\frac{\partial
w^{1}}{\partial x^{1}}\text{ }dx^{1}.$ (30)
* •
Step 2 Proposition 18 with $\mu=\dfrac{\partial u^{0}}{\partial x^{\sharp}}$
$\Longrightarrow$
$\frac{\partial\widetilde{u}^{1}}{\partial x^{1}}=\dfrac{\partial
u^{0}}{\partial x^{\sharp}}\frac{\partial\theta^{1}}{\partial x^{1}}$
as announced.
###### Lemma 27
[Seventh Block: Macroscopic Problem] $u^{0}$ is solution to (27).
Proof. We choose $v^{0}\in\mathcal{C}_{0}^{\infty}(\Omega^{\sharp})$ and
$v^{1}=\dfrac{\partial v^{0}}{\partial
x^{\sharp}}\dfrac{\partial\theta^{1}}{\partial
x^{1}}\in\mathcal{C}_{0}^{\infty}(\Omega^{\sharp},C_{\sharp}^{\infty}(\Omega^{1}))$
in (28).
* •
Step 1 Lemma 26 $\Longrightarrow$
$\int_{\Omega^{\sharp}\times\Omega^{1}}a^{0}\left(\frac{\partial
u^{0}}{\partial x^{\sharp}}+\frac{\partial\theta^{1}}{\partial
x^{1}}\frac{\partial u^{0}}{\partial x^{\sharp}}\right)\left(\frac{\partial
v^{0}}{\partial x^{\sharp}}+\frac{\partial\theta^{1}}{\partial
x^{1}}\frac{\partial v^{0}}{\partial x^{\sharp}}\right)\text{
}dx^{\sharp}dx^{1}=\int_{\Omega^{\sharp}\times\Omega^{1}}f^{0}v^{0}\text{
}dx^{\sharp}dx^{1}\text{.}$ (31)
* •
Step 2 Factorizing and definitions (26) $\Longrightarrow$
$\int_{\Omega^{\sharp}}a^{H}\frac{\partial u^{0}}{\partial
x^{\sharp}}\frac{\partial v^{0}}{\partial x^{\sharp}}\text{
}dx^{\sharp}=\int_{\Omega^{\sharp}}f^{H}v^{0}\text{ }dx^{\sharp}.$
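As a sanity check of the homogenized coefficient (26), note that in dimension
one the microscopic problem (13) with $\mu=1$ can be integrated in closed
form: $a^{0}(1+\partial_{x^{1}}\theta^{1})$ is constant on $\Omega^{1}$, and
the periodicity of $\theta^{1}$ forces this constant to be the harmonic mean
of $a^{0}$, which then equals $a^{H}$. The sketch below is our own
illustration (assuming $\Omega^{1}=(0,1)$ and a particular smooth periodic
coefficient); it evaluates (26) by quadrature and compares it with that
closed form.

```python
# A numerical sanity check (ours) of the homogenized coefficient (26),
# assuming Omega^1 = (0, 1) and a particular smooth periodic coefficient.
# In 1D the cell problem (13) with mu = 1 gives a^0 (1 + d theta^1/dx^1) = c,
# and the periodicity of theta^1 forces c to be the harmonic mean of a^0.
import numpy as np

M = 20000
x1 = (np.arange(M) + 0.5) / M                 # midpoint quadrature nodes on (0,1)
a0 = 2.0 + np.sin(2 * np.pi * x1)             # a periodic coefficient a^0(x^1)

c = 1.0 / np.mean(1.0 / a0)                   # harmonic mean of a^0
dtheta1 = c / a0 - 1.0                        # d theta^1 / d x^1
assert abs(dtheta1.mean()) < 1e-12            # theta^1 is indeed Omega^1-periodic

aH = np.mean(a0 * (1.0 + dtheta1) ** 2)       # formula (26) by quadrature
print(aH, c)                                  # the two values coincide
```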
## 3 Rewriting strategies
In this section we recall the rudiments of rewriting, namely, the definitions
of terms over a signature, of substitution and of rewriting rules. We
introduce a strategy language: its syntax and semantics in terms of partial
functions. This language will allow us to express most of the useful rewriting
strategies.
### 3.1 Term, substitution and rewriting rule.
We start with an example of rewriting rule. We define a set of rewriting
variables $\mathcal{X}=\\{x,y\\}$ and a set of function symbols
$\Sigma=\\{f,g,a,b,c\\}$. A term is a combination of elements of
$\mathcal{X}\cup\Sigma,$ for instance $f(x)$ or $f(a)$. The rewriting rule
$f(x)\leadsto g(x)$ applied to a term $f(a)$ is a two-step operation. First,
it consists in matching the left term $f(x)$ with the input term $f(a)$ by
matching the two occurrences of the function symbol $f,$ and by matching the
rewriting variable $x$ with the function symbol $a$. Then, the result $g(a)$
of the rewriting operation is obtained by replacing the rewriting variable $x$
occurring in the right hand side $g(x)$ by the subterm $a$ that has been
associated with $x$. In the case where no such matching substitution exists, as in the
application of $f(b)\leadsto g(x)$ to $f(a)$, we say that the rewriting
rule fails.
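As an aside, the matching and replacement steps just described can be
sketched in a few lines of Python (our own illustration, unrelated to the
Maple implementation discussed later); terms are nested tuples, and the
variable set and rule names are hypothetical.

```python
# A minimal sketch (ours) of first-order matching, substitution and rule
# application. A term is either a variable (a string in VARS) or a tuple
# (symbol, arg1, ..., argn); constants are nullary applications such as ("a",).

VARS = {"x", "y"}                      # rewriting variables (hypothetical names)

def match(pattern, term, subst=None):
    """Return a substitution sigma with sigma(pattern) == term, or None."""
    subst = dict(subst or {})
    if pattern in VARS:                # a variable matches any term, consistently
        if pattern in subst and subst[pattern] != term:
            return None
        subst[pattern] = term
        return subst
    if (not isinstance(term, tuple) or pattern[0] != term[0]
            or len(pattern) != len(term)):
        return None                    # head symbol or arity clash: failure
    for p, t in zip(pattern[1:], term[1:]):
        subst = match(p, t, subst)
        if subst is None:
            return None
    return subst

def apply_subst(subst, term):
    """Simultaneously replace every variable of term by its sigma-image."""
    if term in VARS:
        return subst.get(term, term)
    return (term[0],) + tuple(apply_subst(subst, t) for t in term[1:])

def rewrite(rule, term):
    """Apply the rule l ~> r at the top position; return None on failure."""
    l, r = rule
    sigma = match(l, term)
    return None if sigma is None else apply_subst(sigma, r)

# The running example: f(x) ~> g(x) applied to f(a) yields g(a), while
# f(b) ~> g(x) applied to f(a) fails because b does not match a.
print(rewrite((("f", "x"), ("g", "x")), ("f", ("a",))))      # ('g', ('a',))
print(rewrite((("f", ("b",)), ("g", "x")), ("f", ("a",))))   # None
```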
###### Definition 28
Let $\Sigma$ be a countable set of function symbols, each symbol $f\in\Sigma$
is associated with a non-negative integer $n$, its _arity_ $ar(f)$ i.e. the
number of arguments of $f$. Let $\mathcal{X}$ be a countable set of variables
such that $\Sigma\cap\mathcal{X}=\emptyset$. The set of terms, denoted by
$\mathcal{T}(\Sigma,\mathcal{X})$, is inductively defined by
* •
$\mathcal{X}\subseteq\mathcal{T}(\Sigma,\mathcal{X})$ (i.e. every rewriting
variable is a term),
* •
for all $f\in\Sigma$ of arity $n$, and all
$t_{1},\ldots,t_{n}\in\mathcal{T}(\Sigma,\mathcal{X})$, the expression
$f(t_{1},\ldots,t_{n})\in\mathcal{T}(\Sigma,\mathcal{X})$ (i.e. the
application of function symbols to terms gives rise to terms).
We denote by $\Sigma_{n}$ the subset of $\Sigma$ of the function symbols of
arity $n$. For instance in the example $f$ and $g$ belong to $\Sigma_{1}$
while $a$ and $b$ belong to $\Sigma_{0}$. Two other common examples of terms
are the expressions $Integral(\Omega,f(x),x)$ and diff$(f(x),x)$ which
represent the expressions $\int_{\Omega}f(x)\,dx$ and $\dfrac{df(x)}{dx}$.
Notice that $Integral\in\Sigma_{3},$ diff$\in\Sigma_{2},$ $f\in\Sigma_{1}$ and
$x,\Omega\in\Sigma_{0}$. For the sake of simplicity we often keep the symbolic
mathematical notation to express the rewriting rules. In the following we see
a term as an oriented, ranked and rooted tree as it is usual in symbolic
computation. We recall that in a ranked tree the child order is important. For
instance the tree associated to the term $Integral(\Omega,f(x),x)$ has
$Integral$ as its root which has three children in the order $\Omega,$ $f,$
$x$ and $f$ has one child $x$.
###### Definition 29
A _substitution_ is a function
$\sigma:\mathcal{X}\rightarrow\mathcal{T}(\Sigma,\mathcal{X})$ such that
$\sigma(x)\neq x$ for $x\in\mathcal{X}$. The set of variables that $\sigma$
does not map to themselves is called the _domain_ of $\sigma$, i.e.
$Dom(\sigma)=\\{{x\in\mathcal{X}\;|\;\sigma(x)\neq x}\\}$. If
$Dom(\sigma)=\\{{x_{1},\cdots,x_{n}}\\}$ then we might write $\sigma$ as
$\sigma=\\{x_{1}\mapsto t_{1},\ldots,x_{n}\mapsto t_{n}\\}$ for some terms
$t_{1},..,t_{n}$. Any substitution $\sigma$ can be extended to a mapping
$\mathcal{T}(\Sigma,\mathcal{X})\rightarrow\mathcal{T}(\Sigma,\mathcal{X})$ as
follows: for $x\in\mathcal{X},$ $\hat{\sigma}(x)=\sigma(x)$, and for any non-
variable term $s=f(s_{1},\cdots,s_{n})$, we define
$\hat{\sigma}(s)=f(\hat{\sigma}(s_{1}),\cdots,\hat{\sigma}(s_{n}))$. To
simplify the notation we do not distinguish between a substitution
$\sigma:\mathcal{X}\rightarrow\mathcal{T}(\Sigma,\mathcal{X})$ and its
extension
$\hat{\sigma}:\mathcal{T}(\Sigma,\mathcal{X})\rightarrow\mathcal{T}(\Sigma,\mathcal{X})$.
The _application_ of a substitution $\sigma$ to a term $t$, denoted by
$\sigma(t)$, simultaneously replaces all occurrences of variables in $t$ by
their $\sigma$-images.
For instance, the mapping $\sigma$ defined by $\sigma(x)=a$ is a substitution
and its extension $\hat{\sigma}$ maps $f(x)$ and $g(x)$ into $f(a)$ and
$g(a)$.
A _rewriting rule_ , is a pair $(l,r)$ where $l$ and $r$ are terms in
$\mathcal{T}(\Sigma,\mathcal{X})$; it will also be denoted by $l\leadsto r$.
We observe that for any two terms $s,t$, there exists at most one substitution
$\sigma$ such that $\sigma(s)=t$. We mention that a rewriting rule stands for
the rule application at the top position. It is more useful to be able to
apply a rule at an arbitrary position, and more generally to specify the way
rules are applied. For this purpose we next present a strategy language that
allows one to build strategies out of basic constructors. To this end, we
introduce strategy constructor symbols $;,\leadsto,\oplus,\mu,etc$ that do not
belong to $\Sigma\cup\mathcal{X}$. Informally, the constructor $";"$ stands
for the composition, $"\oplus"$ for the left choice, $Some$ for the
application of a strategy to the immediate subterms of the input term,
$\eta(x)$ for the fail as identity constructor, $Child(j,s)$ applies the
strategy $s$ to the $j^{\text{th}}$ immediate subterm, $X$ is a fixed-point
variable, and $\mu$ is the fixed-point or the iterator constructor, its
purpose is to define recursive strategies. For example, the strategy $\mu
X.(s;X)$ stands for $s;s;\ldots$, that is, it is the iteration of the
application of $s$ until a fixed-point is reached. The precise semantics of
these constructors is given in Definition 31.
###### Definition 30
(Strategy) Let $\mathcal{F}$ be a finite set of fixed-point variables. A
strategy is inductively defined by the following grammar:
$s::=l\leadsto r\;\;|\;\;s;s\;\;|\;\;s\oplus
s\;\;|\;\;\eta(s)\;\;|\;\;Some(s)\;\;|\;\;Child(j,s)\;\;|\;\;X\;\;|\;\;\mu
X.s$ (32)
where $j\in\mathbb{N}$ and $X\in\mathcal{F}$. The set of strategies defined
from a set of rewriting rules in
$\mathcal{T}(\Sigma,\mathcal{X})\times\mathcal{T}(\Sigma,\mathcal{X})$ is
denoted by $\mathcal{S}_{\mathcal{T}}$.
We denote by $\mathbb{F}$ the failing result of a strategy and
$\mathcal{T}^{\ast}(\Sigma,\mathcal{X})=\mathcal{T}(\Sigma,\mathcal{X})\cup\mathbb{F}.$
###### Definition 31
(Semantics of a strategy) The semantics of a strategy is a function
$[\\![.]\\!]:\mathcal{S}_{\mathcal{T}(\Sigma,\mathcal{X})}\rightarrow(\mathcal{T}^{\ast}(\Sigma,\mathcal{X})\rightarrow\mathcal{T}^{\ast}(\Sigma,\mathcal{X}))$
defined by its application to each grammar component:
$[\\![s]\\!](\mathbb{F})=\mathbb{F}$
$[\\![l\leadsto r]\\!](t)=\begin{cases}\sigma(r)&\text{ if }\sigma(l)=t\\\
\mathbb{F}&\text{otherwise}\end{cases}$
$[\\![s_{1};s_{2}]\\!](t)=[\\![s_{2}]\\!]([\\![s_{1}]\\!](t))$
$[\\![s_{1}\oplus s_{2}]\\!](t)=\begin{cases}[\\![s_{1}]\\!](t)&\text{ if
}[\\![s_{1}]\\!](t)\neq\mathbb{F}\\\
[\\![s_{2}]\\!](t)&\text{otherwise}\end{cases}$
$[\\![\eta(s)]\\!](t)=\begin{cases}t&\text{ if
}[\\![s]\\!](t)=\mathbb{F}\\\\[0.0pt]
[\\![s]\\!](t)&\text{otherwise}\end{cases}$
$[\\![Some(s)]\\!](t)=\begin{cases}\mathbb{F}&\text{ if }ar(t)=0\\\
f(\eta(s)(t_{1}),\ldots,\eta(s)(t_{n}))&\text{ if
}t=f(t_{1},\ldots,t_{n})\textrm{ and }\exists i\in[1..n]\text{ s.t.
}[\\![s]\\!](t_{i})\neq\mathbb{F}\\\ \mathbb{F}&\text{ otherwise}\end{cases}$
$[\\![Child(j,s)]\\!](t)=\begin{cases}\mathbb{F}\text{ }\text{ if
}ar(t)=0,\text{ or }t=f(t_{1},\ldots,t_{n})\textrm{ and }j>n\\\
f(t_{1},\ldots,t_{j-1},[\\![s]\\!](t_{j}),t_{j+1},\ldots,t_{n})\text{ }\text{
if }t=f(t_{1},\ldots,t_{n})\textrm{ and }j\leq n.\end{cases}$
The semantics of the fixed-point constructor is more subtle. One would write:
$[\\![\mu X.s]\\!]=[\\![s[X/\mu X.s]]\\!]$ (33)
but this equation cannot be directly used to define $[\\![\mu X.s]\\!]$, since
the right-hand side contains as a subphrase the phrase whose denotation we are
trying to define. Notice that the equation (33) amounts to saying that
$[\\![\mu X.s]\\!]$ should be the least fixed-point of the operator $F$:
$F=\lambda
X^{(\mathcal{T}^{\ast}(\Sigma,\mathcal{X})\rightarrow\mathcal{T}^{\ast}(\Sigma,\mathcal{X}))}\;[\\![s]\\!]^{(\mathcal{T}^{\ast}(\Sigma,\mathcal{X})\rightarrow\mathcal{T}^{\ast}(\Sigma,\mathcal{X}))}.$
Let
$D=\mathcal{T}^{\ast}(\Sigma,\mathcal{X})\rightarrow\mathcal{T}^{\ast}(\Sigma,\mathcal{X})$
and define $\sqsubseteq$ a partial order on $D$ as follows:
$w\sqsubseteq w^{\prime}\text{ iff }graph(w)\subseteq graph(w^{\prime}).$
Let $\bot$ be the function of empty graph, and let
$\displaystyle F_{0}$ $\displaystyle=\bot$ $\displaystyle F_{n}$
$\displaystyle=F(F_{n-1}).$
One can show, using the Knaster-Tarski fixed-point theorem [Tar55], that
$F_{\infty}$, the supremum of the increasing chain $(F_{n})_{n\geq 0}$, is the
least fixed-point of the operator $F$, that is
$F(w)=w\implies F_{\infty}\sqsubseteq w.$
Such fixed point equations arise very often in giving denotational semantics
to languages with recursive features, for instance the semantics of the loop
“while” of the programming languages [SK95, §9, §10].
###### Example 32
Out of the basic constructors of strategies given in Definition 30, we build
up some useful strategies. The strategy $TopDown(s)$ applies the strategy $s$
to an input term $t$ in a top down way starting from the root, it stops when
it succeeds. That is, if the strategy $s$ succeeds on some subterm
$t^{\prime}$ of $t$, then it is not applied to the proper subterms of
$t^{\prime}$. The strategy $OuterMost(s)$ behaves exactly like $TopDown(s)$
apart that if the strategy $s$ succeeds on some subterm $t^{\prime}$ of $t$,
then it is also applied to the proper subterms of $t^{\prime}$. The strategy
$BottomUp(s)$ (resp. $InnerMost(s)$) behaves like $BottomUp(s)$ (resp.
$InnerMost(s)$) but in the opposite direction, i.e. it traverses a term $t$
starting from the leafs. The strategy $Normalizer(s)$ iterates the application
of $s$ until a fixed-point is reached. The formal definition of these
strategies follows:
$\displaystyle TopDown(s)$ $\displaystyle:=\mu X.(s\oplus Some(X)),$
$\displaystyle OuterMost(s)$ $\displaystyle:=\mu X.(s;Some(X)),$
$\displaystyle BottomUp(s)$ $\displaystyle:=\mu X.(Some(X)\oplus s),$
$\displaystyle InnerMost(s)$ $\displaystyle:=\mu X.(Some(X);s),$
$\displaystyle Normalizer(s)$ $\displaystyle:=\mu X.(s;X).$
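Under the same simplifying assumptions, these derived strategies translate directly into the toy combinators introduced above; the following sketch is only illustrative:

```python
def top_down(s):     # mu X.(s (+) Some(X))
    return mu(lambda X: choice(s, some(X)))

def outer_most(s):   # mu X.(s ; Some(X))
    return mu(lambda X: seq(s, some(X)))

def bottom_up(s):    # mu X.(Some(X) (+) s)
    return mu(lambda X: choice(some(X), s))

def inner_most(s):   # mu X.(Some(X) ; s)
    return mu(lambda X: seq(some(X), s))

def normalizer(s):   # mu X.(s ; X)
    return mu(lambda X: seq(s, X))

# Example: push the rule g(?x) ~> h(?x) down a term until it first succeeds.
r = rule(("g", ["?x"]), ("h", ["?x"]))
t = ("f", [("g", [("a", [])]), ("b", [])])
print(top_down(r)(t))   # ('f', [('h', [('a', [])]), ('b', [])])
```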
###### Example 33
Let the variable set be $\mathcal{X}=\\{y,z,t,w\\}$ and consider the partition
$\Sigma=\Sigma_{0}\cup\Sigma_{1}\cup\Sigma_{2}\cup\Sigma_{3}$ of the set of function symbols
with respect to their arity, with
$\Sigma_{0}=\\{x,x^{1},x^{2},\partial\Omega,\Omega,\varepsilon\\},$
$\Sigma_{1}=\\{u,v,n,O,B\\},$ $\Sigma_{2}=\\{$derivative$\\},$ and
$\Sigma_{3}=\\{$Integral$\\}$, with obvious definitions. We present the
strategy that rewrites the expression
$\Psi=\int_{\partial{\Omega}}u(x)\text{ }n(x)\text{
}B(v(x^{1},x^{2}))\;dx-\int_{\Omega}u(x)\;\frac{d}{dx}({B(v(x^{1},x^{2})))}\text{
}dx+O(\varepsilon),$
taking into account that $B(v)$ vanishes on the boundary $\partial{\Omega}$.
This term is written in its usual mathematical form for simplicity, but in practice
it is built from the function symbols defined above. Remark that the
expression $B(v(x^{1},x^{2}))$ is a function of the variable $x$, but this does
not appear explicitly in this formulation. Such a case cannot occur when the
grammar for terms introduced in the next section is used. We need the two
rewriting rules
$\displaystyle r_{1}$ $\displaystyle:=\int_{\partial\Omega}w\text{
}dt\leadsto\int_{\partial\Omega}w\text{ }dt,$ $\displaystyle r_{2}$
$\displaystyle:=B(v(z,y))\leadsto 0,$
and the strategy $TopDown$ already defined. Notice that the rule $r_{1}$ has
no effect other than to detect the presence of the integral over the boundary.
Finally, the desired strategy is:
$F:=TopDown(r_{1};TopDown(r_{2})),$
and the result is
$[\\![F]\\!](\Psi)=\int_{\partial{\Omega}}u(x)\text{ }n(x)\text{
}B(0)\;dx-\int_{\Omega}u(x)\;\frac{d}{dx}(B(v(x^{1},x^{2})))\text{
}dx+O(\varepsilon).$
### 3.2 Rewriting modulo equational theories
So far the semantics of strategies does not take into account the properties
of some function symbols, e.g. the associativity and commutativity equalities of
$+$. In particular, the application of the rule $a+b\leadsto f(a,b)$ to the
term $(a+c)+b$ fails. More generally, we next consider rewriting modulo an
equational theory, i.e. a theory that is axiomatized by a set of equalities.
For the sake of illustration, we consider the commutativity and associativity
theory of $+$, $E=\\{x+y=y+x,(x+y)+z=x+(y+z)\\}$, and the rewrite rule
$f(x+y)\leadsto f(x)+f(y)$ expressing the linearity of a function $f$. Its
application to the term $f((a+b)+c)$ modulo $E$ yields the set of terms
$\\{f(a+b)+f(c),$ $f(a)+f(b+c),$ $f(b)+f(a+c)\\}.$ In the following, we
define part of the semantics of a strategy modulo a theory; we use the
notation $\mathcal{P}(\mathcal{T}(\Sigma,\mathcal{X}))$ to denote the set of
subsets of $\mathcal{T}(\Sigma,\mathcal{X})$.
###### Definition 34
(Semantics of a strategy modulo) Let $E$ be a finitary equational theory;
the semantics of a strategy modulo $E$ is a function
${[\\![.]\\!]}^{E}:\mathcal{S}_{\mathcal{T}(\Sigma,\mathcal{X})}\rightarrow(\mathcal{P(T}^{\ast}(\Sigma,\mathcal{X}))\rightarrow\mathcal{P(T}^{\ast}(\Sigma,\mathcal{X})))$
that is partly defined by
$\displaystyle{[\\![s]\\!]}^{E}(\\{{t_{1},\ldots,t_{n}}\\})=\cup_{i=1}^{n}{[\\![s]\\!]}^{E}({t_{i}})$
$\displaystyle{[\\![l\leadsto
r]\\!]}^{E}({t_{1}})=\cup_{j}\\{{\sigma_{j}(r)}\\}\text{ if
}E\implies\sigma_{j}(l)=t_{1},$
$\displaystyle[\\![s_{1};s_{2}]\\!]^{E}(t)=[\\![s_{2}]\\!]^{E}([\\![s_{1}]\\!]^{E}(t))$
$\displaystyle[\\![s_{1}\oplus
s_{2}]\\!]^{E}(t)=\begin{cases}[\\![s_{1}]\\!]^{E}(t)&\text{ if
}[\\![s_{1}]\\!](t)\neq\\{{\mathbb{F}}\\}\\\
[\\![s_{2}]\\!]^{E}(t)&\text{otherwise}\end{cases}$
$\displaystyle[\\![\eta(s)]\\!]^{E}(t)=\begin{cases}\\{{t}\\}&\text{ if
}[\\![s]\\!]^{E}(t)=\\{{\mathbb{F}}\\}\\\\[0.0pt]
[\\![s]\\!]^{E}(t)&\text{otherwise.}\end{cases}$
The semantics of $Some$ and $Child$ is more complex and we do not detail it
here. The semantics of the fixed-point operator is similar to the one given in
the rewriting modulo an empty theory.
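For illustration, the set-valued reading of this definition can be sketched for the commutativity of $+$ alone, by naively enumerating the terms $E$-equal to the subject before matching; the functions `match` and `substitute` are those of the toy sketch of Section 3.1, and such an enumeration is of course not how matching modulo a theory is implemented in practice.

```python
from itertools import product

def comm_variants(t):
    """All terms E-equal to t when E is only the commutativity of '+'."""
    if isinstance(t, str) or not t[1]:
        return [t]
    sym, args = t
    variants = []
    for children in product(*[comm_variants(a) for a in args]):
        variants.append((sym, list(children)))
        if sym == "+" and len(children) == 2:
            variants.append((sym, [children[1], children[0]]))
    return variants

def rule_modulo(lhs, rhs):
    """[[l ~> r]]^E : the set of results sigma_j(r) over the E-matchings."""
    def s(t):
        results = []
        for u in comm_variants(t):
            sigma = match(lhs, u)
            if sigma is not None:
                r = substitute(rhs, sigma)
                if r not in results:
                    results.append(r)
        return results or [FAIL]
    return s

# f(a + b) ~> f(a) + f(b) applied modulo commutativity to f(b + a):
lin = rule_modulo(("f", [("+", ["?x", "?y"])]),
                  ("+", [("f", ["?x"]), ("f", ["?y"])]))
t = ("f", [("+", [("b", []), ("a", [])])])
print(lin(t))   # two results: f(b)+f(a) and f(a)+f(b)
```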
### 3.3 Conditional rewriting
Rewriting with conditional rules, also known as conditional rewriting, extends
the basic rewriting with the notion of condition. A conditional rewrite rule
is a triplet:
$(l,r,c)$
where $c$ is a constraint expressed in some logic. The semantics of the rule
application is given by
${[\\![(l,r,c)]\\!]}^{E}(t)=\begin{cases}\cup_{j}\\{{\sigma_{j}(r)}\\}&\text{
if }\text{ the formula }\sigma_{j}(c)\text{ can be derived from }E,\\\
\mathbb{F}&\text{otherwise.}\end{cases}$
The set of strategies defined over rewriting rules
$(l,r,c)\in\mathcal{T}\times\mathcal{T}\times\mathcal{T}_{c}$ is denoted by
$\mathcal{S}_{\mathcal{T},\mathcal{T}_{c}}.$
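In the toy setting of the previous sketches, a conditional rule can be approximated by a rule that fires only when a predicate over the matching substitution holds; the condition is here a plain Python predicate, whereas in the framework of this paper it is a logical formula interpreted from $E$.

```python
def cond_rule(lhs, rhs, cond):
    """(l, r, c): rewrite only when the instantiated condition holds."""
    def s(t):
        if t == FAIL:
            return FAIL
        sigma = match(lhs, t)
        if sigma is not None and cond(sigma):
            return substitute(rhs, sigma)
        return FAIL
    return s

# A Kronecker-delta-style rule:  delta(i, j) ~> 1  when i = j.
delta_eq = cond_rule(("delta", ["?i", "?j"]), ("1", []),
                     lambda sg: sg["?i"] == sg["?j"])
print(delta_eq(("delta", [("k", []), ("k", [])])))  # ('1', [])
print(delta_eq(("delta", [("k", []), ("l", [])])))  # FAIL
```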
### 3.4 Rewriting with memory
Some definitions or computations require storing the history of the
transformations of some terms. To do so, we introduce a particular function
symbol $\mathbb{M}\in\Sigma_{2}$ of arity two to represent the memory.
Intuitively, the term $\mathbb{M}(t_{1},t_{2})$ represents the term $t_{1}$,
together with the additional information that $t_{2}$ was transformed into $t_{1}$ at
an earlier stage. From this consideration it follows that any strategy applied
to $\mathbb{M}(t_{1},t_{2})$ should only be applied to $t_{1}$. Formally, we
define the semantics of strategy application taking into account the memory as
a partial function:
${[\\![.]\\!]}_{{}_{\mathbb{M}}}:\mathcal{S}_{\mathcal{T}(\Sigma,\mathcal{X})}\rightarrow(\mathcal{T}^{\ast}(\Sigma,\mathcal{X})\rightarrow\mathcal{T}^{\ast}(\Sigma,\mathcal{X}))$
so that:
$[\\![s]\\!]_{\mathbb{M}}(t)=\mathbb{M}([\\![s]\\!]_{\mathbb{M}}(t_{1}),t_{2})$
if $t=\mathbb{M}(t_{1},t_{2})$, and behaves like $[\\![.]\\!]$, otherwise.
That is,
$[\\![s]\\!]_{\mathbb{M}}(\mathbb{F})=\mathbb{F}$
$[\\![l\leadsto r]\\!]_{\mathbb{M}}(t)=\begin{cases}\sigma(r)&\text{ if
}\sigma(l)=t\\\ \mathbb{F}&\text{otherwise}\end{cases}$
$[\\![s_{1};s_{2}]\\!]_{\mathbb{M}}(t)=[\\![s_{2}]\\!]_{\mathbb{M}}([\\![s_{1}]\\!]_{\mathbb{M}}(t))$
$[\\![s_{1}\oplus
s_{2}]\\!]_{\mathbb{M}}(t)=\begin{cases}[\\![s_{1}]\\!]_{\mathbb{M}}(t)&\text{
if }[\\![s_{1}]\\!]_{\mathbb{M}}(t)\neq\mathbb{F}\\\
[\\![s_{2}]\\!]_{\mathbb{M}}(t)&\text{otherwise}\end{cases}$
etc.
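The top-level clause of this memory-aware semantics can be sketched as a wrapper that unwraps $\mathbb{M}$ nodes before applying a strategy and restores the recorded history afterwards; this uses the toy term encoding of the earlier sketches and shows only the top-level behaviour, not its propagation through every strategy constructor.

```python
def with_memory(s):
    """[[s]]_M : apply s to t1 only when t = M(t1, t2), keeping the history t2."""
    def f(t):
        if t != FAIL and not isinstance(t, str) and t[0] == "M":
            t1, t2 = t[1]
            return ("M", [f(t1), t2])
        return s(t)
    return f

# The rule a ~> b rewrites only the current value, not the recorded history.
r = rule(("a", []), ("b", []))
print(with_memory(r)(("M", [("a", []), ("a", [])])))  # ('M', [('b', []), ('a', [])])
```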
## 4 A Symbolic Computation Framework for Model Derivation
In this section we propose a framework for the two-scale model proofs. As in
Example 33, the latter are formulated as rewriting strategies. We notice that
the following framework differs from that used in Example 33 in that it allows
for the complete representation of the data. It does not rely on external
structures such as hash tables. To this end, we define the syntax of the
mathematical expressions by means of a grammar $\mathcal{G}$.
### 4.1 A Grammar for Mathematical Expressions
The grammar includes four rules to build terms for mathematical functions
$\mathcal{F}$, regions $\mathcal{R}$, mathematical variables $\mathcal{V}$,
and boundary conditions $\mathcal{C}$. It involves $\Sigma_{Reg}$,
$\Sigma_{Var},$ $\Sigma_{Fun},$ $\Sigma_{Oper},$ and $\Sigma_{Cons}$, which are
the sets of names of regions, variables, functions, operators, and constants, and are therefore
subsets of $\Sigma_{0}$. Empty expressions in $\Sigma_{Reg}$ and
$\Sigma_{Fun}$ are denoted by $\bot_{\mathcal{R}}$ and $\bot_{\mathcal{F}}$.
The set of usual algebraic operations
$\Sigma_{Op}=\\{+,-,\times,/,\textasciicircum\\}$ is a subset of $\Sigma_{2}$. The elements of
$\Sigma_{Type}=\\{$Unknown, Test, Known, $\bot_{Type}\\}\subset\Sigma_{0},$
$\bot_{Type}$ denoting the empty expression, specify the nature of a
function, namely an unknown function (as $u^{\varepsilon},$ $u^{0},$ $u^{1}$
in the proof), a test function (as $v,$ $v^{0},$ $v^{1}$) in a weak
formulation, or another known function (as $a^{\varepsilon},$
$f^{\varepsilon},$ $a^{0},$ $f^{0}$ or $n_{\Gamma^{1}}$). The boundary
conditions satisfied by a function are specified by the elements of
$\Sigma_{BC}=\\{d,n,pd,apd,t\\}\subset\Sigma_{0}$, which express that it
satisfies Dirichlet, Neumann, periodic, anti-periodic or transmission
conditions. The grammar also involves the function symbols $\mathtt{Reg}$,
$\mathtt{Fun}$, $\mathtt{IndexedFun}$, $\mathtt{IndexedReg}$,
$\mathtt{IndexedVar}$, $\mathtt{Oper}$, $\mathtt{Var}$, and $\mathtt{BC}$ that
define regions, mathematical functions, indexed functions, regions or
variables, operators, mathematical variables and boundary conditions. The
grammar reads as
$\displaystyle\mathcal{F}::=$
$\displaystyle\;\circledast(\mathcal{F},\mathcal{F})\;\;|\;\;d\;\;|\;\;\mathcal{V}\;\;\;|$
$\displaystyle\mathtt{Fun}(f,[\mathcal{V},\ldots,\mathcal{V}],[\mathcal{C},\ldots,\mathcal{C}],K)\;\;|$
$\displaystyle\mathtt{IndexedFun}(\mathcal{F},\mathcal{V})\;\;|$
$\displaystyle\mathtt{Oper}(A,[\mathcal{F},\ldots,\mathcal{F}],[\mathcal{V},\ldots,\mathcal{V}],[\mathcal{V},\ldots,\mathcal{V}],[d,\ldots,d])\;\;|$
$\displaystyle\bot_{\mathcal{F}}\;\;\;|\;\;\mathbb{M}(\mathcal{F},\mathcal{F}),$
$\displaystyle\mathcal{R}::=\;$
$\displaystyle\mathtt{Reg}(\Omega,[d,\ldots,d],\\{{\mathcal{R},\ldots,\mathcal{R}}\\},\mathcal{R},\mathcal{F})\;\;|$
$\displaystyle\mathtt{IndexedReg}(\mathcal{F},\mathcal{V})\;\;|$
$\displaystyle\bot_{\mathcal{R}}\;\;|\;\;\mathbb{M}(\mathcal{R},\mathcal{R}),$
$\displaystyle\mathcal{V}::=\;$
$\displaystyle\mathtt{Var}(x,\mathcal{R})\;\;|\;\;\mathtt{IndexedVar}(\mathcal{V},\mathcal{V})\;\;|\;\;\mathbb{M}(\mathcal{V},\mathcal{V}),$
$\displaystyle\mathcal{C}::=\;$
$\displaystyle\mathtt{BC}(c,\mathcal{R},\mathcal{F})\;\;|\;\;\mathbb{M}(\mathcal{C},\mathcal{C}),$
where the symbols $\Omega,$ $d,$ $\circledast,$ $f,$ $K,$ $A,$ $x$ and $c$
stand for any function symbols in $\Sigma_{Reg}$, $\Sigma_{Cons}$,
$\Sigma_{Op}$, $\Sigma_{Fun}$, $\Sigma_{Type}$, $\Sigma_{Oper}$,
$\Sigma_{Var},$ and $\Sigma_{BC}$. The arguments of a region term are its
region name, the list of its space directions (e.g. [1,3] for a plane in the
variables $(x_{1},x_{3}))$, the (possibly empty) set of subregions, the
boundary and the outward unit normal. Those of a function term are its
function name, the list of the mathematical variables that range over its
domain, its list of boundary conditions, and its nature. Those of an indexed
region, variable, or function term are its region, variable, or function term and its
index (which should be discrete). For an operator term these are its name, the
list of its arguments, the list of mathematical variable terms on which it
depends, the list of mathematical variable terms of its co-domain (useful e.g.
for $T$ when the image cannot be deduced from the initial set), and a list of
parameters. Finally, the arguments of a boundary condition term are its type,
the boundary where it applies and an imposed function if there is one. For
example, the imposed function is set to $0$ for a homogeneous Dirichlet
condition and there is no imposed function in a periodicity condition. We
shall denote by $\mathcal{T}_{\mathcal{R}}(\Sigma,\emptyset),$
$\mathcal{T}_{\mathcal{F}}(\Sigma,\emptyset)$,
$\mathcal{T}_{\mathcal{V}}(\Sigma,\emptyset),$ and
$\mathcal{T}_{\mathcal{C}}(\Sigma,\emptyset)$ the set of terms generated by
the grammar starting from the non-terminal $\mathcal{R},$ $\mathcal{F}$,
$\mathcal{V},$ and $\mathcal{C}.$ The set of all terms generated by the
grammar (i.e. starting from $\mathcal{R},$ $\mathcal{F}$, $\mathcal{V},$ or
$\mathcal{C}$) is denoted by $\mathcal{T}_{\mathcal{G}}(\Sigma,\emptyset)$.
Finally, we also define the set of terms
$\mathcal{T}_{\mathcal{G}}(\Sigma,\mathcal{X})$ where each non-terminal
$\mathcal{R},$ $\mathcal{F}$, $\mathcal{V},$ and $\mathcal{C}$ can be replaced
by a rewriting variable in $\mathcal{X}$. Equivalently, it can be generated by
the extension of $\mathcal{G}$ obtained by adding “$\;|\;x$” with
$x\in\mathcal{X}$ in the definition of each non-terminal, or, equivalently, by adding
the production $N::=x$, with $x\in\mathcal{X}$, for each non-terminal $N$.
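To fix ideas, the four syntactic categories can be mirrored by simple record types. The following Python sketch is one possible encoding of the grammar $\mathcal{G}$; the field names are ours, it omits the indexed and memory constructors, and it is not the data structure of the actual implementation. The last lines build terms similar to the shortcuts of the two examples below.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Union

@dataclass
class Reg:                         # region: name, directions, subregions, boundary, normal
    name: str
    directions: List[int]
    subregions: List["Reg"] = field(default_factory=list)
    boundary: Optional["Reg"] = None       # bottom_R encoded as None
    normal: Optional["Fun"] = None         # bottom_F encoded as None

@dataclass
class Var:                         # mathematical variable: name and region it ranges over
    name: str
    region: Reg

@dataclass
class BC:                          # boundary condition: type, boundary, imposed function
    kind: str                      # "d", "n", "pd", "apd" or "t"
    boundary: Reg
    imposed: Union["Fun", int, None] = None

@dataclass
class Fun:                         # mathematical function: name, variables, conditions, nature
    name: str
    variables: List[Var]
    conditions: List[BC] = field(default_factory=list)
    nature: str = "Known"          # "Unknown", "Test" or "Known"

# Terms similar to the shortcuts of the next two examples (hypothetical encoding):
Gamma = Reg("Gamma", [])
Omega = Reg("Omega", [2], boundary=Gamma)
x = Var("x", Omega)
u = Fun("u", [x], [BC("d", Gamma, 0)], "Unknown")
```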
###### Example 35
Throughout this paper, an underlined symbol represents a shortcut whose name
corresponds to the term name. For instance,
$\displaystyle\underline{\Omega}=\mathtt{Reg}(\Omega,[2],\emptyset,\underline{\Gamma},\underline{n}),\text{
where
}\underline{\Gamma}=\mathtt{Reg}(\Gamma,[],\emptyset,\bot_{\mathcal{R}},\bot_{\mathcal{F}})\text{,
}$
$\displaystyle\underline{n}=\mathtt{Fun}(n,[\underline{x}^{\prime}],[],Known),\text{
}\underline{x}^{\prime}=\mathtt{Var}(x,\underline{\Omega}^{\prime})\text{ and
}\underline{\Omega}^{\prime}=\mathtt{Reg}(\Omega,[2],\emptyset,\underline{\Gamma},\bot_{\mathcal{F}})\text{
}$
represents a region term for a one-dimensional domain named $\Omega$, oriented in
the direction $x_{2}$, with boundary $\underline{\Gamma}$ and with outward
unit normal $\underline{n}$. The shortcut $\underline{\Gamma}$ stands for a
region term representing the boundary named $\Gamma$. As can be understood
from this example, except for the names, all other fields can be void terms or empty
lists.
###### Example 36
An unknown function $u(x)$ defined on $\underline{\Omega}$ and satisfying
the homogeneous Dirichlet boundary condition $u(x)=0$ on $\underline{\Gamma}$ is
represented by the function term
$\underline{u}(\underline{x})=\mathtt{Fun}(u,[\underline{x}],[\mathtt{BC}(d,\underline{\Gamma},0)],\mathtt{Unknown})\text{
where }\quad\underline{x}=\mathtt{Var}(x,\underline{\Omega}).$
### 4.2 Short-cut Terms
For the sake of conciseness, we introduce shortcut terms that are used constantly
in the remainder of the paper:
$\underline{\Omega}\in\mathcal{T}_{\mathcal{R}}(\Sigma,\mathcal{X})$,
$\underline{x}\in\mathcal{T}_{\mathcal{V}}(\Sigma,\mathcal{X})$ defined in
$\underline{\Omega}$,
$\underline{I}\in\mathcal{T}_{\mathcal{R}}(\Sigma,\mathcal{X})$ used for
(discrete) indices,
$\underline{i}\in\mathcal{T}_{\mathcal{V}}(\Sigma,\mathcal{X})$ used as an
index defined in $\underline{I}$,
$\underline{u}\in\mathcal{T}_{\mathcal{F}}(\Sigma,\mathcal{X})$ or
$\underline{u}(\underline{x})\in\mathcal{T}_{\mathcal{F}}(\Sigma,\mathcal{X})$
to express that it depends on the variable $\underline{x}$ and
$\underline{u}_{\underline{i}}$ the indexed-term of the function
$\underline{u}$ indexed by $\underline{i}$. Similar definitions can be given
for the other notations used in the proof, such as $\underline{\Omega}^{\sharp},$
$\underline{x}^{\sharp},$ $\underline{\Omega}^{1},$ $\underline{x}^{1},$
$\underline{\Omega^{\prime}},$ $\underline{x^{\prime}},$
$\underline{v}(\underline{x}^{\sharp},\underline{x}^{1})$ etc. The operators
necessary for the proof are the integral, the derivative, the two-scale
transform $T$, its adjoint $T^{\ast}$, and $B$. In addition, for some
extensions of the reference proof we shall use the discrete sum.
Instead of writing operator-terms as defined in the grammar, we prefer to use
the usual mathematical expressions. The table below establishes the
correspondence between the two formulations.
$\displaystyle\int\underline{u}\,d\underline{x}$
$\displaystyle\equiv\mathtt{Oper}(\mathtt{Integral},\underline{u},[\underline{x}],[],[]),$
$\displaystyle\frac{\partial\underline{u}}{\partial\underline{x}}$
$\displaystyle\equiv\mathtt{Oper}(\mathtt{Partial},\underline{u},[\underline{x}],[\underline{x}],[]),$
$\displaystyle tr(\underline{u},\underline{x})(\underline{x^{\prime}})$
$\displaystyle\equiv\mathtt{Oper}(\text{Restriction},\underline{u},[\underline{x}],[\underline{x^{\prime}}],[]),$
$\displaystyle
T(\underline{u},\underline{x})(\underline{x}^{\sharp},\underline{x}^{1})$
$\displaystyle\equiv\mathtt{Oper}(T,\underline{u},[\underline{x}],[\underline{x}^{\sharp},\underline{x}^{1}],[\varepsilon]),$
$\displaystyle
T^{\ast}(\underline{v},[\underline{x}^{\sharp},\underline{x}^{1}])(\underline{x})$
$\displaystyle\equiv\mathtt{Oper}(T^{\ast},\underline{v},[\underline{x}^{\sharp},\underline{x}^{1}],[\underline{x}],[\varepsilon]),$
$\displaystyle
B(\underline{v},[\underline{x}^{\sharp},\underline{x}^{1}])(\underline{x})$
$\displaystyle\equiv\mathtt{Oper}(B,\underline{v},[\underline{x}^{\sharp},\underline{x}^{1}],[\underline{x}],[\varepsilon]),$
$\displaystyle\sum_{\underline{i}}\underline{u}_{\underline{i}}$
$\displaystyle\equiv\mathtt{Oper}(\mathtt{Sum},\underline{u}_{\underline{i}},[\underline{i}],[],[]).$
The multiplication and exponentiation involving two terms $f$ and $g$ are
written $fg$ and $f^{g}$ as usual in mathematics. All these conventions have
been introduced for terms in $\mathcal{T}(\Sigma,\emptyset)$. For terms in
$\mathcal{T}(\Sigma,\mathcal{X})$, such as those encountered in rewriting rules, the rewriting
variables can replace any of the above shortcut terms.
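These correspondences can also be read as a small family of constructors of operator terms. The following sketch uses field and function names of our own choosing and only illustrates the first lines of the table; note that the grammar expects the operator arguments to be given as a list.

```python
from dataclasses import dataclass, field
from typing import Any, List

@dataclass
class Oper:        # operator term: name, arguments, variables, co-domain variables, parameters
    name: str
    args: List[Any]
    variables: List[Any]
    out_variables: List[Any] = field(default_factory=list)
    params: List[Any] = field(default_factory=list)

def integral(u, x):                  # corresponds to  Oper(Integral, u, [x], [], [])
    return Oper("Integral", [u], [x])

def partial(u, x):                   # corresponds to  Oper(Partial, u, [x], [x], [])
    return Oper("Partial", [u], [x], [x])

def two_scale_T(u, x, x_sharp, x1):  # corresponds to  Oper(T, u, [x], [x#, x1], [eps])
    return Oper("T", [u], [x], [x_sharp, x1], ["epsilon"])

def adjoint_T(v, x_sharp, x1, x):    # corresponds to  Oper(T*, v, [x#, x1], [x], [eps])
    return Oper("T*", [v], [x_sharp, x1], [x], ["epsilon"])
```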
###### Example 37
The rewriting rule associated to the Green rule (12) reads
$\int\frac{\partial u}{\partial\underline{x}}v\text{
}d\underline{x}\leadsto-\int u\frac{\partial v}{\partial\underline{x}}\text{
}d\underline{x}+\int tr(u)\text{ }tr(v)\;n\text{ }d\underline{x^{\prime}},$
with the short-cuts
$\underline{\Gamma}=\mathtt{Reg}(\Gamma,d1,\emptyset,\bot_{\mathcal{R}},\bot_{\mathcal{F}})$,
$\underline{\Omega}=\mathtt{Reg}(\Omega,d2,\emptyset,\underline{\Gamma},n)$,
$\underline{x}=\mathtt{Var}(x,\underline{\Omega})$ and
$\underline{x^{\prime}}=\mathtt{Var}(x,\underline{\Gamma})$. The other symbols
$u,$ $v$, $x$, $\Omega,$ $\Gamma,$ $d1,$ $d2$, $n$ are rewriting variables,
and for instance
$\frac{\partial u}{\partial
x}\equiv\mathtt{Oper}(\mathtt{Partial},u,x,[],[]).$
Consider the application of this rule, according to an appropriate strategy, say the top-down
strategy, to a term in $\mathcal{T}(\Sigma,\emptyset)$ like
$\Psi=\int\frac{\partial\underline{f}(\underline{z})}{\partial\underline{z}}\underline{g}(\underline{z})\text{
}d\underline{z},$
for a given variable term $\underline{z}$ and function terms $\underline{f},$
$\underline{g}$. As expected, the result is
$-\int\underline{f}\text{
}\frac{\partial\underline{g}}{\partial\underline{z}}\text{
}d\underline{z}+\int\underline{f}\text{ }\underline{g}\;\underline{n}\text{
}d\underline{z^{\prime}}$
with evident notations for $\underline{n}$ and $\underline{z^{\prime}}$.
### 4.3 A Variable Dependency Analyzer
The variable dependency analyzer $\Theta$ is related to effect systems in
computer science [MM09]. It is a function from
$\mathcal{T}_{\mathcal{F}}(\Sigma,\emptyset)$ to the set
$\mathcal{P}(\mathcal{T}_{\mathcal{V}}(\Sigma,\emptyset))$ of the parts of
$\mathcal{T}_{\mathcal{V}}(\Sigma,\emptyset)$. When applied to a term
$t\in\mathcal{T}_{\mathcal{F}}(\Sigma,\emptyset)$, it returns the set of
mathematical variables on which $t$ depends. The analyzer $\Theta$ is used in
the condition part of some rewriting rules and is inductively defined by
$\displaystyle\Theta(d)=\emptyset\text{ for }d\in\Sigma_{Cons},$
$\displaystyle\Theta(\underline{x})=\\{\underline{x}\\}\text{ for
}\underline{x}\in\mathcal{T}_{\mathcal{V}}(\Sigma,\emptyset),$
$\displaystyle\Theta(\circledast(\underline{u},\underline{v}))=\Theta(\underline{u})\cup\Theta(\underline{v})\text{
for
}\underline{u},\underline{v}\in\mathcal{T}_{\mathcal{F}}(\Sigma,\emptyset)\text{
and }\circledast\in\Sigma_{Op},$
$\displaystyle\Theta(\bot_{\mathcal{F}})=\emptyset\text{,}$
$\displaystyle\Theta(\underline{u}(\underline{x^{1}},..,\underline{x^{n}}))=\\{\underline{x^{1}},..,\underline{x^{n}}\\}\text{
for }\underline{u}\in\mathcal{T}_{\mathcal{F}}(\Sigma,\emptyset)\text{ and
}\underline{x^{1}},..,\underline{x^{n}}\in\mathcal{T}_{\mathcal{V}}(\Sigma,\emptyset),$
$\displaystyle\Theta(\underline{u}_{\underline{i}})=\Theta(\underline{u})\text{
for }\underline{u}\in\mathcal{T}_{\mathcal{F}}(\Sigma,\emptyset)\text{ and
}\underline{i}\in\mathcal{T}_{\mathcal{V}}(\Sigma,\emptyset),$
$\displaystyle\Theta([\underline{u^{1}},\dots,\underline{u^{n}}])=\Theta(\underline{u^{1}})\cup\dots\cup\Theta(\underline{u^{n}})\text{
for
}\underline{u^{1}},\dots,\underline{u^{n}}\in\mathcal{T}_{\mathcal{F}}(\Sigma,\emptyset).$
The definition of $\Theta$ on the operator-terms is done case by case,
$\displaystyle\Theta(\int\underline{u}\,d\underline{x})=\Theta(\underline{u})\setminus\Theta(\underline{x}),$
$\displaystyle\Theta(\frac{\partial\underline{u}}{\partial\underline{x}})=\left\\{\begin{array}[]{l}\Theta(\underline{u})\text{
if }\Theta(\underline{x})\subseteq\Theta(u),\\\ \emptyset\text{
otherwise,}\end{array}\right.$
$\displaystyle\Theta(tr(\underline{u},\underline{x})(\underline{x^{\prime}}))=\Theta(\underline{x^{\prime}}),$
$\displaystyle\Theta(T(\underline{u},\underline{x})(\underline{x}^{\sharp},\underline{x}^{1}))=(\Theta(\underline{u})\setminus\Theta(\underline{x}))\cup\Theta([\underline{x}^{\sharp},\underline{x}^{1}])\text{
if }\Theta(\underline{x})\cap\Theta(\underline{u})\neq\emptyset,$
$\displaystyle\Theta(T^{\ast}(\underline{v},[\underline{x}^{\sharp},\underline{x}^{1}])(\underline{x}))=(\Theta(\underline{v})\setminus\Theta([\underline{x}^{\sharp},\underline{x}^{1}]))\cup\Theta(\underline{x})\text{
if
}\Theta([\underline{x}^{\sharp},\underline{x}^{1}])\cap\Theta(\underline{v})\neq\emptyset,$
$\displaystyle\Theta(B(\underline{v},[\underline{x}^{\sharp},\underline{x}^{1}])(\underline{x}))=(\Theta(\underline{v})\setminus\Theta([\underline{x}^{\sharp},\underline{x}^{1}]))\cup\Theta(\underline{x})\text{
if
}\Theta([\underline{x}^{\sharp},\underline{x}^{1}])\cap\Theta(\underline{v})\neq\emptyset,$
$\displaystyle\Theta(\sum_{\underline{i}}\underline{u}_{\underline{i}})=\bigcup_{\underline{i}}\Theta(\underline{u}_{\underline{i}}).$
We observe that these definitions are not very general, but they are
sufficient for the applications of this paper. To complete the definition of
$\Theta$, it remains to define it on memory terms,
$\Theta(\mathbb{M}(\underline{u},\underline{v}))=\Theta(\underline{u}).$
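The inductive clauses above can be prototyped on a simplified encoding of terms. The following sketch covers only constants, variables, function applications, algebraic operations, integrals and partial derivatives; the encoding and names are ours, and it is not the analyzer of the actual implementation.

```python
def theta(t):
    """Variable dependency analyzer (a partial sketch of Section 4.3).
    Terms: ("const",), ("var", x), ("fun", name, [x1,..,xn]),
           ("op", u, v), ("integral", u, x), ("partial", u, x)."""
    tag = t[0]
    if tag == "const":
        return set()                                  # Theta(d) = {}
    if tag == "var":
        return {t[1]}                                 # Theta(x) = {x}
    if tag == "fun":
        return set(t[2])                              # Theta(u(x1,..,xn)) = {x1,..,xn}
    if tag == "op":
        return theta(t[1]) | theta(t[2])              # Theta(u (*) v) = Theta(u) U Theta(v)
    if tag == "integral":
        return theta(t[1]) - {t[2]}                   # Theta(int u dx) = Theta(u) \ {x}
    if tag == "partial":                              # Theta(du/dx) = Theta(u) if x in Theta(u)
        deps = theta(t[1])
        return deps if t[2] in deps else set()
    raise ValueError(f"unhandled term: {t!r}")

# Theta( int  u(x) * dv(x, y)/dx  dx ) = {y}
psi = ("integral",
       ("op", ("fun", "u", ["x"]), ("partial", ("fun", "v", ["x", "y"]), "x")),
       "x")
print(theta(psi))   # {'y'}
```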
###### Example 38
For
$\Psi=\int_{\underline{\Omega}^{\sharp}}[\int_{\underline{\Omega}^{1}}T(\underline{u}(\underline{x}),\underline{x})(\underline{x}^{\sharp},\underline{x}^{1})\frac{\partial\underline{v}(\underline{x}^{\sharp},\underline{x}^{1})}{\partial\underline{x}^{1}}d\underline{x}^{1}]d\underline{x}^{\sharp}\in\mathcal{T}_{\mathcal{F}}(\Sigma,\emptyset),$
the set $\Theta(\Psi)$ of mathematical variables on which $\Psi$ depends is
hence inductively computed as follows:
$\Theta(\underline{u}(\underline{x}))=\\{\underline{x}\\}$,
$\Theta(T(\underline{u}(\underline{x}),\underline{x})(\underline{x}^{\sharp},\underline{x}^{1}))=\\{\underline{x}^{\sharp},\underline{x}^{1}\\}$,
$\Theta(\underline{v}(\underline{x}^{\sharp},\underline{x}^{1}))=\\{\underline{x}^{\sharp},\underline{x}^{1}\\}$,
$\Theta(\frac{\partial\underline{v}(\underline{x}^{\sharp},\underline{x}^{1})}{\partial\underline{x}^{1}})=\\{\underline{x}^{\sharp},\underline{x}^{1}\\}$,
$\Theta(T(\underline{u}(\underline{x}),\underline{x})$
$(\underline{x}^{\sharp},\underline{x}^{1})$
$\frac{\partial\underline{v}(\underline{x}^{\sharp},\underline{x}^{1})}{\partial\underline{x}^{1}})=\\{\underline{x}^{\sharp},\underline{x}^{1}\\}$,
$\Theta(\int_{\underline{\Omega}^{1}}T(\underline{u}(\underline{x}),\underline{x})(\underline{x}^{\sharp},\underline{x}^{1})$
$\frac{\partial\underline{v}(\underline{x}^{\sharp},\underline{x}^{1})}{\partial\underline{x}^{1}}d\underline{x}^{1})=\\{\underline{x}^{\sharp}\\}$,
and $\Theta(\Psi)=\emptyset$, that is, $\Psi$ is a constant function.
### 4.4 Formulation of the Symbolic Framework for Model Derivation
Now we are ready to define the framework for two-scale model derivation by
rewriting. To do so, the rewriting rules are restricted to left and right
terms
$(l,r)\in\mathcal{T}_{\mathcal{G}}(\Sigma,\mathcal{X})\times\mathcal{T}_{\mathcal{G}}(\Sigma,\mathcal{X})$.
Their conditions $c$ are formulas generated by a grammar, not made explicit here,
combining terms in $\mathcal{T}_{\mathcal{G}}(\Sigma,\mathcal{X})$ with the
usual logical operators in $\Lambda=\\{\vee,\wedge,\neg,\in\\}$. It also
involves operations with the dependency analyzer $\Theta$. The set of terms
generated by this grammar is denoted by
$\mathcal{T}_{\mathcal{L}}(\Sigma,\mathcal{X},\mathcal{G},\Theta,\Lambda).$
It remains to argue that, given a strategy $s$ in
$\mathcal{S}_{\mathcal{T}_{\mathcal{G}}(\Sigma,\mathcal{X}),\mathcal{T}_{\mathcal{L}}(\Sigma,\mathcal{X},\mathcal{G},\Theta,\Lambda)}$,
the set of terms $\mathcal{T}_{\mathcal{G}}(\Sigma,\emptyset)$ is closed under
the application of $s$. It is sufficient to show that for each rewriting rule $r$
in $s$, the application of $r$ to any term
$t\in\mathcal{T}_{\mathcal{G}}(\Sigma,\emptyset)$ at any position yields a
term in $\mathcal{T}_{\mathcal{G}}(\Sigma,\emptyset)$. As an example,
$\mathcal{T}_{\mathcal{G}}(\Sigma,\emptyset)$ is not closed under the
application of the rule $x\leadsto\underline{\Omega}$, where $x$ is a
variable. But it is closed under the application of the linearity rule
$\int_{z}f+g\,dx\leadsto\int_{z}f\,dx+\int_{z}g\,dx$ at any position, where
$f,g,x,z$ are rewriting variables. The argument is the following: since
$\int_{z}f+g\,dx\in\mathcal{T}_{\mathcal{F}}(\Sigma,\emptyset)$, then
$f+g\in\mathcal{T}_{\mathcal{F}}(\Sigma,\emptyset)$, and hence
$f,g\in\mathcal{T}_{\mathcal{F}}(\Sigma,\emptyset)$. Thus,
$\int_{z}f\,dx+\int_{z}g\,dx\in\mathcal{T}_{\mathcal{F}}(\Sigma,\emptyset)$.
That is, a term in $\mathcal{T}_{\mathcal{F}}(\Sigma,\emptyset)$ is replaced
by another term in $\mathcal{T}_{\mathcal{F}}(\Sigma,\emptyset)$. A more
general setting that deals with the closure of regular languages under
specific rewriting strategies can be found in [GGJ09].
A model derivation is divided into several intermediary lemmas. Each of them
is intended to produce a new property that can be expressed as one or a few
rewriting rules to be applied in another part of the derivation. Since
dynamic creation of rules is not allowed, a strategy covers one lemma
only and operates with a fixed set of rewriting rules. The conversion of the
result of a strategy into a new set of rewriting rules is done by an elementary
external operation, which is not a limitation for generalizations of proofs. The
following definition summarizes the framework of symbolic computation
developed in this paper.
###### Definition 39
The components of the quintuplet
$\Xi=\langle\Sigma,\mathcal{X},E,\mathcal{G},\Theta\rangle$ provide a
framework for symbolic computation to derive multi-scale models. A two-scale
model derivation is expressed as a strategy
$\pi\in\mathcal{S}_{\mathcal{T}_{\mathcal{G}}(\Sigma,\mathcal{X}),\mathcal{T}_{\mathcal{L}}(\Sigma,\mathcal{X},\mathcal{G},\Theta,\Lambda)}$
for which the semantics ${[\\![\pi]\\!]}^{E}$ is applicable to an initial
expression $\Psi\in\mathcal{T}(\Sigma,\emptyset)$.
To end this section, we argue that this framework is at the same time
relatively simple, covers the reference model derivation, and allows for
the extensions presented in the next section.
The grammar of terms is designed to cover all mathematical expressions
occurring in the proof of the reference model as well as in their
generalizations. A term produced by the grammar includes locally all useful
information. This avoids the use of external tables and facilitates the design of
rewriting rules, in particular to take into account the context of the subterms to
be transformed. It also allows for local definitions; for instance, the same
variable name $x$ can be used in different parts of the same term with different
meanings, which is useful, for instance, in integrals. A limitation regarding the
generalizations presented in the next section is that the grammar must cover
by anticipation all needed features. This drawback should be addressed in future
work by supporting generalization of grammars at the same time as
generalization of proofs.
Each step in the proof consists in replacing parts of an expression according
to a known mathematical property. This is achieved, possibly recursively,
using rewriting rules together with strategies allowing for precise
localization. Some steps need simplifications and often use the second
linearity rule of a linear operator, $A(\lambda u)=\lambda Au$ when $\lambda$
is a scalar (or is independent of the variables in the initial set of $A$). Hence the
variable dependency of each subterm must be determined; this is precisely
what $\Theta$, the variable dependency analyzer, produces. The other
simplifications do not require the use of $\Theta$. In addition to the grammar
$\mathcal{G},$ the analyzer $\Theta$ must be upgraded for each new
extension.
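As an illustration of this use of $\Theta$, the second linearity rule can be sketched, with the toy encoding and the `theta` function of the sketch of Section 4.3, as a conditional transformation whose condition checks that the factor does not depend on the integration variable. This is only a sketch of the idea, not a rule of the actual implementation.

```python
FAIL = "FAIL"   # failure value, as in the earlier sketches

def pull_out_factor(t):
    """A(lambda * u) -> lambda * A(u) when Theta(lambda) contains no variable of A.
    Here A is an integral over x, in the encoding of the Theta sketch above."""
    if t[0] == "integral" and t[1][0] == "op":
        lam, u, x = t[1][1], t[1][2], t[2]
        if x not in theta(lam):              # lambda is constant with respect to x
            return ("op", lam, ("integral", u, x))
    return FAIL

# int  a(y) * u(x)  dx   ->   a(y) * int u(x) dx
t = ("integral", ("op", ("fun", "a", ["y"]), ("fun", "u", ["x"])), "x")
print(pull_out_factor(t))
# ('op', ('fun', 'a', ['y']), ('integral', ('fun', 'u', ['x']), 'x'))
```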
In all symbolic computations based on the grammar $\mathcal{G}$, it is
implicitly assumed that the derivatives, the integrals and the traces (i.e. the
restriction of a function to the boundary) are well defined, since the
regularity of functions is not encoded.
Due to the algebraic nature of the mathematical proofs, this framework has
been formulated by considering these proofs as a calculus rather than as formal
proofs that can be formalized and checked with a proof assistant [BC04,
Won96]. Indeed, this is far simpler and allows, from a very small set of
tools, for building significant mathematical derivations. To cover broader
proofs, the framework must be changed by extending the grammar and the
variable dependency analyzer only. Yet the language Tom [BBK+07] does not
provide a complete environment for the implementation of our framework, since
it does not support the transformation of rewriting rules, although it provides
a rich strategy language and a module for the specification of the grammar.
## 5 Transformation of Strategies as Second Order Strategies
For a given rewriting strategy representing a model proof, one would like to
transform it to obtain a derivation of more complex models. Transforming a
strategy $\pi\in\mathcal{S}_{\mathcal{T}(\Sigma,\mathcal{X})}$ is achieved by
applying strategies to the strategy $\pi$ itself. For this purpose, we
consider two levels of strategies: the first order ones
$\mathcal{S}_{\mathcal{T}(\Sigma,\mathcal{X})}$ as defined in Definition 30,
and the second order ones, defined in such a way that second order strategies
can be applied to first order ones. That is, the second order strategies are
considered as terms in a set
$\mathcal{T}(\overline{\Sigma},\overline{\mathcal{X}})$, where
$\overline{\Sigma}$ and $\overline{\mathcal{X}}$ remain to be defined. Given a
set of strategies $\mathcal{S}_{\mathcal{T}(\Sigma,\mathcal{X})}$ that comes
with a set of fixed-point variables $\mathcal{F}$, we set
$\overline{\Sigma}\supset\Sigma\cup\\{{\leadsto,;,\oplus,Some,Child,\eta,\mu}\\}\cup\mathcal{F}$.
Let $\overline{\mathcal{X}}$ be a set of second order rewriting variables such
that $\overline{\mathcal{X}}\cap(\mathcal{X}\cup\overline{\Sigma})=\emptyset$.
Notice that first order rewriting variables and fixed-point variables are
considered as constants in
$\mathcal{T}(\overline{\Sigma},\overline{\mathcal{X}})$, i.e. function symbols
in $\overline{\Sigma}_{0}$. Notice also that the arity of the function symbols
$\leadsto,{;,}\oplus,Child,\mu$ is two, and the arity of $Some$ and $\eta$ is
one. In particular, the rule $l\leadsto r$ can be viewed as the term
$\leadsto(l,r)$ with the symbol $\leadsto$ at the root, and the strategy $\mu
X.s$ viewed as the term $\mu(X,s)$. This allows us to define second order
strategies
$\overline{\mathcal{S}}_{\mathcal{T}(\overline{\Sigma},\overline{\mathcal{X}})}$
by the grammar
$\bar{s}::=l\bar{\leadsto}r\;\;|\;\;\bar{s}\bar{;}\bar{s}\;\;|\;\;\bar{s}\bar{\oplus}\bar{s}\;\;|\;\;\bar{\eta}(\bar{s})\;\;|\;\;\overline{Some}(\bar{s})\;\;|\;\;\overline{Child}(j,\bar{s})\;\;|\;\;X\;\;|\;\;\bar{\mu}X.\bar{s}$
(34)
Again we assume that the symbols
$\bar{\leadsto},\overline{;},\overline{\oplus},\ldots$ of the second order
strategies do not belong to $\overline{\Sigma}$. The semantics of the
strategies in
$\overline{\mathcal{S}}_{\mathcal{T}(\overline{\Sigma},\overline{\mathcal{X}})}$
are similar to the semantics of first order strategies. In addition, we assume
that second order strategies transform first order strategies, to which they
are applied, into first order strategies. Composing several second order
strategies and applying such composition to a given first order strategy $s$
provide successive transformations of $s$.
(Diagram omitted: the strategies $s_{1}$, $s_{2}$, $s_{3}$ and $s_{23}$ related by second order strategy transformations.)
Figure 1: An example of the composition of transformations of strategies.
In the following example we illustrate the extension of an elementary strategy
which is a rewriting rule.
###### Example 40
For the set $\mathcal{X}=\\{{i,j,x,x^{\sharp},x^{1},u,\varepsilon}\\}$ we
define four rewriting rules $s_{1},$ $s_{2},$ $s_{3},$ and $s_{23}$,
$\displaystyle s_{1}$ $\displaystyle:=T(\frac{\partial u}{\partial
x},x){(x^{\sharp},x^{1})}\leadsto\frac{1}{\varepsilon}\frac{\partial
T(u,x){(x^{\sharp},x^{1})}}{\partial{x^{1}}}\text{ for }x\in\Omega\text{ and
}{(x^{\sharp},x^{1})}\in\Omega{{}^{\sharp}}\times\Omega{{}^{1},}$
$\displaystyle s_{2}$ $\displaystyle:=T(\frac{\partial u}{\partial
x_{i}},x){(x^{\sharp},x^{1})}\leadsto\frac{1}{\varepsilon}\frac{\partial
T(u,x){(x^{\sharp},x^{1})}}{\partial{x_{i}^{1}}}\text{ for }x\in\Omega\text{
and }{(x^{\sharp},x^{1})}\in\Omega{{}^{\sharp}}\times\Omega{{}^{1},}$
$\displaystyle s_{3}$ $\displaystyle:=T(\frac{\partial u}{\partial
x},x){(x^{\sharp},x^{1})}\leadsto\frac{1}{\varepsilon}\frac{\partial
T(u,x){(x^{\sharp},x^{1})}}{\partial{x^{1}}}\text{ for }x\in\Omega_{j}\text{
and }{(x^{\sharp},x^{1})}\in\Omega_{j}^{\sharp}\times\Omega_{j}^{1}{,}$
$\displaystyle s_{23}$ $\displaystyle:=T(\frac{\partial u}{\partial
x_{i}},x){(x^{\sharp},x^{1})}\leadsto\frac{1}{\varepsilon}\frac{\partial
T(u,x){(x^{\sharp},x^{1})}}{\partial{x_{i}^{1}}}\text{ for
}x\in\Omega_{j}\text{ and
}{(x^{\sharp},x^{1})}\in\Omega_{j}^{\sharp}\times\Omega_{j}^{1}.$
The rule $s_{1}$ is encountered in the reference proof, $s_{2}$ is a (trivial)
generalization of $s_{1}$ in the sense that it applies to multi-dimensional
regions $\Omega{{}^{1}}$ referenced by a set of variables $(x_{i}^{1})_{i}$,
and $s_{3}$ is a second (trivial) generalization of $s_{1}$ on the number of
sub-regions $(\Omega_{j})_{j},$ $(\Omega{{}_{j}^{\sharp}})_{j}$ and
$(\Omega_{j}^{1})_{j}$ in $\Omega$, $\Omega{{}^{\sharp}}$ and
$\Omega{{}^{1}.}$ The rule $s_{23}$ is a generalization combining the two
previous generalizations. First, we aim at transforming the strategy $s_{1}$
into the strategy $s_{2}$ or the strategy $s_{3}$. To this end, we introduce
two second order strategies with $\overline{\mathcal{X}}=\\{v,z\\}$ and
$\overline{\Sigma}\supset\\{i,$ $j,$ $\Omega,$ $\Omega^{\sharp},$
$\Omega^{1},$ $Partial,IndexedFun,IndexedVar,IndexedReg\\},$
$\displaystyle\bar{\Pi}_{1}$
$\displaystyle:=\overline{OuterMost}(\frac{\partial v}{\partial
z}\bar{\leadsto}\frac{\partial v}{\partial z_{i}})$
$\displaystyle\bar{\Pi}_{2}$
$\displaystyle:=\overline{OuterMost}(\Omega\bar{\leadsto}\Omega_{j});\overline{OuterMost}(\Omega{{}^{\sharp}}\bar{\leadsto}\Omega_{j}^{\sharp});\overline{OuterMost}(\Omega{{}^{1}}\bar{\leadsto}\Omega_{j}^{1})$
Notice that $\bar{\Pi}_{1}$ (resp. $\bar{\Pi}_{2}$) applies the rule
$\dfrac{\partial v}{\partial z}\bar{\leadsto}\dfrac{\partial v}{\partial
z_{i}}$ (resp. $\Omega\bar{\leadsto}\Omega_{j},$
$\Omega{{}^{\sharp}}\bar{\leadsto}\Omega_{j}^{\sharp}$, and
$\Omega{{}^{1}}\bar{\leadsto}\Omega_{j}^{1}$) at all positions of the input
first order strategy (notice the difference with $\overline{TopDown}$, which could
not apply these rules at every position), so that
$\bar{\Pi}_{1}(s_{1})=s_{2}\text{ and }\bar{\Pi}_{2}(s_{1})=s_{3}.$
Once $\bar{\Pi}_{1}$ and $\bar{\Pi}_{2}$ have been defined, they can be
composed to produce $s_{23}:$
$\bar{\Pi}_{2}\bar{\Pi}_{1}(s_{1})=s_{23}\text{ or
}\bar{\Pi}_{1}\bar{\Pi}_{2}(s_{1})=s_{23}.$
The diagram of Figure 1 illustrates the application of $\bar{\Pi}_{1},$
$\bar{\Pi}_{2}$ and of their compositions.
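Since first order rules and strategies are themselves terms over $\overline{\Sigma}$, the combinators of Section 3.1 can be reused one level up. The following sketch reuses `rule` and `eta` from the toy encoding of Section 3.1 and a naive `everywhere` traversal standing in for $\overline{OuterMost}$; it renames a constant at every position of a first order rule seen as a term, which is only a crude illustration of the mechanism.

```python
def everywhere(s):
    """Apply eta(s) at every position of a ground term, bottom up."""
    def f(t):
        sym, args = t
        return eta(s)((sym, [f(a) for a in args]))
    return f

# The first order rule  a ~> f(a)  seen as a term with head symbol "rule":
s1_term = ("rule", [("a", []), ("f", [("a", [])])])

# Second order strategy: rename the constant a into a_i everywhere in s1.
pi = everywhere(rule(("a", []), ("a_i", [])))
print(pi(s1_term))   # ('rule', [('a_i', []), ('f', [('a_i', [])])])
```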
The next example shows how an extension can not only change rewriting rules
but also add new ones.
###### Example 41
To operate simplifications in the reference model, we use the strategy
$s_{1}:=TopDown(\frac{\partial x}{\partial x}\leadsto 1).$
In the generalization to multi-dimensional regions, it is replaced by two
strategies involving the Kronecker symbol $\delta$, usually defined as
$\delta(i,j)=1$ if $i=j$ and $\delta(i,j)=0$ otherwise,
$\displaystyle s_{2}$ $\displaystyle:$
$\displaystyle=TopDown\left(\frac{\partial x_{i}}{\partial
y_{j}}\leadsto\delta(i,j),\;x=y\right),$ $\displaystyle s_{3}$
$\displaystyle:$ $\displaystyle=TopDown\left(\delta(i,j)\leadsto
1,\;i=j\right),$ $\displaystyle s_{4}$ $\displaystyle:$
$\displaystyle=TopDown\left(\delta(i,j)\leadsto 0,\;i\neq j\right).$
The second order strategy that transforms $s_{1}$ into the strategy
$Normalizer(s_{2}\oplus s_{3}\oplus s_{4})$ is
$\bar{\Pi}:=\overline{TopDown}(s_{1}\bar{\leadsto}s_{2}\oplus s_{3}\oplus
s_{4}).$
## 6 Implementation and Experiments
The framework presented in Section 4.4 has been implemented in
Maple®. The implementation includes the language Symbtrans of
strategies already presented in [BGL]. The derivation of the reference model
presented in Section 2 has been fully implemented. It starts from an input
term which is the weak formulation (24) of the physical problem,
$\int\underline{a}\frac{\partial\underline{u}}{\partial\underline{x}}\frac{\partial\underline{v}}{\partial\underline{x}}\text{
}d\underline{x}=\int\underline{f}\text{ }\underline{v}\text{ }d\underline{x},$
(35)
where $\underline{a}=\mathtt{Fun}(a,[\underline{\Omega}],[$ $],Known),$
$\underline{u}=\mathtt{Fun}(u,[\underline{\Omega}],[\underline{Dirichlet}],Unknown),$
$\underline{v}=\mathtt{Fun}(v,[\underline{\Omega}],[\underline{Dirichlet}],Test),$
$\underline{\Omega}=\mathtt{Reg}(\Omega,[1],\emptyset,\underline{\Gamma},n_{\Omega})$,
$\underline{\Gamma}=\mathtt{Reg}(\Gamma,[$ $],\emptyset,$
$\bot_{\mathcal{R}},$ $\bot_{\mathcal{F}}),$
$\underline{Dirichlet}=\mathtt{BC}(Dirichlet,\underline{\Gamma},0)$ and where
the short-cuts of the operators are those of Section 4.2. The information
regarding the two-scale transformation is provided through the test functions.
For instance, in the first block the proof starts with the expression
$\Psi=\int\frac{\partial\underline{u}}{\partial\underline{x}}B(\underline{v}(\underline{x^{\sharp}},\underline{x^{1}}))(\underline{x})\text{
}d\underline{x},$
where the test function
$B(\underline{v}(\underline{x^{\sharp}},\underline{x^{1}}))(\underline{x})$ is
also an input, with
$\underline{v}=\mathtt{Fun}(v,[\underline{x^{\sharp}},\underline{x^{1}}],[\underline{Dirichlet\sharp}],Test),$
$\underline{x^{\sharp}}=\mathtt{Var}(x^{\sharp},\underline{\Omega^{\sharp}}),$
$\underline{x^{1}}=\mathtt{Var}(x^{1},\underline{\Omega^{1}}),$
$\underline{\Omega}^{\sharp}=\mathtt{Reg}(\Omega^{\sharp},[1],\emptyset,\underline{\Gamma^{\sharp}},n_{\Omega^{\sharp}}),$
$\underline{\Gamma^{\sharp}}=\mathtt{Reg}(\Gamma^{\sharp},[$
$],\emptyset,\bot_{\mathcal{R}},\bot_{\mathcal{F}}),$
$\underline{\Omega}^{1}=\mathtt{Reg}(\Omega^{1},[1],$ $\emptyset,$
$\underline{\Gamma^{1}},n_{\Omega^{1}}),$
$\underline{\Gamma^{1}}=\mathtt{Reg}(\Gamma^{1},[$
$],\emptyset,\bot_{\mathcal{R}},\bot_{\mathcal{F}})$, and
$\underline{Dirichlet\sharp}=\mathtt{BC}(Dirichlet\sharp,\underline{\Gamma^{\sharp}},0).$
The proof is divided into five strategies corresponding to the five blocks of
the proof, each ending with some results transformed into rewriting rules used
in the following blocks. The rewriting rules used in the strategies are first
order (FO) rules and can be classified into three categories:
* •
Usual mathematical rules: they represent the properties of the derivation and
integration operators, such as linearity, the chain rule, the Green rule,
etc.,
* •
Specialized rules: for the properties of the two-scale calculus, such as those of
the two-scale transform, the approximation of $B$ by the adjoint $T^{\ast}$,
etc.,
* •
Auxiliary tools: for transformations of expression format that are not
related to operator properties, such as the rule which transforms
$\psi_{1}=\psi_{2}$ into $\psi_{1}-\psi_{2}=0$.
| Usual Rules | Specialized Rules | Aux. Tools
---|---|---|---
Skeleton | 53 | 14 | 28
Table 1: The number of first order rules used in the reference model.
Table 1 summarizes the number of FO rules used in the reference model, by
category.
The reference model has been extended to cover three different kinds of
configurations. To proceed to an extension, the new model derivation is
established in a form that is as close as possible to the reference proof. The
grammar and the dependency analyzer should be completed. Then, the initial
data is determined, and second order (SO) strategies yielding the generalized
model derivation are found and optimized. As already mentioned,
$\mathcal{G}$ and $\Theta$ have already been designed to cover the three
extensions.
The first generalization is to cover multi-dimensional regions, i.e.
$\Omega\subset\mathbb{R}^{n}$ with $n\geq 1$. When $n=2,$ the initial term is
$\sum_{\underline{i}=1}^{n}\sum_{\underline{j}=1}^{n}\int\underline{a}_{\underline{i}\underline{j}}\frac{\partial\underline{u}}{\partial\underline{x}_{\underline{i}}}\frac{\partial\underline{v}}{\partial\underline{x}_{\underline{j}}}\text{
}d\underline{x}=\int\underline{f}\text{ }\underline{v}\text{ }d\underline{x},$
where
$\underline{\Omega}=\mathtt{Reg}(\Omega,[1,2],\emptyset,\underline{\Gamma},n_{\Omega}),$
$\underline{a}_{\underline{i}\underline{j}}=\mathtt{Indexed}(\mathtt{Indexed}(\underline{a},\underline{j}),\underline{i}),$
$\underline{i}=\mathtt{Var}(i,\underline{I}),$
$\underline{I}=\mathtt{Reg}(I,[1,2],\emptyset,\bot_{\mathcal{R}},\bot_{\mathcal{F}})$
and the choice of the test function is trivially deduced. Then, the model
derivation is very similar to that of the reference model, see [LS07], so much
so that it is obtained simply by applying the SO strategy $\bar{\Pi}_{1}$ defined in
Example 40. This extension has been tested on the first four blocks.
The second generalization transforms the reference model into a model with
several adjacent one-dimensional regions (or intervals)
$(\Omega_{k})_{k=1,..,m}$ so that $\Omega$ is still an interval i.e.
$\Omega\subset\mathbb{R}$. For $m=2$, the initial term is the same as (35) but
with $\underline{\Omega}=\mathtt{Reg}(\Omega,[1],$
$\\{\underline{\Omega_{1}},\underline{\Omega_{2}}\\},$
$\underline{\Gamma},n_{\Omega})$,
$\underline{\Omega_{1}}=\mathtt{Reg}(\Omega_{1},[1],$ $\emptyset,$
$\underline{\Gamma_{1}},n_{\Omega_{1}}),$ and
$\underline{\Omega_{2}}=\mathtt{Reg}(\Omega_{2},[1],$ $\emptyset,$
$\underline{\Gamma_{2}},n_{\Omega_{2}})$. The two-scale geometries, all
variables, all kinds of functions and also the operators $B$ and $T$ are
defined subregion by subregion. All definitions and properties apply to each
subregion, and the proof steps are the same after splitting the integrals over
the complete region $\Omega$ into integrals over the subregions. The only
major change is in the fourth step, where the equality $u_{1}^{0}=u_{2}^{0}$ at
the interface between $\Omega_{1}$ and $\Omega_{2}$ is encoded as
transmission conditions in the boundary conditions of $u_{1}^{0}$ and
$u_{2}^{0}.$
The third extension transforms the multi-dimensional model obtained from the
first generalization into a model related to thin cylindrical regions, in the
sense that the dimension of $\Omega$ is of the order of $\varepsilon$ in some
directions $i\in I^{\natural}$ and of the order of $1$ in the others $i\in
I^{\sharp}$, e.g. $\Omega=(0,1)\times(0,\varepsilon)$ where
$I^{\natural}=\\{2\\}$ and $I^{\sharp}=\\{1\\}.$ The boundary $\Gamma$ is
split into two parts, the lateral part $\Gamma_{lat}$ and the other parts
$\Gamma_{other}$, where the Dirichlet boundary conditions are replaced by
homogeneous Neumann boundary conditions, i.e. $\frac{du^{\varepsilon}}{dx}=0$.
In this special case the integrals of the initial term are over a region whose
size is of the order of $\varepsilon$, so each side
of the equality must be multiplied by the factor $1/\varepsilon$ to work with expressions of the
order of $1$. Moreover, the macroscopic region differs from $\Omega$: it is
equal to $\Omega^{\sharp}=(0,1)$, while the microscopic region remains
unchanged. In general, the definition of the adjoint $T^{\ast}$ is unchanged
but $(Bv)(x)=v((x_{i})_{i\in I^{\sharp}},(x-x_{c}^{\sharp})/\varepsilon)$
where $x_{c}^{\sharp}$ is the center of the $c^{th}$ cell in
$\Omega^{\sharp}$. It follows that the approximations (10, 11) are between
$T^{\ast}$ and $\varepsilon B$ with $\sum_{i\in
I^{\sharp}}x_{i}^{1}\frac{\partial v}{\partial x_{i}^{\sharp}}$ instead of
$\sum_{i=1}^{n}x_{i}^{1}\frac{\partial v}{\partial x_{i}^{\sharp}}$. With
these main changes in the definitions and the preliminary properties, the
proof steps may be kept unchanged.
| Usual Rules | Specialized Rules | Aux. Tools
---|---|---|---
Multi-Dimension | 6 | 0 | 4
Thin-Region | 2 | 0 | 0
Multi-Region | 3 | 0 | 0
Table 2: The number of first order rules used in the three extensions.
The mathematical formulation of the second and third extensions has been
derived. This allows for the determination of the necessary SO-strategies, but
they have not been implemented or tested. To summarize the results about the
principle of extension of strategies, we show its benefit through some
statistics. In particular, the main concern is the reusability and the
extensibility of existing strategies. Table 2 shows an estimate of the
number of new FO-rules for the three extensions in each category and for the
first four blocks.
| Usual Rules | Specialized Rules | Aux. Tools
---|---|---|---
Multi-Dimension | 9 | 2 | 3
Thin-Region | 0 | 0 | 0
Multi-Region | 1 | 0 | 0
Table 3: The number of second order strategies used in the extension of proofs.
Input model | Resulting model | % Modified FO-rules | % Modified FO-strategies
---|---|---|---
Reference | Multi-Dim. | 16.6% | 5%
Multi-Dim. | Thin | 0 | 0
Thin | Multi-Reg. | 0 | 2.5%
Table 4: The ratio of modified FO-rules and FO-strategies.
Table 3 shows the number of SO-strategies used in each extension. Finally,
Table 4 shows the ratio of modified FO-rules and the ratio of
modified FO-strategies. The reusability ratio is high since most of the
FO-strategies defined in the skeleton model are reused. Besides, a very small
number of SO-strategies is used in the extensions. This systematic way of
generating proofs is a promising path that will be further validated on
more complex configurations for which the proofs cannot be obtained by hand. In
the future, we plan to introduce dedicated tools to aid in the design of
compositions of several extensions.
## References
* [ADH90] Todd Arbogast, Jim Douglas, Jr., and Ulrich Hornung. Derivation of the double porosity model of single phase flow via homogenization theory. SIAM J. Math. Anal., 21:823–836, May 1990.
* [BB02] G. Bouchitte and M. Bellieud. Homogenization of a soft elastic material reinforced by fibers. Asymptotic Analysis, 32(2):153, 2002.
* [BBK+07] Emilie Balland, Paul Brauner, Radu Kopetz, Pierre-Etienne Moreau, and Antoine Reilles. Tom: Piggybacking rewriting on Java. In the proceedings of the 18th International Conference on Rewriting Techniques and Applications RTA 07, pages 36–47, 2007.
* [BC04] Yves Bertot and Pierre Castéran. Interactive Theorem Proving and Program Development. Coq’Art: The Calculus of Inductive Constructions. Texts in Theoretical Computer Science. Springer Verlag, 2004.
* [BGL] W. Belkhir, A. Giorgetti, and M. Lenczner. A symbolic transformation language and its application to a multiscale method. Submitted. December 2010, http://arxiv.org/abs/1101.3218v1.
* [BKKR01] Peter Borovansky, Claude Kirchner, Hélène Kirchner, and Christophe Ringeissen. Rewriting with strategies in ELAN: a functional semantics. International Journal of Foundations of Computer Science, 12(1):69–95, 2001.
* [BLM96] Alain Bourgeat, Stephan Luckhaus, and Andro Mikelic. Convergence of the homogenization process for a double-porosity model of immiscible two-phase flow. SIAM J. Math. Anal., 27:1520–1543, November 1996.
* [BLP78] A. Bensoussan, J.L. Lions, and G. Papanicolaou. Asymptotic Methods for Periodic Structures. North-Holland, 1978.
* [BN98] F. Baader and T. Nipkow. Term rewriting and all that. Cambridge University Press, 1998.
* [CD99] D. Cioranescu and P. Donato. An introduction to homogenization. Oxford University Press, 1999.
* [CD00] J. Casado-Díaz. Two-scale convergence for nonlinear Dirichlet problems in perforated domains. Proc. Roy. Soc. Edinburgh Sect. A, 130(2):249–276, 2000.
* [CDG02] D. Cioranescu, A. Damlamian, and G. Griso. Periodic unfolding and homogenization. C. R. Math. Acad. Sci. Paris, 335(1):99–104, 2002.
* [CDG08] D. Cioranescu, A. Damlamian, and G. Griso. The periodic unfolding method in homogenization. SIAM Journal on Mathematical Analysis, 40(4):1585–1620, 2008.
* [CFK05] H. Cirstea, G. Faure, and C. Kirchner. A $\rho$-calculus of explicit constraint application. Electronic Notes in Theoretical Computer Science, 117:51–67, 2005. Proceedings of the Fifth International Workshop on Rewriting Logic and Its Applications (WRLA 2004).
* [CK01] Horatiu Cirstea and Claude Kirchner. The rewriting calculus — Part I and II. Logic Journal of the Interest Group in Pure and Applied Logics, 9(3):427–498, May 2001.
* [CKLW03] Horatiu Cirstea, Claude Kirchner, Luigi Liquori, and Benjamin Wack. Rewrite strategies in the rewriting calculus. In Bernhard Gramlich and Salvador Lucas, editors, 3rd International Workshop on Reduction Strategies in Rewriting and Programming , volume 86(4) of Electronic Notes in Theoretical Computer Science, pages 18–34, Valencia, Spain, 2003. Elsevier.
* [GGJ09] Adrià Gascón, Guillem Godoy, and Florent Jacquemard. Closure of tree automata languages under innermost rewriting. Electron. Notes Theor. Comput. Sci., 237:23–38, April 2009.
* [JZKO94] V.V. Jikov, V. Zhikov, M. Kozlov, and O.A. Oleinik. Homogenization of differential operators and integral functionals. Springer-Verlag, 1994.
* [Len97] M. Lenczner. Homogénéisation d’un circuit électrique. C. R. Acad. Sci. Paris Sér. II b, 324(9):537–542, 1997.
* [Len06] Michel Lenczner. Homogenization of linear spatially periodic electronic circuits. NHM, 1(3):467–494, 2006.
* [LS07] M. Lenczner and R. C. Smith. A two-scale model for an array of AFM’s cantilever in the static case. Mathematical and Computer Modelling, 46(5-6):776–805, 2007.
* [MM09] Daniel Marino and Todd Millstein. A generic type-and-effect system. In Proceedings of the 4th international workshop on Types in language design and implementation, TLDI ’09, pages 39–50, New York, NY, USA, 2009. ACM.
* [SK95] Kenneth Slonneger and Barry L. Kurtz. Formal syntax and semantics of programming languages - a laboratory based approach. Addison-Wesley, 1995.
* [Tar55] Alfred Tarski. A lattice-theoretical fixpoint theorem and its applications. Pacific Journal of Mathematics, 5(2):285–309, 1955.
* [Ter03] Terese. Term Rewriting Systems, volume 55 of Cambridge Tracts in Theor. Comp. Sci. Cambridge Univ. Press, 2003.
* [Won96] Wai Wong. A proof checker for hol, 1996.
1302.2297
|
# Two meromorphic mappings sharing $2n+2$ hyperplanes regardless of
multiplicity
###### Abstract.
Nevanlinna showed that two non-constant meromorphic functions on
${\mathbf{C}}$ must be linked by a Möbius transformation if they have the same
inverse images counted with multiplicities for four distinct values. After
that this results is generalized by Gundersen to the case where two
meromorphic functions share two values ignoring multiplicity and share other
two values with multiplicities trucated by $2$. Previously, the first author
proved that for $n\geq 2,$ there are at most two linearly nondegenerate
meromorphic mappings of ${\mathbf{C}}^{m}$ into
${\mathbf{P}}^{n}({\mathbf{C}})$ sharing $2n+2$ hyperplanes ingeneral position
ignoring multiplicity. In this article, we will show that if two meromorphic
mappings $f$ and $g$ of ${\mathbf{C}}^{m}$ into
${\mathbf{P}}^{n}({\mathbf{C}})$ share $2n+1$ hyperplanes ignoring
multiplicity and another hyperplane with multiplicities trucated by $n+1$ then
the map $f\times g$ is algebraically degenerate.
Si Duc Quanga and Le Ngoc Quynhb
a Department of Mathematics, Hanoi National University of Education,
136 Xuan Thuy street, Cau Giay, Hanoi, Vietnam
email address: [email protected]
b Faculty of Education, An Giang University, 18 Ung Van Khiem,
Dong Xuyen, Long Xuyen, An Giang, Vietnam
email address: [email protected]
††2010 _Mathematics Subject Classification_ : Primary 32H04, 32A22; Secondary
32A35.††Key words and phrases: Degenerate, meromorphic mapping, truncated
multiplicity, hyperplane.
## Introduction
In $1926$, R. Nevanlinna [7] showed that if two distinct nonconstant
meromorphic functions $f$ and $g$ on the complex plane ${\mathbf{C}}$ have the
same inverse images for four distinct values then $g$ is a special type of
linear fractional transformation of $f$.
The above result is usually called the four values theorem of Nevanlinna. In
1983, Gundersen [5] improved the result of Nevanlinna by proving the
following.
Theorem A (Gundersen [5]). Let $f$ and $g$ be two distinct non-constant
meromorphic functions and let $a_{1},a_{2},a_{3},a_{4}$ be four distinct
values in ${\mathbf{C}}\cup\\{\infty\\}$. Assume that
$\min\\{\nu^{0}_{f-a_{i}},1\\}=\min\\{\nu^{0}_{g-a_{i}},1\\}\text{ for
}i=1,2\text{ and }\nu^{0}_{f-a_{j}}=\nu^{0}_{g-a_{j}}\text{ for }j=3,4$
$\bigl{(}$outside a discrete set whose counting function regardless of
multiplicity is equal to $o(T(r,f))\bigr{)}$. Then
$\nu^{0}_{f-a_{i}}=\nu^{0}_{g-a_{i}}$ for all $i\in\\{1,\dots,4\\}$.
In this article, we will extend and improve the above results of Nevanlinna
and Gundersen to the case of meromorphic mappings into
${\mathbf{P}}^{n}({\mathbf{C}})$. To state our results, we first give the
following notions.
Take two meromorphic mappings $f$ and $g$ of ${\mathbf{C}}^{m}$ into
${\mathbf{P}}^{n}({\mathbf{C}})$. Let $H$ be a hyperplane of
${\mathbf{P}}^{n}({\mathbf{C}})$ such that $(f,H)\not\equiv 0$ and
$(g,H)\not\equiv 0$. Let $d$ be a positive integer or $+\infty$. We say that
$f$ and $g$ share the hyperplane $H$ with multiplicity truncated by $d$ if the
following two conditions are satisfied:
$\min\ (\nu_{(f,H)},d)=\min\ (\nu_{(g,H)},d)\text{ and }f(z)=g(z)\text{ on
}f^{-1}(H).$
If $d=1$, we will say that $f$ and $g$ share $H$ ignoring multiplicity. If
$d=+\infty$, we will say that $f$ and $g$ share $H$ with counting
multiplicity.
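For instance (a purely illustrative situation, not taken from the results below), if at a point $z_{0}\in f^{-1}(H)$ one has $\nu_{(f,H)}(z_{0})=3$ and $\nu_{(g,H)}(z_{0})=2$, then $\min(\nu_{(f,H)}(z_{0}),d)=\min(\nu_{(g,H)}(z_{0}),d)$ for $d=1,2$ but not for $d\geq 3$; so the first condition above holds at $z_{0}$ for the truncation levels $1$ and $2$ only.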
Recently, Chen - Yan [1] and S. D. Quang [8] showed that two meromorphic
mappings of ${\mathbf{C}}^{m}$ into ${\mathbf{P}}^{n}({\mathbf{C}})$ must be
identical if they share $2n+3$ hyperplanes in general position ignoring
multiplicity. In 2011, Chen - Yan considered the case of meromorphic mappings
sharing only $2n+2$ hyperplanes, and they proved the following.
Theorem B (see [2, Main Theorem]). Let $f,g$ and $h$ be three linearly
nondegenerate meromorphic mappings of ${\mathbf{C}}^{m}$ into
${\mathbf{P}}^{n}({\mathbf{C}})$. Let $H_{1},...,H_{2n+2}$ be $2n+2$
hyperplanes of ${\mathbf{P}}^{n}({\mathbf{C}})$ in general position with
$\dim f^{-1}(H_{i}\cap H_{j})\leqslant m-2\quad(1\leqslant i<j\leqslant
2n+2).$
Assume that $f,g$ and $h$ share $H_{1},...,H_{2n+2}$ with multiplicity
truncated by level $2$. Then the map $f\times g\times h$ is linearly
degenerate.
Independently, in 2012 S. D. Quang [9] proved a finiteness theorem for
meromorphic mappings sharing $2n+2$ hyperplanes without counting multiplicity as
follows.
Theorem C ( see [9, Theorem 1.1]). Let $f,g$ and $h$ be three meromorphic
mappings of ${\mathbf{C}}^{m}$ into ${\mathbf{P}}^{n}({\mathbf{C}})$. Let
$H_{1},...,H_{2n+2}$ be $2n+2$ hyperplanes of ${\mathbf{P}}^{n}({\mathbf{C}})$
in general position with
$\dim f^{-1}(H_{i}\cap H_{j})\leqslant m-2\quad(1\leqslant i<j\leqslant
2n+2).$
Assume that $f,g$ and $h$ share $H_{1},...,H_{2n+2}$ ignoring multiplicity. If
$f$ is linearly nondegenerate and $n\geq 2$ then
$f=g\text{ or }g=h\text{ or }h=f.$
The above theorem means that there are at most two linearly nondegenerate
meromorphic mappings of ${\mathbf{C}}^{m}$ into
${\mathbf{P}}^{n}({\mathbf{C}})$ sharing $2n+2$ hyperplanes in general
position regardless of multiplicity. In this paper, we will show that there is
an algebraic relation among them if they share at least one of these
hyperplanes with multiplicity truncated by level $n+1$. Namely, we will prove
the following.
Main Theorem. Let $f$ and $g$ be two meromorphic mappings of
${\mathbf{C}}^{m}$ into ${\mathbf{P}}^{n}({\mathbf{C}})$. Let
$H_{1},...,H_{2n+2}$ be $2n+2$ hyperplanes of ${\mathbf{P}}^{n}({\mathbf{C}})$
in general position with
$\dim f^{-1}(H_{i}\cap H_{j})\leqslant m-2\quad(1\leqslant i<j\leqslant
2n+2).$
Assume that $n\geq 2$, that $f$ and $g$ share $H_{1},...,H_{2n+1}$ ignoring multiplicity and share $H_{2n+2}$ with multiplicity truncated by $n+1$. Then the map $f\times g:{\mathbf{C}}^{m}\rightarrow{\mathbf{P}}^{n}({\mathbf{C}})\times{\mathbf{P}}^{n}({\mathbf{C}})$ is algebraically degenerate.
In the last section of this paper, we will consider the case of two
meromorphic mappings sharing two different families of hyperplanes. We will
also give an algebraically degeneracy theorem for that case.
Acknowledgements. This work was done during a stay of the first author at
Vietnam Institute for Advanced Study in Mathematics. He would like to thank
the institute for support. This work is also supported in part by a NAFOSTED
grant of Vietnam.
## 1\. Basic notions and auxiliary results from Nevanlinna theory
2.1. We set $||z||=\big{(}|z_{1}|^{2}+\dots+|z_{m}|^{2}\big{)}^{1/2}$ for
$z=(z_{1},\dots,z_{m})\in{\mathbf{C}}^{m}$ and define
$\displaystyle B(r):=\\{z\in{\mathbf{C}}^{m}:||z||<r\\},\quad
S(r):=\\{z\in{\mathbf{C}}^{m}:||z||=r\\}\ (0<r<\infty).$
Define
$\sigma(z):=\big{(}dd^{c}||z||^{2}\big{)}^{m-1}\quad\text{and}\quad\eta(z):=d^{c}\text{log}||z||^{2}\land\big{(}dd^{c}\text{log}||z||^{2}\big{)}^{m-1}\ \text{ on }\ {\mathbf{C}}^{m}\setminus\\{0\\}.$
2.2. Let $F$ be a nonzero holomorphic function on a domain $\Omega$ in
${\mathbf{C}}^{m}$. For a set $\alpha=(\alpha_{1},...,\alpha_{m})$ of
nonnegative integers, we set $|\alpha|=\alpha_{1}+...+\alpha_{m}$ and
$\mathcal{D}^{\alpha}F=\dfrac{\partial^{|\alpha|}F}{\partial^{\alpha_{1}}z_{1}...\partial^{\alpha_{m}}z_{m}}.$
We define the map $\nu_{F}:\Omega\to\mathbf{Z}$ by
$\nu_{F}(z):=\max\ \\{l:\mathcal{D}^{\alpha}F(z)=0\text{ for all }\alpha\text{
with }|\alpha|<l\\}\ (z\in\Omega).$
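For instance, for $F(z_{1},z_{2})=z_{1}^{2}z_{2}$ on ${\mathbf{C}}^{2}$ we have $\nu_{F}=2$ at the points of $\\{z_{1}=0,z_{2}\neq 0\\}$, $\nu_{F}=1$ at the points of $\\{z_{2}=0,z_{1}\neq 0\\}$, $\nu_{F}(0,0)=3$ and $\nu_{F}=0$ elsewhere; thus $\nu_{F}$ records the vanishing order of $F$ along its zero set.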
We mean by a divisor on a domain $\Omega$ in ${\mathbf{C}}^{m}$ a map
$\nu:\Omega\to\mathbf{Z}$ such that, for each $a\in\Omega$, there are nonzero
holomorphic functions $F$ and $G$ on a connected neighborhood $U\subset\Omega$
of $a$ such that $\nu(z)=\nu_{F}(z)-\nu_{G}(z)$ for each $z\in U$ outside an
analytic set of dimension $\leqslant m-2$. Two divisors are regarded as the
same if they are identical outside an analytic set of dimension $\leqslant
m-2$. For a divisor $\nu$ on $\Omega$ we set $|\nu|:=\overline{\\{z:\nu(z)\neq
0\\}},$ which is a purely $(m-1)$-dimensional analytic subset of $\Omega$ or
empty set.
Take a nonzero meromorphic function $\varphi$ on a domain $\Omega$ in
${\mathbf{C}}^{m}$. For each $a\in\Omega$, we choose nonzero holomorphic
functions $F$ and $G$ on a neighborhood $U\subset\Omega$ such that
$\varphi=\dfrac{F}{G}$ on $U$ and $dim(F^{-1}(0)\cap G^{-1}(0))\leqslant m-2,$
and we define the divisors $\nu_{\varphi},\ \nu^{\infty}_{\varphi}$ by
$\nu_{\varphi}:=\nu_{F},\ \nu^{\infty}_{\varphi}:=\nu_{G}$, which are
independent of choices of $F$ and $G$ and so globally well-defined on
$\Omega$.
2.3. For a divisor $\nu$ on ${\mathbf{C}}^{m}$ and for a positive integer $M$ or $M=\infty$, we define the truncated divisor $\nu^{[M]}$ and the counting function of $\nu$ by
$\nu^{[M]}(z)=\min\ \\{M,\nu(z)\\},$
$n(t)=\begin{cases}\int\limits_{|\nu|\cap B(t)}\nu(z)\sigma&\text{ if }m\geq 2,\\\ \sum\limits_{|z|\leq t}\nu(z)&\text{ if }m=1,\end{cases}$
$N(r,\nu)=\int\limits_{1}^{r}\dfrac{n(t)}{t^{2m-1}}dt\quad(1<r<\infty).$
For a meromorphic function $\varphi$ on ${\mathbf{C}}^{m}$, we set
$N_{\varphi}(r)=N(r,\nu_{\varphi})$ and
$N_{\varphi}^{[M]}(r)=N(r,\nu_{\varphi}^{[M]}).$ We will omit the character
[M] if $M=\infty$.
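For example, when $m=1$ and $\nu=\nu_{\varphi}$ with $\varphi(z)=z^{3}$, we have $n(t)=3$ for all $t>0$, hence
$N_{\varphi}(r)=\int\limits_{1}^{r}\dfrac{3}{t}dt=3\,\mathrm{log}\,r\quad\text{ and }\quad N^{[1]}_{\varphi}(r)=\mathrm{log}\,r\quad(r>1),$
which illustrates how the truncation level $M$ caps the contribution of each point of the divisor.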
2.4. Let $f:{\mathbf{C}}^{m}\longrightarrow{\mathbf{P}}^{n}({\mathbf{C}})$ be
a meromorphic mapping. For arbitrarily fixed homogeneous coordinates
$(w_{0}:\dots:w_{n})$ on ${\mathbf{P}}^{n}({\mathbf{C}})$, we take a reduced
representation $f=(f_{0}:\dots:f_{n})$, which means that each $f_{i}$ is a
holomorphic function on ${\mathbf{C}}^{m}$ and
$f(z)=\big{(}f_{0}(z):\dots:f_{n}(z)\big{)}$ outside the analytic set
$I(f)=\\{f_{0}=\dots=f_{n}=0\\}$ of codimension $\geq 2$. Set
$\|f\|=\big{(}|f_{0}|^{2}+\dots+|f_{n}|^{2}\big{)}^{1/2}$.
The characteristic function of $f$ is defined by
$\displaystyle
T_{f}(r)=\int\limits_{S(r)}\text{log}\|f\|\eta-\int\limits_{S(1)}\text{log}\|f\|\eta.$
Let $H$ be a hyperplane in ${\mathbf{P}}^{n}({\mathbf{C}})$ given by
$H=\\{a_{0}\omega_{0}+...+a_{n}\omega_{n}=0\\},$ where
$a:=(a_{0},...,a_{n})\neq(0,...,0)$. We set $(f,H)=\sum_{i=0}^{n}a_{i}f_{i}$.
It is easy to see that the divisor $\nu_{(f,H)}$ does not depend on the choice of reduced representation of $f$ nor on that of the coefficients $a_{0},...,a_{n}$.
Moreover, we define the proximity function of $f$ with respect to $H$ by
$m_{f,H}(r)=\int_{S(r)}\mathrm{log}\dfrac{||f||\cdot||H||}{|(f,H)|}\eta-\int_{S(1)}\mathrm{log}\dfrac{||f||\cdot||H||}{|(f,H)|}\eta,$
where $||H||=(\sum_{i=0}^{n}|a_{i}|^{2})^{\frac{1}{2}}.$
2.5. Let $\varphi$ be a nonzero meromorphic function on ${\mathbf{C}}^{m}$,
which is occasionally regarded as a meromorphic map into
${\mathbf{P}}^{1}({\mathbf{C}})$. The proximity function of $\varphi$ is
defined by
$m(r,\varphi):=\int_{S(r)}\mathrm{log}^{+}|\varphi|\eta,$
where $\mathrm{log}^{+}t=\max\\{0,\mathrm{log}t\\}$ for $t>0$. The Nevanlinna
characteristic function of $\varphi$ is defined by
$T(r,\varphi)=N_{\frac{1}{\varphi}}(r)+m(r,\varphi).$
There is a fact that
$T_{\varphi}(r)=T(r,\varphi)+O(1).$
The meromorphic function $\varphi$ is said to be small with respect to $f$ iff
$||\ T(r,\varphi)=o(T_{f}(r))$.
Here as usual, by the notation $``||\ P"$ we mean the assertion $P$ holds for
all $r\in[0,\infty)$ excluding a Borel subset $E$ of the interval $[0,\infty)$
with $\int_{E}dr<\infty$.
The following results play essential roles in Nevanlinna theory (see [6]).
###### Theorem 1.1 (First main theorem).
Let $f:{\mathbf{C}}^{m}\to{\mathbf{P}}^{n}({\mathbf{C}})$ be a meromorphic
mapping and let $H$ be a hyperplane in ${\mathbf{P}}^{n}({\mathbf{C}})$ such
that $f({\mathbf{C}}^{m})\not\subset H$. Then
$N_{(f,H)}(r)+m_{f,H}(r)=T_{f}(r)\ (r>1).$
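To illustrate the theorem with the simplest example, take $m=n=1$, $f=(1:z)$ and $H=\\{\omega_{1}=0\\}$. Then $(f,H)=z$, $\|f\|=(1+|z|^{2})^{1/2}$, $\|H\|=1$, and a direct computation gives
$T_{f}(r)=\dfrac{1}{2}\mathrm{log}\dfrac{1+r^{2}}{2},\qquad N_{(f,H)}(r)=\mathrm{log}\,r,\qquad m_{f,H}(r)=\dfrac{1}{2}\mathrm{log}\dfrac{1+r^{2}}{2}-\mathrm{log}\,r\quad(r>1),$
so that indeed $N_{(f,H)}(r)+m_{f,H}(r)=T_{f}(r)$.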
###### Theorem 1.2 (Second main theorem).
Let $f:{\mathbf{C}}^{m}\to{\mathbf{P}}^{n}({\mathbf{C}})$ be a linearly
nondegenerate meromorphic mapping and $H_{1},...,H_{q}$ be hyperplanes of
${\mathbf{P}}^{n}({\mathbf{C}})$ in general position. Then
$||\ \
(q-n-1)T_{f}(r)\leqslant\sum_{i=1}^{q}N^{[n]}_{(f,H_{i})}(r)+o(T_{f}(r)).$
###### Lemma 1.3 (Lemma on logarithmic derivative).
Let $f$ be a nonzero meromorphic function on ${\mathbf{C}}^{m}.$ Then
$\biggl{|}\biggl{|}\quad
m\biggl{(}r,\dfrac{\mathcal{D}^{\alpha}(f)}{f}\biggl{)}=O(\mathrm{log}^{+}T_{f}(r))\
(\alpha\in\mathbf{Z}^{m}_{+}).$
2.6. Let $h_{1},h_{2},...,h_{p}$ be finitely many nonzero meromorphic
functions on ${\mathbf{C}}^{m}$. By a rational function in logarithmic
derivatives of $h_{j}^{\prime}$s we mean a nonzero meromorphic function
$\varphi$ on ${\mathbf{C}}^{m}$ which is represented as
$\varphi=\dfrac{P(\cdots,\frac{\mathcal{D}^{\alpha}h_{j}}{h_{j}},\cdots)}{Q(\cdots,\frac{\mathcal{D}^{\alpha}h_{j}}{h_{j}},\cdots)}$
with polynomials $P(\cdots,X^{\alpha},\cdots)$ and $Q(\cdots,X^{\alpha},\cdots)$ in the variables $X^{\alpha}$.
###### Proposition 1.4 (see [4, Proposition 3.4]).
Let $h_{1},h_{2},...,h_{p}\ (p\geq 2)$ be nonzero meromorphic functions on
${\mathbf{C}}^{m}$. Assume that
$h_{1}+h_{2}+\cdots+h_{p}=0$
Then, the set $\\{1,...,p\\}$ of indices has a partition
$\\{1,...,p\\}=J_{1}\cup J_{2}\cup\cdots\cup J_{k},\sharp J_{\alpha}\geq 2\
\forall\ \alpha,J_{\alpha}\cap J_{\beta}=\emptyset\text{ for }\alpha\neq\beta$
such that, for each $\alpha$,
$\displaystyle\mathrm{(i)}$ $\displaystyle\ \sum_{i\in J_{\alpha}}h_{i}=0,$
$\displaystyle\mathrm{(ii)}$ $\dfrac{h_{i}}{h_{i^{\prime}}}\ (i,i^{\prime}\in J_{\alpha})$ are rational functions in logarithmic derivatives of the $h_{j}$'s.
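For example, on ${\mathbf{C}}$ take $p=4$ and $h_{1}=e^{z},\ h_{2}=-e^{z},\ h_{3}=e^{z^{2}},\ h_{4}=-e^{z^{2}}$, so that $h_{1}+h_{2}+h_{3}+h_{4}=0$. The partition provided by the proposition is $J_{1}=\\{1,2\\}$, $J_{2}=\\{3,4\\}$: each subsum vanishes and the ratios $h_{2}/h_{1}=h_{4}/h_{3}=-1$ trivially satisfy (ii), whereas the single class $\\{1,2,3,4\\}$ is excluded by (ii), since $h_{3}/h_{1}=e^{z^{2}-z}$ is not a rational function in logarithmic derivatives of the $h_{j}$'s.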
## 2\. Algebraic degeneracy of two meromorphic mappings
In order to prove the main theorem, we need the following algebraic
propositions.
Let $H_{1},...,H_{2n+1}$ be $(2n+1)$ hyperplanes of
${\mathbf{P}}^{n}({\mathbf{C}})$ in general position given by
$H_{i}:\ \ x_{i0}\omega_{0}+x_{i1}\omega_{1}+\cdots+x_{in}\omega_{n}=0\ (1\leq
i\leq 2n+1).$
We consider the rational map
$\Phi:{\mathbf{P}}^{n}({\mathbf{C}})\times{\mathbf{P}}^{n}({\mathbf{C}})\longrightarrow{\mathbf{P}}^{2n}({\mathbf{C}})$
as follows:
For $v=(v_{0}:v_{1}:\cdots:v_{n}),\ w=(w_{0}:w_{1}:\cdots:w_{n})\in{\mathbf{P}}^{n}({\mathbf{C}})$, we define the value $\Phi(v,w)=(u_{1}:\cdots:u_{2n+1})\in{\mathbf{P}}^{2n}({\mathbf{C}})$ by
$u_{i}=\frac{x_{i0}v_{0}+x_{i1}v_{1}+\cdots+x_{in}v_{n}}{x_{i0}w_{0}+x_{i1}w_{1}+\cdots+x_{in}w_{n}}\quad(1\leq i\leq 2n+1).$
###### Proposition 2.1 (see [4, Proposition 5.9]).
The map $\Phi$ is a birational map of
${\mathbf{P}}^{n}({\mathbf{C}})\times{\mathbf{P}}^{n}({\mathbf{C}})$ onto
${\mathbf{P}}^{2n}({\mathbf{C}})$.
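To illustrate the proposition, let $n=1$ and take $H_{1}=\\{\omega_{0}=0\\}$, $H_{2}=\\{\omega_{1}=0\\}$, $H_{3}=\\{\omega_{0}+\omega_{1}=0\\}$. Then
$\Phi(v,w)=\Bigl(\dfrac{v_{0}}{w_{0}}:\dfrac{v_{1}}{w_{1}}:\dfrac{v_{0}+v_{1}}{w_{0}+w_{1}}\Bigr)\in{\mathbf{P}}^{2}({\mathbf{C}}),$
and from a generic point $u=(u_{1}:u_{2}:u_{3})$ one first recovers $w=(u_{2}-u_{3}:u_{3}-u_{1})$ from the relation $u_{3}(w_{0}+w_{1})=u_{1}w_{0}+u_{2}w_{1}$, and then $v=(u_{1}w_{0}:u_{2}w_{1})$; this exhibits the rational inverse of $\Phi$ in the simplest case.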
Let $f$ and $g$ be two meromorphic mappings of ${\mathbf{C}}^{m}$ into
${\mathbf{P}}^{n}({\mathbf{C}})$ with reduced representations
$f=(f_{0}:\cdots:f_{n})\ \text{ and }\ g=(g_{0}:\cdots:g_{n}).$
Define $h_{i}=\frac{(f,H_{i})/f_{0}}{(g,H_{i})/g_{0}}\ (1\leq i\leq 2n+1)$ and
$h_{I}=\prod_{i\in I}h_{i}$ for each subset $I$ of $\\{1,...,2n+1\\}.$ Set
$\mathcal{I}=\\{I=(i_{1},...,i_{n})\ ;\ 1\leq i_{1}<\cdots<i_{n}\leq 2n+1\\}$.
We have the following proposition.
###### Proposition 2.2.
If there exist constants $A_{I}$, not all zero, such that
$\sum_{I\in\mathcal{I}}A_{I}h_{I}\equiv 0$
then the map $f\times g$ into
${\mathbf{P}}^{n}({\mathbf{C}})\times{\mathbf{P}}^{n}({\mathbf{C}})$ is
algebraically degenerate.
###### Proof.
For $v=(v_{0}:v_{1}:\cdots:v_{n}),\ w=(w_{0}:w_{1}:\cdots:w_{n})\in{\mathbf{P}}^{n}({\mathbf{C}})$, we define $\Phi(v,w)=(u_{1}:\cdots:u_{2n+1})\in{\mathbf{P}}^{2n}({\mathbf{C}})$ as above. By Proposition 2.1, $\Phi$ is a birational map. This implies that the function
$\sum_{I\in\mathcal{I}}A_{I}\prod_{i\in I}\frac{x_{i0}v_{0}+x_{i1}v_{1}+\cdots+x_{in}v_{n}}{x_{i0}w_{0}+x_{i1}w_{1}+\cdots+x_{in}w_{n}}$
is a nonzero rational function. It follows that
$Q(v_{0},...,v_{n},w_{0},...,w_{n})=\sum_{I\in\mathcal{I}}A_{I}\left(\prod_{i\in I}\sum_{j=0}^{n}x_{ij}v_{j}\right)\times\left(\prod_{i\in I^{c}}\sum_{j=0}^{n}x_{ij}w_{j}\right),$
where $I^{c}=\\{1,...,2n+1\\}\setminus I$, is a nonzero polynomial. By the assumption of the proposition, it is clear that
$Q(f_{0},...,f_{n},g_{0},...,g_{n})\equiv 0.$
Hence $f\times g$ is algebraically degenerate. ∎
###### Proposition 2.3.
Let $f,g$ be two meromorphic mappings of ${\mathbf{C}}^{m}$ into
${\mathbf{P}}^{n}({\mathbf{C}})$. Let $\\{H_{i}\\}_{i=1}^{2n+2}$ be $2n+2$ hyperplanes of ${\mathbf{P}}^{n}({\mathbf{C}})$ in general position satisfying the assumptions of the Main Theorem. Suppose that the map $f\times g$ is algebraically nondegenerate. Then
the following assertions hold:
$\mathrm{(a)}$ $||\ T_{f}(r)=O(T_{g}(r))$ and $||\ T_{g}(r)=O(T_{f}(r))$.
$\mathrm{(b)}$
$m\biggl{(}r,\dfrac{(f,H_{i})}{(g,H_{i})}\dfrac{(g,H_{j})}{(f,H_{j})}\biggl{)}=o(T_{f}(r))\
\forall 1\leq i,j\leq 2n+2$.
###### Proof.
(a) Since the map $f\times g$ is algebraically nondegenerate, both $f$ and $g$ are linearly nondegenerate. Assume that $f,g$ have reduced representations
$f=(f_{0}:\cdots:f_{n}),\ g=(g_{0}:\cdots:g_{n}),$
and the hyperplane $H_{i}\ (1\leq i\leq 2n+2)$ is given by
$H_{i}=\\{(w_{0}:\cdots:w_{n})\ ;\ a_{i0}w_{0}+\cdots+a_{in}w_{n}=0\\}.$
By Theorem 1.2 we have
$\displaystyle\bigl{|}\bigl{|}\ (n+1)T_{f}(r)$
$\displaystyle\leq\sum_{i=1}^{2n+2}N_{(f,H_{i})}^{[n]}(r)+o(T_{f}(r))$
$\displaystyle\leq n\cdot\sum_{i=1}^{2n+2}N_{(g,H_{i})}^{[1]}(r)+o(T_{f}(r))$
$\displaystyle\leq n(2n+2)(T_{g}(r))+o(T_{f}(r)).$
Then we have $||\ T_{f}(r)=O(T_{g}(r))$. Similarly we also have $||\
T_{g}(r)=O(T_{f}(r))$. We have the first assertion of the proposition.
(b) Set $h_{i}=\dfrac{(f,H_{i})/f_{0}}{(g,H_{i})/g_{0}}\ (1\leq i\leq 2n+2)$. Since
$\sum_{k=0}^{n}a_{ik}f_{k}-\dfrac{f_{0}h_{i}}{g_{0}}\cdot\sum_{k=0}^{n}a_{ik}g_{k}=0\ (1\leq i\leq 2n+2),$
the $2n+2$ vectors $(a_{i0},...,a_{in},a_{i0}h_{i},...,a_{in}h_{i})$ are linearly dependent over the field of meromorphic functions, and therefore
(2.1) $\displaystyle\det\ (a_{i0},...,a_{in},a_{i0}h_{i},...,a_{in}h_{i};1\leq i\leq 2n+2)\equiv 0.$
For each subset $I\subset\\{1,2,...,2n+2\\},$ put $h_{I}=\prod_{i\in I}h_{i}$.
Denote by $\mathcal{I}$ the set
$\mathcal{I}=\\{I=(i_{1},...,i_{n+1})\ ;\ 1\leq i_{1}<\cdots<i_{n+1}\leq
2n+2\\}.$
For each $I=(i_{1},...,i_{n+1})\in\mathcal{I}$, define
$\displaystyle A_{I}=(-1)^{\frac{(n+1)(n+2)}{2}+i_{1}+...+i_{n+1}}$
$\displaystyle\times\det(a_{i_{r}l};1\leq r\leq n+1,0\leq l\leq n)$
$\displaystyle\times\det(a_{j_{s}l};1\leq s\leq n+1,0\leq l\leq n),$
where $J=(j_{1},...,j_{n+1})\in\mathcal{I}$ such that $I\cup
J=\\{1,2,...,2n+2\\}.$
We denote by $\mathcal{M}$ the field of all meromorphic functions on ${\mathbf{C}}^{m}$ and by $\mathcal{M}^{*}$ the multiplicative group of its nonzero elements, and we denote by $G$ the subgroup of all $\varphi\in\mathcal{M}^{*}$ such that $\varphi^{l}$ is a rational function in logarithmic derivatives of the $h_{i}$'s for some positive integer $l$. We denote by $\mathcal{H}$ the subgroup of the quotient group $\mathcal{M}^{*}/G$ generated by the classes $[h_{1}],...,[h_{2n+2}]$.
Then $\mathcal{H}$ is a finitely generated torsion-free abelian group (if $[h]^{k}=[1]$ for some $k\geq 1$, then $h^{k}\in G$, hence $h\in G$ by the definition of $G$ and $[h]=[1]$). Let $(x_{1},...,x_{p})$ be a basis of $\mathcal{H}$. Then for each $i\in\\{1,...,2n+2\\}$, we have
$[h_{i}]=x_{1}^{t_{i1}}\cdots x_{p}^{t_{ip}}.$
Put $t_{i}=(t_{i1},...,t_{ip})\in\mathbf{Z}^{p}$ and denote by $``\leqslant"$
the lexicographical order on $\mathbf{Z}^{p}$. Without loss of generality, we
may assume that
$t_{1}\leqslant t_{2}\leqslant\cdots\leqslant t_{2n+2}.$
Now the equality (2.1) implies that
$\sum_{I\in\mathcal{I}}A_{I}h_{I}=0.$
Applying Proposition 1.4 to the meromorphic functions $A_{I}h_{I}\ (I\in\mathcal{I})$, we obtain a partition
$\mathcal{I}=\mathcal{I}_{1}\cup\cdots\cup\mathcal{I}_{k}$ with
$\mathcal{I}_{\alpha}\neq\emptyset$ and
$\mathcal{I}_{\alpha}\cap\mathcal{I}_{\beta}=\emptyset$ for $\alpha\neq\beta$
such that for each $\alpha$,
(2.2) $\displaystyle\sum_{I\in\mathcal{I}_{\alpha}}A_{I}h_{I}\equiv 0,$ (2.3)
$\displaystyle\frac{A_{I^{\prime}}h_{I^{\prime}}}{A_{I}h_{I}}\
(I,I^{\prime}\in\mathcal{I}_{\alpha})\text{ are rational functions in
logarithmic derivatives of ${A_{J}h_{J}}^{\prime}$s}.$
Moreover, we may assume that each $\mathcal{I}_{\alpha}$ is minimal, i.e., there is no proper subset $\mathcal{J}_{\alpha}\subsetneq\mathcal{I}_{\alpha}$ with $\sum_{I\in\mathcal{J}_{\alpha}}A_{I}h_{I}\equiv 0$.
We distinguish the following two cases:
Case 1. Assume that there exists an index $i_{0}$ such that
$t_{i_{0}}<t_{i_{0}+1}$. We may assume that $i_{0}\leq n+1$ (otherwise we
consider the relation $``\geqslant"$ and change indices of
$\\{h_{1},...,h_{2n+2}\\}$).
We may assume that $I=(1,2,...,n+1)\in\mathcal{I}_{1}$. By the assertion (2.3), for each $J=(j_{1},...,j_{n+1})\in\mathcal{I}_{1}\ (1\leq j_{1}<\cdots<j_{n+1}\leq 2n+2)$, we have $[h_{I}]=[h_{J}]$. This implies that
$t_{1}+\cdots+t_{n+1}=t_{j_{1}}+\cdots+t_{j_{n+1}}.$
Since $j_{i}\geq i$ and hence $t_{j_{i}}\geqslant t_{i}$ for every $i$, this yields that $t_{j_{i}}=t_{i}\ (1\leq i\leq n+1)$.
Suppose that $j_{i_{0}}>i_{0}$; then $t_{i_{0}}<t_{i_{0}+1}\leqslant t_{j_{i_{0}}}$. This is a contradiction. Therefore $j_{i_{0}}=i_{0}$, and hence $j_{1}=1,...,j_{i_{0}-1}=i_{0}-1.$ We conclude that $J=(1,...,i_{0},j_{i_{0}+1},...,j_{n+1})$; in particular $i_{0}\in J$ for every $J\in\mathcal{I}_{1}.$
Hence, by (2.2), we have
$\sum_{I\in\mathcal{I}_{1}}A_{I}h_{I}=h_{i_{0}}\sum_{I\in\mathcal{I}_{1}}A_{I}h_{I\setminus\\{i_{0}\\}}\equiv 0.$
Thus
$\sum_{I\in\mathcal{I}_{1}}A_{I}h_{I\setminus\\{i_{0}\\}}\equiv 0.$
Then Proposition 2.2, applied to the $2n+1$ hyperplanes $\\{H_{i}\ ;\ i\neq i_{0}\\}$, shows that $f\times g$ is algebraically degenerate. This contradicts the supposition.
Case 2. Assume that $t_{1}=\cdots=t_{2n+2}$. It follows that $\frac{h_{I}}{h_{J}}\in G$ for any $I,J\in\mathcal{I}$. Then we easily see that $\frac{h_{i}}{h_{j}}\in G$ for all $1\leq i,j\leq 2n+2.$ Hence, there exists a positive integer $m_{ij}$ such that $\left(\frac{h_{i}}{h_{j}}\right)^{m_{ij}}$ is a rational function in logarithmic derivatives of the $h_{s}$'s. Therefore, by the lemma on logarithmic derivatives, we have
$\displaystyle\biggl{|}\biggl{|}\ \ m\bigl{(}r,\frac{h_{i}}{h_{j}}\bigl{)}$
$\displaystyle=\frac{1}{m_{ij}}m\bigl{(}r,\left(\frac{h_{i}}{h_{j}}\right)^{m_{ij}}\bigl{)}+O(1)$
$\displaystyle=O\biggl{(}\max
m\bigl{(}r,\frac{\mathcal{D}^{\alpha}(h_{s})}{h_{s}}\bigl{)}\biggl{)}+O(1)=o(\max
T(r,h_{s}))+O(1)$ $\displaystyle=o\biggl{(}\max
T\bigl{(}r,\dfrac{(f,H_{s})}{f_{0}}\bigl{)}\biggl{)}+o\biggl{(}\max
T\bigl{(}r,\dfrac{(g,H_{s})}{g_{0}}\bigl{)}\biggl{)}+O(1)$
$\displaystyle=o(T_{f}(r))+o(T_{g}(r))=o(T_{f}(r)).$
Therefore, we have
$m\biggl{(}r,\dfrac{(f,H_{i})}{(g,H_{i})}\dfrac{(g,H_{j})}{(f,H_{j})}\biggl{)}=o(T_{f}(r))\
\forall 1\leq i,j\leq 2n+2.$
The second assertion is proved. ∎
###### Proposition 2.4.
Let $f,g:{\mathbf{C}}^{m}\rightarrow{\mathbf{P}}^{n}({\mathbf{C}})$ be two
meromorphic mappings and let $\\{H_{i}\\}_{i=1}^{2n+2}$ be $2n+2$ hyperplanes
of ${\mathbf{P}}^{n}({\mathbf{C}})$ in general position with
$\dim f^{-1}(H_{i}\cap H_{j})\leqslant m-2\quad(1\leqslant i<j\leqslant
2n+2).$
Assume that $f$ and $g$ share $H_{i}\ (1\leq i\leq 2n+2)$ ignoring
multiplicity. Suppose that the map $f\times g$ is algebraically nondegenerate.
Then for every $i=1,...,2n+2,$ the following assertions hold
$\mathrm{(i)}$ $||\ T_{f}(r)=N_{(f,H_{i})}(r)+o(T_{f}(r))$ and $||\
T_{g}(r)=N_{(g,H_{i})}(r)+o(T_{f}(r))$,
$\mathrm{(ii)}$ $||\
N(r,|\nu^{0}_{(f,H_{i})}-\nu^{0}_{(g,H_{i})}|)+2N^{[1]}_{(h,H_{i})}(r)=\sum_{t=1}^{2n+2}N^{[1]}_{(h,H_{t})}(r)+o(T_{f}(r)),\
h\in\\{f,g\\},$
$\mathrm{(iii)}$ $||\ N(r,\min\\{\nu^{0}_{(f,H_{i})},\nu^{0}_{(g,H_{i})}\\})=\sum_{u=f,g}N^{[n]}_{(u,H_{i})}(r)-nN^{[1]}_{(f,H_{i})}(r)+o(T_{f}(r)).$
$\mathrm{(iv)}$ Moreover, if there exists an index $i_{0}\in\\{1,...,2n+2\\}$ such that $f$ and $g$ share $H_{i_{0}}$ with multiplicity truncated by level $n+1$, then
$\nu_{(f,H_{i_{0}})}(z)=\nu_{(g,H_{i_{0}})}(z)=n$
for all $z\in f^{-1}(H_{i_{0}})$ outside an analytic subset whose counting function, regardless of multiplicity, is equal to $o(T_{f}(r))$.
###### Proof.
(i)-(iii). For each pair of indices $i$ and $j$, $1\leq i<j\leq 2n+2$, we define
$P_{ij}:=\dfrac{(f,H_{i})}{(g,H_{i})}\cdot\dfrac{(g,H_{j})}{(f,H_{j})}.$
By the supposition that the map $f\times g$ is algebraically nondegenerate, we
have that $P_{ij}$ is not constant. Then by Proposition 2.3 we have
$\displaystyle T(r,P_{ij})$ $\displaystyle=m(r,P_{ij})+N(r,\nu^{\infty}_{P_{ij}})=N(r,\nu^{\infty}_{P_{ij}})+o(T_{f}(r))$ $\displaystyle=N(r,\nu^{\infty}_{\frac{(f,H_{i})}{(g,H_{i})}})+N(r,\nu^{\infty}_{\frac{(g,H_{j})}{(f,H_{j})}})+o(T_{f}(r)),$
where the last equality holds since $f$ and $g$ share the hyperplanes and $\dim f^{-1}(H_{i})\cap f^{-1}(H_{j})\leqslant m-2$.
On the other hand, since $f=g$ on $\bigcup_{\underset{t\neq i,j}{t=1}}^{2n+2}f^{-1}(H_{t})$, we have $P_{ij}=1$ on that set, and therefore
$N(r,\nu^{0}_{P_{ij}-1})\geq\sum_{\underset{t\neq
i,j}{t=1}}^{2n+2}N^{[1]}_{(f,H_{t})}(r).$
Since $N(r,\nu^{0}_{P_{ij}-1})\leq T(r,P_{ij})$, we have
(2.4) $\displaystyle
N(r,\nu^{\infty}_{\frac{(f,H_{i})}{(g,H_{i})}})+N(r,\nu^{\infty}_{\frac{(g,H_{j})}{(f,H_{j})}})\geq\sum_{\underset{t\neq
i,j}{t=1}}^{2n+2}N^{[1]}_{(g,H_{t})}(r)+o(T_{f}(r)).$
Similarly, we also get
(2.5) $\displaystyle
N(r,\nu^{\infty}_{\frac{(g,H_{i})}{(f,H_{i})}})+N(r,\nu^{\infty}_{\frac{(f,H_{j})}{(g,H_{j})}})\geq\sum_{\underset{t\neq
i,j}{t=1}}^{2n+2}N^{[1]}_{(f,H_{t})}(r)+o(T_{f}(r)).$
It is also easy to see that
(2.6)
$\displaystyle\begin{split}N(r,\nu^{\infty}_{\frac{(f,H_{t})}{(g,H_{t})}})&+N(r,\nu^{\infty}_{\frac{(g,H_{t})}{(f,H_{t})}})=N(r,|\nu^{0}_{(f,H_{t})}-\nu^{0}_{(g,H_{t})}|)\\\
&=N(r,\max\\{\nu^{0}_{(f,H_{t})},\nu^{0}_{(g,H_{t})}\\})-N(r,\min\\{\nu^{0}_{(f,H_{t})},\nu^{0}_{(g,H_{t})}\\})\\\
&=N(r,\max\\{\nu^{0}_{(f,H_{t})},\nu^{0}_{(g,H_{t})}\\})+N(r,\min\\{\nu^{0}_{(f,H_{t})},\nu^{0}_{(g,H_{t})}\\})\\\
&\ \ -2N(r,\min\\{\nu^{0}_{(f,H_{t})},\nu^{0}_{(g,H_{t})}\\})\\\
&=N_{(f,H_{t})}(r)+N_{(g,H_{t})}(r)-2N(r,\min\\{\nu^{0}_{(f,H_{t})},\nu^{0}_{(g,H_{t})}\\}),\forall
1\leq t\leq 2n+2.\end{split}$
Therefore, by summing up both sides of (2.4) and (2.5) and applying (2.6), we get
(2.7)
$\displaystyle\sum_{v=i,j}\bigl{(}\sum_{u=f,g}N_{(u,H_{v})}(r)-2N(r,\min\\{\nu^{0}_{(f,H_{v})},\nu^{0}_{(g,H_{v})}\\})\bigl{)}\geq\sum_{u=f,g}\sum_{\underset{t\neq
i,j}{t=1}}^{2n+2}N^{[1]}_{(u,H_{t})}(r)+o(T_{f}(r)).$
Since
(2.8) $\displaystyle T_{u}(r)\geq N_{(u,H_{t})}(r),\ u=f,g,$
the above inequality yields that
(2.9) $\displaystyle\begin{split}||\
\sum_{u=f,g}2T_{u}(r)&\geq\sum_{u=f,g}\bigl{(}N_{(u,H_{i})}(r)+N_{(u,H_{j})}(r)\bigl{)}\\\
&\geq\sum_{v=i,j}2N(r,\min\\{\nu^{0}_{(f,H_{v})},\nu^{0}_{(g,H_{v})}\\})+\sum_{u=f,g}\sum_{\underset{t\neq
i,j}{t=1}}^{2n+2}N^{[1]}_{(u,H_{t})}(r)+o(T_{f}(r)).\end{split}$
We also see that for all $z\in f^{-1}(H_{v}),\ v=i,j,$
$\displaystyle\min\\{\nu^{0}_{(f,H_{v})}(z),\nu^{0}_{(g,H_{v})}(z)\\}\geqslant\min\\{\nu^{0}_{(f,H_{v})}(z),n\\}+\min\\{\nu^{0}_{(g,H_{v})}(z),n\\}-n.$
This implies that
(2.10)
$\displaystyle\begin{split}N(r,\min\\{\nu^{0}_{(f,H_{v})},\nu^{0}_{(g,H_{v})}\\})&\geq\sum_{u=f,g}N^{[n]}_{(u,H_{v})}(r)-nN^{[1]}_{(f,H_{v})}(r)\\\
&=\sum_{u=f,g}\bigl{(}N^{[n]}_{(u,H_{v})}(r)-\dfrac{n}{2}N^{[1]}_{(u,H_{v})}(r)\bigl{)}.\end{split}$
Combining inequalities (2.9) and (2.10), we have
$\displaystyle||\ \sum_{u=f,g}2T_{u}(r)$ $\displaystyle\geq\sum_{v=i,j}\sum_{u=f,g}\bigl{(}2N^{[n]}_{(u,H_{v})}(r)-nN^{[1]}_{(u,H_{v})}(r)\bigl{)}$ $\displaystyle\ \ +\sum_{u=f,g}\sum_{\underset{t\neq i,j}{t=1}}^{2n+2}N^{[1]}_{(u,H_{t})}(r)+o(T_{f}(r)).$
Summing up both sides of the above inequality over all pairs $(i,j),\ i<j$, and using the Second Main Theorem, we get
$\displaystyle||\ \sum_{u=f,g}2T_{u}(r)$ $\displaystyle\geq\dfrac{2}{n+1}\sum_{u=f,g}\sum_{v=1}^{2n+2}N^{[n]}_{(u,H_{v})}(r)+o(T_{f}(r))$ $\displaystyle\geq\dfrac{2(2n+2-n-1)}{n+1}\sum_{u=f,g}T_{u}(r)+o(T_{f}(r))=\sum_{u=f,g}2T_{u}(r)+o(T_{f}(r)).$
The last equality yields that all the inequalities (2.4), (2.5) and (2.8)-(2.10) become equalities outside a Borel set of finite measure. Summarizing all these “equalities”, we obtain the following:
(2.11) $\displaystyle||\ T_{f}(r)=N_{(f,H_{i})}(r)+o(T_{f}(r))\text{ and }||\ T_{g}(r)=N_{(g,H_{i})}(r)+o(T_{f}(r))\ \text{(by the equality case of (2.8))},$
(2.12) $\displaystyle||\ \sum_{v=i,j}N(r,|\nu^{0}_{(f,H_{v})}-\nu^{0}_{(g,H_{v})}|)=\sum_{u=f,g}\sum_{\underset{t\neq i,j}{t=1}}^{2n+2}N^{[1]}_{(u,H_{t})}(r)+o(T_{f}(r))\ \text{(by the equality cases of (2.4), (2.5) and (2.6))},$
(2.13) $\displaystyle||\ N(r,\min\\{\nu^{0}_{(f,H_{i})},\nu^{0}_{(g,H_{i})}\\})=\sum_{u=f,g}N^{[n]}_{(u,H_{i})}(r)-nN^{[1]}_{(f,H_{i})}(r)+o(T_{f}(r))\ \text{(by the equality case of (2.10))},$
for every $i=1,...,2n+2.$ Then, equalities (2.11) and (2.13) prove the first and the third assertions of the proposition. Also the equality (2.12) implies that
$||\
\sum_{v=i,j}\bigl{(}N(r,|\nu^{0}_{(f,H_{v})}-\nu^{0}_{(g,H_{v})}|)+2N^{[1]}_{(h,H_{v})}(r)\bigl{)}=\sum_{u=f,g}\sum_{t=1}^{2n+2}N^{[1]}_{(u,H_{t})}(r)+o(T_{f}(r))$
holds for all $i,j\in\\{1,...,2n+2\\}$ and $h\in\\{f,g\\}$. From this it easily follows that
$||\
N(r,|\nu^{0}_{(f,H_{i})}-\nu^{0}_{(g,H_{i})}|)+2N^{[1]}_{(h,H_{i})}(r)=\sum_{t=1}^{2n+2}N^{[1]}_{(h,H_{t})}(r)+o(T_{f}(r)),1\leq
i\leq 2n+2,\ h\in\\{f,g\\}.$
Then the second assertion is proved.
(iv). Without loss of generality, we may assume that $i_{0}=2n+2$. From the
third assertion and the assumption that $f$ and $g$ share $H_{2n+2}$ with
multiplicity truncated by level $n+1$, we have
$\displaystyle||\ N^{[n+1]}_{(f,H_{2n+2})}(r)$ $\displaystyle\leq
N(r,\min\\{\nu_{(f,H_{2n+2})},\nu_{(g,H_{2n+2})}\\})$
$\displaystyle=\sum_{u=f,g}N^{[n]}_{(u,H_{2n+2})}(r)-nN^{[1]}_{(g,H_{2n+2})}(r)+o(T_{f}(r))$
$\displaystyle\leq\sum_{u=f,g}N^{[n]}_{(u,H_{2n+2})}(r)-N^{[n]}_{(g,H_{2n+2})}(r)+o(T_{f}(r))$
$\displaystyle=N^{[n]}_{(f,H_{2n+2})}(r)+o(T_{f}(r)).$
This yields that
$||\ N^{[n+1]}_{(f,H_{2n+2})}(r)=N^{[n]}_{(f,H_{2n+2})}(r)+o(T_{f}(r))\text{ and }||\ N^{[n]}_{(g,H_{2n+2})}(r)=nN^{[1]}_{(g,H_{2n+2})}(r)+o(T_{f}(r)).$
It follows that
$\min\\{\nu_{(f,H_{2n+2})},n+1\\}=\min\\{\nu_{(f,H_{2n+2})},n\\}\text{ and }\min\\{\nu_{(g,H_{2n+2})},n\\}=n\min\\{\nu_{(g,H_{2n+2})},1\\}$
outside an analytic subset $S$ whose counting function, regardless of multiplicity, is equal to $o(T_{f}(r))$. Therefore,
$\nu_{(f,H_{2n+2})}(z)\leq n\text{ and }\nu_{(g,H_{2n+2})}(z)\geq n\ \forall
z\in f^{-1}(H_{2n+2})\setminus S.$
Similarly, we have
$\nu_{(g,H_{2n+2})}(z)\leq n\text{ and }\nu_{(f,H_{2n+2})}(z)\geq n$
for all $z\in f^{-1}(H_{2n+2})$ outside an analytic subset $S^{\prime}$ whose counting function, regardless of multiplicity, is equal to $o(T_{f}(r))$. Then we have
$\nu_{(f,H_{2n+2})}(z)=\nu_{(g,H_{2n+2})}(z)=n$
for all $z\in f^{-1}(H_{2n+2})\setminus(S\cup S^{\prime})$. The fourth
assertion is proved. ∎
Proof of Main Theorem. Suppose that $f\times g$ is not algebraically degenerate. Then by Proposition 2.4(ii)-(iv) and by the assumption, we have the following:
$\displaystyle||\ 2N^{[1]}_{(h,H_{2n+2})}(r)=\sum_{t=1}^{2n+2}N^{[1]}_{(h,H_{t})}(r)+o(T_{f}(r)),\quad h\in\\{f,g\\};$
indeed, by (iv) we have $N(r,|\nu^{0}_{(f,H_{2n+2})}-\nu^{0}_{(g,H_{2n+2})}|)=o(T_{f}(r))$, and we then apply (ii) with $i=2n+2$.
By the Second Main Theorem, it follows that
$\displaystyle||\ T_{h}(r)\geq$ $\displaystyle
N^{[1]}_{(h,H_{2n+2})}(r)=\sum_{\overset{t=1}{t\neq
2n+2}}^{2n+2}N^{[1]}_{(h,H_{t})}(r)+o(T_{f}(r))$
$\displaystyle\geq\dfrac{1}{n}\sum_{\overset{t=1}{t\neq
2n+2}}^{2n+2}N^{[n]}_{(h,H_{t})}(r)+o(T_{f}(r))$
$\displaystyle\geq\dfrac{2n+1-n-1}{n}T_{h}(r)+o(T_{f}(r))=T_{h}(r)+o(T_{f}(r))$
for each $h\in\\{f,g\\}$. Therefore, we easily obtain that
$\displaystyle||\ T_{h}(r)$
$\displaystyle=N_{(h,H_{2n+2})}(r)+o(T_{h}(r))=N^{[n]}_{(h,H_{2n+2})}(r)+o(T_{h}(r))$
$\displaystyle=N^{[1]}_{(h,H_{2n+2})}(r)+o(T_{h}(r)),\ \forall h\in\\{f,g\\}.$
Then, by Proposition 2.4(iii) applied with $i=2n+2$, we have
$\displaystyle||\ T_{h}(r)$ $\displaystyle=N(r,\min\\{\nu^{0}_{(f,H_{2n+2})},\nu^{0}_{(g,H_{2n+2})}\\})=\sum_{u=f,g}N^{[n]}_{(u,H_{2n+2})}(r)-nN^{[1]}_{(h,H_{2n+2})}(r)+o(T_{h}(r))$
$\displaystyle=2T_{h}(r)-nT_{h}(r)+o(T_{h}(r)),\ \forall h\in\\{f,g\\}.$
Hence $||\ (n-1)T_{h}(r)=o(T_{h}(r))$; dividing by $T_{h}(r)$ and letting $r\longrightarrow+\infty$, we get $n=1$. This contradicts the assumption that $n\geq 2.$ Therefore, the supposition is impossible, and the map $f\times g$ is algebraically degenerate. $\square$
## 3\. Two meromorphic mappings with two families of hyperplanes
Let $f$ and $g$ be two meromorphic mappings of ${\mathbf{C}}^{m}$ into ${\mathbf{P}}^{n}({\mathbf{C}})$. Let
$\left\\{H_{i}\right\\}_{i=1}^{2n+2}$ and
$\left\\{G_{i}\right\\}_{i=1}^{2n+2}$ be two families of hyperplanes of
${\mathbf{P}}^{n}({\mathbf{C}})$ in general position. Hyperplanes $H_{i}$ and
$G_{i}$ are given by
$\displaystyle H_{i}$ $\displaystyle=\\{(\omega_{0}:\cdots:\omega_{n})\ |\
\sum_{v=0}^{n}a_{iv}\omega_{v}=0\\}$ $\displaystyle\text{and }G_{i}$
$\displaystyle=\\{(\omega_{0}:\cdots:\omega_{n})\ |\
\sum_{v=0}^{n}b_{iv}\omega_{v}=0\\}$
respectively. Let $f=(f_{0}:\cdots:f_{n})$ and $g=(g_{0}:\cdots:g_{n})$ be
reduced representations of $f$ and $g$ respectively. We set
$(f,H_{i})=\sum_{v=0}^{n}a_{iv}f_{v}\text{ and
}(g,G_{i})=\sum_{v=0}^{n}b_{iv}g_{v}.$
In this section, we will consider the case of two meromorphic mappings sharing
two different families of hyperplanes as follows.
###### Theorem 3.1.
Let $f,g,\left\\{H_{i}\right\\}_{i=1}^{2n+2}$ and
$\left\\{G_{i}\right\\}_{i=1}^{2n+2}$ be as above. Assume that
$\mathrm{(}a)$ $\dim f^{-1}(H_{i})\cap f^{-1}(H_{j})\leqslant m-2\ \forall
1\leqslant i<j\leqslant 2n+2,$
$\mathrm{(}b)$ $f^{-1}(H_{i})=g^{-1}(G_{i})$ for $i=1,...,2n+1,$
$\mathrm{(}c)$
$\min\\{\nu_{(f,H_{2n+2})},n+1\\}=\min\\{\nu_{(g,G_{2n+2})},n+1\\}$,
$\mathrm{(}d)$ $\dfrac{(f,H_{v})}{{(f,H_{j})}}=\dfrac{(g,G_{v})}{{(g,G_{j})}}$ on $\bigcup_{i=1}^{2n+2}f^{-1}(H_{i})\setminus f^{-1}(H_{j}),$ for $1\leqslant v,j\leqslant 2n+2.$
If $n\geq 2$ then the map $f\times g$ is algebraically degenerate.
###### Proof.
If $f$ is linearly degenerate, then $f\times g$ is clearly algebraically degenerate and there is nothing to prove; hence from now on we assume that $f$ is linearly nondegenerate. We consider the linear projective transformation $\mathcal{L}$ of ${\mathbf{P}}^{n}({\mathbf{C}})$ given by $\mathcal{L}((z_{0}:\cdots:z_{n}))=(\omega_{0}:\cdots:\omega_{n})$ with
$\displaystyle\left(\begin{array}[]{ccc}\omega_{0}\\\ \vdots\\\
\omega_{n}\end{array}\right)=\underbrace{\left(\begin{array}[]{ccc}c_{10}&\cdots&c_{1n}\\\
\vdots&\cdots&\vdots\\\
c_{(n+1)0}&\cdots&c_{(n+1)n}\end{array}\right)}_{C}\left(\begin{array}[]{ccc}z_{0}\\\
\vdots\\\ z_{n}\end{array}\right),$
where
$C={\underbrace{\left(\begin{array}[]{ccc}a_{10}&\cdots&a_{1n}\\\
\vdots&\cdots&\vdots\\\
a_{(n+1)0}&\cdots&a_{(n+1)n}\end{array}\right)}_{A}}^{-1}\cdot\underbrace{\left(\begin{array}[]{ccc}b_{10}&\cdots&b_{1n}\\\
\vdots&\cdots&\vdots\\\ b_{(n+1)0}&\cdots&b_{(n+1)n}\end{array}\right)}_{B}$
We set
$(a^{\prime}_{i0},...,a^{\prime}_{in})=(b_{i0},...,b_{in})\cdot C^{-1},\
\text{ for }i=1,..,2n+2.$
Since $C=A^{-1}\cdot B$, we have $C^{-1}=B^{-1}\cdot A$; for $1\leq i\leq n+1$ the row $(b_{i0},...,b_{in})$ is the $i$-th row of $B$, so
$(a^{\prime}_{i0},...,a^{\prime}_{in})=(b_{i0},...,b_{in})\cdot B^{-1}\cdot A=(a_{i0},...,a_{in}),\ \forall i=1,...,n+1.$
Suppose that there exists an index $i_{0}\in\\{n+2,...,2n+2\\}$ such that
$(a^{\prime}_{i0},...,a^{\prime}_{in})\neq(a_{i0},...,a_{in}).$
We consider the following function
$F=\sum_{j=0}^{n}(a^{\prime}_{i_{0}j}-a_{i_{0}j})f_{j}.$
Since $f$ is linearly nondegenerate, $F$ is a nonzero meromorphic function on ${\mathbf{C}}^{m}$. Take $z\in\bigcup_{i=1}^{2n+2}f^{-1}(H_{i})\setminus I(f)$ outside an analytic set of dimension at most $m-2$; without loss of generality we may assume that $(f,H_{1})(z)\neq 0$, and hence also $(g,G_{1})(z)\neq 0$ by assumption $\mathrm{(}b)$. Writing $f(z)$ and $g(z)$ for the column vectors $(f_{0}(z),...,f_{n}(z))^{t}$ and $(g_{0}(z),...,g_{n}(z))^{t}$, assumption $\mathrm{(}d)$ (with $j=1$) gives
$A\cdot f(z)=\dfrac{(f,H_{1})(z)}{(g,G_{1})(z)}\cdot B\cdot g(z),$
since the entries of $A\cdot f(z)$ and $B\cdot g(z)$ are $(f,H_{k})(z)$ and $(g,G_{k})(z)\ (1\leq k\leq n+1)$. Therefore
$\displaystyle F(z)=$ $\displaystyle\sum_{j=0}^{n}(a^{\prime}_{i_{0}j}-a_{i_{0}j})f_{j}(z)=(b_{i_{0}0},...,b_{i_{0}n})\cdot C^{-1}\cdot f(z)-(f,H_{i_{0}})(z)$ $\displaystyle=$ $\displaystyle(b_{i_{0}0},...,b_{i_{0}n})\cdot B^{-1}\cdot A\cdot f(z)-(f,H_{i_{0}})(z)$ $\displaystyle=$ $\displaystyle\dfrac{(f,H_{1})(z)}{(g,G_{1})(z)}\cdot(b_{i_{0}0},...,b_{i_{0}n})\cdot g(z)-(f,H_{i_{0}})(z)$ $\displaystyle=$ $\displaystyle\dfrac{(f,H_{1})(z)}{(g,G_{1})(z)}\cdot(g,G_{i_{0}})(z)-(f,H_{i_{0}})(z)=0,$
where the last equality follows from assumption $\mathrm{(}d)$ with $v=i_{0}$ and $j=1$.
Therefore, it follows that
$N_{F}(r)\geqslant\sum_{i=1}^{2n+2}N_{(f,H_{i})}^{[1]}(r).$
On the other hand, by the Jensen formula we have
$N_{F}(r)=\int\limits_{S(r)}\mathrm{log}|F(z)|\ \eta+O(1)\leqslant\int\limits_{S(r)}\mathrm{log}||f(z)||\ \eta+O(1)=T_{f}(r)+O(1).$
By using the Second Main Theorem, we obtain
$\displaystyle||\ (n+1)T_{f}(r)$ $\displaystyle\leqslant\sum_{i=1}^{2n+2}N_{(f,H_{i})}^{[n]}(r)+o(T_{f}(r))$ $\displaystyle\leqslant n\sum_{i=1}^{2n+2}N_{(f,H_{i})}^{[1]}(r)+o(T_{f}(r))\leqslant nN_{F}(r)+o(T_{f}(r))\leqslant nT_{f}(r)+o(T_{f}(r)).$
This implies that $||\ T_{f}(r)=o(T_{f}(r))$, which is impossible since $f$ is nonconstant. Therefore we have
$(a^{\prime}_{i0},...,a^{\prime}_{in})=(a_{i0},...,a_{in}),\ \forall
i=1,...,2n+2.$
Hence $\mathcal{L}(G_{i})=H_{i}$ for all $i=1,...,2n+2.$
We set $\tilde{g}=\mathcal{L}(g)$. Then $f$ and $\tilde{g}$ share
$\\{H_{1},..,H_{2n+1}\\}$ ignoring multiplicity and share $H_{2n+2}$ with
multiplicity truncated by level $n+1$. By Main Theorem, the map
$f\times\tilde{g}$ is algebraically degenerate. We easily see that the map
$\begin{array}[]{cccc}\Psi:&{\mathbf{P}}^{n}({\mathbf{C}})\times{\mathbf{P}}^{n}({\mathbf{C}})&\longrightarrow&{\mathbf{P}}^{n}({\mathbf{C}})\times{\mathbf{P}}^{n}({\mathbf{C}})\\\
&((\omega_{0}:\cdots:\omega_{n})\times(z_{0}:\cdots:z_{n}))&\mapsto&((\omega_{0}:\cdots:\omega_{n})\times\mathcal{L}^{-1}(z_{0}:\cdots:z_{n}))\end{array}$
is an automorphism of
${\mathbf{P}}^{n}({\mathbf{C}})\times{\mathbf{P}}^{n}({\mathbf{C}})$.
Therefore, the map $f\times g=\Psi\circ(f\times\tilde{g})$ is an algebraically degenerate mapping into ${\mathbf{P}}^{n}({\mathbf{C}})\times{\mathbf{P}}^{n}({\mathbf{C}})$. The theorem is proved. ∎
## References
* [1] Z. Chen and Q. Yan, Uniqueness theorem of meromorphic mappings into ${\mathbf{P}}^{N}({\mathbf{C}})$ sharing $2N+3$ hyperplanes regardless of multiplicities, Internat. J. Math., 20 (2009), 717-726.
* [2] Z. Chen and Q. Yan, A degeneracy theorem for meromorphic mappings with truncated multiplicities, Acta Math. Scientia, 31 (2011), 549-560.
* [3] G. Dethloff, S. D. Quang and T. V. Tan, A uniqueness theorem for meromorphic mappings with two families of hyperplanes, Proc. Amer. Math. Soc., 140 (2012), 189-197.
* [4] H. Fujimoto, Uniqueness problem with truncated multiplicities in value distribution theory II, Nagoya Math. J., 155 (1999), 161-188.
* [5] G. Gundersen, Meromorphic functions that share four values, Trans. Amer. Math. Soc., 277 (1983), 545-567.
* [6] J. Noguchi and T. Ochiai, Introduction to Geometric Function Theory in Several Complex Variables, Trans. Math. Monogr. 80, Amer. Math. Soc., Providence, Rhode Island, 1990.
* [7] R. Nevanlinna, Einige Eindeutigkeitssätze in der Theorie der meromorphen Funktionen, Acta Math., 48 (1926), 367-391.
* [8] S. D. Quang, Unicity of meromorphic mappings sharing few hyperplanes, Ann. Polon. Math., 102 No. 3 (2011), 255-270.
* [9] S. D. Quang, A finiteness theorem for meromorphic mappings with few hyperplanes, Kodai Math. J., 35 (2012), 463-484.
1302.2331
# The Phase Transition of Matrix Recovery from Gaussian Measurements Matches
the Minimax MSE of Matrix Denoising
David L. Donoho, Matan Gavish, Andrea Montanari
Department of Statistics, Stanford University (A. Montanari: also Department of Electrical Engineering, Stanford University)
###### Abstract
Let $X_{0}$ be an unknown $M$ by $N$ matrix. In matrix recovery, one takes $n<MN$ linear measurements $y_{1},\dots,y_{n}$ of $X_{0}$, where $y_{i}={\rm Tr}(a_{i}^{T}X_{0})$ and each $a_{i}$ is an $M$ by $N$ matrix. For measurement matrices with Gaussian i.i.d. entries, it is known that if $X_{0}$ is of low rank, it is recoverable from just a few measurements. A popular approach for matrix
recovery is Nuclear Norm Minimization (NNM): solving the convex optimization
problem $\text{min }\|X\|_{*}\text{ subject to }y_{i}={\rm Tr}(a_{i}^{T}X)$
for all $1\leq i\leq n$, where $\|\cdot\|_{*}$ denotes the nuclear norm,
namely, the sum of singular values. Empirical work reveals a _phase
transition_ curve, stated in terms of the undersampling fraction
$\delta(n,M,N)=n/(MN)$, rank fraction $\rho=r/N$ and aspect ratio $\beta=M/N$.
Specifically, a curve $\delta^{*}=\delta^{*}(\rho;\beta)$ exists such that, if
$\delta>\delta^{*}(\rho;\beta)$, NNM typically succeeds, while if
$\delta<\delta^{*}(\rho;\beta)$, it typically fails.
An apparently quite different problem is matrix denoising in Gaussian noise,
where an unknown $M$ by $N$ matrix $X_{0}$ is to be estimated based on direct
noisy measurements $Y=X_{0}+Z$, where the matrix $Z$ has iid Gaussian entries.
It has been empirically observed that, if $X_{0}$ has low rank, it may be
recovered quite accurately from the noisy measurement $Y$. A popular matrix
denoising scheme solves the unconstrained optimization problem $\text{min
}\|Y-X\|_{F}^{2}/2+\lambda\|X\|_{*}$. When optimally tuned, this scheme
achieves the asymptotic minimax MSE ${\cal
M}(\rho)=\lim_{N\rightarrow\infty}\inf_{\lambda}\sup_{{\rm
rank}(X)\leq\rho\cdot N}MSE(X,\hat{X}_{\lambda})$.
We report extensive experiments showing that the phase transition
$\delta^{*}(\rho)$ in the first problem (Matrix Recovery from Gaussian
Measurements) coincides with the minimax risk curve ${\cal M}(\rho)$ in the
second problem (Matrix Denoising in Gaussian Noise): $\delta^{*}(\rho)={\cal
M}(\rho)$, for any rank fraction $0<\rho<1$.
Our experiments considered matrices belonging to two constraint classes: real
$M$ by $N$ matrices, of various ranks and aspect ratios, and real symmetric
positive semidefinite $N$ by $N$ matrices, of various ranks. Different
predictions ${\cal M}(\rho)$ of the phase transition location were used in the
two different cases, and were validated by the experimental data.
###### Contents
1. Introduction
2. Methods
  * 2.1 Generation of Problem Instances
  * 2.2 Convex Optimization
  * 2.3 Probability of Exact Recovery
  * 2.4 Estimating the Probability of Exact Recovery
  * 2.5 Asymptotic Phase Transition Hypothesis
  * 2.6 Empirical Phase Transitions
  * 2.7 Experimental Design
  * 2.8 Computing
3. Results
4. Discussion: Existing Literature and Our Contributions
5. Conclusions
* Appendix A: Asymptotic Minimax MSE Formula
* Appendix B: Summary of Empirical Results
* Appendix C: Data Deposition
* Appendix D: Code Deposition
## 1 Introduction
Let $X_{0}$ be an unknown $M$ by $N$ matrix. How many measurements must we
obtain in order to ‘completely know’ $X_{0}$? While it seems that $MN$
measurements must be necessary, in recent years intense research in applied mathematics, optimization and information theory has shown that, when $X_{0}$
is of low rank, we may efficiently recover it from a relatively small number
of linear measurements by convex optimization [3, 1, 2]. Applications have
been developed in fields ranging widely, for example from video and image
processing [4], to quantum state tomography [5], to collaborative filtering
[1, 6].
Specifically, let ${\cal A}:{\mathbb{R}}^{M\times N}\to{\mathbb{R}}^{n}$ be a
linear operator and consider measurements $y={\cal A}(X_{0})$. If $n<MN$, the
problem of inferring $X_{0}$ from $y$ may be viewed as attempting to solve an
underdetermined system of equations. Under certain circumstances, it has been
observed that this (seemingly hopeless) task can be accomplished by solving
the so-called nuclear norm minimization problem
$(P_{nuc})\qquad\text{min }\|X\|_{*}\;\;\text{ subject to }y={\cal A}(X)\,.\qquad(1)$
Here the _nuclear norm_ $\|X\|_{*}$ is the sum of singular values of $X$. For
example, it was found that if $X_{0}$ is sufficiently low rank, with a
principal subspace in a certain sense incoherent to the measurement operator
${\cal A}$, then the solution $X_{1}=X_{1}(y)$ to $(P_{nuc})$ is precisely
$X_{0}$. Such incoherence can be obtained by letting ${\cal A}$ be random, for
instance if ${\cal A}(X_{0})_{i}={\rm Tr}(a_{i}^{T}X_{0})$ with $a_{i}\in{\mathbb{R}}^{M\times N}$ having i.i.d. Gaussian entries. In this case we speak of _“matrix recovery from Gaussian measurements”_ [3].
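As a concrete illustration of $(P_{nuc})$ with Gaussian measurements, the following minimal Python sketch builds one small random instance and solves it. It uses numpy and the cvxpy modeling package; cvxpy and the problem sizes are assumptions of this sketch, not the solver stack or the experimental settings used later in this paper.

```python
import numpy as np
import cvxpy as cp

# Illustrative sizes only (not taken from the experiments reported below).
M, N, r, n = 12, 12, 1, 90
rng = np.random.default_rng(0)

# Rank-r ground truth X0 and n Gaussian measurement matrices a_i.
X0 = rng.standard_normal((M, r)) @ rng.standard_normal((r, N))
a = [rng.standard_normal((M, N)) for _ in range(n)]
y = np.array([np.trace(ai.T @ X0) for ai in a])      # y_i = Tr(a_i^T X0)

# (P_nuc): minimize the nuclear norm subject to the linear measurements.
X = cp.Variable((M, N))
constraints = [cp.trace(a[i].T @ X) == y[i] for i in range(n)]
cp.Problem(cp.Minimize(cp.normNuc(X)), constraints).solve()

rel_err = np.linalg.norm(X.value - X0, "fro") / np.linalg.norm(X0, "fro")
print(rel_err)   # recovery is declared a success when this is below a small tolerance
```

With enough measurements $n$ relative to the rank, the relative error is numerically zero; with too few, the minimizer differs from $X_{0}$ — the transition between these regimes is the phase transition studied here.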
A key phrase from the previous paragraph: ‘if $X_{0}$ is _sufficiently_ low
rank’. Clearly there must be a quantitative trade-off between the rank of
$X_{0}$ and the number of measurements required, such that higher rank
matrices require more measurements. In the Gaussian measurements model, with
$N$ sufficiently large, empirical work by Recht, Xu and Hassibi [8, 7], Fazel, Parrilo and Recht [3], Tanner and Wei [9] and Oymak and Hassibi [10],
documents a _phase transition_ phenomenon. For matrices of a given rank, there
is a fairly precise number of required samples, in the sense that a transition
from non recovery to complete recovery takes place sharply as the number of
samples varies across this value. For example, in Figure 1 below we report
results obtained in our own experiments, showing that, for reconstructing
matrices of size 60 by 60 which are of rank 20, 2600 Gaussian measurements are
sufficient with very high probability, but 2400 Gaussian measurements are
insufficient with very high probability.
$\delta$ | 0.67 | 0.68 | 0.68 | 0.68 | 0.68 | 0.69 | 0.69 | 0.69 | 0.69 | 0.70 | 0.71 | 0.72 | 0.73
---|---|---|---|---|---|---|---|---|---|---|---|---|---
$n$ | 2400 | 2437 | 2446 | 2455 | 2465 | 2474 | 2483 | 2492 | 2502 | 2511 | 2538 | 2575 | 2612
$\hat{\pi}(r|n,N)$ | 0.00 | 0.00 | 0.30 | 0.20 | 0.30 | 0.45 | 0.60 | 0.80 | 0.60 | 0.90 | 1.00 | 1.00 | 1.00
Figure 1: Data from typical Phase Transition experiment. Here $N=60$, $r=20$,
and the number $n$ of Gaussian measurements varies. Note: our formula predicts
an asymptotic phase transition at $\delta^{*}=0.6937$, corresponding to
$n=2497$. And, indeed, the success probability is close to $1/2$ at that $n$.
All runs involved $T=20$ Monte Carlo trials.
In this paper, we present a simple and explicit formula for the phase
transition curve in matrix recovery from Gaussian measurements. The formula
arises in an apparently unrelated problem: matrix de-noising in Gaussian
noise. In this problem, we again let $X_{0}$ denote an $M$ by $N$ matrix, and
we observe $Y=X_{0}+Z$, where $Z$ is Gaussian iid noise $Z_{ij}\sim{\cal
N}(0,1)$. We consider the following nuclear norm de-noising scheme:
$\displaystyle(P_{nuc,\lambda})\qquad\min\Big{\\{}\frac{1}{2}\|Y-X\|_{F}^{2}+\lambda\|X\|_{*}\Big{\\}}\,.\qquad(2)$
In this problem the measurements $Y$ are direct, so in some sense complete,
but noisy. The solution $\hat{X}_{\lambda}(Y)$ can be viewed as a shrinkage
estimator. In the basis of the singular vectors $U_{Y}$ and $V_{Y}$ of $Y$,
the solution $\hat{X}_{\lambda}(Y)$ is diagonal, and the diagonal entries are
produced by soft thresholding of the singular values of $Y$.
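In code, this shrinkage estimator is a few lines of Python; the sizes and the value of $\lambda$ below are illustrative assumptions of this sketch, not the minimax-optimal tuning discussed in the text.

```python
import numpy as np

def nuclear_norm_denoise(Y, lam):
    # Solve min_X 0.5*||Y - X||_F^2 + lam*||X||_* by soft-thresholding
    # the singular values of Y.
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return (U * np.maximum(s - lam, 0.0)) @ Vt

# Example: denoise a noisy rank-3 matrix (illustrative sizes and lambda).
rng = np.random.default_rng(0)
M, N, r = 60, 60, 3
X0 = rng.standard_normal((M, r)) @ rng.standard_normal((r, N))
Y = X0 + rng.standard_normal((M, N))                 # Z has i.i.d. N(0,1) entries
Xhat = nuclear_norm_denoise(Y, lam=2.0 * np.sqrt(N))
mse = np.mean((Xhat - X0) ** 2)                      # dimension-normalized MSE
```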
Because the measurements $y$ in the matrix recovery problem are noiseless but
incomplete, while the measurements $Y$ in the matrix denoising problem are
complete but noisy, the problems seem quite different. Nevertheless, we show
here that there is a deep connection between the two problems.
Let us quantify performance in the denoising problem by the minimax MSE,
namely
$\mathcal{M}(\rho;M,N)=\min_{\lambda}\max_{rank(X)\leq\rho
N}MSE(X_{0}\,,\,\hat{X}_{\lambda}(Y)),$
where MSE refers to the dimension-normalized mean-squared error
$\frac{1}{MN}E\|X-\hat{X}_{\lambda}\|_{F}^{2}$
and subscript $F$ denotes Frobenius norm. The asymptotic minimax MSE
$\mathcal{M}(\rho;\beta)=\lim_{N\rightarrow\infty}\mathcal{M}(\rho;\beta N,N)$
has been derived in [11]. Explicit formulas for the curve
$\rho\mapsto\mathcal{M}(\rho;\beta)$ appear in the Appendix. A parametric form
is given for the case of asymptotically square matrices, $\beta=1$. Figures 2
and 3 depict the various minimax MSE curves.
Figure 2: The two asymptotic minimax MSE’s $M(\rho|{\bf X})$: ${\bf X}={Mat}_{N}$ (black), ${\bf X}={Sym}_{N}$ (red), in the case of square matrices. For non-square matrices, see the curves in Figure 3.
Figure 3: Curves: asymptotic minimax MSE’s for non-square matrix settings $Mat_{M,N}$, with varying shape factor $\beta=M/N$. Points: empirical phase transition locations for matrix recovery from incomplete measurements; see Table 8.
We can now state our main hypothesis for matrix recovery from Gaussian
measurements.
Main Hypothesis: Asymptotic Phase Transition Formula. _Consider a sequence of
matrix recovery problems with parameters $\\{(r,n,M,N)\\}_{N\geq 1}$ having
limiting fractional rank $\rho=\lim_{N\to\infty}r/N$, limiting aspect ratio
$\beta=\lim_{N\to\infty}M/N$, and limiting incompleteness fraction
$\delta=\lim_{N\to\infty}n/(MN)$. In the limit of large problem size $N$, the
solution $X_{1}(y)$ to the nuclear norm minimization problem $(P_{nuc})$ is
correct with probability converging to one if $\delta>{\cal M}(\rho;\beta)$
and incorrect with probability converging to one if $\delta<{\cal
M}(\rho;\beta)$. _ In short: _The asymptotic phase transition
$\delta^{*}(\rho,\beta)$ in Gaussian matrix recovery is equal to the
asymptotic minimax MSE ${\cal M}(\rho;\beta)$._
In particular, for the case of small rank $r$, by studying the small $\rho$
asymptotics of Eq. (14), we obtain that reconstruction is possible from $n\geq
2r(M+N+\sqrt{MN})(1+O(r/N))$ measurements, but not from substantially less.
This brief announcement tests this hypothesis by conducting a substantial
computer experiment generating large numbers of random problem instances. We
use statistical methods to check for disagreement between the hypothesis and
the predicted phase transition. To bolster the solidity of our results, we
conduct the experiment in two different settings: $(1)$ the matrix $X_{0}$ is
a general $M$ by $N$ matrix, for various rank fractions $\rho$ and aspect
ratios $\beta$; $(2)$ $X_{0}$ is a symmetric positive semidefinite matrix, for various rank fractions $\rho$. In the latter case the positive semidefinite
constraint is added to the convex program $(P_{nuc})$. As described below,
there are different asymptotic MSE curves for the two settings. We demonstrate
an empirically accurate match in each of the cases, showing the depth and
significance of the connection we expose here.
In the discussion and conclusion we connect our result with related work in
the field of sparsity-promoting reconstructions, where the same formal
identity between a minimax MSE and a phase transition boundary has been
observed, and in some cases even proved. We also discuss recent rigorous
evidence towards establishing the above matrix recovery phase transition
formula.
## 2 Methods
We investigated the hypothesis that the asymptotic phase transition boundary
agrees with the proposed phase transition formula to within experimental
error.
For notational simplicity we will focus here on the case $M=N$, and defer the
case of non-square matrices to the SI. Hence, we will drop throughout the main
text the argument $\beta=1$. The asymptotic phase plane at point
$(\rho,\delta)$ is associated to triples $(r,n,N)$, where $\rho=r/N\in[0,1]$
is the rank fraction, and $\delta=\delta(n,N|{\bf X})=n/dim({\bf X})$ is the
under sampling ratio, where $dim({\bf X})$ is the dimension of the underlying
collection of matrices ${\bf X}$. We performed a sequence of experiments, one
for each tuple, in which we generated random rank-$r$ $N$ by $N$ matrices
$X_{0}\in{\bf X}$, random measurement matrices ${\cal A}=A$ of size $n\times
N^{2}$, and obtained random problem instances $(y,A)$. We then applied a
convex optimization procedure, obtaining a putative reconstruction
$\hat{X}=\hat{X}(y,A)$. We declared a reconstruction successful when the
Frobenius norm was smaller than a threshold. Our raw empirical observations
consist of a count of empirical successes and sample sizes at a selected set
of positions $(\rho,\delta)$ and a selected set of problem sizes $N$. From
these raw counts we produce fitted success probabilities $\hat{\pi}(r|n,N,{\bf
X})$, The finite-$N$ phase transition is the place where the true underlying
probability of successful reconstruction take the value $50$%. We tested the
hypothesis that the finite-$N$ transition was consistent with the proposed
asymptotic phase transition formula.
This section discusses details of data generation and data analysis.
### 2.1 Generation of Problem Instances
Each problem instance $(y,A)$ was generated by, first, generating a random
rank $r$ matrix $X_{0}$, then, generating a random measurement matrix
$A=A_{n,N^{2}}$ and then applying $y=A\cdot vec(X_{0})$.
We considered problem instances of two specific types, corresponding to
matrices $X_{0}\in{\bf X}$, with ${\bf X}$ one of two classes of matrices
* •
${Mat}_{N}$: all $N\times N$ matrices with real-valued entries
* •
${Sym}_{N}$: all $N\times N$ real symmetric matrices which are nonnegative-
semidefinite
In the case ${\bf X}={Mat}_{N}$, we consider low-rank matrices
$X_{0}=UV^{\prime}$ where $U$ and $V$ are each $N$ by $r$ partial orthogonal
matrices in the Stiefel manifold $St(N,r)$. The matrices are uniformly
distributed on $St(N,r)$. In the case ${\bf X}={Sym}_{N}$, we consider low-
rank matrices $X_{0}=UU^{\prime}$ where $U$ is an $N$ by $r$ partial
orthogonal matrix in $St(N,r)$, and again the matrix is uniformly distributed.
For measurement matrices $A$, we use Gaussian random matrices satisfying
$A_{i,j}\sim N(0,1/n)$.
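A minimal Python sketch of this construction follows; it draws a uniform point of the Stiefel manifold as the Q factor of a Gaussian matrix (with a sign correction), and the helper names are ours, not identifiers from the paper's deposited code.

```python
import numpy as np

def random_stiefel(N, r, rng):
    # Uniformly distributed N-by-r partial orthogonal matrix.
    Q, R = np.linalg.qr(rng.standard_normal((N, r)))
    return Q * np.sign(np.diag(R))       # sign fix so Q is uniform on St(N, r)

def problem_instance(N, r, n, symmetric=False, rng=None):
    # One random instance (y, A, X0) as described in Section 2.1 (sketch).
    rng = np.random.default_rng() if rng is None else rng
    U = random_stiefel(N, r, rng)
    if symmetric:                         # X0 in Sym_N: X0 = U U'
        X0 = U @ U.T
    else:                                 # X0 in Mat_N: X0 = U V'
        X0 = U @ random_stiefel(N, r, rng).T
    A = rng.standard_normal((n, N * N)) / np.sqrt(n)   # A_ij ~ N(0, 1/n)
    y = A @ X0.ravel()
    return y, A, X0
```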
### 2.2 Convex Optimization
For a given problem instance $(y,A)$, we attempt to recover the underlying
low-rank object $X_{0}$ from the measurements $y$ by convex optimization. Each
of our choices ${\bf X}$ gives rise to an associated optimization problem:
$(P^{\bf X}_{nuc})\qquad\text{min }||X||_{*}\text{ subject to }y=A\cdot
vec(X),\qquad X\in{\bf X}.$
Here ${\bf X}$ is one of these two classes of matrices ${Mat}_{N}$ or
${Sym}_{N}$. The two resulting optimization problems can each be reduced to a
so-called semidefinite programming problem; see [13, 12].
### 2.3 Probability of Exact Recovery
Since both the measurement matrix $A$, and the underlying low-rank object
$X_{0}$ are random, $(y,A)$ is a random instance for $(P^{\bf X}_{nuc})$. The
probability of exact recovery is defined by
$\pi(r|n,N,{\bf X})=\text{Prob}\\{X_{0}\mbox{ is the unique solution of
}(P^{\bf X}_{nuc})\\}.$
Clearly $0\leq\pi\leq 1$; for fixed $N$, $\pi$ is monotone decreasing in $r$
and monotone increasing in $n$. Also
$\pi(r|n,N,{Mat}_{N})<\pi(r|n,N,{Sym}_{N})$.
### 2.4 Estimating the Probability of Exact Recovery
Our procedure follows [14, 15]. For a given matrix type ${\bf X}$ and rank $r$
we conduct an experiment whose purpose is to estimate $\pi(r|n,N,{\bf X})$
using $T$ Monte Carlo trials. In each trial we generate a random instance
$(y,A)$ which we supply to a solver for $(P^{\bf X}_{(nuc)})$, obtaining the
result $X_{1}$. We compare the result $X_{1}$ to $X_{0}$. If the relative
error $\|X_{0}-X_{1}\|_{F}/\|X_{0}\|_{F}$ is smaller than a numerical
tolerance, we declare the recovery a success; if not, we declare it a failure.
(In this paper, we used an error tolerance of $0.001$.) We thus obtain $T$
binary measurements $Y_{i}$ indicating success or failure in reconstruction.
The empirical success fraction is then calculated as
$\hat{\pi}(r|n,N,T,{\bf
X})=\frac{\\#\\{\text{successes}\\}}{\\#\\{\text{trials}\\}}=\frac{1}{T}\sum_{i=1}^{T}Y_{i}\,.$
These are the raw observations generated by our experiments.
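In code, the estimate takes roughly the following form; here `solve_nnm` stands in for whichever solver of $(P^{\bf X}_{nuc})$ is used and `problem_instance` for a generator such as the one sketched in Section 2.1 — both names are placeholders of this sketch, not functions from the deposited code.

```python
import numpy as np

def success_fraction(N, r, n, T, solve_nnm, problem_instance, tol=1e-3, rng=None):
    # Monte Carlo estimate of pi(r | n, N, X) from T trials (sketch).
    rng = np.random.default_rng() if rng is None else rng
    successes = 0
    for _ in range(T):
        y, A, X0 = problem_instance(N, r, n, rng=rng)
        X1 = solve_nnm(y, A, (N, N))                   # putative reconstruction
        rel_err = np.linalg.norm(X1 - X0, "fro") / np.linalg.norm(X0, "fro")
        successes += rel_err < tol                     # error tolerance 0.001, as above
    return successes / T
```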
### 2.5 Asymptotic Phase Transition Hypothesis
Consider a sequence of tuples $(r,n,N)$ with $r/N\rightarrow\rho$ and
$n/N\rightarrow\delta$. We assume that there is an asymptotic phase transition
curve $\delta^{*}(\rho|{\bf X})$, i.e. a curve obeying
$\pi(r|{n,N},{\bf X})\rightarrow\left\\{\begin{array}[]{ll}1&\delta>\delta^{*}(\rho|{\bf X})\\\ 0&\delta<\delta^{*}(\rho|{\bf X})\end{array}\right.$ (3)
For many convex optimization problems the existence of such an asymptotic
phase transition is rigorously proven; see the Discussion below.
The hypothesis we investigate concerns the value of $\delta^{*}(\rho|{\bf
X})$; specifically, whether
$\delta^{*}(\rho|{\bf X})={\cal M}(\rho|{\bf X}).$ (4)
Here ${\cal M}(\rho|{Mat})$ (respectively ${\cal M}(\rho|{Sym})$ ) is the
minimax MSE for SVT for general matrices (respectively, positive definite
ones). Formulas for ${\cal M}$ were derived by the Authors in [11];
computational details are provided in the Appendix.
### 2.6 Empirical Phase Transitions
The empirical phase transition point is estimated by fitting a smooth function
$\hat{\pi}(n/N)$ (in fact a logistic function) to the empirical data
$\hat{\pi}(r|n,N,{\bf X})$ using the glm() command in the R statistics
language. In fact we fit the logistic model that
$\mbox{logit}(\pi)\equiv\log(\frac{\pi}{1-\pi})=a+b\Delta$, where
$\Delta(\delta|\rho)=\delta-M(\rho)$ is the offset between $\delta$ and the
predicted phase transition. The coefficients $a$ and $b$ are called the
intercept and slope, and will be tabulated below. The intercept gives the
predicted logit exactly at $\Delta=0$, i.e. $\delta={\cal M}(\rho)$. The
empirical phase transition is located at $\hat{\delta}(r,N,T,{\bf X})={\cal M}(\rho)-a/b$. This is the value of $\delta=\delta(n,N|{\bf X})$ solving $\hat{\pi}(\delta)=1/2.$
Under the hypothesis (4) we have
$\lim_{N\rightarrow\infty,r/N\rightarrow\rho}\lim_{T\rightarrow\infty}\hat{\delta}(r,N,T,{\bf
X})={\cal M}(\rho|{\bf X}).$
Consequently, in data analysis we will compare the fitted values
$\hat{\delta}(r,N,T,{\bf X})$ with ${\cal M}(r/N|{\bf X})$.
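The same fit can be reproduced, for instance, with the statsmodels package in Python; statsmodels and the function below are assumptions of this sketch (the analysis in the paper was done with R's glm()).

```python
import numpy as np
import statsmodels.api as sm

def fit_empirical_transition(delta, successes, trials, M_rho):
    # Fit logit(pi) = a + b*(delta - M(rho)) and return the empirical
    # phase transition delta-hat = M(rho) - a/b (sketch).
    offset = np.asarray(delta) - M_rho                 # Delta = delta - M(rho)
    X = sm.add_constant(offset)                        # columns: intercept, slope
    y = np.column_stack([successes, np.asarray(trials) - np.asarray(successes)])
    fit = sm.GLM(y, X, family=sm.families.Binomial()).fit()
    a, b = fit.params                                  # intercept a, slope b
    return M_rho - a / b
```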
### 2.7 Experimental Design
To address our main hypothesis regarding the agreement of phase transition
boundaries, we measure $\hat{\pi}$ at points $\delta=n/N$ and $\rho=r/N$ in
the phase plane $(\delta,\rho)$ which we expect to be maximally informative
about the location of the phase transition. In fact the informative locations
in binomial response models correspond to points where the probability of
response is in the middle range $(1/10,9/10)$ [16]. As a rough approximation
to such an optimal design, we sample at equispaced $\delta\in[{\cal
M}(\rho|{\bf X})-0.05,{\cal M}(\rho|{\bf X})+0.05]$.
### 2.8 Computing
We primarily used the MATLAB computing environment and the popular CVX convex optimization package [17], a modeling system for disciplined convex programming by Boyd, Grant and others, which supports two open-source interior-point solvers, SeDuMi and SDPT3 [19, 18].
We also studied the robustness of our results across solvers. Zulfikar Ahmed
translated our code into Python and used the general purpose solver package CVXOPT by Andersen and Vandenberghe [20].
## 3 Results
Our experimental data have been deposited at [21]; they are contained in a text file with more than 100,000 lines, each line reporting one batch of Monte
Carlo experiments at a given $r,n,M,N$ and ${\bf X}$. Each line also documents
the number of Monte Carlo trials $T$, and the observed success fraction
$\hat{\pi}$. The file also contains metadata identifying the solver and the
researcher responsible for the run.
In all cases, we observed a transition from no observed successes at
$\delta={\cal M}(\rho)-0.05$ to no observed failures at $\delta={\cal
M}(\rho)+0.05$. Figure 3 shows results we obtained at non square matrix
ensembles, with varying $\beta=M/N$. The minimax MSE curves $M(\rho|\beta)$
vary widely, but the observed PT’s track the curves closely.
$\rho$ | $N$ | $T$ | ${\cal M}(\rho)$ | $\hat{\delta}(\rho)$ | $a$ | $b$ | $Z$
---|---|---|---|---|---|---|---
1/10 | 40 | 400 | 0.351 | 0.352 | -0.128 | 182.978 | -0.581
1/10 | 50 | 400 | 0.351 | 0.350 | 0.282 | 200.131 | 1.213
1/10 | 60 | 400 | 0.351 | 0.352 | -0.096 | 221.212 | -0.398
1/10 | 80 | 400 | 0.351 | 0.350 | 0.415 | 295.049 | 1.452
1/10 | 100 | 400 | 0.351 | 0.349 | 0.641 | 383.493 | 1.900
Figure 4: Results with ${\bf X}={Mat}_{N}$, with $\rho=1/10$ and varying $N$. $T$: total number of reconstructions; $a$, $b$: fitted logistic regression parameters; $Z$: traditional Z-score of logistic regression intercept. See Table 6 for the complete table.
Figure 4 shows a small subset of our results in the square case ${\bf
X}={Mat}_{N}$, to explain our empirical results; the full tables are given in
SI for the square, non square and symmetric positive definite cases. In the
square case, the empirical phase transition agrees in all cases with the
formula ${\cal M}(\rho)$ to two digits accuracy. Table 7 shows that, in the
symmetric nonnegative-definite case ${\bf X}={Sym}_{N}$, the empirical phase
transition falls within $[{\cal M}(\rho)-0.01,{\cal M}(\rho)+0.01]$.
Previous empirical studies of phase transition behavior in sparse recovery
show that, even in cases where the asymptotic phase transition curve is
rigorously proven and analytically available, such large-$N$ theory cannot be
expected to match empirical finite-$N$ data to within the usual naive standard
errors [14, 15]. Instead, one observes a finite transition zone of width
$\approx c_{1}/N^{1/2}$ and a small displacement of the empirical phase
transition away from the asymptotic Gaussian phase transition, of size
$\approx c_{2}/N$. Hence, the strict literal device of testing the hypothesis that $E\hat{\delta}={\cal M}(\rho)$ is not appropriate in this setting. (Footnote: as shown in Figure 4, and in Tables 6, 7, 8, 9, and 10, our results in many cases do generate $Z$ scores for the intercept term in the logistic regression which are consistent with traditional acceptance of this hypothesis. However, traditional acceptance is not needed in order for the main hypothesis to be valid, and, because of the finite-$N$ scaling effects indicated above, it would not ordinarily be expected to hold.)
A precise statement of our hypothesis uses the fitted logistic parameters $\hat{a}=\hat{a}(r,N,T,{\bf X})$ and $\hat{b}=\hat{b}(r,N,T,{\bf X})$, defined above. The asymptotic relation (where $=_{P}$ denotes convergence in probability)
$\lim_{N\rightarrow\infty}\lim_{T\rightarrow\infty}\frac{\hat{a}(r,N,T,{\bf X})}{\hat{b}(r,N,T,{\bf X})}=_{P}0,\qquad r=\lfloor\rho N\rfloor,$ (5)
implies $\delta^{*}(\rho|{\bf X})={\cal M}(\rho|{\bf X})$. Now note that in Figure 4 the coefficient $b$ scales directly with $N$ and takes the value
several hundred for large $N$. This means that, in these experiments, the
transition from complete failure to complete success happens in a zone of
width $<1/100$. Notice also that $a$ stays bounded, for the most part between
$0$ and $2$. This means that the response probability evaluated exactly at
${\cal M}(\rho)$ obeys typically:
$\mbox{logit}(\hat{\pi}({\cal M}(\rho))\in[0,2],$
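To make the fitting procedure concrete, the following is a minimal sketch, not the authors' deposited code, of how such a logistic regression can be fit to batched Monte Carlo outcomes. It assumes the model $\mathrm{logit}(\pi(\delta))=a+b(\delta-{\cal M}(\rho))$ with binomial counts; the function name and argument layout are ours.

```python
import numpy as np

def fit_logistic_pt(delta, successes, trials, M_rho, n_iter=100, tol=1e-10):
    """Fit logit(pi) = a + b*(delta - M_rho) to binomial Monte Carlo counts by
    Newton-Raphson (IRLS).  Returns (a, b, Z, delta_hat), where Z is the usual
    Z-score of the intercept a, and delta_hat = M_rho - a/b is the fitted
    location at which the success probability equals 1/2."""
    delta = np.asarray(delta, dtype=float)
    S = np.asarray(successes, dtype=float)      # successes per batch
    T = np.asarray(trials, dtype=float)         # Monte Carlo trials per batch
    X = np.column_stack([np.ones_like(delta), delta - M_rho])
    beta = np.zeros(2)
    for _ in range(n_iter):
        eta = X @ beta
        pi = 1.0 / (1.0 + np.exp(-eta))
        W = T * pi * (1.0 - pi)                 # binomial IRLS weights
        H = X.T @ (W[:, None] * X)              # Fisher information
        grad = X.T @ (S - T * pi)               # score vector
        step = np.linalg.solve(H, grad)
        beta = beta + step
        if np.max(np.abs(step)) < tol:
            break
    a, b = beta
    se_a = np.sqrt(np.linalg.inv(H)[0, 0])      # standard error of the intercept
    return a, b, a / se_a, M_rho - a / b
```

If the observed outcomes are perfectly separated in $\delta$, the maximum-likelihood slope does not exist; the iteration cap above then simply returns a very large $\hat{b}$.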
Figure 5: Finite-$N$ scaling effects. Fitted slopes $b$ from Tables 6 and 7
versus matrix sizes $N$. Left Panel: asymmetric square matrices ${\bf
X}={Mat}_{N}$. Right Panel: symmetric nonnegative-definite matrices ${\bf
X}={Sym}_{N}$. Superimposed lines have formulas $b=10+3N$. Note: in Left
Panel, data for $\rho\in\\{3/4,9/10\\}$ were excluded because the slopes $b$
were very large (700 and 2700, respectively).
Figure 5, Left Panel presents the fitted slopes $b$ against problem sizes $N$, as well
as an empirical fit. All the data came from Table 6, but we omitted results
for $\rho\in\{3/4,9/10\}$ because those slopes were large multiples of all
other slopes. Similarly, Figure 5, Right Panel presents the slopes from Table 7. In
each plot, the line $b=6+3N$ is overlaid for comparison. Both plots show a
clear tendency of the slopes to increase with $N$.
The combination of linear growth of $b$ with $N$ and non-growth of $a$
(in fact, even sublinear growth of $a$ suffices) implies Eq. (5), and hence
acceptance of our main hypothesis.
#### Non-Gaussian Measurements.
This paper studies matrix recovery from Gaussian measurements of the unknown
matrix, and specifically does not study recovery from partial entrywise
measurements, typically called 'matrix completion'. Entrywise measurements
yield a phase transition at a different location [9]. Our conclusions do
extend to certain non-Gaussian measurements based on ${\cal A}$ with
independent and identically distributed entries that are equiprobable $\pm 1$
(a.k.a. Rademacher matrices). See Table 10.
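For concreteness, one Monte Carlo trial of the kind tabulated here can be sketched as follows. This is an illustrative reconstruction in Python/cvxpy, not the deposited Matlab/CVX or Python/CVXOPT code; the function name, the use of cvxpy, and the success tolerance are our choices, and the `rademacher` flag switches between Gaussian and $\pm 1$ measurement matrices.

```python
import numpy as np
import cvxpy as cp

def one_trial(M_dim, N_dim, r, n, rng, rademacher=False, tol=1e-3):
    """Draw a rank-r M x N matrix X0, take n noiseless linear measurements
    <A_i, X0> with i.i.d. Gaussian (or Rademacher +/-1) matrices A_i,
    reconstruct by nuclear-norm minimization, and report success
    (relative Frobenius error below tol)."""
    X0 = rng.standard_normal((M_dim, r)) @ rng.standard_normal((r, N_dim))
    if rademacher:
        A = [rng.choice([-1.0, 1.0], size=(M_dim, N_dim)) for _ in range(n)]
    else:
        A = [rng.standard_normal((M_dim, N_dim)) for _ in range(n)]
    y = [float(np.sum(Ai * X0)) for Ai in A]    # measurements <A_i, X0>
    X = cp.Variable((M_dim, N_dim))
    constraints = [cp.sum(cp.multiply(Ai, X)) == yi for Ai, yi in zip(A, y)]
    cp.Problem(cp.Minimize(cp.normNuc(X)), constraints).solve()
    err = np.linalg.norm(X.value - X0, "fro") / np.linalg.norm(X0, "fro")
    return int(err < tol)

# Example: N = 12, rank 4, undersampling delta = n/N^2 near the predicted transition.
rng = np.random.default_rng(0)
print(one_trial(12, 12, 4, int(0.77 * 12 * 12), rng))
```

For the symmetric nonnegative-definite case one would additionally restrict the variable, e.g. via `cp.Variable((N_dim, N_dim), PSD=True)`.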
## 4 Discussion: Existing Literature and Our Contributions
Phase transitions in the success of convex optimization at solving non-convex
problems have been observed previously. Donoho and Tanner considered linear
programs useful in the recovery of sparse vectors, rigorously established the
existence of asymptotic phase transitions, and derived exact formulas for the
asymptotic phase transition [25, 22, 14, 23, 24] as well as finite $N$
exponential bounds. Their work considered the recovery of $k$-sparse vectors
with $N$ entries from $n$ Gaussian measurements. The phase diagram in those
papers could be stated in terms of variables $(\kappa,\delta)$, where, as in
this paper, $\delta=n/N$ is the undersampling fraction, and $\kappa=k/N$ is
the sparsity fraction, which plays a role analogous to the role played here by
the rank fraction $\rho$. The proofs in those papers were obtained by
techniques from high-dimensional combinatorial geometry, and the formulas for
the asymptotic phase transition were implicit, written in terms of special
functions.
Donoho, Maleki, and Montanari [26] later developed a new so-called Approximate
Message Passing (AMP) approach to the sparse recovery problem, which gave new
formulas for phase boundaries, confirmed rigorously by Bayati and Montanari in
[27, 28]. While the previous formulas involved combinatorial geometry, the new
(necessarily equivalent) ones involved instead minimax decision theory. An
extensive literature on AMP algorithms has since developed; see, e.g., [30, 29],
implying, among other things, universality in the sparse recovery phase
transition [31].
Donoho, Johnstone and Montanari [32] generalized previous AMP-based phase
transition results to block-sparse, monotone, and piecewise constant signals.
They provided evidence that the observed phase transition
$\delta^{*}(\,\cdot\,)$ of the associated convex optimization reconstruction
methods obeyed formulas of the form
$\delta^{*}(\kappa)={\cal M}(\kappa),\;\;0<\kappa<1.$ (6)
Here $\kappa$ is a variable measuring generalized sparsity and ${\cal
M}(\kappa)$ the minimax MSE of an appropriate denoiser based on direct noisy
measurements. The main result in this brief report fits in this general
framework whereby the sparsity $\kappa$ is identified with the fractional rank
$\rho$, and the minimax MSE symbol ${\cal M}$ applies to the singular value
thresholding denoiser. Our main result is therefore an extension of DJM-style
formulas from the sparse recovery setting to the low-rank recovery setting.
Much mathematical study of matrix recovery [1, 8, 3, 33] has focused on
providing rigorous bounds which show the existence of a region of success,
without however establishing a phase transition phenomenon, or determining its
exact boundary. A relatively accurate result by Candès and Recht (CR) for the
case of Gaussian measurements [34] implies that $n\geq 3r(M+N)[1+O(r/N)]$
measurements are sufficient. Our formulas for the square case $M=N$ show that
${\cal M}(\rho|Mat_{N})\sim 6\rho,\qquad\rho\rightarrow 0.$
This agrees with the CR bound in the very low-rank case. However, our relation
$\delta^{*}(\rho)={\cal M}(\rho)$ is apparently noticeably more accurate than
the CR formula at finite $N$. Table 9 presents experiments where the rank is
fixed at $r=1$,$2$,$3$, or $4$, and $N$ varies between $40$ and $90$. Even
though in such cases the corresponding $\rho=r/N$ is rather small, for example
$1/90$ in the case $r=1$ and $N=90$, the empirical PT in our experiments
agrees much more closely with the nonlinear formula ${\cal M}(\rho)$ than it
does with the linear formula $6\rho$. Also, in the non-square case $\beta\neq
1$, ${\cal M}\sim 2\rho(1+\beta+\sqrt{\beta})$, which is strictly smaller than
$6\rho$ for $\beta<1$, so the CR formula is noticeably less accurate than the
$\delta^{*}={\cal M}$ formula in the non-square case.
Of the mathematical methods developed to identify phase transitions but which
are not based on combinatorial geometry or approximate message passing, the
most precise are based on the ‘Escape Through the Mesh’ (ETM) technique of
Yoram Gordon [35, 10]. ETM was used to prove upper bounds on the number of
Gaussian measurements needed for reconstruction of sparse signals by Stojnic
[35] and for low-rank matrices by Oymak and Hassibi [10]. In particular, [10]
studies one of the two cases studied here and observes in passing that in the
square case, ETM gives bounds that seemingly agree with actual phase
transition measurements. Very recently, building on the same approach, a DJM-
style inequality $\delta^{*}(\kappa)\leq{\cal M}(\kappa)$ has been announced
by Oymak and Hassibi for a wide range of convex optimization problems in [36],
including nuclear norm minimization; [36] also presented empirical evidence
for $\delta^{*}(\rho)={\cal M}(\rho)$ in the square case ${\bf X}={Mat}_{N}$.
Our Contributions. This paper presents explicit formulas for the minimax MSE
of singular value thresholding in various cases, and shows that the new
formulas match the appropriate empirical phase transitions in a formal
comparison. Compared to earlier work, we make here the following
contributions:
* •
A Broad Range of Phase Transition Studies, for non-square, square, and
symmetric nonnegative-definite matrix recovery from Gaussian measurements. We
also made certain non-Gaussian measurements and observed similar results.
* •
A Broad Range of Prediction Formulas. We make available explicit formulas for
the minimax MSE in the square case, whether symmetric nonnegative-definite or
asymmetric, as well as in non-square cases.
* •
Careful Empirical Technique, including the following:
* Reproducibility. Publication of the code and data underlying our conclusions.
* Validation. Matlab/CVX results were re-implemented in Python/CVXOPT, with similar results. Code was executed on 3 different computing clusters, again with similar results.
* Study of Finite-$N$-scaling. We studied tendencies of $a$,$b$ as $N$ varies, and observed behavior consistent with our asymptotic phase transition hypothesis. Studies at a single $N$ could only have shown that an empirical phase transition was near to a given theoretical curve, at a given $N$, but not shown the scaling behavior with $N$ that the main hypothesis properly involves.
## 5 Conclusions
For the problem of matrix recovery from Gaussian measurements, our
experiments, as well as those of others, document the existence of a
finite-$N$ phase transition. We compared our measured empirical phase
transition curve with a formula from the theory of matrix denoising and
observed a compelling match. Although the matrix recovery and matrix denoising
problems are superficially different, this match points to a deeper
connection, whereby the mean-squared-error properties of a denoiser in a
noise-removal problem determine precisely the exact-recovery properties of a
matrix recovery rule in a noiseless, but incomplete-data, problem.
This connection suggests both new limits on what is possible in the matrix
recovery problem and new ways of trying to reach those limits.
## Acknowledgments
Thanks to Iain Johnstone for advice at several crucial points. This work was
partially supported by NSF DMS 0906812 (ARRA). MG was supported by a William
R. and Sara Hart Kimball Stanford Graduate Fellowship and a Technion EE Sohnis
Promising Scientist Award. AM was partially supported by the NSF CAREER award
CCF-0743978 and the grant AFOSR/DARPA FA9550-12-1-0411.
## References
* Candès and Recht [2008] Emmanuel J Candès and Benjamin Recht. Exact Matrix Completion via Convex Optimization. _Foundations of Computational Mathematics_ , 9(6):717–772, 2008. URL http://arxiv.org/abs/0805.4471.
* Gross [2011] David Gross. Recovering Low-Rank Matrices From Few Coefficients in Any Basis. _IEEE Transactions on Information Theory_ , 57(3):1548–1566, March 2011. ISSN 0018-9448. doi: 10.1109/TIT.2011.2104999. URL http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=5714248.
* Recht et al. [2010a] Benjamin Recht, Maryam Fazel, and PA Parrilo. Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization. _SIAM review_ , 52(3):471–501, 2010a. URL http://epubs.siam.org/doi/abs/10.1137/070697835.
* Candès et al. [2009] Emmanuel J Candès, Xiaodong Li, Yi Ma, and John Wright. Robust Principal Component Analysis? 2009. URL http://arxiv.org/pdf/0912.3599.
* Gross et al. [2010] David Gross, Yi-Kai Liu, Steven Flammia, Stephen Becker, and Jens Eisert. Quantum State Tomography via Compressed Sensing. _Physical Review Letters_ , 105(15):1–4, October 2010. ISSN 0031-9007. doi: 10.1103/PhysRevLett.105.150401. URL http://link.aps.org/doi/10.1103/PhysRevLett.105.150401.
* Keshavan et al. [2010] RH Keshavan, Andrea Montanari, and S Oh. Matrix completion from noisy entries. _Journal of Machine Learning Research_ , 11:2057–2078, 2010. URL http://dl.acm.org/citation.cfm?id=1859890.1859920.
* Recht et al. [2008] Benjamin Recht, W Xu, and Babak Hassibi. Necessary and sufficient conditions for success of the nuclear norm heuristic for rank minimization. In _Proceedings of the 47th IEEE conference on Decision and Control Cancun, Mexico_ , 2008. URL http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=4739332.
* Recht et al. [2010b] Benjamin Recht, Weiyu Xu, and Babak Hassibi. Null space conditions and thresholds for rank minimization. _Mathematical Programming_ , 127(1):175–202, October 2010b. ISSN 0025-5610. doi: 10.1007/s10107-010-0422-2. URL http://www.springerlink.com/index/10.1007/s10107-010-0422-2.
* Tanner and Wei [2012] Jared Tanner and KE Wei. Normalized iterative hard thresholding for matrix completion. 2012. URL http://people.maths.ox.ac.uk/tanner/papers/TaWei_NIHT.pdf.
* Oymak and Hassibi [2010] Samet Oymak and Babak Hassibi. New Null Space Results and Recovery Thresholds for Matrix Rank Minimization. page 28, November 2010. URL http://arxiv.org/abs/1011.6326.
* Donoho and Gavish [2013] David L. Donoho and Matan Gavish. Minimax Risk of Matrix Denoising by Singular Value Thresholding. Technical report, Stanford University Department of Statistics, 2013.
* Fazel [2002] Maryam Fazel. _Matrix rank minimization with applications_. PhD thesis, Stanford University, 2002. URL http://www.inatel.br/docentes/dayan/TP524/Artigos/MatrixRankMinimizationwithApplications.pdf.
* Fazel et al. [2001] Maryam Fazel, H Hindi, and Stephen P. Boyd. A rank minimization heuristic with application to minimum order system approximation. In _American Control Conference, 2001_ , pages 4734–4739, 2001. URL http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=945730.
* Donoho and Tanner [2009a] David L. Donoho and Jared Tanner. Observed universality of phase transitions in high-dimensional geometry, with implications for modern data analysis and signal processing. _Philosophical transactions of the Royal Society, Series A_ , 367(1906):4273–93, November 2009a. ISSN 1364-503X. doi: 10.1098/rsta.2009.0152. URL http://www.ncbi.nlm.nih.gov/pubmed/19805445.
* Monajemi et al. [2013] Hatef Monajemi, Sina Jafarpour, Matan Gavish, and David L. Donoho. Deterministic matrices matching the compressed sensing phase transitions of Gaussian random matrices. _Proceedings of the National Academy of Sciences of the United States of America_ , 110(4):1181–6, January 2013. ISSN 1091-6490. doi: 10.1073/pnas.1219540110. URL http://www.pnas.org/cgi/content/long/110/4/1181.
* Kalish [1990] Leslie A. Kalish. Efficient design for estimation of median lethal dose and quantal dose-response curves. _Biometrics_ , 46(3):737–48, September 1990. ISSN 0006-341X. URL http://www.ncbi.nlm.nih.gov/pubmed/2242412.
* Grant and Boyd [2010] M. Grant and Stephen P. Boyd. CVX: Matlab Software for Disciplined Convex Programming, 2010. URL http://cvxr.com/cvx.
* Sturm [1999] JF Sturm. Using SeDuMi 1.02, a MATLAB toolbox for optimization over symmetric cones. _Optimization methods and software_ , 1999. URL http://www.tandfonline.com/doi/abs/10.1080/10556789908805766.
* Toh et al. [1999] K.C. Toh, M.J. Todd, and R.H. Tutuncu. SDPT3 – a Matlab software package for semidefinite programming. _Optimization Methods and Software_ , 1999.
* Andersen et al. [2012] D. Andersen, J. Dahl, and L. Vandenberghe. CVXOPT: Python Software for Convex Optimization, 2012. URL http://abel.ee.ucla.edu/cvxopt.
* Donoho et al. [2013] David L. Donoho, Matan Gavish, and Andrea Montanari. Data for the article "The Phase Transition of Matrix Recovery from Gaussian Measurements Matches the Minimax MSE of Matrix Denoising", 2013. URL http://purl.stanford.edu/tz124hw0000. Accessed 15 February 2013.
* Donoho [2005] David L. Donoho. High-Dimensional Centrally Symmetric Polytopes with Neighborliness Proportional to Dimension. _Discrete & Computational Geometry_, 35(4):617–652, December 2005. ISSN 0179-5376. doi: 10.1007/s00454-005-1220-0. URL http://www.springerlink.com/index/10.1007/s00454-005-1220-0.
* Donoho and Tanner [2009b] David L. Donoho and Jared Tanner. Counting faces of randomly projected polytopes when the projection radically lowers dimension. _Journal of the American Mathematical Society_ , 22(1):1–53, 2009b.
* Donoho and Tanner [2010] David L. Donoho and Jared Tanner. Precise Undersampling Theorems. _Proceedings of the IEEE_ , 98(6):913–924, June 2010. ISSN 0018-9219. doi: 10.1109/JPROC.2010.2045630. URL http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=5458001.
* Donoho and Tanner [2005] David L. Donoho and Jared Tanner. Neighborliness of randomly projected simplices in high dimensions. _Proceedings of the National Academy of Sciences of the United States of America_ , 102(27):9452–7, July 2005. ISSN 0027-8424. doi: 10.1073/pnas.0502258102. URL http://www.pnas.org/cgi/content/long/102/27/9452.
* Donoho et al. [2009] David L. Donoho, Arian Maleki, and Andrea Montanari. Message-passing algorithms for compressed sensing. _Proceedings of the National Academy of Sciences of the United States of America_ , 106(45):18914–9, November 2009. ISSN 1091-6490. doi: 10.1073/pnas.0909892106. URL http://www.pnas.org/cgi/content/long/106/45/18914.
* Bayati and Montanari [2011] Mohsen Bayati and Andrea Montanari. The Dynamics of Message Passing on Dense Graphs, with Applications to Compressed Sensing. _IEEE Transactions on Information Theory_ , 57(2):764–785, February 2011. ISSN 0018-9448. doi: 10.1109/TIT.2010.2094817. URL http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=5695122.
* Bayati and Montanari [2012] Mohsen Bayati and Andrea Montanari. The LASSO Risk for Gaussian Matrices. _IEEE Transactions on Information Theory_ , 58(4):1997–2017, April 2012. ISSN 0018-9448. doi: 10.1109/TIT.2011.2174612. URL http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=6069859.
* Rangan [2011] Sundeep Rangan. Generalized approximate message passing for estimation with random linear mixing. In _2011 IEEE International Symposium on Information Theory Proceedings_ , pages 2168–2172. Ieee, July 2011. ISBN 978-1-4577-0596-0. doi: 10.1109/ISIT.2011.6033942. URL http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=6033942.
* Vila and Schniter [2011] Jeremy Vila and Philip Schniter. Expectation-maximization Bernoulli-Gaussian approximate message passing. In _Record of the Forty Fifth Asilomar Conference on Signals, Systems and Computers (ASILOMAR), 2011_ , number July, pages 799–803, 2011. ISBN 9781467303231. URL http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=6190117.
* Bayati et al. [2012] Mohsen Bayati, Marc Lelarge, and Andrea Montanari. Universality in Polytope Phase Transitions and Message Passing Algorithms. July 2012. URL http://arxiv.org/abs/1207.7321.
* Donoho et al. [2011] David L. Donoho, Iain M Johnstone, and Andrea Montanari. Accurate Prediction of Phase Transitions in Compressed Sensing via a Connection to Minimax Denoising. 2011. URL http://arxiv.org/abs/1111.1041v2.
* Oymak et al. [2011] Samet Oymak, Karthik Mohan, Maryam Fazel, and Babak Hassibi. A simplified approach to recovery conditions for low rank matrices. _2011 IEEE International Symposium on Information Theory Proceedings_ , pages 2318–2322, July 2011. doi: 10.1109/ISIT.2011.6033976. URL http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=6033976.
* Candès and Recht [2011] Emmanuel J Candès and Benjamin Recht. Simple bounds for recovering low-complexity models. _Mathematical Programming_ , (June 2011):1–11, 2011. URL http://www.springerlink.com/index/655751h7568x3621.pdf.
* Stojnic [2009] Mihailo Stojnic. Various thresholds for l1-optimization in compressed sensing. July 2009. URL http://arxiv.org/abs/0907.3666.
* Oymak and Hassibi [2012] Samet Oymak and Babak Hassibi. On a Relation between the Minimax Denoising and the Phase Transitions of Convex Functions, 2012. URL http://www.its.caltech.edu/~soymak/Relation.pdf.
* Marcenko and Pastur [1967] VA Marcenko and LA Pastur. Distribution of eigenvalues for some sets of random matrices. _Mathematics USSR Sbornik_ , 1(4):457–483, 1967. URL http://iopscience.iop.org/0025-5734/1/4/A01.
## Appendix A Asymptotic Minimax MSE Formula
The following provides explicit formulas for the matrix denoising minimax
curves $\mathcal{M}(\rho,\beta|Mat)$ and $\mathcal{M}(\rho|Sym)$ used above.
Please see [11] for the derivations. Computer programs that efficiently
calculate these quantities are provided in [21]. Let
$\displaystyle P_{\gamma}(x;k)=\frac{1}{2\pi\gamma}\int_{x}^{\gamma_{+}}t^{k-1}\sqrt{(\gamma_{+}-t)(t-\gamma_{-})}\,dt\,,$ (7)
where $\gamma_{\pm}=\left(1\pm\sqrt{\gamma}\right)^{2}$, denote the
complementary incomplete moments of the Marčenko–Pastur distribution [37].
Define
$\displaystyle\mathbf{M}_{\alpha}(\Lambda;\rho,\tilde{\rho})=\rho+\tilde{\rho}-\rho\tilde{\rho}+(1-\tilde{\rho})\Big[\rho\Lambda^{2}+\alpha(1-\rho)\Big(P_{\gamma}(\Lambda^{2};1)-2\Lambda P_{\gamma}(\Lambda^{2};\tfrac{1}{2})+\Lambda^{2}P_{\gamma}(\Lambda^{2};0)\Big)\Big]\,,$ (8)
with
$\gamma=\gamma(\rho,\tilde{\rho})=(\tilde{\rho}-\rho\tilde{\rho})/(\rho-\rho\tilde{\rho})$.
#### Case ${\bf X}={Mat}_{M,N}$.
The minimax curve is given by
$\mathcal{M}(\rho,\beta|{Mat})=\inf_{\Lambda}\mathbf{M}_{1}(\Lambda;\rho,\beta\rho).$
The following minimaxity interpretation is proved in [11]:
$\displaystyle\lim_{N\to\infty}\inf_{\lambda}\sup_{X_{0}\in{\mathbb{R}}^{\lfloor\beta N\rfloor\times N},\ \mathrm{rank}(X_{0})\leq\rho\beta N}MSE(\hat{X}_{\lambda},X_{0})=\mathcal{M}(\rho,\beta|{Mat})\,.$ (9)
#### Case ${\bf X}={Sym}_{N}$.
The minimax curve is given by
$\mathcal{M}(\rho|{Sym})=\inf_{\Lambda}\mathbf{M}_{1/2}(\Lambda;\rho,\rho).$
The following minimaxity interpretation is proved in [11]:
$\displaystyle\lim_{N\to\infty}\inf_{\lambda}\sup_{X_{0}\in S_{N},\ \mathrm{rank}(X_{0})\leq\rho N}MSE(\hat{X}_{\lambda},X_{0})=\mathcal{M}(\rho|{Sym})\,.$ (10)
#### Computing $\mathcal{M}(\rho,\beta|{\bf X})$.
The map $\Lambda\mapsto\mathbf{M}_{\alpha}(\Lambda;\rho,\tilde{\rho})$ is
convex. Solving $d\mathbf{M}_{\alpha}/d\Lambda=0$ we get that
$\text{argmin}_{\Lambda}\,\mathbf{M}_{\alpha}(\Lambda;\rho,\tilde{\rho})$ is
the unique root of the equation
$\displaystyle\Lambda^{-1}P_{\gamma}(\Lambda^{2};\tfrac{1}{2})-P_{\gamma}(\Lambda^{2};0)=\frac{\rho}{\alpha(1-\rho)}\,.$
(11)
The left-hand side of (11) is decreasing in $\Lambda$, so the solution is
determined numerically by bisection.
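As an illustration (not the reference implementation; programs for these quantities are deposited in [21] and offered at RunMyCode.org), the curves can be evaluated directly from Eq. (7) and the definition (8) by numerical quadrature, minimizing over $\Lambda$ rather than solving (11). The function names below are ours, and quadrature tolerances are left at library defaults.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize_scalar

def P(x, k, gamma):
    """Complementary incomplete moment of the Marchenko-Pastur law, Eq. (7)."""
    gm, gp = (1.0 - np.sqrt(gamma))**2, (1.0 + np.sqrt(gamma))**2
    lo = max(x, gm)
    if lo >= gp:
        return 0.0
    integrand = lambda t: t**(k - 1.0) * np.sqrt((gp - t) * (t - gm))
    val, _ = quad(integrand, lo, gp)
    return val / (2.0 * np.pi * gamma)

def M_alpha(lam, rho, rho_t, alpha):
    """M_alpha(Lambda; rho, rho~) as defined in (8)."""
    gamma = (rho_t - rho * rho_t) / (rho - rho * rho_t)
    L2 = lam**2
    bracket = P(L2, 1.0, gamma) - 2.0 * lam * P(L2, 0.5, gamma) + L2 * P(L2, 0.0, gamma)
    return rho + rho_t - rho * rho_t + (1.0 - rho_t) * (rho * L2 + alpha * (1.0 - rho) * bracket)

def minimax_mse(rho, beta=1.0, ensemble="Mat"):
    """M(rho, beta | Mat) or M(rho | Sym): minimize M_alpha over Lambda."""
    alpha, rho_t = (1.0, beta * rho) if ensemble == "Mat" else (0.5, rho)
    res = minimize_scalar(lambda L: M_alpha(L, rho, rho_t, alpha),
                          bounds=(1e-3, 10.0), method="bounded")
    return res.fun

# Example: the square case rho = 1/10; compare with the value 0.351 tabulated in Figure 4.
print(round(minimax_mse(0.1), 3))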
For square matrices ($\rho=\tilde{\rho}$), (8) can be expressed using
elementary trigonometric functions. In [11] it is shown that
$\displaystyle\mathbf{M}_{\alpha}(\Lambda;\rho,\rho)=\rho(2-\rho)+(1-\rho)\big[\rho\Lambda^{2}+\alpha(1-\rho)\left(Q_{2}(\Lambda)-2\Lambda Q_{1}(\Lambda)+\Lambda^{2}Q_{0}(\Lambda)\right)\big]\,,$ (12)
where
$\displaystyle Q_{0}(x)=\frac{1}{\pi}\int_{x}^{2}\sqrt{4-t^{2}}\,dt=1-\frac{x}{2\pi}\sqrt{4-x^{2}}-\frac{2}{\pi}\arctan\Big(\frac{x}{\sqrt{4-x^{2}}}\Big)$
$\displaystyle Q_{1}(x)=\frac{1}{\pi}\int_{x}^{2}t\sqrt{4-t^{2}}\,dt=\frac{1}{3\pi}(4-x^{2})^{3/2}$
$\displaystyle Q_{2}(x)=\frac{1}{\pi}\int_{x}^{2}t^{2}\sqrt{4-t^{2}}\,dt=1-\frac{1}{4\pi}x\sqrt{4-x^{2}}(x^{2}-2)-\frac{2}{\pi}\arcsin\Big(\frac{x}{2}\Big)$
are the complementary incomplete moments of the Quarter Circle law. Moreover
$\displaystyle\text{argmin}_{\Lambda}\mathbf{M}_{\alpha}(\Lambda;\rho,\rho)=2\cdot\sin\left(\theta_{\alpha}(\rho)\right)\,,$
(13)
where $\theta_{\alpha}(\rho)\in[0,\pi/2]$ is the unique solution to the
transcendental equation
$\displaystyle\theta+\cot(\theta)\cdot\left(1-\frac{1}{3}\cos^{2}(\theta)\right)=\frac{\pi(1+\alpha^{-1}\rho-\rho)}{2(1-\rho)}\,,$
(14)
which is a simplified version of (11).
#### Parametric representation of the minimax curves.
For square matrices ($\rho=\tilde{\rho}$) the minimax curves
$\mathcal{M}(\rho,1|{Mat})$ and $\mathcal{M}(\rho|{Sym})$ admit a parametric
representation in the $(\rho,\mathcal{M})$ plane using elementary
trigonometric functions; see [11]. As $\theta$ ranges over $[0,\pi/2]$,
$\displaystyle\rho(\theta)=1-\frac{\pi/2}{\theta+\cot(\theta)\cdot\left(1-\frac{1}{3}\cos^{2}(\theta)\right)}$
$\displaystyle\mathcal{M}(\theta)=2\rho(\theta)-\rho^{2}(\theta)+4\rho(\theta)(1-\rho(\theta))\sin^{2}(\theta)+\frac{4}{\pi}(1-\rho(\theta))^{2}\left[(\pi-2\theta)\left(\frac{5}{4}-\cos^{2}(\theta)\right)+\frac{\sin(2\theta)}{12}(\cos(2\theta)-14)\right]$
is a parametric representation of $\mathcal{M}(\rho,1|{Mat})$, and similarly
$\displaystyle\rho(\theta)=\frac{\theta+\cot(\theta)\cdot\left(1-\frac{1}{3}\cos^{2}(\theta)\right)-\pi/2}{\theta+\cot(\theta)\cdot\left(1-\frac{1}{3}\cos^{2}(\theta)\right)+\pi/2}$
$\displaystyle\mathcal{M}(\theta)=2\rho(\theta)-\rho^{2}(\theta)+4\rho(\theta)(1-\rho(\theta))\sin^{2}(\theta)+\frac{2}{\pi}(1-\rho(\theta))^{2}\left[(\pi-2\theta)\left(\frac{5}{4}-\cos^{2}(\theta)\right)+\frac{\sin(2\theta)}{12}(\cos(2\theta)-14)\right]$
is a parametric representation of $\mathcal{M}(\rho|{Sym})$.
## Appendix B Summary of Empirical Results
$\rho$ | N | T | ${\cal M}(\rho)$ | $\hat{\delta}(\rho)$ | $a$ | $b$ | $Z$
---|---|---|---|---|---|---|---
1/10 | 40 | 400 | 0.351 | 0.352 | -0.128 | 182.978 | -0.581
50 | 400 | 0.351 | 0.350 | 0.282 | 200.131 | 1.213
60 | 400 | 0.351 | 0.352 | -0.096 | 221.212 | -0.398
80 | 400 | 0.351 | 0.350 | 0.415 | 295.049 | 1.452
100 | 400 | 0.351 | 0.349 | 0.641 | 383.493 | 1.900
1/8 | 32 | 400 | 0.414 | 0.412 | 0.262 | 119.424 | 1.457
48 | 400 | 0.414 | 0.413 | 0.262 | 204.025 | 1.120
64 | 400 | 0.414 | 0.412 | 0.560 | 270.147 | 2.006
80 | 400 | 0.414 | 0.413 | 0.296 | 290.536 | 1.057
1/6 | 36 | 400 | 0.507 | 0.505 | 0.356 | 150.509 | 1.749
60 | 400 | 0.507 | 0.506 | 0.137 | 216.067 | 0.571
1/4 | 32 | 400 | 0.655 | 0.651 | 0.398 | 97.107 | 2.375
48 | 400 | 0.655 | 0.653 | 0.356 | 191.373 | 1.554
64 | 400 | 0.655 | 0.653 | 0.507 | 242.289 | 1.935
80 | 400 | 0.655 | 0.651 | 1.312 | 333.842 | 3.584
1/3 | 36 | 400 | 0.765 | 0.765 | -0.006 | 148.439 | -0.029
60 | 400 | 0.765 | 0.762 | 1.145 | 317.633 | 3.348
90 | 400 | 0.765 | 0.762 | 1.487 | 384.289 | 3.610
$1/2$ | 50 | 400 | 0.905 | 0.903 | 0.658 | 260.939 | 2.361
$3/4$ | 40 | 400 | 0.989 | 0.986 | 1.535 | 709.400 | 3.837
$9/10$ | 50 | 400 | 0.999 | 0.998 | 2.478 | 2980.216 | 3.435
Figure 6: Results with ${\bf X}={Mat}_{N}$.
$\rho$ | N | T | ${\cal M}(\rho)$ | $\hat{\delta}(\rho)$ | $a$ | $b$ | $Z$
---|---|---|---|---|---|---|---
1/10 | 40 | 800 | 0.315 | 0.310 | 0.787 | 148.605 | 4.268
60 | 800 | 0.315 | 0.312 | 0.519 | 186.020 | 2.640
| 80 | 800 | 0.315 | 0.311 | 0.995 | 265.114 | 3.864
1/8 | 40 | 800 | 0.371 | 0.369 | 0.270 | 141.743 | 1.585
56 | 800 | 0.371 | 0.369 | 0.230 | 176.063 | 1.212
| 80 | 800 | 0.371 | 0.369 | 0.368 | 244.721 | 1.628
1/7 | 35 | 800 | 0.407 | 0.403 | 0.623 | 126.659 | 3.730
49 | 800 | 0.407 | 0.404 | 0.496 | 140.034 | 2.878
| 70 | 800 | 0.407 | 0.404 | 0.581 | 184.556 | 2.898
1/6 | 36 | 800 | 0.453 | 0.447 | 0.702 | 116.847 | 4.607
60 | 800 | 0.453 | 0.447 | 1.097 | 193.706 | 5.133
| 90 | 800 | 0.453 | 0.449 | 1.170 | 311.016 | 4.246
1/5 | 40 | 800 | 0.511 | 0.505 | 0.786 | 134.698 | 5.448
50 | 800 | 0.511 | 0.507 | 0.707 | 164.398 | 4.528
| 80 | 800 | 0.511 | 0.507 | 0.859 | 205.621 | 4.778
1/4 | 36 | 800 | 0.588 | 0.579 | 0.967 | 110.989 | 7.120
60 | 800 | 0.588 | 0.582 | 1.155 | 181.174 | 6.379
| 80 | 800 | 0.588 | 0.583 | 1.450 | 271.651 | 6.066
1/3 | 36 | 800 | 0.694 | 0.685 | 1.085 | 124.413 | 3.926
60 | 800 | 0.694 | 0.688 | 1.034 | 192.690 | 5.710
| 90 | 800 | 0.694 | 0.689 | 1.212 | 263.543 | 5.481
1/2 | 36 | 800 | 0.844 | 0.837 | 1.306 | 180.027 | 6.965
60 | 800 | 0.844 | 0.838 | 1.872 | 320.651 | 6.425
| 90 | 800 | 0.844 | 0.840 | 1.884 | 438.138 | 5.513
Figure 7: Results with ${\bf X}={Sym}_{N}$.
$\beta$ | $\rho$ | ${\cal M}(\rho)$ | $\hat{\delta}(\rho)$ | $a$ | $b$ | $Z$ | $\sqrt{M\cdot N}$
---|---|---|---|---|---|---|---
$1/4$ | 0.100 | 0.241 | 0.241 | 0.020 | 215.506 | 0.085 | 60.000
0.125 | 0.290 | 0.290 | -0.037 | 322.468 | -0.127 | 64.000
0.143 | 0.323 | 0.321 | 0.468 | 238.477 | 1.813 | 70.000
0.167 | 0.365 | 0.362 | 0.884 | 247.690 | 3.109 | 60.000
0.200 | 0.421 | 0.422 | -0.153 | 246.541 | -0.597 | 60.000
0.250 | 0.498 | 0.496 | 0.339 | 214.134 | 1.405 | 64.000
0.333 | 0.610 | 0.607 | 0.856 | 284.629 | 2.834 | 60.000
| 0.500 | 0.788 | 0.786 | 0.635 | 297.803 | 2.124 | 64.000
$1/3$ | 0.100 | 0.255 | 0.255 | -0.105 | 269.362 | -0.395 | 51.962
0.143 | 0.340 | 0.340 | 0.031 | 210.274 | 0.133 | 48.497
0.167 | 0.383 | 0.381 | 0.373 | 169.227 | 1.733 | 51.962
0.200 | 0.440 | 0.437 | 0.589 | 197.335 | 2.457 | 51.962
0.250 | 0.517 | 0.517 | 0.113 | 218.479 | 0.470 | 55.426
0.333 | 0.629 | 0.626 | 0.661 | 219.930 | 2.584 | 51.962
0.500 | 0.801 | 0.799 | 0.572 | 274.122 | 2.036 | 55.426
$1/2$ | 0.200 | 0.475 | 0.474 | 0.127 | 159.697 | 0.616 | 42.426
0.250 | 0.554 | 0.553 | 0.258 | 201.035 | 1.114 | 45.255
0.286 | 0.604 | 0.603 | 0.307 | 167.955 | 1.434 | 49.497
0.333 | 0.665 | 0.662 | 0.486 | 155.704 | 2.327 | 42.426
0.400 | 0.738 | 0.737 | 0.292 | 184.091 | 1.308 | 42.426
0.500 | 0.827 | 0.825 | 0.460 | 228.287 | 1.826 | 45.255
0.667 | 0.930 | 0.927 | 0.729 | 248.019 | 3.073 | 42.426
$3/5$ | 0.100 | 0.296 | 0.295 | 0.146 | 191.950 | 0.646 | 38.730
0.125 | 0.352 | 0.351 | 0.242 | 219.767 | 1.000 | 61.968
0.143 | 0.389 | 0.388 | 0.105 | 179.344 | 0.484 | 54.222
0.167 | 0.436 | 0.433 | 0.425 | 198.788 | 1.814 | 46.476
0.200 | 0.495 | 0.494 | 0.237 | 146.409 | 1.196 | 38.730
0.250 | 0.575 | 0.573 | 0.361 | 164.339 | 1.704 | 46.476
0.333 | 0.686 | 0.684 | 0.509 | 258.045 | 1.886 | 46.476
0.500 | 0.843 | 0.842 | 0.153 | 156.137 | 0.751 | 46.476
$2/3$ | 0.100 | 0.305 | 0.306 | -0.168 | 221.047 | -0.697 | 36.742
0.125 | 0.363 | 0.362 | 0.064 | 151.402 | 0.321 | 39.192
0.143 | 0.401 | 0.399 | 0.263 | 142.052 | 1.346 | 34.293
0.167 | 0.448 | 0.446 | 0.271 | 157.783 | 1.323 | 36.742
0.200 | 0.509 | 0.510 | -0.209 | 155.201 | -1.034 | 36.742
0.250 | 0.589 | 0.587 | 0.315 | 152.493 | 1.555 | 39.192
0.333 | 0.700 | 0.697 | 0.408 | 190.613 | 1.768 | 36.742
0.500 | 0.854 | 0.853 | 0.257 | 232.865 | 1.034 | 39.192
$3/4$ | 0.100 | 0.317 | 0.317 | -0.043 | 163.597 | -0.207 | 34.641
0.125 | 0.376 | 0.375 | 0.237 | 192.933 | 1.047 | 55.426
0.143 | 0.415 | 0.412 | 0.562 | 178.363 | 2.476 | 48.497
0.167 | 0.463 | 0.461 | 0.304 | 174.599 | 1.405 | 41.569
0.200 | 0.525 | 0.524 | 0.159 | 120.459 | 0.886 | 34.641
0.250 | 0.606 | 0.604 | 0.276 | 154.259 | 1.353 | 41.569
0.333 | 0.716 | 0.714 | 0.378 | 180.963 | 1.682 | 41.569
0.500 | 0.867 | 0.866 | 0.341 | 283.911 | 1.227 | 41.569
$4/5$ | 0.100 | 0.324 | 0.323 | 0.220 | 174.269 | 1.019 | 44.721
0.125 | 0.384 | 0.381 | 0.423 | 160.001 | 1.999 | 44.721
0.143 | 0.423 | 0.422 | 0.273 | 248.331 | 1.055 | 62.610
0.167 | 0.472 | 0.472 | -0.025 | 190.094 | -0.112 | 40.249
0.200 | 0.534 | 0.533 | 0.204 | 195.806 | 0.889 | 44.721
0.250 | 0.616 | 0.616 | -0.076 | 197.138 | -0.333 | 44.721
0.333 | 0.726 | 0.725 | 0.213 | 203.942 | 0.910 | 40.249
0.500 | 0.875 | 0.873 | 0.412 | 215.351 | 1.689 | 44.721
Figure 8: Results with non-square matrices ${\bf X}={Mat}_{M,N}$. Each row
based on $T=400$ Monte Carlo trials.
N | r | $\rho$ | $6\rho$ | ${\cal M}(\rho)$ | $\hat{\delta}(\rho)$ | $a$ | $b$ | $Z$
---|---|---|---|---|---|---|---|---
90 | 1 | 0.011 | 0.067 | 0.059 | 0.054 | 3.595 | 828.597 | 4.627
80 | 1 | 0.013 | 0.075 | 0.065 | 0.059 | 2.766 | 534.431 | 5.117
70 | 1 | 0.014 | 0.086 | 0.072 | 0.069 | 1.444 | 432.528 | 3.962
60 | 1 | 0.017 | 0.100 | 0.081 | 0.078 | 1.374 | 434.911 | 3.542
50 | 1 | 0.020 | 0.120 | 0.094 | 0.091 | 1.159 | 348.687 | 3.226
90 | 2 | 0.022 | 0.133 | 0.103 | 0.101 | 1.225 | 611.400 | 2.499
40 | 1 | 0.025 | 0.150 | 0.114 | 0.114 | -0.169 | 328.547 | -0.513
40 | 1 | 0.025 | 0.150 | 0.114 | 0.114 | -0.169 | 328.547 | -0.513
70 | 2 | 0.029 | 0.171 | 0.127 | 0.125 | 6.383 | 2746.893 | 0.007
30 | 1 | 0.033 | 0.200 | 0.145 | 0.145 | 0.013 | 137.638 | 0.051
30 | 1 | 0.033 | 0.200 | 0.145 | 0.145 | 0.013 | 137.638 | 0.051
30 | 1 | 0.033 | 0.200 | 0.145 | 0.145 | 0.013 | 137.638 | 0.051
80 | 3 | 0.037 | 0.225 | 0.160 | 0.158 | 0.493 | 345.100 | 1.111
50 | 2 | 0.040 | 0.240 | 0.169 | 0.166 | 1.071 | 364.190 | 1.981
70 | 3 | 0.043 | 0.257 | 0.179 | 0.177 | 3.442 | 1883.417 | 0.006
90 | 4 | 0.044 | 0.267 | 0.184 | 0.183 | 2.044 | 1773.906 | 0.006
20 | 1 | 0.050 | 0.300 | 0.203 | 0.203 | -0.055 | 94.130 | -0.219
20 | 1 | 0.050 | 0.300 | 0.203 | 0.203 | -0.055 | 94.130 | -0.219
20 | 1 | 0.050 | 0.300 | 0.203 | 0.203 | -0.055 | 94.130 | -0.219
20 | 1 | 0.050 | 0.300 | 0.203 | 0.203 | -0.055 | 94.130 | -0.219
70 | 4 | 0.057 | 0.343 | 0.226 | 0.227 | -2.755 | 2732.891 | -0.001
50 | 3 | 0.060 | 0.360 | 0.235 | 0.230 | 6.462 | 1302.371 | 0.008
30 | 2 | 0.067 | 0.400 | 0.256 | 0.255 | 0.106 | 152.849 | 0.290
30 | 2 | 0.067 | 0.400 | 0.256 | 0.255 | 0.106 | 152.849 | 0.290
40 | 3 | 0.075 | 0.450 | 0.281 | 0.280 | 0.114 | 191.765 | 0.249
50 | 4 | 0.080 | 0.480 | 0.296 | 0.294 | 0.361 | 225.965 | 0.622
20 | 2 | 0.100 | 0.600 | 0.351 | 0.345 | 0.387 | 60.051 | 1.345
20 | 2 | 0.100 | 0.600 | 0.351 | 0.345 | 0.387 | 60.051 | 1.345
20 | 2 | 0.100 | 0.600 | 0.351 | 0.345 | 0.387 | 60.051 | 1.345
30 | 4 | 0.133 | 0.800 | 0.434 | 0.434 | -0.005 | 115.737 | -0.011
20 | 3 | 0.150 | 0.900 | 0.472 | 0.480 | -0.690 | 86.275 | -1.510
20 | 4 | 0.200 | 1.200 | 0.572 | 0.587 | -0.994 | 64.706 | -2.057
Figure 9: Results with low-rank square matrices ${\bf X}={Mat}_{N,N}$. Each row based on $T=400$ Monte Carlo trials.
$\rho$ | $N$ | ${\cal M}(\rho)$ | $\hat{\delta}(\rho)$ | $a$ | $b$ | $Z$
---|---|---|---|---|---|---
$1/10$ | 40.000 | 0.351 | 0.350 | 0.187 | 170.554 | 0.878
60.000 | 0.351 | 0.350 | 0.239 | 286.765 | 0.863
80.000 | 0.351 | 0.351 | 0.166 | 290.560 | 0.597
$1/8$ | 40.000 | 0.414 | 0.413 | 0.145 | 146.920 | 0.736
56.000 | 0.414 | 0.412 | 0.552 | 215.088 | 2.220
80.000 | 0.414 | 0.413 | 0.667 | 375.205 | 1.992
$1/7$ | 35.000 | 0.456 | 0.457 | -0.158 | 141.742 | -0.813
49.000 | 0.456 | 0.455 | 0.158 | 216.040 | 0.659
70.000 | 0.456 | 0.454 | 0.911 | 394.564 | 2.527
$1/6$ | 36.000 | 0.507 | 0.505 | 0.370 | 159.357 | 1.768
60.000 | 0.507 | 0.505 | 0.570 | 245.893 | 2.137
90.000 | 0.507 | 0.505 | 0.689 | 335.299 | 2.168
$1/5$ | 40.000 | 0.572 | 0.571 | 0.089 | 172.047 | 0.417
60.000 | 0.572 | 0.568 | 0.912 | 248.273 | 3.188
80.000 | 0.572 | 0.570 | 0.598 | 285.542 | 2.073
$1/3$ | 36.000 | 0.765 | 0.759 | 0.752 | 124.479 | 3.825
60.000 | 0.765 | 0.763 | 0.608 | 234.115 | 2.324
81.000 | 0.765 | 0.762 | 1.053 | 288.525 | 3.306
$1/2$ | 36.000 | 0.905 | 0.903 | 0.487 | 229.398 | 1.914
60.000 | 0.905 | 0.902 | 1.403 | 379.105 | 3.518
90.000 | 0.905 | 0.903 | 11.365 | 3971.012 | 0.008
Figure 10: Results with Rademacher measurements of square matrices ${\bf
X}={Mat}_{N,N}$. Each row based on $T=400$ Monte Carlo trials.
## Appendix C Data Deposition
The data have been deposited in a text file at [21]. A typical fragment of the
file is given here:
Line Project Experiment M N S Instance rank rho delta Err0 Err1 Err2
3381 Nuc_CVX_20121129dii Nuc_CVX_N12S01m 12 12 1 m 4 0.333333333333333 0.73395061728395 0.016630719182629 0 0.444444444444444
3382 Nuc_CVX_20121129dii Nuc_CVX_N12S01m 12 12 1 m 4 0.333333333333333 0.739213775178687 0.0159010475527301 0 0.465277777777778
3383 Nuc_CVX_20121129dii Nuc_CVX_N12S01m 12 12 1 m 4 0.333333333333333 0.744476933073424 0.0161172486497232 0 0.416666666666667
3384 Nuc_CVX_20121129dii Nuc_CVX_N12S01m 12 12 1 m 4 0.333333333333333 0.749740090968161 0.00149080158529591 1 1
3385 Nuc_CVX_20121129dii Nuc_CVX_N12S01m 12 12 1 m 4 0.333333333333333 0.755003248862898 0.0340839945130298 0 0.173611111111111
3386 Nuc_CVX_20121129dii Nuc_CVX_N12S01m 12 12 1 m 4 0.333333333333333 0.760266406757635 0.0235186056093925 0 0.361111111111111
3387 Nuc_CVX_20121129dii Nuc_CVX_N12S01m 12 12 1 m 4 0.333333333333333 0.765529564652372 0.0136400454215757 0 0.506944444444444
3388 Nuc_CVX_20121129dii Nuc_CVX_N12S01m 12 12 1 m 4 0.333333333333333 0.770792722547109 1.11644848368808e-09 1 1
3389 Nuc_CVX_20121129dii Nuc_CVX_N12S01m 12 12 1 m 4 0.333333333333333 0.776055880441845 0.00194118165102407 0 1
3390 Nuc_CVX_20121129dii Nuc_CVX_N12S01m 12 12 1 m 4 0.333333333333333 0.781319038336582 0.0059943065062774 0 0.902777777777778
3391 Nuc_CVX_20121129dii Nuc_CVX_N12S01m 12 12 1 m 4 0.333333333333333 0.786582196231319 1.83878516274726e-09 1 1
3392 Nuc_CVX_20121129dii Nuc_CVX_N12S01m 12 12 1 m 4 0.333333333333333 0.791845354126056 7.85225405546251e-09 1 1
3393 Nuc_CVX_20121129dii Nuc_CVX_N12S01m 12 12 1 m 4 0.333333333333333 0.797108512020793 4.47138355029567e-10 1 1
3394 Nuc_CVX_20121129dii Nuc_CVX_N12S01m 12 12 1 m 4 0.333333333333333 0.80237166991553 1.03566257175607e-08 1 1
3395 Nuc_CVX_20121129dii Nuc_CVX_N12S01m 12 12 1 m 4 0.333333333333333 0.807634827810266 2.62954112892325e-09 1 1
3396 Nuc_CVX_20121129dii Nuc_CVX_N12S01m 12 12 1 m 4 0.333333333333333 0.812897985705003 0.00120410713874671 1 1
3397 Nuc_CVX_20121129dii Nuc_CVX_N12S01m 12 12 1 m 4 0.333333333333333 0.81816114359974 1.03259685844506e-08 1 1
3398 Nuc_CVX_20121129dii Nuc_CVX_N12S01m 12 12 1 m 4 0.333333333333333 0.823424301494477 2.52202641519081e-10 1 1
3399 Nuc_CVX_20121129dii Nuc_CVX_N12S01m 12 12 1 m 4 0.333333333333333 0.828687459389214 1.94437637948599e-09 1 1
3400 Nuc_CVX_20121129dii Nuc_CVX_N12S01m 12 12 1 m 4 0.333333333333333 0.83395061728395 1.69829115472996e-10 1 1
The fields have the following meaning
* •
Line – Line number in file; in the above example, lines 3381-3400.
* •
Project – File identifier – allows identification of code and logs that
generated these data; in the above fragment, ’Nuc_CVX_20121129dii’.
* •
Experiment – File identifier – allows identification of code and logs that
generated these data; in the above fragment, ’Nuc_CVX_N12S01m’.
* •
M,N – matrix size of $X_{0}$, i.e., $M$ by $N$ matrix; in the above fragment
$M=N=12$.
* •
S – number of matrices in a stack (see below).
* •
Instance – alphabetic code a-t, identifying one of 20 identical runs which
generated this result; in the above fragment, ’m’.
* •
rank – integer rank of matrix; in the above fragment, $4$.
* •
rho – fraction in $[0,1]$, $\rho=\mbox{rank}/N$; in the above fragment, $1/3$.
* •
delta – $\delta=n/(MN)$ in the case of an asymmetric matrix, or $\delta=2n/(N(N+1))$
in the case of a symmetric matrix.
* •
Err0 – $\|\hat{X}-X_{0}\|_{F}/(NM)^{1/2}$.
* •
Err1 – 1 iff $\|\hat{X}-X_{0}\|_{F}/\|X_{0}\|_{F}<tol$, and $0$ otherwise.
* •
Err2 – fraction of entries with discrepancy $|\hat{X}(i,j)-X_{0}(i,j)|<tol$.
Additional concepts:
* •
Numerical Tolerance. In our experiments, we used a numerical error tolerance
parameter $tol=0.001$.
* •
Our experiments also extensively covered cases where $X_{0}$ is a ’stack of
matrices’, i.e. a 3-way array $M\times N\times S$, where $S$ is the number of
items in the stack. The only case of interest for this paper is $S=1$.
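Given the field descriptions above, a minimal sketch of working with the deposited file follows; it assumes whitespace-separated columns with a header row as in the fragment, and the local file name used here is hypothetical. It tabulates the observed success fraction $\hat{\pi}$ per $(\rho,\delta)$ cell from the Err1 indicator.

```python
import pandas as pd

# Hypothetical local copy of the deposited text file described above.
df = pd.read_csv("matrix_recovery_results.txt", sep=r"\s+")

df = df[df["S"] == 1]                        # this paper only analyzes single matrices (S = 1)
summary = (df.groupby(["rho", "delta"])["Err1"]
             .agg(T="count", successes="sum")
             .reset_index())
summary["pi_hat"] = summary["successes"] / summary["T"]
print(summary.head())
```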
## Appendix D Code Deposition
There are three types of code deposition.
* •
Reproduction from Data Deposition. The code that actually makes the figures
and tables we presented in this paper, starting from the data deposition. This
is deposited at [21]. The code we actually ran to create our figures and
tables is a set of R scripts, and was run on Mac OS X. We believe the same
code runs with minimal changes in a Linux environment.
* •
RunMyCode Deposition. For readers who wish simply to compute the value of the
Minimax Mean-Squared Error over each of the matrix classes we considered, we
offer a Minimax MSE calculator at RunMyCode.org.
* •
Full Code and Results Deposition. At [21], we also offer a literal dump of the
code we ran and all the logs and result files we obtained.
We believe the first two items are self-documenting. The third item can be
explained as follows. Our database of the experiment and all results is
contained in a Unix directory tree rooted at exp.
Inside directory exp one finds further directories, as indicated below:
Nuc_CVX_20120605a
Nuc_CVX_20120607a
...
Nuc_CVX_20121107a
Nuc_CVX_20121113a
...
Nuc_CVX_20121120a
Nuc_CVX_20121121a
...
SDP_CVX_20121230h
SDP_CVX_20121230i
...
Nuc_CVX_20130110a
Nuc_CVX_20130110c
...
Nuc_CVX_20130117Cc
Nuc_CVX_20130117Cd
...
Nuc_CVX_20130126Df
Nuc_CVX_20130126Dg
Nuc_CVX_20130126Dh
These directory names are precisely the Project names used in the data
deposition. Say we look inside one of these directories, for example
’Nuc_CVX_20121121b’. We will find a directory called bin containing software,
and a list of further directories. We excerpt from a 2-column listing of those
directories:
Nuc_CVX_N05S1a    Nuc_CVX_N30S1c
Nuc_CVX_N05S1b    Nuc_CVX_N30S1d
Nuc_CVX_N05S1c    Nuc_CVX_N30S1e
...
Nuc_CVX_N10S1j    Nuc_CVX_N35S1l
Nuc_CVX_N10S1k    Nuc_CVX_N35S1m
Nuc_CVX_N10S1l    Nuc_CVX_N35S1n
...
Nuc_CVX_N15S1h    Nuc_CVX_N40S1j
Nuc_CVX_N15S1i    Nuc_CVX_N40S1k
Nuc_CVX_N15S1j    Nuc_CVX_N40S1l
...
Nuc_CVX_N20S1o    Nuc_CVX_N45S1q
Nuc_CVX_N20S1p    Nuc_CVX_N45S1r
Nuc_CVX_N20S1q    Nuc_CVX_N45S1s
...
Nuc_CVX_N25S1n    Nuc_CVX_N50S1p
Nuc_CVX_N25S1o    Nuc_CVX_N50S1q
Nuc_CVX_N25S1p    Nuc_CVX_N50S1r
...
These directory names are precisely the ’Experiment’ values seen in the data
deposition. Inside the bin directory we find the Matlab code used in common by
all the above experiments.
Nuc_CVX_20121121b$ ls bin
aveRank.m       solveNuc_CVX_Stack_Arb.m
predNucPT.m     stackRankRMatrix.m
rankRMatrix.m
Inside one of the experiment directories we find experiment-specific files (in
both Matlab and Bash script) as well as output files. The subdirectory logs
contains logs that were created while the jobs were running.
Nuc_CVX_20121121b $ ls -R -C1 Nuc_CVX_N25S1o
bashMain.sh
PTExperiment.m
matlabMain.m
results.mat
randState.mat
Nuc_CVX_N25S1o/logs:
Nuc_CVX_N25S1o.stderr
Nuc_CVX_N25S1o.stdout
runMatlab.20121121b_Nuc_CVX_N25S1o.log
Here the .mat files were produced by Matlab during the running of the
experiment.
* •
randState.mat preserves the state of the random number generator at the
beginning of that experiment.
* •
results.mat gives the results of the individual problem instances; typically
in the form $(r,n,N,M,Err0,Err1,Err2)$.
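A minimal sketch for inspecting one such results.mat from Python follows (the variable names stored inside the file are not documented here, so the snippet only lists them and prints array shapes; the relative path assumes the working directory is the experiment directory shown above).

```python
from scipy.io import loadmat

results = loadmat("Nuc_CVX_N25S1o/results.mat")
for name, value in results.items():
    if not name.startswith("__"):          # skip MATLAB header entries
        print(name, getattr(value, "shape", type(value)))
```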
# Rigidity of thin disk configurations, via fixed-point index
Andrey M. Mishchenko [email protected]
###### Abstract.
We prove some rigidity theorems for configurations of closed disks. First, fix
two collections $\mathcal{C}$ and $\tilde{\mathcal{C}}$ of closed disks in the
Riemann sphere $\hat{\mathbb{C}}$, sharing a contact graph which
(mostly-)triangulates $\hat{\mathbb{C}}$, so that for all corresponding pairs
of intersecting disks $D_{i},D_{j}\in\mathcal{C}$ and
$\tilde{D}_{i},\tilde{D}_{j}\in\tilde{\mathcal{C}}$ we have that the overlap
angle between $D_{i}$ and $D_{j}$ agrees with that between $\tilde{D}_{i}$ and
$\tilde{D}_{j}$. We require the extra condition that the collections are
_thin_ , meaning that no pair of disks of $\mathcal{C}$ meet in the interior
of a third, and similarly for $\tilde{\mathcal{C}}$. Then $\mathcal{C}$ and
$\tilde{\mathcal{C}}$ differ by a Möbius or anti-Möbius transformation. We
also prove the analogous statements for collections of closed disks in the
complex plane $\mathbb{C}$, and in the hyperbolic plane $\mathbb{H}^{2}$.
Our method of proof is elementary and self-contained, relying only on plane
topology arguments and manipulations by Möbius transformations. In particular,
we generalize a fixed-point argument which was previously applied by Schramm
and He to prove the analogs of our theorems in the circle-packing setting,
that is, where the disks in question are pairwise interiorwise disjoint. It
was previously thought that these methods of proof depended too crucially on
the pairwise interiorwise disjointness of the disks for there to be a hope for
generalizing them to the setting of configurations of overlapping disks.
We end by stating some open problems and conjectures, including conjectured
generalizations both of our main result and of our main technical theorem.
Specifically, we conjecture that our thinness condition is unnecessary in the
statements of our main theorems.
The author was partially supported by NSF grants DMS-0456940, DMS-0555750,
DMS-0801029, DMS-1101373. This article is adapted from the Ph.D. thesis
[mishchenko-thesis] of the author. MSC2010 subject classification: 52C26.
###### Contents
1. Introduction
2. Related work
3. Fixed-point index preliminaries
4. Rigidity proofs in the circle packing case
5. Our main technical result, the Index Theorem
6. Subsumptive collections of disks
7. Proofs of our main rigidity theorems
8. Topological configurations
9. Preliminary topological lemmas
10. Torus parametrization
11. Proof of Propositions 11.1 and 11.5
12. Generalizations, open problems, and conjectures
## 1\. Introduction
A _circle packing_ is defined to be a collection of pairwise interiorwise
disjoint metric closed disks in the Riemann sphere $\hat{\mathbb{C}}$. We will
always consider $\hat{\mathbb{C}}$ to have the usual constant curvature $+1$
spherical metric. The _contact graph_ of a circle packing $\mathcal{P}$ is the
graph $G$ having a vertex for every disk of $\mathcal{P}$, so that two
vertices of $G$ are connected by an edge if and only if the corresponding
disks of $\mathcal{P}$ meet. If $\mathcal{P}$ is a locally finite circle
packing in $\hat{\mathbb{C}}$, then clearly its contact graph is simple and
planar. A graph is _simple_ if it has no loops and no repeated edges. If a
circle packing $\mathcal{P}$ has contact graph $G$ then we say that
$\mathcal{P}$ _realizes_ $G$. It turns out that the converse also holds: if
$G$ is a simple planar graph, then there is a circle packing in
$\hat{\mathbb{C}}$ having $G$ as its contact graph. This well-known result,
known as the Circle Packing Theorem, is originally due to Koebe, first
appearing in [koebe-1936].
The Circle Packing Theorem settles the question of existence of circle
packings in $\hat{\mathbb{C}}$. It is then natural to ask for rigidity
statements. In the same article, Koebe states a theorem equivalent to the
following:
###### Koebe–Andreev–Thurston Theorem 1.1.
Let $G$ be the 1-skeleton of a triangulation of the 2-sphere $\mathbb{S}^{2}$.
Then the circle packing realizing $G$ is unique, up to Möbius and anti-Möbius
transformations.
A _triangulation_ of a topological surface $S$ is a collection of triangular
faces, each of which is a topological closed disk, so that two given faces are
glued either along a single edge, or at a single vertex, or not at all, and so
that there are no gluings along the boundary of any one fixed triangle, such
that that the resulting object is homeomorphic to $S$. An anti-Möbius
transformation is the composition of a Möbius transformation with
$z\mapsto\bar{z}$. Möbius and anti-Möbius transformations send circles to
circles and preserve contact graphs, so the rigidity given by Theorem 1.1 is
the best possible.
After Koebe, the Circle Packing Theorem and Theorem 1.1 were for a long time
forgotten. They were reintroduced to the mathematical community at large in
the 1970s by Thurston (at the International Congress of Mathematicians,
Helsinki, 1978, according to [MR1303402], p. 135). There he discussed his
methods of proof based on Andreev’s characterization of finite-volume
hyperbolic polyhedra given in [MR0273510]. The best source we are aware of for
Thurston’s original work on this topic is his widely circulated lecture notes,
[thurston-gt3m-notes], Section 13.6.
Thurston later conjectured (in his address at the International Symposium in
Celebration of the Proof of the Bieberbach Conjecture, Purdue University,
March 1985, according to [MR1207210], p. 371) that the Riemann mapping can be
approximated by circle packings. The subsequent proof of this conjecture by
Rodin and Sullivan in [MR906396] confirmed the importance of circle packing to
complex analysis. A flurry of research in the area followed, and circle
packing has since found applications in many other areas, for example, in
combinatorics, hyperbolic 3-manifolds, probability, and geometric analysis. A
list of references for successful applications of circle packing to other
areas appears, for example, in [MR2884870], Section 2.2.
It is natural to ask for rigidity statements in the spirit of Theorem 1.1 in
geometries besides the spherical one, specifically in Euclidean and hyperbolic
geometries. This line of investigation led Schramm, and later He, to the
following theorem:
###### Discrete Uniformization Theorem 1.2.
Let $G$ be the 1-skeleton of a triangulation of a topological open disk.
Suppose that $\mathcal{P}$ and $\tilde{\mathcal{P}}$ are circle packings
realizing $G$, so that $\mathcal{P}$ is locally finite in $\mathbb{G}$ and
$\tilde{\mathcal{P}}$ is locally finite in $\tilde{\mathbb{G}}$, where each of
$\mathbb{G}$ and $\tilde{\mathbb{G}}$ is equal to one of $\mathbb{C}$ and
$\mathbb{H}^{2}$. Then $\mathbb{G}=\tilde{\mathbb{G}}$. Furthermore, the
packings $\mathcal{P}$ and $\tilde{\mathcal{P}}$ differ by a Euclidean
similarity if $\mathbb{G}=\tilde{\mathbb{G}}=\mathbb{C}$, and by a hyperbolic
isometry if $\mathbb{G}=\tilde{\mathbb{G}}=\mathbb{H}^{2}$.
From now on, we consider the hyperbolic plane $\mathbb{H}^{2}$ to be
identified with the open unit disk $\mathbb{D}\subset\mathbb{C}$ via the
Poincaré embedding, and embed $\mathbb{C}\subset\hat{\mathbb{C}}$ via usual
stereographic projection. Then
$\mathbb{H}^{2}\cong\mathbb{D}\subset\mathbb{C}\subset\hat{\mathbb{C}}=\mathbb{C}\cup\\{\infty\\}$.
Furthermore, a metric closed disk in $\mathbb{H}^{2}$ embeds into a metric
closed disk in $\mathbb{D}\subset\mathbb{C}$ under the Poincaré embedding.
Also, a metric closed disk in $\mathbb{C}$ is identified with a metric closed
disk in $\hat{\mathbb{C}}$ under stereographic projection. For clarity we
remark that metric centers of disks are in general not preserved under the
Poincaré embedding, nor under stereographic projection.
The first complete proof of Theorem 1.2 was given by Schramm in [MR1076089],
using only elementary plane topology arguments. Then, in [MR1207210], He and
Schramm implicitly reinterpreted the method of [MR1076089] as a fixed-point
argument. This approach turned out to be quite powerful, and allowed them to
prove much more general statements about domains in $\hat{\mathbb{C}}$ whose
boundary components are circles and points. In particular, they prove the
Koebe conjecture for domains having countably many boundary components. They
also prove an existence statement for circle packings in $\mathbb{C}$ and
$\mathbb{H}^{2}$, to go along with the rigidity of Theorem 1.2. We discuss the
results of [MR1207210] in more detail in Section 2 on related work. Other
proofs of Theorem 1.2 have since been found, which we discuss briefly also in
Section 2.
In this article, we generalize the fixed-point arguments used in [MR1207210],
and implicitly in [MR1076089], to prove generalizations of Theorems 1.1 and
1.2 to collections of disks whose interiors may overlap. It was previously
thought that those arguments depended too crucially on the pairwise
interiorwise disjointness of the disks for there to be hope of generalizing
them in this direction (for example, see comments in [MR1680531], p. 3, made
by one of the authors of [MR1207210]). Specifically, we prove rigidity and
uniformization theorems for so-called _thin disk configurations_:
###### Definition 1.3.
A _disk configuration_ is a collection of metric closed disks on the Riemann
sphere $\hat{\mathbb{C}}$, so that no disk of the collection is contained in
any other, but with no other conditions. A disk configuration is called _thin_
if no three disks of the configuration have a common point. The _contact
graph_ of a disk configuration $\mathcal{C}$ is a graph with a vertex for
every disk of $\mathcal{C}$, so that two vertices share an edge if and only if
the corresponding disks meet.
Suppose that $G=(V,E)$ is a graph, with vertex set $V$ and edge set $E$, so
that $G$ is the contact graph of the disk configuration
$\mathcal{C}=\\{D_{v}\\}_{v\in V}$. Let $\Theta:E\to[0,\pi)$ be so that if
$\left<u,v\right>$ is an edge of $G$, then
$\measuredangle(D_{u},D_{v})=\Theta\left<u,v\right>$, with
$\measuredangle(\cdot,\cdot)$ defined as in Figure 1. Then $(G,\Theta)$ is
called the _incidence data_ of $\mathcal{C}$, and $\mathcal{C}$ is said to
_realize_ $(G,\Theta)$.
Figure 1. The definition of $\measuredangle(A,B)$, the _external intersection
angle_ or _overlap angle_ between two closed disks $A$ and $B$.
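For concreteness we recall a standard closed form for this angle in the Euclidean plane; this is only a reminder of the usual convention in the circle-packing literature, and Figure 1 remains the definition used in this article. If $A$ and $B$ are closed disks with radii $r_{A}$, $r_{B}$ whose centers are at distance $d$, then
$\displaystyle\cos\measuredangle(A,B)=\frac{d^{2}-r_{A}^{2}-r_{B}^{2}}{2r_{A}r_{B}}\,,$
so that $\measuredangle(A,B)=0$ when $A$ and $B$ are externally tangent ($d=r_{A}+r_{B}$), $\measuredangle(A,B)=\pi/2$ when the boundary circles are orthogonal ($d^{2}=r_{A}^{2}+r_{B}^{2}$), and $\measuredangle(A,B)\to\pi$ as the disks approach internal tangency ($d=|r_{A}-r_{B}|$).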
The main rigidity and uniformization result of this paper is the following
theorem:
###### Main Uniformization Theorem 1.4.
Let $G$ be the 1-skeleton of a triangulation of a topological open disk.
Suppose that $\mathcal{C}$ and $\tilde{\mathcal{C}}$ are thin disk
configurations, locally finite in $\mathbb{G}$ and $\tilde{\mathbb{G}}$,
respectively, where each of $\mathbb{G}$ and $\tilde{\mathbb{G}}$ is equal to
one of $\mathbb{C}$ and $\mathbb{H}^{2}$, so that $\mathcal{C}$ and
$\tilde{\mathcal{C}}$ realize the same incidence data $(G,\Theta)$. Then
$\mathbb{G}=\tilde{\mathbb{G}}$, and $\mathcal{C}$ and $\tilde{\mathcal{C}}$
differ by a Euclidean similarity if
$\mathbb{G}=\tilde{\mathbb{G}}=\mathbb{C}$, or by a hyperbolic isometry if
$\mathbb{G}=\tilde{\mathbb{G}}=\mathbb{H}^{2}$.
We also prove the following closely related theorem, using the same
techniques:
###### Main Rigidity Theorem 1.5.
Let $G$ be the 1-skeleton of a triangulation of the 2-sphere $\mathbb{S}^{2}$.
Suppose that $\mathcal{C}$ and $\tilde{\mathcal{C}}$ are thin disk
configurations in $\hat{\mathbb{C}}$ realizing the same incidence data
$(G,\Theta)$. Then $\mathcal{C}$ and $\tilde{\mathcal{C}}$ differ by a Möbius
or an anti-Möbius transformation.
Although its statement has never appeared in the literature, Theorem 1.5 is
not new in the sense that it follows as a corollary of known results, for
example Rivin’s characterization of ideal hyperbolic polyhedra, cf. Section
2.3. We discuss this further in Section 2 on related work. No counterexamples
are known to Theorems 1.4 and 1.5 if the thinness condition is omitted from
their statements, and we conjecture that the theorems continue to hold in this
case. More details are given in Section 12.
This article is organized as follows. First, we give a brief survey of related
work in Section 2. Then, in Section 3, we introduce the so-called _fixed-point
index_ , which will be the essential technical tool in our proofs of the main
rigidity results of this article, Theorems 1.5 and 1.4.
In Section 4, we apply fixed-point index to prove Theorems 1.1 and 1.2 on
rigidity and uniformization of classical circle packings, via the arguments of
[MR1207210]. We include Section 4 for the following reasons. First, He and
Schramm in [MR1207210] work in a much more technical setting, and do not
actually work out proofs of Theorems 1.1 and 1.2: rather, they describe how
such proofs may be obtained by adapting, in a non-trivial way, their proof of
the countably-connected case of the Koebe Conjecture. The hope is that that
the exposition given in Section 4 works to isolate the main ideas of
[MR1207210], and to clarify what is required to generalize those ideas to our
setting.
In Section 5, we state our main technical result, which we call the Index
Theorem 5.3 and sketch its proof. The proofs of Theorems 1.5 and 1.4 require
some elementary lemmas from plane geometry, and these are proved in Section 6.
In Section 7, we prove our main rigidity results, Theorems 1.5 and 1.4, using
the Index Theorem 5.3. Sections 8–11 are spent completing the proof of the
Index Theorem 5.3. We conclude with a discussion of related open questions and
generalizations of our results in Section 12.
Acknowledgments. Thanks to my Ph.D. advisor Jeff Lagarias, for helpful
comments on many portions of my dissertation, from which this article is
adapted. Thanks to Kai Rajala and Karen Smith for reading and commenting on an
early version of the proofs in this article. Thanks to Jordan Watkins for many
fruitful discussions, especially for pointing us strongly in the direction of
Section 10, greatly simplifying that portion of the exposition. Thanks to
Mario Bonk for helpful comments on this article.
## 2\. Related work
### 2.1. Koebe uniformization
Circle packings are closely related to classical complex analysis. As we have
already mentioned, it was conjectured by Thurston, and proved by Rodin and
Sullivan in [MR906396], that circle packings can be used to approximate the
Riemann mapping, in some precise sense. Conversely, theorems in circle packing
can sometimes be proved via applications of results of classical complex
analysis. For example, Koebe first discovered circle packing while researching
what is now known as the Koebe Conjecture, posed in [koebe-1908], p. 358:
###### Koebe Conjecture 2.1.
Every domain $\Omega\subset\hat{\mathbb{C}}$ is biholomorphically equivalent
to a circle domain.
A _circle domain_ is a connected open subset of $\hat{\mathbb{C}}$ all of
whose boundary components are circles and points. In the same article, Koebe
himself gave a construction, via iterative applications of the Riemann
mapping, biholomorphically uniformizing an $\Omega$ having finitely many
boundary components to a circle domain. Later, in [koebe-1936], he used this
uniformization to prove that any finite simple planar graph $G$ admits a
circle packing realizing it. His construction approximates the desired circle
packing by first arranging disjoint not-necessarily-round compact sets roughly
according to the contact pattern demanded by $G$, then uniformizing the
resulting complementary region to a circle domain. The desired circle packing
is then obtained as a limit.
There is an existence statement associated to the rigidity statement of the
Koebe–Andreev–Thurston Theorem 1.1: if $G$ is the 1-skeleton of a
triangulation of $\mathbb{S}^{2}$, then $G$ is finite, simple, and planar, so
there exists a circle packing in $\hat{\mathbb{C}}$ realizing $G$. It is
natural to ask for an analogous existence statement to go along with the
Discrete Uniformization Theorem 1.2. In [MR1207210] He and Schramm prove that
if $G$ is the 1-skeleton of a triangulation of a topological open disk, then
there exists a locally finite circle packing in one of $\mathbb{C}$ and
$\mathbb{H}^{2}$ which realizes $G$. In the same article, they also prove the
existence of the uniformizing map described in the Koebe Conjecture 2.1 for
countably connected domains, that is, domains having countably many boundary
components. The two existence proofs are closely intertwined, both appealing
to fixed-point arguments at some crucial points.
Sometimes when the Koebe Conjecture 2.1 is stated, a statement of uniqueness
of the uniformizing biholomorphism, up to postcomposition by Möbius
transformations, is included as part of the conjecture. The article
[MR1207210] establishes this rigidity portion of the conjecture as well, for
countably connected domains. The main idea of this rigidity proof is visible,
adapted to the setting of circle packings, in Section 4. A sketch of the proof
is given in [MR2884870]*Theorem 2.11.
In the case of uncountably connected domains, the uniqueness part of the Koebe
Conjecture 2.1 is well-known to be false. A counterexample can be obtained by
“placing a nonzero Beltrami differential supported on [a Cantor set of non-
zero area] and solving the Beltrami equation to obtain a quasiconformal map
which is conformal outside the Cantor set,” as noted by [MR1207210]*p. 370.
The existence portion of the Koebe Conjecture 2.1 is still open in this case.
### 2.2. Existence statements for collections of disks with overlaps
Given that there are existence statements to go along with both Theorems 1.1
and 1.2, it is also natural to ask for analogous existence statements to go
along with the main results of this paper, Theorems 1.5 and 1.4. No such
existence results are presently available.
There are many non-trivial conditions on incidence data which are necessary
for the existence of a disk configuration in the Riemann sphere realizing that
data. For instance, it is not hard to show that given $n$ disks $D_{i}$, with
$i\in\mathbb{Z}/n\mathbb{Z}$, so that $D_{i}$ and $D_{j}$ meet if and only if
$i=j\pm 1$, we have that
$\sum_{i\in\mathbb{Z}/n\mathbb{Z}}\measuredangle(D_{i},D_{i+1})<(n-2)\pi$; see Figure 5.
In general, conditions on $(G,\Theta)$ which force extraneous contacts are not
well understood. An example of such a condition is when $G$ contains a closed
$n$-cycle consisting of distinct edges $e_{1},\ldots,e_{n}$ so that
$\sum_{i=1}^{n}\Theta(e_{i})\geq(n-2)\pi$: in this case, by the earlier
discussion, for there to be any hope of a positive answer to the existence
question for the data $(G,\Theta)$, there must be at least one additional
contact among the vertices which are the endpoints of the $e_{i}$.
The general existence question for configurations of disks realizing certain
given incidence data appears not to have been studied much. Presently, the
main obstruction to obtaining theorems in this vein is not in finding proofs,
but in finding the correct statements. For example: as we mentioned, the proof
given in [MR1207210] of existence of circle packings having contact graphs
triangulating a topological open disk relies at crucial points on fixed-point
arguments. Our Main Index Theorem 5.3 would exactly fill the gaps in the
fixed-point portions of what would be the generalizations of those arguments
to our setting, that of thin disk configurations. In general, the methods used
to prove existence of circle packings are quite robust and varied, and at
least some of these methods are likely to generalize nicely to the setting of
collections of disks with overlap, if the correct statements to be proved were
known. For further discussion on the general question of existence of disk
configurations realizing given incidence data $(G,\Theta)$, see [mishchenko-
thesis]*Section 2.9. The special case when $\Theta$ is uniformly bounded above
by $\pi/2$ is much simpler than the general situation, and is discussed
further in Section 2.3 and especially Section 2.4.
### 2.3. Hyperbolic polyhedra
Configurations of disks on $\hat{\mathbb{C}}$ are closely related to
hyperbolic polyhedra. For example, given a collection of disks covering
$\hat{\mathbb{C}}$, we may construct a hyperbolic polyhedron by cutting out
the half-spaces which are bounded at $\partial\mathbb{H}^{3}=\hat{\mathbb{C}}$
by the disks in our collection. This construction can be used to translate
theorems on hyperbolic polyhedra to theorems about circle packings or disk
configurations, and vice versa.
In [MR0273510], Andreev gives a characterization of finite-volume hyperbolic
polyhedra satisfying the condition that every two faces sharing an edge meet
at an interior angle of at most $\pi/2$. In particular, the combinatorics and
interior angles of such a polyhedron completely determine it, up to hyperbolic
isometry of $\mathbb{H}^{3}$. From this one may deduce Theorem 1.1. This was
the approach originally taken by Thurston. For the details of the
construction, see [thurston-gt3m-notes]*Section 13.6.
Rivin has worked extensively on generalizations of Andreev’s characterization
theorems. In particular, he has given a complete characterization of ideal
hyperbolic polyhedra all of whose vertices lie on
$\hat{\mathbb{C}}=\partial\mathbb{H}^{3}$, with no requirements on the
incidence angles of the faces; see [MR1283870]*Theorem 14.1 (rigidity),
[MR1370757]*Theorem 0.1 (existence), and [MR1985831] (generalizations). Our Theorem
1.5 can be obtained as a corollary of his. The full strength of our Theorem
1.5 cannot be obtained from Andreev’s results, because of the bound on the
interior angles $\Theta(e)$ at the edges $\\{e\\}$ of the polyhedra in his
hypotheses. Rivin remarks that in the setting of hyperbolic polyhedra, the
restriction $\Theta\leq\pi/2$ is a very strong one (see the comments in
[MR1370757]*p. 52). However, interestingly, there are few places in the present
article where a corresponding upper bound of $\pi/2$ on the overlap angles of
our disks would simplify the arguments significantly.
No proofs of existence nor of rigidity statements for circle packings having
contact graphs triangulating a topological open disk have been obtained
directly via these or similar theorems on hyperbolic polyhedra. A major
obstruction is that the “polyhedron” constructed via Thurston’s methods from
an infinite circle packing in $\hat{\mathbb{C}}$ typically has infinite
volume.
Rivin’s theorem characterizing ideal hyperbolic polyhedra can be directly
translated into a statement about configurations of disks on
$\hat{\mathbb{C}}=\partial\mathbb{H}^{3}$. One may then hope to generalize
this translated statement to the higher-genus setting. Bobenko and Springborn
have done exactly this in [MR2022715]*Theorem 4, where they prove an existence
and uniqueness statement for disk configurations on positive-genus closed
Riemann surfaces. Their proof uses variational principles.
One may hope that because Rivin’s characterization of ideal hyperbolic
polyhedra includes an existence component, we may obtain an existence
statement about disk configurations to go along with our Rigidity Theorem 1.5
from the direct translation of Rivin’s results. However, the existence portion
of this translation does not take as input the contact graph $G$ of the disk
configuration $\mathcal{C}$ which is eventually realized, rather taking a
certain planar subgraph of $G$. There is no known method for computing the
eventual contact graph $G$ obtained this way from only the allowed input to
the translated theorem, although by the rigidity portion we know that $G$ is
completely determined by said input. This itself may be a difficult problem:
for a heuristic argument explaining why, see [mishchenko-thesis]*Section
2.9.6. This subtle issue also underlies Bobenko and Springborn’s results,
although it is not directly addressed by those authors.
### 2.4. Vertex extremal length and modulus
A discretized version of classical conformal modulus, equivalently extremal
length, of a curve family has been used extensively to prove circle packing
theorems. One of the earliest such applications was by He and Schramm in
[MR1331923]. There, using so-called vertex extremal length, given a graph $G$
which is the 1-skeleton of a triangulation of a topological open disk, they
reprove the existence of a locally finite circle packing realizing $G$ in
exactly one of $\mathbb{C}$ and $\mathbb{H}^{2}$. They also give discrete-
analytic conditions on $G$, for example the recurrence or transience of a
simple random walk on $G$, which determine whether the circle packing
realizing $G$ lives naturally in $\mathbb{C}$ or in $\mathbb{H}^{2}$.
These ideas have been generalized by He to the setting of disk configurations
with overlaps. In [MR1680531] he proves a generalization of Theorem 1.2 using
similar methods for configurations of disks whose overlap angles are bounded
above by $\pi/2$. He also includes an existence statement. In the same paper,
he wrote that he intended to generalize his techniques further to handle the
case of arbitrary overlap angles (see the comments in [MR1680531]*p. 2), but he
never published any work doing so.
### 2.5. Disk configurations in other Riemann surfaces
Given a triangulation $X$ of an open or closed oriented topological surface
$S$ without boundary, it is possible to find a complete constant curvature
Riemannian metric $d$ on $S$, and a circle packing $\mathcal{P}$ in $(S,d)$,
whose contact graph is the 1-skeleton of $X$. To see why, first lift $X$ to a
triangulation of the universal cover of $S$, allowing us to obtain a periodic
circle packing in a simply connected constant curvature surface, one of
$\hat{\mathbb{C}}$, $\mathbb{C}$, or $\mathbb{H}^{2}$. Quotienting by this
periodicity gives our desired circle packing realizing $X$ in some complete
constant curvature Riemann surface $(S,d)$. Note that the metric $d$ and the
packing $\mathcal{P}$ in $(S,d)$ are both essentially uniquely determined by
$X$, by the rigidity of Theorem 1.2. This construction is well-known,
appearing for example in [MR1087197], and later in modified form in
[MR1207210]*Section 8.
There is no obstruction to applying the same argument in the more general
setting of disk configurations with overlaps. For example, applying our
Theorem 1.4, if $\mathcal{C}$ and $\tilde{\mathcal{C}}$ are thin disk
configurations realizing the same incidence data, living in complete constant
curvature Riemann surfaces $R$ and $\tilde{R}$ respectively, then $R$ and
$\tilde{R}$ are conformally isomorphic. Thus it is generally sufficient to
prove an existence or rigidity statement for disk configurations in
$\mathbb{C}$ or $\mathbb{H}^{2}$ to get analogous statements in multiply
connected surfaces.
Some authors have studied related theorems in higher genus surfaces directly.
For example, Thurston himself proved an existence theorem for disk
configurations with overlap angles bounded above by $\pi/2$ on closed finite-
genus surfaces without boundary, in [thurston-gt3m-notes]*Theorem 13.7.1. He
did not give a rigidity statement. As was already mentioned, Bobenko and
Springborn proved an existence and uniqueness theorem for circle patterns on
closed finite-genus surfaces without boundary, in [MR2022715]. Both of these
proofs use variational principles.
### 2.6. Further references
Today many proofs are known of the Circle Packing Theorem and of Theorem 1.1.
For example, some proofs using variational principles appear in [MR1106755],
[MR1189006], and [MR1283870]. Thurston describes how Theorem 1.1 may be obtained
from Mostow–Prasad rigidity in [thurston-gt3m-notes]*Proof of 13.6.2. There is
a short, clean proof of Theorem 1.1 in [MR2884870]*Section 2.4.3, which is
attributed to Oded Schramm. The earliest published version of this same
argument that we are aware of appears in [MR1680531]*Section 2.
Many of these arguments generalize readily to prove our Theorem 1.5. However,
few of them have been adapted to prove Theorem 1.2, and it therefore appears
unlikely that they will work to prove our Theorem 1.4 either.
The fixed-point index techniques we use here have recently been used by
Merenkov to prove rigidity statements for Sierpinski carpets, see
[MR2900233]*Section 12.
Theorem 1.2 in the case where $G$ has uniformly bounded-above vertex degree is
proved in [MR1087197], by a modification of an argument by Rodin and Sullivan
given in [MR906396]*Appendix 1. The proof uses quasi-conformal mapping theory.
It appears unlikely that this method of proof can be generalized to the
unbounded-valence case. However, it would likely prove the bounded-valence
case of our Theorem 1.4 just as well.
A short survey of parts of the area of circle packing which nowadays are
considered classical is given by Sachs in [MR1303402]. It also gives a rough
outline of the history of circle packing through 1994. The first half of
[MR2884870] is an excellent survey by Rohde focused on the contributions of
Oded Schramm. Rohde also gives a long list of successful applications of
circle packing to other areas of math in his Section 2.2. Stephenson’s book
[MR2131318] provides a more detailed, elementary, and mostly self-contained
introduction to the area and could serve as a kind of “first course in circle
packing.” Stephenson’s methods of proof in this book are essentially adapted
from those of He and Schramm in [MR1207210]. Finally, the introduction of the
present author's Ph.D. thesis, [mishchenko-thesis]*Chapter 2, from which this
article is adapted, gives a fairly thorough survey of the area of circle
packing.
## 3\. Fixed-point index preliminaries
A _Jordan curve_ is a homeomorphic image of a topological circle
$\mathbb{S}^{1}$ in the complex plane $\mathbb{C}$. A _Jordan domain_ is a
bounded open set in $\mathbb{C}$ with Jordan curve boundary. We use the term
_closed Jordan domain_ to refer to the closure of a Jordan domain. Suppose
that a Jordan curve appears as the boundary of a set $X\subset\mathbb{C}$
having non-empty interior. Then the _positive orientation_ of $\partial X$
with respect to $X$ is defined as usual. That is, the interior of $X$ stays to
the left as we traverse $\partial X$ in what we call the _positive_ direction.
As a rule, if we write a Jordan curve as $\partial X$, where $X$ is an open or
closed Jordan domain or the complement thereof, then we will take that to mean
that $\partial X$ is oriented positively with respect to $X$, unless otherwise
noted. In particular, if $X$ is an open or closed Jordan domain, then the
positive orientation induced on $\partial X$ is the counterclockwise one as
usual.
We now define the _fixed-point index_, our main technical tool. The rest of
this section will consist of the proofs of several fundamental lemmas on
fixed-point index.
###### Definition 3.1.
Let $\gamma$ and $\tilde{\gamma}$ be oriented Jordan curves. Let
$\phi:\gamma\to\tilde{\gamma}$ be a homeomorphism which is fixed-point-free
and orientation-preserving. We call such a homeomorphism _indexable_. Then
$\\{\phi(z)-z\\}_{z\in\gamma}$ is a closed curve in $\mathbb{C}$ which misses
the origin. It has a natural orientation induced by traversing $\gamma$
according to its orientation. We define the _fixed-point index_ of $\phi$,
denoted $\eta(\phi)$, to be the winding number of
$\\{\phi(z)-z\\}_{z\in\gamma}$ around the origin.
Intuitively, fixed-point index counts the following. Suppose that
$\Phi:K\to\tilde{K}$ is a homeomorphism of closed Jordan domains having only
isolated fixed points, which restricts to an indexable $\phi:\partial
K\to\partial\tilde{K}$. Then $\eta(\phi)$ counts the number of fixed points,
with signed multiplicity, of $\Phi$. For more discussion on the history and
broader relevance of fixed-point index, see [MR1207210]*Section 2. Every
integer, positive or negative, occurs as a fixed-point index.
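For a concrete illustration (this example is added here for orientation and is not drawn from [MR1207210]): take $K$ to be the closed unit disk and $\tilde{K}=\\{|z|\leq 2\\}$, with $\phi(z)=2z$ on $\partial K$. Then $\phi(z)-z=z$ traverses the unit circle once positively, so $\eta(\phi)=1$. If instead $\tilde{K}=\\{|z-4|\leq 1\\}$ and $\phi(z)=z+4$, then $\phi(z)-z\equiv 4$ is a constant curve, so $\eta(\phi)=0$. Both maps are fixed-point-free and orientation-preserving, hence indexable.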
Our first lemma says that the fixed-point index of an indexable homeomorphism
between two (round) circles is always non-negative:
###### Circle Index Lemma 3.2.
Let $K$ and $\tilde{K}$ be closed Jordan domains in $\mathbb{C}$, and let
$\phi:\partial K\to\partial\tilde{K}$ be an indexable homeomorphism. Then the
following hold.
1. (1)
The homeomorphism $\phi^{-1}:\partial\tilde{K}\to\partial K$ is indexable with
$\eta(\phi)=\eta(\phi^{-1})$.
2. (2)
If $K\subseteq\tilde{K}$ or $\tilde{K}\subseteq K$, then $\eta(\phi)=1$.
3. (3)
If $K$ and $\tilde{K}$ have disjoint interiors, then $\eta(\phi)=0$.
4. (4)
If $\partial K$ and $\partial\tilde{K}$ intersect in exactly two points, then
$\eta(\phi)\geq 0$.
As a consequence of the above, if $K$ and $\tilde{K}$ are metric closed disks
in the plane, then $\eta(\phi)\geq 0$.
This lemma can be found in [MR1207210]*Lemma 2.2. There it is indicated that
the same lemma appeared earlier in [MR0051934]. We sketch the proof of Lemma
3.2 given in [MR1207210]*Lemma 2.2:
_“Proof.”_ (1) Write $f$ for $\phi$. By definition $\eta(f^{-1})$ is the winding
number of $\\{f^{-1}(\tilde{z})-\tilde{z}\\}_{\tilde{z}\in\partial\tilde{K}}$ around the
origin, which is equal to the winding number of $\\{z-f(z)\\}_{z\in\partial
K}$ around the origin under the coordinate change $f(z)=\tilde{z}$. But the
winding number around the origin of a closed curve
$\\{\gamma(t)\\}_{t\in\mathbb{S}^{1}}$ which misses $0$ is equal to the
winding number around the origin of $\\{-\gamma(t)\\}_{t\in\mathbb{S}^{1}}$.
Part (2) is believable if we imagine $K$ to be “very small,” and contained in
$\tilde{K}$. Then the endpoint $z$ of the vector $f(z)-z$ does not move very
much as $z$ traverses $\partial K$, while the endpoint $f(z)$ of the same
vector “winds once positively around $K$.” Part (3) is believable for similar
reasons if we imagine $K$ and $\tilde{K}$ to be very far away from each other.
These ideas can be made into proofs via simple homotopy arguments.
Figure 2. Closed Jordan domains whose boundaries cross transversely at
exactly two points, after an isotopy. Whenever $f(z)-z\in\mathbb{R}_{+}$, for
instance as shown, then $f(z)-z$ is locally turning counter-clockwise.
For part (4) we may assume without loss of generality by parts (2) and (3)
that $\partial K$ and $\partial\tilde{K}$ meet transversely at both of their
intersection points, so after applying an isotopy to $\mathbb{C}$ we have that
$K$ and $\tilde{K}$ are the square and circle depicted in Figure 2, cf. Lemma
8.1. We ask ourselves when the vector $f(z)-z$ can possibly point in the
positive real direction, as in Figure 2. If $z\in\partial K$ does not lie in
the interior of $\tilde{K}$, the vector $f(z)-z$ has either an imaginary
component, or a negative real component. Similarly if
$f(z)\in\partial\tilde{K}$ does not lie inside of $K$, then $f(z)-z$ has a
negative real component. We conclude that the only way that $f(z)-z$ can be
real and positive is if $z$ lies along $\partial K$ in the interior of
$\tilde{K}$ and $f(z)$ lies along $\partial\tilde{K}$ inside of $K$. But in
this case because of the orientations on $\partial K$ and $\partial\tilde{K}$,
the vector $f(z)-z$ is locally turning in the positive direction. Thus
whenever the curve $\\{f(z)-z\\}_{z\in\partial K}$ crosses the positive real
axis it is turning in the positive direction, so this curve’s total winding
number around the origin cannot be negative.∎
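As a numerical sanity check of Definition 3.1 and of parts (2)–(4) of the Circle Index Lemma 3.2, the following sketch (added here as an illustration, not part of the original argument; the particular maps tested are convenient indexable homeomorphisms chosen only for the experiment) approximates $\eta(\phi)$ by sampling the curve $\\{\phi(z)-z\\}$ on the unit circle and accumulating its change of argument:

```python
import numpy as np

def fixed_point_index(phi, samples=4096):
    """Approximate the winding number about 0 of {phi(z) - z : |z| = 1}.

    `phi` maps points of the positively oriented unit circle into C and is
    assumed to restrict to an indexable homeomorphism onto the boundary of
    some closed Jordan domain.
    """
    z = np.exp(2j * np.pi * np.linspace(0.0, 1.0, samples, endpoint=False))
    w = phi(z) - z                         # closed curve missing the origin
    steps = np.angle(np.roll(w, -1) / w)   # small argument increments in (-pi, pi]
    return int(round(steps.sum() / (2.0 * np.pi)))

# K is the closed unit disk throughout; K~ varies:
print(fixed_point_index(lambda z: 3 * z))    # K inside K~ = {|z| <= 3}: prints 1
print(fixed_point_index(lambda z: z + 5))    # disjoint interiors, K~ = {|z-5| <= 1}: prints 0
print(fixed_point_index(lambda z: 1 - z))    # boundaries meet twice, K~ = {|z-1| <= 1}: prints 1
```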
Our next lemma says essentially that fixed-point indices “add nicely”:
###### Index Additivity Lemma 3.3.
Suppose that $K$ and $L$ are interiorwise disjoint closed Jordan domains which
meet along a single positive-length Jordan arc $\partial K\cap\partial L$,
similarly for $\tilde{K}$ and $\tilde{L}$. Then $K\cup L$ and
$\tilde{K}\cup\tilde{L}$ are closed Jordan domains.
Let $\phi_{K}:\partial K\to\partial\tilde{K}$ and $\phi_{L}:\partial
L\to\partial\tilde{L}$ be indexable homeomorphisms. Suppose that $\phi_{K}$
and $\phi_{L}$ agree on $\partial K\cap\partial L$. Let $\phi:\partial(K\cup
L)\to\partial(\tilde{K}\cup\tilde{L})$ be induced via restriction to
$\phi_{K}$ or $\phi_{L}$ as necessary. Then $\phi$ is an indexable
homeomorphism and $\eta(\phi)=\eta(\phi_{L})+\eta(\phi_{K})$.
###### Proof.
By the definition of the fixed-point index, we have that $\eta(\phi_{K})$ is
equal to $1/(2\pi)$ times the change in argument of the vector $\phi_{K}(z)-z$
as $z$ traverses $\partial K$ once in the positive direction, and similarly for
$\eta(\phi_{L})$ and $\eta(\phi)$. The orientation induced on $\partial
K\cap\partial L$ by the positive orientation on $\partial K$ is opposite to
the one induced by the positive orientation on $\partial L$, so as $z$ varies
positively in $\partial K$ and in $\partial L$ the contributions to the
sum $\eta(\phi_{K})+\eta(\phi_{L})$ along $\partial K\cap\partial L$ exactly
cancel.
If we consider the alternative interpretation of the fixed-point index of
$\phi$ to be counting the number of fixed points with signed multiplicity of a
homeomorphic extension of $\phi$ to all of $K\cup L$, and similarly for
$\eta(\phi_{K})$ and $\eta(\phi_{L})$, then the lemma is also clear. ∎
Moving on, we make a definition. Let $K$ and $\tilde{K}$ be closed Jordan
domains. We say that $K$ and $\tilde{K}$ are in _transverse position_ if
$\partial K$ and $\partial\tilde{K}$ cross wherever they meet. More precisely,
we say that $K$ and $\tilde{K}$ are in _transverse position_ if for any
$z\in\partial K\cap\partial\tilde{K}$, there is an open neighborhood $U$ of
$z$ and a homeomorphism $\phi:U\to\mathbb{D}$ to the open unit disk sending
$\partial K\cap U$ to $\mathbb{R}\cap\mathbb{D}$ and sending
$\partial\tilde{K}\cap U$ to $i\mathbb{R}\cap\mathbb{D}$.
We now state our next fundamental lemma about fixed-point index. This lemma
says essentially that we may almost always prescribe the images of three
points on $\partial K$ in $\partial\tilde{K}$, and obtain an indexable
homeomorphism $\partial K\to\partial\tilde{K}$ with non-negative fixed-point
index, which respects this prescription.
###### Three Point Prescription Lemma 3.4.
Let $K$ and $\tilde{K}$ be closed Jordan domains in transverse position. Let
$z_{1},z_{2},z_{3}\in\partial K\setminus\partial\tilde{K}$ appear in
positively oriented order around $\partial K$, and similarly
$\tilde{z}_{1},\tilde{z}_{2},\tilde{z}_{3}\in\partial\tilde{K}\setminus\partial
K$. Then there is an indexable homeomorphism $\phi:\partial
K\to\partial\tilde{K}$ sending $z_{i}\mapsto\tilde{z}_{i}$ for $i=1,2,3$, so
that $\eta(\phi)\geq 0$.
A version of this lemma is stated in [MR2131318]*Lemma 8.14, with the
following heuristic argument: by Carathéodory’s theorem (see [MR1511735]),
because $K$ and $\tilde{K}$ are closed Jordan domains, any Riemann mapping
from the interior of $K$ to that of $\tilde{K}$ extends homeomorphically to a
map $\Phi:K\to\tilde{K}$. Furthermore $\Phi$ may be chosen so that
$\Phi:z_{i}\mapsto\tilde{z}_{i}$ for $i=1,2,3$. Fix such a $\Phi$, and let
$\phi:\partial K\to\partial\tilde{K}$ be defined by restriction.
Suppose that $\phi:\partial K\to\partial\tilde{K}$ does not have any fixed
points. It is automatically orientation-preserving because a Riemann mapping
always is. Suppose also that $\partial K$ and $\partial\tilde{K}$ are
piecewise smooth. Then, using the standard complex analysis definition of
winding number, we have that:
$\eta(\phi)=\frac{1}{2\pi i}\oint_{\\{\phi(z)-z\\}_{z\in\partial
K}}\frac{dw}{w}=\frac{1}{2\pi i}\oint_{\partial K}\frac{\Phi^{\prime}(z)-1}{\Phi(z)-z}\,dz$
Then by the standard Argument Principle, the second integral counts the number
of zeros minus the number of poles of $\Phi(z)-z$ in the interior of $K$, but
$\Phi(z)-z$ is holomorphic there, thus has no poles there, so this integral is
non-negative.
Actually $\Phi^{\prime}$ is undefined on $\partial K$, because $\Phi$ is not
holomorphic in a neighborhood of $K$, so the second integral does not quite
make sense. There is another more serious issue, which is that in general
$\phi:\partial K\to\partial\tilde{K}$ may have many fixed points, and it is
not clear how to get rid of them. The argument given in [MR2131318]*Lemma 8.14
does not address these two issues. We give an original elementary inductive
proof of Lemma 3.4, using only plane topology arguments, in [mishchenko-
thesis]*Section 3.5. The proof is not hard, but is too lengthy to include here.
This lemma fails if we try to prescribe the images of four points. For a
counterexample, see [MR1207210]*Figure 2.2.
## 4\. Rigidity proofs in the circle packing case
In this section we prove rigidity theorems for circle packings which are
special cases of our main rigidity results on thin disk configurations. The
arguments here are adapted from those in [MR1207210].
The following well-known and easy-to-check lemma will be implicit in our
discussion below, although we will not refer to it directly:
###### Lemma 4.1.
Let $G=(V,E)$ be a 3-cycle. Let $\mathcal{P}=\\{D_{v}\\}_{v\in V}$ and
$\tilde{\mathcal{P}}=\\{\tilde{D}_{v}\\}_{v\in V}$ be circle packings in
$\hat{\mathbb{C}}$ having contact graph $G$. Then any Möbius transformation
sending the three tangency points of $\mathcal{P}$ to those of
$\tilde{\mathcal{P}}$ in fact identifies $\mathcal{P}$ and
$\tilde{\mathcal{P}}$.
(Let $C^{*}$ and $\tilde{C}^{*}$ be the circles passing through the tangency
points of $\mathcal{P}$ and those of $\tilde{\mathcal{P}}$, respectively. Then
one can show that $C^{*}$ meets every $\partial D_{v}$ orthogonally, similarly
$\tilde{C}^{*}$ and the $\partial\tilde{D}_{v}$, and the lemma follows.)
Before moving on, it will help to have some definitions. First, suppose that
$\mathcal{P}$ is a circle packing locally finite in $\mathbb{G}$, where
$\mathbb{G}$ is equal to one of $\hat{\mathbb{C}},\mathbb{C},\mathbb{H}^{2}$.
Then $\mathcal{P}$ induces an embedding of its contact graph $G$ in
$\mathbb{G}$ by placing every vertex $v$ of $G$ at the metric center of its
associated disk $D_{v}\in\mathcal{P}$, and connecting the centers of
touching disks $D_{u},D_{v}$ with a geodesic arc passing through the point
$D_{u}\cap D_{v}$. We call this the _geodesic embedding of $G$ in $\mathbb{G}$
induced by $\mathcal{P}$_. Next, two circle packings $\mathcal{P}$ and
$\tilde{\mathcal{P}}$ are in _general position_ if the following hold:
* •
Every pair of disks $D\in\mathcal{P}$ and $\tilde{D}\in\tilde{\mathcal{P}}$
is in transverse position as closed Jordan domains. For a reminder of what
this means, see the definition in Section 3.
* •
If $z$ is an intersection point of two distinct disks of $\mathcal{P}$, then
$z$ does not lie on $\partial\tilde{D}$ for any
$\tilde{D}\in\tilde{\mathcal{P}}$. Similarly, if $\tilde{z}$ is an intersection
point of two distinct disks of $\tilde{\mathcal{P}}$, then $\tilde{z}$ does not
lie on $\partial D$ for any $D\in\mathcal{P}$.
We now proceed to the proofs of our rigidity theorems on circle packings.
First:
###### Theorem 4.2.
Suppose that $\mathcal{P}$ and $\tilde{\mathcal{P}}$ are circle packings in
$\hat{\mathbb{C}}$, sharing a contact graph $G$ which is the 1-skeleton of a
triangulation of the 2-sphere $\mathbb{S}^{2}$. Then $\mathcal{P}$ and
$\tilde{\mathcal{P}}$ differ by a Möbius or an anti-Möbius transformation.
###### Proof.
First, recall that if $G$ is the 1-skeleton of a triangulation of
$\mathbb{S}^{2}$, then there are exactly two ways to embed $G$ in
$\mathbb{S}^{2}$, up to orientation-preserving self-homeomorphism of
$\mathbb{S}^{2}$. Therefore we may suppose without loss of generality, by
applying $z\mapsto\bar{z}$ to one of the two packings if necessary, that the
geodesic embeddings of $G$ in $\hat{\mathbb{C}}$ induced by $\mathcal{P}$ and
by $\tilde{\mathcal{P}}$ are images of one another by orientation-preserving
self-homeomorphisms of $\hat{\mathbb{C}}$. In our next three proofs we note
this preliminary procedure simply by saying that we may suppose without loss
of generality that $\mathcal{P}$ and $\tilde{\mathcal{P}}$ have the same
_orientation_.
Then we proceed by contradiction, supposing that there is no Möbius
transformation sending $\mathcal{P}$ to $\tilde{\mathcal{P}}$. The first step
of the proof is to normalize $\mathcal{P}$ and $\tilde{\mathcal{P}}$ in a
convenient way. In particular, we apply Möbius transformations so that the
following holds:
###### Normalization 4.3.
There are disks $D_{a},D_{b},D_{c}\in\mathcal{P}$ and
$\tilde{D}_{a},\tilde{D}_{b},\tilde{D}_{c}\in\tilde{\mathcal{P}}$, where
$a,b,c$ are distinct vertices of the common contact graph $G$ of $\mathcal{P}$
and $\tilde{\mathcal{P}}$, so that the following hold:
* •
One of $D_{v}$ and $\tilde{D}_{v}$ is contained in the interior of the other,
for all $v=a,b,c$.
* •
The point $\infty\in\hat{\mathbb{C}}$ lies in the interior of
$D_{a}\cap\tilde{D}_{a}$.
* •
The packings $\mathcal{P}$ and $\tilde{\mathcal{P}}$ are in general position.
We do not prove here that Normalization 4.3 is possible, because it is a
special case of the stronger normalization we construct in detail in the proof
of our Main Rigidity Theorem 1.5. See Figure 3 for a model drawing of the
present situation.
Figure 3. (a) The packing $\mathcal{P}$, with $\infty$ in the interior of
$D_{a}$; (b) the interaction between the disks $D_{a},D_{b},D_{c}$ and
$\tilde{D}_{a},\tilde{D}_{b},\tilde{D}_{c}$ after our normalization.
An _interstice_ of the packing $\mathcal{P}$ is defined to be a connected
component of $\hat{\mathbb{C}}\setminus\cup_{D\in\mathcal{P}}D$. That is, the
interstices of $\mathcal{P}$ are the curvilinear triangles which make up the
complement of the packing. If we write $F$ to denote the set of faces of the
triangulation of $\mathbb{S}^{2}$ having $G$ as its 1-skeleton, then the
interstices of $\mathcal{P}$ are in natural bijection with the faces $F$. (If
we embed $G$ via the embedding induced by $\mathcal{P}$, then every face of
the resulting triangulation of $\hat{\mathbb{C}}$ contains precisely one
interstice.) We write $T_{f}$ to denote the interstice of $\mathcal{P}$
corresponding to the face $f\in F$. The interstices $\tilde{T}_{f}$ of
$\tilde{\mathcal{P}}$ are defined analogously. Note also that there is a
natural correspondence from the corners of $T_{f}$ to those of
$\tilde{T}_{f}$, for a given $f\in F$. For every $f\in F$, fix an indexable
homeomorphism $\phi_{f}:\partial T_{f}\to\partial\tilde{T}_{f}$ which
identifies corresponding corners with $\eta(\phi_{f})\geq 0$. We may do so by
the Three Point Prescription Lemma 3.4. We remark that it is also important at
this step that $\mathcal{P}$ and $\tilde{\mathcal{P}}$ have the same
orientation, as per the first paragraph of this proof. Then the homeomorphisms
$\phi_{f}$ induce, by restriction, indexable homeomorphisms $\phi_{v}:\partial
D_{v}\to\partial\tilde{D}_{v}$ between the boundary circles of the disks
$D_{v}\in\mathcal{P}$ and $\tilde{D}_{v}\in\tilde{\mathcal{P}}$. Here $v$
ranges over the vertex set $V$ of $G$.
Orient $\partial D_{a}$ and $\partial\tilde{D}_{a}$ positively with respect to
the open disks they bound in $\mathbb{C}$. We remark for clarity that this is
the opposite of the positive orientation on $\partial D_{a}$ and
$\partial\tilde{D}_{a}$ with respect to the interiors of $D_{a}$ and
$\tilde{D}_{a}$, in $\hat{\mathbb{C}}$. Then $\eta(\phi_{v})=1$ for all
$v=a,b,c$, by the Circle
Index Lemma 3.2, because of our Normalization 4.3. On the other hand, by the
Index Additivity Lemma 3.3, we have the following:
$1=\eta(\phi_{a})=\sum_{f\in F}\eta(\phi_{f})+\sum_{v\in
V\setminus\\{a\\}}\eta(\phi_{v})$
Every $\eta(\phi_{f})$ in the first sum is non-negative by construction, and
every $\eta(\phi_{v})$ in the second sum is non-negative by the Circle Index
Lemma 3.2. Also, we have contributions from $\eta(\phi_{b})=1$ and
$\eta(\phi_{c})=1$ to the second sum, so it must be at least 2, giving us the
desired contradiction. ∎
The proofs of our other rigidity and uniformization theorems for circle
packings are adapted from the proof of Theorem 4.2 using similar ideas. We
give these proofs now without further comment:
###### Theorem 4.4.
There cannot be two circle packings $\mathcal{P}$ and $\tilde{\mathcal{P}}$
sharing a contact graph $G$ which is the 1-skeleton of a triangulation of a
topological open disk, so that one of $\mathcal{P}$ and $\tilde{\mathcal{P}}$
is locally finite in $\mathbb{C}$ and the other is locally finite in the open
unit disk $\mathbb{D}$, or equivalently the hyperbolic plane
$\mathbb{H}^{2}\cong\mathbb{D}$.
###### Proof.
We again proceed by contradiction, supposing that $\mathcal{P}$ is locally
finite in $\mathbb{C}$ and $\tilde{\mathcal{P}}$ is locally finite in the open
unit disk $\mathbb{D}$. As before we apply $z\mapsto\bar{z}$ to one of the
packings if necessary to ensure that the geodesic embeddings of $G$ in
$\mathbb{C}$ and in $\mathbb{H}^{2}\cong\mathbb{D}\subset\mathbb{C}$ induced
by $\mathcal{P}$ and $\tilde{\mathcal{P}}$ respectively are identified via
some orientation-preserving homeomorphism $\mathbb{C}\to\mathbb{D}$, ensuring
that $\mathcal{P}$ and $\tilde{\mathcal{P}}$ have the same orientation. This
time we normalize by applying orientation-preserving Euclidean similarities to
$\mathcal{P}$ so that the following holds:
###### Normalization 4.5.
There are disks $D_{a},D_{b}\in\mathcal{P}$ and
$\tilde{D}_{a},\tilde{D}_{b}\in\tilde{\mathcal{P}}$, where $a,b$ are distinct
vertices of the common contact graph $G$ of $\mathcal{P}$ and
$\tilde{\mathcal{P}}$, so that the following hold:
* •
One of $D_{v}$ and $\tilde{D}_{v}$ is contained in the interior of the other,
for all $v=a,b$.
* •
The packings $\mathcal{P}$ and $\tilde{\mathcal{P}}$ are in general position.
(We give a detailed construction of a stronger normalization in the proof of
Theorem 7.3.)
Let $X=(V,E,F)$ be a triangulation of a topological open disk with vertices
$V$, edges $E$, and faces $F$, considered only up to its combinatorics, so
that the 1-skeleton $(V,E)$ of $X$ is $G$. We define the interstices $T_{f}$
and $\tilde{T}_{f}$ as before, and again fix $\phi_{f}:\partial
T_{f}\to\partial\tilde{T}_{f}$ having $\eta(\phi_{f})\geq 0$. For every $v\in
V$ we again write $\phi_{v}:\partial D_{v}\to\partial\tilde{D}_{v}$ for the
indexable homeomorphism induced by restriction to the $\phi_{f}$.
Let $(V_{0},E_{0},F_{0})=X_{0}\subset X$ be a subtriangulation of $X$, so that
$X_{0}$ is a triangulation of a topological closed disk, and so that
$\mathbb{D}\subset\bigcup_{v\in V_{0}}D_{v}\cup\bigcup_{f\in F_{0}}T_{f}$.
Call this total union $K$, and define $\tilde{K}$ analogously as
$\tilde{K}=\bigcup_{v\in V_{0}}\tilde{D}_{v}\cup\bigcup_{f\in
F_{0}}\tilde{T}_{f}$. Let $\phi_{K}:\partial K\to\partial\tilde{K}$ be the
indexable homeomorphism induced by restriction to the $\phi_{v}$. Then
$\eta(\phi_{K})=1$ by the Circle Index Lemma 3.2, because
$\tilde{K}\subset\mathbb{D}\subset K$. On the other hand, by the Index
Additivity Lemma 3.3:
$1=\eta(\phi_{K})=\sum_{f\in F_{0}}\eta(\phi_{f})+\sum_{v\in
V_{0}}\eta(\phi_{v})$
As before, in the first sum every $\eta(\phi_{f})\geq 0$ by construction and
in the second sum every $\eta(\phi_{v})\geq 0$ by the Circle Index Lemma 3.2.
Also the second sum has contributions from $\eta(\phi_{a})=1$ and
$\eta(\phi_{b})=1$, again giving us a contradiction as desired. ∎
###### Theorem 4.6.
Suppose that $\mathcal{P}$ and $\tilde{\mathcal{P}}$ are circle packings
locally finite in the open unit disk $\mathbb{D}$, equivalently the hyperbolic
plane $\mathbb{H}^{2}\cong\mathbb{D}$, so that $\mathcal{P}$ and
$\tilde{\mathcal{P}}$ share a contact graph $G$ which is the 1-skeleton of a
triangulation of a topological open disk. Then $\mathcal{P}$ and
$\tilde{\mathcal{P}}$ differ by a hyperbolic isometry, that is, a Möbius or
anti-Möbius transformation fixing $\mathbb{D}\cong\mathbb{H}^{2}$ set-wise.
###### Proof.
As usual, we may suppose that $\mathcal{P}$ and $\tilde{\mathcal{P}}$ have the
same orientation. Proceeding by contradiction, we then normalize by
orientation-preserving Euclidean similarities so that the following holds:
###### Normalization 4.7.
There are disks $D_{a},D_{b}\in\mathcal{P}$ and
$\tilde{D}_{a},\tilde{D}_{b}\in\tilde{\mathcal{P}}$, where $a,b$ are distinct
vertices of the common contact graph $G$ of $\mathcal{P}$ and
$\tilde{\mathcal{P}}$, so that the following hold:
* •
One of $D_{v}$ and $\tilde{D}_{v}$ is contained in the interior of the other,
for all $v=a,b$.
* •
Letting $D$ and $\tilde{D}$ denote the images of the open unit disk
$\mathbb{D}$ under the normalizations applied to $\mathcal{P}$ and
$\tilde{\mathcal{P}}$ respectively, we have that one of $D$ and $\tilde{D}$ is
contained in the interior of the other.
* •
The packings $\mathcal{P}$ and $\tilde{\mathcal{P}}$ are in general position.
(We give a detailed construction of a stronger normalization in the proof of
Theorem 7.5.)
Let $X=(V,E,F)$ be a triangulation of a topological open disk, considered up
to its combinatorics, having 1-skeleton $G=(V,E)$. We define all of
$T_{f},\tilde{T}_{f},\phi_{f},\phi_{v}$ as before.
Suppose without loss of generality, by interchanging the roles of
$\mathcal{P}$ and $\tilde{\mathcal{P}}$ if necessary, that $\tilde{D}$ is
contained in the interior of $D$. Let $(V_{0},E_{0},F_{0})=X_{0}\subset X$ be
a subtriangulation of $X$, so that $X_{0}$ is a triangulation of a topological
closed disk, and so that $\tilde{D}\subset\bigcup_{v\in
V_{0}}D_{v}\cup\bigcup_{f\in F_{0}}T_{f}$. Call this total union $K$, and
define $\tilde{K}$ analogously as before, again getting
$\tilde{K}\subset\tilde{D}\subset K$. We obtain the desired contradiction as
in the proof of Theorem 4.4. ∎
###### Theorem 4.8.
Suppose that $\mathcal{P}$ and $\tilde{\mathcal{P}}$ are circle packings
locally finite in $\mathbb{C}$, sharing a contact graph $G$ which is the
1-skeleton of a triangulation of a topological open disk. Then $\mathcal{P}$
and $\tilde{\mathcal{P}}$ differ by a Euclidean similarity.
###### Proof.
As usual, we may suppose that $\mathcal{P}$ and $\tilde{\mathcal{P}}$ have the
same orientation. We proceed by contradiction, and begin by normalizing
$\mathcal{P}$ and $\tilde{\mathcal{P}}$ by Möbius transformations so that the
following holds:
###### Normalization 4.9.
There are disks $D_{a},D_{b},D_{c}\in\mathcal{P}$ and
$\tilde{D}_{a},\tilde{D}_{b},\tilde{D}_{c}\in\tilde{\mathcal{P}}$, where
$a,b,c$ are distinct vertices of the common contact graph $G$ of $\mathcal{P}$
and $\tilde{\mathcal{P}}$, so that the following hold:
* •
One of $D_{v}$ and $\tilde{D}_{v}$ is contained in the interior of the other,
for all $v=a,b,c$.
* •
The point $\infty\in\hat{\mathbb{C}}$ lies in the interior of
$D_{a}\cap\tilde{D}_{a}$.
* •
Letting $z_{\infty}$ and $\tilde{z}_{\infty}$ denote the images of $\infty$
under the normalizations applied to $\mathcal{P}$ and $\tilde{\mathcal{P}}$
respectively, we have that $z_{\infty}\neq\tilde{z}_{\infty}$.
* •
The packings $\mathcal{P}$ and $\tilde{\mathcal{P}}$ are in general position.
(We give a detailed construction of a stronger normalization in the proof of
Theorem 7.4.)
We define all of $X=(V,E,F),T_{f},\tilde{T}_{f},\phi_{f},\phi_{v}$ as before.
Let $U$ and $\tilde{U}$ be small disjoint open neighborhoods of $z_{\infty}$
and $\tilde{z}_{\infty}$ respectively, and let
$(V_{0},E_{0},F_{0})=X_{0}\subset X$ be a sub-triangulation of $X$ so that the
following hold:
* •
We have that $X_{0}$ is a triangulation of a topological closed disk.
* •
Setting $L=\bigcup_{v\not\in V_{0}}D_{v}\cup\bigcup_{f\not\in F_{0}}T_{f}$,
and defining $\tilde{L}$ analogously, we have that $L\subset U$ and
$\tilde{L}\subset\tilde{U}$.
Then the $\phi_{v}$ induce, via restriction, an indexable homeomorphism
$\phi_{L}:\partial L\to\partial\tilde{L}$, with $\eta(\phi_{L})=0$ by the
Circle Index Lemma 3.2, because $U\supset L$ and $\tilde{U}\supset\tilde{L}$
are disjoint.
We orient $\partial D_{a}$ and $\partial\tilde{D}_{a}$ positively with respect
to the open disks they bound in $\mathbb{C}$, as in the proof of Theorem 4.2.
Then by the Index Additivity Lemma 3.3, we have:
$1=\eta(\phi_{a})=\sum_{f\in F_{0}}\eta(\phi_{f})+\sum_{v\in
V_{0}}\eta(\phi_{v})+\eta(\phi_{L})$
The first sum is non-negative and the second sum is at least 2 as usual, and
$\eta(\phi_{L})=0$, so we get our desired contradiction. ∎
## 5\. Our main technical result, the Index Theorem
Let $\mathcal{K}=\\{K_{1},\ldots,K_{n}\\}$ and
$\tilde{\mathcal{K}}=\\{\tilde{K}_{1},\ldots,\tilde{K}_{n}\\}$ be collections
of closed Jordan domains. We denote
$\partial\mathcal{K}=\partial\cup_{i=1}^{n}K_{i}$, and similarly
$\partial\tilde{\mathcal{K}}=\partial\cup_{i=1}^{n}\tilde{K}_{i}$. A homeomorphism
$\phi:\partial\mathcal{K}\to\partial\tilde{\mathcal{K}}$ is called _faithful_
if whenever we restrict $\phi$ to $K_{j}\cap\partial\mathcal{K}$ we get a
homeomorphism
$K_{j}\cap\partial\mathcal{K}\to\tilde{K}_{j}\cap\partial\tilde{\mathcal{K}}$.
The particular choice of indices of $K_{i}$ and $\tilde{K}_{i}$ is important
in determining whether a given homeomorphism is faithful, so we consider the
labeling to be part of the information of the collections. Note that in
general $\partial\mathcal{K}$ and $\partial\tilde{\mathcal{K}}$ need not be
homeomorphic, and even if they are homeomorphic there may still be no faithful
homeomorphism between them.
We now give a weak form of our main technical result, to illustrate the manner
in which we generalize the Circle Index Lemma 3.2. It may be helpful to recall
Definition 1.3.
###### Main Index Theorem (weak form).
Let $\mathcal{D}=\\{D_{1},\ldots,D_{n}\\}$ and
$\tilde{\mathcal{D}}=\\{\tilde{D}_{1},\ldots,\tilde{D}_{n}\\}$ be finite thin
disk configurations in the plane $\mathbb{C}$ realizing the same incidence
data $(G,\Theta)$. Let
$\phi:\partial\mathcal{D}\to\partial\tilde{\mathcal{D}}$ be a faithful
indexable homeomorphism. Then $\eta(\phi)\geq 0$.
This follows from the full statement of the Main Index Theorem 5.3. To get rid
of the general position hypothesis from the statement of Theorem 5.3, we need
a lemma, which we do not prove in this article, which says that the fixed-
point index of a homeomorphism is invariant under a small perturbation of its
domain or range; see [mishchenko-thesis]*Lemma 3.3 and [MR2131318]*Lemma 8.11.
Note that our current Definition 3.1 of fixed-point index is not strong enough
to accommodate the theorem statement we just gave. This is because
$\cup_{D\in\mathcal{D}}D$ need not be a closed Jordan domain. In light of the
Index Additivity Lemma 3.3, it is clear how to adapt Definition 3.1 to suit
our needs. In particular:
###### Definition 5.1.
Suppose that $K$ is a union of finitely many closed disks in $\mathbb{C}$,
some of which may intersect. Suppose also that $\partial K$ is oriented
positively with respect to $K$, meaning as usual that the interior of $K$
stays to the left as we traverse $\partial K$ in what we call the positive
direction. Then $\partial K$ decomposes, possibly in more than one way, as a
union of finitely many oriented Jordan curves $\gamma_{1},\ldots,\gamma_{n}$,
any two of which meet in only finitely many points. Some of the $\gamma_{i}$
will be oriented positively with respect to the Jordan domains they
bound in $\mathbb{C}$, some negatively. Suppose $\tilde{K}$ is another finite
union of closed disks, with $\partial\tilde{K}$ similarly decomposing as
$\tilde{\gamma}_{1},\ldots,\tilde{\gamma}_{n}$, and that $\phi:\partial
K\to\partial\tilde{K}$ is a fixed-point-free orientation-preserving
homeomorphism which extends to a homeomorphism $K\to\tilde{K}$, and which
identifies $\gamma_{i}$ with $\tilde{\gamma}_{i}$ for $1\leq i\leq n$. Then we
define
$\eta(\phi)=\sum_{i=1}^{n}\eta(\gamma_{i}\mathop{\to}\limits^{\phi}\tilde{\gamma}_{i})$.
Here we write $\gamma_{i}\mathop{\to}\limits^{\phi}\tilde{\gamma}_{i}$ to
denote the restriction of $\phi$ to $\gamma_{i}\to\tilde{\gamma}_{i}$. We will
continue to use this notational convention in the future. We remark that in
the definition, the decomposition of $\partial K$ into
$\gamma_{1},\ldots,\gamma_{n}$ may not be unique, similarly for
$\partial\tilde{K}$. We leave it as an exercise for the reader to verify that
the same value for the fixed-point index is obtained regardless of which
decomposition is chosen. We remark also that the natural generalization of our
Index Additivity Lemma 3.3 continues to hold, and leave this as an exercise as
well. Definition 5.1 will be general enough to completely accommodate the
statement of our Main Index Theorem 5.3.
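As a simple added illustration of Definition 5.1: if $K$ is a closed ring of disks, so that its complement in $\mathbb{C}$ has a bounded component, then $\partial K$ decomposes into an outer Jordan curve $\gamma_{1}$, traversed counterclockwise, and an inner Jordan curve $\gamma_{2}$, traversed clockwise, that is, negatively with respect to the Jordan domain it bounds. For $\tilde{K}$ and $\phi$ as in the definition, $\eta(\phi)=\eta(\gamma_{1}\mathop{\to}\limits^{\phi}\tilde{\gamma}_{1})+\eta(\gamma_{2}\mathop{\to}\limits^{\phi}\tilde{\gamma}_{2})$.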
To give the full statement of our Main Index Theorem 5.3, we need one more
definition:
###### Definition 5.2.
Let $\mathcal{D}=\\{D_{1},\ldots,D_{n}\\}$ and
$\tilde{\mathcal{D}}=\\{\tilde{D}_{1},\ldots,\tilde{D}_{n}\\}$ be finite
collections of closed disks in the complex plane $\mathbb{C}$, sharing a
contact graph $G$, with $D_{i}\in\mathcal{D}$ corresponding to
$\tilde{D}_{i}\in\tilde{\mathcal{D}}$ for all $1\leq i\leq n$. A subset
$I\subset\\{1,\ldots,n\\}$ is called _subsumptive_ if
* •
either $D_{i}\subset\tilde{D}_{i}$ for every $i\in I$, or
$\tilde{D}_{i}\subset D_{i}$ for every $i\in I$, and
* •
the set $\cup_{i\in I}D_{i}$ is connected, equivalently the set $\cup_{i\in
I}\tilde{D}_{i}$ is connected.
Let $I$ be a subsumptive subset of $\\{1,\ldots,n\\}$. Then $I$ is called
_isolated_ if there is no $i\in I$ and $j\in\\{1,\ldots,n\\}\setminus I$ so
that one of $D_{i}\cap D_{j}$ and $\tilde{D}_{i}\cap\tilde{D}_{j}$ contains
the other. The collections $\\{D_{i}\\}_{i\in I}$ and
$\\{\tilde{D}_{i}\\}_{i\in I}$ together are called a _pair of subsumptive
subconfigurations_ of $\mathcal{D}$ and $\tilde{\mathcal{D}}$. The pair is
called _isolated_ if $I$ is isolated.
The main technical result of this article is the following theorem:
###### Main Index Theorem 5.3.
Let $\mathcal{D}=\\{D_{1},\ldots,D_{n}\\}$ and
$\tilde{\mathcal{D}}=\\{\tilde{D}_{1},\ldots,\tilde{D}_{n}\\}$ be finite thin
disk configurations in the complex plane $\mathbb{C}$, in general position,
realizing the same incidence data $(G,\Theta)$, with $D_{i}\in\mathcal{D}$
corresponding to $\tilde{D}_{i}\in\tilde{\mathcal{D}}$ for all $1\leq i\leq
n$. Let $\phi:\partial\mathcal{D}\to\partial\tilde{\mathcal{D}}$ be a faithful
indexable homeomorphism. Then $\eta(\phi)$ is at least the number of maximal
isolated subsumptive subsets of $\\{1,\ldots,n\\}$. In particular
$\eta(\phi)\geq 0$.
For an example, look ahead to Figure 8. There we know that
$\eta(\phi)\geq 1$ for $\phi$ satisfying the hypotheses of Theorem 5.3. We
discuss possible generalizations of our Main Index Theorem 5.3 at the end of
Section 12.
We will now prove Theorem 5.3, modulo four propositions. We give the complete
statements of these propositions in the running text of the proof, and number
them according to where they appear, together with their proofs, later in this
article.
###### Proof of Theorem 5.3.
We first need to make some preliminary definitions and observations. We say
that two closed disks _overlap_ if their interiors meet. Suppose that
$D_{i}\neq D_{j}$ overlap. Then the _eye_ between them is defined to be
$E_{ij}=E_{ji}=D_{i}\cap D_{j}$. When we quantify over the eyes $E_{ij}$ of
$\mathcal{D}$, we keep in mind that $E_{ij}=E_{ji}$ and treat this as a single
case. The _eyes_ of $\tilde{\mathcal{D}}$ are defined analogously. A
homeomorphism $\epsilon_{ij}:\partial E_{ij}\to\partial\tilde{E}_{ij}$ is
called _faithful_ if it restricts to homeomorphisms $D_{i}\cap\partial
E_{ij}\to\tilde{D}_{i}\cap\partial\tilde{E}_{ij}$ and $D_{j}\cap\partial
E_{ij}\to\tilde{D}_{j}\cap\partial\tilde{E}_{ij}$.
We first note that for every eye $E_{ij}$ there exists a faithful indexable
homeomorphism $\epsilon_{ij}:\partial E_{ij}\to\partial\tilde{E}_{ij}$. The
only way that there could fail to exist any faithful fixed-point-free
homeomorphisms $\partial E_{ij}\to\partial\tilde{E}_{ij}$ is if a pair of
corresponding points in $\partial D_{i}\cap\partial D_{j}$ and
$\partial\tilde{D}_{i}\cap\partial\tilde{D}_{j}$ coincide, which cannot happen
by the general position hypothesis on $\mathcal{D}$ and $\tilde{\mathcal{D}}$.
Furthermore, however they are chosen, the homeomorphisms $\epsilon_{ij}$ agree
with one another because their domains are disjoint, and every $\epsilon_{ij}$
agrees with $\phi$ on $\partial E_{ij}\cap\partial\mathcal{D}$ because of the
faithfulness conditions on $\phi$ and on the $\epsilon_{ij}$.
For every $E_{ij}$ pick a faithful indexable $\epsilon_{ij}$. For
$i\in\\{1,\ldots,n\\}$ let $\delta_{i}:\partial D_{i}\to\partial\tilde{D}_{i}$
be the function induced by restricting to $\phi$ or to the $\epsilon_{ij}$, as
necessary. It is routine to check that $\delta_{i}$ defined this way is an
indexable homeomorphism. The following observation serves as a good intuition
builder, and will be appealed to later in our proof:
###### Observation 5.4.
$\displaystyle\eta(\phi)=\sum_{i=1}^{n}\eta(\delta_{i})-\sum_{E_{ij}}\eta(\epsilon_{ij})$
The second sum is taken over all eyes $E_{ij}$ of $\mathcal{D}$. This
observation follows from the Index Additivity Lemma 3.3: notice that
$\eta(\epsilon_{ij})$ is exactly double-counted in the sum
$\eta(\delta_{i})+\eta(\delta_{j})$. We remark now that as we hope to prove in
particular that $\eta(\phi)\geq 0$, one of our main strategies will be to try
to get $\epsilon_{ij}$ so that $\eta(\epsilon_{ij})=0$. Recall that we always
have $\eta(\delta_{i})\geq 0$ by the Circle Index Lemma 3.2.
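To see Observation 5.4 concretely in the simplest case (this verification is added here for illustration; the maps $\alpha_{1},\alpha_{2}$ are introduced only for it), suppose $n=2$ and that $D_{1}$ and $D_{2}$ overlap in the single eye $E_{12}$. Write $A_{i}$ for the closure of $D_{i}\setminus D_{3-i}$, and let $\alpha_{i}:\partial A_{i}\to\partial\tilde{A}_{i}$ be induced by restriction to $\phi$ and $\epsilon_{12}$. Applying the Index Additivity Lemma 3.3 to $D_{i}=A_{i}\cup E_{12}$ gives $\eta(\delta_{i})=\eta(\alpha_{i})+\eta(\epsilon_{12})$ for $i=1,2$, and applying it to $D_{1}\cup D_{2}=D_{1}\cup A_{2}$ gives $\eta(\phi)=\eta(\delta_{1})+\eta(\alpha_{2})$. Combining these, $\eta(\phi)=\eta(\delta_{1})+\eta(\delta_{2})-\eta(\epsilon_{12})$, which is Observation 5.4 for $n=2$.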
If $I\subset\\{1,\ldots,n\\}$ then let $\mathcal{D}_{I}=\\{D_{i}:i\in I\\}$,
similarly $\tilde{\mathcal{D}}_{I}$. We denote by
$\phi_{I}:\partial\mathcal{D}_{I}\to\partial\tilde{\mathcal{D}}_{I}$ the
function obtained by restriction to $\phi$ or to the $\epsilon_{ij}$, as
necessary. Then $\phi_{I}$ is a faithful indexable homeomorphism. We make
another observation.
###### Observation 5.5.
Let $I,J\subset\\{1,\ldots,n\\}$ be disjoint and non-empty, satisfying
$I\sqcup J=\\{1,\ldots,n\\}$. Then by the Index Additivity Lemma 3.3 we get
$\eta(\phi)=\eta(\phi_{I})+\eta(\phi_{J})-\sum\eta(\epsilon_{ij})$
where the sum is taken over all $E_{ij}$ so that $i\in I,j\in J$.
We now proceed to the main portion of our proof. The proof is by induction on
$n$ the number of disks in each of our configurations $\mathcal{D}$ and
$\tilde{\mathcal{D}}$. The base case $n=1$ follows from the Circle Index Lemma
3.2, so we suppose from now on that $n\geq 2$. We begin with a simplifying
observation that gives us access to our main propositions:
###### Observation 5.6.
Suppose that $D_{j}\setminus\cup_{i\neq j}D_{i}=:d_{j}$ and
$\tilde{D}_{j}\setminus\cup_{i\neq j}\tilde{D}_{i}=:\tilde{d}_{j}$ are
disjoint for some $j$. Then we are done by induction.
To see why, observe the following. First, if neither of $D_{j}$ and
$\tilde{D}_{j}$ contains the other, then $j$ does not belong to any
subsumptive subset of $\\{1,\ldots,n\\}$, so letting
$I=\\{1,\ldots,n\\}\setminus\\{j\\}$, we observe that the lower bound we wish
to prove on $\eta(\phi)$ is the same as the lower bound we get on
$\eta(\phi_{I})$ by our induction hypothesis. Then by the Index Additivity
Lemma 3.3, we get $\eta(\phi)=\eta(\phi_{I})+\eta(\partial
d_{j}\mathop{\to}\limits^{\phi,\epsilon_{ij}}\partial\tilde{d}_{j})=\eta(\phi_{I})$.
Here $\partial
d_{j}\mathop{\to}\limits^{\phi,\epsilon_{ij}}\partial\tilde{d}_{j}$ denotes
the indexable homeomorphism induced by restriction to $\phi$ and to the
$\epsilon_{ij}$, as necessary. The fixed-point index of this homeomorphism is
$0$ because $d_{j}$ and $\tilde{d}_{j}$ are disjoint.
On the other hand, suppose that one of $D_{j}$ and $\tilde{D}_{j}$ contains
the other. We will be done by the same argument if we can show that the number
of maximal isolated subsumptive subsets of $\\{1,\ldots,n\\}$ is the same as
the number of maximal isolated subsumptive subsets of
$\\{1,\ldots,n\\}\setminus\\{j\\}$. Suppose without loss of generality that
$\tilde{D}_{j}\subset D_{j}$. Because $d_{j}$ and $\tilde{d}_{j}$ are
disjoint, it follows that there must be an $i\neq j$ so that
$\tilde{D}_{j}\subset D_{i}$. It is also not hard to see that if $E_{jk}$ and
$\tilde{E}_{jk}$ are eyes one of which contains the other, then we must have
$k=i$ and $\tilde{E}_{ij}\subset E_{ij}$. Let $J$ be the maximal subsumptive
subset of $\\{1,\ldots,n\\}$ containing $j$. If $\tilde{D}_{i}\not\subset
D_{i}$, then $i\not\in J$, but $\tilde{E}_{ij}\subset E_{ij}$, so $J$ is not
isolated. In fact $J=\\{j\\}$, so we are done by induction. To see why note
that if $k\in J$ is different from $j$ then $\tilde{D}_{k}\subset D_{k}$, so
$\tilde{E}_{jk}\subset E_{jk}$, so $k=i$ by the earlier discussion,
contradicting $\tilde{D}_{i}\not\subset D_{i}$. So, finally, suppose that
$\tilde{D}_{i}\subset D_{i}$, so $i\in J$. If $J$ fails to be isolated, then
it does so at a $k\in J$ different from $j$, and $J\setminus\\{j\\}$ is a
maximal subsumptive subset of $\\{1,\ldots,n\\}\setminus\\{j\\}$, both by the
preceding argument. Thus $J\setminus\\{j\\}$ is isolated in
$\\{1,\ldots,n\\}\setminus\\{j\\}$ if and only if $J$ is isolated in
$\\{1,\ldots,n\\}$. An element of $\\{1,\ldots,n\\}$ may belong to at most one
maximal subsumptive subset of $\\{1,\ldots,n\\}$, so once again we are done by
induction. This completes the proof of Observation 5.6.
Via Observation 5.6, we therefore assume without loss of generality, for the
remainder of the proof, the weaker statement that $D_{j}\setminus D_{i}$ and
$\tilde{D}_{j}\setminus\tilde{D}_{i}$ meet for all $i\neq j$.
The following proposition will be key in our induction step:
###### Proposition 11.1.
Let $\\{A,B\\}$ and $\\{\tilde{A},\tilde{B}\\}$ be pairs of overlapping closed
disks in the complex plane $\mathbb{C}$, in general position. Suppose that
neither of $E=A\cap B$ and $\tilde{E}=\tilde{A}\cap\tilde{B}$ contains the
other. Suppose further that $A\setminus B$ and $\tilde{A}\setminus\tilde{B}$
meet, and that $B\setminus A$ and $\tilde{B}\setminus\tilde{A}$ meet. Then
there is a faithful indexable homeomorphism $\epsilon:\partial
E\to\partial\tilde{E}$ satisfying $\eta(\epsilon)=0$.
For the remainder of the proof, suppose that for every eye $E_{ij}$ of
$\mathcal{D}$, we have chosen our faithful indexable $\epsilon_{ij}$ so that
$\eta(\epsilon_{ij})=0$ whenever neither of $E_{ij}$ and $\tilde{E}_{ij}$
contains the other, and necessarily so that $\eta(\epsilon_{ij})=1$ otherwise.
Then for example if for no $i,j$ is it the case that one of $E_{ij}$ and
$\tilde{E}_{ij}$ contains the other, then we are done by Observation 5.4.
Alternatively, if there exist disjoint non-empty $I,J\subset\\{1,\ldots,n\\}$
so that $I\sqcup J=\\{1,\ldots,n\\}$, and so that for every $i\in I,j\in J$ we
have that neither of $E_{ij}$ and $\tilde{E}_{ij}$ contains the other, then we
are done by induction and Observation 5.5.
Our next key proposition is the following:
###### Proposition 6.6.
Let $\mathcal{D}=\\{D_{1},\ldots,D_{n}\\}$ and
$\tilde{\mathcal{D}}=\\{\tilde{D}_{1},\ldots,\tilde{D}_{n}\\}$ be as in the
statement of Theorem 5.3. Let $I$ be a maximal nonempty subsumptive subset of
$\\{1,\ldots,n\\}$. Then there is at most one pair $i\in I$,
$j\in\\{1,\ldots,n\\}\setminus I$ so that one of $E_{ij}=D_{i}\cap D_{j}$ and
$\tilde{E}_{ij}=\tilde{D}_{i}\cap\tilde{D}_{j}$ contains the other.
This proposition says that maximal subsumptive configurations are always at
least “almost” isolated, and, together with Proposition 6.5, will allow us to
excise maximal subsumptive configurations from $\mathcal{D}$ and
$\tilde{\mathcal{D}}$ in the style of Observation 5.5, to complete our proof
by induction. We explain how in more detail shortly.
There is a potential problem: we would like to say that if
$I\subset\\{1,\ldots,n\\}$ is subsumptive, implying that one of $\cup_{i\in
I}D_{i}$ and $\cup_{i\in I}\tilde{D}_{i}$ contains the other, then
$\eta(\phi_{I})=1$. However, _a priori_ , this may fail, for example see
Figure 4. Our next proposition addresses this issue:
###### Proposition 6.5.
Let $n\geq 3$ be an integer. Let $\\{D_{i}:i\in\mathbb{Z}/n\mathbb{Z}\\}$ and
$\\{\tilde{D}_{i}:i\in\mathbb{Z}/n\mathbb{Z}\\}$ be thin collections of closed
disks in the plane $\mathbb{C}$, in general position, so that the following
conditions hold.
* •
We have that $\tilde{D}_{i}$ is contained in the interior of $D_{i}$ for all
$i$.
* •
The disk $D_{i}$ overlaps with $D_{i\pm 1}$, and the disk $\tilde{D}_{i}$
overlaps with $\tilde{D}_{i\pm 1}$, for all $i$.
* •
If $D_{i}$ and $D_{j}$ meet, then $i=j\pm 1$.
Then
$\sum_{i\in\mathbb{Z}/n\mathbb{Z}}\measuredangle(\tilde{D}_{i},\tilde{D}_{i+1})<\sum_{i\in\mathbb{Z}/n\mathbb{Z}}\measuredangle(D_{i},D_{i+1})$.
In particular, for some $i$ we must have
$\measuredangle(D_{i},D_{i+1})\neq\measuredangle(\tilde{D}_{i},\tilde{D}_{i+1})$.
$D_{i}$$\tilde{D}_{i}$
Figure 4. Two closed chains of disks with $\tilde{D}_{i}\subsetneq D_{i}$ for
all $i$. The $D_{i}$ are drawn solid and the $\tilde{D}_{i}$ dashed.
Proposition 6.5 implies that
$\measuredangle(\tilde{D}_{i},\tilde{D}_{i+1})\neq\measuredangle(D_{i},D_{i+1})$
for some $i$.
Thus suppose that $I$ is a maximal nonempty subsumptive subset of
$\\{1,\ldots,n\\}$. Then by Proposition 6.5 and the Circle Index Lemma 3.2, we
have that $\cup_{i\in I}D_{i}$ and $\cup_{i\in I}\tilde{D}_{i}$ are closed
Jordan domains, so $\eta(\phi_{I})=1$. Let $J=\\{1,\ldots,n\\}\setminus I$. If
$I$ is isolated in $\\{1,\ldots,n\\}$, then $\eta(\epsilon_{ij})=0$ for every
$i\in I$ and $j\in J$. Also $J$ has one fewer maximal isolated subsumptive
subset than does $\\{1,\ldots,n\\}$. Thus we are done by induction and
Observation 5.5. On the other hand, suppose that $I$ is not isolated. Then it
is not hard to see that $J$ has as many maximal isolated subsumptive subsets
as does $\\{1,\ldots,n\\}$. Also, by Proposition 6.6, there is exactly one eye
$E_{ij}$ with $i\in I$ and $j\in J$ so that $\eta(\epsilon_{ij})=1$, and for
all the others we have $\eta(\epsilon_{ij})=0$. Again, we are done by
induction and Observation 5.5.
We now state our final key proposition in the proof of Theorem 5.3:
###### Proposition 11.5.
Let $\mathcal{D}=\\{D_{1},\ldots,D_{n}\\}$ and
$\tilde{\mathcal{D}}=\\{\tilde{D}_{1},\ldots,\tilde{D}_{n}\\}$ be as in the
statement of Theorem 5.3, and so that for all $i,j$ the sets $D_{i}\setminus
D_{j}$ and $\tilde{D}_{i}\setminus\tilde{D}_{j}$ meet. Suppose that there is
no $i$ so that one of $D_{i}$ and $\tilde{D}_{i}$ contains the other. Suppose
that for every pair of disjoint non-empty subsets $I,J\subset\\{1,\ldots,n\\}$
so that $I\sqcup J=\\{1,\ldots,n\\}$, there exists an eye $E_{ij}$, with $i\in
I$ and $j\in J$, so that one of $E_{ij}$ and $\tilde{E}_{ij}$ contains the
other. Then for every $i$ we have that any faithful indexable homeomorphism
$\delta_{i}:\partial D_{i}\to\partial\tilde{D}_{i}$ satisfies
$\eta(\delta_{i})\geq 1$. Furthermore there is a $k$ so that $D_{i}$ and
$D_{k}$ overlap for all $i$, and so that one of $E_{ij}$ and $\tilde{E}_{ij}$
contains the other if and only if either $i=k$ or $j=k$.
Unless one of our earlier propositions has already finished off the proof of
Theorem 5.3 by induction, the hypotheses of Proposition 11.5 hold, and we are
done by Observation 5.4. ∎
We need to establish Propositions 6.5, 6.6, 11.1, and 11.5. We establish
Propositions 6.5 and 6.6 next, in Section 6. Their proofs are quick and
elementary, and some ingredients of their proofs are used in the proofs of our
main rigidity theorems. We then prove our main rigidity theorems in Section 7,
using Theorem 5.3. The proofs of Propositions 11.1 and 11.5 take up most of
the rest of the article.
## 6\. Subsumptive collections of disks
In this section we prove some lemmas, and Propositions 6.5 and 6.6, having to
do with subsumptive configurations of disks.
First, we establish some geometric facts, starting with the following
important observation, which is illustrated in Figure 5:
Figure 5. A complementary component of the union of four disks as in
Observation 6.1. The sum of the angles inside of the dashed honest
quadrilateral is exactly $2\pi$. This sum is greater than the sum of the
external intersection angles of the disks.
###### Observation 6.1.
Suppose that $D_{1},D_{2},D_{3},D_{4}$ are metric closed disks in
$\mathbb{C}$, so that there is a bounded connected component $U$ of
$\mathbb{C}\setminus\cup_{i=1}^{4}D_{i}$ which is a curvilinear quadrilateral,
whose boundary $\partial U$ decomposes as the union of four circular arcs, one
taken from each of $\partial D_{1},\partial D_{2},\partial D_{3},\partial
D_{4}$. Suppose that as we traverse $\partial U$ positively, we arrive at
$\partial D_{1},\allowbreak\partial D_{2},\allowbreak\partial
D_{3},\allowbreak\partial D_{4}$ in that order. Then
$\sum_{i=1}^{4}\measuredangle(D_{i},D_{i+1})<2\pi$, where we consider
$D_{5}=D_{1}$.
$m$$m$$\infty$$D$$\tilde{D}$$d_{-1}$$d_{+1}$$\theta_{-1}$$\theta_{+1}$$\tilde{\theta}_{-1}$$\tilde{\theta}_{+1}$$z$$\pi-\theta_{-1}$$\pi-\theta_{+1}$$\tilde{\theta}_{-1}$$\tilde{\theta}_{+1}$
Figure 6. A Möbius transformation chosen to prove Lemma 6.2.
We use Observation 6.1 to prove the following key lemma, illustrated in Figure
6:
###### Lemma 6.2.
Let $d_{-1},d_{+1},D,\tilde{D}$ be closed disks in $\mathbb{C}$, so that
$\tilde{D}$ is contained in the interior of $D$, so that both of $D$ and
$\tilde{D}$ meet both of $d_{-1}$ and $d_{+1}$, and so that $d_{-1}\cap
d_{+1}\cap D$ is empty. Suppose that neither of $d_{-1}$ and $d_{+1}$ is
contained in $D$. We denote $\theta_{-1}=\measuredangle(D,d_{-1})$ and
$\tilde{\theta}_{-1}=\measuredangle(\tilde{D},d_{-1})$, defining $\theta_{+1}$
and $\tilde{\theta}_{+1}$ analogously. Then
$\tilde{\theta}_{-1}+\tilde{\theta}_{+1}<\theta_{-1}+\theta_{+1}$.
###### Proof.
Suppose first that $d_{-1}$ and $d_{+1}$ are disjoint, as in Figure 6. Let $z$
be a point in the interior of $D\setminus(\tilde{D}\cup d_{-1}\cup d_{+1})$,
and let $m$ be a Möbius transformation sending $z$ to $\infty$. Then $m$
inverts the disk $D$ but none of the disks $\tilde{D},d_{-1},d_{+1}$. Because
$m$ preserves angles we get
$(\pi-\theta_{-1})+(\pi-\theta_{+1})+\tilde{\theta}_{-1}+\tilde{\theta}_{+1}<2\pi$
by Observation 6.1, and the desired inequality follows. The case where
$d_{-1}$ and $d_{+1}$ meet outside of $D$ is proved identically. ∎
The following follows as a corollary of Lemma 6.2, by applying a suitable
Möbius transformation:
###### Lemma 6.3.
Let $\\{A,B\\}$ and $\\{\tilde{A},\tilde{B}\\}$ be pairs of overlapping closed
disks in the plane $\mathbb{C}$, in general position, so that
$\measuredangle(A,B)=\measuredangle(\tilde{A},\tilde{B})$. Suppose that
$\tilde{A}$ is contained in the interior of $A$ and that $\tilde{B}$ is
contained in the interior of $B$. Suppose also that neither $\tilde{A}\subset
B$ nor $\tilde{B}\subset A$. Then
$2\measuredangle(A,B)=2\measuredangle(\tilde{A},\tilde{B})<\measuredangle(\tilde{A},B)+\measuredangle(A,\tilde{B})$.
In particular, one may apply a Möbius transformation sending a point in the
interior of $B\setminus(A\cup\tilde{A}\cup\tilde{B})$ to $\infty$.
We proceed to our final preliminary geometric lemma, illustrated in Figure 7:
$A$$B$$C$$z$$C^{\prime}$$A$$B$$C$$\infty$
Figure 7. A Möbius transformation chosen to prove Lemma 6.4.
###### Lemma 6.4.
Let $A$, $B$, $C$ be closed disks, none of which is contained in any other.
Suppose that $A$ and $C$ overlap, with $A\cap C\subset B$. Then $A$ and $B$
overlap, with $\measuredangle(A,C)<\measuredangle(A,B)$.
###### Proof.
Let $z\in\partial A\setminus B$. Because of the hypothesis that $A\cap
C\subset B$, we have that $z\not\in C$. Apply a Möbius transformation sending
$z\mapsto\infty$ so that $A$ becomes the left half-plane. Because $z\not\in
B,C$ we have that $B$ and $C$ remain closed disks after this transformation.
Let $C^{\prime}$ be the closed disk so that
$\measuredangle(A,C^{\prime})=\measuredangle(A,B)$, and so that $C$ and
$C^{\prime}$ have the same Euclidean radius and the same vertical Euclidean
coordinate. Then $C^{\prime}\subset A$. Also, notice that $C$ is obtained from
$C^{\prime}$ by a translation to the right or to the left. In fact it must be
a translation to the right, because the points $\partial B\cap\partial C$ must
lie in the complement of $A$, which is the right half-plane. But we see that
$\measuredangle(A,C^{\prime})$ is monotone decreasing as $C^{\prime}$ slides
to the right. ∎
We now proceed to the proofs of Propositions 6.5 and 6.6. We restate them here
for the convenience of the reader. Our first proposition was illustrated in
Figure 4:
###### Proposition 6.5.
Let $n\geq 3$ be an integer. Let $\\{D_{i}:i\in\mathbb{Z}/n\mathbb{Z}\\}$ and
$\\{\tilde{D}_{i}:i\in\mathbb{Z}/n\mathbb{Z}\\}$ be thin collections of closed
disks in the plane $\mathbb{C}$, in general position, so that the following
conditions hold.
* •
We have that $\tilde{D}_{i}$ is contained in the interior of $D_{i}$ for all
$i$.
* •
The disk $D_{i}$ overlaps with $D_{i\pm 1}$, and the disk $\tilde{D}_{i}$
overlaps with $\tilde{D}_{i\pm 1}$, for all $i$.
* •
If $D_{i}$ and $D_{j}$ meet, then $i=j\pm 1$.
Then
$\sum_{i\in\mathbb{Z}/n\mathbb{Z}}\measuredangle(\tilde{D}_{i},\tilde{D}_{i+1})<\sum_{i\in\mathbb{Z}/n\mathbb{Z}}\measuredangle(D_{i},D_{i+1})$.
In particular, for some $i$ we must have
$\measuredangle(D_{i},D_{i+1})\neq\measuredangle(\tilde{D}_{i},\tilde{D}_{i+1})$.
###### Proof.
Note first that for $\measuredangle(D_{i},D_{i+1})$ to be well-defined, we
still need to show that neither $D_{i}\subset D_{i+1}$ nor $D_{i+1}\subset
D_{i}$. The same is true for $\measuredangle(\tilde{D}_{i},\tilde{D}_{i+1})$.
Suppose for contradiction that $D_{i}\subset D_{i+1}$. Then $D_{i-1}\cap
D_{i}\subset D_{i+1}$, contradicting our hypotheses. By symmetry we get that
$D_{i+1}\not\subset D_{i}$. The proof that
$\measuredangle(\tilde{D}_{i},\tilde{D}_{i+1})$ is well-defined is identical.
To finish off the proof, we apply Lemma 6.2 twice. In both cases we will let
$D=D_{i}$ and $\tilde{D}=\tilde{D}_{i}$. First let $d_{-1}=D_{i-1}$ and
$d_{+1}=D_{i+1}$. This gives:
(1)
$\measuredangle(D_{i-1},\tilde{D}_{i})+\measuredangle(D_{i+1},\tilde{D}_{i})<\measuredangle(D_{i-1},D_{i})+\measuredangle(D_{i+1},D_{i})$
Next let $d_{-1}=\tilde{D}_{i-1}$ and $d_{+1}=\tilde{D}_{i+1}$. This gives:
(2)
$\measuredangle(\tilde{D}_{i-1},\tilde{D}_{i})+\measuredangle(\tilde{D}_{i+1},\tilde{D}_{i})<\measuredangle(D_{i},\tilde{D}_{i-1})+\measuredangle(D_{i},\tilde{D}_{i+1})$
If we let $i$ range over $\mathbb{Z}/n\mathbb{Z}$, the sum of the terms on the
left side of equation 1 is equal to the sum of the terms on the right side of
equation 2. The desired inequality follows. ∎
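For the reader's convenience, the bookkeeping in this last step can be spelled out; nothing beyond inequalities (1) and (2) is used. Summing (2) over $i\in\mathbb{Z}/n\mathbb{Z}$, reindexing, and then summing (1) gives

$2\sum_{i\in\mathbb{Z}/n\mathbb{Z}}\measuredangle(\tilde{D}_{i},\tilde{D}_{i+1})<\sum_{i\in\mathbb{Z}/n\mathbb{Z}}\bigl(\measuredangle(D_{i},\tilde{D}_{i-1})+\measuredangle(D_{i},\tilde{D}_{i+1})\bigr)=\sum_{i\in\mathbb{Z}/n\mathbb{Z}}\bigl(\measuredangle(D_{i-1},\tilde{D}_{i})+\measuredangle(D_{i+1},\tilde{D}_{i})\bigr)<2\sum_{i\in\mathbb{Z}/n\mathbb{Z}}\measuredangle(D_{i},D_{i+1}),$

and dividing by two yields the inequality claimed in Proposition 6.5.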
###### Proposition 6.6.
Let $\mathcal{D}=\\{D_{1},\ldots,D_{n}\\}$ and
$\tilde{\mathcal{D}}=\\{\tilde{D}_{1},\ldots,\tilde{D}_{n}\\}$ be as in the
statement of Theorem 5.3, configurations in $\mathbb{C}$ which are thin and in
general position, realizing the same incidence data. Suppose there is some
pair $D_{i}$ and $\tilde{D}_{i}$, one of which contains the other. Let $I$ be
a maximal nonempty subsumptive subset of $\\{1,\ldots,n\\}$. Then there is at
most one pair $i\in I$, $j\in\\{1,\ldots,n\\}\setminus I$ so that $D_{i}$ and
$D_{j}$ overlap and one of $E_{ij}=D_{i}\cap D_{j}$ and
$\tilde{E}_{ij}=\tilde{D}_{i}\cap\tilde{D}_{j}$ contains the other.
###### Proof.
Suppose from now on, without loss of generality, that $\tilde{D}_{i}\subset
D_{i}$ for all $i\in I$.
First, let $H_{\mathrm{u}}$ be the undirected simple graph defined as follows:
the vertex set is $I$, and there is an edge between $i$ and $j$ if and only if
$D_{i}$ and $D_{j}$ overlap. Observe:
###### Observation 6.7.
The graph $H_{\mathrm{u}}$ is connected and is a tree.
This follows from Proposition 6.5 and the general position hypothesis.
Next, let $H$ be the directed graph so that $\left<i\to j\right>$ is an edge
of $H$ if and only if:
* •
we have that $\left<i,j\right>$ is an edge of $H_{\mathrm{u}}$, and
* •
either $\measuredangle(\tilde{D}_{i},D_{j})>\measuredangle(D_{i},D_{j})$ or
$\tilde{D}_{i}\subset D_{j}$.
If $\left<i\to j\right>$ is an edge of $H$ then we call $\left<i\to j\right>$
an edge _pointing away from $i$ in $H$_. The idea is that if $\left<i\to
j\right>$ is an edge in $H$ then the disk $\tilde{D}_{i}\subset D_{i}$ is
“shifted towards $D_{j}$ in $D_{i}$.” See Figure 8 for an example.
Figure 8. The directed graph $H$ associated to a maximal subsumptive
subconfiguration. The solid disks are the $D_{i}$ and the dashed disks are
the $\tilde{D}_{i}$. The graph $H_{\mathrm{u}}$ can be obtained by undirecting
every edge.
We now make a series of observations about $H$ and $H_{\mathrm{u}}$. First:
###### Observation 6.8.
If $\left<i,j\right>$ is an edge in $H_{\mathrm{u}}$ then at least one of
$\left<i\to j\right>$ and $\left<j\to i\right>$ is an edge in $H$, and
possibly both are.
This follows from Lemma 6.3.
###### Observation 6.9.
For every $i\in I$, there is at most one edge $\left<i\to j\right>$ in $H$
pointing away from $i$.
This follows from Lemma 6.2, with $D=D_{i}$, $\tilde{D}=\tilde{D}_{i}$,
$d_{-1}=D_{j}$, $d_{+1}=D_{k}$, for $j,k\in I$ so that $D_{i}$ overlaps with
both $D_{j}$ and $D_{k}$.
###### Observation 6.10.
Let $\left<i_{1},i_{2},\ldots,i_{m}\right>$ be a simple path in
$H_{\mathrm{u}}$, meaning that $\left<i_{\ell},i_{\ell+1}\right>$ is an edge
in $H_{\mathrm{u}}$ for all $1\leq\ell<m$ and that $i_{\ell}$ and
$i_{\ell^{\prime}}$ are distinct for $\ell\neq\ell^{\prime}$. Suppose that
$\left<i_{m-1}\to i_{m}\right>$ is an edge in $H$. Then $\left<i_{\ell}\to
i_{\ell+1}\right>$ is an edge in $H$ for $1\leq\ell<m$.
This follows from Observations 6.8 and 6.9, and induction.
###### Observation 6.11.
There is at most one $i\in I$ so that there is no edge pointing away from $i$
in $H$.
This follows from Observations 6.8, 6.9, and 6.10, because $H_{\mathrm{u}}$ is
connected. If there is an $i$ as in the statement of Observation 6.11, then we
call this $i$ the _sink_ of the subsumptive subset $I\subset\\{1,\ldots,n\\}$.
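The combinatorial core of Observation 6.11 can be isolated: in any connected graph in which every edge receives at least one direction (as in Observation 6.8) and every vertex has out-degree at most one (as in Observation 6.9), at most one vertex has out-degree zero. The following is a small brute-force check of this abstraction on a sample tree; it is only an illustration of the counting involved, not part of the proof, and the tree chosen is a hypothetical example.

```python
from itertools import product

# Brute-force illustration: on a small tree, let each vertex choose at most
# one out-neighbour among its tree neighbours (out-degree <= 1, as in
# Observation 6.9), keep only the choices in which every tree edge receives
# at least one direction (as in Observation 6.8), and check that at most one
# vertex is then a sink (Observation 6.11).

tree_edges = [(0, 1), (1, 2), (1, 3), (3, 4)]  # a tree on the vertices 0..4
vertices = sorted({v for e in tree_edges for v in e})
neighbours = {v: [u for e in tree_edges for u in e if v in e and u != v]
              for v in vertices}

for choice in product(*[[None] + neighbours[v] for v in vertices]):
    directed = {(v, choice[v]) for v in vertices if choice[v] is not None}
    covered = all((a, b) in directed or (b, a) in directed for (a, b) in tree_edges)
    if covered:
        sinks = [v for v in vertices if choice[v] is None]
        assert len(sinks) <= 1, (choice, sinks)

print("every admissible orientation of the sample tree has at most one sink")
```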
Having established all we need to about $H$, we are ready to make two final
observations which will complete the proof of Proposition 6.6. First:
###### Observation 6.12.
Let $i\in I$. Then there is at most one $1\leq j\leq n$ different from $i$ so
that $D_{i}$ and $D_{j}$ overlap and either $\tilde{D}_{i}\subset D_{j}$ or
$\measuredangle(D_{i},D_{j})<\measuredangle(\tilde{D}_{i},D_{j})$.
This follows from Lemma 6.2 in the same way as does Observation 6.9. Next:
###### Observation 6.13.
Suppose that $i$ and $j$ are as in the last sentence of the statement of
Proposition 6.6. Thus we have that $i\in I$ and $j\in\\{1,\ldots,n\\}\setminus
I$ so that $D_{i}$ and $D_{j}$ overlap and $\tilde{E}_{ij}\subset E_{ij}$.
Then
$\measuredangle(D_{i},D_{j})=\measuredangle(\tilde{D}_{i},\tilde{D}_{j})<\measuredangle(\tilde{D}_{i},D_{j})$.
This follows by an application of Lemma 6.4 with $\tilde{D}_{i}=A$, $D_{j}=B$,
and $\tilde{D}_{j}=C$; note that $A\cap C=\tilde{E}_{ij}\subset E_{ij}\subset
D_{j}=B$, so the containment hypothesis of Lemma 6.4 holds. Thus if $i$ and
$j$ are as in the statement of
Proposition 6.6, then $i$ is the unique sink of $H$. Furthermore by
Observations 6.12 and 6.13 there is no $k\in\\{1,\ldots,n\\}\setminus I$
different from $j$ so that $D_{i}$ and $D_{k}$ overlap and so that one of
$E_{ik}$ and $\tilde{E}_{ik}$ contains the other. Proposition 6.6 follows. ∎
The following lemmas will be helpful in the next section, and it is best to
get them out of the way now:
###### Lemma 6.14.
Let $\mathcal{D}$ and $\tilde{\mathcal{D}}$ be as in the statement of
Proposition 6.6. Suppose the disks $D_{i}\in\mathcal{D}$ and
$\tilde{D}_{i}\in\tilde{\mathcal{D}}$ are so that $\tilde{D}_{i}$ is contained
in the interior of $D_{i}$. Suppose finally that for every
$j\in\\{1,\ldots,n\\}$ different from $i$, so that $D_{i}$ and $D_{j}$ meet,
we have either that $\tilde{D}_{i}$ is disjoint from $D_{j}$, or that
$\measuredangle(\tilde{D}_{i},D_{j})<\measuredangle(D_{i},D_{j})$. Then $i$ is
the unique sink of some maximal isolated subsumptive subset of
$\\{1,\ldots,n\\}$.
Lemma 6.14 is really just an observation. Let $I$ be the subsumptive subset of
$\\{1,\ldots,n\\}$ containing $i$. Define the directed graph $H$ as in the
proof of Proposition 6.6. Then by definition of $H$ there is no edge pointing
away from $i$ in $H$.
The next lemma is an easy corollary of Lemma 6.14:
###### Lemma 6.15.
Let $\mathcal{D}$ and $\tilde{\mathcal{D}}$ be as in the statement of
Proposition 6.6. Suppose that the disks $D_{i}\in\mathcal{D}$ and
$\tilde{D}_{i}\in\tilde{\mathcal{D}}$ have coinciding Euclidean centers. Then
$i$ is the unique sink of some maximal isolated subsumptive subset of
$\\{1,\ldots,n\\}$.
###### Proof.
Note that by the general position hypothesis the disks $D_{i}$ and
$\tilde{D}_{i}$ cannot be equal. We may suppose without loss of generality in
our proof that $\tilde{D}_{i}\subset D_{i}$. Then the lemma follows from Lemma
6.14 because it is easy to see that if closed disks $A$ and $B$ in
$\mathbb{C}$ overlap, so that neither is contained in the other, then
$\measuredangle(A,B)$ is monotone decreasing as we shrink $B$ by a contraction
about its Euclidean center. ∎
## 7\. Proofs of our main rigidity theorems
In this section we prove our main rigidity results using our Main Index
Theorem 5.3. The main idea of the proofs we will see here is similar to the
main idea of the proofs of the circle packing rigidity theorems given in
Section 4. It may be helpful to review those now. It may also be helpful to
recall Definition 1.3. The normalizations we construct were inspired
by those of Merenkov, given in [MR2900233], Section 12.
The following lemma will be implicit in much of our discussion below:
###### Lemma 7.1.
Let $G=(V,E)$ be a 3-cycle, and $\Theta:G\to[0,\pi)$. Then there is precisely
one triple $\\{D_{v}\\}_{v\in V}$ of disks in $\hat{\mathbb{C}}$, up to action
by Möbius transformations, realizing the incidence data $(G,\Theta)$.
This is not hard to prove, and we leave it as an exercise.
The first rigidity theorem we prove here is Theorem 1.5, restated here for the
reader’s convenience:
###### Theorem 1.5.
Let $\mathcal{C}$ and $\tilde{\mathcal{C}}$ be thin disk configurations in
$\hat{\mathbb{C}}$ realizing the same incidence data $(G,\Theta)$, where $G$
is the 1-skeleton of a triangulation of the 2-sphere $\mathbb{S}^{2}$. Then
$\mathcal{C}$ and $\tilde{\mathcal{C}}$ differ by a Möbius or an anti-Möbius
transformation.
###### Proof.
We begin by applying $z\mapsto\bar{z}$ to one of the configurations, if
necessary, to ensure that the geodesic embeddings of $G$ in $\hat{\mathbb{C}}$
induced by $\mathcal{C}$ and $\tilde{\mathcal{C}}$ differ by an orientation-
preserving self-homeomorphism of $\hat{\mathbb{C}}$. For a reminder of the
meaning of _geodesic embedding_, see the proof of Theorem 4.2. The
proof then proceeds by contradiction, supposing that there is no Möbius
transformation identifying $\mathcal{C}$ and $\tilde{\mathcal{C}}$.
First, note that we may suppose without loss of generality that there is a
vertex $a$ of $G$ so that no disk of $\mathcal{C}\setminus\\{D_{a}\\}$
overlaps with $D_{a}$, that is, that every contact between $D_{a}$ and another
disk $D_{v}\in\mathcal{C}$ is a tangency. Then necessarily the same holds for
$\tilde{D}_{a}$. Every face $f$ of the triangulation of $\hat{\mathbb{C}}$
coming from the geodesic embedding of $G$ induced by $\mathcal{C}$ contains
exactly one interstice. Index these faces by $F$, and write $T_{f}$ to denote
the interstice of $\mathcal{C}$ contained in the face corresponding to $f\in
F$. We define the interstices $\tilde{T}_{f}$ of $\tilde{\mathcal{C}}$
analogously. Pick an interstice $T_{f}$ of $\mathcal{C}$, and let $D$ be the
metric closed disk of largest spherical radius whose interior fits inside of
$T_{f}$. Let $\tilde{D}$ be constructed analogously for the corresponding
interstice $\tilde{T}_{f}$ of $\tilde{\mathcal{C}}$. Each of the disks $D$ and
$\tilde{D}$ is internally tangent to all three sides of its respective
interstice $T_{f}$ or $\tilde{T}_{f}$. Then it is not hard to show using Lemma
7.1 that any Möbius transformation sending $\mathcal{C}$ to
$\tilde{\mathcal{C}}$ will send $T_{f}$ to $\tilde{T}_{f}$, thus also $D$ to
$\tilde{D}$, so $\mathcal{C}$ and $\tilde{\mathcal{C}}$ are Möbius equivalent
if and only if $\mathcal{C}\cup\\{D\\}$ and
$\tilde{\mathcal{C}}\cup\\{\tilde{D}\\}$ are. It is therefore harmless to add
$D$ and $\tilde{D}$ to our configurations if necessary.
If there is no disk $D_{b}\in\mathcal{C}$ which does not meet $D_{a}$, then
$G$ is the 1-skeleton of a tetrahedron and it is easy to check Theorem 1.5 by
hand using Lemma 7.1. Thus suppose that $D_{b}\in\mathcal{C}$ is disjoint from
$D_{a}$.
We now apply a series of Möbius transformations, to explicitly describe a
normalization on $\mathcal{C}$ and $\tilde{\mathcal{C}}$ in terms of one non-
negative real parameter $\varepsilon\geq 0$:
1. (1)
First ensure that $\infty$ lies in the interiors of $D_{a}$ and of $\tilde{D}_{a}$, so
that the circles $\partial D_{a}$ and $\partial D_{b}$ are concentric when
considered in $\mathbb{C}$, and so that $\partial\tilde{D}_{a}$ and
$\partial\tilde{D}_{b}$ are concentric when considered in $\mathbb{C}$. Apply
orientation-preserving Euclidean similarities so that $D_{b}$ and
$\tilde{D}_{b}$ are both equal to the closed unit disk $\bar{\mathbb{D}}$.
Then the Euclidean centers in $\mathbb{C}$ of the circles $\partial
D_{a},\partial D_{b},\partial\tilde{D}_{a},\partial\tilde{D}_{b}$ all
coincide. The disks $D_{b}$ and $\tilde{D}_{b}$ are equal, and the disks
$D_{a}$ and $\tilde{D}_{a}$ may be equal or unequal.
2. (2)
Pick a vertex $c$ of $G$, so that the disks $D_{c}$ and $\tilde{D}_{c}$ differ
either in Euclidean radii or in the distances of their Euclidean centers from
the origin, or both. Such a $c$ must certainly exist: for instance, if
$\tilde{D}_{a}\subset D_{a}$, then we may pick $\tilde{D}_{c}$ meeting
$D_{a}$. If $D_{a}=\tilde{D}_{a}$ and such a $c$ did not exist then we could
argue via Lemma 7.1 that $\mathcal{C}$ and $\tilde{\mathcal{C}}$ differ by a
rotation, after all of the normalizations applied thus far.
3. (3)
Apply a rotation to both packings so that the Euclidean centers of $D_{c}$ and
$\tilde{D}_{c}$ both lie on the positive real axis. Then apply a positive non-
trivial dilation about the origin to one of the two packings, so that the
Euclidean centers of $D_{c}$ and $\tilde{D}_{c}$ coincide.
4. (4)
At this point, the Euclidean centers of $\partial D_{v}$ and
$\partial\tilde{D}_{v}$ coincide, for all $v=a,b,c$. Because we applied a non-
trivial positive dilation to one of the packings in the previous step, we have
that $\partial D_{b}\neq\partial\tilde{D}_{b}$. For either $v=a,c$, the disks
$D_{v}$ and $\tilde{D}_{v}$ may be equal or unequal. Regardless, our final
step is to apply a dilation to $\mathcal{C}$ by $1+\varepsilon$ about the
common Euclidean center of $\partial D_{c}$ and $\partial\tilde{D}_{c}$. Call
the resulting normalization $\mathrm{N}(\varepsilon)$. A sketch of the
Euclidean steps of this normalization appears after this list.
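The purely Euclidean steps (3) and (4) of this normalization are easy to make concrete. The following sketch models a disk by a hypothetical (center, radius) pair and simply composes the rotations and dilations in the order described above; the Möbius step (1) is omitted, and the function and variable names are ours, not notation from the text.

```python
import cmath

def rotate(config, angle):
    """Rotate every disk (center, radius) of a configuration about the origin."""
    return [(c * cmath.exp(1j * angle), r) for (c, r) in config]

def dilate(config, factor, about=0j):
    """Dilate every disk of a configuration by `factor` about the point `about`."""
    return [(about + factor * (c - about), factor * r) for (c, r) in config]

def normalization(C, C_tilde, c_index, eps):
    """Steps (3)-(4): rotate so that the centers of D_c and its counterpart lie
    on the positive real axis, dilate one configuration about the origin so that
    those centers coincide, then dilate C by 1 + eps about the common center.
    The centers indexed by c_index are assumed to be nonzero."""
    zc, zct = C[c_index][0], C_tilde[c_index][0]
    C = rotate(C, -cmath.phase(zc))
    C_tilde = rotate(C_tilde, -cmath.phase(zct))
    C_tilde = dilate(C_tilde, abs(zc) / abs(zct))   # centers of index c_index now coincide
    common_center = C[c_index][0]
    return dilate(C, 1 + eps, about=common_center), C_tilde
```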
Note that there clearly is an open interval $(0,\ldots)$, having one of its
endpoints at $0$, of positive values that $\varepsilon$ may take so that after
applying $\mathrm{N}(\varepsilon)$, we have that one of $D_{v}$ and
$\tilde{D}_{v}$ is contained in the interior of the other, for all $v=a,b,c$.
For only finitely many of these values is it the case that $\mathcal{C}$ and
$\tilde{\mathcal{C}}$ fail to end up in general position.
Denote $\mathcal{D}=\mathcal{C}\setminus\\{D_{a}\\}$ and
$\tilde{\mathcal{D}}=\tilde{\mathcal{C}}\setminus\\{\tilde{D}_{a}\\}$. Then
$\mathcal{D}$ and $\tilde{\mathcal{D}}$ are thin disk configurations in
$\mathbb{C}$ realizing the same incidence data. Let $G_{\mathcal{D}}$ denote
their common contact graph, having vertex set $V_{\mathcal{D}}$. Then we have
the following:
###### Observation 7.2.
If $\varepsilon>0$ is sufficiently small, then after applying
$\mathrm{N}(\varepsilon)$, we have that each of $b$ and $c$ belongs to a
different maximal isolated subsumptive subset of the common index set
$V_{\mathcal{D}}$ of $\mathcal{D}$ and $\tilde{\mathcal{D}}$.
To see why, first note that $D_{b}$ and $\tilde{D}_{b}$ are unequal and
concentric in $\mathbb{C}$ under the normalization $\mathrm{N}(0)$. Thus by
the argument we used to prove Lemma 6.15, for any $v\in V_{\mathcal{D}}$ so
that $D_{v}$ and $D_{b}$ meet, we have that either $\tilde{D}_{b}$ is disjoint
from $D_{v}$ or
$\measuredangle(\tilde{D}_{b},D_{v})<\measuredangle(D_{b},D_{v})$. If we
consider all of the disks to vary under $\mathrm{N}(\varepsilon)$, then these
angles are continuous in the variable $\varepsilon$, so for some small
interval $[0,\ldots)$ they continue to hold. For all $\varepsilon$ in this
interval $b$ will be the unique sink of a maximal isolated subsumptive subset
of $V_{\mathcal{D}}$ by Lemma 6.14. Next, recall that $D_{c}$ and
$\tilde{D}_{c}$ are concentric under any $\mathrm{N}(\varepsilon)$, and are
unequal for all but one value of $\varepsilon$. Then $c$ is the unique sink of a
maximal isolated subsumptive subset of $V_{\mathcal{D}}$ by Lemma 6.15, and
Observation 7.2 follows.
We are now ready to obtain the desired contradiction to complete the proof of
Theorem 1.5. Pick $\varepsilon>0$ sufficiently small as per Observation 7.2, so that
in addition $\mathcal{C}$ and $\tilde{\mathcal{C}}$ are in general position,
and so that one of $D_{v}$ and $\tilde{D}_{v}$ is contained in the interior of
the other for all $v=a,b,c$. For every pair of corresponding interstices
$T_{f}$ and $\tilde{T}_{f}$ of the packings $\mathcal{C}$ and
$\tilde{\mathcal{C}}$, let $\phi_{f}:\partial T_{f}\to\partial\tilde{T}_{f}$
be an indexable homeomorphism identifying corresponding corners, satisfying
$\eta(\phi_{f})\geq 0$. We may do so by the Three Point Prescription Lemma
3.4. Then the $\phi_{f}$ induce a faithful indexable homeomorphism
$\phi_{\mathcal{D}}:\partial\mathcal{D}\to\partial\tilde{\mathcal{D}}$. By our
choice of $\varepsilon$ and Observation 7.2, and by our Main Index Theorem 5.3, we
have that $\eta(\phi_{\mathcal{D}})\geq 2$. On the other hand, orient
$\partial D_{a}$ and $\partial\tilde{D}_{a}$ positively with respect to the
open disks they bound in $\mathbb{C}$. This is the opposite of the positive
orientation on them with respect to $D_{a}$ and $\tilde{D}_{a}$. Then
$\eta(\phi_{a})=1$ by the Circle Index Lemma 3.2. We then get a contradiction:
by the Index Additivity Lemma 3.3 we have
$\eta(\phi_{a})=\eta(\phi_{\mathcal{D}})+\sum_{f\in F}\eta(\phi_{f})$, and
$\eta(\phi_{f})\geq 0$ for all $f$ by construction, so the right-hand side is
at least $2$ while the left-hand side equals $1$. ∎
We next prove our Main Uniformization Theorem 1.4. We break the statement of
Theorem 1.4 into three theorems, and prove each of these separately. The
proofs are adapted from the proof of Theorem 1.5 in exactly the same way that
the proofs of the constituent theorems of Theorem 1.2 were adapted from the
proof of Theorem 1.1, so we will not give the full details. Instead, we will
construct in detail the appropriate normalization to start the proof, and omit
the last part of each proof, where the contradiction is obtained.
###### Theorem 7.3.
There do not exist thin disk configurations $\mathcal{C}$ and
$\tilde{\mathcal{C}}$ realizing the same incidence data $(G,\Theta)$, where
$G$ is the 1-skeleton of a triangulation of a topological open disk, so that
$\mathcal{C}$ is locally finite in $\mathbb{C}$ and $\tilde{\mathcal{C}}$ is
locally finite in the open unit disk $\mathbb{D}$, equivalently the hyperbolic
plane $\mathbb{H}^{2}\cong\mathbb{D}$.
###### Proof.
This proof proceeds by contradiction, supposing that $\mathcal{C}$ is locally
finite in $\mathbb{C}$ and $\tilde{\mathcal{C}}$ is locally finite in
$\mathbb{D}$. Apply $z\mapsto\bar{z}$ to one of the configurations, if
necessary, to ensure that the geodesic embeddings of $G$ in $\mathbb{C}$ and
$\mathbb{D}$ induced by $\mathcal{C}$ and $\tilde{\mathcal{C}}$ respectively
differ by an orientation-preserving homeomorphism $\mathbb{C}\to\mathbb{D}$.
We now apply a series of orientation-preserving Euclidean similarities, to
explicitly describe a normalization on $\mathcal{C}$ and $\tilde{\mathcal{C}}$
in terms of one non-negative real parameter $\varepsilon\geq 0$:
1. (1)
First, pick $D_{a}\in\mathcal{C}$ and $\tilde{D}_{a}\in\tilde{\mathcal{C}}$,
and apply translations to both configurations, and a scaling to $\mathcal{C}$,
so that $D_{a}$ and $\tilde{D}_{a}$ coincide, and are centered at the origin.
2. (2)
Pick disks $D_{b}\in\mathcal{C}$ and $\tilde{D}_{b}\in\tilde{\mathcal{C}}$
which differ either in their Euclidean radii or in the distances of their
Euclidean centers from the origin, or both. We may obviously do so. Apply a
rotation about the origin to both configurations so that the Euclidean centers
of $D_{b}$ and $\tilde{D}_{b}$ both lie on the positive real axis, and then
apply a non-trivial dilation about the origin to one of the configurations so
that the Euclidean centers of $D_{b}$ and $\tilde{D}_{b}$ coincide.
3. (3)
At this point $D_{a}$ and $\tilde{D}_{a}$ are unequal, but are concentric in
$\mathbb{C}$, and $D_{b}$ and $\tilde{D}_{b}$ are concentric in $\mathbb{C}$,
and may be equal or unequal. As our last step, we dilate $\mathcal{C}$ by a
factor of $1+\varepsilon$ about the common Euclidean center of $D_{b}$ and
$\tilde{D}_{b}$. Denote the resulting normalization $\mathrm{N}(\varepsilon)$.
The rest of the proof proceeds in the same way as did the proof of Theorem
4.4. ∎
###### Theorem 7.4.
Let $\mathcal{C}$ and $\tilde{\mathcal{C}}$ be thin disk configurations
realizing the same incidence data $(G,\Theta)$, where $G$ is the 1-skeleton of
a triangulation of a topological open disk, so that both $\mathcal{C}$ and
$\tilde{\mathcal{C}}$ are locally finite in $\mathbb{C}$. Then $\mathcal{C}$
and $\tilde{\mathcal{C}}$ differ by a Euclidean similarity.
###### Proof.
Apply $z\mapsto\bar{z}$ to one of the configurations, if necessary, to ensure
that the geodesic embeddings of $G$ in $\mathbb{C}$ induced by $\mathcal{C}$
and $\tilde{\mathcal{C}}$ differ by an orientation-preserving self-
homeomorphism of $\mathbb{C}$. Suppose for contradiction that $\mathcal{C}$
and $\tilde{\mathcal{C}}$ do not differ by any orientation-preserving
Euclidean similarity. They therefore do not differ by any Möbius
transformation. We now apply a series of Möbius transformations, to explicitly
describe a normalization on $\mathcal{C}$ and $\tilde{\mathcal{C}}$ in terms
of one non-negative real parameter $\varepsilon\geq 0$:
1. (1)
We first argue as in the proof of Theorem 1.5 that we may assume without loss
of generality that there are $D_{a}\in\mathcal{C}$ and
$\tilde{D}_{a}\in\tilde{\mathcal{C}}$ so that no other disk
$D_{v}\in\mathcal{C}\setminus\\{D_{a}\\}$ overlaps with $D_{a}$.
2. (2)
Pick $b\in V\setminus\\{a\\}$ so that $b$ does not share an edge with $a$ in
$G$. Apply Möbius transformations so that $\infty$ lies in the interiors of
both $D_{a}$ and $\tilde{D}_{a}$, and so that all of the circles $\partial
D_{a},\partial D_{b},\partial\tilde{D}_{a},\partial\tilde{D}_{b}$ have their
Euclidean centers at the origin. Apply a Euclidean scaling to $\mathcal{C}$ so
that $D_{b}$ and $\tilde{D}_{b}$ coincide. At this point, the disks $D_{a}$
and $\tilde{D}_{a}$ may be equal or unequal.
3. (3)
We argue as before that we may pick $c\in V\setminus\\{a,b\\}$ so that $D_{c}$
and $\tilde{D}_{c}$ differ in their Euclidean radii, the distances of their
Euclidean centers from the origin, or both. Apply rotations about the origin
so that the Euclidean centers of $D_{c}$ and $\tilde{D}_{c}$ lie on the
positive real axis, and apply a scaling to $\mathcal{C}$ so that the Euclidean
centers of $D_{c}$ and $\tilde{D}_{c}$ coincide.
4. (4)
At this point the disks $D_{a}$ and $\tilde{D}_{a}$ may be equal or unequal,
the disks $D_{b}$ and $\tilde{D}_{b}$ are unequal, and the disks $D_{c}$ and
$\tilde{D}_{c}$ may be equal or unequal. All of $\partial D_{a},\partial
D_{b},\partial\tilde{D}_{a},\partial\tilde{D}_{b}$ are centered at the origin,
and $D_{c}$ and $\tilde{D}_{c}$ are concentric in $\mathbb{C}$. As the last
step of our normalization, apply a dilation by a factor of $1+\varepsilon$
about the common Euclidean center of $D_{b}$ and $\tilde{D}_{b}$ to
$\mathcal{C}$. Denote the resulting normalization $\mathrm{N}(\varepsilon)$.
The rest of the proof proceeds in the same way as the proof of Theorem 4.8. ∎
###### Theorem 7.5.
Let $\mathcal{C}$ and $\tilde{\mathcal{C}}$ be thin disk configurations
realizing the same incidence data $(G,\Theta)$, where $G$ is the 1-skeleton of
a triangulation of a topological open disk, so that both $\mathcal{C}$ and
$\tilde{\mathcal{C}}$ are locally finite in the open unit disk $\mathbb{D}$,
equivalently in the hyperbolic plane $\mathbb{H}^{2}\cong\mathbb{D}$. Then
$\mathcal{C}$ and $\tilde{\mathcal{C}}$ differ by a hyperbolic isometry.
###### Proof.
As always, apply $z\mapsto\bar{z}$ to one of the configurations if necessary,
to ensure that the geodesic embeddings of $G$ in $\mathbb{D}$ induced by
$\mathcal{C}$ and $\tilde{\mathcal{C}}$ differ by an orientation-preserving
self-homeomorphism of $\mathbb{D}$. Suppose for contradiction that
$\mathcal{C}$ and $\tilde{\mathcal{C}}$ do not differ by any orientation-
preserving hyperbolic isometry. They therefore do not differ by any Möbius
transformation. We now apply a series of Möbius transformations, to explicitly
describe a normalization on $\mathcal{C}$ and $\tilde{\mathcal{C}}$ in terms
of one non-negative real parameter $\varepsilon\geq 0$:
1. (1)
First, there must be disks $D_{a}\in\mathcal{C}$ and
$\tilde{D}_{a}\in\tilde{\mathcal{C}}$ which have different hyperbolic radii in
$\mathbb{D}\cong\mathbb{H}^{2}$, otherwise the two configurations coincide by
elementary arguments. Apply hyperbolic isometries to both configurations so
that $D_{a}$ and $\tilde{D}_{a}$ are centered at the origin, and apply a
Euclidean scaling centered at the origin to $\mathcal{C}$, so that $D_{a}$ and
$\tilde{D}_{a}$ coincide.
2. (2)
Pick disks $D_{b}\in\mathcal{C}$ and $\tilde{D}_{b}\in\tilde{\mathcal{C}}$
which differ in their Euclidean radii, or the distances of their Euclidean
centers from the origin, or both. Apply rotations centered at the origin so
that the Euclidean centers of $D_{b}$ and $\tilde{D}_{b}$ lie on the positive
real axis, and apply a Euclidean scaling centered at the origin to one
configuration so that the Euclidean centers of $D_{b}$ and $\tilde{D}_{b}$
coincide.
3. (3)
At this point the disks $D_{a}$ and $\tilde{D}_{a}$ are unequal, and are
concentric in $\mathbb{C}$, the disks $D_{b}$ and $\tilde{D}_{b}$ are
concentric in $\mathbb{C}$, and may be equal or unequal. Also, denoting by $D$
and $\tilde{D}$ the images of $\mathbb{D}$ under the normalizations applied
thus far to $\mathcal{C}$ and $\tilde{\mathcal{C}}$ respectively, we have that
$D$ and $\tilde{D}$ are centered at the origin, and may be equal or unequal.
As the last step of our normalization, apply a dilation by a factor of
$1+\varepsilon$ about the common Euclidean center of $D_{b}$ and
$\tilde{D}_{b}$ to $\mathcal{C}$. Denote the resulting normalization
$\mathrm{N}(\varepsilon)$.
The rest of the proof proceeds in the same way as the proof of Theorem 4.6. ∎
## 8\. Topological configurations
Suppose that $X_{1},\ldots,X_{n}$ and $X^{\prime}_{1},\ldots,X^{\prime}_{n}$
are all subsets of $\mathbb{C}$. Then we say that the collections
$\\{X_{1},\ldots,X_{n}\\}$ and $\\{X^{\prime}_{1},\ldots,X^{\prime}_{n}\\}$
are in the same _topological configuration_ if there is an orientation-
preserving homeomorphism $\varphi:\mathbb{C}\to\mathbb{C}$ so that
$\varphi(X_{i})=X^{\prime}_{i}$ for all $1\leq i\leq n$. In practice the
collections of objects under consideration will not be labeled $X_{i}$ and
$X^{\prime}_{i}$, but there will be some natural bijection between the
collections. Then our requirement is that $\varphi$ respects this natural
bijection.
The following lemma says that when working with fixed-point index, we need to
consider our Jordan domains only “up to topological configuration.”
###### Lemma 8.1.
Suppose $K$ and $\tilde{K}$ are closed Jordan domains. Let $\phi:\partial
K\to\partial\tilde{K}$ be an indexable homeomorphism. Suppose that
$K^{\prime}$ and $\tilde{K}^{\prime}$ are also closed Jordan domains, so that
$\\{K,\tilde{K}\\}$ and $\\{K^{\prime},\tilde{K}^{\prime}\\}$ are in the same
topological configuration, via the homeomorphism
$\psi:\mathbb{C}\to\mathbb{C}$. Let $\phi^{\prime}:\partial
K^{\prime}\to\partial\tilde{K}^{\prime}$ be induced in the natural way,
explicitly as
$\phi^{\prime}=\psi|_{\partial\tilde{K}}\circ\phi\circ\psi^{-1}|_{\partial
K^{\prime}}$. Then $\phi^{\prime}$ is indexable and
$\eta(\phi)=\eta(\phi^{\prime})$.
This follows via homotopy arguments from the well-known fact that every
orientation-preserving homeomorphism $\mathbb{C}\to\mathbb{C}$ is homotopic to
the identity map via homeomorphisms.
The following proposition limits the relevant topological configurations that
two disks may be in to finitely many possibilities, which by Lemma 8.1 reduces
every subsequent proof to at worst a case-by-case analysis:
###### Proposition 8.2.
Suppose that $\\{A,B\\}$ and $\\{\tilde{A},\tilde{B}\\}$ are pairs of
overlapping closed disks in the plane $\mathbb{C}$ in general position.
Suppose that $A\setminus B$ meets $\tilde{A}\setminus\tilde{B}$, that $A\cap
B$ meets $\tilde{A}\cap\tilde{B}$, and that $B\setminus A$ meets
$\tilde{B}\setminus\tilde{A}$. Then given any three of the disks
$A,B,\tilde{A},\tilde{B}$, the topological configuration of those three disks
is one of those depicted in Figures 9 and 10.
$A$$B$$\tilde{A}$
(a)
$A$$B$$\tilde{A}$
(b)
$A$$B$$\tilde{A}$
(c)
$A$$B$$\tilde{A}$
(d)
$A$$B$$\tilde{A}$
(e)
$\tilde{A}$$B$$A$
(f)
$A$$B$$\tilde{A}$
(g)
$A$$B$$\tilde{A}$
(h)
$\diamondsuit$
$B$$A$$\tilde{B}$
(a)
$B$$A$$\tilde{B}$
(b)
$B$$A$$\tilde{B}$
(c)
$B$$A$$\tilde{B}$
(d)
$B$$A$$\tilde{B}$
(e)
$\tilde{B}$$A$$B$
(f)
$B$$A$$\tilde{B}$
(g)
$B$$A$$\tilde{B}$
(h)
$\heartsuit$
Figure 9. The relevant topological configurations of $\tilde{A}$ and of
$\tilde{B}$, relative to $A,B$.
$\tilde{A}$$\tilde{B}$$A$
(a)
$\tilde{A}$$\tilde{B}$$A$
(b)
$\tilde{A}$$\tilde{B}$$A$
(c)
$\tilde{A}$$\tilde{B}$$A$
(d)
$\tilde{A}$$\tilde{B}$$A$
(e)
$A$$\tilde{B}$$\tilde{A}$
(f)
$\tilde{A}$$\tilde{B}$$A$
(g)
$\tilde{A}$$\tilde{B}$$A$
(h)
$\spadesuit$
$\tilde{B}$$\tilde{A}$$B$
(a)
$\tilde{B}$$\tilde{A}$$B$
(b)
$\tilde{B}$$\tilde{A}$$B$
(c)
$\tilde{B}$$\tilde{A}$$B$
(d)
$\tilde{B}$$\tilde{A}$$B$
(e)
$B$$\tilde{A}$$\tilde{B}$
(f)
$\tilde{B}$$\tilde{A}$$B$
(g)
$\tilde{B}$$\tilde{A}$$B$
(h)
$\clubsuit$
Figure 10. The relevant topological configurations of $A$ and of $B$,
relative to $\tilde{A},\tilde{B}$.
We will often make reference to the configurations depicted in Figures 9 and
10. If the appropriate three-disk subset of $\\{A,B,\tilde{A},\tilde{B}\\}$ is
in a topological configuration depicted in one of these figures, we will
indicate this simply by saying that the corresponding configuration _occurs_ ,
for example that $\diamondsuit$a occurs.
###### Proof of Proposition 8.2.
Note that by the symmetries involved, it suffices to prove that
$\\{A,B,\tilde{A}\\}$ must be in one of the topological configurations on the
$\diamondsuit$ side of Figure 9. Therefore we restrict our attention to this
case from now on.
The following observation, which is an easy exercise, will be the key to our
proof:
###### Observation 8.3.
Fix $\ell_{1}$ and $\ell_{2}$ to be unequal straight lines in $\mathbb{C}$
both of which pass through the origin. The lines $\ell_{1}$ and $\ell_{2}$
divide the plane into four regions, which we loosely refer to as _quasi-
quadrants_. If $C$ is a variable metric circle in $\mathbb{C}$ which is not
allowed to pass through the origin, nor to be tangent to either of $\ell_{1}$
and $\ell_{2}$, then the topological configuration of
$\\{C,\ell_{1},\ell_{2}\\}$ is uniquely determined by which of the four quasi-
quadrants the circle $C$ passes through. Note also that $C$ cannot pass
through two diagonally opposite quasi-quadrants without passing through at
least one of the two remaining quasi-quadrants.
Then the idea of the proof of Proposition 8.2 is to apply a Möbius
transformation sending one of the two points of $\partial A\cap\partial B$ to
$\infty$. The images of the circles $\partial A$ and $\partial B$ will act as
the lines $\ell_{1}$ and $\ell_{2}$ of Observation 8.3, and
$\partial\tilde{A}$ will act as $C$.
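As a purely numerical illustration of Observation 8.3 (not used anywhere in the argument), one can sample a circle and record which quasi-quadrants it visits, classifying each sample point by the signs of two linear functionals vanishing on $\ell_{1}$ and $\ell_{2}$. The function below and its inputs are hypothetical.

```python
import numpy as np

def quasi_quadrants_hit(center, radius, n1, n2, samples=4096):
    """Return the set of sign patterns (sign(n1.p), sign(n2.p)) realized by
    sample points p on the circle; each pattern labels one quasi-quadrant.
    Assumes the circle avoids the origin and is not tangent to either line."""
    t = np.linspace(0.0, 2.0 * np.pi, samples, endpoint=False)
    pts = np.stack([center[0] + radius * np.cos(t),
                    center[1] + radius * np.sin(t)], axis=1)
    return {(int(np.sign(p @ n1)), int(np.sign(p @ n2))) for p in pts}

# Take the two lines to be the coordinate axes (normals e2 and e1).
n1, n2 = np.array([0.0, 1.0]), np.array([1.0, 0.0])
print(quasi_quadrants_hit((2.0, 0.5), 1.0, n1, n2))  # meets two adjacent quasi-quadrants
print(quasi_quadrants_hit((0.5, 0.5), 2.0, n1, n2))  # origin inside: meets all four
```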
We make one preliminary notational convention. First, orient $\partial A$ and
$\partial B$ positively as usual, and let $\\{u,v\\}=\partial A\cap\partial
B$. Label $u$ and $v$ so that $u$ is the point of $\partial A\cap\partial B$
where $\partial A$ enters $B$, and $v$ is the point of $\partial A\cap\partial
B$ where $\partial B$ enters $A$. See Figure 11 for an example.
$A$$B$$E$$u$$v$
Figure 11. The definitions of $u$ and $v$ in terms of the orientations on
$\partial A$ and $\partial B$.
Ultimately, we would like to say that if we fix overlapping $A$ and $B$,
letting $\tilde{A}$ vary under the constraint that the general position
hypothesis is not violated, then the topological configuration of
$\\{A,B,\tilde{A}\\}$ is uniquely determined by two pieces of information:
* •
whether or not $v\in\tilde{A}$, and
* •
which of the four regions $A\cap B,A\setminus B,B\setminus
A,\mathbb{C}\setminus(A\cup B)$ the circle $\partial\tilde{A}$ passes through.
$A$$B$$\tilde{A}$
(a)
$A$$B$$\tilde{A}$
(b)
Figure 12. Two different topological configurations of three disks
$\\{A,B,\tilde{A}\\}$, where $\partial\tilde{A}$ passes through the same
components of $\mathbb{C}\setminus(\partial A\cup\partial B)$ in both cases.
We see that $\tilde{A}$ and $A\cap B$ do not meet in either case, so this
example should not worry us too much in light of the hypotheses of Proposition
8.2.
Unfortunately, this is not completely true. There is a minor obstruction,
illustrated in Figure 12. However, we will see that this is the only possible
obstruction, and the nice classification described in this paragraph otherwise
holds:
###### Lemma 8.4.
Let $\\{A,B\\}$ and $\\{\tilde{A},\tilde{B}\\}$ be pairs of overlapping metric
closed disks in general position, similarly $\\{A^{\prime},B^{\prime}\\}$ and
$\\{\tilde{A}^{\prime},\tilde{B}^{\prime}\\}$. Define $u^{\prime}$ and
$v^{\prime}$ for $A^{\prime}$ and $B^{\prime}$ as we defined $u$ and $v$ for $A$ and
$B$ in Figure 11. Suppose that $v\in\tilde{A}$ if and only if
$v^{\prime}\in\tilde{A}^{\prime}$, and that the subset of the four regions
$A^{\prime}\cap B^{\prime},A^{\prime}\setminus B^{\prime},B^{\prime}\setminus
A^{\prime},\mathbb{C}\setminus(A^{\prime}\cup B^{\prime})$ through which
$\partial\tilde{A}^{\prime}$ passes corresponds to the subset of the regions
$A\cap B,A\setminus B,B\setminus A,\mathbb{C}\setminus(A\cup B)$ through which
$\partial\tilde{A}$ passes, in the natural way. Then $\\{A,B,\tilde{A}\\}$ and
$\\{A^{\prime},B^{\prime},\tilde{A}^{\prime}\\}$ are in the same topological
configuration, unless one triple is arranged as in the left side of Figure 12
and the other is arranged as in the right side of the same figure.
###### Proof.
Let $m$ be a Möbius transformation sending $v$ to $\infty$ and $u$ to the
origin, letting $\ell_{1}=m(\partial A)$ and $\ell_{2}=m(\partial B)$. By the
general position hypothesis, letting $C=m(\partial\tilde{A})$ we have that $C$
is a circle as in the statement of Observation 8.3. Similarly, let
$m^{\prime}$ be a Möbius transformation sending $v^{\prime}$ to $\infty$ and
$u^{\prime}$ to the origin. Then by Observation 8.3 we may apply an
orientation-preserving homeomorphism $\psi:\mathbb{C}\to\mathbb{C}$ so that
$\ell_{1}=m(\partial A)=\psi\circ m^{\prime}(\partial A^{\prime})$, similarly
$\ell_{2}=m(\partial B)=\psi\circ m^{\prime}(\partial B^{\prime})$ and
$C=m(\partial\tilde{A})=\psi\circ m^{\prime}(\partial\tilde{A}^{\prime})$.
Now, if we can choose $\psi$ so that $m(\infty)=\psi\circ m^{\prime}(\infty)$,
then we will be done, because $m^{-1}\circ\psi\circ m^{\prime}$ will be an
orientation-preserving homeomorphism $\mathbb{C}\to\mathbb{C}$ identifying
$\\{A,B,\tilde{A}\\}$ with $\\{A^{\prime},B^{\prime},\tilde{A}^{\prime}\\}$.
Clearly there is such a $\psi$ so long as $\mathbb{C}\setminus m(A\cup
B\cup\tilde{A})$, equivalently $\mathbb{C}\setminus m^{\prime}(A^{\prime}\cup
B^{\prime}\cup\tilde{A}^{\prime})$, is connected. This happens if and only if
$\mathbb{C}\setminus(A\cup B\cup\tilde{A})$, equivalently
$\mathbb{C}\setminus(A^{\prime}\cup B^{\prime}\cup\tilde{A}^{\prime})$, is
connected, and it is easy to show that this fails only for the two
configurations shown in Figure 12. The lemma follows. ∎
We can now complete the proof by exhaustion. We will break the proof into two
major cases, depending on whether $v\in\tilde{A}$ or $v\not\in\tilde{A}$. We
will not make reference to Observation 8.3 again, so we overload terminology,
using the term quasi-quadrants from now on to refer to the four regions $A\cap
B,A\setminus B,B\setminus A,\mathbb{C}\setminus(A\cup B)$.
The following observation will be our source of contradictions to the
hypotheses of Proposition 8.2:
###### Observation 8.5.
Suppose that the hypotheses of Proposition 8.2 hold. Then we have
* •
that $\tilde{A}$ meets both $A\setminus B$ and $A\cap B$, and
* •
that $B\setminus A$ is not contained in $\tilde{A}$.
To see why, note that if $\tilde{A}$ does not meet $A\setminus B$, then
$A\setminus B$ cannot possibly meet $\tilde{A}\setminus\tilde{B}$. We must have
that $\tilde{A}$ meets $A\cap B$ for a similar reason. Also, if $B\setminus A$
is contained in $\tilde{A}$, then $B\setminus A$ cannot possibly meet
$\tilde{B}\setminus\tilde{A}$.
In the forthcoming case analysis, we will rely on the reader to supply his own
drawings of the cases which we throw out by Observation 8.5. In general it is
not hard to draw an example of a configuration $\\{A,B,\tilde{A}\\}$ given the
type of hypotheses we write down below, and once a single example is drawn
Lemma 8.4 ensures that it is typically the only one, up to topological
configuration.
###### Case 1.
$v\in\tilde{A}$
We now consider the possibilities depending on how many of the quasi-quadrants
$\partial\tilde{A}$ passes through. If it passes through only one, then it is
easy to see that it must be $\mathbb{C}\setminus(A\cup B)$, otherwise we would
violate $v\in\tilde{A}$. Then $A\cup B\subset\tilde{A}$, in particular
$B\setminus A\subset\tilde{A}$, so we may ignore this possibility by
Observation 8.5.
Next, suppose that $\partial\tilde{A}$ passes through exactly two (necessarily
adjacent) quasi-quadrants. Then which two it hits is exactly determined by
which one of the four arcs $\partial A\cap B,\partial A\setminus B,\partial
B\cap A,\partial B\setminus A$ it hits. It cannot be $\partial A\cap B$ nor
$\partial B\cap A$ without violating $v\in\tilde{A}$. If it is $\partial
A\setminus B$ then $B\subset\tilde{A}$, so we may ignore this possibility by
Observation 8.5. The remaining possibility is represented as $\diamondsuit$b.
Now suppose that $\partial\tilde{A}$ passes through exactly three quasi-
quadrants. For brevity we will indicate which three it hits by saying instead
which one it misses. If it misses $B\setminus A$, then $B\setminus
A\subset\tilde{A}$, so we throw this case out by Observation 8.5. Next, it cannot
miss $\mathbb{C}\setminus(A\cup B)$ without violating $v\in\tilde{A}$. The two
remaining cases are represented in $\diamondsuit$a and $\diamondsuit$c.
Finally, the case where $\partial\tilde{A}$ passes through all four quasi-
quadrants is drawn in $\diamondsuit$d.
###### Case 2.
$v\not\in\tilde{A}$
Suppose first that $\partial\tilde{A}$ hits exactly one of the quasi-
quadrants. Then $\tilde{A}$ is contained in that quasi-quadrant. But
$\tilde{A}$ must meet at least two quasi-quadrants, by Observation 8.5, if the
hypotheses of Proposition 8.2 hold.
Next, suppose that $\partial\tilde{A}$ meets exactly two quasi-quadrants.
Again we indicate which two by saying which one of the four arcs $\partial
A\cap B,\partial A\setminus B,\partial B\cap A,\partial B\setminus A$ it hits.
If it is $\partial B\setminus A$ or $\partial A\cap B$, then $\tilde{A}$ is
disjoint from $A\setminus B$, and if it is $\partial A\setminus B$, then
$\tilde{A}$ is disjoint from $A\cap B$. Thus we throw these cases out by
Observation 8.5. The final possibility is represented in $\diamondsuit$f.
Suppose now that $\partial\tilde{A}$ meets exactly three quasi-quadrants. As
before we indicate which three by indicating which one it misses. If it misses
$A\setminus B$, then we will get that $A\setminus B$ and $\tilde{A}$ are
disjoint. If it misses $A\cap B$, then the disks are in one of the
configurations of Figure 12, in which case $\tilde{A}$ and $A\cap B$ are
disjoint. We throw these cases out by Observation 8.5. The remaining two cases
are depicted in $\diamondsuit$g and $\diamondsuit$h.
Last, if $\partial\tilde{A}$ passes through all four quasi-quadrants, and
$v\not\in\tilde{A}$, then the disks are configured as in $\diamondsuit$e.
This completes the proof of Proposition 8.2. ∎
## 9\. Preliminary topological lemmas
In the section after this one, we will introduce a tool, called _torus
parametrization_ , for working with fixed-point index. This tool will handle
most of our cases for us relatively painlessly, but for some special cases we
will need extra lemmas. This section is devoted to the statements and proofs
of those lemmas. We also state and prove some simplifying facts that greatly
cut down the number of cases we will eventually need to check.
First:
###### Lemma 9.1.
Suppose $K$ and $\tilde{K}$ are closed Jordan domains in transverse position.
Then $\partial K$ and $\partial\tilde{K}$ meet a finite, even number of times,
by compactness and the transverse position hypothesis. In particular:
Suppose that $z\in\partial K\cap\partial\tilde{K}$. Orient $\partial K$ and
$\partial\tilde{K}$ positively with respect to $K$ and $\tilde{K}$ as usual.
Then one of the following two mutually exclusive possibilities holds at the
point $z$.
1. (1)
The curve $\partial\tilde{K}$ is entering $K$, and the curve $\partial K$ is
exiting $\tilde{K}$.
2. (2)
The curve $\partial K$ is entering $\tilde{K}$, and the curve
$\partial\tilde{K}$ is exiting $K$.
Thus as we traverse $\partial K$, we alternate arriving at points of $\partial
K\cap\partial\tilde{K}$ where (1) occurs and those where (2) occurs, and the
same holds as we traverse $\partial\tilde{K}$.
This is easy to check with a simple drawing.
Our next lemma characterizes the ways in which two convex closed Jordan
domains may intersect:
###### Lemma 9.2.
Let $K$ and $\tilde{K}$ be convex closed Jordan domains in transverse
position, so that $\partial K$ and $\partial\tilde{K}$ meet $2M>0$ times.
Suppose that $K^{\prime}$ and $\tilde{K}^{\prime}$ are also convex closed
Jordan domains in transverse position so that $\partial K^{\prime}$ and
$\partial\tilde{K}^{\prime}$ meet $2M>0$ times. Then $\\{K,\tilde{K}\\}$ and
$\\{K^{\prime},\tilde{K}^{\prime}\\}$ are in the same topological
configuration.
$P_{1}$$P_{2}$$P_{3}$$\tilde{P}_{1}$$\tilde{P}_{2}$$\tilde{P}_{3}$
$R_{\theta}$$K$$\tilde{K}$$\theta$$w$$z_{\theta}$$\tilde{z}_{\theta}$
Figure 13. Two convex closed Jordan domains $K$ and $\tilde{K}$ in transverse
position, with boundaries meeting at six points. As $\theta$ varies
positively, the ray $R_{\theta}$ scans around the boundaries of both $K$ and
$\tilde{K}$ positively.
###### Proof.
For the following construction, see Figure 13. Let $w$ be a common interior
point of $K$ and $\tilde{K}$. Let $R_{\theta}$ be the ray emanating from the
point $w$ at an angle of $\theta$ from the positive real direction. Let
$P_{i}$ be the points of $\partial K\cap\partial\tilde{K}$ where $\partial K$
is entering $\tilde{K}$, and let $\tilde{P}_{i}$ be those where
$\partial\tilde{K}$ is entering $K$. Define
$w^{\prime},R^{\prime}_{\theta},P^{\prime}_{i},\tilde{P}^{\prime}_{i}$
analogously for $K^{\prime}$ and $\tilde{K}^{\prime}$. Identify
$\mathbb{S}^{1}$ with the interval $[0,2\pi]$ with its endpoints identified,
and define a homeomorphism $\mathbb{S}^{1}\to\mathbb{S}^{1}$, denoting the
image of $\theta\in[0,2\pi]$ by $\theta^{\prime}$, so that $R_{\theta}$ hits a
point $P_{i}$ if and only if $R^{\prime}_{\theta^{\prime}}$ hits a point
$P^{\prime}_{i}$, and similarly for $\tilde{P}_{i}$ and $\tilde{P}^{\prime}_{i}$.
Define homeomorphisms $R_{\theta}\to R^{\prime}_{\theta^{\prime}}$ piecewise
linearly, sending the components of $R_{\theta}\setminus(\partial
K\cup\partial\tilde{K})$ to the corresponding components of
$R^{\prime}_{\theta^{\prime}}\setminus(\partial
K^{\prime}\cup\partial\tilde{K}^{\prime})$. Then these homeomorphisms glue to
an orientation-preserving homeomorphism $\mathbb{C}\to\mathbb{C}$ sending
$\\{K,\tilde{K}\\}$ to $\\{K^{\prime},\tilde{K}^{\prime}\\}$. ∎
Lemma 9.2 is very much false if we omit the condition that the Jordan domains
are convex. Which _a priori_ topological configurations can occur for two
Jordan curves in transverse position is a poorly understood question, and is
known as the study of _meanders_. (Thanks to Thomas Lam for informing us of
the topic of meander theory.) We are fortunate that our setting is nice enough
that a statement like Lemma 9.2 is possible. The clean construction we use in
our proof is due to Nic Ford and Jordan Watkins.
We now take a moment to introduce some notation we use throughout the rest of
the article. Let $\gamma$ be an oriented Jordan curve. Let $a,b\in\gamma$ be
distinct. Then $[a\to b]_{\gamma}$ is the oriented closed sub-arc of $\gamma$
starting at $a$ and ending at $b$. Then for example $[a\to
b]_{\gamma}\cap[b\to a]_{\gamma}=\\{a,b\\}$ and $[a\to b]_{\gamma}\cup[b\to
a]_{\gamma}=\gamma$.
Throughout the rest of this section, let $\\{A,B\\}$ and
$\\{\tilde{A},\tilde{B}\\}$ be pairs of overlapping closed disks in general
position. We label $\\{u,v\\}=\partial A\cap\partial B$ as in the preceding
section, see Figure 11 on p. 11 for a reminder. Label $\tilde{u}$ and
$\tilde{v}$ analogously. We denote $E=A\cap B$ and
$\tilde{E}=\tilde{A}\cap\tilde{B}$, and loosely refer to these as _eyes_. The
rest of this section consists of the proofs of an assortment of lemmas about
these disks, which we give without further comment.
###### Lemma 9.3.
The Jordan curves $\partial E$ and $\partial\tilde{E}$ meet exactly 0, 2, 4,
or 6 times.
###### Proof.
That they meet an even number of times is a consequence of the general
position hypothesis. There is an immediate upper bound of 8 meeting points
because each of $\partial E$ and $\partial\tilde{E}$ is the union of two
circular arcs. Suppose for contradiction that $\partial E$ and
$\partial\tilde{E}$ meet 8 times. Thus every meeting point of one of the
circles $\partial A$ and $\partial B$ with one of $\partial\tilde{A}$ and
$\partial\tilde{B}$ lies in $\partial E\cap\partial\tilde{E}$. It follows that
$\partial(A\cup B)$ does not meet $\partial(\tilde{A}\cup\tilde{B})$. But
these are Jordan curves, so then we have either that one of $A\cup B$ and
$\tilde{A}\cup\tilde{B}$ contains the other, or that they are disjoint. They
cannot be disjoint because $\partial E$ and $\partial\tilde{E}$ meet (8 times)
by hypothesis, so suppose without loss of generality that
$\tilde{A}\cup\tilde{B}$ is contained in $A\cup B$, in particular in its
interior by the general position hypothesis. Then the sub-arc $\partial
E\cap\partial A$ must enter the region $\tilde{A}\cup\tilde{B}$ somewhere, so
that it may intersect $\partial\tilde{E}$, a contradiction. ∎
$\tilde{E}$$v$$\tilde{u}$$E$$\tilde{v}$$u$
(a)
$\tilde{E}$$v$$\tilde{v}$$E$$\tilde{u}$$u$
(b)
Figure 14. The possible relevant topological configurations for two generally
positioned eyes whose boundaries meet at six points. There are two remaining
possibilities, not depicted, obtained by simultaneously swapping $u$ with $v$
and $\tilde{u}$ with $\tilde{v}$, which are irrelevant by symmetry.
###### Lemma 9.4.
Suppose that $\partial E$ and $\partial\tilde{E}$ meet 6 times, that
$A\setminus B$ and $\tilde{A}\setminus\tilde{B}$ meet, and that $B\setminus A$
and $\tilde{B}\setminus\tilde{A}$ meet. Then
$\\{E,u,v,\tilde{E},\tilde{u},\tilde{v}\\}$ are in one of the two topological
configurations represented in Figure 14, up to possibly simultaneously
swapping $u$ with $v$ and $\tilde{u}$ with $\tilde{v}$.
###### Proof.
By Lemma 9.2, if $\partial E$ and $\partial\tilde{E}$ meet 6 times then they
are in the topological configuration shown in Figure 15. We denote by
$\epsilon_{i}$ the connected components of $\partial
E\setminus\partial\tilde{E}$, and by $\tilde{\epsilon}_{i}$ the connected
components of $\partial\tilde{E}\setminus\partial E$, labeled as in Figure 15.
We consider the indices of the $\epsilon_{i}$ and $\tilde{\epsilon}_{i}$ only
modulo $6$. For example, we write $\epsilon_{2+5}=\epsilon_{1}$.
$\tilde{\epsilon}_{2}$$\epsilon_{3}$$\tilde{\epsilon}_{4}$$\epsilon_{5}$$\tilde{\epsilon}_{6}$$\epsilon_{1}$$\epsilon_{2}$$\tilde{\epsilon}_{5}$$\epsilon_{6}$$\tilde{\epsilon}_{1}$$\epsilon_{4}$$\tilde{\epsilon}_{3}$
Figure 15. The components of $\partial E\setminus\partial\tilde{E}$ and
$\partial\tilde{E}\setminus\partial E$ for two transversely positioned convex
closed Jordan domains $E$ and $\tilde{E}$ meeting at six points. The solid
curve represents $\partial E$, and the dashed curve represents
$\partial\tilde{E}$.
Proposition 8.2 allows us to make the following observation:
###### Observation 9.5.
Neither $\tilde{u}$ nor $\tilde{v}$ may lie in $E$, and neither $u$ nor $v$
may lie in $\tilde{E}$.
To see why, note that if $\partial E$ and $\partial\tilde{E}$ meet six times,
then by the pigeonhole principle at least one of $\partial A$ and $\partial B$
must meet $\partial\tilde{E}$ at least three times. Thus at least one of
$\spadesuit$g and $\clubsuit$g must occur. Thus $\tilde{u}$ and $\tilde{v}$
lie outside of at least one of $A$ and $B$, but $E=A\cap B$, thus neither
$\tilde{u}$ nor $\tilde{v}$ lies in $E$. The other part follows identically.
Thus we may assume that $u\in\epsilon_{1}$. Then $v$ lies along $\epsilon_{3}$
or $\epsilon_{5}$. By relabeling the $\epsilon_{i}$ and switching the roles of
$u$ and $v$ as necessary, we may assume that $v\in\epsilon_{3}$. Our proof
will be done once we show that neither $\tilde{u}$ nor $\tilde{v}$ may lie
along $\tilde{\epsilon}_{2}$. Suppose for contradiction that $\tilde{u}$ lies
along $\tilde{\epsilon}_{2}$. Then $\tilde{v}$ lies along either
$\tilde{\epsilon}_{4}$ or $\tilde{\epsilon}_{6}$. If
$\tilde{v}\in\tilde{\epsilon}_{4}$, then the circular arc $[v\to u]_{\partial
E}$ meets the circular arc $[\tilde{v}\to\tilde{u}]_{\partial\tilde{E}}$ three
times, a contradiction. Similarly, if $\tilde{v}\in\tilde{\epsilon}_{6}$, then
the circular arc $[v\to u]_{\partial E}$ meets the circular arc
$[\tilde{u}\to\tilde{v}]_{\partial\tilde{E}}$ three times, also a
contradiction. Thus $\tilde{u}\not\in\tilde{\epsilon}_{2}$. The argument is
the same if we had initially let $\tilde{v}\in\tilde{\epsilon}_{2}$. ∎
###### Lemma 9.6.
The following four statements hold.
1. (1)
If $[\tilde{u}\to\tilde{v}]_{\partial\tilde{E}}\subset A$ and
$[\tilde{v}\to\tilde{u}]_{\partial\tilde{E}}$ meets $\partial A$, then
$B\setminus A$ and $\tilde{B}\setminus\tilde{A}$ are disjoint.
2. (2)
If $[\tilde{v}\to\tilde{u}]_{\partial\tilde{E}}\subset B$ and
$[\tilde{u}\to\tilde{v}]_{\partial\tilde{E}}$ meets $\partial B$, then
$A\setminus B$ and $\tilde{A}\setminus\tilde{B}$ are disjoint.
3. (3)
If $[u\to v]_{\partial E}\subset\tilde{A}$ and $[v\to u]_{\partial E}$ meets
$\partial\tilde{A}$, then $\tilde{B}\setminus\tilde{A}$ and $B\setminus A$ are
disjoint.
4. (4)
If $[v\to u]_{\partial E}\subset\tilde{B}$ and $[u\to v]_{\partial E}$ meets
$\partial\tilde{B}$, then $\tilde{A}\setminus\tilde{B}$ and $A\setminus B$ are
disjoint.
###### Proof.
We prove only (1), as (2), (3), (4) are symmetric restatements of it. Suppose
the hypotheses of (1) hold. Then both $\tilde{u}$ and $\tilde{v}$ lie in $A$.
Thus the circular arc $[\tilde{v}\to\tilde{u}]_{\partial\tilde{E}}$ meets
$\partial A$ either exactly twice or not at all, in fact exactly twice because
of the hypotheses. But
$[\tilde{v}\to\tilde{u}]_{\partial\tilde{E}}=[\tilde{v}\to\tilde{u}]_{\partial\tilde{B}}$.
Thus $[\tilde{u}\to\tilde{v}]_{\partial\tilde{B}}$ does not meet $\partial A$,
and has its endpoints lying in $A$, so
$[\tilde{u}\to\tilde{v}]_{\partial\tilde{B}}\subset A$.
From our definitions of $\tilde{u}$ and $\tilde{v}$, it is easy to check that
$\partial(\tilde{B}\setminus\tilde{A})$ is the union of the arcs
$[\tilde{u}\to\tilde{v}]_{\partial\tilde{B}}$ and
$[\tilde{u}\to\tilde{v}]_{\partial\tilde{E}}$. It follows that
$\partial(\tilde{B}\setminus\tilde{A})$ is contained in $A$. Thus
$\tilde{B}\setminus\tilde{A}$ is contained in $A$, and so is disjoint from
$B\setminus A$. ∎
$\tilde{E}$$E$$u$$v$$\tilde{v}$$\tilde{u}$
Figure 16. A topological configuration of two eyes which guarantees that
$A\setminus B$ and $\tilde{A}\setminus\tilde{B}$ do not meet, and that
$B\setminus A$ and $\tilde{B}\setminus\tilde{A}$ do not meet.
###### Lemma 9.7.
Suppose $\\{E,u,v,\tilde{E},\tilde{u},\tilde{v}\\}$ are in the topological
configuration depicted in Figure 16. Then $A\setminus B$ and
$\tilde{A}\setminus\tilde{B}$ do not meet, and $B\setminus A$ and
$\tilde{B}\setminus\tilde{A}$ do not meet.
###### Proof.
The curves $\partial\tilde{A}\setminus\partial\tilde{E}$ and
$\partial\tilde{B}\setminus\partial\tilde{E}$ both have $\tilde{u}$ and
$\tilde{v}$ as their endpoints and otherwise avoid $\tilde{E}$. Thus each must
cross $\partial E$ twice. These four crossings together with the points
$\partial E\cap\partial\tilde{E}$ account for all eight possible intersection
points between $\partial A\cup\partial B$ and
$\partial\tilde{A}\cup\partial\tilde{B}$. Thus the arc
$[\tilde{v}\to\tilde{u}]_{\partial\tilde{E}}$ does not meet $\partial B$.
Because this arc meets $B\supset E$, we conclude that
$[\tilde{v}\to\tilde{u}]_{\partial\tilde{E}}$ is contained in $B$. Note that
$[\tilde{u}\to\tilde{v}]_{\partial\tilde{E}}$ meets $\partial B$. Thus by part
(2) of Lemma 9.6 we get that $A\setminus B$ and $\tilde{A}\setminus\tilde{B}$
are disjoint. That $B\setminus A$ and $\tilde{B}\setminus\tilde{A}$ are
disjoint follows by symmetry. ∎
###### Lemma 9.8.
Suppose $u\in\tilde{E}$ and $\tilde{u}\in E$. Then $A\setminus B$ and
$\tilde{A}\setminus\tilde{B}$ do not meet, or $B\setminus A$ and
$\tilde{B}\setminus\tilde{A}$ do not meet.
###### Proof.
Suppose for contradiction that $u\in\tilde{E}$ and $\tilde{u}\in E$, but that
$A\setminus B$ and $\tilde{A}\setminus\tilde{B}$ meet, and that $B\setminus A$
and $\tilde{B}\setminus\tilde{A}$ meet.
###### Observation 9.9.
Neither of $B\setminus A$ and $\tilde{B}\setminus\tilde{A}$ contains the
other.
To see why this is true, note that $u$ is in fact an interior point of
$\tilde{E}$ by the general position hypothesis, and that
$u\in\partial(B\setminus A)$. Thus $B\setminus A$ meets the exterior of
$\tilde{B}\setminus\tilde{A}$. A similar argument gives that
$\tilde{B}\setminus\tilde{A}$ meets the exterior of $B\setminus A$.
We are supposing for contradiction that $B\setminus A$ and
$\tilde{B}\setminus\tilde{A}$ meet, and by Observation 9.9 neither of them
contains the other. Thus if we can show that $\partial(B\setminus A)$ and
$\partial(\tilde{B}\setminus\tilde{A})$ do not meet we will have derived a
contradiction, as desired.
Note that Proposition 8.2 applies. This allows us to make the following
observation.
###### Observation 9.10.
Either $\diamondsuit$a or $\diamondsuit$e occurs, either $\heartsuit$a or
$\heartsuit$d occurs, either $\spadesuit$a or $\spadesuit$e occurs, and either
$\clubsuit$a or $\clubsuit$d occurs.
We prove that either $\diamondsuit$a or $\diamondsuit$e occurs, and the other
parts of the observation follow similarly. Because
$\tilde{E}\subset\tilde{A}$, we may eliminate any candidate topological
configurations where $u\not\in\tilde{A}$. This eliminates $\diamondsuit$d,
$\diamondsuit$f, $\diamondsuit$g, and $\diamondsuit$h. Next, because
$\tilde{u}\in\partial\tilde{A}$, we may eliminate any candidate topological
configurations where $\partial\tilde{A}$ does not meet $E$, as this would
preclude $\tilde{u}\in E$. This eliminates $\diamondsuit$b and
$\diamondsuit$c, leaving us with only the two claimed possibilities. Thus the
remainder of our proof breaks into cases as follows.
###### Case 1.
Suppose that both $\diamondsuit$a and $\heartsuit$a occur.
Then $\partial(A\setminus B)$ is contained in $\tilde{A}$, and
$\partial(B\setminus A)$ is contained in $\tilde{B}$. Thus
$\partial(A\setminus B)\cup\partial(B\setminus A)$ is contained in
$\tilde{A}\cup\tilde{B}$. But $\partial(A\cup B)$ is contained in
$\partial(A\setminus B)\cup\partial(B\setminus A)$, thus in
$\tilde{A}\cup\tilde{B}$. We conclude that $A\cup
B\subset\tilde{A}\cup\tilde{B}$. Now
$\tilde{u}\in\partial(\tilde{A}\cup\tilde{B})$ and $E\subset A\cup B$, so by
the general position hypothesis we get a contradiction to $\tilde{u}\in E$.
###### Case 2.
Suppose that both $\diamondsuit$a and $\heartsuit$d occur.
Then $u\in\tilde{E}$ and $v\in\tilde{A}\setminus\tilde{B}$. One of the
following two sub-cases occurs.
###### Sub-case 2.1.
Suppose that $\spadesuit$a occurs.
Then $\partial A$ does not meet $\tilde{A}\setminus\tilde{B}$. But $v$ lies on
$\partial A$, contradicting $v\in\tilde{A}\setminus\tilde{B}$.
###### Sub-case 2.2.
Suppose that $\spadesuit$e occurs. Then one of $\clubsuit$a and $\clubsuit$d
occurs.
From $\spadesuit$e and that $u\in\tilde{E}$ and
$v\in\tilde{A}\setminus\tilde{B}$, it follows that $\partial(B\setminus
A)\cap\partial A=[u\to v]_{\partial E}$ does not meet
$\partial(\tilde{B}\setminus\tilde{A})$. If $\clubsuit$a occurs, then
$\partial B\supset\partial(B\setminus A)\cap\partial B$ does not meet
$\partial(\tilde{B}\setminus\tilde{A})$. If $\clubsuit$d occurs, then via
$u\in\tilde{E}$ and $v\in\tilde{A}\setminus\tilde{B}$ we get that
$\partial(B\setminus A)\cap\partial B=[u\to v]_{\partial B}$ does not meet
$\partial(\tilde{B}\setminus\tilde{A})$. In either case $\partial(B\setminus
A)$ and $\partial(\tilde{B}\setminus\tilde{A})$ do not meet, giving us a
contradiction.
Cases (1) and (2) together rule out $\heartsuit$a, $\spadesuit$a, and
$\clubsuit$a by symmetry, so the only remaining case is the following.
###### Case 3.
Suppose that $\diamondsuit$e, $\heartsuit$d, $\spadesuit$e, and $\clubsuit$d
occur.
By $\diamondsuit$e and $\heartsuit$d we have that $u\in\tilde{E}$ and
$v\in\mathbb{C}\setminus(\tilde{A}\cup\tilde{B})$. Then from $\spadesuit$e and
$\clubsuit$d we get that neither $\partial(B\setminus A)\cap\partial A=[u\to
v]_{\partial A}$ nor $\partial(B\setminus A)\cap\partial B=[u\to v]_{\partial
B}$ meets $\partial(\tilde{B}\setminus\tilde{A})$, again giving us the desired
contradiction. ∎
## 10\. Torus parametrization
In this section we introduce a tool, called _torus parametrization_, that
allows us to work with the fixed-point index combinatorially. This will allow us
to systematically and relatively painlessly handle the remaining case
analysis.
Let $K$ and $\tilde{K}$ be closed Jordan domains in transverse position, so
that $\partial K$ and $\partial\tilde{K}$ meet at $2M\geq 0$ points, with
boundaries oriented as usual. Let $\partial
K\allowbreak\cap\allowbreak\partial\tilde{K}=\allowbreak\\{P_{1},\allowbreak\ldots,\allowbreak
P_{M},\allowbreak\tilde{P}_{1},\ldots,\allowbreak\tilde{P}_{M}\\}$, where
$P_{i}$ and $\tilde{P}_{i}$ are labeled so that at every $P_{i}$ we have that
$\partial K$ is entering $\tilde{K}$, and at every $\tilde{P}_{i}$ we have
that $\partial\tilde{K}$ is entering $K$. Imbue $\mathbb{S}^{1}$ with an
orientation and let $\kappa:\partial K\to\mathbb{S}^{1}$ and
$\tilde{\kappa}:\partial\tilde{K}\to\mathbb{S}^{1}$ be orientation-preserving
homeomorphisms. We refer to this as fixing a _torus parametrization_ for $K$
and $\tilde{K}$.
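For instance, if $K$ and $\tilde{K}$ are round closed disks whose boundary circles cross at exactly two points, then $M=1$, and (suitably oriented) angular parametrizations of the two circles may serve as $\kappa$ and $\tilde{\kappa}$.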
We consider a point $(x,y)$ on the 2-torus
$\mathbb{T}=\mathbb{S}^{1}\times\mathbb{S}^{1}$ to be parametrizing
simultaneously a point $\kappa^{-1}(x)\in\partial K$ and a point
$\tilde{\kappa}^{-1}(y)\in\partial\tilde{K}$. We denote by
$p_{i}\in\mathbb{T}$ the unique point $(x,y)\in\mathbb{T}$ satisfying
$\kappa^{-1}(x)=\tilde{\kappa}^{-1}(y)=P_{i}$, and similarly
$\tilde{p}_{i}\in\mathbb{T}$. Note that by the transverse position hypothesis
no pair of points in
$\\{p_{1},\ldots,p_{M},\tilde{p}_{1},\ldots,\tilde{p}_{M}\\}$ share a first
coordinate, nor a second coordinate.
Suppose we pick $(x_{0},y_{0})\in\mathbb{S}^{1}\times\mathbb{S}^{1}$. Then we
may draw an image of $\mathbb{T}=\mathbb{S}^{1}\times\mathbb{S}^{1}$ by
letting $\\{x_{0}\\}\times\mathbb{S}^{1}$ be the vertical axis and letting
$\mathbb{S}^{1}\times\\{y_{0}\\}$ be the horizontal axis. Then we call
$(x_{0},y_{0})$ a _base point_ for the drawing. See Figure 17 for an example.
[Figure 17(a) drawing: curve coordinate data omitted; labels $K$, $u$, $\tilde{K}$, $\tilde{u}$, $P_{1}$, $P_{2}$, $\tilde{P}_{1}$, $\tilde{P}_{2}$.]
(a)
$p_{1}$$p_{2}$$\tilde{p}_{1}$$\tilde{p}_{2}$$(\kappa(u),\tilde{\kappa}(\tilde{u}))$
(b)
Figure 17. A pair of closed Jordan domains $K$ and $\tilde{K}$ and a torus
parametrization for them, drawn with base point
$(\kappa(u),\tilde{\kappa}(\tilde{u}))$. The key points to check are that as
we vary the first coordinate of $\mathbb{T}$ positively starting at
$\kappa(u)$, we arrive at $\kappa(P_{1})$, $\kappa(\tilde{P}_{1})$,
$\kappa(P_{2})$, and $\kappa(\tilde{P}_{2})$ in that order, and as we vary the
second coordinate of $\mathbb{T}$ positively starting at
$\tilde{\kappa}(\tilde{u})$, we arrive at $\tilde{\kappa}(P_{1})$,
$\tilde{\kappa}(\tilde{P}_{2})$, $\tilde{\kappa}(P_{2})$, and
$\tilde{\kappa}(\tilde{P}_{1})$ in that order.
Suppose that $\phi:\partial K\to\partial\tilde{K}$ is an orientation-
preserving homeomorphism. Then $\phi$ determines an oriented curve $\gamma$ in
$\mathbb{T}$ for us, namely its graph
$\gamma=\\{(\kappa(z),\tilde{\kappa}(\phi(z)))\\}_{z\in\partial K}$, with
orientation obtained by traversing $\partial K$ positively. Note that $\phi$
is fixed-point-free if and only if its associated curve $\gamma$ misses all of
the $p_{i}$ and $\tilde{p}_{i}$. Pick $u\in\partial K$ and denote
$\tilde{u}=\phi(u)$. Then if we draw the torus parametrization for $K$ and
$\tilde{K}$ using the base point $(\kappa(u),\tilde{\kappa}(\tilde{u}))$, the
curve $\gamma$ associated to $\phi$ “looks like the graph of a strictly
increasing function.” The converse is also true: given any such $\gamma$, it
determines for us an orientation-preserving homeomorphism $\partial
K\to\partial\tilde{K}$ sending $u$ to $\tilde{u}$, which is fixed-point-free
if and only if $\gamma$ misses all of the $p_{i}$ and $\tilde{p}_{i}$.
The nicest thing about torus parametrization is that it allows us to compute
$\eta(\phi)$ easily by looking at the curve $\gamma$ associated to $\phi$. In
particular, suppose that $\phi(u)=\tilde{u}$, equivalently that
$(\kappa(u),\tilde{\kappa}(\tilde{u}))\in\gamma$. The curve $\gamma$ and the
horizontal and vertical axes
$\mathbb{S}^{1}\times\\{\tilde{\kappa}(\tilde{u})\\}$ and
$\\{\kappa(u)\\}\times\mathbb{S}^{1}$ divide $\mathbb{T}$ into two simply
connected open sets $\Delta_{\uparrow}(u,\gamma)$ and
$\Delta_{\downarrow}(u,\gamma)$ as shown in Figure 18. We suppress the
dependence on $\tilde{u}$ in the notation because $\tilde{u}=\phi(u)$. If
neither $u\in\partial\tilde{K}$ nor $\tilde{u}\in\partial K$ then every
$p_{i}$ and every $\tilde{p}_{i}$ lies in either
$\Delta_{\downarrow}(u,\gamma)$ or $\Delta_{\uparrow}(u,\gamma)$. In this case
we write $\\#p_{\downarrow}(u,\gamma)$ to denote
$|\\{p_{1},\ldots,p_{M}\\}\cap\Delta_{\downarrow}(u,\gamma)|$, the number of
points $p_{i}$ which lie in $\Delta_{\downarrow}(u,\gamma)$, and we define
$\\#p_{\uparrow}(u,\gamma)$, $\\#\tilde{p}_{\downarrow}(u,\gamma)$, and
$\\#\tilde{p}_{\uparrow}(u,\gamma)$ in the analogous way. Denote by
$\omega(\alpha,z)$ the winding number of the closed curve
$\alpha\subset\mathbb{C}$ around the point $z\not\in\alpha$. Then:
[Figure 18 drawing: curve coordinate data omitted; labels $(\kappa(u),\tilde{\kappa}(\tilde{u}))$, $\Delta_{\uparrow}(u,\gamma)$, $\Delta_{\downarrow}(u,\gamma)$, $\gamma$, $\mathbb{S}^{1}\times\\{\tilde{\kappa}(\tilde{u})\\}$, $\\{\kappa(u)\\}\times\mathbb{S}^{1}$, $p_{1}$, $\tilde{p}_{1}$, $p_{2}$, $\tilde{p}_{2}$.]
Figure 18. A homotopy from $\partial\Delta_{\downarrow}(u,\gamma)$ to
$\Gamma$. Here the orientation shown on $\gamma$ is the opposite of the
orientation induced by traversing $\partial K$ positively.
###### Lemma 10.1.
Let $K$ and $\tilde{K}$ be closed Jordan domains. Fix a torus parametrization
of $K$ and $\tilde{K}$ via $\kappa$ and $\tilde{\kappa}$. Let $\phi:\partial
K\to\partial\tilde{K}$ be an indexable homeomorphism, with graph $\gamma$ in
$\mathbb{T}$. Suppose that $\phi(u)=\tilde{u}$, where
$u\not\in\partial\tilde{K}$ and $\tilde{u}\not\in\partial K$. Then:
(3) $\eta(\phi)=w(\gamma)=\omega(\partial K,\tilde{u})+\omega(\partial\tilde{K},u)-\\#p_{\downarrow}(u,\gamma)+\\#\tilde{p}_{\downarrow}(u,\gamma)$

(4) $\eta(\phi)=w(\gamma)=\omega(\partial K,\tilde{u})+\omega(\partial\tilde{K},u)+\\#p_{\uparrow}(u,\gamma)-\\#\tilde{p}_{\uparrow}(u,\gamma)$
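Before giving the proof, here is a quick sanity check, not needed later: if $\partial K$ and $\partial\tilde{K}$ do not meet then $M=0$ and the formula reduces to $\eta(\phi)=\omega(\partial K,\tilde{u})+\omega(\partial\tilde{K},u)$. For disjoint domains both winding numbers vanish and $\eta(\phi)=0$, while if one domain is contained in the interior of the other then exactly one of the two winding numbers equals $1$, giving $\eta(\phi)=1$.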
###### Proof.
Suppose $\gamma_{0}$ is any oriented closed curve in
$\mathbb{T}\setminus\\{p_{1},\ldots,p_{M},\tilde{p}_{1},\ldots,\tilde{p}_{M}\\}$.
Then the closed curve
$\\{\tilde{\kappa}^{-1}(y)-\kappa^{-1}(x)\\}_{(x,y)\in\gamma_{0}}$ misses the
origin, and has a natural orientation obtained by traversing $\gamma_{0}$
positively. We denote by $w(\gamma_{0})$ the winding number around the origin
of $\\{\tilde{\kappa}^{-1}(y)-\kappa^{-1}(x)\\}_{(x,y)\in\gamma_{0}}$. First:
###### Observation 10.2.
If $\gamma_{1}$ and $\gamma_{2}$ are homotopic in
$\mathbb{T}\setminus\\{p_{1},\ldots,p_{M},\tilde{p}_{1},\ldots,\tilde{p}_{M}\\}$
then $w(\gamma_{1})=w(\gamma_{2})$.
This is because the homotopy between $\gamma_{1}$ and $\gamma_{2}$ in
$\mathbb{T}\setminus\\{p_{1},\ldots,p_{M},\tilde{p}_{1},\ldots,\tilde{p}_{M}\\}$
induces a homotopy between the closed curves
$\\{\tilde{\kappa}^{-1}(y)-\kappa^{-1}(x)\\}_{(x,y)\in\gamma_{1}}$ and
$\\{\tilde{\kappa}^{-1}(y)-\kappa^{-1}(x)\\}_{(x,y)\in\gamma_{2}}$ in the
punctured plane $\mathbb{C}\setminus\\{0\\}$.
If $\gamma$ has orientation induced by traversing $\partial K$ and
$\partial\tilde{K}$ positively, then the following is a tautology.
###### Observation 10.3.
$\eta(\phi)=w(\gamma)$
Orient $\partial\Delta_{\downarrow}(u,\gamma)$ as shown in Figure 18. Then
$\partial\Delta_{\downarrow}(u,\gamma)$ is the concatenation of the curve
$\gamma$ traversed backwards with
$\mathbb{S}^{1}\times\\{\tilde{\kappa}(\tilde{u})\\}$ and
$\\{\kappa(u)\\}\times\mathbb{S}^{1}$, where the two latter curves are
oriented according to the positive orientation on $\mathbb{S}^{1}$.
[Figure 19 drawing: curve coordinate data omitted; labels $K$, $\tilde{K}$, $P_{i}$, $\kappa^{-1}(x_{0})$, $\kappa^{-1}(x_{1})$, $\tilde{\kappa}^{-1}(y_{0})$, $\tilde{\kappa}^{-1}(y_{1})$.]
Figure 19. The local picture near $P_{i}$. This allows us to compute the
“local fixed-point index” $w(\zeta(p_{i}))$ of $\phi$ near $P_{i}$.
###### Observation 10.4.
If $\mathbb{S}^{1}\times\\{\tilde{\kappa}(\tilde{u})\\}$ and
$\\{\kappa(u)\\}\times\mathbb{S}^{1}$ are oriented according to the positive
orientation on $\mathbb{S}^{1}$, then
$w(\mathbb{S}^{1}\times\\{\tilde{\kappa}(\tilde{u})\\})=\omega(\partial
K,\tilde{u})$ and
$w(\\{\kappa(u)\\}\times\mathbb{S}^{1})=\omega(\partial\tilde{K},u)$.
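To see why, note that as $x$ traverses $\mathbb{S}^{1}$ positively, the curve $\\{\tilde{\kappa}^{-1}(\tilde{\kappa}(\tilde{u}))-\kappa^{-1}(x)\\}=\\{\tilde{u}-z\\}_{z\in\partial K}$ is obtained from $\\{z-\tilde{u}\\}_{z\in\partial K}$ by the rotation $z\mapsto-z$, which does not change the winding number around the origin; so $w(\mathbb{S}^{1}\times\\{\tilde{\kappa}(\tilde{u})\\})=\omega(\partial K,\tilde{u})$. The second equality is proved in the same way.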
It is also easy to see that if we concatenate two closed curves $\gamma_{1}$
and $\gamma_{2}$ that meet at a point, we get
$w(\gamma_{1}\circ\gamma_{2})=w(\gamma_{1})+w(\gamma_{2})$. Thus in light of
the orientations on $\partial\Delta_{\downarrow}(u,\gamma)$ and all other
curves concerned we get:
$\displaystyle w(\partial\Delta_{\downarrow}(u,\gamma))$
$\displaystyle=w(\mathbb{S}^{1}\times\\{\tilde{\kappa}(\tilde{u})\\})+w(\\{\kappa(u)\\}\times\mathbb{S}^{1})-w(\gamma)$
$\displaystyle=\omega(\partial
K,\tilde{u})+\omega(\partial\tilde{K},u)-\eta(\phi)$
For every $i$ let $\zeta(p_{i})$ and $\zeta(\tilde{p}_{i})$ be small squares
around $p_{i}$ and $\tilde{p}_{i}$ respectively in $\mathbb{T}$, oriented as
shown in Figure 18. By _square_ we mean a simple closed curve which decomposes
into four “sides,” so that on a given side one of the two coordinates of
$\mathbb{S}^{1}\times\mathbb{S}^{1}=\mathbb{T}$ is constant. Pick the
squares small enough so that the closed boxes they bound are pairwise
disjoint and do not meet $\partial\Delta_{\downarrow}(u,\gamma)$.
Let $\Gamma$ be the closed curve in $\Delta_{\downarrow}(u,\gamma)$ obtained
in the following way. First, start with every loop $\zeta(p_{i})$ and
$\zeta(\tilde{p}_{i})$ for those $p_{i}$ and $\tilde{p}_{i}$ lying in
$\Delta_{\downarrow}(u,\gamma)$. Let $\delta_{0}$ be an arc contained in the
interior of $\Delta_{\downarrow}(u,\gamma)$ which meets each $\zeta(p_{i})$
and $\zeta(\tilde{p}_{i})$ contained in $\Delta_{\downarrow}(u,\gamma)$ at
exactly one point. It is easy to prove inductively that such an arc exists.
Let $\delta$ be the closed curve obtained by traversing $\delta_{0}$ first in
one direction, then in the other. Then let $\Gamma$ be obtained by
concatenating $\delta$ with every $\zeta(p_{i})$ and $\zeta(\tilde{p}_{i})$
contained in $\Delta_{\downarrow}(u,\gamma)$.
###### Observation 10.5.
The curves $\Gamma$ and $\partial\Delta_{\downarrow}(u,\gamma)$ are homotopic
in $\mathbb{T}\setminus\\{p_{1},\ldots,\allowbreak
p_{M},\allowbreak\tilde{p}_{1},\ldots,\allowbreak\tilde{p}_{M}\\}$. Also
$w(\delta)=0$. It follows that:
$w(\partial\Delta_{\downarrow}(u,\gamma))=w(\Gamma)=\sum_{p_{i}\in\Delta_{\downarrow}(u,\gamma)}w(\zeta(p_{i}))+\sum_{\tilde{p}_{i}\in\Delta_{\downarrow}(u,\gamma)}w(\zeta(\tilde{p}_{i}))$
See Figure 18 for an example. On the other hand, the following holds.
###### Observation 10.6.
$w(\zeta(p_{i}))=1$, $w(\zeta(\tilde{p}_{i}))=-1$
To see why, suppose that $\zeta(p_{i})=\partial([x_{0}\to
x_{1}]_{\mathbb{S}^{1}}\times[y_{0}\to y_{1}]_{\mathbb{S}^{1}})$. Then up to
orientation-preserving homeomorphism the picture near $P_{i}$ is as in Figure
19. We let $(x,y)$ traverse $\zeta(p_{i})$ positively starting at
$(x_{0},y_{0})$, keeping track of the vector
$\tilde{\kappa}^{-1}(y)-\kappa^{-1}(x)$ as we do so. The vector
$\tilde{\kappa}^{-1}(y_{0})-\kappa^{-1}(x_{0})$ points to the right.
As $x$ varies from $x_{0}$ to $x_{1}$, the vector
$\tilde{\kappa}^{-1}(y)-\kappa^{-1}(x)$ rotates in the positive
direction, that is, counter-clockwise, until it arrives at
$\tilde{\kappa}^{-1}(y_{0})-\kappa^{-1}(x_{1})$, which points upward.
Continuing in this fashion, we see that
$\tilde{\kappa}^{-1}(y)-\kappa^{-1}(x)$ makes one full counter-clockwise
rotation as we traverse $\zeta(p_{i})$. The proof that
$w(\zeta(\tilde{p}_{i}))=-1$ is similar. Combining our observations now
establishes equation 3: by Observations 10.5 and 10.6 we have
$w(\partial\Delta_{\downarrow}(u,\gamma))=\\#p_{\downarrow}(u,\gamma)-\\#\tilde{p}_{\downarrow}(u,\gamma)$,
while the computation above gives
$w(\partial\Delta_{\downarrow}(u,\gamma))=\omega(\partial K,\tilde{u})+\omega(\partial\tilde{K},u)-\eta(\phi)$;
equating the two and solving for $\eta(\phi)$ yields equation 3. The proof
that equation 4 holds is similar. ∎
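The identity $\eta(\phi)=w(\gamma)$ is also easy to check numerically on simple configurations. The following sketch is an illustration only and is not used anywhere in the argument; the function names and the two sample configurations are ours, and we take, as in Observation 10.3, $\eta(\phi)$ to be the winding number of the displacement $\phi(z)-z$ around the origin as $z$ traverses $\partial K$ positively.

```python
import numpy as np

def winding_number(curve):
    # Winding number around the origin of a closed curve, given as a complex
    # array of samples in traversal order (the curve is closed by joining the
    # last sample back to the first).
    turns = np.angle(np.roll(curve, -1) / curve)  # turning increments in (-pi, pi]
    return int(round(turns.sum() / (2.0 * np.pi)))

def eta(phi, n=4000):
    # Approximate eta(phi): the winding number of {phi(z) - z} as z traverses
    # the unit circle (playing the role of the boundary of K) positively.
    z = np.exp(1j * np.linspace(0.0, 2.0 * np.pi, n, endpoint=False))
    return winding_number(phi(z) - z)

# Two fixed-point-free orientation-preserving homeomorphisms of the unit circle
# onto another circle:
print(eta(lambda z: z + 3.0))  # boundary of a disjoint disk: prints 0
print(eta(lambda z: 0.5 * z))  # boundary of a disk nested inside K: prints 1
```

The outputs $0$ and $1$ agree with the $M=0$ sanity check noted after the statement of Lemma 10.1.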
$p_{1}$$\tilde{p}_{1}$$\tilde{s}_{1}$$\tilde{s}_{2}$$s_{1}$$s_{2}$
(a)
$p_{1}$$\tilde{p}_{1}$$(\kappa(u),\tilde{\kappa}(\tilde{u}))$$\tilde{t}_{1}$$\tilde{t}_{2}$$\tilde{t}_{3}$$t_{1}$$t_{2}$$t_{3}$
(b) $(\kappa(u),\tilde{\kappa}(\tilde{u}))=(s_{1},\tilde{s}_{2})$
Figure 20. Two drawings of a torus parametrization for two eyes whose
boundaries meet exactly twice. There is some choice of base point giving the
drawing on the left. The drawing on the right is the same torus
parametrization drawn using the base point
$(\kappa(u),\tilde{\kappa}(\tilde{u}))=(s_{1},\tilde{s}_{2})$.
## 11\. Proof of Propositions 11.1 and 11.5
In this section, we prove the two remaining propositions needed to
complete the proof of our Main Index Theorem 5.3 and thus our main rigidity
results.
###### Proposition 11.1.
Let $\\{A,B\\}$ and $\\{\tilde{A},\tilde{B}\\}$ be pairs of overlapping closed
disks in the plane $\mathbb{C}$ in general position. Suppose that neither of
$E=A\cap B$ and $\tilde{E}=\tilde{A}\cap\tilde{B}$ contains the other. Suppose
further that $A\setminus B$ and $\tilde{A}\setminus\tilde{B}$ meet, and that
$B\setminus A$ and $\tilde{B}\setminus\tilde{A}$ meet. Then there is a
faithful indexable homeomorphism $\epsilon:\partial E\to\partial\tilde{E}$
satisfying $\eta(\epsilon)=0$.
###### Proof.
If $\partial E$ and $\partial\tilde{E}$ do not meet, then since neither of $E$
and $\tilde{E}$ contains the other we get that $E$ and $\tilde{E}$ are
disjoint. Then any indexable homeomorphism $\epsilon:\partial
E\to\partial\tilde{E}$ satisfies $\eta(\epsilon)=0$. Thus suppose that
$\partial E$ and $\partial\tilde{E}$ meet. Fix a torus parametrization for $E$
and $\tilde{E}$ via $\kappa:\partial E\to\mathbb{S}^{1}$ and
$\tilde{\kappa}:\partial\tilde{E}\to\mathbb{S}^{1}$. As before denote by
$p_{i}$ the points of $\partial E\cap\partial\tilde{E}$ where $\partial E$ is
entering $\tilde{E}$, and by $\tilde{p}_{i}$ those where $\partial\tilde{E}$
is entering $E$. Note that $\partial E$ and $\partial\tilde{E}$ meet at
exactly 2, 4, or 6 points by Lemma 9.3. The proof breaks into these three
cases.
###### Case 1.
Suppose that $\partial E$ and $\partial\tilde{E}$ meet at exactly two points.
Then with an appropriate choice of base point, the torus parametrization for
$E$ and $\tilde{E}$ is as shown in Figure 20a. The points
$s_{1},s_{2}\in\mathbb{S}^{1}$ in Figure 20a are exactly the topologically
distinct places where $\kappa(u)$ may be, similarly
$\tilde{s}_{1},\tilde{s}_{2}\in\mathbb{S}^{1}$ for
$\tilde{\kappa}(\tilde{u})$. A choice of
$(s_{j},\tilde{s}_{\tilde{j}})=(\kappa(u),\tilde{\kappa}(\tilde{u}))$
completely determines the topological configuration of
$\\{E,\tilde{E},u,\tilde{u}\\}$, and conversely every possible topological
configuration of those sets is achieved via this procedure. By Lemma 9.8 we
may suppose without loss of generality that $u\not\in\tilde{E}$, thus that
$\kappa(u)=s_{1}$.
$(\kappa(u),\tilde{\kappa}(\tilde{u}))$$t_{1}$$t_{2}$$t_{3}$$\tilde{t}_{1}$$\tilde{t}_{2}$$\tilde{t}_{3}$$p_{1}$$\tilde{p}_{1}$
(a) $(\kappa(u),\tilde{\kappa}(\tilde{u}))=(s_{1},\tilde{s}_{1})$
$(\kappa(u),\tilde{\kappa}(\tilde{u}))$$t_{1}$$t_{2}$$t_{3}$$\tilde{t}_{1}$$\tilde{t}_{2}$$\tilde{t}_{3}$$p_{1}$$\tilde{p}_{1}$
(b) $(\kappa(u),\tilde{\kappa}(\tilde{u}))=(s_{1},\tilde{s}_{2})$
Figure 21. Graphs of homeomorphisms $\epsilon$ giving $\eta(\epsilon)=0$ for
a pair of eyes whose boundaries meet twice. The torus parametrizations are
drawn using the indicated choice of base point.
Suppose first that $\tilde{\kappa}(\tilde{u})=\tilde{s}_{2}$. In Figure 20b we
redraw the torus parametrization for $E$ and $\tilde{E}$ using the base point
$(s_{1},\tilde{s}_{2})=(\kappa(u),\tilde{\kappa}(\tilde{u}))$. Then the points
$t_{1},t_{2},t_{3}\in\mathbb{S}^{1}$ are exactly the topologically distinct
places where $\kappa(v)$ may be, similarly
$\tilde{t}_{1},\tilde{t}_{2},\tilde{t}_{3}\in\mathbb{S}^{1}$ for
$\tilde{\kappa}(\tilde{v})$.
###### Observation 11.2.
A choice of
$(s_{j},\tilde{s}_{\tilde{j}})=(\kappa(u),\tilde{\kappa}(\tilde{u}))$ and a
subsequent choice of
$(t_{k},\tilde{t}_{\tilde{k}})=(\kappa(v),\tilde{\kappa}(\tilde{v}))$ together
completely determine the topological configuration of
$\\{E,\tilde{E},u,\tilde{u},v,\tilde{v}\\}$. Conversely every possible
topological configuration of
$\\{E,\tilde{E},u,\tilde{u},v,\allowbreak\tilde{v}\\}$ is achieved by some
choice of $(s_{j},\tilde{s}_{\tilde{j}})$, and then a subsequent choice of
$(t_{k},\tilde{t}_{\tilde{k}})$, for $(\kappa(u),\tilde{\kappa}(\tilde{u}))$
and $(\kappa(v),\tilde{\kappa}(\tilde{v}))$ respectively.
$p_{1}$$\tilde{p}_{1}$$p_{2}$$\tilde{p}_{2}$$\tilde{s}_{1}$$\tilde{s}_{2}$$\tilde{s}_{3}$$\tilde{s}_{4}$$s_{1}$$s_{2}$$s_{3}$$s_{4}$
(a)
$p_{1}$$\tilde{p}_{1}$$p_{2}$$\tilde{p}_{2}$$t_{1}$$t_{2}$$t_{3}$$t_{4}$$t_{5}$$\tilde{t}_{1}$$\tilde{t}_{2}$$\tilde{t}_{3}$$\tilde{t}_{4}$$\tilde{t}_{5}$
(b) $(\kappa(u),\tilde{\kappa}(\tilde{u}))=(s_{1},\tilde{s}_{1})$
$p_{1}$$\tilde{p}_{1}$$p_{2}$$\tilde{p}_{2}$$t_{1}$$t_{2}$$t_{3}$$t_{4}$$t_{5}$$\tilde{t}_{1}$$\tilde{t}_{2}$$\tilde{t}_{3}$$\tilde{t}_{4}$$\tilde{t}_{5}$
(c) $(\kappa(u),\tilde{\kappa}(\tilde{u}))=(s_{1},\tilde{s}_{2})$
$p_{1}$$\tilde{p}_{1}$$p_{2}$$\tilde{p}_{2}$$t_{1}$$t_{2}$$t_{3}$$t_{4}$$t_{5}$$\tilde{t}_{1}$$\tilde{t}_{2}$$\tilde{t}_{3}$$\tilde{t}_{4}$$\tilde{t}_{5}$
(d) $(\kappa(u),\tilde{\kappa}(\tilde{u}))=(s_{1},\tilde{s}_{3})$
Figure 22. The situation if two eyes’ boundaries meet four times. Figure 22a
shows the torus parametrization for $E$ and $\tilde{E}$ with some suitable
choice of base point. Figures 22b–22d give graphs of homeomorphisms $\epsilon$
giving $\eta(\epsilon)=0$, with torus parametrizations drawn using base point
$(\kappa(u),\tilde{\kappa}(\tilde{u}))=(s_{j},\tilde{s}_{\tilde{j}})$ as
indicated.
We are currently working under the assumption that
$(\kappa(u),\tilde{\kappa}(\tilde{u}))=(s_{1},\tilde{s}_{2})$. For every
choice of
$(t_{k},\tilde{t}_{\tilde{k}})=(\kappa(v),\tilde{\kappa}(\tilde{v}))$ we hope
to find a faithful indexable homeomorphism $\epsilon:\partial
E\to\partial\tilde{E}$ so that $\eta(\epsilon)=0$.
###### Observation 11.3.
Suppose we have drawn the parametrization for $E$ and $\tilde{E}$ using
$(s_{j},\tilde{s}_{\tilde{j}})=(\kappa(u),\tilde{\kappa}(\tilde{u}))$ as the
base point. Then finding a faithful indexable homeomorphism $\epsilon:\partial
E\to\partial\tilde{E}$ amounts to finding a curve $\gamma$ in
$\mathbb{T}\setminus\\{p_{1},\ldots,p_{M},\tilde{p}_{1},\ldots,\tilde{p}_{M}\\}$
which “looks like the graph of a strictly increasing function,” from the
lower-left-hand corner $(\kappa(u),\tilde{\kappa}(\tilde{u}))$ to the upper-right-hand
corner, passing through
$(\kappa(v),\tilde{\kappa}(\tilde{v}))=(t_{k},\tilde{t}_{\tilde{k}})$. Having
fixed such a curve $\gamma$, we may compute $\eta(\epsilon)$, where $\epsilon$
is the homeomorphism associated to $\gamma$, using Lemma 10.1.
In our current situation $\kappa(u)=s_{1}$ implies that $u\not\in\tilde{E}$,
and $\tilde{\kappa}(\tilde{u})=\tilde{s}_{2}$ implies that $\tilde{u}\not\in
E$. Thus by Lemma 10.1 we wish to find curves $\gamma$ so that both $p_{1}$
and $\tilde{p}_{1}$ lie in the upper diagonal $\Delta_{\uparrow}(u,\gamma)$,
or both lie in the lower diagonal $\Delta_{\downarrow}(u,\gamma)$. Figure 21b
depicts such a $\gamma$ for every $(t_{k},\tilde{t}_{\tilde{k}})$ except for
$(t_{2},\tilde{t}_{2})$. Suppose
$(t_{2},\tilde{t}_{2})=(\kappa(v),\tilde{\kappa}(\tilde{v}))$. Then
$v\in\tilde{E}$ and $\tilde{v}\in E$, so we get a contradiction by Lemma 9.8.
From now on points $(t_{k},\tilde{t}_{\tilde{k}})$ which are handled via Lemma
9.8 will be labeled with an asterisk, as in Figure 21b.
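Concretely, in this case formula (3) reads as follows (we spell it out for the reader; this is just Lemma 10.1 applied with $K=E$, $\tilde{K}=\tilde{E}$): since $u\not\in\tilde{E}$ and $\tilde{u}\not\in E$ we have $\omega(\partial E,\tilde{u})=\omega(\partial\tilde{E},u)=0$, so $\eta(\epsilon)=-\\#p_{\downarrow}(u,\gamma)+\\#\tilde{p}_{\downarrow}(u,\gamma)$, which vanishes exactly when $p_{1}$ and $\tilde{p}_{1}$ lie in the same diagonal.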
Next suppose that
$(\kappa(u),\tilde{\kappa}(\tilde{u}))=(s_{1},\tilde{s}_{1})$. The situation
is depicted in Figure 21a. Then $u\not\in\tilde{E}$ and $\tilde{u}\in E$, so
to achieve $\eta(\epsilon)=0$ we wish to find curves $\gamma$ so that
$p_{1}\in\Delta_{\downarrow}(u,\gamma)$ and
$\tilde{p}_{1}\in\Delta_{\uparrow}(u,\gamma)$. This time there are four
$(t_{k},\tilde{t}_{\tilde{k}})$ for which this is not possible. For
$(\kappa(v),\tilde{\kappa}(\tilde{v}))=(t_{2},\tilde{t}_{1}),(t_{2},\tilde{t}_{3})$
we again get contradictions via Lemma 9.8. The following observation will be
helpful for
$(\kappa(v),\tilde{\kappa}(\tilde{v}))=(t_{1},\tilde{t}_{3}),(t_{3},\tilde{t}_{1})$.
###### Observation 11.4.
Choose $(s_{j},\tilde{s}_{\tilde{j}})=(\kappa(u),\tilde{\kappa}(\tilde{u}))$
and draw our torus parametrization for $E$ and $\tilde{E}$ using
$(\kappa(u),\tilde{\kappa}(\tilde{u}))$ as the base point. Then a choice of
$(t_{k},\tilde{t}_{\tilde{k}})=(\kappa(v),\tilde{\kappa}(\tilde{v}))$ defines
for us four “quadrants,” namely
$[\kappa(u)\to\kappa(v)]_{\mathbb{S}^{1}}\times[\tilde{\kappa}(\tilde{u})\to\tilde{\kappa}(\tilde{v})]_{\mathbb{S}^{1}}$,
the points “below and to the left of” $(t_{k},\tilde{t}_{\tilde{k}})$, and so on.
Then which of the two arcs $\partial A\cap\partial E$ and $\partial
B\cap\partial E$, and which of $\partial\tilde{A}\cap\partial\tilde{E}$ and
$\partial\tilde{B}\cap\partial\tilde{E}$, a point $P_{i}$ or $\tilde{P}_{i}$
lies on is determined by which quadrant $p_{i}$ or $\tilde{p}_{i}$ lies in.
For example, suppose
$(\kappa(v),\tilde{\kappa}(\tilde{v}))=(t_{1},\tilde{t}_{3})$. Then $p_{1}$ and
$\tilde{p}_{1}$ lie in the lower-right-hand quadrant
$[\kappa(v)\to\kappa(u)]_{\mathbb{S}^{1}}\times[\tilde{\kappa}(\tilde{u})\to\tilde{\kappa}(\tilde{v})]_{\mathbb{S}^{1}}$,
so both $P_{1}$ and $\tilde{P}_{1}$ lie on $\partial E\cap\partial B=[v\to
u]_{\partial E}$ and on
$\partial\tilde{E}\cap\partial\tilde{A}=[\tilde{u}\to\tilde{v}]_{\partial\tilde{E}}$.
Also $[\tilde{v}\to\tilde{u}]_{\partial\tilde{E}}$ is contained in $E$, because both
$\tilde{v}$ and $\tilde{u}$ are, and no $p_{i}$ nor any $\tilde{p}_{i}$ lies in
the two upper quadrants
$\mathbb{S}^{1}\times[\tilde{\kappa}(\tilde{v})\to\tilde{\kappa}(\tilde{u})]_{\mathbb{S}^{1}}$. Then we get a
contradiction via Lemma 9.6. A similar argument gives us a contradiction via
Lemma 9.6 for $(\kappa(v),\tilde{\kappa}(\tilde{v}))=(t_{3},\tilde{t}_{1})$.
From now on points $(t_{k},\tilde{t}_{\tilde{k}})$ which are handled via Lemma
9.6 in this way will be labeled with a diamond, as in Figure 21a. This
completes the proof of Proposition 11.1 when $\partial E$ and
$\partial\tilde{E}$ meet at exactly two points.
###### Case 2.
Suppose that $\partial E$ and $\partial\tilde{E}$ meet at exactly four points.
Lemma 9.2 guarantees that with a correct choice of base point, the torus
parametrization for $E$ and $\tilde{E}$ is as in Figure 22a. As before, we may
suppose without loss of generality that $u\not\in\tilde{E}$, thus
$\kappa(u)=s_{1}$, by Lemma 9.8 and relabeling the $s_{i}$ if necessary. Thus
we have the possibilities
$\tilde{\kappa}(\tilde{u})=\tilde{s}_{1},\tilde{s}_{2},\tilde{s}_{3},\tilde{s}_{4}$
to consider. The cases
$(\kappa(u),\tilde{\kappa}(\tilde{u}))=(s_{1},\tilde{s}_{2})$ and
$(\kappa(u),\tilde{\kappa}(\tilde{u}))=(s_{1},\tilde{s}_{4})$ are symmetric by
Figure 23. Figures 22b–22d give the solutions for
$\tilde{\kappa}(\tilde{u})=\tilde{s}_{1},\tilde{s}_{2},\tilde{s}_{3}$, modulo some
remaining special cases.
$u=\kappa^{-1}(s_{1})$$\tilde{u}=\tilde{\kappa}^{-1}(\tilde{s}_{2})$
(a)
$u=\kappa^{-1}(s_{1})$$\tilde{u}=\tilde{\kappa}^{-1}(\tilde{s}_{4})$
(b)
Figure 23. The topological configurations of $\\{E,u,\tilde{u}\\}$ leading to
the cases
$(\kappa(u),\tilde{\kappa}(\tilde{u}))=(s_{1},\tilde{s}_{2}),(s_{1},\tilde{s}_{4})$. We
see that these are equivalent via a rotation, because
$\eta(\epsilon)=\eta(\epsilon^{-1})$.
Points $(t_{k},\tilde{t}_{\tilde{k}})$ labeled with an asterisk or a diamond
are handled via Lemma 9.8 or 9.6 respectively as before. Suppose
$(\kappa(u),\tilde{\kappa}(\tilde{u}))=(s_{1},\tilde{s}_{1})$, and
$(\kappa(v),\tilde{\kappa}(\tilde{v}))=(t_{1},\tilde{t}_{1})$ in Figure 22b.
Then the upper-right-hand quadrant defined for us by $(t_{1},\tilde{t}_{1})$
contains all four points $p_{1},p_{2},\tilde{p}_{1},\tilde{p}_{2}$, thus the
circular arcs $[v\to u]_{\partial E}$ and
$[\tilde{v}\to\tilde{u}]_{\partial\tilde{E}}$ meet four times, a
contradiction. All points that are handled in this way are labeled with a
small circle. Finally, if
$(\kappa(u),\tilde{\kappa}(\tilde{u}))=(s_{1},\tilde{s}_{3})$ and
$(\kappa(v),\tilde{\kappa}(\tilde{v}))=(t_{3},\tilde{t}_{3})$ in Figure 22d,
we get a contradiction via Lemma 9.7.
###### Case 3.
Suppose that $\partial E$ and $\partial\tilde{E}$ meet at exactly six points.
Then Lemma 9.4 restricts us to two cases to consider. These are handled in
Figure 24. This completes the proof of Proposition 11.1. ∎
$(\kappa(u),\tilde{\kappa}(\tilde{u}))$$p_{1}$$p_{2}$$p_{3}$$\tilde{p}_{1}$$\tilde{p}_{2}$$\tilde{p}_{3}$$(v,\tilde{v})$
(a)
$(\kappa(u),\tilde{\kappa}(\tilde{u}))$$p_{1}$$p_{2}$$p_{3}$$\tilde{p}_{1}$$\tilde{p}_{2}$$\tilde{p}_{3}$$(v,\tilde{v})$
(b)
Figure 24. Torus parametrizations for the eyes depicted in Figure 14. Both
are drawn with base point $(\kappa(u),\tilde{\kappa}(\tilde{u}))$. Each of the
two curves is the graph of a faithful indexable homeomorphism
$\epsilon:\partial E\to\partial\tilde{E}$ satisfying $\eta(\epsilon)=0$.
###### Proposition 11.5.
Let $\mathcal{D}=\\{D_{1},\ldots,D_{n}\\}$ and
$\tilde{\mathcal{D}}=\\{\tilde{D}_{1},\ldots,\tilde{D}_{n}\\}$ be as in the
statement of our Main Index Theorem 5.3. That is, they are thin disk
configurations in the plane $\mathbb{C}$ in general position, realizing the
same pair $(G,\Theta)$ where $G=(V,E)$ is a graph and $\Theta:E\to[0,\pi)$. In
addition, suppose that for all $i,j$ the sets $D_{i}\setminus D_{j}$ and
$\tilde{D}_{i}\setminus\tilde{D}_{j}$ meet. Suppose that there is no $i$ so
that one of $D_{i}$ and $\tilde{D}_{i}$ contains the other. Suppose that for
every disjoint non-empty $I,J\subset\\{1,\ldots,n\\}$ so that $I\sqcup
J=\\{1,\ldots,n\\}$, there exists an eye $E_{ij}$ with $i\in I$ and $j\in J$
so that one of $E_{ij}$ and $\tilde{E}_{ij}$ contains the other. Then for
every $i$ we have that any faithful indexable homeomorphism
$\delta_{i}:\partial D_{i}\to\partial\tilde{D}_{i}$ satisfies
$\eta(\delta_{i})\geq 1$. Furthermore there is a $k$ so that $D_{i}$ and
$D_{k}$ overlap for all $i$, and so that one of $E_{ij}$ and $\tilde{E}_{ij}$
contains the other if and only if either $i=k$ or $j=k$.
For the proof of Proposition 11.5, we need to establish three geometric
lemmas:
[Figure 25 drawing: curve coordinate data omitted; labels $\pi-\measuredangle(d_{-1},D)$, $\pi-\measuredangle(d_{+1},D)$, $\measuredangle(d_{-1},d_{+1})$, $m(d_{-1})$, $m(d_{+1})$, $m(D)$.]
Figure 25. The image of the Möbius transformation described in the proof of
Lemma 11.6.
###### Lemma 11.6.
Suppose that $D,d_{-1},d_{+1}$ are closed disks in the plane $\mathbb{C}$ in
the topological configuration depicted in Figure 27a. Then
$\pi+\measuredangle(d_{-1},d_{+1})<\measuredangle(d_{-1},D)+\measuredangle(d_{+1},D)$.
###### Proof.
Let $m$ be a Möbius transformation sending a point on the bottom arc of
$\partial D\setminus(d_{-1}\cup d_{+1})$ to $\infty$, so that $m(D)$ is the
lower half-plane. Then the images of the disks under $m$ are as depicted in
Figure 25. We see that
$(\pi-\measuredangle(d_{-1},D))+(\pi-\measuredangle(d_{+1},D))+\measuredangle(d_{-1},d_{+1})<\pi$
and rearranging gives $\pi+\measuredangle(d_{-1},d_{+1})<\measuredangle(d_{-1},D)+\measuredangle(d_{+1},D)$, which is the desired inequality. ∎
###### Lemma 11.7.
Suppose that $D,d_{-1},d_{+1}$ are closed disks in the plane $\mathbb{C}$ in
the topological configuration depicted in Figure 26. Then
$\measuredangle(d_{-1},D)+\measuredangle(d_{+1},D)<\pi+\measuredangle(d_{-1},d_{+1})$.
###### Proof.
This is proved similarly to Lemma 11.6, see Figure 26. ∎
[Figure 26 drawing: curve coordinate data omitted; labels $\pi-\measuredangle(d_{-1},D)$, $\pi-\measuredangle(d_{+1},D)$, $\measuredangle(d_{-1},d_{+1})$, $m(d_{-1})$, $m(d_{+1})$, $m(D)$, $d_{-1}$, $d_{+1}$, $D$, $m$, $\infty$.]
Figure 26. A Möbius transformation chosen to prove Lemma 11.7. Here $\partial
m(D)=\mathbb{R}$, and $m(D)$ is the lower half-plane.
$d_{-1}$$d_{+1}$$D$
(a)
$d_{-1}$$d_{+1}$$D$
(b)
Figure 27. The topological configurations for which we prove Lemma 11.8.
###### Lemma 11.8.
Suppose that $D,d_{-1},d_{+1}$ are closed disks in the plane $\mathbb{C}$ in
one of the two topological configurations depicted in Figure 27. In either
case, we get that both $\measuredangle(d_{-1},D)$ and
$\measuredangle(d_{+1},D)$ are strictly greater than
$\measuredangle(d_{-1},d_{+1})$.
###### Proof.
Suppose that the disks are in the configuration depicted in Figure 27a. Let
$m$ be a Möbius transformation sending a point on $\partial d_{+1}\setminus D$
to $\infty$. We may suppose without loss of generality that $m(d_{+1})$ is the
lower half-plane. Then the image of our disks under $m$ is as in Figure 28,
where $\theta_{1}=\measuredangle(d_{+1},D)$ and
$\theta_{2}=\measuredangle(d_{-1},d_{+1})$. It is then an easy exercise to
show that $\theta_{2}<\theta_{1}$ because the two circles $\partial m(d_{-1})$
and $\partial m(D)$ meet in the upper half-plane. The other inequality follows
by symmetry. The case where the disks are in the configuration depicted in
Figure 27b follows from the first case after applying a Möbius transformation
sending a point in the interior of $D\cap d_{-1}\cap d_{+1}$ to $\infty$. ∎
###### Proof of Proposition 11.5.
Recalling notation from before, if $D_{i}$ and $D_{j}$ overlap then
$E_{ij}=D_{i}\cap D_{j}$, similarly $\tilde{E}_{ij}$, and a homeomorphism
$\delta_{i}:\partial D_{i}\to\partial\tilde{D}_{i}$ is called _faithful_ if it
restricts to homeomorphisms $D_{j}\cap\partial
D_{i}\to\tilde{D}_{j}\cap\partial\tilde{D}_{i}$ for all $j$.
###### Claim 11.9.
Let $i,j$ be so that $\tilde{E}_{ij}\subset E_{ij}$. Denote $A=D_{i}$,
$B=D_{j}$, $\tilde{A}=\tilde{D}_{i}$, $\tilde{B}=\tilde{D}_{j}$. Then both
$\spadesuit$c and $\clubsuit$c occur. Also one of $\diamondsuit$d,
$\diamondsuit$e, $\diamondsuit$g occurs, and one of $\heartsuit$d,
$\heartsuit$e, $\heartsuit$g occurs. Furthermore at least one of
$\diamondsuit$g and $\heartsuit$g occurs.
To see why, note first that both $\spadesuit$c and $\clubsuit$c occur, because
these are the only candidates in Figure 10 where $\tilde{A}\cap\tilde{B}$ is
contained in the respective one of $A$ and $B$. Note the following by Lemma 11.8
(5)
$\measuredangle(A,B)=\measuredangle(\tilde{A},\tilde{B})<\measuredangle(\tilde{A},B)$
and the following by Lemma 11.6.
(6)
$\pi+\measuredangle(\tilde{A},\tilde{B})<\measuredangle(A,\tilde{A})+\measuredangle(\tilde{B},A),\qquad\pi+\measuredangle(\tilde{A},\tilde{B})<\measuredangle(\tilde{A},B)+\measuredangle(\tilde{B},B)$
Next, because $\tilde{A}\cap\tilde{B}$ contains part of $\partial\tilde{A}$
and part of $\partial\tilde{B}$, both of these circles must pass through
$A\cap B$. Noting that $\diamondsuit$f cannot occur because
$\tilde{A}\not\subset A$, we conclude that one of $\diamondsuit$a,
$\diamondsuit$d, $\diamondsuit$e, $\diamondsuit$g, and $\diamondsuit$h occurs.
If either of $\diamondsuit$a and $\diamondsuit$h occurs, then Lemma 11.8
implies that $\measuredangle(\tilde{A},B)<\measuredangle(A,B)$, contradicting
(5). This leaves us with only the claimed possibilities $\diamondsuit$d,
$\diamondsuit$e, and $\diamondsuit$g. By symmetry we also get that one of
$\heartsuit$d, $\heartsuit$e, and $\heartsuit$g occurs.
Finally, note by Lemma 11.7 that if $\diamondsuit$d or $\diamondsuit$e occurs
then we get
$\measuredangle(\tilde{A},A)+\measuredangle(\tilde{A},B)<\pi+\measuredangle(A,B)$,
and if $\heartsuit$d or $\heartsuit$e occurs then we get
$\measuredangle(\tilde{B},A)+\measuredangle(\tilde{B},B)<\pi+\measuredangle(A,B)$.
Thus, if neither of $\diamondsuit$g and $\heartsuit$g occurs, then we may combine these two inequalities with (6) to arrive at a contradiction, establishing Claim 11.9.
Figure 28. The image of the disks under the Möbius transformation described in the proof of Lemma 11.8, with the angles $\theta_{1}$ and $\theta_{2}$ marked.
Moving on, pick $1\leq i\leq n$. By the hypotheses of Proposition 11.5 there
is a $j$ so that one of $E_{ij}$ and $\tilde{E}_{ij}$ contains the other,
without loss of generality so that $\tilde{E}_{ij}\subset E_{ij}$. Let
$\delta_{i}:\partial D_{i}\to\partial\tilde{D}_{i}$ be a faithful indexable
homeomorphism. Continuing with the notation of Claim 11.9, regardless of which
of $\diamondsuit$d, $\diamondsuit$e, and $\diamondsuit$g occurs, there is a
point $z\in\partial A\cap\partial E$ so that $z$ lies in the interior of
$\tilde{A}$. Furthermore note that $\delta_{i}(z)\in\partial\tilde{E}$ by the
faithfulness condition, and that $\tilde{E}\subset A$ by our hypotheses, so
$\delta_{i}(z)$ lies in the interior of $A$. Thus if we draw a torus
parametrization for $A$ and $\tilde{A}$ using
$(\kappa(z),\tilde{\kappa}(\delta_{i}(z)))$ as the base point, Lemma 10.1
implies that $\eta(\delta_{i})\geq 1$, because $\partial A$ and
$\partial\tilde{A}$ meet exactly twice. This establishes the first part of
Proposition 11.5.
Next, let $H_{\mathrm{u}}$ be the undirected simple graph having
$\\{1,\ldots,n\\}$ as its vertex set, so that $\left<i,j\right>$ is an edge in
$H_{\mathrm{u}}$ if and only if $D_{i}$ and $D_{j}$ overlap and one of
$E_{ij}$ and $\tilde{E}_{ij}$ contains the other. Note that $H_{\mathrm{u}}$
is connected, otherwise we could pick $I$ to be the vertex set of one
connected component of $H_{\mathrm{u}}$ and $J$ to be
$\\{1,\ldots,n\\}\setminus I$ to contradict the hypotheses of Proposition
11.5.
Let $H$ be the directed graph obtained from $H_{\mathrm{u}}$ in the following
way. Suppose $\left<i,j\right>$ is an edge in $H_{\mathrm{u}}$. Denote
$A=D_{i}$, $B=D_{j}$, $\tilde{A}=\tilde{D}_{i}$, $\tilde{B}=\tilde{D}_{j}$.
Then $\left<i\to j\right>$ is an edge in $H$ if and only if one of
$\diamondsuit$g and $\spadesuit$g occurs. In particular Claim 11.9 implies
that if $\left<i,j\right>$ is an edge in $H_{\mathrm{u}}$ then at least one of
$\left<i\to j\right>$ and $\left<j\to i\right>$ is an edge in $H$, and
possibly both are.
###### Claim 11.10.
Suppose that $\left<i\to j\right>$ is an edge in $H$. Then $\left<i,j\right>$
is the only edge in $H_{\mathrm{u}}$ having $i$ as a vertex.
To see why, observe first that if $\diamondsuit$d or $\diamondsuit$e occurs
then one of the intersection points of $\partial A\cap\partial\tilde{A}$ lies in the
interior of $B$, and if $\diamondsuit$g occurs then both do. Suppose without
loss of generality that $\tilde{D}_{i}\cap\tilde{D}_{j}\subset D_{i}\cap
D_{j}$. Then both intersection points $\partial
D_{i}\cap\partial\tilde{D}_{i}$ lie in the interior of $D_{j}$. For
contradiction let $k\neq j$ so that $\left<i,k\right>$ is an edge in
$H_{\mathrm{u}}$. There are two cases.
###### Case 1.
Suppose that $\tilde{D}_{i}\cap\tilde{D}_{k}\subset D_{i}\cap D_{k}$.
Then one or both points $\partial D_{i}\cap\partial\tilde{D}_{i}$ lie in the
interior of $D_{k}$. Then there is a point in the interior of $D_{i}$ which
lies in the interiors of both $D_{j}$ and $D_{k}$, a contradiction.
###### Case 2.
Suppose that $D_{i}\cap D_{k}\subset\tilde{D}_{i}\cap\tilde{D}_{k}$.
Then by a symmetric restatement of Claim 11.9 we get that both points
$\partial D_{i}\cap\partial D_{k}$ lie in $\tilde{D}_{i}$. On the other hand
$\tilde{D}_{i}\cap\partial D_{i}$ is contained in the interior of $D_{j}$ by
$\diamondsuit$a. Thus there are points interior to all of $D_{i},D_{j},D_{k}$,
a contradiction.
This establishes Claim 11.10.
Thus $H$ is either the graph on two vertices $\\{i,j\\}$ having one or both of
$\left<i\to j\right>$ and $\left<j\to i\right>$ as edges, or is a graph having
$\\{k,i_{1},\ldots,i_{n-1}\\}$ as vertices and exactly the edges
$\left<i_{\ell}\to k\right>$ for $1\leq\ell<n$. The last part of Proposition
11.5 follows. ∎
This completes the proofs of the main results of this article.
## 12\. Generalizations, open problems, and conjectures
We conclude the article with some general conjectures which are directly
related to the new results of this article.
First, we discuss eliminating the thinness condition from the hypotheses of
our theorem statements. Most simply, it seems likely that our Main Theorems
1.5 and 1.4 should continue to hold with the thinness condition completely
omitted. In this direction, we conjecture the following fixed-point index
statement for non-thin configurations of disks:
###### Conjecture 12.1.
Suppose that $\mathcal{D}$ and $\tilde{\mathcal{D}}$ are disk configurations
in $\mathbb{C}$ realizing the same incidence data. Then any faithful indexable
homeomorphism $\phi:\partial\mathcal{D}\to\partial\tilde{\mathcal{D}}$
satisfies $\eta(\phi)\geq 0$.
Note that something stronger than Conjecture 12.1 would be required to prove
the corresponding generalizations of our main results on disk configurations
using the methods of this article: in particular, we would probably need to
generalize the notion of an isolated subsumptive subset of the common index
set of $\mathcal{D}$ and $\tilde{\mathcal{D}}$. However, it is plausible that
this is a workable approach.
We remark at this point that in the present author’s thesis, see [mishchenko-thesis], the definition of _thin_ used in the statements of our Main Theorems
1.5 and 1.4 is slightly weaker than the one we have given here: there we call
a disk configuration _thin_ if given three disks from it, the intersection of
their interiors is empty. The proofs there are essentially the same, without
any interesting new ideas, but there are technically annoying degenerate
situations to deal with, so we do not work at this level of generality here.
More strongly, we make two conjectures which together would subsume all other
currently known rigidity and uniformization statements on disk configurations.
First:
###### Conjecture 12.2.
Suppose that $\mathcal{C}$ and $\tilde{\mathcal{C}}$ are disk configurations,
locally finite in $\mathbb{G}$ and $\tilde{\mathbb{G}}$ respectively, where
each of $\mathbb{G}$ and $\tilde{\mathbb{G}}$ is equal to one of $\mathbb{C}$
and $\mathbb{H}^{2}$, with the _a priori_ possibility that
$\mathbb{G}\neq\tilde{\mathbb{G}}$. Suppose that $\mathcal{C}$ and
$\tilde{\mathcal{C}}$ share a contact graph $G=(V,E)$. Suppose further that
$\mathcal{C}$ and $\tilde{\mathcal{C}}$ fill their respective spaces, in the sense that
every connected component of $\mathbb{G}\setminus\cup_{D\in\mathcal{C}}D$ or
of
$\tilde{\mathbb{G}}\setminus\cup_{\tilde{D}\in\tilde{\mathcal{C}}}\tilde{D}$
is bounded. Then $\mathbb{G}=\tilde{\mathbb{G}}$.
Second:
###### Conjecture 12.3.
Suppose that $\mathcal{C}$ and $\tilde{\mathcal{C}}$ are disk configurations,
both locally finite in $\mathbb{G}$, where $\mathbb{G}$ is equal to one of
$\mathbb{C}$ and $\mathbb{H}^{2}$. Suppose that $\mathcal{C}$ and
$\tilde{\mathcal{C}}$ realize the same incidence data $(G,\Theta)$. Suppose
further that some maximal planar subgraph of $G$ is the 1-skeleton of a
triangulation of a topological open disk. Then $\mathcal{C}$ and $\tilde{\mathcal{C}}$ differ by a Euclidean similarity if
$\mathbb{G}=\mathbb{C}$ or by a hyperbolic isometry if
$\mathbb{G}=\mathbb{H}^{2}$.
We also make the natural conjecture analogous to Conjecture 12.3 for disk
configurations on the Riemann sphere. It seems plausible that a fixed-point
index approach could work to prove Conjectures 12.2 and 12.3. An alternative
approach to try to prove Conjecture 12.2 is via vertex extremal length
arguments, along the lines of [MR1680531, Uniformization Theorem 1.3] and [MR1331923].
Finally, we conjecture that Conjecture 12.3 is the best possible uniqueness
statement of its type, in the following precise sense:
###### Conjecture 12.4.
Let $\mathcal{C}$ be a disk configuration which is locally finite in
$\mathbb{G}$, where $\mathbb{G}$ is one of $\hat{\mathbb{C}}$, $\mathbb{C}$,
or $\mathbb{H}^{2}$. Let $(G,\Theta)$ be the incidence data of $\mathcal{C}$.
Suppose that no maximal planar subgraph of $G$ is the 1-skeleton of a
triangulation of a topological open disk. Then there are other locally finite
disk configurations in $\mathbb{G}$ realizing $(G,\Theta)$ which are not
images of $\mathcal{C}$ under any conformal or anti-conformal automorphism of
$\mathbb{G}$.
The most promising tool to prove Conjecture 12.4 would be a good existence
statement taking incidence data $(G,\Theta)$ as input.
Figure 29. A counterexample to Theorem 5.3 if we allow
$\measuredangle(D_{1},D_{2})\neq\measuredangle(\tilde{D}_{1},\tilde{D}_{2})$.
Any indexable $\phi:\partial(D_{1}\cup
D_{2})\to\partial(\tilde{D}_{1}\cup\tilde{D}_{2})$ making the shown
identifications gives $\eta(\phi)=-1$.
Finally, we consider other directions in which our Main Index Theorem 5.3
could be generalized. First, one may hope to weaken the condition that
$\mathcal{D}$ and $\tilde{\mathcal{D}}$ realize the same incidence data,
insisting only that they share a contact graph. Figure 29 provides an explicit
small-scale counterexample. Alternatively, we may hope to prove a theorem
analogous to Theorem 5.3 for collections of shapes other than metric closed
disks. For example, if $K$ and $\tilde{K}$ are compact convex sets in
$\mathbb{C}$ having smooth boundaries, one of which is the image of the other
by translation and scaling, then $\partial K$ and $\partial\tilde{K}$ meet at
most twice, so the Circle Index Lemma 3.2 applies. This gives hope for a
generalization of Theorem 5.3 in this direction. Schramm has proved rigidity
theorems for packings by shapes other than circles using related ideas, for
example in [MR1076089].
## References
|
arxiv-papers
| 2013-02-11T00:40:46 |
2024-09-04T02:49:41.591860
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"authors": "Andrey M. Mishchenko",
"submitter": "Andrey Mishchenko",
"url": "https://arxiv.org/abs/1302.2380"
}
|
1302.2386
|
# On a connection between the reliability of multi-channel systems and the
notion of controlled-invariance entropy
Getachew K. Befekadu G. K. Befekadu is with the Department of Electrical
Engineering, University of Notre Dame, Notre Dame, IN 46556, USA.
E-mail: [email protected]
Version - February 25, 2013.
###### Abstract
The purpose of this note is to establish a connection between the problem of
reliability (when there is an intermittent control-input channel failure that
may occur between actuators, controllers and/or sensors in the system) and the
notion of controlled-invariance entropy of a multi-channel system (with
respect to a subset of control-input channels and/or a class of control
functions). We remark that such a connection could be used for assessing the
reliability (or the vulnerability) of the system, when some of these control-
input channels are compromised by an external “malicious” agent that may try to prevent the system from achieving its goal (such as attaining invariance of a given compact state and/or output subspace).
###### Index Terms:
Invariance entropy, multi-channel system, topological feedback entropy,
reliability.
## I Introduction
In the last decades, the notions of measure-theoretic entropy and topological
entropy have been intensively studied in the context of measure-preserving
transformations or continuous maps (e.g., see [21], [19] and [11] for the
review of entropy in ergodic theory as well as in dynamical systems). For
instance, Adler et al. (in the paper [1]) introduced the notion of topological
entropy as an invariant of topological conjugacy, which is an analogue to the
notion of measure-theoretic entropy, for measuring the rate at which a
continuous map in a compact topological space generates initial-state
information. Later, [10] and [5] gave a new, but equivalent, definition of
topological entropy for continuous maps that led to proofs for connecting the
topological entropy with that of measure-theoretic entropy.
In the recent paper [9], the authors have introduced the notion of invariance
entropy for continuous-time systems as a measure of information that is
necessary to achieve invariance of a given state (or output) subspace (i.e., a
measure of how open-loop control functions have to be updated in order to
achieve invariance of a given subspace of the state space). In the present
paper, we explore this concept, which is closely related to the notion of
topological feedback entropy (see [17] and [13]), for assessing the
reliability of a multi-channel system when there is an intermittent control-
channel failure that may occur between actuators, controllers and/or sensors
in the system. Specifically, we provide conditions on the minimum rate at
which the multi-channel system can generate information with respect to a
subset of control-input channels and/or a class of control functions when the
system states are restricted to a given controlled-invariant subspace. Here,
it is important to note that the intermittent control-channel failures may not
necessarily represent any physical failures within the system. Rather, this
can also be interpreted as an external “malicious” agent who is trying to prevent the system from achieving its goal, i.e., attaining invariance of
a given state (or output) subspace.
With the emergence of networked control systems (e.g., see [2]), these notions
of entropy have found renewed interest in the research community (e.g., see
[17], [18] and [6]). Notably, in the paper [17], Nair et al. have introduced
the notion of topological feedback entropy, which is based on the ideas of
[1], to quantify the minimum rate at which deterministic discrete-time
dynamical systems generate information relevant to the control objective of
set-invariance. More recently, the notion of controlled-invariance entropy (as
well as the notion of almost invariance entropy) has been studied for
continuous-time control systems in [6], [8], [14] and [9] based on the metric-
space technique of [5]. It is noted that such an invariant entropy provides a
measure of the smallest growth rate for the number of open-loop control
functions that are needed to confine the states within an arbitrarily small
distance (in the sense of gap metric) from a given subspace. For discontinuous
systems, we also note that the notion of topological entropy has been
investigated with respect to piecewise continuous piecewise monotone
transformations (e.g., see [15] and [16]).
The paper is organized as follows. In Section II, we present preliminary
results on the invariance entropy of multi-channel systems with respect to a
set of control-input channels and a class of control functions. Section III
presents the main results – where we provide conditions on the information
that is necessary for achieving invariance of the multi-channel system states
in (or near) a given subspace.
## II Preliminaries
### II-A Notation
For $A\in\mathbb{R}^{n\times n}$, $B\in\mathbb{R}^{n\times r}$ and a linear
space $\mathscr{X}$, the supremal $(A,B)$-invariant subspace contained in
$\mathscr{X}$ is denoted by
$\mathscr{V}^{*}\triangleq\sup\mathscr{V}\left(A,B;\mathscr{X}\right)$. For a
subspace $\mathscr{V}\subset\mathscr{X}$, we use $\langle
A\,|\,\mathscr{V}\rangle$ to denote the smallest invariant subspace containing
$\mathscr{V}$.
### II-B Problem formulation
Consider the following generalized multi-channel system
$\displaystyle\dot{x}(t)$
$\displaystyle=Ax(t)+\sum_{j\in\mathcal{N}}B_{j}u_{j}(t),\quad
x(t_{0})=x_{0},\quad t\in[t_{0},+\infty),$ (1)
where $A\in\mathbb{R}^{n\times n}$, $B_{j}\in\mathbb{R}^{n\times r_{j}}$,
$x(t)\in\mathscr{X}\subset\mathbb{R}^{n}$ is the state of the system,
$u_{j}(t)\in\mathscr{U}_{j}\subset\mathbb{R}^{r_{j}}$ is the control input to
the $j$th-channel and $\mathcal{N}\triangleq\\{1,2,\ldots,N\\}$ represents the
set of controllers (or the set of control-input channels) in the system.
Let us introduce the following class of admissible controls that will be used
in the sequel
$\displaystyle\mathscr{U}\subseteq\biggm{\\{}u\in\prod_{i\in\mathcal{N}}L_{\infty}(\mathbb{R},\,\mathbb{R}^{r_{i}})\,\biggm{|}\,u_{\neg
j}(t)\in\mathscr{U}_{\neg j}\triangleq\prod_{i\in\mathcal{N}_{\neg
j}}\mathscr{U}_{i}~{}~{}\text{{\em for almost all}}$
$\displaystyle\,\,t\in[0,\,\infty)~{}\,\text{and}$ $\displaystyle\forall
j\in\mathcal{N}\cup\\{0\\}\biggm{\\}},$ (2)
where $u_{\neg 0}(t)=\bigl{(}u_{1}(t),\,u_{2}(t),\,\ldots\,u_{N}(t)\bigr{)}$
and $u_{\neg
i}(t)=\bigl{(}u_{1}(t),\,\dots\,u_{i-1}(t),\,u_{i+1}(t),\,\ldots\,u_{N}(t)\bigr{)}$
for $i\in\mathcal{N}$. Moreover, $\mathcal{N}_{\neg 0}\triangleq\mathcal{N}$
and $\mathcal{N}_{\neg j}\triangleq\mathcal{N}\setminus\\{j\\}$ for
$j=1,2,\ldots,N$.
In the remainder of this subsection, we provide some results from geometric
control theory (e.g., see [3], [22], [4] and [20] for details about this
theory).
###### Definition 1
Let $\mathscr{V}_{j}\subset\mathscr{X}$ for $j\in\mathcal{N}\cup\\{0\\}$.
1. (i)
$\mathscr{V}_{j}$ is called $A$-invariant if $A\mathscr{V}_{j}\subset\mathscr{V}_{j}$.
2. (ii)
$\mathscr{V}_{j}$ is called $(A,\,B_{\neg j})$-invariant if
$A\mathscr{V}_{j}\subset\mathscr{V}_{j}+\mathscr{B}_{\neg j}$, where $B_{\neg
0}\triangleq\bigl{[}\begin{array}[]{cccc}B_{1}&B_{2}&\ldots&B_{N}\end{array}\bigr{]}$,
$B_{\neg
j}\triangleq~{}\bigl{[}\begin{array}[]{ccc}B_{1}\,\ldots\,B_{j-1}&B_{j+1}\,\ldots\,B_{N}\end{array}\bigr{]}$
and $\mathscr{B}_{\neg j}\triangleq\operatorname{Im}B_{\neg j}$ for
$j\in\mathcal{N}$.111In this paper, we consider the case in which one of the controllers is removed due to an intermittent failure. However, following the same discussion, we can also consider the case where the fault is associated with two or more of the controllers in the system.
The following lemma, which is a well-known result, will be stated without
proof (e.g., see [22] or [4]).
###### Lemma 1
Suppose $\mathfrak{I}_{j}\left(A,B_{\neg j};\mathscr{F}\right)$ is a family of
$(A,B_{\neg j})$-invariant subspaces for $j\in\mathcal{N}$. Then, every
subspace $\mathscr{F}\subset\mathscr{X}$ contains a unique supremal
$(A,B_{\neg j})$-invariant subspace which is given by
$\mathscr{V}_{j}^{*}=\sup\mathfrak{I}_{j}\left(A,B_{\neg
j};\mathscr{F}\right)$ for each $j\in\mathcal{N}$.
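As a concrete illustration (ours, not taken from the cited sources), the supremal $(A,B_{\neg j})$-invariant subspace contained in $\mathscr{F}$ can be computed by the classical invariant-subspace algorithm of geometric control theory, i.e. the recursion $\mathscr{V}^{0}=\mathscr{F}$, $\mathscr{V}^{k+1}=\mathscr{F}\cap A^{-1}(\mathscr{V}^{k}+\operatorname{Im}B_{\neg j})$, which stabilizes in at most $n$ steps (see [22] or [4]). A minimal numerical sketch, with `B` standing for $B_{\neg j}$:

```python
import numpy as np
from scipy.linalg import orth, null_space

def subspace_sum(U, W):
    # Orthonormal basis of range(U) + range(W).
    return orth(np.hstack([U, W]))

def subspace_intersection(U, W):
    # U, W have orthonormal columns; solve U a = W b via the null space of [U, -W].
    N = null_space(np.hstack([U, -W]))
    if N.size == 0:
        return np.zeros((U.shape[0], 0))
    return orth(U @ N[:U.shape[1], :])

def preimage(A, V):
    # {x : A x in range(V)} = null space of (I - V V^T) A, with V orthonormal.
    P_perp = np.eye(A.shape[0]) - V @ V.T
    return null_space(P_perp @ A)

def supremal_invariant_subspace(A, B, F):
    """ISA recursion V_0 = F, V_{k+1} = F intersect A^{-1}(V_k + Im B)."""
    F, ImB = orth(F), orth(B)
    V = F
    for _ in range(A.shape[0]):
        V_new = subspace_intersection(F, preimage(A, subspace_sum(V, ImB)))
        if V_new.shape[1] == V.shape[1]:
            break
        V = V_new
    return V
```

For the multi-channel system (1) one would call this with `B` equal to the horizontal concatenation of the $B_{i}$, $i\in\mathcal{N}_{\neg j}$, and `F` a basis of the subspace of interest.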
Then, we state the following result which is a direct application of Lemma 1.
###### Theorem 1
Let $\mathscr{V}_{j}\subset\mathscr{X}$ for each $j\in\mathcal{N}$. Then,
$\mathscr{V}_{j}$ is a member of the subspace families
$\mathfrak{I}_{j}(A,B_{\neg j};\mathscr{X})$ that preserves the property of
$(A,B_{\neg j})$-invariant (i.e.,
$\mathscr{V}_{j}\in\mathfrak{I}_{j}(A,B_{\neg j};\mathscr{X})$), if and only
if
$\displaystyle A\mathscr{V}_{j}\subset\mathscr{V}_{j}+\mathscr{B}_{\neg
j},~{}~{}\forall j\in\mathcal{N}.$ (3)
###### Proof:
The proof follows the same lines of argument as that of Wonham (see [22] p.
88). Suppose $\mathscr{V}_{j}\in\mathfrak{I}_{j}(A,B_{\neg j};\mathscr{X})$
and let $v^{j}\in\mathscr{V}_{j}$ for $j\in\mathcal{N}$, then
$(A+\sum_{i\in\mathcal{N}_{\neg j}}B_{i}K_{i})v^{j}=w^{j}$ for some
$w^{j}\in\mathscr{V}_{j}$, i.e.,
$\displaystyle Av^{j}=w^{j}-\sum_{i\in\mathcal{N}_{\neg j}}B_{i}K_{i}v^{j}\in\mathscr{V}_{j}+\mathscr{B}_{\neg j}.$ (4)
On the other hand, let $\\{v_{1}^{j},v_{2}^{j},\ldots,v_{\mu}^{j}\\}$ be a
basis for $\mathscr{V}_{j}$ for $j=1,2,\ldots,N$. Suppose that (3) holds true.
Then, there exist $w_{k}^{j}\in\mathscr{V}_{j}$ and $u_{k}^{\neg
j}\in\mathscr{U}_{\neg j}$ for $k\in\\{1,2,\dots,\mu^{j}\\}$ such that
$\displaystyle Av_{k}^{j}=w_{k}^{j}-B_{\neg j}u_{k}^{\neg j},\qquad
k\in\\{1,2,\dots,\mu^{j}\\}.$ (5)
We further define the mapping $K_{\neg j}^{0}\colon\mathscr{V}_{j}\to\mathscr{U}_{\neg j}$ by
$\displaystyle K_{\neg j}^{0}v_{k}^{j}=u_{k}^{\neg j},\qquad k\in\\{1,2,\dots,\mu^{j}\\},$ (6)
and let $K_{\neg j}$ be any extension of $K_{\neg j}^{0}$ to $\mathscr{X}$. We therefore have $(A+\sum_{i\in\mathcal{N}_{\neg j}}B_{i}K_{i})v_{k}^{j}=w_{k}^{j}\in\mathscr{V}_{j}$, i.e.,
$(A+\sum_{i\in\mathcal{N}_{\neg
j}}B_{i}K_{i})\mathscr{V}_{j}\subset\mathscr{V}_{j}$, so that the controlled-
invariant subspace $\mathscr{V}_{j}$ satisfies
$\displaystyle\mathscr{V}_{j}\in\mathfrak{I}_{j}(A,B_{\neg
j};\mathscr{X}),\quad\forall j\in\mathcal{N}.$ (7)
$\Box$
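The construction in the second half of the proof can be carried out numerically. The sketch below is our illustration only: `V` is a full-column-rank basis of $\mathscr{V}_{j}$, `B` stands for $B_{\neg j}$, the system (5) is solved by least squares, and the feedback of (6) is extended by zero on the orthogonal complement.

```python
import numpy as np

def friend_feedback(A, B, V):
    """Given a basis V of an (A, B)-invariant subspace, return K such that
    (A + B K) maps range(V) into range(V), mirroring (5)-(6)."""
    # Solve [V  -B] [C; U] = A V column by column; this is exact whenever
    # A range(V) is contained in range(V) + range(B), i.e. whenever (3) holds.
    M = np.hstack([V, -B])
    sol, *_ = np.linalg.lstsq(M, A @ V, rcond=None)
    U = sol[V.shape[1]:, :]          # U[:, k] plays the role of u_k in (5)
    # K v_k = u_k on range(V), extended by zero elsewhere (any extension works).
    return U @ np.linalg.pinv(V)

# Tiny example: span{e1, e2} is (A, B)-invariant for the data below.
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [1.0, 0.0, 0.0]])
B = np.array([[0.0], [0.0], [1.0]])
V = np.eye(3)[:, :2]
K = friend_feedback(A, B, V)
print(np.allclose(((A + B @ K) @ V)[2, :], 0.0))   # True: the plane is invariant
```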
###### Corollary 1
If the subspace $\mathscr{V}\subset\mathscr{X}$ is $(A,\,B_{\neg j})$-invariant for each $j\in\mathcal{N}\cup\\{0\\}$, then there exists a class of
maps $\mathcal{K}\ni K:\mathscr{X}\to\mathscr{U}$ that satisfies
$\displaystyle\mathcal{K}\subseteq\biggm{\\{}\underbrace{\bigm{(}K_{1},K_{2},\ldots,K_{N}\bigm{)}}_{\triangleq
K}\in\prod_{j\in\mathcal{N}}\mathbb{R}^{r_{j}\times
n}\,\Bigl{\lvert}\,\biggm{(}A+\sum_{i\in\mathcal{N}_{\neg
j}}B_{i}K_{i}\biggm{)}\mathscr{V}\subset\mathscr{V},~{}~{}\forall
j\in\mathcal{N}\cup\\{0\\}\biggm{\\}}.$ (8)
###### Remark 1
Note that the controlled-invariant subspace $\mathscr{V}$, which is given in
the aforementioned corollary, is also a subspace of $\mathscr{V}^{*}$ (see
also Equation (9) below).
Next, we introduce the following theorem on the family of supremal controlled-
invariant subspaces that will be useful for our work in the next section.
###### Theorem 2
Let $\mathfrak{V}\triangleq\\{\mathscr{V}_{j}^{*}\\}_{j\in\mathcal{N}}$ be a
set of supremal controlled-invariant subspaces with respect to the family of
systems $\left\\{\bigl{(}A,B_{\neg j}\bigr{)}\right\\}_{j\in\mathcal{N}}$.
Then, the set $\mathfrak{V}$ forms a lattice of controlled-invariant
subspaces. Moreover, there exists a unique (nonempty) subspace that satisfies
$\displaystyle\mathscr{V}^{*}=\bigcap_{j\in\mathcal{N}}\mathscr{V}_{j}^{*}\in\left\\{\bigcap_{j\in\mathcal{N}}\mathscr{V}_{j}~{}\biggm{|}~{}\mathscr{V}_{j}\subset\sup\mathfrak{I}_{j}(A,B_{\neg
j};\mathscr{X}),~{}\forall j\in\mathcal{N},~{}~{}\exists\,u_{\neg
j}\in\mathscr{U}_{\neg j}\right\\},$ (9)
where
$\mathscr{V}_{j}^{*}=\sup\mathfrak{I}_{j}\left((A+\sum_{i\in\mathcal{N}_{\neg
j}}B_{i}K_{i}),\mathscr{B}_{j}\right)$ and
$\mathscr{B}_{j}\triangleq\operatorname{Im}B_{j}$ for all $j\in\mathcal{N}$.
###### Proof:
Note that $\mathscr{V}_{j}+\mathscr{V}_{\neg j}\in\mathfrak{V}$ and
$\mathscr{V}_{j}\cap\mathscr{V}_{\neg j}\in\mathfrak{V}$ for all
$j\in\mathcal{N}$. Moreover, if we define the gap metric
$\varrho_{j}(\mathscr{V}_{0},\mathscr{V}_{j})$ between the controlled-
invariant subspaces $\mathscr{V}_{0}$ and $\mathscr{V}_{j}$ as
$\displaystyle\varrho_{j}(\mathscr{V}_{0},\mathscr{V}_{j})=\|P_{\mathscr{V}_{0}}-P_{\mathscr{V}_{j}}\|,~{}~{}\forall
j\in\mathcal{N},$ (10)
where $\mathscr{V}_{0}=\sup\mathfrak{I}_{0}(A,B_{\neg 0};\mathscr{X})$,
$P_{\mathscr{V}_{0}}$ and $P_{\mathscr{V}_{j}}$ are orthogonal projectors on
$\mathscr{V}_{0}$ and $\mathscr{V}_{j}$, respectively. Then, the set of all
controlled-invariant subspaces in $\mathscr{X}$ forms a compact metric space with respect to the above gap metric (see also [12]). On the other hand,
let us define the following family of subspaces
$\displaystyle\tilde{\mathscr{V}}=\left\\{\bigcap_{j\in\mathcal{N}}\mathscr{V}_{j}~{}\biggm{|}~{}\mathscr{V}_{j}\subset\sup\mathfrak{I}_{j}(A,B_{\neg
j};\mathscr{X}),~{}\forall j\in\mathcal{N},~{}~{}\exists\,u_{\neg
j}\in\mathscr{U}_{\neg j}\right\\}.$ (11)
Suppose the subspace $\mathscr{V}^{*}$ exists, then it is a unique member of
the family that is defined in (11), i.e.,
$\displaystyle\mathscr{V}^{*}=\bigcap_{j\in\mathcal{N}}\mathscr{V}_{j}^{*}\in\tilde{\mathscr{V}},$
(12)
with
$\mathscr{V}_{j}^{*}=\sup\mathfrak{I}_{j}\left((A+\sum_{i\in\mathcal{N}_{\neg
j}}B_{i}K_{i}),\mathscr{B}_{j}\right)$ for all $j\in\mathcal{N}$. Note that we
have $\mathscr{V}_{j}=\langle A+\sum_{i\in\mathcal{N}_{\neg
j}}B_{i}K_{i}|\mathscr{B}_{j}\rangle$ which also implies that
$\operatorname{Im}B_{j}\subset\mathscr{V}_{j}^{*}$.222We remark that the
induced continuous maps in $\mathscr{X}/\mathscr{V}_{j}^{*}$ and
$\mathscr{X}/\mathscr{V}_{\neg j}^{*}$ admit an enveloping lattice for the
family of controlled-invariant subspaces $\mathscr{V}_{j}^{*}$, $\forall
j\in\mathcal{N}$ (e.g., see [12]). $\Box$
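The gap metric (10) used above can be evaluated directly from orthogonal projectors; a minimal numerical sketch (ours):

```python
import numpy as np
from scipy.linalg import orth

def gap_metric(V0, V1):
    """Gap between the column spans of V0 and V1: the spectral norm of the
    difference of the corresponding orthogonal projectors, as in (10)."""
    Q0, Q1 = orth(V0), orth(V1)
    return np.linalg.norm(Q0 @ Q0.T - Q1 @ Q1.T, 2)

# Two lines in R^2 meeting at angle theta have gap sin(theta).
theta = 0.3
print(gap_metric(np.array([[1.0], [0.0]]),
                 np.array([[np.cos(theta)], [np.sin(theta)]])))  # ~ sin(0.3)
```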
### II-C Properties of (controlled)-invariance entropy
In the following, we start by giving the definition of (controlled)-invariance
entropy for the multi-channel system in (1) with respect to the subset of
control-input channels and the class of control functions.
###### Definition 2
For a given subspace
$\mathscr{F}\subset\tilde{\mathscr{V}}^{*}\in\tilde{\mathscr{V}}$ with
nonempty interior and $T$, $\epsilon>0$, the class of control functions
$\mathscr{C}(T,\epsilon,\mathscr{F},\tilde{\mathscr{V}}^{*})\subset\mathscr{U}$
is called $(T,\epsilon,\mathscr{F},\tilde{\mathscr{V}}^{*})$-spanning, if
there exists $u\in\mathscr{C}(T,\epsilon,\mathscr{F},\tilde{\mathscr{V}}^{*})$
for almost all $t\in[0,\,T)$ such that
$\displaystyle\max_{\begin{subarray}{c}j\in\mathcal{N}\end{subarray}}\sup_{\begin{subarray}{c}t\in[0,\,T]\end{subarray}}\inf_{\begin{subarray}{c}y\in\tilde{\mathscr{V}}^{*}\end{subarray}}\|\phi_{\neg
j}(t,x_{0},u_{\neg j}(t))-y\|\leq\epsilon,\quad\forall x_{0}\in\mathscr{F}.$
(13)
###### Remark 2
In the aforementioned definition, we use the notation $\phi_{\neg
j}(t,x_{0},u_{\neg j}(t))$ to denote the unique solution of the multi-channel
system with initial condition $x_{0}\in\mathscr{F}$ and control $u_{\neg
j}\in\mathscr{U}_{\neg j}$, i.e.,
$\displaystyle x(t)$ $\displaystyle=\phi_{\neg j}(t,x_{0},u_{\neg j}(t)),$
$\displaystyle\triangleq\exp
A\bigl{(}t-t_{0}\bigr{)}x_{0}+\sum_{i\in\mathcal{N}_{\neg
j}}\int_{t_{0}}^{t}\exp
A\bigl{(}t-s\bigr{)}B_{i}u_{i}(s)ds,\quad\forall\,[t_{0},\,t]\subset[0,\,T],$ (14)
for each $j\in\mathcal{N}\cup\\{0\\}$.
Moreover, the cocycle relation $\phi_{\neg j}(t+t_{0},x_{0},u_{\neg j})=\phi_{\neg j}(t,\phi_{\neg j}(t_{0},x_{0},u_{\neg j}),u_{\neg j}(t_{0}+\cdot))$ will also hold for all $j\in\mathcal{N}\cup\\{0\\}$.
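As a numerical illustration (our sketch, not part of the paper), formula (14) can be evaluated with a matrix exponential and a quadrature rule, and the distance appearing in the spanning condition (13) is the norm of the component of the state orthogonal to the subspace:

```python
import numpy as np
from scipy.linalg import expm, orth

def phi_not_j(t, t0, x0, A, B_list, u_list, j=0, n_quad=400):
    """Variation-of-constants formula (14) for system (1) with channel j
    removed (j = 0 keeps all channels); B_list[i-1] is B_i, u_list[i-1] is
    the control function u_i(.).  Trapezoidal rule on n_quad points."""
    x = expm(A * (t - t0)) @ x0
    s = np.linspace(t0, t, n_quad)
    dt = (t - t0) / (n_quad - 1)
    for i, (B, u) in enumerate(zip(B_list, u_list), start=1):
        if i == j:                       # intermittently failed channel
            continue
        vals = np.array([expm(A * (t - si)) @ (B @ u(si)) for si in s])
        x = x + dt * (vals[0] / 2 + vals[1:-1].sum(axis=0) + vals[-1] / 2)
    return x

def dist_to_subspace(x, V):
    """inf over y in range(V) of ||x - y||, as used in (13)."""
    Q = orth(V)
    return np.linalg.norm(x - Q @ (Q.T @ x))
```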
Let $r_{\rm inv}(T,\epsilon,\mathscr{F},\tilde{\mathscr{V}}^{*})$ be the
smallest cardinality of
$(T,\epsilon,\mathscr{F},\tilde{\mathscr{V}}^{*})$-spanning sets.
Then, we have the following properties for $r_{\rm
inv}(T,\epsilon,\mathscr{F},\tilde{\mathscr{V}}^{*})$.
1. (i)
Clearly $r_{\rm inv}(T,\epsilon,\mathscr{F},\tilde{\mathscr{V}}^{*})\in[0,\,\infty]$.333The value of $r_{\rm inv}(T,\epsilon,\mathscr{F},\tilde{\mathscr{V}}^{*})$ could be infinite.
2. (ii)
If $\epsilon_{1}<\epsilon_{2}$, then $r_{\rm
inv}(T,\epsilon_{1},\mathscr{F},\tilde{\mathscr{V}}^{*})\geq r_{\rm
inv}(T,\epsilon_{2},\mathscr{F},\tilde{\mathscr{V}}^{*})$.
###### Definition 3
The (controlled)-invariance entropy of the multi-channel system in (1) (i.e.,
with respect to the subset of control-input channels and/or the class of
control functions) is given by
$\displaystyle{h_{\rm
inv}}(\mathscr{F},\tilde{\mathscr{V}}^{*})=\lim_{\begin{subarray}{c}\epsilon\searrow
0\end{subarray}}\biggm{\\{}\varlimsup_{\begin{subarray}{c}T\to\infty\end{subarray}}\frac{1}{T}\log
r_{\rm inv}(T,\epsilon,\mathscr{F},\tilde{\mathscr{V}}^{*})\biggm{\\}}.$ (15)
###### Remark 3
We remark that the existence of such a limit in the aforementioned definition
for ${h_{\rm inv}}(\mathscr{F},\tilde{\mathscr{V}}^{*})$ follows directly from
the monotonicity of $r_{\rm
inv}(T,\epsilon,\mathscr{F},\tilde{\mathscr{V}}^{*})$ with respect to
$\epsilon$. Moreover, such an invariance entropy ${h_{\rm inv}}(\mathscr{F},\tilde{\mathscr{V}}^{*})$ equals the minimum amount of information that is required to render the subspace $\tilde{\mathscr{V}}^{*}$ invariant by using a causal coding and/or control law (see [7] for a discussion
on single control-channel systems).
Then, we have the following properties for ${h_{\rm
inv}}(\mathscr{F},\tilde{\mathscr{V}}^{*})$.
1. (i)
${h_{\rm
inv}}(\mathscr{F},\tilde{\mathscr{V}}^{*})\in[0,\,\infty)\cup\\{\infty\\}$.
2. (ii)
If $\mathscr{F}\triangleq\bigcup_{l\in\\{1,2,\ldots,L\\}}\mathscr{F}_{l}$ with
compact $\mathscr{F}_{l}$, $\forall l\in\\{1,2,\ldots,L\\}$, then ${h_{\rm
inv}}(\mathscr{F},\tilde{\mathscr{V}}^{*})=\max_{\begin{subarray}{c}l\in\\{1,2,\ldots,L\\}\end{subarray}}{h_{\rm
inv}}(\mathscr{F}_{l},\tilde{\mathscr{V}}^{*})$.
In the following, we state the main problem of this paper – where we establish
a connection between the invariance entropy and the reliability of a multi-channel system.
Problem: Find a condition on the minimum amount of “information” (with respect
to the subset of control-input channels and/or the class of control functions)
that is necessary to keep the states of the multi-channel system in a given
subspace $\tilde{\mathscr{V}}^{*}$.
## III Main Results
In this section, we present our main results – where we provide a connection
between the invariance entropy (as a measure of “information” needed with
respect to the subset of control-input channels to keep the system in (or
near) this compact subspace) and the reliability of the multi-channel system
(when there is an intermittent channel failure that may occur between
actuators, controllers and/or sensors in the system).
###### Theorem 3
Suppose that Theorem 2 holds true and let $\mathscr{F}$ be a subspace of
$\tilde{\mathscr{V}}^{*}$. For every $x_{0}\in\mathscr{F}$, if there exists
$u(t)\in\mathscr{C}(T,\epsilon,\mathscr{F},\tilde{\mathscr{V}}^{*})$ for
almost all $t\in[0,\,T]$ that renders $\tilde{\mathscr{V}}^{*}$ invariant,
then the (controlled)-invariance entropy of the multi-channel system is given
by
$\displaystyle h_{\rm
inv}(\mathscr{F},\tilde{\mathscr{V}}^{*})=\lim_{\begin{subarray}{c}\epsilon\searrow
0\end{subarray}}\biggm{\\{}\varlimsup_{\begin{subarray}{c}T\to\infty\end{subarray}}\frac{1}{T}\log
r_{\rm inv}(T,\epsilon,\mathscr{F},\tilde{\mathscr{V}}^{*})\biggm{\\}}.$ (16)
###### Proof:
For any subspace $\mathscr{F}\subset\tilde{\mathscr{V}}^{*}$, suppose there
exists $u(t)\in\mathscr{C}(T,\epsilon,\mathscr{F},\tilde{\mathscr{V}}^{*})$
for almost all $t\in[0,\,T]$ such that
$\displaystyle\sup_{\begin{subarray}{c}t\in[0,\,T]\end{subarray}}\inf_{\begin{subarray}{c}y\in\tilde{\mathscr{V}}^{*}\end{subarray}}\|\phi_{\neg
j}(t,x_{0},u_{\neg j}(t))-y\|\leq\epsilon,~{}~{}\forall
j\in\mathcal{N},~{}~{}\forall x_{0}\in\mathscr{F},$
with some finite-positive number $\epsilon>0$. Then, we see that (16) (i.e.,
the (controlled)-invariance entropy of the multi-channel system in (1)) will
directly follow. Moreover, the supremum is taken over the set of all admissible controls that render $\tilde{\mathscr{V}}^{*}$ invariant. $\Box$
###### Remark 4
We remark that the aforementioned theorem essentially states that the set of
admissible controls $\mathscr{U}$ renders $\tilde{\mathscr{V}}^{*}$ invariant,
even if there is an intermittent failure in any one of the control-input
channels. Note that this quantity (which is also the minimum growth rate for
the number of open-loop control functions with respect to intermittently
faulty channel) provides a condition on the minimum amount of “information”
that is necessary to keep the system states in (or near) this subspace.
We conclude this section with the following result which is an immediate
corollary of Theorem 3.
###### Corollary 2
Suppose there exists a finite-positive number $\epsilon_{\mathcal{K}}>0$ such
that
$\displaystyle\max_{\begin{subarray}{c}j\in\mathcal{N}\end{subarray}}\sup_{\begin{subarray}{c}t\in[0,\,T]\end{subarray}}\inf_{\begin{subarray}{c}y\in\tilde{\mathscr{V}}_{\mathcal{K}}^{*}\end{subarray}}\|\phi_{\neg
j}(t,x_{0},u_{\neg j}(t))-y\|\leq\epsilon_{\mathcal{K}},\quad\forall x_{0}\in\mathscr{F},$
(17)
where
$\displaystyle\tilde{\mathscr{V}}_{\mathcal{K}}^{*}=\sup\left\\{\bigcap_{j\in\mathcal{N}}\mathscr{V}_{j}~{}\biggm{|}~{}\mathscr{V}_{j}\subset\sup\mathfrak{I}_{j}(A,B_{\neg
j};\mathscr{X}),~{}\forall
j\in\mathcal{N},~{}~{}\exists\,K\in\mathcal{K}\right\\}\supset\mathscr{F}.$
(18)
Then, the (controlled)-invariance entropy of the multi-channel system in (1)
is given by
$\displaystyle h_{\rm
inv}(\mathscr{F},\tilde{\mathscr{V}}_{\mathcal{K}}^{*})=\lim_{\begin{subarray}{c}\epsilon_{\mathcal{K}}\searrow
0\end{subarray}}\biggm{\\{}\varlimsup_{\begin{subarray}{c}T\to\infty\end{subarray}}\frac{1}{T}\log
r_{\rm
inv}(T,\epsilon_{\mathcal{K}},\mathscr{F},\tilde{\mathscr{V}}_{\mathcal{K}}^{*})\biggm{\\}}.$
(19)
###### Remark 5
Note that the bounds for $h_{\rm inv}(\mathscr{F},\tilde{\mathscr{V}}^{*})$
and $h_{\rm inv}(\mathscr{F},\tilde{\mathscr{V}}_{\mathcal{K}}^{*})$ are
different, since they may depend on their respective classes of control
functions. Moreover, the inequality $h_{\rm inv}(\mathscr{F},\tilde{\mathscr{V}}^{*})\leq h_{\rm inv}(\mathscr{F},\tilde{\mathscr{V}}_{\mathcal{K}}^{*})$ also holds true.
## References
* [1] Adler, R., Konheim, A. & McAndrew, M. Topological entropy. Transactions of the American Mathematical Society. 1965; 114:309–319.
* [2] Antsaklis, P. & Baillieul, J. Special issue on the technology of networked control systems. Proceedings of the IEEE. 2007; 95(1).
* [3] Basile, G. & Marro, G. Controlled and conditioned invariant subspaces in linear system theory. Journal of Optimization Theory and Applications. 1969; 3(5):306–315.
* [4] Basile, G. & Marro, G. Controlled and conditioned invariants in linear system theory. Prentice-Hall, Englewood Cliffs, New Jersey, 1992.
* [5] Bowen, R. Entropy for group endomorphisms and homogeneous spaces. Transactions of the American Mathematical Society. 1971; 153:401–414.
* [6] Colonius, F. & Kawan, C. Invariance entropy for control systems. SIAM Journal of Control and Optimization. 2009; 48(3):1701–1721.
* [7] Colonius, F. Minimal data rates and invariance entropy. In: Proceedings of the 19th International Symposium on Mathematical Theory of Networks and Systems, MTNS-2010, Budapest, Hungary, July 5–9, 2010.
* [8] Colonius, F. & Helmke, U. Entropy of controlled invariant subspaces. Technical Report. Preprint Nr. 31/2009, Institute of Mathematics, University of Augsburg, Germany.
* [9] Colonius, F. & Kawan, C. Invariance entropy with outputs. Mathematics of Control, Signals and Systems. 2011; 22:203–227.
* [10] Dinaburg, E. I. The relation between topological entropy and metric entropy. Doklady Akademii nauk SSSR. 1970; 190:19–22.
* [11] Downarowicz, T. Entropy in dynamical systems. Cambridge Press, 2011.
* [12] Gohberg, I., Lancaster, P. & Rodman, L. Invariant subspaces of matrices with applications. SIAM Classics in Applied Mathematics, 2006.
* [13] Hagihara, R. & Nair, G. N. Two extensions of topological feedback entropy. Available at: http://arxiv.org/abs/1209.6458. 2012.
* [14] Kawan, C. Invariance entropy for control systems. Ph.D. Thesis, University of Augsburg, Germany, 2011.
* [15] Kopf, C. Coding and entropy for piecewise continuous piecewise monotone transformations. Nonlinear Analysis. 2005; 61(1-2): 269–675.
* [16] Misiurewicz, M. & Ziemian, K. Horseshoes and entropy for piecewise continuous piecewise monotone maps. In: From Phase Transitions to Chaos. 498–500, World Scientific Publishing, 1992.
* [17] Nair, G. N., Evans, R. J., Mareels, I. M. Y., & Moran, W. Topological feedback entropy and nonlinear stabilization. IEEE Transactions on Automatic Control. 2004; 49(9):1585–1597.
* [18] Savkin, A. V. Analysis and synthesis of networked control systems: topological entropy, observability, robustness, and optimal control. Automatica. 2006; 42(1):51–62.
* [19] Sinai, Y. G. Topics in ergodic theory. Princeton Univ. Press, 1994.
* [20] Trentelman, H. L., Stoorvogel, A. A. & Hautus, M. L. J. Control theory for linear systems. Springer, London, 2001.
* [21] Walters, P. An introduction to ergodic theory. Springer, New York, 1982.
* [22] Wonham, W. M. Linear multivariable control: A geometric approach. Springer, New York, 1979.
|
arxiv-papers
| 2013-02-11T02:02:48 |
2024-09-04T02:49:41.614214
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Getachew K Befekadu",
"submitter": "Getachew Befekadu",
"url": "https://arxiv.org/abs/1302.2386"
}
|
1302.2435
|
# Comparison for upper tail probabilities of random series
Fuchang Gao [email protected], University of Idaho, Moscow, Idaho, USA
Zhenxia Liu [email protected], University of Idaho, Moscow, Idaho,
USA Xiangfeng Yang [email protected], Universidade de Lisboa, Lisboa, Portugal
###### Abstract
Let $\\{\xi_{n}\\}$ be a sequence of independent and identically distributed
random variables. In this paper we study the comparison for two upper tail
probabilities $\mathbb{P}\left\\{\sum_{n=1}^{\infty}a_{n}|\xi_{n}|^{p}\geq
r\right\\}$ and $\mathbb{P}\left\\{\sum_{n=1}^{\infty}b_{n}|\xi_{n}|^{p}\geq
r\right\\}$ as $r\rightarrow\infty$ with two different real series
$\\{a_{n}\\}$ and $\\{b_{n}\\}.$ The first result is for Gaussian random
variables $\\{\xi_{n}\\},$ and in this case these two probabilities are
equivalent after suitable scaling. The second result is for more general
random variables, thus a weaker form of equivalence (namely, logarithmic
level) is proved.
Keywords and phrases: tail probability, random series, small deviation
AMS 2010 subject classifications: 60F10; 60G50
## 1 Introduction
Let $\\{\xi_{n}\\}$ be a sequence of independent and identically distributed
(i.i.d.) random variables, and $\\{a_{n}\\}$ be a sequence of positive real
numbers. We consider the random series $\sum_{n=1}^{\infty}a_{n}\xi_{n}$. Such
random series are basic objects in time series analysis and in regression
models (see [2]), and there has been a lot of research on them. For example, [5] and
[6] studied tail probabilities and moment estimates of the random series when
$\\{\xi_{n}\\}$ have logarithmically concave tails. Of special interest are
the series of positive random variables, or the series of the form
$\sum_{n=1}^{\infty}a_{n}|\xi_{n}|^{p}$. Indeed, by the Karhunen-Loève expansion,
the $L_{2}$ norm of a centered continuous Gaussian process $X(t),t\in[0,1],$
can be represented as $\|X\|_{L_{2}}=\sum_{n=1}^{\infty}\lambda_{n}Z_{n}^{2}$
where $\lambda_{n}$ are the eigenvalues of the associated covariance operator,
and $Z_{n}$ are i.i.d. standard Gaussian random variables. It is also known
(see [7]) that the series $\sum_{n=1}^{\infty}a_{n}|Z_{n}|^{p}$ coincides with the supremum of some bounded Gaussian process $\\{Y_{t},t\in\mathrm{T}\\}$, where $\mathrm{T}$
is a suitable parameter set:
$\sum_{n=1}^{\infty}a_{n}|Z_{n}|^{p}=\sup_{\mathrm{T}}Y_{t}.$
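For instance, for standard Brownian motion on $[0,1]$ the eigenvalues are $\lambda_{n}=((n-1/2)\pi)^{-2}$, and a quick numerical sanity check (our illustration, not taken from the paper) recovers $\mathbb{E}\|B\|_{L_{2}}^{2}=\sum_{n}\lambda_{n}=1/2$:

```python
import numpy as np

# Karhunen-Loeve eigenvalues of standard Brownian motion on [0, 1].
n = np.arange(1, 200_001)
lam = ((n - 0.5) * np.pi) ** -2
print(lam.sum())   # ~ 0.5 = E \int_0^1 B(t)^2 dt
```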
In this paper, we study the limiting behavior of the upper tail
probability of the series
$\displaystyle\mathbb{P}\left\\{\sum_{n=1}^{\infty}a_{n}|\xi_{n}|^{p}\geq
r\right\\}\qquad\text{ as }r\rightarrow\infty.$ (1.1)
This probability is also called large deviation probability (see [1]). As
remarked in [4], for Gaussian process
$\|X\|_{L_{2}}=\sum_{n=1}^{\infty}\lambda_{n}Z_{n}^{2},$ the eigenvalues
$\lambda_{n}$ are rarely found exactly. Often, one only knows the asymptotic
approximation. Thus, a natural question is to study the relation between the
upper tail probability of the original random series and the one with
approximated eigenvalues. Also, it is much easier to analyze the rate function
in large deviation theory when $\\{a_{n}\\}$ are given explicitly instead of by an asymptotic approximation.
Throughout this paper, the following notations will be used. The $l^{q}$ norm
of a real sequence $a=\\{a_{n}\\}$ is denoted by
$||a||_{q}=\left(\sum_{n=1}^{\infty}a_{n}^{q}\right)^{1/q}.$ In particular,
the $l^{\infty}$ norm should be understood as $||a||_{\infty}=\max|a_{n}|.$
We focus on the following two types of comparisons. The first is at the exact
level
$\displaystyle\frac{\mathbb{P}\left\\{\sum_{n=1}^{\infty}a_{n}|\xi_{n}|\geq
r\|a\|_{2}\beta+|\alpha|\sum_{n=1}^{\infty}a_{n}\right\\}}{\mathbb{P}\left\\{\sum_{n=1}^{\infty}b_{n}|\xi_{n}|\geq
r\|b\|_{2}\beta+|\alpha|\sum_{n=1}^{\infty}b_{n}\right\\}}\sim 1\qquad\text{
as }r\rightarrow\infty$ (1.2)
where $\\{\xi_{n}\\}$ are i.i.d. Gaussian random variables
$N(\alpha,\beta^{2});$ see Theorem 2.1 and Theorem 2.2. This is motivated by
[3] in which the following exact level comparison theorems for small
deviations were obtained: as $r\rightarrow 0,$
$\mathbb{P}\left\\{\sum_{n=1}^{\infty}a_{n}|\xi_{n}|\leq r\right\\}\sim
c\mathbb{P}\left\\{\sum_{n=1}^{\infty}b_{n}|\xi_{n}|\leq r\right\\}$ for
i.i.d. random variables $\\{\xi_{n}\\}$ whose common distribution satisfies
several weak assumptions in the vicinity of zero. The proof of the small
deviation comparison is based on the equivalence form of
$\mathbb{P}\left\\{\sum_{n=1}^{\infty}a_{n}|\xi_{n}|\leq r\right\\}$
introduced in [8]. Our proof of upper tail probability comparison (1.2) is
also based on an equivalent form of
$\mathbb{P}\left\\{\sum_{n=1}^{\infty}a_{n}|\xi_{n}|\geq r\right\\}$ in [7]
for Gaussian random variables. The main difficulty is to come up with suitable
inequalities which can be used for a specified function
$\widehat{\varepsilon}(x,y)$ in Lemma 2.1, and such inequalities are obtained
in Lemma 2.3 and Lemma 2.4.
For more general random variables, difficulties arise due to the lack of known
equivalent form of $\mathbb{P}\left\\{\sum_{n=1}^{\infty}a_{n}|\xi_{n}|\geq
r\right\\}.$ Thus, instead of exact comparison, we consider logarithmic level
comparison for upper tail probabilities
$\displaystyle\frac{\log\mathbb{P}\left\\{\sum_{n=1}^{\infty}a_{n}|\xi_{n}|\geq
r\|a\|_{q}\right\\}}{\log\mathbb{P}\left\\{\sum_{n=1}^{\infty}b_{n}|\xi_{n}|\geq
r\|b\|_{q}\right\\}}\sim 1\qquad\text{ as }r\rightarrow\infty.$ (1.3)
It turns out that under suitable conditions on the sequences $\\{a_{n}\\}$ and
$\\{b_{n}\\}$ the comparison (1.3) holds true for i.i.d. random variables
$\\{\xi_{n}\\}$ satisfying
$\lim_{u\rightarrow\infty}u^{-p}\log\mathbb{P}\left\\{|\xi_{1}|\geq
u\right\\}=-c$
for some finite constants $p\geq 1$ and $c>0;$ see Theorem 3.1. Here we note
that logarithmic level comparisons for small deviation probabilities can be
found in [4].
From comparisons (1.2) and (1.3), we see that two upper tail probabilities are
equivalent as long as suitable scaling is made. We believe that this holds
true for more general random variables; see the conjecture at the end of
Section 2 for details.
## 2 Exact comparisons for Gaussian random series
### 2.1 The main results
The following two theorems are the main results in this section. The first one
is on standard Gaussian random variables.
###### Theorem 2.1.
Let $\\{Z_{n}\\}$ be a sequence of i.i.d. standard Gaussian random variables
$N(0,1),$ and $\\{a_{n}\\},\\{b_{n}\\}$ be two non-increasing sequences of
positive real numbers such that
$\sum_{n=1}^{\infty}a_{n}<\infty,\sum_{n=1}^{\infty}b_{n}<\infty,$
$\displaystyle\prod_{n=1}^{\infty}\left(2-\frac{a_{n}/\|a\|_{2}}{b_{n}/\|b\|_{2}}\right)\text{
and
}\prod_{n=1}^{\infty}\left(2-\frac{b_{n}/\|b\|_{2}}{a_{n}/\|a\|_{2}}\right)\text{
converge}.$ (2.1)
Then as $r\rightarrow\infty$
$\displaystyle\mathbb{P}\left\\{\sum_{n=1}^{\infty}a_{n}|Z_{n}|\geq
r\|a\|_{2}\right\\}\sim\mathbb{P}\left\\{\sum_{n=1}^{\infty}b_{n}|Z_{n}|\geq
r\|b\|_{2}\right\\}.$
For general Gaussian random variables $Z_{n},$ it turns out that the condition
(2.1) is not convenient to derive the comparison because some more complicated
terms appear in the proof. Therefore, an equivalent condition in another form
is formulated, which yields the following comparison.
###### Theorem 2.2.
Let $\\{Z_{n}\\}$ be a sequence of i.i.d. Gaussian random variables
$N(\alpha,\beta^{2}),$ and $\\{a_{n}\\},\\{b_{n}\\}$ be two non-increasing
sequences of positive real numbers such that
$\sum_{n=1}^{\infty}a_{n}<\infty,\sum_{n=1}^{\infty}b_{n}<\infty,$
$\displaystyle\sum_{n=1}^{\infty}\left(1-\frac{a_{n}/\|a\|_{2}}{b_{n}/\|b\|_{2}}\right)\text{
converges, and
}\sum_{n=1}^{\infty}\left(1-\frac{a_{n}/\|a\|_{2}}{b_{n}/\|b\|_{2}}\right)^{2}<\infty.$
(2.2)
Then as $r\rightarrow\infty$
$\displaystyle\mathbb{P}$
$\displaystyle\left\\{\sum_{n=1}^{\infty}a_{n}|Z_{n}|\geq
r\|a\|_{2}\beta+|\alpha|\sum_{n=1}^{\infty}a_{n}\right\\}$
$\displaystyle\qquad\qquad\sim\mathbb{P}\left\\{\sum_{n=1}^{\infty}b_{n}|Z_{n}|\geq
r\|b\|_{2}\beta+|\alpha|\sum_{n=1}^{\infty}b_{n}\right\\}.$
### 2.2 Proofs of Theorem 2.1 and Theorem 2.2
The function $\Phi$ stands for the distribution function of a standard
Gaussian random variable
$\Phi(x)=\int_{-\infty}^{x}\frac{1}{\sqrt{2\pi}}e^{-u^{2}/2}du.$
The first lemma is our starting point.
###### Lemma 2.1 ([7]).
Let $\\{\xi_{n}\\}$ be a sequence of i.i.d. Gaussian random variables
$N(\alpha,\beta^{2}),$ and $\\{a_{n}\\}$ be a sequence of positive real
numbers such that $\sum_{n=1}^{\infty}a_{n}<\infty.$ Then as
$r\rightarrow\infty$
$\displaystyle\mathbb{P}$
$\displaystyle\left\\{\sum_{n=1}^{\infty}a_{n}|\xi_{n}|\geq r\right\\}$ (2.3)
$\displaystyle\sim\prod_{n=1}^{\infty}\widehat{\varepsilon}\left(\frac{a_{n}(r-|\alpha|\sum_{n=1}^{\infty}a_{n})}{||a||_{2}^{2}\beta},\frac{\alpha}{\beta}\right)\cdot\left[1-\Phi\left(\frac{r-|\alpha|\sum_{n=1}^{\infty}a_{n}}{||a||_{2}\beta}\right)\right]$
where $\widehat{\varepsilon}(x,y)=\Phi(x+|y|)+\exp\\{-2x|y|\\}\Phi(x-|y|).$
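The asymptotic relation (2.3) is easy to probe numerically. The following sketch (ours; $\alpha=0$, $\beta=1$, $a_{n}=2^{-n}$ truncated at $N=30$ terms, so that $\widehat{\varepsilon}(x,0)=2\Phi(x)$) compares a Monte Carlo estimate of the tail probability with the right-hand side of (2.3); since (2.3) is only an asymptotic statement as $r\to\infty$, agreement at moderate $r$ is approximate.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

N = 30
a = 0.5 ** np.arange(1, N + 1)          # a_n = 2^{-n}, truncated at N terms
l2 = np.linalg.norm(a)

def tail_mc(r, n_samples=1_000_000, chunk=100_000):
    hits = 0
    for _ in range(n_samples // chunk):
        Z = np.abs(rng.standard_normal((chunk, N)))
        hits += np.count_nonzero(Z @ a >= r)
    return hits / n_samples

def tail_asymptotic(r):
    # Right-hand side of (2.3) with alpha = 0 and beta = 1.
    return np.prod(2.0 * norm.cdf(r * a / l2**2)) * (1.0 - norm.cdf(r / l2))

for r in (1.5, 2.0, 2.5):
    print(r, tail_mc(r), tail_asymptotic(r))
```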
###### Lemma 2.2 (Lemma 5 in [3]).
Suppose $\\{c_{n}\\}$ is a sequence of real numbers such that
$\sum_{n=1}^{\infty}c_{n}$ converges, and $g$ has total variation $D$ on
$[0,\infty).$ Then, for any monotonic non-negative sequence $\\{d_{n}\\},$
$\left|\sum_{n\geq
N}c_{n}g(d_{n})\right|\leq(D+\sup_{x}|g(x)|)\sup_{k>N}\left|\sum_{n=N}^{k}c_{n}\right|.$
As mentioned in the introduction, the key step of the proofs is to come up
with suitable inequalities that can be used for the function
$\widehat{\varepsilon}(x,y)$ in Lemma 2.1. For the proof of Theorem 2.1, we
need the following
###### Lemma 2.3.
For $a\leq 0$ and small enough $\delta,$ we have
$1+a\cdot\delta\leq(1+\delta)^{a}.$
The proof of this lemma is trivial. The proof of Theorem 2.2 requires a more
complicated inequality as follows.
###### Lemma 2.4.
For a fixed $\sigma>0$ and any $\gamma>0,$ there is a constant
$\lambda(\sigma)$ only depending on $\sigma$ such that for any $|a|\leq\sigma$
and $|\delta|\leq\lambda,$
$1+a\cdot\delta+\gamma\leq(1+\delta)^{a}(1+\delta^{2})(1+\gamma)^{2}.$
The proof of Lemma 2.4 is elementary (but not trivial) which is given at the
end of this section.
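A quick numerical spot-check of Lemma 2.4 (with Lemma 2.3 as the $\gamma=0$, $a\leq 0$ special case) is given below; this is our illustration only, and the value of $\lambda$ used is a trial choice for $\sigma=2$, not the constant produced by the proof.

```python
import numpy as np

# Check 1 + a*delta + gamma <= (1 + delta)^a (1 + delta^2) (1 + gamma)^2 on a grid.
sigma, lam = 2.0, 0.05
a_vals = np.linspace(-sigma, sigma, 41)
d_vals = np.linspace(-lam, lam, 41)
g_vals = np.linspace(0.0, 1.0, 21)

worst = np.inf
for a in a_vals:
    for d in d_vals:
        for g in g_vals:
            lhs = 1.0 + a * d + g
            rhs = (1.0 + d) ** a * (1.0 + d * d) * (1.0 + g) ** 2
            worst = min(worst, rhs - lhs)
print("min(rhs - lhs) on the grid:", worst)   # nonnegative
```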
###### Proof of Theorem 2.1.
By replacing $a_{n}$ and $b_{n}$ with $\tilde{a}_{n}=a_{n}/\|a\|_{2}$ and $\tilde{b}_{n}=b_{n}/\|b\|_{2}$ if necessary, we may assume that $\|a\|_{2}=\|b\|_{2}=1.$ It
follows from Lemma 2.1 that
$\mathbb{P}\left\\{\sum_{n=1}^{\infty}a_{n}|Z_{n}|\geq
r\right\\}\sim\prod_{n=1}^{\infty}2\Phi\left(ra_{n}\right)\cdot\left[1-\Phi\left(r\right)\right].$
Therefore,
$\displaystyle\frac{\mathbb{P}\left\\{\sum_{n=1}^{\infty}a_{n}|Z_{n}|\geq
r\right\\}}{\mathbb{P}\left\\{\sum_{n=1}^{\infty}b_{n}|Z_{n}|\geq
r\right\\}}\sim\prod_{n=1}^{\infty}\frac{\Phi\left(ra_{n}\right)}{\Phi\left(rb_{n}\right)}.$
Now we prove that
$\prod_{n=N}^{\infty}\frac{\Phi\left(ra_{n}\right)}{\Phi\left(rb_{n}\right)}$
tends to $1$ as $N\rightarrow\infty$ uniformly in $r.$ Then the limit of
$\prod_{n=1}^{\infty}\frac{\Phi\left(ra_{n}\right)}{\Phi\left(rb_{n}\right)}$
as $r\rightarrow\infty$ is equal to $1$ since the limit of each
$\frac{\Phi\left(ra_{n}\right)}{\Phi\left(rb_{n}\right)}$ as
$r\rightarrow\infty$ is $1.$
By applying Taylor’s expansion to $\Phi$ up to the second order, we have
$\displaystyle\Phi\left(ra_{n}\right)=$
$\displaystyle\Phi\left(rb_{n}\right)+\Phi^{\prime}\left(rb_{n}\right)\left(ra_{n}-rb_{n}\right)$
$\displaystyle+\frac{\Phi^{\prime\prime}(rc_{n})}{2}\left(ra_{n}-rb_{n}\right)^{2}$
where $c_{n}$ is between $a_{n}$ and $b_{n}.$ It follows from
$\Phi^{\prime\prime}(rc_{n})\leq 0$ that
$\displaystyle\frac{\Phi\left(ra_{n}\right)}{\Phi\left(rb_{n}\right)}\leq
1+\frac{rb_{n}\Phi^{\prime}\left(rb_{n}\right)}{\Phi\left(rb_{n}\right)}\left(\frac{a_{n}}{b_{n}}-1\right).$
Let us introduce a new function $g(x)=-\frac{x\Phi^{\prime}(x)}{\Phi(x)}.$ Now
we apply Lemma 2.3 with $a=g(rb_{n})$ to get
$\displaystyle\frac{\Phi\left(ra_{n}\right)}{\Phi\left(rb_{n}\right)}\leq\left(2-\frac{a_{n}}{b_{n}}\right)^{g(rb_{n})}.$
It then follows from Lemma 2.2 that
$\displaystyle\prod_{n\geq
N}\frac{\Phi\left(ra_{n}\right)}{\Phi\left(rb_{n}\right)}$
$\displaystyle\leq\exp\left\\{\sum_{n\geq
N}g(rb_{n})\log\left(2-\frac{a_{n}}{b_{n}}\right)\right\\}$
$\displaystyle\leq\exp\left\\{(D+\sup_{x}|g(x)|)\sup_{k>N}\left|\sum_{n=N}^{k}\log\left(2-\frac{a_{n}}{b_{n}}\right)\right|\right\\}$
which tends to $1$ uniformly in $r$ from condition (2.1). Thus
$\displaystyle\limsup_{N\rightarrow\infty}\prod_{n\geq
N}\frac{\Phi\left(ra_{n}\right)}{\Phi\left(rb_{n}\right)}\leq 1.$
Similarly,
$\displaystyle\limsup_{N\rightarrow\infty}\prod_{n\geq
N}\frac{\Phi\left(rb_{n}\right)}{\Phi\left(ra_{n}\right)}\leq 1$
which completes the proof. ∎
###### Proof of Theorem 2.2.
From Lemma 2.1 we get
$\displaystyle\frac{\mathbb{P}\left\\{\sum_{n=1}^{\infty}a_{n}|\xi_{n}|\geq
r\|a\|_{2}\beta+|\alpha|\sum_{n=1}^{\infty}a_{n}\right\\}}{\mathbb{P}\left\\{\sum_{n=1}^{\infty}b_{n}|\xi_{n}|\geq
r\|b\|_{2}\beta+|\alpha|\sum_{n=1}^{\infty}b_{n}\right\\}}\sim\prod_{n=1}^{\infty}\frac{h(ra_{n}/\|a\|_{2})}{h(rb_{n}/\|b\|_{2})}$
where
$h(x)=\Phi(x+|\alpha/\beta|)+\exp\\{-2x|\alpha/\beta|\\}\Phi(x-|\alpha/\beta|).$
Without loss of generality, we assume $\|a\|_{2}=\|b\|_{2}=1.$ We use the
notation $f(x)=\exp\\{-2x|\alpha/\beta|\\}\Phi(x-|\alpha/\beta|),$ thus
$h(ra_{n})=\Phi(ra_{n}+|\alpha/\beta|)+f(ra_{n}).$
Now we apply Taylor’s expansions to $\Phi$ at point $rb_{n}+|\alpha/\beta|$,
and to $f$ at point $rb_{n}$ both up to the second order, so
$\displaystyle h(ra_{n})=$
$\displaystyle\Phi(rb_{n}+|\alpha/\beta|)+rb_{n}\Phi^{\prime}(rb_{n}+|\alpha/\beta|)\left(\frac{a_{n}}{b_{n}}-1\right)$
$\displaystyle\qquad\qquad\qquad+\Phi^{\prime\prime}(rc_{1,n}+|\alpha/\beta|)\left(ra_{n}-rb_{n}\right)^{2}/2$
$\displaystyle+f(rb_{n})+rb_{n}f^{\prime}(rb_{n})\left(\frac{a_{n}}{b_{n}}-1\right)+\frac{r^{2}b_{n}^{2}f^{\prime\prime}(rc_{2,n})}{2}\left(\frac{a_{n}}{b_{n}}-1\right)^{2}$
where $c_{1,n}$ and $c_{2,n}$ are between $a_{n}$ and $b_{n}.$ Because
$\Phi^{\prime\prime}\leq 0,$
$\displaystyle h(ra_{n})\leq$ $\displaystyle
h(rb_{n})+rb_{n}\left[\Phi^{\prime}(rb_{n}+|\alpha/\beta|)+f^{\prime}(rb_{n})\right]\left(\frac{a_{n}}{b_{n}}-1\right)$
$\displaystyle\qquad\qquad\qquad+\frac{r^{2}b_{n}^{2}f^{\prime\prime}(rc_{2,n})}{2}\left(\frac{a_{n}}{b_{n}}-1\right)^{2}.$
Taking into account that
$\left|r^{2}b_{n}^{2}f^{\prime\prime}(rc_{2,n})\right|\leq 2c(|\alpha/\beta|)$
for large $N$ uniformly in $r$ with some positive constant $c$ depending on
$|\alpha/\beta|,$ we have
$\displaystyle\frac{h(ra_{n})}{h(rb_{n})}\leq
1+\frac{rb_{n}\left[\Phi^{\prime}(rb_{n}+|\alpha/\beta|)+f^{\prime}(rb_{n})\right]}{h(rb_{n})}\left(\frac{a_{n}}{b_{n}}-1\right)+c\left(\frac{a_{n}}{b_{n}}-1\right)^{2}.$
The function
$g(x):=x\left[\Phi^{\prime}(x+|\alpha/\beta|)+f^{\prime}(x)\right]/h(x)$ is
bounded and continuously differentiable on $[0,\infty)$ with a bounded
derivative. Therefore it follows from Lemma 2.4 that
$\displaystyle\frac{h(ra_{n})}{h(rb_{n})}\leq\left(\frac{a_{n}}{b_{n}}\right)^{g(rb_{n})}\left(1+\left(\frac{a_{n}}{b_{n}}-1\right)^{2}\right)\left(1+c\left(\frac{a_{n}}{b_{n}}-1\right)^{2}\right)^{2}.$
By taking the infinite product, we get
$\displaystyle\prod_{n=N}^{\infty}\frac{h(ra_{n})}{h(rb_{n})}\leq\prod_{n=N}^{\infty}\left(\frac{a_{n}}{b_{n}}\right)^{g(rb_{n})}\prod_{n=N}^{\infty}\left(1+\left(\frac{a_{n}}{b_{n}}-1\right)^{2}\right)\left(1+c\left(\frac{a_{n}}{b_{n}}-1\right)^{2}\right)^{2}.$
According to Lemma 2.2, the first product
$\displaystyle\prod_{n=N}^{\infty}\left(\frac{a_{n}}{b_{n}}\right)^{g(rb_{n})}$
$\displaystyle=\exp\left\\{\sum_{n\geq
N}g(rb_{n})\log\left(\frac{a_{n}}{b_{n}}\right)\right\\}$
$\displaystyle\leq\exp\left\\{(D+\sup_{x}|g(x)|)\sup_{k>N}\left|\sum_{n=N}^{k}\log\left(\frac{a_{n}}{b_{n}}\right)\right|\right\\}$
which tends to $1$ because the series
$\sum_{n=1}^{\infty}\log\left(\frac{a_{n}}{b_{n}}\right)$ is convergent (this
is from condition (2.2), see Appendix for more details).
For the second product, we use $1+x\leq e^{x}$ to get
$\displaystyle\prod_{n=N}^{\infty}\left(1+\left(\frac{a_{n}}{b_{n}}-1\right)^{2}\right)\left(1+c\left(\frac{a_{n}}{b_{n}}-1\right)^{2}\right)^{2}$
$\displaystyle\leq\exp\left\\{(1+2c)\sum_{n\geq
N}\left(\frac{a_{n}}{b_{n}}-1\right)^{2}\right\\}$
and this tends to $1$ because of (2.2). Thus
$\displaystyle\limsup_{N\rightarrow\infty}\prod_{n=N}^{\infty}\frac{h(ra_{n})}{h(rb_{n})}\leq
1.$
We can similarly prove
$\limsup_{N\rightarrow\infty}\prod_{n=N}^{\infty}\frac{h(rb_{n})}{h(ra_{n})}\leq
1$ which ends the proof. ∎
###### Proof of Lemma 2.4.
We first show that under the assumptions of Lemma 2.4, the following
inequality holds
$\displaystyle 1+a\cdot\delta\leq(1+\delta)^{a}(1+\delta^{2}).$ (2.4)
Let us consider the function $p(\delta)$ for $|\delta|<1$ and $|a\delta|<1$
defined as
$p(\delta)=a\log(1+\delta)+\log(1+\delta^{2})-\log(1+a\delta).$
It is clear that $p(0)=0$ and
$\displaystyle
p^{\prime}(\delta)=\frac{\delta}{(1+\delta)(1+\delta^{2})(1+a\delta)}\left[\delta^{2}\left(a^{2}+a\right)+\delta\left(2a+2\right)+\left(a^{2}-a+2\right)\right]$
in which the bracketed factor is greater than $3/2$ for sufficiently small $\lambda_{1}$ depending on $\sigma$ with $|a|\leq\sigma$ and $|\delta|\leq\lambda_{1},$ since $a^{2}-a+2\geq 7/4.$ Hence $p^{\prime}(\delta)$ has the same sign as $\delta,$ so $p$ attains its minimum value $p(0)=0$ at $\delta=0.$ Inequality (2.4) is thus proved.
Now we define a new function
$q(\gamma)=(1+\delta)^{a}(1+\delta^{2})(1+\gamma)^{2}-(1+a\delta+\gamma).$
From (2.4) we have $q(0)\geq 0.$ Furthermore,
$q^{\prime}(\gamma)=(1+\delta)^{a}(1+\delta^{2})2(1+\gamma)-1$
which can be made positive for small $\lambda_{2}$ depending on $\sigma$ with
$|\delta|\leq\lambda_{2}.$ The proof is complete by taking
$\lambda=\min\\{\lambda_{1},\lambda_{2}\\}.$ ∎
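As a quick numerical sanity check of inequality (2.4), the following short sketch evaluates both sides on a grid; the bound $\sigma$ and the window $\lambda$ below are illustrative choices, not values from the proof.

```python
# Numerical sanity check of (2.4): 1 + a*delta <= (1+delta)^a * (1+delta^2)
# for |a| <= sigma and |delta| <= lambda small; sigma and lam are illustrative.
import numpy as np

sigma, lam = 3.0, 0.05
a_grid = np.linspace(-sigma, sigma, 61)
d_grid = np.linspace(-lam, lam, 201)

worst = 0.0
for a in a_grid:
    for d in d_grid:
        lhs = 1.0 + a * d
        rhs = (1.0 + d) ** a * (1.0 + d ** 2)
        worst = max(worst, lhs - rhs)

print("max(lhs - rhs) over the grid:", worst)   # should be <= 0 (up to rounding)
```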
### 2.3 Appropriate extensions
By again using an equivalent form for
$\mathbb{P}\left\\{\sum_{n=1}^{\infty}a_{n}|Z_{n}|^{p}\geq r\right\\}$
discussed in [7] with $1\leq p<2,$ we can similarly derive, without much
difficulty, an exact comparison for the upper tail probabilities of
$\sum_{n=1}^{\infty}a_{n}|Z_{n}|^{p}.$ We state this as the following
proposition, without proof.
###### Proposition 2.5.
Let $\\{\xi_{n}\\}$ be a sequence of i.i.d. Gaussian random variables
$N(\alpha,\beta^{2}),$ and $\\{a_{n}\\},\\{b_{n}\\}$ be two sequences of
positive real numbers such that
$\sum_{n=1}^{\infty}a_{n}<\infty,\sum_{n=1}^{\infty}b_{n}<\infty$ and
$\displaystyle\sum_{n=1}^{\infty}\left|1-\frac{a_{n}/\sigma_{a}^{p}}{b_{n}/\sigma_{b}^{p}}\right|<\infty$
(2.5)
for $1\leq p<2,$ where
$\sigma_{a}=\left(\sum_{n=1}^{\infty}a_{n}^{m/p}\right)^{1/m}\beta$ with
$m=2p/(2-p)$ and $\sigma_{b}$ is defined analogously. Then as $r\rightarrow\infty$
$\displaystyle\mathbb{P}$
$\displaystyle\left\\{\sum_{n=1}^{\infty}a_{n}|\xi_{n}|^{p}\geq\left(r\sigma_{a}+|\alpha|\sum_{n=1}^{\infty}a_{n}^{1/p}\right)^{p}\right\\}$
$\displaystyle\qquad\qquad\sim\mathbb{P}\left\\{\sum_{n=1}^{\infty}b_{n}|\xi_{n}|^{p}\geq\left(r\sigma_{b}+|\alpha|\sum_{n=1}^{\infty}b_{n}^{1/p}\right)^{p}\right\\}.$
Based on what we have observed for Gaussian random variables so far, it is
reasonable to believe that after suitable scaling, two upper tail
probabilities involving $\\{a_{n}\\}$ and $\\{b_{n}\\}$ separately are
equivalent. Namely, we have the following.
Conjecture: Under suitable conditions on $\\{a_{n}\\}$ and $\\{b_{n}\\},$ for
general i.i.d. random variables $\\{\xi_{n}\\},$ the following exact
comparison holds
$\displaystyle\mathbb{P}$
$\displaystyle\left\\{\sum_{n=1}^{\infty}a_{n}|\xi_{n}|\geq
h\Big{(}rf^{\xi}(a)+g^{\xi}(a)\Big{)}\right\\}\sim\mathbb{P}\left\\{\sum_{n=1}^{\infty}b_{n}|\xi_{n}|\geq
h\Big{(}rf^{\xi}(b)+g^{\xi}(b)\Big{)}\right\\}$
for some function $h(r)$ satisfying $\lim_{r\rightarrow\infty}h(r)=\infty,$
and for two suitable scaling coefficients $f^{\xi}(a)$ and $g^{\xi}(a)$ whose
values at the sequence $a=\\{a_{n}\\}$ depend only on $a$ and the structure of the
distribution of $\xi_{1}$ (such as the mean, the variance, the tail behavior,
etc.).
In the next section, we show that the two upper tail probabilities are indeed
equivalent at the logarithmic level after suitable scaling. This adds further
evidence for our conjecture.
## 3 Logarithmic level comparison
In this section, we illustrate the logarithmic level comparison for random
variables $\\{\xi_{n}\\}$ more general than the Gaussian ones.
###### Theorem 3.1.
Let $\\{\xi_{n}\\}$ be a sequence of i.i.d. random variables whose common
distribution satisfies $\mathbb{E}|\xi_{1}|<\infty$ and
$\displaystyle\lim_{u\rightarrow\infty}u^{-p}\log\mathbb{P}\left\\{|\xi_{1}|\geq
u\right\\}=-c$ (3.1)
for some constants $p\geq 1$ and $0<c<\infty.$ Suppose that a sequence of
positive real numbers $\\{a_{n}\\}$ is such that
$\sum_{n=1}^{\infty}a_{n}^{2\wedge q}<\infty$ with $q$ given by
$\frac{1}{p}+\frac{1}{q}=1.$ Then as $r\rightarrow\infty$
$\displaystyle\log\mathbb{P}\left\\{\sum_{n=1}^{\infty}a_{n}|\xi_{n}|\geq
r\right\\}\sim-r^{p}\cdot c\cdot\|a\|_{q}^{-p}.$ (3.2)
###### Remark 3.1.
If $\xi_{1}$ is the standard Gaussian random variable, then $p=2$ and $c=1/2$
in condition (3.1). If $\xi_{1}$ is an exponential random variable with
density function $e^{-x}$ on $[0,\infty),$ then $p=c=1.$ One can easily
produce more examples. It is straightforward to deduce the following
comparison result from (3.2).
###### Corollary 3.2.
Let $\\{\xi_{n}\\}$ be a sequence of i.i.d. random variables satisfying the
assumptions in Theorem 3.1. Suppose that two sequences of positive real
numbers $\\{a_{n}\\}$ and $\\{b_{n}\\}$ satisfy
$\sum_{n=1}^{\infty}a_{n}^{2\wedge q}<\infty$ and
$\sum_{n=1}^{\infty}b_{n}^{2\wedge q}<\infty$ with $q$ given by
$\frac{1}{p}+\frac{1}{q}=1.$ Then as $r\rightarrow\infty$
$\frac{\log\mathbb{P}\left\\{\sum_{n=1}^{\infty}a_{n}|\xi_{n}|\geq
r\right\\}}{\log\mathbb{P}\left\\{\sum_{n=1}^{\infty}b_{n}|\xi_{n}|\geq
r\right\\}}\sim\left(\frac{\|b\|_{q}}{\|a\|_{q}}\right)^{p}$
and
$\frac{\log\mathbb{P}\left\\{\sum_{n=1}^{\infty}a_{n}|\xi_{n}|\geq
r\|a\|_{q}\right\\}}{\log\mathbb{P}\left\\{\sum_{n=1}^{\infty}b_{n}|\xi_{n}|\geq
r\|b\|_{q}\right\\}}\sim 1.$
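Before turning to the proof, the following crude Monte Carlo sketch illustrates the asymptotics (3.2) in the standard Gaussian case ($p=2$, $c=1/2$, $q=2$, as in Remark 3.1). The weights, the truncation of the series and the sample size are illustrative choices only, and the logarithmic asymptotics are only approached for moderately large $r$.

```python
# Crude Monte Carlo illustration of (3.2) for standard Gaussian xi (p=2, c=1/2, q=2):
# log P{ sum a_n|xi_n| >= r } ~ -r^2 / (2 ||a||_2^2).
import numpy as np

rng = np.random.default_rng(0)
a = 1.0 / np.arange(1, 51) ** 2          # assumed square-summable weights, truncated
norm_a_sq = float(np.sum(a ** 2))        # ||a||_2^2 of the truncated sequence

n_samples = 1_000_000
S = np.zeros(n_samples)
for an in a:                             # accumulate sum_n a_n * |xi_n| term by term
    S += an * np.abs(rng.standard_normal(n_samples))

for r in (2.0, 3.0, 4.0):
    p_hat = float(np.mean(S >= r))
    if p_hat > 0:
        print(r, np.log(p_hat), -r ** 2 / (2.0 * norm_a_sq))
```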
The proof of Theorem 3.1 is based on the large deviation principle for random
series which was derived in [1]. Let us recall a result in [1] (revised a
little for our purpose).
###### Lemma 3.3 ([1]).
Let $\\{\eta_{k}\\}$ be a sequence of i.i.d. random variables with mean zero
satisfying the following condition
$\begin{cases}&\lim_{u\rightarrow\infty}u^{-p}\log\mathbb{P}\left\\{\eta_{1}\leq-u\right\\}=-c_{1};\\\
&\lim_{u\rightarrow\infty}u^{-p}\log\mathbb{P}\left\\{\eta_{1}\geq
u\right\\}=-c_{2},\end{cases}$ (3.3)
for some $p\geq 1$ and $0<c_{1},c_{2}\leq\infty$ with
$\min\\{c_{1},c_{2}\\}<\infty.$ Suppose $\\{x_{k}\\}$ is a sequence of real
numbers such that $\sum_{k=1}^{\infty}|x_{k}|^{2\wedge q}<\infty.$ Then the
family $\\{n^{-1}\sum_{k=1}^{\infty}x_{k}\eta_{k}\\}$ satisfies the large
deviation principle with speed $n^{p}$ and a rate function
$I(z)=\inf\left\\{\sum_{j=1}^{\infty}\psi(u_{j}):\sum_{j=1}^{\infty}u_{j}x_{j}=z\right\\},\,\,\,z\in\mathbb{R}$
where
$\displaystyle\psi(t)=\begin{cases}c_{1}|t|^{p}&\text{ if }t<0;\\\ 0&\text{ if
}t=0;\\\ c_{2}|t|^{p}&\text{ if }t>0.\end{cases}$
Namely, for any measurable set $A\subseteq\mathbb{R},$
$\displaystyle-\inf\\{I(y):y\in\text{interior of
}A\\}\leq\liminf_{n\rightarrow\infty}n^{-p}\log\mathbb{P}\left\\{n^{-1}\sum_{k=1}^{\infty}x_{k}\eta_{k}\in
A\right\\}$
$\displaystyle\leq\limsup_{n\rightarrow\infty}n^{-p}\log\mathbb{P}\left\\{n^{-1}\sum_{k=1}^{\infty}x_{k}\eta_{k}\in
A\right\\}\leq-\inf\\{I(y):y\in\text{closure of }A\\}.$
###### Proof of Theorem 3.1.
We apply Lemma 3.3 to the i.i.d. random variables
$\eta_{k}=|\xi_{k}|-\mathbb{E}|\xi_{k}|.$ The condition (3.1) implies that
(3.3) is fulfilled. Let us consider a special measurable set $A=[1,\infty).$
By using Lagrange multipliers, it follows that
$\displaystyle-\inf\\{I(y):y\in\text{interior of
}A\\}=-\inf\\{I(y):y\in\text{closure of }A\\}=-c\|x\|_{q}^{-p}$
(this can also be deduced from Lemma 3.1 of [1]). Then (3.2) follows from the
large deviation principle. ∎
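As a sanity check of the variational identity used in the Lagrange-multiplier step, the following sketch minimizes $c\sum_{j}|u_{j}|^{p}$ subject to $\sum_{j}u_{j}x_{j}=1$ for a short, arbitrary sequence and compares the result with $c\|x\|_{q}^{-p}$. The sequence, the choice $p=2$, $c=1/2$, and the use of SciPy's SLSQP solver are illustrative only.

```python
# Sanity check: inf{ c*sum|u_j|^p : sum u_j x_j = 1 } = c * ||x||_q^{-p}, 1/p + 1/q = 1.
import numpy as np
from scipy.optimize import minimize

p, c = 2.0, 0.5
q = p / (p - 1.0)
x = 1.0 / np.arange(1, 6) ** 1.5          # short, arbitrary positive sequence

objective = lambda u: c * np.sum(np.abs(u) ** p)
constraint = {"type": "eq", "fun": lambda u: np.dot(u, x) - 1.0}
res = minimize(objective, np.ones_like(x), constraints=[constraint], method="SLSQP")

print(res.fun, c * np.sum(x ** q) ** (-p / q))   # the two values should agree
```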
Now let us assume
$\displaystyle\lim_{u\rightarrow\infty}u^{-p}\log\mathbb{P}\left\\{|\xi_{1}|\geq
u\right\\}=-c.$
Then it follows easily that
$\displaystyle\lim_{u\rightarrow\infty}u^{-p/k}\log\mathbb{P}\left\\{|\xi_{1}|^{k}\geq
u\right\\}=-c.$
So the logarithmic level comparison for $|\xi_{n}|^{k}$ can be derived similarly,
as follows.
###### Proposition 3.4.
Let $k>0$ be a real number and $\\{\xi_{n}\\}$ be a sequence of i.i.d.
random variables whose common distribution satisfies
$\mathbb{E}|\xi_{1}|^{k}<\infty$ and
$\displaystyle\lim_{u\rightarrow\infty}u^{-p}\log\mathbb{P}\left\\{|\xi_{1}|\geq
u\right\\}=-c$
for some constants $0<c<\infty$ and $p$ such that $p/k\geq 1.$ Suppose two sequences
of positive real numbers $\\{a_{n}\\}$ and $\\{b_{n}\\}$ satisfy
$\sum_{n=1}^{\infty}a_{n}^{2\wedge q}<\infty$ and
$\sum_{n=1}^{\infty}b_{n}^{2\wedge q}<\infty$ where $q$ is given by
$\frac{1}{p/k}+\frac{1}{q}=1.$ Then as $r\rightarrow\infty$
$\frac{\log\mathbb{P}\left\\{\sum_{n=1}^{\infty}a_{n}|\xi_{n}|^{k}\geq
r\right\\}}{\log\mathbb{P}\left\\{\sum_{n=1}^{\infty}b_{n}|\xi_{n}|^{k}\geq
r\right\\}}\sim\left(\frac{\|b\|_{q}}{\|a\|_{q}}\right)^{p/k}$
and
$\frac{\log\mathbb{P}\left\\{\sum_{n=1}^{\infty}a_{n}|\xi_{n}|^{k}\geq
r\|a\|_{q}\right\\}}{\log\mathbb{P}\left\\{\sum_{n=1}^{\infty}b_{n}|\xi_{n}|^{k}\geq
r\|b\|_{q}\right\\}}\sim 1.$
## Appendix
In this section, we make a few remarks on the conditions in Theorem 2.1 and
Theorem 2.2. First, we note that conditions (2.1) and (2.2) are not very
restrictive, and examples of sequences satisfying these conditions can be
produced. For instance, we can consider two sequences with
$1-\frac{a_{n}/\|a\|_{2}}{b_{n}/\|b\|_{2}}=\frac{(-1)^{n}}{n}.$
To see the relation between (2.1) and (2.2), let us recall part of a useful
theorem from [9], from which many convergence results on infinite products and
series can easily be derived.
###### Lemma 3.5 (Part (a) of Theorem 1 in [9]).
Let $\\{x_{n}\\}$ be a sequence of real numbers. If any two of the four
expressions
$\prod_{n=1}^{\infty}(1+x_{n}),\quad\prod_{n=1}^{\infty}(1-x_{n}),\quad\sum_{n=1}^{\infty}x_{n},\quad\sum_{n=1}^{\infty}x_{n}^{2}$
are convergent, then this holds also for the remaining two.
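For instance, with $x_{n}=(-1)^{n}/(n+1)$ the series $\sum x_{n}$ and $\sum x_{n}^{2}$ both converge, so by the lemma the two products converge as well; a short computation makes the stabilization visible (the truncation points are arbitrary).

```python
# Illustration of Lemma 3.5 with x_n = (-1)^n/(n+1): sum x_n -> log(2) - 1 and
# sum x_n^2 -> pi^2/6 - 1, so the products prod(1+x_n) and prod(1-x_n) converge too.
import numpy as np

for N in (10**3, 10**4, 10**5, 10**6):
    n = np.arange(1, N + 1)
    x = (-1.0) ** n / (n + 1.0)
    print(N, x.sum(), (x ** 2).sum(),
          np.exp(np.log1p(x).sum()), np.exp(np.log1p(-x).sum()))
```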
Under condition (2.2) in Theorem 2.2, it follows from this result that
$\prod_{n=1}^{\infty}\left(2-\frac{a_{n}/\|a\|_{2}}{b_{n}/\|b\|_{2}}\right)\text{
and }\prod_{n=1}^{\infty}\frac{a_{n}/\|a\|_{2}}{b_{n}/\|b\|_{2}}\text{
converge.}$
This implies that
$\sum_{n=1}^{\infty}\log\left(\frac{a_{n}/\|a\|_{2}}{b_{n}/\|b\|_{2}}\right)$
is convergent. The facts that
$\sum_{n=1}^{\infty}\left(1-\frac{b_{n}/\|b\|_{2}}{a_{n}/\|a\|_{2}}\right)^{2}<\infty\text{
and }\prod_{n=1}^{\infty}\frac{b_{n}/\|b\|_{2}}{a_{n}/\|a\|_{2}}\text{
converge}$
yield
$\prod_{n=1}^{\infty}\left(2-\frac{b_{n}/\|b\|_{2}}{a_{n}/\|a\|_{2}}\right)\text{
is convergent.}$
## References
* [1] M.A.Arcones, The large deviation principle for certain series, ESAIM: Probability and Statistics, 8 (2004) 200-220.
* [2] R.Davis and S.Resnick, Extremes of moving averages of random variables with finite endpoint, Annals of Probability, 19, 1 (1991) 312-328.
* [3] F.Gao, J.Hannig and F.Torcaso, Comparison theorems for small deviations of random series, Electronic J. Probab. 8 (2003) 21:1-17.
* [4] F.Gao and W.V.Li, Logarithmic level comparison for small deviation probabilities, Journal of Theoretical Probability, 20, 1 (2007) 1-23.
* [5] E.D.Gluskin and S.Kwapień, Tail and moment estimates for sums of independent random variables with logarithmically concave tails, Studia Math., 114 (1995) 303-309.
* [6] R.Latala, Tail and moment estimates for sums of independent random vectors with logarithmically concave tails, Studia Math., 118 (1996) 301-304.
* [7] M.A.Lifshits, Tail probabilities of Gaussian suprema and Laplace transform, Ann. Inst. Henri Poincaré, 30, 2 (1994) 163-179.
* [8] M.A.Lifshits, On the lower tail probabilities of some random series, Annals of Probability, 25, 1 (1997) 424-442.
* [9] E.Wermuth, Some elementary properties of infinite products, The American Mathematical Monthly, 99, (1992) 530-537.
|
arxiv-papers
| 2013-02-11T10:19:40 |
2024-09-04T02:49:41.621562
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Fuchang Gao, Zhenxia Liu, Xiangfeng Yang",
"submitter": "Xiangfeng Yang",
"url": "https://arxiv.org/abs/1302.2435"
}
|
1302.2460
|
# Scheme of 2-dimensional atom localization for a three-level atom via quantum
coherence
Sajjad Zafar†, Rizwan Ahmed‡, M. Khalid Khan†
†Department of Physics, Quaid-i-Azam University
Islamabad 45320, Pakistan.
‡Department of Physics and Applied Mathematics,
Pakistan Institute of Engineering and Applied Sciences,
P. O. Nilore, Islamabad, Pakistan [email protected]
###### Abstract
We present a scheme for two-dimensional (2D) atom localization in a three-
level atomic system. The scheme is based on quantum coherence via classical
standing wave fields between the two excited levels. Our results show that
conditional position probability is significantly phase dependent of the
applied field and frequency detuning of spontaneously emitted photons. We
obtain a single localization peak having probability close to unity by
manipulating the control parameters. The effect of atomic level coherence on
the sub-wavelength localization has also been studied. Our scheme may be
helpful in systems involving atom-field interaction.
PACS: 42.50.Ct; 32.50.+d; 32.30.-r
Keywords: Atom localization; Quantum Coherence; Three-level atomic system
## 1 Introduction
Atom localization has been a potentially rich area of research because of a
number of important applications and its relevance to fundamental research [1, 2, 3, 4]. The
idea of measuring the position of an atom has been discussed since the
beginning of quantum mechanics, as proposed by Heisenberg [5], where the
resolving power of the equipment is limited by the uncertainty principle.
Furthermore, the typical resolution of a measurement is restricted by the
diffraction condition, whereby the length scale of the object to be measured must
be of the order of the wavelength of the light used for the measurement [6]. A few potential
applications include laser cooling and trapping of neutral atoms [7], nano-
lithography [8, 9], atomic wavefunction measurement [10, 11], Bose-Einstein
condensation [12, 13] and coherent patterning of matter waves [14].
There are many existing schemes for 1D atom localization that utilize spatially
dependent wave fields, with the advantage that the atom is localized inside the
standing wave region. Earlier schemes were based on the absorption of
light masks [15, 16, 17], measurement of standing wave field inside the cavity
[18, 19, 20, 21] or of atomic dipole [22, 23, 24], Raman gain process [25],
entanglement of atom’s position and internal state [26], phase dependence of
standing wave field [27] and the localization scheme based on resonance
fluorescence [28]. Since atomic coherence and quantum interference result in
interesting phenomena such as Kerr non-linearity [29, 30], four wave
mixing (FWM) [31], electromagnetically induced transparency (EIT) [32],
spontaneous emission enhancement or suppression [33, 34] and optical
bistability (OB) [35, 36], the idea of sub-wavelength atom localization via
atomic coherence and quantum interference has also attracted considerable
attention in recent years.
In the past few years, atom localization in two dimensions has been extensively
studied for its better prospects in applications. Considerable attention has
been given to 2D atom localization in multi-level atomic systems where two
interacting orthogonal wave fields are utilized for measuring the atom's location.
These studies show that 2D atom localization could be possible by the
measurement of populations of atomic states as proposed by Ivanov and
Rozhdestvensky [37] and the interaction of double-dark resonances presented by
Gao et al. [38]. Other techniques practiced for 2D atom localization include
the probe absorption spectrum [39, 40, 41] and controlled spontaneous emission
[42, 43, 44].
In this work, we present an efficient scheme for two-dimensional (2D) atom
localization based on quantum coherence effects in a three-level atomic
system. There exist many proposals based on atomic coherence and quantum
interference for one-dimensional atom localization. For example, in the scheme
proposed by Herkommer et al. [45], the Autler-Townes spontaneous emission
spectrum is used. In another scheme, Paspalakis and Knight [46] proposed a
scheme based on quantum interference that induces sub-wavelength atom
localization in a three-level Lambda ($\Lambda$) system traveling through the
standing wave field. Furthermore, the scheme proposed by Zubairy and
collaborators exploited the phase of the driving field for sub-wavelength
localization, hence reducing the number of localization peaks from four to two
[47, 48]. Gong et al., in their schemes, demonstrated atom localization at the
nodes of the standing-wave field with increased detection probability [49, 50].
In contrast to the above schemes, which discussed atom localization via a single
decay channel for spontaneous emission, we focus on the fact that spontaneous
emission via two coherent decay channels may potentially improve the
localization probability. During the atom-field interaction in two dimensions,
the spontaneously emitted photon carries information about the atom by creating
spatial structures of the filter function at various frequencies. Typically such
spatial structures deliver lattice-like, crater-like and spike-like patterns
which mainly depend on the dynamic interference between the two orthogonal
spatially dependent fields. Consequently, high precision in 2D atom localization
can be attained by manipulating the system parameters.
Our scheme is based on a three-level ladder configuration, where the atom is
initially prepared in a coherent superposition of the upper two excited levels.
Consequently, we study the combined effect of the relative phase between the two
applied orthogonal standing wave fields and the frequency of the emitted photon.
The system is also discussed in the absence of atomic coherence, where the
results show a marked dependence on the initial atomic state preparation. We
report multiple results for two-dimensional atom localization, including a
single localization peak having a fairly large conditional position probability.
This paper is organized as follows. Section II presents the atomic model followed
by the theoretical treatment for deriving the conditional probability distribution. In
section III, we provide a detailed analysis and discussion of the results regarding
two-dimensional atom localization in our proposed scheme. Finally, section IV
offers a concise conclusion.
## 2 Model and Equations
We consider an atom moving in the $z$-direction passing through two intersecting
classical standing fields as shown in Fig. 1(a). The two fields are assumed
to be orthogonal and aligned along the $x$ and $y$ axes, respectively. The
internal energy levels of the three-level atomic system are shown in Fig. 1(b). The
two excited states $\left|2\right\rangle$ and $\left|1\right\rangle$ are
coupled to the ground state $\left|0\right\rangle$ by vacuum modes in free space.
Further, the transition
$\left|2\right\rangle\longrightarrow\left|1\right\rangle$ is driven via a
classical standing field with Rabi frequency $\Omega\left(x,y\right)$. Hence,
the interaction between the atom and the classical standing field is spatially
dependent in the $x$-$y$ plane.
Next, we assume that the atom (with mass $m$) follows a thermal distribution at
temperature $T$ so that its energy $k_{B}T$ is quite large compared to the
photon recoil $\frac{\hbar^{2}k^{2}}{2m}$. Therefore, the atom moving along the
$z$-direction is not affected by the interaction fields and we can treat it
classically. Neglecting the kinetic energy part of the Hamiltonian in the Raman-Nath
approximation [51], the interaction Hamiltonian of the system under the dipole
approximation reads
$H_{int}=H_{field}+H_{vacuum},$ (1)
where
$H_{field}=\Omega\left(x,y\right)e^{i\left(\Delta
t+\alpha_{c}\right)}X_{12}+\Omega^{\ast}\left(x,y\right)e^{-i\left(\Delta
t+\alpha_{c}\right)}X_{21},$ (2)
and
$\displaystyle H_{vacuum}$ $\displaystyle=$
$\displaystyle\sum\limits_{\mathbf{k}}\left(g_{\mathbf{k}}^{\left(1\right)}e^{-i\left(\omega_{\mathbf{k}}-\omega_{\mathbf{10}}\right)t}X_{10}b_{\mathbf{k}}+g_{\mathbf{k}}^{\left(1\right)\ast}e^{i\left(\omega_{\mathbf{k}}-\omega_{\mathbf{10}}\right)t}X_{01}b_{\mathbf{k}}^{{\dagger}}\right)+$
(3)
$\displaystyle\sum\limits_{\mathbf{k}}\left(g_{\mathbf{k}}^{\left(2\right)}e^{-i\left(\omega_{\mathbf{k}}-\omega_{\mathbf{20}}\right)t}X_{20}b_{\mathbf{k}}+g_{\mathbf{k}}^{\left(2\right)\ast}e^{i\left(\omega_{\mathbf{k}}-\omega_{\mathbf{20}}\right)t}X_{02}b_{\mathbf{k}}^{{\dagger}}\right).$
Here $X_{ij}=\left|i\right\rangle\left\langle j\right|$ represents the atomic
transition operator between levels $\left|i\right\rangle$ and
$\left|j\right\rangle$; the transitions
$\left|2\right\rangle\leftrightarrow\left|0\right\rangle$ and
$\left|1\right\rangle\leftrightarrow\left|0\right\rangle$ are characterized by
frequencies $\omega_{\mathbf{20}}$ and $\omega_{\mathbf{10}}$, respectively.
The frequency of the coupling field between the upper two levels is given by
$\omega_{c}$ with associated phase $\alpha_{c}$ and detuning parameter
$\Delta=\omega_{\mathbf{c}}-\omega_{\mathbf{21}}$. The coupling constants
$g_{\mathbf{k}}^{\left(n\right)}$ $\left(n=1,2\right)$ are defined for the atom-
vacuum field interactions that correspond to spontaneous decay, while
$b_{\mathbf{k}}$ and $b_{\mathbf{k}}^{{\dagger}}$ are the annihilation and
creation operators of the vacuum mode $\mathbf{k}$.
The wave function of the whole atom-field interaction system at time $t$ can
be represented in terms of state vectors as,
$\displaystyle\left|\psi\left(x,y;t\right)\right\rangle$ $\displaystyle=$
$\displaystyle\int\int
dxdyf\left(x,y\right)\left|x\right\rangle\left|y\right\rangle{\LARGE[}a_{2,0}\left(x,y;t\right)\left|2,\left\\{0\right\\}\right\rangle$
(4)
$\displaystyle+a_{1,0}\left(x,y;t\right)\left|1,\left\\{0\right\\}\right\rangle+\sum\limits_{\mathbf{k}}a_{0,1_{\mathbf{k}}}\left(x,y;t\right)\left|0,1_{\mathbf{k}}\right\rangle{\LARGE],}$
where $f\left(x,y\right)$ is the center of mass wave function for the atom.
The position-dependent probability amplitudes $a_{n,0}\left(x,y;t\right)$
$\left(n=1,2\right)$ correspond to the case in which no photon has been emitted,
and $a_{0,1_{\mathbf{k}}}\left(x,y;t\right)$ corresponds to a single photon
spontaneously emitted into the $\mathbf{k}$th mode of the vacuum.
In our scheme, we make use of the fact that the position-dependent Rabi
frequency of the atom, associated with the classical standing field, determines the
spectrum of the spontaneously emitted photon [28, 45, 52]. The emitted photon
thus carries information regarding the location of the atom. Hence, the atom
position measurement is conditioned on the detection of the spontaneously
emitted photon. Thus, the probability of the atom being located at position
$\left(x,y\right)$ at time $t$ can be described by the conditional position
probability distribution function $W\left(x,y\right)$ defined as,
$W\left(x,y\right)\equiv
W\left(x,y;t|0,1_{\mathbf{k}}\right)=\mathcal{F}\left(x,y;t|0,1_{\mathbf{k}}\right)\left|f\left(x,y\right)\right|^{2}.$
(5)
Here $\mathcal{F}\left(x,y;t|0,1_{\mathbf{k}}\right)$ is the filter function
that can be defined as,
$\mathcal{F}\left(x,y;t|0,1_{\mathbf{k}}\right)=\left|\mathcal{N}\right|^{2}\left|a_{0,1_{\mathbf{k}}}\left(x,y;t\longrightarrow\infty\right)\right|^{2},$
(6)
with $\mathcal{N}$ being the normalization constant. Eq. (6) shows that the
filter function or, more generally, the conditional probability distribution is
determined by the probability amplitude $a_{0,1_{\mathbf{k}}}\left(x,y;t\right)$.
In order to find this probability amplitude, we solve the Schrödinger wave equation
using the interaction picture Hamiltonian given by Eq. (1) with the state vector as
defined in Eq. (4). The time evolution equations of the probability amplitudes are
then given by,
$\displaystyle i\dot{a}_{1,0}\left(x,y;t\right)$ $\displaystyle=$
$\displaystyle\Omega\left(x,y\right)a_{2,0}\left(x,y;t\right)e^{i\left(\Delta
t+\alpha_{c}\right)}$ (7)
$\displaystyle+\sum\limits_{\mathbf{k}}g_{\mathbf{k}}^{\left(1\right)}a_{0,1_{\mathbf{k}}}\left(x,y;t\right)e^{-i\left(\omega_{\mathbf{k}}-\omega_{\mathbf{10}}\right)t},$
$\displaystyle i\dot{a}_{2,0}\left(x,y;t\right)$ $\displaystyle=$
$\displaystyle\Omega\left(x,y\right)a_{1,0}\left(x,y;t\right)e^{-i\left(\Delta
t+\alpha_{c}\right)}$ (8)
$\displaystyle+\sum\limits_{\mathbf{k}}g_{\mathbf{k}}^{\left(2\right)}a_{0,1_{\mathbf{k}}}\left(x,y;t\right)e^{-i\left(\omega_{\mathbf{k}}-\omega_{\mathbf{20}}\right)t},$
$\displaystyle i\dot{a}_{0,1_{\mathbf{k}}}\left(x,y;t\right)$ $\displaystyle=$
$\displaystyle
g_{\mathbf{k}}^{\left(1\right)}a_{1,0}\left(x,y;t\right)e^{i\left(\omega_{\mathbf{k}}-\omega_{\mathbf{10}}\right)t}$
(9)
$\displaystyle+g_{\mathbf{k}}^{\left(2\right)}a_{2,0}\left(x,y;t\right)e^{i\left(\omega_{\mathbf{k}}-\omega_{\mathbf{20}}\right)t}.$
Formally integrating Eq. (9) and substituting the result into Eqs. (7)
and (8) gives
$\displaystyle i\dot{a}_{1,0}\left(x,y;t\right)$ $\displaystyle=$
$\displaystyle\Omega\left(x,y\right)a_{2,0}\left(x,y;t\right)e^{i\left(\Delta
t+\alpha_{c}\right)}$
$\displaystyle-i\int\limits_{0}^{t}dt^{\prime}a_{1,0}\left(x,y;t^{\prime}\right)\sum\limits_{\mathbf{k}}\left|g_{\mathbf{k}}^{\left(1\right)}\right|^{2}e^{-i\left(\omega_{\mathbf{k}}-\omega_{\mathbf{10}}\right)\left(t-t^{\prime}\right)}$
$\displaystyle-i\int\limits_{0}^{t}dt^{\prime}a_{2,0}\left(x,y;t^{\prime}\right)\sum\limits_{\mathbf{k}}g_{\mathbf{k}}^{\left(1\right)}g_{\mathbf{k}}^{\left(2\right)}e^{\begin{subarray}{c}-i\omega_{\mathbf{k}}\left(t-t^{\prime}\right)+i\omega_{\mathbf{10}}t-i\omega_{\mathbf{20}}t^{\prime}\end{subarray}},$
$\displaystyle i\dot{a}_{2,0}\left(x,y;t\right)$ $\displaystyle=$
$\displaystyle\Omega\left(x,y\right)a_{1,0}\left(x,y;t\right)e^{-i\left(\Delta
t+\alpha_{c}\right)}$
$\displaystyle-i\int\limits_{0}^{t}dt^{\prime}a_{2,0}\left(x,y;t^{\prime}\right)\sum\limits_{\mathbf{k}}\left|g_{\mathbf{k}}^{\left(2\right)}\right|^{2}e^{-i\left(\omega_{\mathbf{k}}-\omega_{\mathbf{20}}\right)\left(t-t^{\prime}\right)}$
$\displaystyle-i\int\limits_{0}^{t}dt^{\prime}a_{1,0}\left(x,y;t^{\prime}\right)\sum\limits_{\mathbf{k}}g_{\mathbf{k}}^{\left(1\right)}g_{\mathbf{k}}^{\left(2\right)}e^{\begin{subarray}{c}-i\omega_{\mathbf{k}}\left(t-t^{\prime}\right)+i\omega_{\mathbf{20}}t-i\omega_{\mathbf{10}}t^{\prime}\end{subarray}}.$
Simplifying the above two equations within the Weisskopf-Wigner theory [53] results
in coupled differential equations of the following form
$\displaystyle\dot{a}_{1,0}\left(x,y;t\right)$ $\displaystyle=$
$\displaystyle-\frac{\Gamma_{1}}{2}a_{1,0}\left(x,y;t\right)-$ (12)
$\displaystyle\left(i\Omega\left(x,y\right)e^{i\left(\Delta
t+\alpha_{c}\right)}+p\frac{\sqrt{\Gamma_{1}\Gamma_{2}}}{2}e^{\begin{subarray}{c}-i\omega_{\mathbf{21}}t\end{subarray}}\right)a_{2,0}\left(x,y;t\right),$
$\displaystyle\dot{a}_{2,0}\left(x,y;t\right)$ $\displaystyle=$
$\displaystyle-\left(i\Omega\left(x,y\right)e^{-i\left(\Delta
t+\alpha_{c}\right)}+p\frac{\sqrt{\Gamma_{1}\Gamma_{2}}}{2}e^{\begin{subarray}{c}i\omega_{\mathbf{21}}t\end{subarray}}\right)a_{1,0}\left(x,y;t\right)
(13) $\displaystyle-\frac{\Gamma_{2}}{2}a_{2,0}\left(x,y;t\right),$
where $\Gamma_{1}$ and $\Gamma_{2}$ are the spontaneous decay rates for the
$\left|1\right\rangle\leftrightarrow\left|0\right\rangle$ and
$\left|2\right\rangle\leftrightarrow\left|0\right\rangle$ transitions,
respectively. These are defined as
$\Gamma_{n}=2\pi\left|g_{n\mathbf{k}}\right|^{2}D\left(\omega_{\mathbf{k}}\right)$
where $D\left(\omega_{\mathbf{k}}\right)$ represents the mode density of the
vacuum at frequency $\omega_{\mathbf{k}}$. Additionally, the term
$p\frac{\sqrt{\Gamma_{1}\Gamma_{2}}}{2}e^{\begin{subarray}{c}\pm
i\omega_{\mathbf{21}}t\end{subarray}}$ corresponds to quantum interference
whenever the higher energy levels of the two spontaneous emission channels are very close
[54]. The parameter $p$ in our case is defined as
$p=\frac{\mathbf{\mu}_{20}.\mathbf{\mu}_{01}}{\left|\mathbf{\mu}_{20}\right|\left|\mathbf{\mu}_{01}\right|}$
which quantifies the alignment of the two dipole matrix elements: orthogonal and
parallel matrix elements correspond to $p=0$ and $p=1$, respectively. For
orthogonal matrix elements, i.e., $p=0$, there is no interference, and for
parallel matrix elements, $p=1$, the interference is maximum. In order to solve
Eqs. (12) and (13) analytically, we assume orthogonal dipole moments, which are
commonly found in nature [55]. Further, the time dependence in the above
equations, i.e., $e^{\begin{subarray}{c}\pm
i\omega_{\mathbf{21}}t\end{subarray}}$, can be ignored by taking the energy
difference between the upper two levels $\omega_{21}$ to be very large compared
to the decay rates $\Gamma_{1}$ and $\Gamma_{2}$ [56]. Introducing the
transformations
$\displaystyle b_{1,0}\left(x,y;t\right)$ $\displaystyle=$ $\displaystyle
a_{1,0}\left(x,y;t\right),$ $\displaystyle b_{2,0}\left(x,y;t\right)$
$\displaystyle=$ $\displaystyle a_{2,0}\left(x,y;t\right)e^{i\Delta t},$
$\displaystyle b_{0,1_{\mathbf{k}}}\left(x,y;t\right)$ $\displaystyle=$
$\displaystyle a_{0,1_{\mathbf{k}}}\left(x,y;t\right),$ (14)
with
$\delta_{\mathbf{k}}\equiv\omega_{\mathbf{k}}-\left(\omega_{20}+\omega_{10}\right)/2,$
(15)
the rate equations (12), (13) and (9) become,
$\dot{b}_{1,0}\left(x,y;t\right)=-\frac{\Gamma_{1}}{2}b_{1,0}\left(x,y;t\right)-i\Omega\left(x,y\right)b_{2,0}\left(x,y;t\right),$
(16)
$\displaystyle\dot{b}_{2,0}\left(x,y;t\right)$ $\displaystyle=$
$\displaystyle-i\Omega\left(x,y\right)e^{-i\alpha_{c}}b_{1,0}\left(x,y;t\right)$
(17)
$\displaystyle+\left(i\Delta-\frac{\Gamma_{2}}{2}\right)b_{2,0}\left(x,y;t\right),$
$\displaystyle\dot{b}_{0,1_{\mathbf{k}}}\left(x,y;t\right)$ $\displaystyle=$
$\displaystyle-
ig_{\mathbf{k}}^{\left(1\right)}b_{1,0}\left(x,y;t\right)e^{i\left(\delta_{\mathbf{k}}+\frac{\omega_{21}}{2}\right)t}$
(18) $\displaystyle-
ig_{\mathbf{k}}^{\left(2\right)}b_{2,0}\left(x,y;t\right)e^{i\left(\delta_{\mathbf{k}}-\frac{\omega_{21}}{2}-\Delta\right)t.}$
The first two differential equations can readily be solved by formal
integration. At this point, we define the initial state of the system
as a superposition of two orthogonal states,
$\left|\psi\left(x,y;t=0\right)\right\rangle=e^{i\alpha_{p}}\sin\left(\xi\right)\left|2,\left\\{0\right\\}\right\rangle+\cos\left(\xi\right)\left|1,\left\\{0\right\\}\right\rangle,$
(19)
where $\alpha_{p}$ is the phase associated with the pump field. Under this initial
state, with the assumptions that the pumping phase $\alpha_{p}=0$ and the decay rates
$\Gamma_{1}=\Gamma_{2}=\Gamma$, the solutions of Eqs. (16) and (17) read
$b_{1,0}\left(x,y;t\right)=B_{1}e^{\lambda_{1}t}+B_{1}^{\prime}e^{\lambda_{2}t},$
(20)
$b_{2,0}\left(x,y;t\right)=B_{2}e^{\lambda_{1}t}+B_{2}^{\prime}e^{\lambda_{2}t},$
(21)
with
$\lambda_{1,2}=i\frac{\Delta}{2}-\frac{\Gamma}{2}\pm\frac{i}{2}\sqrt{4\left(\Omega^{2}\left(x,y\right)-\left(i\Delta-\frac{\Gamma}{2}\right)\frac{\Gamma}{2}\right)-\left(i\Delta-\Gamma\right)^{2}},$
(22)
$B_{1}=\frac{1}{\lambda_{2}-\lambda_{1}}\left(\left(\lambda_{2}+\frac{\Gamma}{2}\right)\sin\left(\xi\right)+i\Omega\left(x,y\right)e^{i\alpha_{c}}\cos\left(\xi\right)\right),$
(23)
$B_{1}^{\prime}=\frac{1}{\lambda_{1}-\lambda_{2}}\left(\left(\lambda_{1}+\frac{\Gamma}{2}\right)\sin\left(\xi\right)+i\Omega\left(x,y\right)e^{i\alpha_{c}}\cos\left(\xi\right)\right),$
(24)
$B_{2}=\frac{1}{\lambda_{2}-\lambda_{1}}\left(\left(\lambda_{2}+\frac{\Gamma}{2}-i\Delta\right)\sin\left(\xi\right)+i\Omega\left(x,y\right)e^{-i\alpha_{c}}\cos\left(\xi\right)\right),$
(25)
$B_{2}^{\prime}=\frac{1}{\lambda_{1}-\lambda_{2}}\left(\left(\lambda_{1}+\frac{\Gamma}{2}-i\Delta\right)\sin\left(\xi\right)+i\Omega\left(x,y\right)e^{-i\alpha_{c}}\cos\left(\xi\right)\right),$
(26)
for $\lambda_{1}\neq\lambda_{2}$.
Substituting Eqs. (20) and (21) into Eq. (18) and performing a formal integration,
the probability amplitude $b_{0,1_{\mathbf{k}}}\left(x,y;t\right)$ for interaction
times much longer than the decay times $\left(\Gamma_{1}t,\Gamma_{2}t\gg
1\right)$ becomes
$\displaystyle b_{0,1_{\mathbf{k}}}\left(x,y;t\rightarrow\infty\right)$
$\displaystyle=$ $\displaystyle
g_{\mathbf{k}}^{\left(1\right)}\left(\tfrac{B_{1}}{\delta_{\mathbf{k}}+\frac{\omega_{21}}{2}-i\lambda_{1}}+\tfrac{B_{1}^{\prime}}{\delta_{\mathbf{k}}+\frac{\omega_{21}}{2}-i\lambda_{2}}\right)+$
(27) $\displaystyle
g_{\mathbf{k}}^{\left(2\right)}\left(\tfrac{B_{2}}{\delta_{\mathbf{k}}-\frac{\omega_{21}}{2}-\Delta-i\lambda_{1}}+\tfrac{B_{2}^{\prime}}{\delta_{\mathbf{k}}-\frac{\omega_{21}}{2}-\Delta-i\lambda_{2}}\right).$
Incorporating the constants specified in Eqs. (22)-(26), after straightforward
simplifications we get
$\displaystyle b_{0,1_{\mathbf{k}}}\left(x,y;t\rightarrow\infty\right)$
$\displaystyle=$
$\displaystyle\frac{g_{\mathbf{k}}^{\left(1\right)}}{2}\left(\tfrac{\sin\left(\xi\right)-e^{i\alpha_{c}}\cos\left(\xi\right)}{\delta_{\mathbf{k}}+\frac{\omega_{21}}{2}+\Omega\left(x,y\right)+i\frac{\Gamma}{2}}+\tfrac{\sin\left(\xi\right)+e^{i\alpha_{c}}\cos\left(\xi\right)}{\delta_{\mathbf{k}}+\frac{\omega_{21}}{2}-\Omega\left(x,y\right)+i\frac{\Gamma}{2}}\right)+$
$\displaystyle\frac{g_{\mathbf{k}}^{\left(2\right)}}{2}\left(\tfrac{\cos\left(\xi\right)-e^{-i\alpha_{c}}\sin\left(\xi\right)}{\delta_{\mathbf{k}}-\frac{\omega_{21}}{2}-\Delta+\Omega\left(x,y\right)+i\frac{\Gamma}{2}}+\tfrac{\cos\left(\xi\right)+e^{-i\alpha_{c}}\sin\left(\xi\right)}{\delta_{\mathbf{k}}-\frac{\omega_{21}}{2}-\Delta-\Omega\left(x,y\right)+i\frac{\Gamma}{2}}\right).$
Consequently, the required conditional probability distribution of finding the
atom in state $\left|0\right\rangle$ with an emitted photon of frequency
$\omega_{\mathbf{k}}$ corresponding to the reservoir mode $\mathbf{k}$ is given
by
$W\left(x,y\right)=\left|\mathcal{N}\right|^{2}\left|f\left(x,y\right)\right|^{2}\left|b_{0,1_{\mathbf{k}}}\left(x,y;t\rightarrow\infty\right)\right|^{2},$
(29)
where we have used the transformation
$b_{0,1_{\mathbf{k}}}\left(x,y;t\right)=a_{0,1_{\mathbf{k}}}\left(x,y;t\right)$
from Eq. (14). Since the center-of-mass wave function $f(x,y)$ of the atom is
assumed to be almost constant over many wavelengths of the standing-wave
fields in the $x$-$y$ plane, the conditional probability distribution
$W\left(x,y\right)$ for atom localization is determined by the filter function as
defined in Eq. (6).
## 3 Numerical Results and Discussion
In this section, we discuss the conditional probability distribution of the
atom using a few numerical results based on the filter function
$\mathcal{F}(x,y)$. We then vary the system parameters to show how atom
localization can be attained via quantum coherence. In our analysis, we
consider two orthogonal standing waves with the corresponding Rabi frequency
$\Omega\left(x,y\right)$
$=\Omega_{1}\sin\left(k_{1}x\right)+\Omega_{2}\sin\left(k_{2}y\right)$ [40].
Further, all parameters are expressed in units of the decay rate $\Gamma$.
The filter function $\mathcal{F}(x,y)$ depends not only on the parameters
of the standing wave driving fields and the frequency of the emitted photon, but
also on the interference effects [40]. Since the two spontaneous decay
channels $\left|1\right\rangle\rightarrow\left|0\right\rangle$ and
$\left|2\right\rangle\rightarrow\left|0\right\rangle$ interact via the same vacuum
modes, quantum interference exists. However, we neglect this quantum
interference by setting the parameter $p=0$ in our analysis. Moreover, the
dynamically induced interference due to the two orthogonal standing wave fields
does have a considerable effect. Hence, atom localization in 2D can be
manipulated by various parameters. As the filter function $\mathcal{F}(x,y)$
depends on $\Omega\left(x,y\right)$, which itself comprises
$\sin\left(k_{1}x\right)$ and $\sin\left(k_{2}y\right)$, localization is
possible only for those values of $(x,y)$ for which $\mathcal{F}(x,y)$ exhibits
maxima. Since the analytical form of $\mathcal{F}(x,y)$ is quite
cumbersome, we provide only numerical results for precise atom
localization in two dimensions.
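To make the numerical procedure concrete, the following minimal sketch evaluates the filter function from Eq. (28) on a grid over one wavelength. It assumes equal coupling constants $g_{\mathbf{k}}^{(1)}=g_{\mathbf{k}}^{(2)}$, which only affect the overall normalization (fixed here by scaling the maximum to unity), and uses the parameter values of Fig. 2(d). With these values the reported peak locations are near $(\pm\pi/2,\pm\pi/2)$, consistent with the discussion of Fig. 2(d) below.

```python
# Sketch: filter function F(x,y) from Eq. (28), with p = 0 and
# Omega(x,y) = O1*sin(k1 x) + O2*sin(k2 y); parameters as in Fig. 2(d), in units of Gamma.
# Assumption: g1 = g2 (they only set the overall normalization).
import numpy as np

Gamma   = 1.0
alpha_c = np.pi / 2
xi      = np.pi / 4
Delta   = 2.5
w21     = 20.0
O1 = O2 = 5.0
delta_k = 0.1                                 # detuning of the emitted photon

k1x, k2y = np.meshgrid(np.linspace(-np.pi, np.pi, 401),
                       np.linspace(-np.pi, np.pi, 401))
Omega = O1 * np.sin(k1x) + O2 * np.sin(k2y)   # position-dependent Rabi frequency

s, c = np.sin(xi), np.cos(xi)
ig = 1j * Gamma / 2
b = 0.5 * ((s - np.exp(1j * alpha_c) * c) / (delta_k + w21 / 2 + Omega + ig)
           + (s + np.exp(1j * alpha_c) * c) / (delta_k + w21 / 2 - Omega + ig)) \
  + 0.5 * ((c - np.exp(-1j * alpha_c) * s) / (delta_k - w21 / 2 - Delta + Omega + ig)
           + (c + np.exp(-1j * alpha_c) * s) / (delta_k - w21 / 2 - Delta - Omega + ig))

F = np.abs(b) ** 2
F /= F.max()                                  # |N|^2 fixed so the highest peak has unit height
i, j = np.unravel_index(F.argmax(), F.shape)
print("one localization peak near (k1 x, k2 y) =", (k1x[i, j], k2y[i, j]))
# a second, equal-height spike sits at the point reflected through the origin
```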
Since the atom is initially prepared in a superposition of the upper two excited
states $\left|1\right\rangle$ and $\left|2\right\rangle$ via Eq. (19), the
state depends strongly on the coupling phase $\alpha_{c}$ for the upper two
levels $\left|1\right\rangle$ and $\left|2\right\rangle$, which shows that the
quantum coherence is phase dependent. Accordingly, we consider
three values of the phase, namely $\alpha_{c}=0,\frac{\pi}{2}$ and $\pi$.
For the case $\alpha_{c}=\frac{\pi}{2}$, we first provide the conditional
probability distribution by plotting the filter function $\mathcal{F}(x,y)$ as a
function of $(k_{1}x,k_{2}y)$ over a single wavelength for different values of the
detuning of the spontaneously emitted photon, $\delta_{\mathbf{k}}$ [57].
From Fig. 2(a)-(d), it is evident that the filter function $\mathcal{F}(x,y)$
strongly depends on the detuning of the spontaneously emitted photon. When
$\delta_{\mathbf{k}}=9.3\Gamma$, the location of the atom is distributed over all
four quadrants of the $x$-$y$ plane, as shown in Fig. 2(a). Changing the value to
$\delta_{\mathbf{k}}=5.3\Gamma$, the location is restricted to quadrants I and
IV, with peaks forming a crater-like pattern [Fig. 2(b)]. On further refining
the detuning to $\delta_{\mathbf{k}}=2.9\Gamma$, the peaks become narrower, as
depicted in Fig. 2(c). Furthermore, Fig. 2(d) illustrates the effect of the
detuning set to an appropriate value, i.e., $\delta_{\mathbf{k}}=0.1\Gamma$. The
filter function $\mathcal{F}(x,y)$ exhibits two spike-like patterns
corresponding to maxima located in quadrants I and III at
$(k_{1}x,k_{2}y)=(\frac{\pi}{2},\frac{\pi}{2})$ and
$(-\frac{\pi}{2},-\frac{\pi}{2})$ in the $x$-$y$ plane, which clearly indicates
that high precision and localization in two dimensions can be obtained when the
emitted photon is nearly resonant with the corresponding atomic transition.
Consequently, the probability of finding the atom at each location is
$\frac{1}{2}$, which is twice the probability obtained in
earlier cases [37, 39, 44, 58].
In Fig. 3, we plot the filter function $\mathcal{F}(x,y)$ versus
$(k_{1}x,k_{2}y)$ by varying the detuning of the spontaneously emitted photon
in 2D for $\alpha_{c}=0$. Fig. 3(a) illustrates the results when the detuning
is large, i.e., $\delta_{\mathbf{k}}=12.4\Gamma$. The lattice-like structure
obtained is distributed along the diagonal in quadrants II and IV. This
specifies that the atom localization peaks are determined by
$k_{1}x+k_{2}y=2p\pi$ or $k_{2}y-k_{1}x=\left(2q+1\right)\pi$, where $p$ and
$q$ are integers. Refining $\delta_{\mathbf{k}}$ to $9.5\Gamma,$ the position
probability of the atom is rather complicated due to the interference of the two
fields, and the filter function $\mathcal{F}(x,y)$ is dispersed over quadrants II,
III and IV. However, the distribution is mainly localized in quadrant III, as
presented in Fig. 3(b). Narrowing the detuning parameter to
$\delta_{\mathbf{k}}=6.0\Gamma$, the location is distributed in quadrant III
with a crater-like structure, as shown in Fig. 3(c). Such a crater-like
structure persists in quadrant III for
$\delta_{\mathbf{k}}\in\left[5.2\Gamma,6.9\Gamma\right]$, with the atom
localized on a circle [Fig. 3(c)]. Tuning the photon detuning to
$\delta_{\mathbf{k}}=2.4\Gamma$, a single spike is achieved at
$(-\frac{\pi}{2},-\frac{\pi}{2})$, as illustrated in Fig. 3(d), which indicates
that the probability of finding the atom within a single wavelength in 2D is
increased by a factor of 2 compared with Fig. 2(d). Hence, atom
localization is indeed achieved in 2D.
From Eq. (29) with $\Omega\left(x,y\right)$
$=\Omega_{1}\sin\left(k_{1}x\right)+\Omega_{2}\sin\left(k_{2}y\right)$, we can
easily see that the filter function $\mathcal{F}(x,y)$ remains unaltered
under the simultaneous transformations $\alpha_{c}:0\leftrightarrow\pi$ and
$(k_{1}x,k_{2}y)\leftrightarrow(-k_{1}x,-k_{2}y).$ Therefore,
$\mathcal{F}(x,y;\alpha_{c}=0)=\mathcal{F}(-x,-y;\alpha_{c}=\pi)$, and for
$\alpha_{c}=\pi$ we obtain results mirroring those of the previous case
$(\alpha_{c}=0)$. However, the localization distribution and the single peak are
now shifted to quadrant I, as shown in Fig. 4(a)-(d).
These results demonstrate the strong dependence of atom localization on the detuning
of the spontaneously emitted photon. Furthermore, the peaks of atom
localization in all of the above cases are obtained at the antinodes of the
standing fields, while precise localization is destroyed for large detuning of the
spontaneously emitted photon. Indeed, localization appears to be possible
when the emitted photon is nearly resonant with the atomic transitions.
Finally, we present the significance of the initial conditions for atom localization
by explicitly preparing the atomic system in a single state. For
$\alpha_{c}=0$ and $\xi=0$ in Eq. (19), the system is initially in
state $\left|1\right\rangle$, so that the distribution of localization
peaks takes place in quadrants I and III, as depicted in Fig. 5(a). Hence, the
probability of finding the atom at a single location in 2D is decreased, and
the number of peaks is increased by a factor of 2 compared to the
case when the atom is initially prepared in a superposition of states
$\left|1\right\rangle$ and $\left|2\right\rangle$ [see Fig. 3(d)]. The reason
is the absence of atomic coherence between the two decaying states
$\left|1\right\rangle$ and $\left|2\right\rangle$. A similar result can also
be obtained for $\xi=\pi$, where again the probability is decreased by a factor
of 2, as shown in Fig. 4(d). However, Fig. 5(b) shows sharp peaks for
$\xi=\frac{\pi}{2}$. This indeed provides high resolution and precision for
atom localization in 2D in the absence of atomic coherence.
## 4 Conclusions
In summary, we have proposed and analyzed atom localization for a three-level
atomic system in two dimensions. The scheme under consideration is based on
the phenomenon of spontaneous emission when the atom interacts with spatially
dependent orthogonal standing fields. Owing to the position dependent atom-
field interaction, the precise location of the atom in 2D can be achieved by
detecting the frequency of the spontaneously emitted photon. The
interaction provides various structures of the filter function, such as
lattice-like structures, crater-like structures and, most importantly,
localization spikes. The phenomenon of quantum coherence originates from the
coupling of the two excited levels to the standing wave fields. Our results show
that not only the relative phase between the two orthogonal standing wave fields
but also the frequency detuning substantially controls the fluorescence spectra
in the conditional probability distribution. The localization pattern generates
a single spike for in-phase position dependent fields. However, the localization
pattern is destroyed with an increase in frequency detuning. Remarkably, the
pattern of localization peaks remains unaltered with varying vacuum field
detuning, which is a major advantage of our scheme. We have also presented the
effect of initial state preparation on atom localization. In the absence of
atomic coherence, the localization probability decreases with an increase in
spatial resolution. Our analysis indeed provides an efficient way for atom
localization in two dimensions that may be useful for laser cooling and atom nano-
lithography [58].
## References
* [1] W.D. Phillips, Rev. Mod. Phys. 70, 721 (1998)
* [2] P. Rudy, R. Ejnisman, N. P. Bigelow, Phys. Rev. Lett. 78, 4906 (1997)
* [3] G. Rempe, Appl. Phys. B 60, 233 (1995)
* [4] R. Quadt, M. Collett, D. F. Walls, Phys. Rev. Lett. 74, 351 (1995)
* [5] W. Heisenberg, Z. Phys. 43, 172 (1927)
* [6] M. Born and E. Wolf, Principles of Optics (Cambridge University Press, Cambridge, 1999)
* [7] H. Metcalf, P. Van der Straten, Phys. Rep. 244, 203 (1994)
* [8] K.S. Johnson, J.H. Thywissen, W.H. Dekker, K.K. Berggren, A.P. Chu, R. Younkin, M. Prentiss, Science, 280, 1583 (1998)
* [9] A.N. Boto, P. Kok, D.S. Abrams, S.L. Braunstein, C.P. Williams, J.P. Dowling, Phys. Rev. Lett. 85, 2733 (2000)
* [10] K.T. Kapale, S. Qamar, M.S. Zubairy, Phys. Rev. A 67, 023805 (2003)
* [11] J. Evers, S. Qamar, M.S. Zubairy, Phys. Rev. A 75, 053809 (2007)
* [12] G.P. Collins, Phys. Today 49, 18 (1996)
* [13] Y. Wu and R. Côté, Phys. Rev. A 65, 053603 (2002)
* [14] J. Mompart, V. Ahufinger, G. Birkl, Phys. Rev. A 79, 053638 (2009)
* [15] R. Abfalterer, C. Keller, S. Bernet, M.K. Oberthaler, J. Schmiedmayer, A. Zeilinger, Phys. Rev. A 56, 4365 (1997)
* [16] C. Keller, R. Abfalterer, S. Bernet, M.K. Oberthaler, J. Schmiedmayer, A. Zeilinger, J. Vac. Sci. Technol. B 16, 3850 (1998)
* [17] K.S. Johnson, J.H. Thywissen, N.H. Dekker, K.K. Berggren, A.P. Chu, R. Younkin, M. Prentiss, Science 280, 1583 (1998)
* [18] P. Storey, M. Collett, D. F. Walls, Phys. Rev. Lett. 68, 472 (1992)
* [19] M.A.M. Marte, P. Zoller, Appl. Phys. B 54, 477(1992)
* [20] P. Storey, M. Collett, D. F. Walls, Phys. Rev. A 47, 405 (1993)
* [21] R. Quadt, M. Collett, D. F. Walls, Phys. Rev. Lett. 74, 351 (1995)
* [22] S. Kunze, G. Rempe, M. Wilkens, Europhys. Lett. 27, 115 (1994)
* [23] S. Kunze, K. Dieckmann, G. Rempe, Phys. Rev. Lett. 78, 2038 (1997)
* [24] F. L. Kien, G. Rempe, W. P. Schleich, M. S. Zubairy, Phys. Rev. A 56, 2972 (1997)
* [25] S. Qamar, A. Mehmood, S. Qamar, Phys. Rev. A 79, 033848 (2009)
* [26] S. Kunze, K. Dieckmann, G. Rempe, Phys. Rev. Lett. 78, 2038 (1997)
* [27] F. Ghafoor, Phys. Rev. A 84, 063849 (2011)
* [28] S. Qamar, S.Y. Zhu, and M.S. Zubairy, Phys. Rev. A 61, 063806 (2000)
* [29] H.Wang, D. Goorskey, M. Xiao, Phys. Rev. Lett. 87, 073601 (2001)
* [30] Y. Wu and X. Yang, Appl. Phys. Lett. 91, 094104 (2007)
* [31] Y. Wu and X. Yang, Phys. Rev. A 70, 053818 (2004)
* [32] Y. Wu, L. L. Wen, Y. F. Zhu, Opt. Lett. 28, 631 (2003)
* [33] H. Lee, P. Polynkin, M. O. Scully, S. Y. Zhu, Phys. Rev. A 55, 4454 (1997)
* [34] J.H. Wu, A.J. Li, Y. Ding, Y.C. Zhao, J.Y. Gao, Phys. Rev. A 72, 023802 (2005)
* [35] W. Harshawardhan, G.S. Agarwal, Phys. Rev. A 53, 1812 (1996)
* [36] A. Joshi, M. Xiao, Phys. Rev. Lett. 91, 143904 (2003)
* [37] V. Ivanov, Y. Rozhdestvensky, Phys. Rev. A 81, 033809 (2010)
* [38] R.G. Wan, J. Kou, L. Jiang, Y. Jiang, J.Y. Gao, J. Opt. Soc. Am. B 28, 622 (2011)
* [39] R.G. Wan, J. Kou, L. Jiang, Y. Jiang, J.Y. Gao, Opt. Commun. 284, 985 (2011)
* [40] C. Ding, J. Li, Z. Zhan, X. Yang, Phys. Rev. A 83, 063834 (2011)
* [41] R.G. Wan, T.Y. Zhang, Opt. Express 29, 25823 (2011)
* [42] J. Li, R. Yu, M. Liu, C. Ding, X. Yang, Phys. Lett. A 375, 3978 (2011)
* [43] C. Ding, J. Li, X. Yang, D. Zhang, H. Xiong, Phys. Rev. A 84, 043840 (2011)
* [44] C.L. Ding, J.H. Li, X.X. Yang, Z.M. Zhang, J.B. Liu, J. Phys. B 44, 145501 (2011)
* [45] A.M. Herkommer, W.P. Schleich, M.S. Zubairy, J. Mod. Opt. 44, 2507 (1997)
* [46] E. Paspalakis,C. H. Keitel,P.L. Knight, Phys. Rev. A 58, 4868 (1998)
* [47] M. Sahrai, H. Tajalli, K.T. Kapale, M.S. Zubairy, Phys. Rev. A 72, 013820 (2005)
* [48] K.T. Kapale, M.S. Zubairy, Phys. Rev. A 73, 023813 (2006)
* [49] C.P. Liu, S.Q. Gong, D.C. Cheng, X.J. Fan, Z.Z. Xu, Phys. Rev. A 73, 025801 (2006)
* [50] D.C. Cheng, Y. P. Niu, R.X. Li, S.Q. Gong, J. Opt. Soc. Am. B 23, 2180 (2006)
* [51] P. Meystre, M. Sargent, Elements of Quantum Optics (Springer-Verlag, Berlin, 1999)
* [52] S. Qamar, S.Y. Zhu, M.S. Zubairy, Opt. Commun. 176, 409 (2000)
* [53] M.O. Scully, M.S Zubairy, Quantum Optics (Cambridge University Press, Cambridge, 1997)
* [54] G.S. Agarwal, Quantum Statistical Theories of Spontaneous Emission and their Relation to Other Approaches (Springer-Verlag, Berlin, 1974)
* [55] F. Ghafoor, S.Y. Zhu, M.S. Zubairy, Phys. Rev. A 62, 013811 (2000)
* [56] S.Y. Zhu, R.C.F. Chan, C.P. Lee, Phys. Rev. A 52, 710 (1995)
* [57] Z. Wang, B. Yu, F. Xu, S. Zhen, X. Wu, Appl. Phys. B 108, 479 (2012)
* [58] L.L. Jin, H. Sun, Y.P. Niu, S.Q. Jin, S.Q. Gong, J. Mod. Opt. 56, 805 (2009)
## 5 Figure Captions
Fig-1: Schematic diagram of the system. $\left(a\right)$ An atom moving along the
$z$-axis interacts with a two-dimensional position dependent field in the $x$-$y$
plane; $\left(b\right)$ Atomic model under consideration. The two excited
levels $\left|1\right\rangle$ and $\left|2\right\rangle$ are coupled by two
dimensional standing wavefield $\Omega\left(x,y\right)$ with level
$\left|2\right\rangle$ having a finite detuning $\Delta$. Both the excited
levels $\left|1\right\rangle$ and $\left|2\right\rangle$ decay spontaneously
to the ground state with decay rates $\Gamma_{1}$ and $\Gamma_{2}$, respectively.
Fig-2: Filter function $\mathcal{F}(x,y)$ as a function of $(k_{1}x,k_{2}y)$
for detuning of spontaneously emitted photon, $\left(a\right)$
$\delta_{\mathbf{k}}=9.3$; $\left(b\right)$ $\delta_{\mathbf{k}}=5.3$;
$\left(c\right)$ $\delta_{\mathbf{k}}=2.9$; $\left(d\right)$
$\delta_{\mathbf{k}}=0.1$. Other parameters are $\alpha_{c}=\frac{\pi}{2}$,
$\xi=\frac{\pi}{4}$, $\Delta=2.5$, $\omega_{21}=20$ and
$\Omega_{1}=\Omega_{2}=5$. All system parameters are scaled in units of
$\Gamma$.
Fig-3: Filter function $\mathcal{F}(x,y)$ as a function of $(k_{1}x,k_{2}y)$
for detuning of spontaneously emitted photon, $\left(a\right)$
$\delta_{\mathbf{k}}=12.4$; $\left(b\right)$ $\delta_{\mathbf{k}}=9.5$;
$\left(c\right)$ $\delta_{\mathbf{k}}=6.0$; $\left(d\right)$
$\delta_{\mathbf{k}}=2.4$. Other parameters are $\alpha_{c}=0$,
$\xi=\frac{\pi}{4}$, $\Delta=2.5$, $\omega_{21}=20$ and
$\Omega_{1}=\Omega_{2}=5$. All system parameters are scaled in units of
$\Gamma$.
Fig-4: Filter function $\mathcal{F}(x,y)$ as a function of $(k_{1}x,k_{2}y)$
for detuning of spontaneously emitted photon, $\left(a\right)$
$\delta_{\mathbf{k}}=12.4$; $\left(b\right)$ $\delta_{\mathbf{k}}=9.5$;
$\left(c\right)$ $\delta_{\mathbf{k}}=6.0$; $\left(d\right)$
$\delta_{\mathbf{k}}=2.4$. Other parameters are $\alpha_{c}=\pi$,
$\xi=\frac{\pi}{4}$, $\Delta=2.5$, $\omega_{21}=20$ and
$\Omega_{1}=\Omega_{2}=5$. All system parameters are scaled in units of
$\Gamma$.
Fig-5: Filter function $\mathcal{F}(x,y)$ as a function of $(k_{1}x,k_{2}y)$
for detuning of spontaneously emitted photon, $\left(a\right)$ $\xi=0$;
$\left(b\right)$ $\xi=\frac{\pi}{2}$. Other parameters are $\alpha_{c}=\pi$,
$\delta_{\mathbf{k}}=2.4$, $\Delta=2.5$, $\omega_{21}=20$ and
$\Omega_{1}=\Omega_{2}=5$. All system parameters are scaled in units of
$\Gamma$.
|
arxiv-papers
| 2013-02-11T12:20:37 |
2024-09-04T02:49:41.628203
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Sajjad Zafar, Rizwan Ahmed, M. Khalid Khan",
"submitter": "M. Khalid Khan",
"url": "https://arxiv.org/abs/1302.2460"
}
|
1302.2474
|
Generalizations of generating function for hypergeometric orthogonal
polynomials
Generalizations of generating functions for hypergeometric orthogonal
polynomials with definite integrals
Howard S. COHL †, Connor MACKENZIE §† and Hans VOLKMER ‡
H. S. Cohl, C. MacKenzie & H. Volkmer
† Applied and Computational Mathematics Division, National Institute of
Standards and Technology, Gaithersburg, MD 20899-8910, USA
[email protected]
http://hcohl.sdf.org
§ Department of Mathematics, Westminster College, 319 South Market Street, New
Wilmington, PA 16172, USA [email protected]
‡ Department of Mathematical Sciences, University of Wisconsin-Milwaukee, P.O.
Box 413, Milwaukee, WI 53201, USA [email protected]
Received XX September 2012 in final form ????; Published online ????
We generalize generating functions for hypergeometric orthogonal polynomials,
namely Jacobi, Gegenbauer, Laguerre, and Wilson polynomials. These
generalizations of generating functions are accomplished through series
rearrangement using connection relations with one free parameter for these
orthogonal polynomials. We also use orthogonality relations to determine
corresponding definite integrals.
Orthogonal polynomials; Generating functions; Connection coefficients;
Generalized hypergeometric functions; Eigenfunction expansions; Definite
integrals
33C45, 05A15, 33C20, 34L10, 30E20
## 1 Introduction
In this paper, we apply connection relations (see for instance Andrews et al.
(1999) [2, Section 7.1]; Askey (1975) [3, Lecture 7]) with one free parameter
for the Jacobi, Gegenbauer, Laguerre, and Wilson polynomials (see Chapter 18
in [12]) to generalize generating functions for these orthogonal polynomials
using series rearrangement. This is because connection relations with one free
parameter only involve a summation over products of gamma functions and are
straightforward to sum. We have already applied our series rearrangement
technique using a connection relation with one free parameter to the
generating function for Gegenbauer polynomials [12, (18.12.4)]
$\frac{1}{(1+\rho^{2}-2\rho
x)^{\nu}}=\sum_{n=0}^{\infty}\rho^{n}C_{n}^{\nu}(x).$ (1)
The connection relation for Gegenbauer polynomials is given in Olver et al.
(2010) [12, (18.18.16)] (see also Ismail (2005) [9, (9.1.2)]), namely
$C_{n}^{\nu}(x)=\frac{1}{\mu}\sum_{k=0}^{\lfloor
n/2\rfloor}(\mu+n-2k)\frac{(\nu-\mu)_{k}\,(\nu)_{n-k}}{k!(\mu+1)_{n-k}}C_{n-2k}^{\mu}(x).$
(2)
Inserting (2) into (1), we obtained a result [5, (1)] which generalizes (1),
namely
$\frac{1}{(1+\rho^{2}-2\rho
x)^{\nu}}=\sum_{n=0}^{\infty}f_{n}^{(\nu,\mu)}(\rho)C_{n}^{\mu}(x),$ (3)
where
$f_{n}^{(\nu,\mu)}:\\{z\in{\mathbf{C}}:0<|z|<1\\}\setminus(-1,0)\to{\mathbf{C}}$
is defined by
$f_{n}^{(\nu,\mu)}(\rho):=\frac{\Gamma(\mu)e^{i\pi(\mu-\nu+1/2)}(n+\mu)}{\sqrt{\pi}\,\Gamma(\nu)\rho^{\mu+1/2}(1-\rho^{2})^{\nu-\mu-1/2}}Q_{n+\mu-1/2}^{\nu-\mu-1/2}\left(\frac{1+\rho^{2}}{2\rho}\right),$
where $Q_{\nu}^{\mu}$ is the associated Legendre function of the second kind
[12, Chapter 14]. It is easy to demonstrate that
$f_{n}^{(\nu,\nu)}(\rho)=\rho^{n}$. We have also successfully applied this
technique to an extension of (1) expanded in Jacobi polynomials using a
connection relation with two free parameters in Cohl (2010) [4, Theorem 5.1].
In this case the coefficients of the expansion are given in terms of Jacobi
functions of the second kind. Applying this technique with connection
relations with more than one free parameter is therefore possible, but it is
more intricate and involves rearrangement and summation of three or more
increasingly complicated sums. The goal of this paper is to demonstrate the
effectiveness of the series rearrangement technique using connection relations
with one free parameter by applying it to some of the most fundamental
generating functions for hypergeometric orthogonal polynomials.
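As a small numerical spot check of the kind of one-free-parameter connection relation we rely on, the following sketch verifies (2) using SciPy's eval_gegenbauer and poch routines; the parameter values are illustrative only.

```python
# Spot check of the Gegenbauer connection relation (2); parameter values are illustrative.
import numpy as np
from math import factorial
from scipy.special import eval_gegenbauer, poch

def rhs(n, nu, mu, x):
    total = 0.0
    for k in range(n // 2 + 1):
        coeff = ((mu + n - 2 * k) * poch(nu - mu, k) * poch(nu, n - k)
                 / (factorial(k) * poch(mu + 1, n - k)))
        total += coeff * eval_gegenbauer(n - 2 * k, mu, x)
    return total / mu

n, nu, mu, x = 6, 1.7, 0.9, 0.3
print(eval_gegenbauer(n, nu, x), rhs(n, nu, mu, x))   # the two values should agree
```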
Unless otherwise stated, the domains of convergence given in this paper are
those of the original generating function and/or its corresponding definite
integral. In this paper, we only justify summation interchanges for a few of
the theorems we present. For the interchange justifications we have given, we
give all the details. However, for the sake of brevity, we leave justification
for the remaining interchanges to the reader.
Here we give a short review of the special functions which are used in
this paper.
${{}_{p}}F_{q}:{\mathbf{C}}^{p}\times({\mathbf{C}}\setminus{\mathbf{N}}_{0})^{q}\times\left\\{z\in{\mathbf{C}}:|z|<1\right\\}\to{\mathbf{C}}$
(see Chapter 16 in Olver et al. (2010) [12]) is defined as
${{}_{p}}F_{q}\left(\begin{array}[]{c}a_{1},\dots,a_{p}\\\\[5.69046pt]
b_{1},\dots,b_{q}\end{array};z\right):=\sum_{n=0}^{\infty}\frac{(a_{1})_{n}\,\dots\,(a_{p})_{n}}{(b_{1})_{n}\,\dots\,(b_{q})_{n}}\frac{z^{n}}{n!}.$
When $p=2,\,q=1,$ this is the special case referred to as the Gauss
hypergeometric function
${}_{2}F_{1}:{\mathbf{C}}^{2}\times({\mathbf{C}}\setminus-{\mathbf{N}}_{0})\times\left\\{z\in{\mathbf{C}}:|z|<1\right\\}\to{\mathbf{C}}$
(see Chapter 15 of Olver et al. (2010) [12]). When $p=1,\,q=1$ this is
Kummer’s confluent hypergeometric function of the first kind
$M:{\mathbf{C}}\times({\mathbf{C}}\setminus{\mathbf{N}}_{0})\times{\mathbf{C}}\to{\mathbf{C}}$
(see Chapter 13 in Olver et al. (2010) [12]), namely
$M(a,b,z):=\sum_{n=0}^{\infty}\frac{(a)_{n}}{(b)_{n}}\frac{z^{n}}{n!}={{}_{1}}F_{1}\left(\begin{array}[]{c}a\\\
b\end{array};z\right).$
When $p=0$, $q=1$, this is related to the Bessel function of the first kind
(see Chapter 10 in Olver et al. (2010) [12])
$J_{\nu}:{\mathbf{C}}\setminus\\{0\\}\to{\mathbf{C}}$, for
$\nu\in{\mathbf{C}}$, defined by
$J_{\nu}(z):=\frac{(z/2)^{\nu}}{\Gamma(\nu+1)}\,{}_{0}F_{1}\left(\begin{array}[]{c}-\\\
\nu+1\end{array};\frac{-z^{2}}{4}\right).$ (4)
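As a quick numerical illustration of (4), one can compare SciPy's Bessel routine jv with the ${{}_{0}}F_{1}$ form via hyp0f1; the values of $\nu$ and $z$ below are illustrative.

```python
# Check of (4): J_nu(z) = (z/2)^nu / Gamma(nu+1) * 0F1(; nu+1; -z^2/4).
import numpy as np
from scipy.special import jv, hyp0f1, gamma

nu, z = 0.7, 2.3
lhs = jv(nu, z)
rhs = (z / 2) ** nu / gamma(nu + 1) * hyp0f1(nu + 1, -z ** 2 / 4)
print(lhs, rhs)   # should agree to machine precision
```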
The case of $p=0$, $q=1$ is also related to the modified Bessel function of
the first kind (see Chapter 10 in Olver et al. (2010) [12])
$I_{\nu}:{\mathbf{C}}\setminus(-\infty,0]\to{\mathbf{C}}$, for
$\nu\in{\mathbf{C}}$, defined by
$I_{\nu}(z):=\frac{(z/2)^{\nu}}{\Gamma(\nu+1)}\,{{}_{0}}F_{1}\left(\begin{array}[]{c}-\\\
\nu+1\end{array};\frac{z^{2}}{4}\right).$
When $p=1$, $q=0$, this is the binomial expansion (see for instance Olver et
al. (2010) [12, (15.4.6)]), namely
${{}_{1}}F_{0}\left(\begin{array}[]{c}\alpha\\\\[5.69046pt]
-\end{array};z\right)=(1-z)^{-\alpha}.$ (5)
In these sums, the Pochhammer symbol (rising factorial)
$(\cdot)_{n}:{\mathbf{C}}\to{\mathbf{C}}$ [12, (5.2.4)] is defined by
$(z)_{n}:=\prod_{i=1}^{n}(z+i-1),$
where $n\in{\mathbf{N}}_{0}$. Also, when $z\notin-{\mathbf{N}}_{0}$ we have
(see [12, (5.2.5)])
$(z)_{n}=\frac{\Gamma(z+n)}{\Gamma(z)},$ (6)
where $\Gamma:{\mathbf{C}}\setminus-{\mathbf{N}}_{0}\to{\mathbf{C}}$ is the
gamma function (see Chapter 5 in Olver et al. (2010) [12]).
Throughout this paper we rely on the following definitions. For
$a_{1},a_{2},a_{3},\ldots\in{\mathbf{C}}$, if $i,j\in{\mathbf{Z}}$ and $j<i$
then $\sum_{n=i}^{j}a_{n}=0$ and $\prod_{n=i}^{j}a_{n}=1$. The set of natural
numbers is given by ${\mathbf{N}}:=\\{1,2,3,\ldots\\}$, the set
${\mathbf{N}}_{0}:=\\{0,1,2,\ldots\\}={\mathbf{N}}\cup\\{0\\}$, and the set
${\mathbf{Z}}:=\\{0,\pm 1,\pm 2,\ldots\\}.$ The set ${\mathbf{R}}$ represents
the real numbers.
## 2 Expansions in Jacobi polynomials
The Jacobi polynomials $P_{n}^{(\alpha,\beta)}:{\mathbf{C}}\to{\mathbf{C}}$
can be defined in terms of a terminating Gauss hypergeometric series as
follows (Olver et al. (2010) [12, (18.5.7)])
$P_{n}^{(\alpha,\beta)}(z):=\frac{(\alpha+1)_{n}}{n!}\,{}_{2}F_{1}\left(\begin{array}[]{c}-n,n+\alpha+\beta+1\\\\[2.84544pt]
\alpha+1\end{array};\frac{1-z}{2}\right),$
for $n\in{\mathbf{N}}_{0}$, and $\alpha,\beta>-1$ such that if
$\alpha,\beta\in(-1,0)$ then $\alpha+\beta+1\neq 0$. The orthogonality
relation for Jacobi polynomials can be found in Olver et al. (2010) [12,
(18.2.1), (18.2.5), Table 18.3.1]
$\int_{-1}^{1}P_{m}^{(\alpha,\beta)}(x)P_{n}^{(\alpha,\beta)}(x)(1-x)^{\alpha}(1+x)^{\beta}dx=\frac{2^{\alpha+\beta+1}\Gamma(\alpha+n+1)\Gamma(\beta+n+1)}{(2n+\alpha+\beta+1)\Gamma(\alpha+\beta+n+1)n!}\delta_{m,n}.$
(7)
A connection relation with one free parameter for Jacobi polynomials can be
found in Olver et al. (2010) [12, (18.18.14)], namely
$\displaystyle
P_{n}^{(\alpha,\beta)}(x)=\frac{(\beta+1)_{n}}{(\gamma+\beta+1)(\gamma+\beta+2)_{n}}$
$\displaystyle\hskip
28.45274pt\times\sum_{k=0}^{n}\frac{(\gamma+\beta+2k+1)(\gamma+\beta+1)_{k}\,(n+\beta+\alpha+1)_{k}(\alpha-\gamma)_{n-k}}{(\beta+1)_{k}\,(n+\gamma+\beta+2)_{k}(n-k)!}P_{k}^{(\gamma,\beta)}(x).$
(8)
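The connection relation (8) may be spot-checked numerically; the following Python sketch (assuming mpmath, with the helper name `connection_coeff`, the variable `gam` standing for $\gamma$, and the parameter values chosen arbitrarily by us) reconstructs $P_n^{(\alpha,\beta)}(x)$ from the $P_k^{(\gamma,\beta)}(x)$:

```python
from mpmath import mp, rf, factorial, jacobi

mp.dps = 30

def connection_coeff(n, k, alpha, beta, gam):
    """Coefficient of P_k^{(gam, beta)} in the connection relation (8)."""
    pre = rf(beta + 1, n) / ((gam + beta + 1) * rf(gam + beta + 2, n))
    num = (gam + beta + 2*k + 1) * rf(gam + beta + 1, k) \
          * rf(n + beta + alpha + 1, k) * rf(alpha - gam, n - k)
    den = rf(beta + 1, k) * rf(n + gam + beta + 2, k) * factorial(n - k)
    return pre * num / den

alpha, beta, gam, x = mp.mpf('0.7'), mp.mpf('-0.2'), mp.mpf('1.4'), mp.mpf('0.35')
for n in range(5):
    rhs = sum(connection_coeff(n, k, alpha, beta, gam) * jacobi(k, gam, beta, x)
              for k in range(n + 1))
    print(n, jacobi(n, alpha, beta, x) - rhs)   # each difference ~ 0
```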
In the remainder of the paper, we will use the following global notation ${\rm
R}:=\sqrt{1+\rho^{2}-2\rho x}$.
###### Theorem .
Let $\alpha\in{\mathbf{C}}$, $\beta,\gamma>-1$ such that if
$\beta,\gamma\in(-1,0)$ then $\beta+\gamma+1\neq 0$,
$\rho\in\\{z\in{\mathbf{C}}:|z|<1\\}$, $x\in[-1,1]$. Then
$\displaystyle\frac{2^{\alpha+\beta}}{{\rm R}\left(1+{\rm
R}-\rho\right)^{\alpha}\left(1+{\rm R}+\rho\right)^{\beta}}$
$\displaystyle\hskip
17.07182pt=\frac{1}{\gamma+\beta+1}\sum_{k=0}^{\infty}\frac{(2k+\gamma+\beta+1)(\gamma+\beta+1)_{k}\,\left(\frac{\alpha+\beta+1}{2}\right)_{k}\,\left(\frac{\alpha+\beta+2}{2}\right)_{k}}{(\alpha+\beta+1)_{k}\,\left(\frac{\gamma+\beta+2}{2}\right)_{k}\,\left(\frac{\gamma+\beta+3}{2}\right)_{k}}$
$\displaystyle\hskip
96.73918pt\times\,{{}_{3}}F_{2}\left(\begin{array}[]{c}\beta+k+1,\alpha+\beta+2k+1,\alpha-\gamma\\\\[2.84544pt]
\alpha+\beta+k+1,\gamma+\beta+2k+2\end{array};\rho\right)\rho^{k}P_{k}^{(\gamma,\beta)}(x).$
(11)
Proof. Olver et al. (2010) [12, (18.12.1)] give the generating function for Jacobi
polynomials, namely
$\frac{2^{\alpha+\beta}}{{\rm R}\left(1+{\rm
R}-\rho\right)^{\alpha}\left(1+{\rm
R}+\rho\right)^{\beta}}=\sum_{n=0}^{\infty}\rho^{n}P_{n}^{(\alpha,\beta)}(x).$
(12)
This generating function is special in that it is the only known algebraic
generating function for Jacobi polynomials (see [9, p. 90]). Using the Jacobi
connection relation (8) in (12) produces a double sum. In order to justify
reversing the order of the resulting double summation we show that
$\sum_{n=0}^{\infty}|c_{n}|\sum_{k=0}^{n}|a_{nk}|\left|P_{k}^{(\gamma,\beta)}(x)\right|<\infty,$
(13)
where $c_{n}=\rho^{n}$ and $a_{nk}$ are the connection coefficients satisfying
$P_{n}^{(\alpha,\beta)}(x)=\sum_{k=0}^{n}a_{nk}P_{k}^{(\gamma,\beta)}(x).$
We assume that $\alpha,\beta,\gamma>-1$, $x\in[-1,1],$ and $|\rho|<1$. It
follows from [15, Theorem 7.32.1] that
$\max_{x\in[-1,1]}\left|P_{n}^{(\alpha,\beta)}(x)\right|\leq
K_{1}(1+n)^{\sigma},$ (14)
where $K_{1}$ and $\sigma$ are positive constants. In order to estimate
$a_{nk}$ we use
$a_{nk}=\frac{\int_{-1}^{1}(1-x)^{\gamma}(1+x)^{\beta}P_{n}^{(\alpha,\beta)}(x)P_{k}^{(\gamma,\beta)}(x)\,dx}{\int_{-1}^{1}(1-x)^{\gamma}(1+x)^{\beta}\\{P_{k}^{(\gamma,\beta)}(x)\\}^{2}\,dx}.$
Using (14) we have
$\left|\int_{-1}^{1}(1-x)^{\gamma}(1+x)^{\beta}P_{n}^{(\alpha,\beta)}(x)P_{k}^{(\gamma,\beta)}(x)\,dx\right|\leq
K_{2}(1+k)^{\sigma}(1+n)^{\sigma}.$
Using (7), we get
$\left|\int_{-1}^{1}(1-x)^{\gamma}(1+x)^{\beta}\\{P_{k}^{(\gamma,\beta)}(x)\\}^{2}\,dx\right|\geq\frac{K_{3}}{1+k}$
where $K_{3}>0$. Therefore,
$|a_{nk}|\leq K_{4}(1+k)^{\sigma+1}(1+n)^{\sigma}.$
Finally, we show (13):
$\displaystyle\sum_{n=0}^{\infty}|c_{n}|\sum_{k=0}^{n}|a_{nk}|\left|P_{k}^{(\gamma,\beta)}(x)\right|$
$\displaystyle\leq$ $\displaystyle
K_{5}\sum_{n=0}^{\infty}|\rho|^{n}\sum_{k=0}^{n}(1+k)^{2\sigma+1}(1+n)^{\sigma}$
$\displaystyle\leq$ $\displaystyle
K_{5}\sum_{n=0}^{\infty}|\rho|^{n}(1+n)^{3\sigma+2}<\infty,$
because $|\rho|<1$. Reversing the order of summation, shifting the
$n$-index by $k$, simplifying, and appealing to analytic continuation in
$\alpha$, we produce the generalization (11). $\hfill\blacksquare$
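The expansion (11) can also be checked numerically by truncating the sum on its right-hand side; the sketch below (assuming mpmath, with `gam` standing for $\gamma$ and arbitrarily chosen parameter values) compares the two sides:

```python
from mpmath import mp, rf, sqrt, hyper, jacobi

mp.dps = 30

alpha, beta, gam = mp.mpf('0.4'), mp.mpf('0.1'), mp.mpf('1.2')
rho, x = mp.mpf('0.3'), mp.mpf('-0.4')

R = sqrt(1 + rho**2 - 2*rho*x)
lhs = 2**(alpha + beta) / (R * (1 + R - rho)**alpha * (1 + R + rho)**beta)

rhs = mp.mpf(0)
for k in range(40):                                  # truncation of the sum in (11)
    coeff = ((2*k + gam + beta + 1) * rf(gam + beta + 1, k)
             * rf((alpha + beta + 1)/2, k) * rf((alpha + beta + 2)/2, k)) \
            / (rf(alpha + beta + 1, k) * rf((gam + beta + 2)/2, k)
               * rf((gam + beta + 3)/2, k))
    f32 = hyper([beta + k + 1, alpha + beta + 2*k + 1, alpha - gam],
                [alpha + beta + k + 1, gam + beta + 2*k + 2], rho)
    rhs += coeff * f32 * rho**k * jacobi(k, gam, beta, x)
rhs /= gam + beta + 1

print(lhs - rhs)   # ~ 0 up to the truncation error
```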
###### Theorem .
Let $\alpha\in{\mathbf{C}}$, $\beta,\gamma>-1$ such that if
$\beta,\gamma\in(-1,0)$ then $\beta+\gamma+1\neq 0$,
$\rho\in\\{z\in{\mathbf{C}}:|z|<1\\}$, $x\in(-1,1)$. Then
$\displaystyle\left(\frac{2}{(1-x)\rho}\right)^{\alpha/2}\left(\frac{2}{(1+x)\rho}\right)^{\beta/2}J_{\alpha}\left(\sqrt{2(1-x)\rho}\right)I_{\beta}\left(\sqrt{2(1+x)\rho}\right)$
$\displaystyle=\frac{1}{(\gamma+\beta+1)\Gamma(\alpha+1)\Gamma(\beta+1)}\sum_{k=0}^{\infty}\frac{(2k+\gamma+\beta+1)(\gamma+\beta+1)_{k}\,\left(\frac{\alpha+\beta+1}{2}\right)_{k}\,\left(\frac{\alpha+\beta+2}{2}\right)_{k}}{(\alpha+1)_{k}\,(\beta+1)_{k}\,(\alpha+\beta+1)_{k}\,\left(\frac{\gamma+\beta+2}{2}\right)_{k}\,\left(\frac{\gamma+\beta+3}{2}\right)_{k}}$
$\displaystyle\hskip
85.35826pt\times\,{{}_{2}}F_{3}\left(\begin{array}[]{c}2k+\alpha+\beta+1,\alpha-\gamma\\\\[2.84544pt]
\alpha+\beta+k+1,\gamma+\beta+2k+2,\alpha+k+1\end{array};\rho\right)\rho^{k}P_{k}^{(\gamma,\beta)}(x).$
(17)
Proof. Olver et al. [12, (18.12.2)] give a generating function for Jacobi
polynomials, namely
$\displaystyle\left(\frac{2}{(1-x)\rho}\right)^{\alpha/2}\left(\frac{2}{(1+x)\rho}\right)^{\beta/2}J_{\alpha}\left(\sqrt{2(1-x)\rho}\right)I_{\beta}\left(\sqrt{2(1+x)\rho}\right)$
$\displaystyle\hskip
176.407pt=\sum_{n=0}^{\infty}\frac{1}{\Gamma(\alpha+1+n)\Gamma(\beta+1+n)}\rho^{n}P_{n}^{(\alpha,\beta)}(x).$
(18)
Using the connection relation for Jacobi polynomials (8) in (18) produces a
double sum. Reversing the order of the summation and shifting the $n$-index by
$k$ with simplification produces this generalization of the generating function
for Jacobi polynomials. $\hfill\blacksquare$
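The generating function (18) underlying this proof can be verified numerically; the following sketch (assuming mpmath, with parameters chosen arbitrarily) compares its left-hand side with a truncation of the right-hand side:

```python
from mpmath import mp, sqrt, gamma, besselj, besseli, jacobi

mp.dps = 30

alpha, beta = mp.mpf('0.5'), mp.mpf('1.25')
rho, x = mp.mpf('0.4'), mp.mpf('0.3')

lhs = (2/((1 - x)*rho))**(alpha/2) * (2/((1 + x)*rho))**(beta/2) \
      * besselj(alpha, sqrt(2*(1 - x)*rho)) * besseli(beta, sqrt(2*(1 + x)*rho))

rhs = sum(rho**n * jacobi(n, alpha, beta, x)
          / (gamma(alpha + 1 + n) * gamma(beta + 1 + n)) for n in range(30))

print(lhs - rhs)   # ~ 0
```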
###### Definition .
A companion identity is one which is produced by applying the map $x\mapsto-x$
to an expansion over Jacobi polynomials or in terms of those orthogonal
polynomials which can be obtained as limiting cases of Jacobi polynomials
(i.e., Gegenbauer, Chebyshev, and Legendre polynomials) with argument $x$ in
conjunction with the parity relations for those orthogonal polynomials.
By starting with (11) and (17), applying the parity relation for Jacobi
polynomials (see for instance Olver et al. (2010) [12, Table 18.6.1])
$P_{n}^{(\alpha,\beta)}(-x)=(-1)^{n}P_{n}^{(\beta,\alpha)}(x),$
and mapping $\rho\mapsto-\rho$, one obtains the corresponding companion
identities. For (17), however, one must substitute $-1=e^{\pm i\pi}$ and use
Olver et al. (2010) [12, (10.27.6)]. Therefore the two theorems above remain
valid when the left-hand sides are kept the same, while on the right-hand sides
$\alpha,\beta\mapsto\beta,\alpha$, the arguments of the ${}_{3}F_{2}$ and
${}_{2}F_{3}$ are replaced by $-\rho$, and the parameters of the Jacobi
polynomials become $(\alpha,\gamma)$.
###### Theorem .
Let $\alpha\in{\mathbf{C}}$, $\beta,\gamma>-1$ such that if
$\beta,\gamma\in(-1,0)$ then $\beta+\gamma+1\neq 0$,
$\rho\in\\{z\in{\mathbf{C}}:|z|<1\\}\setminus(-1,0]$, $x\in[-1,1]$. Then
$\displaystyle\frac{(1+x)^{-\beta/2}}{{\rm
R}^{\alpha+1}}P_{\alpha}^{-\beta}\left(\frac{1+\rho}{{\rm
R}}\right)=\frac{\Gamma(\gamma+\beta+1)}{2^{\beta/2}\Gamma(\beta+1)(1-\rho)^{\alpha-\gamma}\rho^{(\gamma+1)/2}}$
$\displaystyle\hskip
14.22636pt\times\sum_{k=0}^{\infty}\frac{(2k+\gamma+\beta+1)(\gamma+\beta+1)_{k}\,(\alpha+\beta+1)_{2k}}{(\beta+1)_{k}}P_{\gamma-\alpha}^{-\gamma-\beta-2k-1}\left(\frac{1+\rho}{1-\rho}\right)P_{k}^{(\gamma,\beta)}(x).$
(19)
Proof. Olver et al. (2010) [12, (18.12.3)] give a generating function for
Jacobi polynomials, namely
$(1+\rho)^{-\alpha-\beta-1}\,{{}_{2}}F_{1}\left(\begin{array}[]{c}\frac{1}{2}(\alpha+\beta+1),\frac{1}{2}(\alpha+\beta+2)\\\\[5.69046pt]
\beta+1\end{array};\frac{2(1+x)\rho}{(1+\rho)^{2}}\right)=\sum_{n=0}^{\infty}\frac{(\alpha+\beta+1)_{n}}{(\beta+1)_{n}}\rho^{n}P_{n}^{(\alpha,\beta)}(x).$
(20)
Using the connection relation for Jacobi polynomials (8) in (20) produces a
double sum on the right-hand side of the equation. Reversing the order of the
summation and shifting the $n$-index by $k$ with simplification gives a Gauss
hypergeometric function as the coefficient of the expansion. The resulting
expansion formula is
$\displaystyle(1+\rho)^{-\alpha-\beta-1}\,{{}_{2}}F_{1}\left(\begin{array}[]{c}\frac{1}{2}(\alpha+\beta+1),\frac{1}{2}(\alpha+\beta+2)\\\\[5.69046pt]
\beta+1\end{array};\frac{2(1+x)\rho}{(1+\rho)^{2}}\right)$ (23)
$\displaystyle\hskip
14.22636pt=\frac{1}{\gamma+\beta+1}\sum_{k=0}^{\infty}\frac{(2k+\gamma+\beta+1)(\gamma+\beta+1)_{k}\,\left(\frac{\alpha+\beta+1}{2}\right)_{k}\,\left(\frac{\alpha+\beta+2}{2}\right)_{k}}{(\beta+1)_{k}\,\left(\frac{\gamma+\beta+2}{2}\right)_{k}\,\left(\frac{\gamma+\beta+3}{2}\right)_{k}}$
$\displaystyle\hskip
142.26378pt\times\,{{}_{2}}F_{1}\left(\begin{array}[]{c}\alpha+\beta+1+2k,\alpha-\gamma\\\\[5.69046pt]
\gamma+\beta+2+2k\end{array};\rho\right)\rho^{k}P_{k}^{(\gamma,\beta)}(x).$
(26)
The Gauss hypergeometric function coefficient is realized to be an associated
Legendre function of the first kind. The associated Legendre function of the
first kind $P_{\nu}^{\mu}:{\mathbf{C}}\setminus(-\infty,1]\to{\mathbf{C}}$
(see Chapter 14 in Olver et al. (2010) [12]) can be defined in terms of the
Gauss hypergeometric function as follows (Olver et al. (2010) [12,
(14.3.6),(15.2.2), §14.21(i)])
$P_{\nu}^{\mu}(z):=\frac{1}{\Gamma(1-\mu)}\left(\frac{z+1}{z-1}\right)^{\mu/2}\,{{}_{2}}F_{1}\left(\begin{array}[]{c}-\nu,\nu+1\\\\[2.84544pt]
1-\mu\end{array};\frac{1-z}{2}\right),$ (27)
for $z\in{\mathbf{C}}\setminus(-\infty,1]$. Using a relation for the Gauss
hypergeometric function from Olver et al. (2010) [12, (15.9.19)], namely
${{}_{2}}F_{1}\left(\begin{array}[]{c}a,b\\\\[5.69046pt]
a-b+1\end{array};z\right)=\frac{z^{(b-a)/2}\,\Gamma(a-b+1)}{(1-z)^{b}}P_{-b}^{b-a}\left(\frac{1+z}{1-z}\right),$
(28)
for $z\in{\mathbf{C}}\setminus\left\\{(-\infty,0]\cup(1,\infty)\right\\}$, the
Gauss hypergeometric function coefficient of the expansion can be expressed as
an associated Legendre function of the first kind. The Gauss hypergeometric
function on the left-hand side of (26) can also be expressed in terms of the
associated Legendre function of the first kind using Magnus et al. (1966) [11,
p. 157, entry 11], namely
$P_{\nu}^{\mu}(z)=\frac{2^{\mu}z^{\nu+\mu}}{\Gamma(1-\mu)(z^{2}-1)^{\mu/2}}\,{{}_{2}}F_{1}\left(\begin{array}[]{c}\frac{-\nu-\mu}{2},\frac{-\nu-\mu+1}{2}\\\\[5.69046pt]
1-\mu\end{array};1-\frac{1}{z^{2}}\right),$
where $\Re z>0$. This completes the proof. $\hfill\blacksquare$
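The generating function (20) used in this proof can likewise be checked numerically; the sketch below (assuming mpmath, with arbitrary parameter values) compares both sides:

```python
from mpmath import mp, rf, hyp2f1, jacobi

mp.dps = 30

alpha, beta = mp.mpf('0.3'), mp.mpf('0.6')
rho, x = mp.mpf('0.25'), mp.mpf('0.1')

lhs = (1 + rho)**(-alpha - beta - 1) * hyp2f1((alpha + beta + 1)/2,
                                              (alpha + beta + 2)/2,
                                              beta + 1,
                                              2*(1 + x)*rho/(1 + rho)**2)
rhs = sum(rf(alpha + beta + 1, n) / rf(beta + 1, n) * rho**n * jacobi(n, alpha, beta, x)
          for n in range(60))

print(lhs - rhs)   # ~ 0
```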
###### Corollary .
Let $\beta\in{\mathbf{C}}$, $\alpha,\gamma>-1$ such that if
$\alpha,\gamma\in(-1,0)$ then $\alpha+\gamma+1\neq 0$, $\rho\in(0,1)$,
$x\in[-1,1]$. Then
$\displaystyle\frac{(1-x)^{-\alpha/2}}{{\rm
R}^{\beta+1}}{\mathrm{P}}_{\beta}^{-\alpha}\left(\frac{1-\rho}{{\rm
R}}\right)=\frac{\Gamma(\gamma+\alpha+1)}{2^{\alpha/2}\Gamma(\alpha+1)(1+\rho)^{\beta-\gamma}\rho^{(\gamma+1)/2}}$
$\displaystyle\hskip
14.22636pt\times\sum_{k=0}^{\infty}\frac{(2k+\gamma+\alpha+1)(\gamma+\alpha+1)_{k}\,(\alpha+\beta+1)_{2k}}{(\alpha+1)_{k}}{\mathrm{P}}_{\gamma-\beta}^{-\gamma-\alpha-2k-1}\left(\frac{1-\rho}{1+\rho}\right)P_{k}^{(\alpha,\gamma)}(x).$
(29)
Proof. Applying the parity relation for Jacobi polynomials to (26) and mapping
$\rho\mapsto-\rho$ produces its companion identity. The Gauss hypergeometric
functions appearing in this expression are Ferrers functions of the first kind
(often referred to as the associated Legendre function of the first kind on
the cut). The Ferrers function of the first kind
${\mathrm{P}}_{\nu}^{\mu}:(-1,1)\to{\mathbf{C}}$ can be defined in terms of
the Gauss hypergeometric function as follows (Olver et al. (2010) [12,
(14.3.1)])
${\mathrm{P}}_{\nu}^{\mu}(x):=\frac{1}{\Gamma(1-\mu)}\left(\frac{1+x}{1-x}\right)^{\mu/2}\,{{}_{2}}F_{1}\left(\begin{array}[]{c}-\nu,\nu+1\\\\[2.84544pt]
1-\mu\end{array};\frac{1-x}{2}\right),$ (30)
for $x\in(-1,1)$. The Gauss hypergeometric function coefficient of the
expansion is seen to be a Ferrers function of the first kind by starting with
(30) and using the linear transformation for the Gauss hypergeometric function
[12, (15.8.1)]. This yields a Gauss hypergeometric function representation of
the Ferrers function of the first kind, namely
${{}_{2}}F_{1}\left(\begin{array}[]{c}a,b\\\\[5.69046pt]
a-b+1\end{array};-x\right)=\frac{x^{(b-a)/2}\,\Gamma(a-b+1)}{(1+x)^{b}}{\mathrm{P}}_{-b}^{b-a}\left(\frac{1-x}{1+x}\right),$
for $x\in(0,1)$. The Gauss hypergeometric function on the left-hand side of
the companion identity for (26) is shown to be a Ferrers function of the first
kind through Magnus et al. (1966) [11, p. 167], namely
${\mathrm{P}}_{\nu}^{\mu}(x)=\frac{2^{\mu}x^{\nu+\mu}}{\Gamma(1-\mu)(1-x^{2})^{\mu/2}}\,{{}_{2}}F_{1}\left(\begin{array}[]{c}\frac{-\nu-\mu}{2},\frac{-\nu-\mu+1}{2}\\\\[5.69046pt]
1-\mu\end{array};1-\frac{1}{x^{2}}\right),$
for $x\in(0,1)$. This completes the proof. $\hfill\blacksquare$
The above two theorems are interesting results, as they are general cases of
other generalizations obtained from related generating functions. For instance,
by applying the connection relation for Jacobi polynomials (8) to an important
extension of the generating function (20) (with the parity relation for Jacobi
polynomials applied) given by Ismail (2005) [9, (4.3.2)], namely
$\displaystyle\frac{1+\rho}{(1-\rho)^{\alpha+\beta+2}}\,{{}_{2}}F_{1}\left(\begin{array}[]{c}\frac{\alpha+\beta+2}{2},\frac{\alpha+\beta+3}{2}\\\\[5.69046pt]
\alpha+1\end{array};\frac{2\rho(x-1)}{(1-\rho)^{2}}\right)$ (33)
$\displaystyle\hskip
142.26378pt=\sum_{n=0}^{\infty}\frac{(\alpha+\beta+1+2n)(\alpha+\beta+1)_{n}}{(\alpha+\beta+1)(\alpha+1)_{n}}\rho^{n}P_{n}^{(\alpha,\beta)}(x),$
(34)
produces a generalization that is equivalent to mapping
$\alpha\mapsto\alpha+1$ in Theorem .
It is interesting that trying to generalize (34) using the connection relation
(8) does not produce a new generalized formula, since (19) is its
generalization – Ismail proves (34) by multiplying (20) by
$\rho^{(\alpha+\beta+1)/2}$, after the companion identity is applied, and then
differentiating with respect to $\rho$. Ismail also mentions that (34) (and therefore
Theorem and Corollary ) are closely connected to the Poisson kernel of
$\bigl{\\{}P_{n}^{(\alpha,\beta)}(x)\bigr{\\}}$. One can also see that these
expansions are related to the translation operator associated with Jacobi
polynomials by mapping $\alpha\mapsto\alpha+1$ in Theorem .
Theorem and Corollary are also generalizations of the expansion (see Cohl &
MacKenzie (2013) [6])
$\displaystyle\frac{(1+x)^{-\beta/2}}{{\rm
R}^{\alpha+m+1}}P_{\alpha+m}^{-\beta}\left(\frac{1+\rho}{{\rm R}}\right)$
$\displaystyle\hskip
28.45274pt=\frac{\rho^{-(\alpha+1)/2}}{2^{\beta/2}(1-\rho)^{m}}\sum_{n=0}^{\infty}\frac{(2n+\alpha+\beta+1)\Gamma(\alpha+\beta+n+1)(\alpha+\beta+m+1)_{2n}}{\Gamma(\beta+n+1)}$
$\displaystyle\hskip 227.62204pt\times
P_{-m}^{-\alpha-\beta-2n-1}\left(\frac{1+\rho}{1-\rho}\right)P_{n}^{(\alpha,\beta)}(x),$
(35)
found by mapping $\alpha,\gamma\mapsto\alpha+m,\alpha$ for
$m\in{\mathbf{N}}_{0}$ in Theorem ; and its companion identity (see Cohl &
MacKenzie (2013) [6]),
$\displaystyle\frac{(1-x)^{-\alpha/2}}{{\rm
R}^{\beta+m+1}}{\mathrm{P}}_{\beta+m}^{-\alpha}\left(\frac{1-\rho}{{\rm
R}}\right)$ $\displaystyle\hskip
28.45274pt=\frac{\rho^{-(\beta+1)/2}}{2^{\alpha/2}(1+\rho)^{m}}\sum_{n=0}^{\infty}\frac{(2n+\alpha+\beta+1)\Gamma(\alpha+\beta+n+1)(\alpha+\beta+m+1)_{2n}}{\Gamma(\alpha+n+1)}$
$\displaystyle\hskip
227.62204pt\times{\mathrm{P}}_{-m}^{-\alpha-\beta-2n-1}\left(\frac{1-\rho}{1+\rho}\right)P_{n}^{(\alpha,\beta)}(x),$
(36)
found by mapping $\beta,\gamma\mapsto\beta+m,\beta$ for $m\in{\mathbf{N}}_{0}$
in Theorem . The expansions (35), (36) are produced using the definition of
the Gauss hypergeometric function on the left hand side of (34) and the
expansion of $(1-x)^{n}$ in terms of Jacobi polynomials (see Cohl & MacKenzie
(2013) [6, (7), (13)]). Interestingly, the expansions (35) and (36) are also
related to the generalized translation operator, but with a more general
translation that can be seen with the $\alpha\mapsto\alpha+m$ or
$\beta\mapsto\beta+m$.
## 3 Expansions in Gegenbauer polynomials
The Gegenbauer polynomials $C_{n}^{\mu}:{\mathbf{C}}\to{\mathbf{C}}$ can be
defined in terms of the terminating Gauss hypergeometric series as follows
(Olver et al. (2010) [12, (18.5.9)])
$C_{n}^{\mu}(z):=\frac{(2\mu)_{n}}{n!}\,{}_{2}F_{1}\left(\begin{array}[]{c}-n,n+2\mu\\\\[2.84544pt]
\mu+\frac{1}{2}\end{array};\frac{1-z}{2}\right),$
for $n\in{\mathbf{N}}_{0}$ and $\mu\in(-1/2,\infty)\setminus\\{0\\}$. The
orthogonality relation for Gegenbauer polynomials can be found in Olver et al.
(2010) [12, (18.2.1), (18.2.5), Table 18.3.1] for $m,n\in{\mathbf{N}}_{0}$,
namely
$\int_{-1}^{1}C_{m}^{\mu}(x)C_{n}^{\mu}(x)(1-x^{2})^{\mu-1/2}dx=\frac{\pi
2^{1-2\mu}\Gamma(2\mu+n)}{(n+\mu)\Gamma^{2}(\mu)n!}\delta_{m,n}.$ (37)
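The orthogonality relation (37) can be verified by direct numerical quadrature; the following sketch (assuming mpmath, with an arbitrarily chosen $\mu$) compares the integral with the stated norm:

```python
from mpmath import mp, quad, gegenbauer, gamma, pi, factorial

mp.dps = 25

mu = mp.mpf('0.75')
m, n = 3, 3
integral = quad(lambda x: gegenbauer(m, mu, x) * gegenbauer(n, mu, x)
                * (1 - x**2)**(mu - mp.mpf('0.5')), [-1, 1])
norm = pi * 2**(1 - 2*mu) * gamma(2*mu + n) / ((n + mu) * gamma(mu)**2 * factorial(n))
print(integral - norm)   # ~ 0 (the integral vanishes when m != n)
```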
###### Theorem .
Let $\lambda,\mu\in{\mathbf{C}}$, $\nu\in(-1/2,\infty)\setminus\\{0\\},$
$\rho\in\left\\{z\in{\mathbf{C}}:|z|<1\right\\},$ $x\in[-1,1]$. Then
$\displaystyle(1-x^{2})^{1/4-\mu/2}P_{\lambda+\mu-1/2}^{1/2-\mu}\left({\rm
R}+\rho\right)\,{\mathrm{P}}_{\lambda+\mu-1/2}^{1/2-\mu}\left({\rm
R}-\rho\right)$ $\displaystyle\hskip
14.22636pt=\frac{(\rho/2)^{\mu-1/2}}{\nu\,\Gamma^{2}(\mu+1/2)}\sum_{n=0}^{\infty}\frac{(\nu+n)(-\lambda)_{n}\,(2\mu+\lambda)_{n}\,(\mu)_{n}}{(2\mu)_{n}\,(\mu+1/2)_{n}\,(\nu+1)_{n}}$
$\displaystyle\hskip
42.67912pt\times\,{{}_{6}}F_{5}\left(\begin{array}[]{c}\frac{-\lambda+n}{2},\frac{-\lambda+n+1}{2},\frac{2\mu+\lambda+n}{2},\frac{2\mu+\lambda+n+1}{2},\mu+n,\mu-\nu\\\\[5.69046pt]
\frac{2\mu+n}{2},\frac{2\mu+n+1}{2},\frac{\mu+n+\frac{1}{2}}{2},\frac{\mu+n+\frac{3}{2}}{2},\nu+1+n\end{array};\rho^{2}\right)\rho^{n}C_{n}^{\nu}(x).$
(40)
Proof. In Koekoek, Lesky & Swarttouw (2010) [10, (9.8.32)], there is the
following generating function for Gegenbauer polynomials
$\displaystyle{{}_{2}}F_{1}\left(\begin{array}[]{c}\lambda,2\mu-\lambda\\\\[5.69046pt]
\mu+\frac{1}{2}\end{array};\frac{1-\rho-{\rm
R}}{2}\right)\,{{}_{2}}F_{1}\left(\begin{array}[]{c}\lambda,2\mu-\lambda\\\\[5.69046pt]
\mu+\frac{1}{2}\end{array};\frac{1+\rho-{\rm
R}}{2}\right)=\sum_{n=0}^{\infty}\frac{(\lambda)_{n}\,(2\mu-\lambda)_{n}}{(2\mu)_{n}\,(\mu+\frac{1}{2})_{n}}\rho^{n}C_{n}^{\mu}(x).$
(45)
These Gauss hypergeometric functions can be re-written in terms of associated
Legendre and Ferrers functions of the first kind. The first Gauss
hypergeometric function can be written in terms of the Ferrers function of the
first kind using Abramowitz & Stegun (1972) [1, (15.4.19)], namely
${}_{2}F_{1}\left(\begin{array}[]{c}a,b\\\\[2.84544pt]
\frac{a+b+1}{2}\end{array};x\right)=\Gamma\left(\frac{a+b+1}{2}\right)\left(x(1-x)\right)^{(1-a-b)/4}{\rm
P}_{(a-b-1)/2}^{(1-a-b)/2}(1-2x),$
for $x\in(0,1),$ with $a=\lambda$, $b=2\mu-\lambda$. The second Gauss
hypergeometric function can be written in terms of the associated Legendre
function of the first kind using [12, (14.3.6)] and Euler’s transformation
[12, (15.8.1)]. The substitutions yield
$\displaystyle(1-x^{2})^{1/4-\mu/2}P_{\lambda+\mu-1/2}^{1/2-\mu}({\rm
R}+\rho){\rm P}_{\lambda+\mu-1/2}^{1/2-\mu}({\rm R}-\rho)$
$\displaystyle\hskip
142.26378pt=\frac{(\rho/2)^{\mu-1/2}}{\Gamma^{2}(\mu+1/2)}\sum_{n=0}^{\infty}\frac{(-\lambda)_{n}(2\mu+\lambda)_{n}}{(2\mu)_{n}(\mu+1/2)_{n}}\rho^{n}C_{n}^{\mu}(x).$
(46)
Using the connection relation for Gegenbauer polynomials (2) on the generating
function (46) produces a double sum. Reversing the order of the summation and
shifting the $n$-index by $2k$ with simplification completes the proof.
$\hfill\blacksquare$
Associated Legendre and Ferrers functions of the first kind with special
values of the degree and order reduce to Gegenbauer polynomials. For instance,
if $n\in{\mathbf{N}}_{0}$, then through [12, (14.3.22)]
$P_{n+\mu-1/2}^{1/2-\mu}(z)=\frac{2^{\mu-1/2}\Gamma(\mu)n!}{\sqrt{\pi}\,\Gamma(2\mu+n)}(z^{2}-1)^{\mu/2-1/4}\,C_{n}^{\mu}(z),$
and from [12, (14.3.21)], one has
${\mathrm{P}}_{n+\mu-1/2}^{1/2-\mu}(x)=\frac{2^{\mu-1/2}\Gamma(\mu)n!}{\sqrt{\pi}\,\Gamma(2\mu+n)}(1-x^{2})^{\mu/2-1/4}\,C_{n}^{\mu}(x).$
From (46), using the above expressions, we have the following finite-summation
generating function expression for $m\in{\mathbf{N}}_{0}$,
$C_{m}^{\mu}({\rm R}+\rho)C_{m}^{\mu}({\rm
R}-\rho)=\frac{(2\mu)_{m}^{2}}{(m!)^{2}}\sum_{n=0}^{m}\frac{(-m)_{n}(2\mu+m)_{n}}{(2\mu)_{n}(\mu+\frac{1}{2})_{n}}\rho^{n}C_{n}^{\mu}(x),$
(47)
and from the generalized result (40) we have
$\displaystyle C_{m}^{\mu}({\rm R}+\rho)C_{m}^{\mu}({\rm
R}-\rho)=\frac{(2\mu)_{m}^{2}}{\nu(m!)^{2}}\sum_{n=0}^{m}\frac{(\nu+n)\,(-m)_{n}\,(2\mu+m)_{n}\,(\mu)_{n}}{(2\mu)_{n}\,(\mu+1/2)_{n}\,(\nu+1)_{n}}$
$\displaystyle\hskip
42.67912pt\times\,{{}_{6}}F_{5}\left(\begin{array}[]{c}\frac{-m+n}{2},\frac{-m+n+1}{2},\frac{2\mu+m+n}{2},\frac{2\mu+m+n+1}{2},\mu-\nu,\mu+n\\\\[5.69046pt]
\frac{2\mu+n}{2},\frac{2\mu+n+1}{2},\frac{\mu+n+\frac{1}{2}}{2},\frac{\mu+n+\frac{3}{2}}{2},\nu+1+n\end{array};\rho^{2}\right)\rho^{n}C_{n}^{\nu}(x),$
which reduces to (47) when $\nu=\mu$.
Consider the generating function for Gegenbauer polynomials, Olver et al.
(2010) [12, (18.12.5)]
$\frac{1-\rho x}{(1+\rho^{2}-2\rho
x)^{\nu+1}}=\frac{1}{2\nu}\sum_{n=0}^{\infty}(n+2\nu)\rho^{n}C_{n}^{\nu}(x),$
(49)
and the generating function
$\frac{x-\rho}{(1+\rho^{2}-2\rho
x)^{\nu+1}}=\frac{1}{2\nu\rho}\sum_{n=0}^{\infty}n\rho^{n}C_{n}^{\nu}(x),$
(50)
which follows from (49) using (1). The technique of this paper can also be
applied to generalize (49) and (50). However, note that
$\frac{1-\rho x}{(1+\rho^{2}-2\rho
x)^{\nu+1}}=\frac{1-\rho^{2}}{2}\frac{1}{(1+\rho^{2}-2\rho
x)^{\nu+1}}+\frac{1}{2}\frac{1}{(1+\rho^{2}-2\rho x)^{\nu}},$
$\frac{x-\rho}{(1+\rho^{2}-2\rho
x)^{\nu+1}}=\frac{1-\rho^{2}}{2\rho}\frac{1}{(1+\rho^{2}-2\rho
x)^{\nu+1}}-\frac{1}{2\rho}\frac{1}{(1+\rho^{2}-2\rho x)^{\nu}},$
so it is easier to use (3) on the right-hand sides.
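Both the generating function (49) and the algebraic decomposition displayed above are easily checked numerically; the sketch below (assuming mpmath, with arbitrary parameter values) does so:

```python
from mpmath import mp, gegenbauer

mp.dps = 25

nu = mp.mpf('0.8')
rho, x = mp.mpf('0.35'), mp.mpf('0.2')
denom = 1 + rho**2 - 2*rho*x

# Generating function (49)
lhs = (1 - rho*x) / denom**(nu + 1)
rhs = sum((n + 2*nu) * rho**n * gegenbauer(n, nu, x) for n in range(60)) / (2*nu)
print(lhs - rhs)   # ~ 0

# Algebraic decomposition displayed above
print(lhs - ((1 - rho**2)/2 / denom**(nu + 1) + mp.mpf('0.5') / denom**nu))   # ~ 0
```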
The Gegenbauer polynomials can be defined as a specific case of the Jacobi
polynomials, namely
$C_{n}^{\nu}(x)=\frac{(2\nu)_{n}}{\left(\nu+\frac{1}{2}\right)_{n}}P_{n}^{(\nu-1/2,\nu-1/2)}(x).$
(51)
Therefore the expansions given in the section on Jacobi polynomials can also
be written as expansions in Gegenbauer polynomials by using symmetric
parameters. Furthermore, these expansions can also be written as expansions
over Chebyshev polynomials of the second kind and Legendre polynomials using
$U_{n}(z)=C_{n}^{1}(z),$ $P_{n}(z)=C_{n}^{1/2}(z),$ for
$n\in{\mathbf{N}}_{0}$. One may also take the limit of an expansion in
Gegenbauer polynomials as $\mu\to 0.$ This limit is well defined when
interpreted as yielding Chebyshev polynomials of the first kind through
Andrews et al. (1999) [2, (6.4.13)], namely
$T_{n}(z)=\frac{1}{\epsilon_{n}}\lim_{\mu\to
0}\frac{n+\mu}{\mu}C_{n}^{\mu}(z),$ (52)
where the Neumann factor $\epsilon_{n}\in\\{1,2\\}$, defined by
$\epsilon_{n}:=2-\delta_{n,0}$, is commonly seen in Fourier cosine series. We
can, for example, derive the following corollaries.
###### Corollary .
Let $\alpha\in{\mathbf{C}}$, $\gamma\in(-1/2,\infty)\setminus\\{0\\}$,
$\rho\in\\{z\in{\mathbf{C}}:|z|<1\\}$, $x\in[-1,1]$. Then
$\displaystyle\frac{2^{\alpha+\gamma-1}}{{\rm R}\left(1+{\rm
R}-\rho\right)^{\alpha-1/2}\left(1+{\rm R}+\rho\right)^{\gamma-1/2}}$
$\displaystyle=\frac{1}{\gamma}\sum_{k=0}^{\infty}\frac{(k+\gamma)\left(\frac{\alpha+\gamma}{2}\right)_{k}\,\left(\frac{\alpha+\gamma+1}{2}\right)_{k}}{(\alpha+\gamma)_{k}\,(\gamma+1)_{k}}\,{{}_{3}}F_{2}\left(\begin{array}[]{c}\gamma+k+\frac{1}{2},\alpha+\gamma+2k,\alpha-\gamma\\\\[2.84544pt]
\alpha+\gamma+k,2\gamma+2k+1\end{array};\rho\right)\rho^{k}C_{k}^{\gamma}(x).$
(55)
Proof. Using (11), mapping $\alpha\mapsto\alpha-1/2$ and $\beta$,
$\gamma\mapsto\gamma-1/2$, and using (51) completes the proof.
$\hfill\blacksquare$
###### Corollary .
Let $\alpha\in{\mathbf{C}}$, $\rho\in\\{z\in{\mathbf{C}}:|z|<1\\}$,
$x\in[-1,1]$. Then
$\frac{(1+{\rm R}+\rho)^{1/2}}{{\rm R}(1+{\rm
R}-\rho)^{\alpha-1/2}}=2^{1-\alpha}\sum_{k=0}^{\infty}\epsilon_{k}\frac{\left(\frac{\alpha}{2}\right)_{k}\,\left(\frac{\alpha+1}{2}\right)_{k}}{(\alpha)_{k}\,k!}\,{{}_{3}}F_{2}\left(\begin{array}[]{c}k+\frac{1}{2},\alpha+2k,\alpha\\\\[2.84544pt]
2k+1,\alpha+k\end{array};\rho\right)\rho^{k}\,T_{k}(x).$ (56)
Proof. Taking the limit as $\gamma\to 0$ of (55) and using (52) completes the
proof. $\hfill\blacksquare$
## 4 Expansions in Laguerre polynomials
The Laguerre polynomials $L_{n}^{\alpha}:{\mathbf{C}}\to{\mathbf{C}}$ can be
defined in terms of Kummer’s confluent hypergeometric function of the first
kind as follows (Olver et al. (2010) [12, (18.5.12)])
$L_{n}^{\alpha}(z):=\frac{(\alpha+1)_{n}}{n!}M(-n,\alpha+1,z),$
for $n\in{\mathbf{N}}_{0}$, and $\alpha>-1$. The Laguerre function
$L_{\nu}^{\alpha}:{\mathbf{C}}\to{\mathbf{C}},$ which generalizes the Laguerre
polynomials, is defined as follows (Erdélyi et al. (1981) [7, (6.9.2.37), this
equation is stated incorrectly therein]) for $\nu,\alpha\in{\mathbf{C}}$,
$L_{\nu}^{\alpha}(z):=\frac{\Gamma(1+\nu+\alpha)}{\Gamma(\nu+1)\Gamma(\alpha+1)}M(-\nu,\alpha+1,z).$
(57)
The orthogonality relation for Laguerre polynomials can be found in Olver et
al. (2010) [12, (18.2.1), (18.2.5), Table 18.3.1]
$\int_{0}^{\infty}x^{\alpha}e^{-x}L_{n}^{\alpha}(x)L_{m}^{\alpha}(x)dx=\frac{\Gamma(n+\alpha+1)}{n!}\delta_{n,m}.$
(58)
The connection relation for Laguerre polynomials, given by Olver et al. (2010)
[12, (18.18.18)] (see also Ruiz & Dehesa (2001) [13]), is
$L_{n}^{\alpha}(x)=\sum_{k=0}^{n}\frac{(\alpha-\beta)_{n-k}}{(n-k)!}L_{k}^{\beta}(x).$
(59)
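The connection relation (59) may be spot-checked numerically, as in the following sketch (assuming mpmath; the parameter values are arbitrary):

```python
from mpmath import mp, rf, factorial, laguerre

mp.dps = 25

alpha, beta, x = mp.mpf('1.3'), mp.mpf('0.4'), mp.mpf('2.0')
for n in range(6):
    rhs = sum(rf(alpha - beta, n - k) / factorial(n - k) * laguerre(k, beta, x)
              for k in range(n + 1))
    print(n, laguerre(n, alpha, x) - rhs)   # each difference ~ 0
```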
###### Theorem .
Let $\alpha,\beta\in{\mathbf{R}},$ $x>0$, $\rho\in{\mathbf{C}}.$ Then
$x^{-\alpha/2}J_{\alpha}\left(2\sqrt{x\rho}\right)=\rho^{\alpha/2}e^{-\rho}\sum_{k=0}^{\infty}\frac{\Gamma(\beta-\alpha+1)}{\Gamma(\beta+1+k)}L_{\beta-\alpha}^{\alpha+k}(\rho)\rho^{k}L_{k}^{\beta}(x).$
(60)
Proof. Olver et al. (2010) [12, (18.12.14)] give a generating function for
Laguerre polynomials, namely
$x^{-\alpha/2}J_{\alpha}(2\sqrt{x\rho})=\rho^{\alpha/2}e^{-\rho}\sum_{n=0}^{\infty}\frac{\rho^{n}}{\Gamma(\alpha+1+n)}L_{n}^{\alpha}(x),$
where $J_{\alpha}$ is the Bessel function of the first kind (4). Using the
Laguerre connection relation (59) to replace the Laguerre polynomial in the
generating function produces a double sum. In order to justify reversing the
resulting order of summation, we demonstrate that
$\sum_{n=0}^{\infty}|c_{n}|\sum_{k=0}^{n}|a_{nk}|\left|L_{k}^{\beta}(x)\right|<\infty,$
(61)
where
$c_{n}=\frac{\rho^{n}}{\Gamma(\alpha+1+n)}$
and
$a_{nk}=\frac{(\alpha-\beta)_{n-k}}{(n-k)!}.$ (62)
We assume that $\alpha,\beta\in{\mathbf{R}}$, $\rho\in{\mathbf{C}}$ and $x>0$.
It is known [15, Theorem 8.22.1] that
$\bigl{|}L_{n}^{\alpha}(x)\bigr{|}\leq K_{1}(1+n)^{\sigma_{1}},$ (63)
where $K_{1}$ and $\sigma_{1}=\frac{\alpha}{2}-\frac{1}{4}$ are constants
independent of $n$ (but depending on $x$ and $\alpha$). We also have
$|a_{nk}|\leq(1+n-k)^{\sigma_{2}}\leq(1+n)^{\sigma_{2}},$ (64)
where $\sigma_{2}=|\alpha-\beta|$. Therefore,
$\sum_{n=0}^{\infty}|c_{n}|\sum_{k=0}^{n}|a_{nk}|\left|L_{k}^{\beta}(x)\right|\leq
K_{1}\sum_{n=0}^{\infty}\frac{|\rho|^{n}}{\Gamma(\alpha+1+n)}(1+n)^{\sigma_{1}+\sigma_{2}+1}<\infty.$
Reversing the order of summation and shifting the $n$-index by $k$ yields
$x^{-\alpha/2}J_{\alpha}(2\sqrt{x\rho})=\rho^{\alpha/2}e^{-\rho}\sum_{k=0}^{\infty}\sum_{n=0}^{\infty}\frac{(\alpha-\beta)_{n}\,\rho^{n+k}}{\Gamma(\alpha+1+n+k)\,n!}L_{k}^{\beta}(x).$
Using (6) produces Kummer’s confluent hypergeometric function of the first
kind as the coefficient of the expansion. Using the definition of Laguerre
functions (57) to replace the confluent hypergeometric function completes the
proof. $\hfill\blacksquare$
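The expansion (60) can be checked numerically; the sketch below (assuming mpmath) implements the Laguerre function (57) through a helper `laguerre_func` (our name) and compares the two sides for arbitrarily chosen parameters:

```python
from mpmath import mp, gamma, besselj, sqrt, exp, laguerre, hyp1f1

mp.dps = 30

alpha, beta = mp.mpf('0.5'), mp.mpf('1.2')
rho, x = mp.mpf('0.7'), mp.mpf('1.5')

def laguerre_func(nu, a, z):
    """Laguerre function (57), valid for non-integer degree nu."""
    return gamma(1 + nu + a) / (gamma(nu + 1) * gamma(a + 1)) * hyp1f1(-nu, a + 1, z)

lhs = x**(-alpha/2) * besselj(alpha, 2*sqrt(x*rho))

rhs = rho**(alpha/2) * exp(-rho) * sum(
    gamma(beta - alpha + 1) / gamma(beta + 1 + k)
    * laguerre_func(beta - alpha, alpha + k, rho) * rho**k * laguerre(k, beta, x)
    for k in range(40))

print(lhs - rhs)   # ~ 0
```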
Consider the generating function (Srivastava & Manocha (1984) [14, p. 209])
$e^{-x\rho}=\frac{1}{(1+\rho)^{\alpha}}\sum_{n=0}^{\infty}\rho^{n}L_{n}^{\alpha-n}(x),$
(65)
for $\alpha\in{\mathbf{C}},$ $\rho\in\left\\{z\in{\mathbf{C}}:|z|<1\right\\},$
$x>0$. Using the connection relation for Laguerre polynomials (59) in the
generating function (65), yields a double sum. Reversing the order of the
summation and shifting the $n$-index by $k$ produces
$e^{-x\rho}=\frac{1}{(1+\rho)^{\alpha}}\sum_{k=0}^{\infty}\sum_{n=0}^{\infty}\frac{(\alpha-
n-k-\beta)_{n}}{n!}\rho^{n+k}L_{k}^{\beta}(x).$
Using (5), (6), and substituting $z=\rho/(1+\rho)$ yields the known generating
function for Laguerre polynomials Olver et al. (2010) [12, (18.12.13)], namely
$\exp\left(\frac{x\rho}{\rho-1}\right)=(1-\rho)^{\beta+1}\sum_{n=0}^{\infty}\rho^{n}L_{n}^{\beta}(x),$
(66)
for $\beta\in{\mathbf{C}}$. Note that using the connection relation for
Laguerre polynomials (59) on (66) leaves this generating function invariant.
###### Theorem .
Let $\lambda\in{\mathbf{C}}$, $\alpha\in{\mathbf{C}}\setminus-{\mathbf{N}},$
$\beta>-1,$ $\rho\in\left\\{z\in{\mathbf{C}}:|z|<1\right\\},$ $x>0$. Then
$M\left(\lambda,\alpha+1,\frac{x\rho}{\rho-1}\right)=(1-\rho)^{\lambda}\sum_{k=0}^{\infty}\frac{(\lambda)_{k}}{(\alpha+1)_{k}}~{}{{}_{2}}F_{1}\left(\begin{array}[]{c}\lambda+k,\alpha-\beta\\\\[5.69046pt]
\alpha+1+k\end{array};\rho\right)\rho^{k}L_{k}^{\beta}(x).$
Proof. On p. 132 of Srivastava & Manocha (1984) [14] there is a generating
function for Laguerre polynomials, namely
$M\left(\lambda,\alpha+1,\frac{x\rho}{\rho-1}\right)=(1-\rho)^{\lambda}\sum_{n=0}^{\infty}\frac{(\lambda)_{n}\,\rho^{n}}{(\alpha+1)_{n}}L_{n}^{\alpha}(x).$
Using the connection relation for Laguerre polynomials (59) we obtain a double
summation. In order to justify reversing the resulting order of summation, we
demonstrate (61), where
$c_{n}=\frac{(\lambda)_{n}\rho^{n}}{(\alpha+1)_{n}}$
and $a_{nk}$ is given in (62). We assume that
$\alpha\in{\mathbf{C}}\setminus-{\mathbf{N}}$, $\beta>-1,$ $|\rho|<1$ and
$x>0$. Given (63) and (64), we have
$\sum_{n=0}^{\infty}|c_{n}|\sum_{k=0}^{n}|a_{nk}|\left|L_{k}^{\beta}(x)\right|\leq
K_{1}\sum_{n=0}^{\infty}\frac{|(\lambda)_{n}||\rho|^{n}}{|(\alpha+1)_{n}|}(1+n)^{\sigma_{1}+\sigma_{2}+1}\leq
K_{3}\sum_{n=0}^{\infty}|\rho|^{n}(1+n)^{\sigma_{1}+\sigma_{2}+\lambda-\alpha}<\infty,$
for some $K_{3}\in{\mathbf{R}}$. Reversing the order of the summation and
shifting the $n$-index by $k$ produces
$M\left(\lambda,\alpha+1,\frac{x\rho}{\rho-1}\right)=(1-\rho)^{\lambda}\sum_{k=0}^{\infty}\sum_{n=0}^{\infty}\frac{(\lambda)_{n+k}\,(\alpha-\beta)_{n}}{(\alpha+1)_{n+k}\,n!}\rho^{n+k}L_{k}^{\beta}(x).$
Then, using (6) with simplification completes the proof. $\hfill\blacksquare$
## 5 Expansions in Wilson polynomials
The Wilson polynomials $W_{n}\left(x^{2};a,b,c,d\right),$ originally
introduced in Wilson (1980) [16], can be defined in terms of a terminating
generalized hypergeometric series as follows (Olver et al. (2010) [12,
(18.26.1)])
$W_{n}(x^{2};a,b,c,d):=(a+b)_{n}(a+c)_{n}(a+d)_{n}\,{}_{4}F_{3}\left(\begin{array}[]{c}-n,n+a+b+c+d-1,a+ix,a-ix\\\\[5.69046pt]
a+b,a+c,a+d\end{array};1\right).$
These polynomials are perhaps the most general hypergeometric orthogonal
polynomials in existence, sitting at the very top of the Askey scheme, which
classifies these orthogonal polynomials (see for instance [12, Figure
18.21.1]). The orthogonality relation for Wilson polynomials can be found in
Koekoek et al. (2010) [10, Section 9.1], namely
$\displaystyle\int_{0}^{\infty}\left|\frac{\Gamma(a+ix)\Gamma(b+ix)\Gamma(c+ix)\Gamma(d+ix)}{\Gamma(2ix)}\right|^{2}W_{m}\left(x^{2};a,b,c,d\right)W_{n}\left(x^{2};a,b,c,d\right)dx$
$\displaystyle\hskip 19.91684pt=\frac{2\pi
n!\,\Gamma(n+a+b)\Gamma(n+a+c)\Gamma(n+a+d)\Gamma(n+b+c)\Gamma(n+b+d)\Gamma(n+c+d)}{(2n+a+b+c+d-1)\Gamma(n+a+b+c+d-1)}\delta_{m,n}$
where $\Re\,a,\Re\,b,\Re\,c,\Re\,d>0$, with non-real parameters occurring in
conjugate pairs. A connection relation with one free parameter for the Wilson
polynomials is given by [13, equation just below (15)], namely
$\displaystyle
W_{n}\left(x^{2};a,b,c,d\right)=\sum_{k=0}^{n}\frac{n!}{k!(n-k)!}\,W_{k}\left(x^{2};a,b,c,h\right)$
$\displaystyle\hskip
8.5359pt\times\frac{(n+a+b+c+d-1)_{k}\,(d-h)_{n-k}\,(k+a+b)_{n-k}\,(k+a+c)_{n-k}\,(k+b+c)_{n-k}}{(k+a+b+c+h-1)_{k}\,(2k+a+b+c+h)_{n-k}}.$
(67)
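The connection relation (67) may be spot-checked numerically; in the following sketch (assuming mpmath, with the helper names `wilson` and `conn`, and parameter values chosen by us), the Wilson polynomials are evaluated through their terminating ${}_{4}F_{3}$ representation:

```python
from mpmath import mp, mpf, rf, factorial, hyper

mp.dps = 30

def wilson(n, x, a, b, c, d):
    """Wilson polynomial W_n(x^2; a, b, c, d) via its terminating 4F3 series."""
    return rf(a + b, n) * rf(a + c, n) * rf(a + d, n) * hyper(
        [-n, n + a + b + c + d - 1, a + 1j*x, a - 1j*x],
        [a + b, a + c, a + d], 1)

def conn(n, k, a, b, c, d, h):
    """Connection coefficient of W_k(x^2; a, b, c, h) in (67)."""
    return (factorial(n) / (factorial(k) * factorial(n - k))
            * rf(n + a + b + c + d - 1, k) * rf(d - h, n - k)
            * rf(k + a + b, n - k) * rf(k + a + c, n - k) * rf(k + b + c, n - k)
            / (rf(k + a + b + c + h - 1, k) * rf(2*k + a + b + c + h, n - k)))

a, b, c, d, h = mpf('0.9'), mpf('1.1'), mpf('0.6'), mpf('1.4'), mpf('0.8')
x = mpf('0.5')
for n in range(4):
    rhs = sum(conn(n, k, a, b, c, d, h) * wilson(k, x, a, b, c, h) for k in range(n + 1))
    print(n, wilson(n, x, a, b, c, d) - rhs)   # each difference ~ 0 (tiny imaginary parts are rounding)
```

The same loop, with the coefficients of the theorem below in place of `conn`, can be used to check that result numerically as well.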
In this section, we give a generalization of a generating function for Wilson
polynomials. This example is intended to be illustrative. In Koekoek, Lesky &
Swarttouw (2010) [10], for instance, there are four separate generating
functions given for the Wilson polynomials. The technique applied in the proof
of the theorem presented in this section can easily be applied to the rest of
the generating functions for Wilson polynomials in Koekoek, Lesky & Swarttouw
(2010) [10]. Generalizations of these generating functions (and their
corresponding definite integrals) can be extended by a well-established
limiting procedure (see [10, Chapter 9]) to the continuous dual Hahn,
continuous Hahn, Meixner–Pollaczek, pseudo Jacobi, Jacobi, Laguerre and
Hermite polynomials.
###### Theorem .
Let $\rho\in\left\\{z\in{\mathbf{C}}:|z|<1\right\\},$ $x\in(0,\infty)$,
$\Re\,a,\Re\,b,\Re\,c,\Re\,d,\Re\,h>0$ and non-real parameters $a,b,c,d,h$
occurring in conjugate pairs. Then
$\displaystyle{{}_{2}}F_{1}\left(\begin{array}[]{c}a+ix,\,b+ix\\\\[2.84544pt]
a+b\end{array};\rho\right){{}_{2}}F_{1}\left(\begin{array}[]{c}c-ix,\,d-ix\\\\[2.84544pt]
c+d\end{array};\rho\right)$ (72)
$\displaystyle=\sum_{k=0}^{\infty}\frac{(k+a+b+c+d-1)_{k}}{(k+a+b+c+h-1)_{k}\,(a+b)_{k}\,(c+d)_{k}\,k!}$
$\displaystyle\times\,{{}_{4}}F_{3}\left(\begin{array}[]{c}d-h,\,2k+a+b+c+d-1,\,k+a+c,\,k+b+c\\\\[2.84544pt]
k+a+b+c+d-1,\,2k+a+b+c+h,\,k+c+d\end{array};\rho\right)\\!\rho^{k}\,W_{k}\left(x^{2};a,b,c,h\right).$
(75)
Proof. Koekoek et al. (2010) [10, (1.1.12)] give a generating function for
Wilson polynomials, namely
$\displaystyle{{}_{2}}F_{1}\left(\begin{array}[]{c}a+ix,\,b+ix\\\\[2.84544pt]
a+b\end{array};\rho\right){{}_{2}}F_{1}\left(\begin{array}[]{c}c-ix,\,d-ix\\\\[2.84544pt]
c+d\end{array};\rho\right)=\sum_{n=0}^{\infty}\frac{\rho^{n}\,W_{n}\left(x^{2};a,b,c,d\right)}{(a+b)_{n}\,(c+d)_{n}\,n!}.$
(80)
Using the connection relation for Wilson polynomials (67) in the above
generating function produces a double sum. In order to justify reversing the
summation symbols we show that
$\sum_{n=0}^{\infty}|c_{n}|\sum_{k=0}^{n}|a_{nk}|\left|W_{k}(x^{2};a,b,c,h)\right|<\infty,$
where
$c_{n}=\frac{\rho^{n}}{(a+b)_{n}(c+d)_{n}n!},$
and $a_{nk}$ are the connection coefficients satisfying
$W_{n}(x^{2};a,b,c,d)=\sum_{k=0}^{n}a_{nk}W_{k}(x^{2};a,b,c,h).$
We assume that $a,b,c,d$ and $a,b,c,h$ are positive except for complex
conjugate pairs with positive real parts, and $x>0$. It follows from [17,
bottom of page 59] that
$\left|W_{n}(x^{2};a,b,c,d)\right|\leq K_{1}(n!)^{3}(1+n)^{\sigma_{1}},$ (81)
where $K_{1}$ and $\sigma_{1}$ are positive constants independent of $n$.
###### Lemma 5.1.
Let $j\in{\mathbf{N}}$, $k,n\in{\mathbf{N}}_{0}$, $z\in{\mathbf{C}}$, $\Re
u>0$, $w>-1$, $v\geq 0$, $x>0$. Then
$\displaystyle|(u)_{j}|\geq(\Re u)(j-1)!,$ (82)
$\displaystyle\frac{(v)_{n}}{n!}\leq(1+n)^{v},$ (83)
$\displaystyle(n+w)_{k}\leq\max\\{1,2^{w}\\}\frac{(n+k)!}{n!},\qquad(k\leq
n),$ (84) $\displaystyle|(k+z)_{n-k}|\leq(1+n)^{|z|}\frac{n!}{k!},\qquad(k\leq
n),$ (85)
$\displaystyle(k+x-1)_{k}\geq\min\left\\{\frac{x}{2},\frac{1}{6}\right\\}\frac{(2k)!}{k!},$
(86)
$\displaystyle(2k+x)_{n-k}\geq\min\\{x,1\\}\frac{1}{1+n}\frac{(n+k)!}{(2k)!},\qquad(k\leq
n).$ (87)
###### Proof 5.2.
Let us consider
$|(u)_{j}|=|u||u+1|\dots|u+j-1|\geq\Re u(\Re u+1)\dots(\Re u+j-1)\geq(\Re
u)(j-1)!.$
This completes the proof of (82). Choose $m\in{\mathbf{N}}_{0}$ such that
$m\leq v\leq m+1$. Then
$\displaystyle\frac{(v)_{n}}{n!}$ $\displaystyle\leq$
$\displaystyle\frac{(m+1)_{n}}{n!}=\frac{(n+1)(n+2)\dots(n+m)}{m!}=\left(1+n\right)\left(1+\frac{n}{2}\right)\dots\left(1+\frac{n}{m}\right)$
$\displaystyle\leq$ $\displaystyle(1+n)^{m}\leq(1+n)^{v}.$
This completes the proof of (83). If $-1<w\leq 1$ then
$n!(n+w)_{k}\leq n!(n+1)(n+2)\dots(n+k)=(n+k)!.$
If $m\leq w\leq m+1$ with $m\in{\mathbf{N}}$ then
$n!(n+w)_{k}\leq(n+k)!\frac{n+k+1}{n+1}\frac{n+k+2}{n+2}\dots\frac{n+m+k}{n+m}\leq
2^{m}(n+k)!\leq 2^{w}(n+k)!$
This completes the proof of (84). If $|z|\leq 1$ then
$|(k+z)_{n-k}|\leq(k+1)(k+2)\dots n=\frac{n!}{k!}.$
If $|z|>1$ then, using (83),
$k!|(k+z)_{n-k}|\leq|z|(|z|+1)\dots(|z|+n-1)=(|z|)_{n}\leq n!(1+n)^{|z|}.$
This completes the proof of (85). Let $k\geq 2$. Then
$(k+x-1)_{k}\geq(k-1)k\dots(2k-2)=\frac{k(k-1)}{2k(2k-1)}\frac{(2k)!}{k!}\geq\frac{1}{6}\frac{(2k)!}{k!}.$
The cases $k=0,1$ can be verified directly. This completes the proof of (86).
Let $k\geq 1$. Then
$(2k+x)_{n-k}\geq(2k)_{n-k}=\frac{(n+k)!}{(2k)!}\frac{2k}{n+k}\geq\frac{1}{1+n}\frac{(n+k)!}{(2k)!}.$
The case $k=0$ can be verified separately. This completes the proof of (87).
Using (82), we obtain, for $n\in{\mathbf{N}}$,
$|c_{n}|=\frac{|\rho|^{n}}{|(a+b)_{n}||(c+d)_{n}|n!}\leq\frac{|\rho^{n}|}{\Re(a+b)\Re(c+d)(n-1)!^{2}n!}=\frac{1}{\Re(a+b)\Re(c+d)}\frac{n^{2}|\rho|^{n}}{(n!)^{3}}.$
Therefore, we obtain, for all $n\in{\mathbf{N}}_{0}$,
$|c_{n}|\leq K_{2}(1+n)^{2}\frac{|\rho|^{n}}{(n!)^{3}},$ (88)
where
$K_{2}=\max\left\\{1,\frac{1}{\Re(a+b)\Re(c+d)}\right\\}.$
Using (83), we find
$\left|\frac{(d-h)_{n-k}}{(n-k)!}\right|\leq(1+n)^{|d-h|}.$ (89)
Using (84), we obtain
$|n!(n+a+b+c+d-1)_{k}|\leq K_{3}(n+k)!.$ (90)
From (85), we find
$|(k+a+b)_{n-k}|\leq(1+n)^{\sigma_{4}}\frac{n!}{k!},$ (91)
and similar estimates with $a+c$ and $b+c$ in place of $a+b$. Using (86), we
obtain
$|(k+a+b+c+h-1)_{k}|\geq K_{4}\frac{(2k)!}{k!},$ (92)
where $K_{4}>0$. Using (87), we obtain
$|(2k+a+b+c+h)_{n-k}|\geq\frac{K_{5}}{1+n}\frac{(n+k)!}{(2k)!},$ (93)
where $K_{5}>0$.
Combining (89), (90), (91), (92), (93), we find
$|a_{nk}|\leq K_{6}(1+n)^{\sigma_{6}}\left(\frac{n!}{k!}\right)^{3}.$ (94)
Now (81), (88), (94) give
$\displaystyle\sum_{n=0}^{\infty}|c_{n}|\sum_{k=0}^{n}|a_{nk}|\left|W_{k}(x^{2};a,b,c,h)\right|$
$\displaystyle\leq$ $\displaystyle
K_{1}K_{2}K_{6}\sum_{n=0}^{\infty}|\rho|^{n}(1+n)^{2}\sum_{k=0}^{n}(1+n)^{\sigma_{1}+\sigma_{6}}$
$\displaystyle=$ $\displaystyle
K_{1}K_{2}K_{6}\sum_{n=0}^{\infty}|\rho|^{n}(1+n)^{\sigma_{1}+\sigma_{6}+3}<\infty$
since $|\rho|<1$. Reversing the order of the summation and shifting the
$n$-index by $k$ produces the generalized expansion (75). $\hfill\blacksquare$
## Appendix A Definite integrals
As a consequence of the series expansions given above, one may generate
corresponding definite integrals (in a one-step procedure) as an application
of the orthogonality relation for these hypergeometric orthogonal polynomials.
Integrals of this sort are always of interest, since they are likely to
find applications in applied mathematics and theoretical physics.
###### Corollary 1.
Let $k\in{\mathbf{N}}_{0}$, $\alpha\in{\mathbf{C}}$, $\beta,\gamma>-1$ such
that if $\beta,\gamma\in(-1,0)$ then $\beta+\gamma+1\neq 0$,
$\rho\in\\{z\in{\mathbf{C}}:|z|<1\\}$. Then
$\displaystyle\int_{-1}^{1}\frac{(1-x)^{\gamma}(1+x)^{\beta}}{{\rm
R}\left(1+{\rm R}-\rho\right)^{\alpha}\left(1+{\rm
R}+\rho\right)^{\beta}}P_{k}^{(\gamma,\beta)}(x)dx$ $\displaystyle\hskip
88.2037pt=\frac{2^{1+\gamma-\alpha}\Gamma(\gamma+k+1)\Gamma(\beta+k+1)\left(\frac{\alpha+\beta+1}{2}\right)_{k}\,\left(\frac{\alpha+\beta+2}{2}\right)_{k}}{\Gamma(\gamma+\beta+2)(\alpha+\beta+1)_{k}\,\left(\frac{\gamma+\beta+2}{2}\right)_{k}\,\left(\frac{\gamma+\beta+3}{2}\right)_{k}\,k!}$
$\displaystyle\hskip
190.63338pt\times\,{{}_{3}}F_{2}\left(\begin{array}[]{c}\beta+k+1,\alpha+\beta+2k+1,\alpha-\gamma\\\\[2.84544pt]
\alpha+\beta+k+1,\gamma+\beta+2k+2\end{array};\rho\right)\rho^{k}.$ (97)
Proof. Multiplying both sides of (11) by
$P_{n}^{(\gamma,\beta)}(x)(1-x)^{\gamma}(1+x)^{\beta}$ and integrating from
$-1$ to $1$ using the orthogonality relation for Jacobi polynomials (7) with
simplification completes the proof. $\hfill\blacksquare$
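As an illustration, the definite integral (97) can be confirmed by direct numerical quadrature; the sketch below (assuming mpmath, with arbitrary parameter values, and using `gam` for $\gamma$ to avoid clashing with the gamma function) compares the quadrature with the closed form:

```python
from mpmath import mp, mpf, quad, sqrt, rf, gamma, factorial, hyper, jacobi

mp.dps = 25

alpha, beta, gam = mpf('0.4'), mpf('0.1'), mpf('1.2')   # gam stands for gamma
rho, k = mpf('0.3'), 2

def integrand(x):
    R = sqrt(1 + rho**2 - 2*rho*x)
    return (1 - x)**gam * (1 + x)**beta * jacobi(k, gam, beta, x) \
           / (R * (1 + R - rho)**alpha * (1 + R + rho)**beta)

lhs = quad(integrand, [-1, 1])

rhs = (2**(1 + gam - alpha) * gamma(gam + k + 1) * gamma(beta + k + 1)
       * rf((alpha + beta + 1)/2, k) * rf((alpha + beta + 2)/2, k)) \
      / (gamma(gam + beta + 2) * rf(alpha + beta + 1, k)
         * rf((gam + beta + 2)/2, k) * rf((gam + beta + 3)/2, k) * factorial(k)) \
      * hyper([beta + k + 1, alpha + beta + 2*k + 1, alpha - gam],
              [alpha + beta + k + 1, gam + beta + 2*k + 2], rho) * rho**k

print(lhs - rhs)   # ~ 0
```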
###### Corollary 2.
Let $k\in{\mathbf{N}}_{0}$, $\alpha\in{\mathbf{C}}$, $\beta,\gamma>-1$ such
that if $\beta,\gamma\in(-1,0)$ then $\beta+\gamma+1\neq 0$,
$\rho\in\\{z\in{\mathbf{C}}:|z|<1\\}$. Then
$\displaystyle\int_{-1}^{1}(1-x)^{\gamma-\alpha/2}(1+x)^{\beta/2}J_{\alpha}\left(\sqrt{2(1-x)\rho}\right)I_{\beta}\left(\sqrt{2(1+x)\rho}\right)P_{k}^{(\gamma,\beta)}(x)dx$
$\displaystyle\hskip
28.45274pt=\frac{2^{\gamma+\beta/2-\alpha/2+1}\,\Gamma(\gamma+k+1)\left(\frac{\alpha+\beta+1}{2}\right)_{k}\,\left(\frac{\alpha+\beta+2}{2}\right)_{k}}{\Gamma(\gamma+\beta+2)\,\Gamma(\alpha+k+1)\,(\alpha+\beta+1)_{k}\,\left(\frac{\gamma+\beta+2}{2}\right)_{k}\,\left(\frac{\gamma+\beta+3}{2}\right)_{k}\,k!}$
$\displaystyle\hskip
85.35826pt\times\,{{}_{2}}F_{3}\left(\begin{array}[]{c}2k+\alpha+\beta+1,\alpha-\gamma\\\\[2.84544pt]
\alpha+\beta+k+1,\gamma+\beta+2k+2,\alpha+1+k\end{array};\rho\right)\rho^{\alpha/2+\beta/2+k}.$
(100)
Proof. Same as the proof of Corollary 1, except apply to both sides of (17).
$\hfill\blacksquare$
###### Corollary 3.
Let $\beta\in{\mathbf{C}}$, $\alpha,\gamma>-1$ such that if
$\alpha,\gamma\in(-1,0)$ then $\alpha+\gamma+1\neq 0$,
$\rho\in\\{z\in{\mathbf{C}}:|z|<1\\}\setminus(-1,0]$. Then
$\displaystyle\int_{-1}^{1}\frac{(1+x)^{\beta/2}(1-x)^{\gamma}}{{\rm
R}^{\alpha+1}}P_{\alpha}^{-\beta}\left(\frac{1+\rho}{{\rm
R}}\right)P_{k}^{(\gamma,\beta)}(x)dx$ $\displaystyle\hskip
113.81102pt=\frac{2^{\gamma+\beta/2+1}\Gamma(\gamma+k+1)(\alpha+\beta+1)_{2k}}{(1-\rho)^{\alpha-\gamma}\rho^{(\gamma+1)/2}k!}P_{\gamma-\alpha}^{-\gamma-\beta-2k-1}\left(\frac{1+\rho}{1-\rho}\right).$
Proof. Same as the proof of Corollary 1, except apply to both sides of (19).
$\hfill\blacksquare$
###### Corollary 4.
Let $\beta\in{\mathbf{C}}$, $\alpha,\gamma>-1$ such that if
$\alpha,\gamma\in(-1,0)$ then $\alpha+\gamma+1\neq 0$, $\rho\in(0,1)$. Then
$\displaystyle\int_{-1}^{1}\frac{(1-x)^{\alpha/2}(1+x)^{\gamma}}{{\rm
R}^{\beta+1}}{\mathrm{P}}_{\beta}^{-\alpha}\left(\frac{1-\rho}{{\rm
R}}\right)P_{k}^{(\alpha,\gamma)}(x)dx$ $\displaystyle\hskip
113.81102pt=\frac{2^{\gamma+\alpha/2+1}\Gamma(\gamma+k+1)(\alpha+\beta+1)_{2k}}{(1+\rho)^{\beta-\gamma}\rho^{(\gamma+1)/2}k!}P_{\gamma-\beta}^{-\gamma-\alpha-2k-1}\left(\frac{1-\rho}{1+\rho}\right).$
Proof. Same as the proof of Corollary 1, except apply to both sides of (29).
$\hfill\blacksquare$
###### Corollary 5.
Let $n\in{\mathbf{N}}_{0},$ $\alpha,\mu\in{\mathbf{C}}$,
$\nu\in(-1/2,\infty)\setminus\\{0\\}$, $\rho\in(0,1)$. Then
$\displaystyle\int_{-1}^{1}\left(1-x^{2}\right)^{\nu-\mu/2-1/4}P_{\mu-\alpha-1/2}^{1/2-\mu}\left({\rm
R}+\rho\right){\mathrm{P}}_{\mu-\alpha-1/2}^{1/2-\mu}\left({\rm
R}-\rho\right)C_{n}^{\nu}(x)dx$ $\displaystyle\hskip
71.13188pt=\frac{\sqrt{\pi}2^{1/2-\mu}(2\nu)_{n}(\alpha)_{n}(2\mu-\alpha)_{n}(\mu)_{n}\Gamma(\frac{1}{2}+\nu)}{(2\mu)_{n}\Gamma(\frac{1}{2}+\mu+n)\Gamma(1+\nu+n)\Gamma(\frac{1}{2}+\mu)n!}\rho^{n+\mu-1/2}$
$\displaystyle\hskip
128.0374pt\times\,{{}_{6}}F_{5}\left(\begin{array}[]{c}\frac{\alpha+n}{2},\frac{\alpha+n+1}{2},\frac{2\mu-\alpha+n}{2},\frac{2\mu-\alpha+n+1}{2},\mu+n,\mu-\nu\\\\[5.69046pt]
\frac{2\mu+n}{2},\frac{2\mu+n+1}{2},\frac{\mu+n+\frac{1}{2}}{2},\frac{\mu+n+\frac{3}{2}}{2},1+\nu+n\end{array};\rho^{2}\right).$
(103)
Proof. Multiplying both sides of (40) by $C_{n}^{\nu}(x)(1-x^{2})^{\nu-1/2}$
and integrating from $-1$ to $1$ using the orthogonality relation for
Gegenbauer polynomials (37) with simplification completes the proof.
$\hfill\blacksquare$
###### Corollary 6.
Let $k\in{\mathbf{N}}_{0}$, $\alpha\in{\mathbf{C}}$,
$\gamma\in(-1/2,\infty)\setminus\\{0\\}$,
$\rho\in\\{z\in{\mathbf{C}}:|z|<1\\}$. Then
$\displaystyle\int_{-1}^{1}\frac{(1-x^{2})^{\gamma-1/2}}{{\rm R}\left(1+{\rm
R}-\rho\right)^{\alpha-1/2}\left(1+{\rm
R}+\rho\right)^{\gamma-1/2}}C_{k}^{\gamma}(x)dx$ $\displaystyle\hskip
8.5359pt=\frac{\sqrt{\pi}\,2^{1-\gamma-\alpha}\Gamma(\gamma+1/2)\left(\frac{\alpha+\gamma}{2}\right)_{k}\,\left(\frac{\alpha+\gamma+1}{2}\right)_{k}\,(2\gamma)_{k}}{\Gamma(\gamma+k+1)(\alpha+\gamma)_{k}\,k!}\,{{}_{3}}F_{2}\left(\begin{array}[]{c}\gamma+k+\frac{1}{2},\alpha+\gamma+2k,\alpha-\gamma\\\\[2.84544pt]
\alpha+\gamma+k,2\gamma+2k+1\end{array};\rho\right)\rho^{k}.$ (106)
Proof. Same as in the proof of Corollary 5, except apply to both sides of
(55). $\hfill\blacksquare$
###### Corollary 7.
Let $k\in{\mathbf{N}}_{0}$, $\alpha\in{\mathbf{C}}$,
$\rho\in\\{z\in{\mathbf{C}}:|z|<1\\}$. Then
$\displaystyle\int_{-1}^{1}\frac{(1+{\rm R}+\rho)^{1/2}}{{\rm R}(1+{\rm
R}-\rho)^{\alpha-1/2}(1-x^{2})^{1/2}}T_{k}(x)dx=\frac{\pi\left(\frac{\alpha}{2}\right)_{k}\,\left(\frac{\alpha+1}{2}\right)_{k}}{2^{\alpha-1}(\alpha)_{k}\,k!}\,{{}_{3}}F_{2}\left(\begin{array}[]{c}k+\frac{1}{2},\alpha+2k,\alpha\\\\[2.84544pt]
2k+1,\alpha+k\end{array};\rho\right)\rho^{k}.$ (109)
Proof. Multiplying both sides of (56) by $T_{k}(x)(1-x^{2})^{-1/2}$ and
integrating from $-1$ to $1$ using the orthogonality relation for Chebyshev
polynomials of the first kind, Olver et al. (2010) [12, (18.2.1), (18.2.5),
Table 18.3.1]
$\int_{-1}^{1}T_{m}(x)T_{n}(x)(1-x^{2})^{-1/2}dx=\frac{\pi}{\epsilon_{n}}\delta_{m,n},$
with simplification completes the proof. $\hfill\blacksquare$
###### Corollary 8.
Let $k\in{\mathbf{N}}_{0}$, $\alpha,\beta\in{\mathbf{R}}$,
$\rho\in{\mathbf{C}}\setminus\\{0\\}$. Then
$\int_{0}^{\infty}x^{\beta-\alpha/2}e^{-x}J_{\alpha}(2\sqrt{\rho
x})L_{k}^{\beta}(x)dx=\Gamma(\beta-\alpha+1)\frac{e^{-\rho}\rho^{k+\alpha/2}}{k!}L_{\beta-\alpha}^{\alpha+k}(\rho).$
Proof. Multiplying both sides of (60) by
$x^{\beta}e^{-x}L_{k^{\prime}}^{\beta}(x)$ for
$k^{\prime}\in{\mathbf{N}}_{0}$, integrating over $(0,\infty)$ and using the
orthogonality relation for Laguerre polynomials (58) completes the proof.
$\hfill\blacksquare$
Applying the process of Corollary 8 to both sides of (66) produces the
definite integral for Laguerre polynomials
$\int_{0}^{\infty}x^{\beta}\exp\left(\frac{x}{\rho-1}\right)L_{n}^{\beta}(x)dx=\frac{\Gamma(n+\beta+1)(1-\rho)^{\beta+1}}{n!}\rho^{n},$
(110)
which is a specific case of the definite integral given by Gradshteyn & Ryzhik
(2007) [8, (7.414.8)]. This is not surprising since (110) was found using the
generating function for Laguerre polynomials. $\hfill\blacksquare$
###### Corollary 9.
Let $k\in{\mathbf{N}}_{0}$, $\rho\in\left\\{z\in{\mathbf{C}}:|z|<1\right\\},$
$\Re\,a,\Re\,b,\Re\,c,\Re\,d,\Re\,h>0$ and non-real parameters occurring in
conjugate pairs. Then
$\displaystyle\int_{0}^{\infty}\,{{}_{2}}F_{1}\left(\begin{array}[]{c}a+ix,\,b+ix\\\\[2.84544pt]
a+b\end{array};\rho\right){{}_{2}}F_{1}\left(\begin{array}[]{c}c-ix,\,d-ix\\\\[2.84544pt]
c+d\end{array};\rho\right)W_{k}\left(x^{2};a,b,c,h\right)w(x)\,dx$
$\displaystyle\hskip
54.06006pt=\frac{2\pi\Gamma(a+b)\Gamma(k+a+c)\Gamma(k+a+h)\Gamma(k+b+c)\Gamma(k+b+h)\Gamma(k+c+h)}{(c+d)_{k}\Gamma(2k+a+b+c+h)\left\\{(k+a+b+c+d-1)_{k}\right\\}^{-1}}$
$\displaystyle\hskip
125.19194pt\times\,{}_{4}F_{3}\left(\begin{array}[]{c}d-h,2k+a+b+c+d-1,k+a+c,k+b+c\\\\[5.69046pt]
k+a+b+c+d-1,2k+a+b+c+h,k+c+d\end{array};\rho\right)\rho^{k}.$
where $w:(0,\infty)\to{\mathbf{R}}$ is defined by
$w(x):=\left|\frac{\Gamma(a+ix)\Gamma(b+ix)\Gamma(c+ix)\Gamma(h+ix)}{\Gamma(2ix)}\right|^{2}.$
Proof. Multiplying both sides of (75) by
$\left|\frac{\Gamma(a+ix)\Gamma(b+ix)\Gamma(c+ix)\Gamma(h+ix)}{\Gamma(2ix)}\right|^{2}W_{k^{\prime}}(x^{2};a,b,c,h),$
for $k^{\prime}\in{\mathbf{N}}_{0}$, integrating over $x\in(0,\infty)$ and
using the orthogonality relation for Wilson polynomials given in Section 5 completes the
proof. $\hfill\blacksquare$
### Acknowledgements
This work was conducted while H. S. Cohl was a National Research Council
Research Postdoctoral Associate in the Applied and Computational Mathematics
Division at the National Institute of Standards and Technology, Gaithersburg,
Maryland, U.S.A. C. MacKenzie would like to thank the Summer Undergraduate
Research Fellowship program at the National Institute of Standards and
Technology for financial support while this research was carried out.
## References
* [1] M. Abramowitz and I. A. Stegun. Handbook of mathematical functions with formulas, graphs, and mathematical tables, volume 55 of National Bureau of Standards Applied Mathematics Series. U.S. Government Printing Office, Washington, D.C., 1972.
* [2] G. E. Andrews, R. Askey, and R. Roy. Special functions, volume 71 of Encyclopedia of Mathematics and its Applications. Cambridge University Press, Cambridge, 1999.
* [3] R. Askey. Orthogonal polynomials and special functions. Society for Industrial and Applied Mathematics, Philadelphia, Pa., 1975.
* [4] H. S. Cohl. Fourier, Gegenbauer and Jacobi expansions for a power-law fundamental solution of the polyharmonic equation and polyspherical addition theorems. arXiv:1209.6047, 2012.
* [5] H. S. Cohl. On a generalization of the generating function for Gegenbauer polynomials. Integral Transforms and Special Functions, 2013. DOI:10.1080/10652469.2012.761613.
* [6] H. S. Cohl and C. MacKenzie. Generalizations and simplifications of generating functions for Jacobi, Gegenbauer, Chebyshev and Legendre polynomials with definite integrals. arXiv:1210.0039, 2013.
* [7] A. Erdélyi, W. Magnus, F. Oberhettinger, and F. G. Tricomi. Higher transcendental functions. Vol. I. Robert E. Krieger Publishing Co. Inc., Melbourne, Fla., 1981.
* [8] I. S. Gradshteyn and I. M. Ryzhik. Table of integrals, series, and products. Elsevier/Academic Press, Amsterdam, seventh edition, 2007.
* [9] M. E. H. Ismail. Classical and quantum orthogonal polynomials in one variable, volume 98 of Encyclopedia of Mathematics and its Applications. Cambridge University Press, Cambridge, 2005. With two chapters by Walter Van Assche, With a foreword by Richard A. Askey.
* [10] R. Koekoek, P. A. Lesky, and R. F. Swarttouw. Hypergeometric orthogonal polynomials and their $q$-analogues. Springer Monographs in Mathematics. Springer-Verlag, Berlin, 2010. With a foreword by Tom H. Koornwinder.
* [11] W. Magnus, F. Oberhettinger, and R. P. Soni. Formulas and theorems for the special functions of mathematical physics. Third enlarged edition. Die Grundlehren der mathematischen Wissenschaften, Band 52. Springer-Verlag New York, Inc., New York, 1966.
* [12] F. W. J. Olver, D. W. Lozier, R. F. Boisvert, and C. W. Clark, editors. NIST handbook of mathematical functions. Cambridge University Press, Cambridge, 2010.
* [13] J. Sánchez-Ruiz and J. S. Dehesa. Some connection and linearization problems for polynomials in and beyond the Askey scheme. In Proceedings of the Fifth International Symposium on Orthogonal Polynomials, Special Functions and their Applications (Patras, 1999), volume 133, pages 579–591, 2001.
* [14] H. M. Srivastava and H. L. Manocha. A treatise on generating functions. Ellis Horwood Series: Mathematics and its Applications. Ellis Horwood Ltd., Chichester, 1984.
* [15] G. Szegő. Orthogonal polynomials. American Mathematical Society Colloquium Publications, Vol. 23. Revised ed. American Mathematical Society, Providence, R.I., 1959.
* [16] J. A. Wilson. Some hypergeometric orthogonal polynomials. SIAM Journal on Mathematical Analysis, 11(4):690–701, 1980.
* [17] J. A. Wilson. Asymptotics for the ${}_{4}F_{3}$ polynomials. Journal of Approximation Theory, 66(1):58–71, 1991.
|
arxiv-papers
| 2013-02-11T13:23:42 |
2024-09-04T02:49:41.636199
|
{
"license": "Public Domain",
"authors": "Howard S. Cohl, Connor MacKenzie, Hans Volkmer",
"submitter": "Howard Cohl",
"url": "https://arxiv.org/abs/1302.2474"
}
|
1302.2517
|
,
# Quantifying phases in homogenous twisted birefringent medium
Dipti Banerjee* and Srutarshi Banerjee**
*[email protected], **[email protected]
*Department of Physics, Vidyasagar College for Women,
39, Sankar Ghosh Lane, Kolkata-700006, West Bengal, INDIA
**Department of Electrical Engineering, Indian Institute of Technology,
Kharagpur-721302, West Bengal, INDIA
(12.06.12)
###### Abstract
The internal birefringence of an optical medium develops the dynamical phase
through natural rotation of the incident polarized light. The uniform twist of
the medium induces an external birefringence in the system. This can be
visualized through the geometric phase by the solid angle in association with
the angular twist per unit thickness of the medium $k$. An equivalent physical
analysis in the $l=1$ orbital angular momentum sphere has also been pointed out.
Keywords: birefringence, geometric phase.
PACS code: $42.25.Bs$
The theory of twisted birefringent materials was developed long ago by de
Gennes [1] and Chandrasekhar [2] in connection with the optics of cholesteric
and twisted nematic liquid crystals. In birefringent media three kinds of twist
are studied: (i) the limit in which the twist pitch is very short with respect
to the optical wavelength [3]; (ii) the twist pitch comparable to the optical
wavelength, which is solved by the previous approach; (iii) the very long twist
pitch, known as the Geometric Optics Approximation or the Mauguin [4] limit.
Much later Jones [5, 6] attracted our attention by formulating the
transformation through optical elements such as linear and circular polarizers,
retarders, rotators, etc., arranged in proper sequence, by the method of
$(2\times 2)$ matrices. Azzam later studied, by the differential matrix
formalism [7], the anisotropic behavior of the optical medium with and without
depolarization.
The property of birefringence develops quantum phases in the optical material.
The phases that appear may be either dynamical, geometric, or a mixture of
both. There are four types of Geometric Phases (GP) that have been reported in
optics so far: i) The first identified GP is the Pancharatnam phase [8],
$\Omega/2$, where $\Omega$ is the solid angle enclosed by the path on the
Poincare sphere. Berry explained the quantal counterpart [9] of
Pancharatnam’s phase in the case of cyclic adiabatic evolution. He also studied
the phase two-form (GP) [10] in connection with the dielectric tensor of the
optical medium. ii) The second kind of phase was demonstrated experimentally by
Chiao and co-workers [11] when light with fixed polarization is slowly rotated
around a closed path with varying directions. The GP developed was the
spin-redirection or coiled-light phase. iii) The third one was developed by the
squeezed state of light through cyclic changes under Lorentz transformations
[12]. iv) The fourth GP was studied by van Enk [13], in the case of a cyclic
change of the transverse mode pattern of a Gaussian light beam without
affecting the direction of propagation or the polarization of light. Bhandari
[14] studied the details of the geometric phase in various combinations of
optical materials. Berry et al. [15] studied the phase two-form (GP) of a
twisted stack of birefringent plates.
The physical mechanism of these different kinds of geometric phases originates
from the spin or orbital angular momentum of the polarized photon. The first
observation of the angular momentum of light was made by Beth [16] through an
experiment in which a beam of right circularly polarized light was passed
through a birefringent medium (a quarter-wave plate) and transformed to left
circularly polarized light. Indeed, as pointed out by van Enk [13], the
Pancharatnam phase in mode space is associated with the transfer of spin
angular momentum between the light and the optical medium. The orbital angular
momentum GP of the polarized photon has been studied experimentally by Galvez
et al. [17] in mode space and theoretically by Padgett [18] on the Poincare
sphere. In recent years OAM beams have been generated by a kind of birefringent
plate known as a "q-plate", which has found very fruitful applications in the
classical and quantum regimes. In an interesting approach, the physics of OAM
beams produced by q-plates has recently been developed by Santamato et al.
[19].
All these recent findings indicate that our previous study of the GP of the
polarized photon (passing through a polarization matrix M and a rotator) in
connection with helicity was indeed a new representation [20]. Explicitly, with
the spinorial representation of the polarized photon by spherical harmonics, we
consider [21] that as the light acquires a fixed polarization, its helicity, in
connection with the spin angular momentum, is also fixed. It varies along with
the rotation of the plane of polarization of the light. We have expanded this
idea by finding the dielectric matrix and isolating the GP in terms of the
helicity of the polarized photon [22]. In this paper the properties of the
birefringent medium are visualized through the dynamical and geometric phases.
Here it has been assumed that the polarized photon passes through a uniformly
twisted medium having a very long twist pitch, nearly at the Mauguin limit. In
the next section the Jones matrix representation of a birefringent medium is
reviewed, while in Section 2 the dynamical and geometric phases for a slow
uniform angle of twist are evaluated.
## I The matrix representation of birefringent medium
An optical element can change the polarization of the incident polarized light. Jones described this effect by means of a $2\times 2$ matrix representation [5]:
$\vec{D}^{\prime}=M\vec{D}$ (1)
If the input polarization of the light is unaltered after passing through an optical medium, then the state can be identified with an eigenvector $\vec{D}_{i}$ of the optical component $M_{i}$, following the eigenvalue equation
$M_{i}\vec{D}_{i}=d_{i}\vec{D}_{i}$ (2)
where $d_{i}$ is the corresponding eigenvalue of a particular polarization
matrix $M_{i}=\pmatrix{m_{1}&m_{4}\cr m_{3}&m_{2}}.$
The optical properties, such as the birefringence and dichroism, of a homogeneous medium can be described by the differential matrix $N$. Jones [6] pointed out that $N$ governs the propagation of the polarized light vector $\varepsilon$ at a particular wavelength through an infinitesimal distance within the optical element,
$\frac{d\varepsilon}{dz}=\frac{dM}{dz}\varepsilon_{0}=\frac{dM}{dz}M^{-1}\varepsilon=N\varepsilon$
(3)
so that $N$ is the operator relating $dM/dz$ to the polarization matrix $M$:
$N=\frac{dM}{dz}M^{-1}=\pmatrix{n_{1}&n_{2}\cr n_{3}&n_{4}}$ (4)
If $N$ is independent of $z$, its relation to the polarization matrix $M$ is [6]
$M=M_{0}\exp(\int{Ndz})$ (5)
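As a minimal numerical sketch of eqs.(4)-(5) (ours, not from the paper; it assumes a $z$-independent $N$, here the pure rotator of eq.(6) with an arbitrarily chosen $\tau$), one can check that $N=(dM/dz)M^{-1}$ is recovered from $M(z)=M_{0}\exp(Nz)$:

```python
# Minimal sketch of eqs.(4)-(5) for a hypothetical z-independent N
# (pure circular birefringence; tau is an assumed illustrative value).
import numpy as np
from scipy.linalg import expm

tau = 0.3                                   # assumed rotation per unit thickness
N = tau * np.array([[0.0, -1.0],
                    [1.0,  0.0]])           # eq.(6): circular-birefringence generator

M = lambda z: expm(N * z)                   # eq.(5) with M_0 = identity

# check eq.(4): N = (dM/dz) M^{-1}, using a central finite difference
z, h = 1.7, 1e-6
dMdz = (M(z + h) - M(z - h)) / (2 * h)
N_recovered = dMdz @ np.linalg.inv(M(z))
print(np.allclose(N_recovered, N, atol=1e-6))   # True
```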
In the lamellar representation suggested by Jones [6], a thin slab of a given medium is equivalent to a pile of retardation plates and partial polarizers. Eight constants are required to specify the real and imaginary parts of the four elements of the ($2\times 2$) N matrix, each constant corresponding to one of the eight fundamental optical properties. These eight properties can be paired [7] into the following four:
i) Isotropic refraction and absorption
ii) Linear birefringence and linear dichroism along the xy coordinate axis.
iii) Linear birefringence and linear dichroism along the bisector of xy
coordinate axes.
iv) Circular birefringence and circular dichroism.
An optical medium possessing both circular birefringence and linear birefringence will be our point of interest [7]; the corresponding contributions have the following matrix forms:
$\displaystyle\theta_{cb}=\tau\pmatrix{0&-1\cr 1&0}$ (6)
$\displaystyle\theta_{lb}=\rho\pmatrix{0&-i\cr i&0}$ (7)
These $\theta_{cb}$ and $\theta_{lb}$ matrices form the required differential
matrix.
$\displaystyle
N=\theta_{cb}+\theta_{lb}=\pmatrix{0&-\tau-i\rho\cr\tau+i\rho&0}=\pmatrix{0&n_{2}\cr
n_{3}&0}$ (8)
where $\tau$ is the circular birefringence, which measures the rotation of plane-polarized light per unit thickness, and $\rho$ is the linear birefringence, which measures the difference between the two principal constants along the coordinate axes.
The evolution of the ray vector $\varepsilon={\varepsilon_{1}\choose\varepsilon_{2}}$ in eq.(3), passing through such a medium $N$, can be rewritten as
$\displaystyle\frac{d\varepsilon_{1}}{dz}=n_{1}\varepsilon_{1}+n_{2}\varepsilon_{2}$
(9)
$\displaystyle\frac{d\varepsilon_{2}}{dz}=n_{3}\varepsilon_{1}+n_{4}\varepsilon_{2}$
(10)
For the pure birefringent medium represented by eq.(8), the evolution of the ray vector reduces to
$\frac{d\varepsilon_{1}}{dz}=n_{2}\varepsilon_{2},\qquad\frac{d\varepsilon_{2}}{dz}=n_{3}\varepsilon_{1}$ (11)
which implies that the spatial variation of the field component in one direction produces an effect in the perpendicular direction. Thus an exchange of optical power between the two component states of the polarized light takes place, indicating a rotation of the ray vector after it enters the medium.
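The power exchange described by eq.(11) can be illustrated with a short sketch (ours; it assumes a purely circularly birefringent medium, $\rho=0$, and an arbitrarily chosen $\tau$):

```python
# Sketch of eq.(11) for a pure birefringent medium (assumed values, rho = 0):
# the two field components exchange power as the ray vector rotates.
import numpy as np
from scipy.linalg import expm

tau, rho = 0.5, 0.0                          # assumed circular/linear birefringence
n2, n3 = -(tau + 1j * rho), (tau + 1j * rho) # off-diagonal elements of eq.(8)
N = np.array([[0.0, n2],
              [n3, 0.0]])

eps0 = np.array([1.0, 0.0], dtype=complex)   # light initially polarized along x
for z in (0.0, np.pi / (2 * tau), np.pi / tau):
    eps = expm(N * z) @ eps0                 # solution of d(eps)/dz = N eps
    print(z, np.abs(eps) ** 2)               # power moves from eps1 to eps2 and back
```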
Geometrically this state $\varepsilon$ is a point P on the surface of the Poincare sphere and defines a position vector $\vec{p}$ in three-dimensional space. Huard pointed out [23] that the evolution of the vector $\vec{p}$ is equivalent to the cyclic change of the state vector during passage through an infinitesimal distance dz of the optical medium. The spatial change of the vector as it passes through the crystal becomes
$\frac{d\vec{p}}{dz}=\vec{\Omega}\times\vec{p}$ (12)
A natural twist through an elementary angle $d\alpha=\Omega dz$ is experienced by the instantaneous vector $\vec{p}$ over the thickness $dz$. The magnitude and direction of the rotation vector depend on the thickness and on the inherent properties of the optical medium.
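A minimal sketch of eq.(12) (ours, with an assumed constant $\vec{\Omega}$ and starting point, not values from the paper) confirms that $|\vec{p}|$ is preserved while $\vec{p}$ precesses by an angle $\Omega z$:

```python
# Sketch of eq.(12): precession of the Poincare vector p about a fixed Omega.
# Omega and p0 are arbitrary assumed values used only for illustration.
import numpy as np
from scipy.linalg import expm

Omega = np.array([0.0, 0.0, 0.8])            # assumed rotation vector (rad per unit z)
cross = np.array([[0, -Omega[2], Omega[1]],
                  [Omega[2], 0, -Omega[0]],
                  [-Omega[1], Omega[0], 0]]) # matrix form of Omega x (.)

p0 = np.array([1.0, 0.0, 0.0])               # initial point P on the sphere
z = 2.0
p = expm(cross * z) @ p0                     # exact solution of dp/dz = Omega x p

print(np.linalg.norm(p))                     # stays 1: p remains on the sphere
print(np.arccos(p @ p0))                     # swept angle = |Omega| * z = 1.6
```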
Jones pointed out [6] that when a homogeneous birefringent crystal is uniformly twisted about the direction of transmission, the $N$ matrices transform under rotation both when the angle of twist depends on the crystal thickness and when it does not. In the former case, Jones expressed the twisted matrix $N^{\prime}$ in terms of the untwisted matrix $N_{0}$ and the rotation matrix,
$N^{\prime}=S(\theta)N_{0}S(-\theta)$ (13)
For an angle of twist independent of the crystal thickness, Jones showed [6] that the twisted matrix $N^{\prime}$ becomes
$N^{\prime}=N_{0}-kS(\pi/2)$ (14)
where $S(\pi/2)=\pmatrix{0&-1\cr 1&0\cr}$ denotes the rotation matrix for normal incidence of light. The solution of eqs.(13) and (14) is $\varepsilon^{\prime}=\exp(N^{\prime}z)\varepsilon^{\prime}_{0}$, where $\varepsilon^{\prime}_{0}$ is the value of the vector $\varepsilon^{\prime}$ at $z=0$. From the definition of the angle of twist per unit distance we see that $k$ plays a role similar to that of $\Omega$ in eq.(12). The basic difference lies in their space of appearance: the former $(k)$ lives in the external space and the latter $(\Omega)$ in the internal space.
In the next section we will point out that the nature of the quantum phases depends on the kind of angle of twist. Due to the inherent birefringence represented by $\eta$, the incident polarized light experiences a natural twist and acquires a dynamical phase. An external twist of the optical medium develops a further, external birefringence of the medium, characterized by $k$, which is visualized through a geometric phase (GP). It may be noted that the GP differs according to whether the angle of twist is independent of or dependent on the crystal thickness. From the work of Santamato et al. [19] one realizes that in the former case the GP is visualized on the OAM sphere.
## II Quantum phases by twisting homogeneous birefringent medium
Polarized light traveling in the z direction can be written as a two-component spinor
$|\psi>={\psi_{+}\choose\psi_{-}}$ (15)
in terms of the electric displacement components $d_{x},d_{y}$, where $\psi_{\pm}=(d_{x}\pm id_{y})/\sqrt{2}$. Berry [9] pointed out that the polarization matrix $M$, satisfying $M|\psi>=1/2|\psi>$, can be determined from the eigenvector $|\psi>$ through the relation $(|\psi><\psi|-1/2)$.
From the spherical harmonics, the eigenvector
$|\psi>={{Y_{1/2}}^{-1/2,1/2}\choose{Y_{1/2}}^{-1/2,-1/2}}={\cos\frac{\theta}{2}\exp
i(\phi+\chi)/2\choose\sin\frac{\theta}{2}\exp-i(\phi-\chi)/2}$ (16)
can be considered here; factoring out the overall phase $\exp-i(\phi-\chi)/2$ from the above $|\psi>$ gives
$|\psi>={\cos\theta/2e^{i\phi}\choose\sin\theta/2}$ (17)
In view of Berry [9] the polarization matrix can be expressed as
$\begin{array}[]{lcl}M(r)&=&\frac{1}{2}{\psi_{+}\choose\psi_{-}}(\psi_{+}~{}~{}~{}\psi_{-})-\frac{1}{2}\\
&=&\frac{1}{2}\pmatrix{{\psi_{+}\psi_{+}}&{\psi_{+}\psi_{-}}\cr{\psi_{-}\psi_{+}}&{\psi_{-}\psi_{-}}}-\frac{1}{2}\pmatrix{1&0\cr 0&1}\\
&=&\frac{1}{2}\pmatrix{{\psi_{+}\psi_{+}-1}&{\psi_{+}\psi_{-}}\cr{\psi_{-}\psi_{+}}&{\psi_{-}\psi_{-}-1}}\end{array}$ (18)
Representing each term by spherical harmonics [21] as
${Y_{1}}^{1}\approx\psi_{-}\psi_{+}\approx{Y_{1/2}}^{-1/2,1/2}.{Y_{1/2}}^{-1/2,-1/2}$
${Y_{1}}^{-1}\approx\psi_{+}\psi_{-}\approx{Y_{1/2}}^{1/2,1/2}.{Y_{1/2}}^{1/2,-1/2}$
and
${Y_{1}}^{0}\approx(\psi_{+}\psi_{+}-1)\ \mathrm{or}\ (1-\psi_{-}\psi_{-})\approx{Y_{1/2}}^{1/2,1/2}.{Y_{1/2}}^{-1/2,-1/2}-{Y_{1/2}}^{-1/2,1/2}.{Y_{1/2}}^{1/2,-1/2}$
together with
${Y_{1/2}}^{1/2,1/2}.{Y_{1/2}}^{-1/2,-1/2}\approx 1,$
the polarization matrix in eq.(18) takes the form
$M(r)=\frac{1}{2}\pmatrix{\cos\theta&\sin\theta e^{-i\phi}\cr{\sin\theta e^{i\phi}}&{-\cos\theta}}$ (19)
Every element of the polarization matrix in eq.(19) can be realized as one of the product harmonics ${Y_{1}}^{1}$, ${Y_{1}}^{-1}$ and ${Y_{1}}^{0}$. This enables one to write the polarization matrix for orbital angular momentum $l=1$ [21] as
$M(r)=\frac{1}{2}\pmatrix{{Y_{1}}^{0}&{Y_{1}}^{1}\cr{Y_{1}}^{-1}&{Y_{1}}^{0}\cr}$ (20)
The polarization matrix for this case $(l=1)$, parameterized by $(\theta,\phi)$, lies on the conventional Poincare sphere, which is equivalent to the OAM sphere for $l=1$. The spin angular momentum (SAM) of the polarized photon is associated with the optical polarization. A further parameter $\chi$ for the helicity is included, extending the Poincare sphere to $(\theta,\phi,\chi)$, as pictured in fig.1. The SAM space can be realized by the parameters $(\theta,\chi)$. The helicity operator has two eigenvalues, $+1$ and $-1$, corresponding to the right-handed state (spin parallel to the motion) and the left-handed state (spin opposite to the motion) respectively. Hence the helicity parameter $\chi$ changes with the change of polarization of the light. For every OAM sphere there exist two SAM hemispheres. Since the eigenvalues of the helicity for polarized photons are $\pm 1$, the factor $1/2$ in the polarization matrix M has been omitted. For higher OAM states $l=2,3,...$, further study is needed to evaluate the polarization matrix for a particular orbital angular momentum from the respective product harmonics ${Y_{l}}^{m}$.
Figure 1: Spinorial representation of polarized photon
In studying the helicity of the photon, the words of Berry [10], "Photons have no magnetic moment and so cannot be turned with a magnetic field but have the property of helicity to use", are very supportive. With this view we proceed further.
The property of birefringence of the optical medium can be represented by the differential matrix $N$. At a particular position $z$ of the optical medium, the spatial variation of the polarization matrix $M$ becomes
$N=(\frac{dM}{d\theta})(\frac{d\theta}{dz})M^{-1}$ (21)
Considering $z=\cos\theta$, the thickness of the optical medium, the N matrix can be obtained from M in eq.(19), where $\theta$ is the angular variable of the light after refraction:
$N=\eta\pmatrix{0&-e^{i\phi}\cr e^{-i\phi}&0\cr}$ (22)
This N matrix has the complex eigenvalues $\pm i\eta$. Comparing with our previous work [21], it may be pointed out that $\eta$ depends on $\theta$ as $1/\sin\theta$, with the corresponding eigenvector
$\left(\begin{array}[]{c}\pm ie^{i\phi}\\\ 1\end{array}\right)$ (23)
Thus the N matrix will be different for different values of $\theta$. The birefringence cannot be measured at $\theta=0$, which makes $\eta$ infinite, whereas for $\theta=\pi/2$ and $\theta=\pi/6$ the values of $\eta$ are $1$ and $2$ respectively. Its dependence on $\sin\theta$ also indicates that $\eta$ takes only non-negative values. The nature of the optical medium can be identified by comparing the N matrix in eq.(22) with eq.(8). It is seen that the resulting N matrix is homogeneous, possessing both circular and linear birefringence, represented by $\eta\cos\phi$ and $(-\eta\sin\phi)$ respectively.
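A quick numerical check (ours; $\eta$ and $\phi$ are arbitrary assumed values) that the N matrix of eq.(22) has the eigenvalues $\pm i\eta$ and the eigenvector of eq.(23):

```python
# Check of eqs.(22)-(23): eigenvalues of N are +/- i*eta for assumed eta, phi.
import numpy as np

eta, phi = 1.3, 0.4                          # assumed values for illustration
N = eta * np.array([[0.0, -np.exp(1j * phi)],
                    [np.exp(-1j * phi), 0.0]])

vals, vecs = np.linalg.eig(N)
print(np.sort_complex(vals))                 # approximately -1.3j and +1.3j

# eigenvector of eq.(23) for the eigenvalue +i*eta (up to normalization)
v = np.array([1j * np.exp(1j * phi), 1.0])
print(np.allclose(N @ v, 1j * eta * v))      # True
```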
When the angle of twist depends on the crystal thickness through $\theta=kz$, following Jones [6] the twisted matrix $N^{\prime}$ becomes
${N}^{\prime}=\pmatrix{0&-\eta e^{i\phi}+k\cr\eta e^{-i\phi}-k&0\cr}$ (24)
The corresponding twisted ray is obtained as the initial polarized light suffers a rotation opposite in sense to that of the twisted matrix $N^{\prime}$:
$\varepsilon^{\prime}=\pmatrix{\cos\theta&\sin\theta\cr-\sin\theta&\cos\theta\cr}\left(\begin{array}[]{c}ie^{i\phi}\\\
1\end{array}\right)$ (25)
in other words
$\varepsilon^{\prime}=\left(\begin{array}[]{c}ie^{i\phi}\cos\theta+\sin\theta\\\
-ie^{i\phi}\sin\theta+\cos\theta\end{array}\right)$ (26)
If light having fixed polarization and helicity suffers a slow variation of its path in real space, the path can be mapped onto the surface of a unit sphere in wave-vector space. The slow twist pitch considered here is comparable with the Mauguin limit [4] in connection with the optics of cholesteric and twisted nematic liquid crystals. The effect of the twist is that the initial state $|A>$ of the polarized light acquires a geometric phase as it is reunited with the final state $|A^{\prime}>$,
$<A|A^{\prime}>=\pm\exp(i\gamma(C)/2)$ (27)
where $\gamma(C)$ is the solid angle swept out on the unit sphere.
This work is based on the consideration of polarized light passing normally through a medium having linear and circular birefringence. The incident polarized light suffers a natural twist due to the inherent property of the medium, and as a result the dynamical phase $\gamma$ develops in the optical medium $N$. A further external twist, with the consideration $d\theta=d\theta^{\prime}$, develops the phase $\gamma^{\prime}$, which has both a dynamical and a geometrical part:
$\displaystyle\gamma=\varepsilon^{*}\frac{d\varepsilon}{d\theta}\frac{d\theta}{dz}={\varepsilon^{*}}N\varepsilon$
(28)
$\displaystyle\gamma^{\prime}={\varepsilon^{\prime}}^{*}\frac{d\varepsilon^{\prime}}{d\theta}\frac{d\theta}{dz}={\varepsilon^{\prime}}^{*}N^{\prime}{\varepsilon^{\prime}}$
(29)
The dynamical phase $\gamma$ can be obtained using eq.(22) and (23) in (28)
$\displaystyle\gamma={\varepsilon^{*}}N\varepsilon$ $\displaystyle=$
$\displaystyle{\varepsilon_{1}}^{*}n_{2}\varepsilon_{2}+{\varepsilon_{2}}^{*}n_{3}\varepsilon_{1}$
(30) $\displaystyle=$ $\displaystyle(-ie^{-i\phi})(-\eta e^{i\phi})+(\eta
e^{-i\phi})(ie^{i\phi})$ $\displaystyle=$ $\displaystyle 2i\eta$
Comparing eq.(30) with (27), the developed dynamical phase $\gamma$ appears as an imaginary term in the exponent. It varies through $\eta=1/\sin\theta$ (between $1$ and $2$) for positive $\theta$ values. Hence Fig.2 shows the uniform variation of this dynamical phase $\gamma$ with the natural birefringence $\eta$ of the medium. If $\theta$ becomes negative, the corresponding $\eta$ takes a negative value as well.
Figure 2: Variation of dynamical phase $\gamma$
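The evaluation in eq.(30) can be reproduced directly (a short sketch of ours, with assumed values of $\eta$ and $\phi$; the result $2i\eta$ is independent of $\phi$):

```python
# Reproduce eq.(30): gamma = eps^* N eps = 2 i eta for the eigen ray of eq.(23).
import numpy as np

eta, phi = 1.3, 0.4                          # assumed values for illustration
N = eta * np.array([[0.0, -np.exp(1j * phi)],
                    [np.exp(-1j * phi), 0.0]])
eps = np.array([1j * np.exp(1j * phi), 1.0]) # eigenvector of eq.(23)

gamma = np.vdot(eps, N @ eps)                # eps1^* n2 eps2 + eps2^* n3 eps1
print(gamma, 2j * eta)                       # both approximately 2.6j
```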
Any ray passing through $N$ or $N^{\prime}$ will suffer a natural twist due to the internal dynamics of the birefringent medium. The polarized light passing through the twisted medium $N^{\prime}$ will acquire the phase $\gamma^{\prime}$, which has two parts, one from the dynamics and another from the parametric change of the medium. The phase $\gamma^{\prime}$ will thus contain both the dynamical and the geometric phase in the exponent. To extract the geometric phase due to the external twist one can take the difference $\gamma^{\prime}-\gamma$, which eliminates the dynamical phase.
Figure 3: Variation of Net Phase $\gamma^{\prime}$ with $\eta$ and k
To calculate the net quantum phase $\gamma^{\prime}$ that appears after a twist, the twisted matrix $N^{\prime}$ of eq.(24) is used, acting on the twisted light ray $\varepsilon^{\prime}$ of eq.(26).
Figure 4: Variation of $\gamma^{\prime}$ with k and $\cos(\phi)$.
$\displaystyle\gamma^{\prime}={{\varepsilon^{\prime}}_{1}}^{*}{n^{\prime}}_{2}{\varepsilon^{\prime}}_{2}+{{\varepsilon^{\prime}}_{2}}^{*}{n^{\prime}}_{3}{\varepsilon^{\prime}}_{1}$
(31) $\displaystyle=$ $\displaystyle(-ie^{-i\phi}\cos\theta+\sin\theta)(k-\eta e^{i\phi})(-ie^{i\phi}\sin\theta+\cos\theta)$ $\displaystyle+$ $\displaystyle(ie^{-i\phi}\sin\theta+\cos\theta)(\eta e^{-i\phi}-k)(ie^{i\phi}\cos\theta+\sin\theta)$ $\displaystyle=$ $\displaystyle i[2\eta{\sin}^{2}\theta\cos 2\phi-2k\cos\phi]$
The above phase $\gamma^{\prime}$ at $\theta=\pi/2$ becomes
$\gamma^{\prime}=2i[\eta\cos 2\phi-k\cos\phi]$ (32)
The respective variations of the net phase $\gamma^{\prime}$ with $\eta$ and $k$ are shown in fig.3 and fig.4.
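At $\theta=\pi/2$ the result (32) can also be checked directly from eqs.(24) and (26) (a sketch of ours with assumed $\eta$, $k$ and a grid of $\phi$ values; only the $\theta=\pi/2$ case is verified here):

```python
# Check of eq.(32): gamma' at theta = pi/2 from the twisted matrix (24)
# acting on the twisted ray (26). eta, k are assumed illustrative values.
import numpy as np

eta, k, theta = 1.3, 0.7, np.pi / 2          # assumed values; normal incidence
for phi in np.linspace(0.0, np.pi, 5):
    Np = np.array([[0.0, -eta * np.exp(1j * phi) + k],
                   [eta * np.exp(-1j * phi) - k, 0.0]])
    eps_p = np.array([1j * np.exp(1j * phi) * np.cos(theta) + np.sin(theta),
                      -1j * np.exp(1j * phi) * np.sin(theta) + np.cos(theta)])
    gamma_p = np.vdot(eps_p, Np @ eps_p)     # eq.(31) evaluated at theta = pi/2
    target = 2j * (eta * np.cos(2 * phi) - k * np.cos(phi))   # eq.(32)
    print(np.isclose(gamma_p, target))       # True for every phi
```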
Figure 5: Variation of $\Gamma$ with $\eta$ and k.
As the dynamical phase is independent of the twist angle $\theta$, the geometric phase in the case of $\theta=\pi/2$ can be recovered by
$\Gamma=\gamma^{\prime}-\gamma=i[2\eta(\sin^{2}\theta\cos
2\phi-1)-2k\cos\phi]$ (33)
Figure 6: Variation of $\Gamma$ with $\cos\phi$ and k.
in other words
$\Gamma=i\eta[(1-\cos 2\theta)\cos 2\phi-2]-2ik\cos\phi$ (34)
A similar behavior of the curves is seen for the net phase $\gamma^{\prime}$ and the geometric phase $\Gamma$. Fig.5 and Fig.6 show the variation of $\Gamma$ with $\eta$ and $k$, and with $\cos\phi$ and $k$, respectively.
Figure 7: Variation of $\Gamma_{1}$ with $\cos\phi$ and k.
It may be noted that the parameters $\eta$ and $k$ depend on the angle of incidence $\theta$ and the angle of external twist $\theta^{\prime}$ respectively. Here, for simplicity, the two angles are taken equal ($\theta=\theta^{\prime}$). The angle $\phi$ is associated with the natural (internal) twist of the light ray inside the medium. The three types of phases $\gamma,\gamma^{\prime}$ and $\Gamma$ can be studied graphically with respect to the variation of $k$ and $\eta$. At normal incidence, $\theta=\pi/2$, the GP becomes
$\Gamma=i[2\eta(\cos 2\phi-1)-2k\cos\phi]$ (35)
At a twist angle $\theta=0$, the GP has been identified as $\Gamma_{1}$,
$\Gamma_{1}=-2i[\eta+k\cos\phi]$ (36)
Figs.7 and 8 show the variation of the GP at $\theta=0$ with the respective parameters $\cos\phi$, $\eta$ and $k$.
Figure 8: Variation of $\Gamma_{1}$ with $\eta$ and k.
The circular birefringence of the medium, visualized by the parameter $\eta$, imparts a natural twist to the incident light and is responsible for the appearance of the dynamical phase. Although the external twist of the optical medium is associated with $k$, the developed geometric phase also depends on $\eta$. Graphical analysis shows that the presence of external birefringence introduces a spiral behavior of the geometric phase. The parameters $\phi$ and $\chi$ are respectively responsible for realizing the GP in the OAM and SAM spaces.
We now extend our study to circularly polarized light incident on the birefringent medium. Let us identify the left (LCP) and right (RCP) circular polarizations of the light by
$|L>={1\choose i},~{}~{}~{}~{}~{}|R>={1\choose-i}$ (37)
The $|L>$ and $|R>$ states give rise to the dynamical phases for the polarized photon,
$\displaystyle\gamma_{L}={<L|N|L>}=-2i\eta\cos\phi$ (38)
$\displaystyle\gamma_{R}={<R|N|R>}=2i\eta\cos\phi$ (39)
In the case of the twisted birefringent crystal, the incidence of left circularly polarized (LCP) light on $N^{\prime}$ develops the phase $\gamma^{\prime}$,
$\begin{array}[]{lcl}{\gamma^{\prime}}_{L}=<L|N^{\prime}|L>\\\
=(1-i)e^{-i\theta}\pmatrix{0&-\eta e^{i\phi}+k\cr\eta
e^{-i\phi}-k&0\cr}{1\choose i}e^{i\theta}\\\ =2ik-2i\eta\cos\phi\end{array}$
(40)
which consists of both the dynamical and the geometrical phase. The latter (GP) can be isolated by $\Gamma=\gamma^{\prime}-\gamma$:
$\Gamma=2ik-2i\eta\cos\phi+2i\eta\cos\phi=2ik$ (41)
Here it is seen that for circularly polarized light the dynamical phase and the geometric phase depend only on the internal birefringence $\eta$ and the external birefringence $k$ respectively.
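A short check (ours, with assumed $\eta$, $\phi$ and $k$) that eqs.(38)-(41) follow from the matrices (22) and (24):

```python
# Check of eqs.(38)-(41) for circularly polarized light; eta, phi, k assumed.
import numpy as np

eta, phi, k = 1.3, 0.4, 0.7
N  = eta * np.array([[0.0, -np.exp(1j * phi)],
                     [np.exp(-1j * phi), 0.0]])
Np = np.array([[0.0, -eta * np.exp(1j * phi) + k],
               [eta * np.exp(-1j * phi) - k, 0.0]])
L = np.array([1.0,  1j])                     # left circular polarization, eq.(37)
R = np.array([1.0, -1j])                     # right circular polarization

gamma_L  = np.vdot(L, N  @ L)                # eq.(38): -2 i eta cos(phi)
gamma_R  = np.vdot(R, N  @ R)                # eq.(39): +2 i eta cos(phi)
gamma_Lp = np.vdot(L, Np @ L)                # eq.(40): 2 i k - 2 i eta cos(phi)
print(np.isclose(gamma_L, -2j * eta * np.cos(phi)),
      np.isclose(gamma_R,  2j * eta * np.cos(phi)),
      np.isclose(gamma_Lp - gamma_L, 2j * k))   # eq.(41): Gamma = 2 i k
```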
Discussion:
In this communication two types of birefringence, internal and external, are studied. Due to the inherent birefringence represented by $\eta$, the incident polarized light experiences a natural twist and acquires a dynamical phase. The dynamical phase $\gamma$ in all cases varies linearly with the internal birefringence $\eta$ of the medium. An external twist of the optical medium develops a further, external birefringence of the medium, characterized by $k$, which is visualized through the geometric phase (GP). It may be noted that the GP differs according to whether the angle of twist is independent of or dependent on the crystal thickness. The GP depends on both the internal and the external birefringence when eigen-polarized light is passed through the twisted optical medium, whereas it depends entirely on the external birefringence $k$ of the optical medium for the passage of left or right circularly polarized light. It may further be noted that for $\phi=0$ the value of the geometric phase for the two types of polarized light becomes identical. In future we wish to study a twisted optical medium having the property of dichroism.
Acknowledgement: This work has been supported by the Abdus Salam International Centre for Theoretical Physics (ICTP), Trieste, Italy. Correspondence with Prof. Santamato, Napoli, Italy is gratefully acknowledged. The help from Mr. S. Bhar, colleague of DB at VCFW (Department of Physics), is also acknowledged.
## References
* (1) P. G. de Gennes, The Physics of Liquid Crystals (Clarendon, Oxford, 1974).
* (2) S. Chandrasekhar, Liquid Crystals, 2nd Edition (Cambridge University Press, Cambridge, 1992).
* (3) D. W. Berreman, "Optics in Stratified and Anisotropic Media: 4X4-Matrix Formulation", J. Opt. Soc. Am. 62 (1972) 502.
* (4) C. Mauguin, Phys. Z. 12 (1911) 1011.
* (5) R. C. Jones, "A New Calculus for the Treatment of Optical Systems. V. Properties of M-matrices", J. Opt. Soc. Am. 37 (1942) 486.
* (6) R. C. Jones, "A New Calculus for the Treatment of Optical Systems. VII. Properties of the N-matrices", J. Opt. Soc. Am. 38 (1948) 671-685.
* (7) R. M. A. Azzam, "Propagation of partially polarized light through anisotropic media with or without depolarization: A differential 4x4 matrix calculus", J. Opt. Soc. Am. 68 (1978) 1756-1767.
* (8) S. Pancharatnam, "Generalized theory of interference and its applications", Proc. Ind. Acad. Sci. A 44 (1956) 247-262.
* (9) M. V. Berry, "The adiabatic phase and Pancharatnam's phase for polarized light", J. Mod. Opt. 34 (1987) 1401-1407.
* (10) M. V. Berry, "Adiabatic phase shifts for neutrons and photons", in Fundamental Aspects of Quantum Theory, eds. V. Gorini and A. Frigerio, Plenum, NATO ASI Series, vol. 144 (1986) 267-278.
* (11) R. Y. Chiao and Y. S. Wu, "Manifestations of Berry's topological phase for photons", Phys. Rev. Lett. 56 (1986) 933-936.
* (12) R. Y. Chiao and T. F. Jordan, "Lorentz group Berry phases in squeezed states of light", Phys. Lett. A 132 (1988) 77-81.
* (13) S. J. van Enk, "Geometric phase, transformations of Gaussian light beams and angular momentum transfer", Optics Communications 102 (1993) 59-64.
* (14) R. Bhandari, "Polarization of light and topological phases", Phys. Rep. 281 (1997) 1.
* (15) M. V. Berry and S. Klein, "Geometric phases from stacks of crystal plates", J. Mod. Optics 43 (1996) 165.
* (16) R. A. Beth, "Mechanical detection and measurement of the angular momentum of light", Phys. Rev. 50 (1936) 115-125.
* (17) E. J. Galvez, "Geometric phase associated with mode transformations of optical beams bearing orbital angular momentum", Phys. Rev. Lett. 90 (2003) 203901.
* (18) M. J. Padgett and J. Courtial, "Poincare sphere equivalent for light beams containing orbital angular momentum", Opt. Lett. 24 (1999) 430-432.
* (19) E. Karimi, B. Piccirillo, E. Nagali, L. Marrucci and E. Santamato, Appl. Phys. Lett. 94 (2009) 231124.
* (20) D. Banerjee, "Polarization matrix and geometric phase", Phys. Rev. E 56 (1997) 1129.
* (21) D. Banerjee, "The spinorial representation of polarized light and Berry phase", Comm. in Theo. Physics 3183-198.
* (22) D. Banerjee, "Geometric phase from a dielectric matrix", J. Opt. Soc. Am. B 23 (2006) 817-822.
* (23) S. Huard, Polarization of Light (John Wiley and Sons, 1997).
|
arxiv-papers
| 2013-02-11T16:04:52 |
2024-09-04T02:49:41.645184
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Dipti Banerjee and Srutarshi Banerjee",
"submitter": "Dipti Banerjee",
"url": "https://arxiv.org/abs/1302.2517"
}
|
1302.2804
|
Flat Rotational Surface with Pointwise 1-type Gauss map in E4
Ferdag KAHRAMAN AKSOYAK 1, Yusuf YAYLI 2
E-mail: [email protected] (F. Kahraman Aksoyak); [email protected] (Y. Yayli)
1Erciyes University, Department of Mathematics, Kayseri, Turkey
2Ankara University, Department of Mathematics, Ankara, Turkey
###### Abstract
In this paper we study general rotational surfaces in the 4-dimensional Euclidean space $\mathbb{E}^{4}$ and give a characterization of flat general rotation surfaces with pointwise 1-type Gauss map. Also, we show that a non-planar flat general rotation surface with pointwise 1-type Gauss map is a Lie group if and only if it is a Clifford torus.
> Key words and Phrases: Rotation surface, Gauss map, Pointwise 1-type Gauss
> map , Euclidean space.
> 2010 Mathematics Subject Classification: 53B25 ; 53C40 .
## 1 Introduction
A submanifold $M$ of a Euclidean space $\mathbb{E}^{m}$ is said to be of
finite type if its position vector $x$ can be expressed as a finite sum of
eigenvectors of the Laplacian $\Delta$ of $M$, that is,
$x=x_{0}+x_{1}+...+x_{k}$, where $x_{0}$ is a constant map, $x_{1},...,x_{k}$
are non-constant maps such that $\Delta x_{i}=\lambda_{i}x_{i},$
$\lambda_{i}\in$ $\mathbb{R}$, $i=1,2,...,k.$ If
$\lambda_{1},\lambda_{2},$…,$\lambda_{k}$ are all different, then $M$ is said
to be of $k-$type. This definition was similarly extended to differentiable
maps, in particular, to Gauss maps of submanifolds [6].
If a submanifold $M$ of a Euclidean space or pseudo-Euclidean space has 1-type
Gauss map $G$, then $G$ satisfies $\Delta G=\lambda\left(G+C\right)$ for some
$\lambda\in\mathbb{R}$ and some constant vector $C.$ Chen and Piccinni made a
general study on compact submanifolds of Euclidean spaces with finite type
Gauss map and they proved that a compact hypersurface $M$ of
$\mathbb{E}^{n+1}$ has 1-type Gauss map if and only if $M$ is a hypersphere in
$\mathbb{E}^{n+1}$ [6].
However, the Laplacian of the Gauss map of some typical well-known surfaces, such as a helicoid, a catenoid and a right cone in Euclidean 3-space $\mathbb{E}^{3}$, takes a somewhat different form, namely,
$\Delta G=f\left(G+C\right)$ (1)
for some smooth function $f$ on $M$ and some constant vector $C.$ A
submanifold $M$ of a Euclidean space $\mathbb{E}^{m}$ is said to have
pointwise 1-type Gauss map if its Gauss map satisfies $\left(1\right)$ for
some smooth function $f$ on $M$ and some constant vector $C.$ A submanifold
with pointwise 1-type Gauss map is said to be of the first kind if the vector $C$ in $\left(1\right)$ is the zero vector. Otherwise, the pointwise 1-type Gauss map is said to be of the second kind.
Surfaces in Euclidean space and in pseudo-Euclidean space with pointwise
1-type Gauss map were recently studied in [7], [8], [10], [11], [12], [13],
[14]. Also Dursun and Turgay in [9] gave all general rotational surfaces in
$\mathbb{E}^{4}$ with proper pointwise 1-type Gauss map of the first kind and
classified minimal rotational surfaces with proper pointwise 1-type Gauss map
of the second kind. Arslan et al. in [2] investigated rotational embedded surfaces with pointwise 1-type Gauss map. Arslan et al. in [3] gave necessary and sufficient conditions for the Vranceanu rotation surface to have pointwise 1-type Gauss map. Yoon in [19] showed that the flat Vranceanu rotation surface with pointwise 1-type Gauss map is a Clifford torus.
In this paper, we study general rotational surfaces in the 4-dimensional Euclidean space $\mathbb{E}^{4}$ and give a characterization of flat general rotation surfaces with pointwise 1-type Gauss map. Also, we show that a non-planar flat general rotation surface with pointwise 1-type Gauss map is a Lie group if and only if it is a Clifford torus.
## 2 Preliminaries
Let $M$ be an oriented $n-$dimensional submanifold in $m-$dimensional
Euclidean space $\mathbb{E}^{m}.$ Let $e_{1},$…,$e_{n},e_{n+1},$…,$e_{m}$ be
an oriented local orthonormal frame in $\mathbb{E}^{m}$ such that
$e_{1},$…,$e_{n}$ are tangent to $M$ and $e_{n+1},$…,$e_{m}$ normal to $M.$ We
use the following convention on the ranges of indices: $1\leq i,j,k,$…$\leq
n$, $n+1\leq r,s,t,$…$\leq m$, $1\leq A,B,C,$…$\leq m.$
Let $\tilde{\nabla}$ be the Levi-Civita connection of $\mathbb{E}^{m}$ and
$\nabla$ the induced connection on $M$. Let $\omega_{A}$ be the dual-1 form of
$e_{A}$ defined by $\omega_{A}\left(e_{B}\right)=\delta_{AB}$. Also, the
connection forms $\omega_{AB}$ are defined by
$de_{A}=\sum\limits_{B}\omega_{AB}e_{B},\text{ \ \
}\omega_{AB}+\omega_{BA}=0.$
Then we have
$\tilde{\nabla}_{e_{k}}e_{i}=\sum\limits_{j=1}^{n}\omega_{ij}\left(e_{k}\right)e_{j}+\sum\limits_{r=n+1}^{m}h_{ik}^{r}e_{r}$
(2)
and
$\tilde{\nabla}_{e_{k}}e_{s}=-A_{s}(e_{k})+\sum\limits_{r=n+1}^{m}\omega_{sr}\left(e_{k}\right)e_{r},\text{ \ }D_{e_{k}}e_{s}=\sum\limits_{r=n+1}^{m}\omega_{sr}\left(e_{k}\right)e_{r},$
(3)
where $D$ is the normal connection, $h_{ik}^{r}$ the coefficients of the
second fundamental form $h$ and $A_{s}$ the Weingarten map in the direction
$e_{s}.$
For any real function $f$ on $M$ the Laplacian of $f$ is defined by
$\Delta f=-\sum\limits_{i}\left(\tilde{\nabla}_{e_{i}}\tilde{\nabla}_{e_{i}}f-\tilde{\nabla}_{\nabla_{e_{i}}e_{i}}f\right).$
(4)
We define a covariant differentiation $\tilde{\nabla}h$ of the second fundamental form $h$ on the direct sum of the tangent bundle and the normal bundle $TM\oplus T^{\perp}M$ of $M$ by
$\left(\tilde{\nabla}_{X}h\right)\left(Y,Z\right)=D_{X}h\left(Y,Z\right)-h\left(\nabla_{X}Y,Z\right)-h\left(Y,\nabla_{X}Z\right)$
for any vector fields $X,$ $Y$ and $Z$ tangent to $M.$ Then we have the
Codazzi equation
$\left(\tilde{\nabla}_{X}h\right)\left(Y,Z\right)=\left(\tilde{\nabla}_{Y}h\right)\left(X,Z\right)$
(5)
and the Gauss equation is given by
$\left\langle R(X,Y)Z,W\right\rangle=\left\langle
h\left(X,W\right),h\left(Y,Z\right)\right\rangle-\left\langle
h\left(X,Z\right),h\left(Y,W\right)\right\rangle,$ (6)
where the vectors $X,$ $Y,$ $Z$ and $W$ are tangent to $M$ and $R$ is the
curvature tensor associated with $\nabla$ and the curvature tensor $R$ is
defined by
$R(X,Y)Z=\nabla_{X}\nabla_{Y}Z-\nabla_{Y}\nabla_{X}Z-\nabla_{\left[X,Y\right]}Z.$
Let us now define the Gauss map $G$ of a submanifold $M$ into $G(n,m)$ in
$\wedge^{n}\mathbb{E}^{m},$ where $G(n,m)$ is the Grassmannian manifold
consisting of all oriented $n-$planes through the origin of $\mathbb{E}^{m}$
and $\wedge^{n}\mathbb{E}^{m}$ is the vector space obtained by the exterior
product of $n$ vectors in $\mathbb{E}^{m}.$ In a natural way, we can identify
$\wedge^{n}\mathbb{E}^{m}$ with some Euclidean space $\mathbb{E}^{N}$ where
$N=\left(\begin{array}[]{c}m\\\ n\end{array}\right).$ The map $G:M\rightarrow
G(n,m)\subset E^{N}$ defined by $G(p)=\left(e_{1}\wedge...\wedge
e_{n}\right)\left(p\right)$ is called the Gauss map of $M,$ that is, a smooth
map which carries a point $p$ in $M$ into the oriented $n-$plane through the origin of $\mathbb{E}^{m}$ obtained by parallel translation of the tangent space of $M$ at $p$ in $\mathbb{E}^{m}$.
A bicomplex number is defined with respect to the basis $\left\\{1,i,j,ij\right\\}$ where $i,j,ij$ satisfy $i^{2}=-1,$ $j^{2}=-1,$ $ij=ji.$ Thus any bicomplex number $x$ can be expressed as $x=x_{1}1+x_{2}i+x_{3}j+x_{4}ij$, $\forall
x_{1},x_{2},x_{3},x_{4}\in\mathbb{R}.$ We denote the set of bicomplex numbers
by $C_{2}.$ For any $x=x_{1}1+x_{2}i+x_{3}j+x_{4}ij$ and
$y=y_{1}1+y_{2}i+y_{3}j+y_{4}ij$ in $C_{2}$ the bicomplex number addition is
defined by
$x+y=\left(x_{1}+y_{1}\right)+\left(x_{2}+y_{2}\right)i+\left(x_{3}+y_{3}\right)j+\left(x_{4}+y_{4}\right)ij\text{.}$
The multiplication of a bicomplex number $x=x_{1}1+x_{2}i+x_{3}j+x_{4}ij$ by a
real scalar $\lambda$ is given by
$\lambda x=\lambda x_{1}1+\lambda x_{2}i+\lambda x_{3}j+\lambda
x_{4}ij\text{.}$
With this addition and scalar multiplication, $C_{2}$ is a real vector space.
Bicomplex number product, denoted by $\times$, over the set of bicomplex
numbers $C_{2}$ is given by
$\displaystyle x\times y$ $\displaystyle=$
$\displaystyle\left(x_{1}y_{1}-x_{2}y_{2}-x_{3}y_{3}+x_{4}y_{4}\right)+\left(x_{1}y_{2}+x_{2}y_{1}-x_{3}y_{4}-x_{4}y_{3}\right)i$
$\displaystyle+\left(x_{1}y_{3}+x_{3}y_{1}-x_{2}y_{4}-x_{4}y_{2}\right)j+\left(x_{1}y_{4}+x_{4}y_{1}+x_{2}y_{3}+x_{3}y_{2}\right)ij\text{.}$
Vector space $C_{2}$ together with the bicomplex product $\times$ is a real
algebra.
Since the bicomplex algebra is associative, it can be considered in terms of
matrices. Consider the set of matrices
$Q=\left\\{\left(\begin{array}[]{cccc}x_{1}&-x_{2}&-x_{3}&x_{4}\\\
x_{2}&x_{1}&-x_{4}&-x_{3}\\\ x_{3}&-x_{4}&x_{1}&-x_{2}\\\
x_{4}&x_{3}&x_{2}&x_{1}\end{array}\right);\text{ \ \ \ \ \ \
}x_{i}\in\mathbb{R}\text{ ,\ \ \ \ }1\leq i\leq 4\right\\}\text{.}$
The set $Q$ together with matrix addition and scalar matrix multiplication is
a real vector space. Furthermore, the vector space together with matrix
product is an algebra [15].
The transformation
$g:C_{2}\rightarrow Q$
given by
$g\left(x=x_{1}1+x_{2}i+x_{3}j+x_{4}ij\right)=\left(\begin{array}[]{cccc}x_{1}&-x_{2}&-x_{3}&x_{4}\\\
x_{2}&x_{1}&-x_{4}&-x_{3}\\\ x_{3}&-x_{4}&x_{1}&-x_{2}\\\
x_{4}&x_{3}&x_{2}&x_{1}\end{array}\right)$
is one-to-one and onto. Moreover, $\forall x,y\in C_{2}$ and $\lambda\in\mathbb{R},$ we have
$\displaystyle g\left(x+y\right)$ $\displaystyle=$ $\displaystyle
g\left(x\right)+g\left(y\right)$ $\displaystyle g\left(\lambda x\right)$
$\displaystyle=$ $\displaystyle\lambda g\left(x\right)$ $\displaystyle
g\left(xy\right)$ $\displaystyle=$ $\displaystyle
g\left(x\right)g\left(y\right)\text{.}$
Thus the algebras $C_{2}$ and $Q$ are isomorphic.
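The isomorphism can also be checked numerically. The sketch below (ours, with random test values) implements the bicomplex product $\times$ and the map $g$ and verifies $g(x\times y)=g(x)g(y)$:

```python
# Numerical check that g is an algebra isomorphism: g(x * y) = g(x) g(y).
import numpy as np

def bc_mult(x, y):
    """Bicomplex product of x = (x1, x2, x3, x4) and y in the basis {1, i, j, ij}."""
    x1, x2, x3, x4 = x
    y1, y2, y3, y4 = y
    return np.array([x1*y1 - x2*y2 - x3*y3 + x4*y4,
                     x1*y2 + x2*y1 - x3*y4 - x4*y3,
                     x1*y3 + x3*y1 - x2*y4 - x4*y2,
                     x1*y4 + x4*y1 + x2*y3 + x3*y2])

def g(x):
    """4x4 real matrix representing left multiplication by the bicomplex number x."""
    x1, x2, x3, x4 = x
    return np.array([[x1, -x2, -x3,  x4],
                     [x2,  x1, -x4, -x3],
                     [x3, -x4,  x1, -x2],
                     [x4,  x3,  x2,  x1]])

rng = np.random.default_rng(0)
x, y = rng.normal(size=4), rng.normal(size=4)
print(np.allclose(g(bc_mult(x, y)), g(x) @ g(y)))   # True
print(np.allclose(g(x) @ y, bc_mult(x, y)))         # g(x) acts as left multiplication
```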
Let $x\in C_{2}.$ Then $x$ can be expressed as $x=\left(x_{1}+x_{2}i\right)+\left(x_{3}+x_{4}i\right)j.$ In this case, there are three different conjugations for bicomplex numbers, as follows:
$\displaystyle x^{t_{1}}$ $\displaystyle=$
$\displaystyle\left[\left(x_{1}+x_{2}i\right)+\left(x_{3}+x_{4}i\right)j\right]^{t_{1}}=\left(x_{1}-x_{2}i\right)+\left(x_{3}-x_{4}i\right)j$
$\displaystyle x^{t_{2}}$ $\displaystyle=$
$\displaystyle\left[\left(x_{1}+x_{2}i\right)+\left(x_{3}+x_{4}i\right)j\right]^{t_{2}}=\left(x_{1}+x_{2}i\right)-\left(x_{3}+x_{4}i\right)j$
$\displaystyle x^{t_{3}}$ $\displaystyle=$
$\displaystyle\left[\left(x_{1}+x_{2}i\right)+\left(x_{3}+x_{4}i\right)j\right]^{t_{3}}=\left(x_{1}-x_{2}i\right)-\left(x_{3}-x_{4}i\right)j$
## 3 Flat Rotation Surfaces with Pointwise 1-Type Gauss Map in $E^{4}$
In this section, we consider flat rotation surfaces with pointwise 1-type Gauss map in Euclidean 4-space. Let us consider the equation of the general rotation surface given in [16]:
$\varphi\left(t,s\right)=\begin{pmatrix}\cos mt&-\sin mt&0&0\\\ \sin mt&\cos
mt&0&0\\\ 0&0&\cos nt&-\sin nt\\\ 0&0&\sin nt&\cos
nt\end{pmatrix}\left(\begin{array}[]{c}\alpha_{1}(s)\\\ \alpha_{2}(s)\\\
\alpha_{3}(s)\\\ \alpha_{4}(s)\end{array}\right),$
where
$\alpha\left(s\right)=\left(\alpha_{1}\left(s\right),\alpha_{2}\left(s\right),\alpha_{3}\left(s\right),\alpha_{4}\left(s\right)\right)$
is a regular smooth curve in $\mathbb{E}^{4}$ on an open interval $I$ in
$\mathbb{R}$ and $m$, $n$ are some real numbers which are the rates of the
rotation in the fixed planes of the rotation. If we choose the meridian curve $\alpha$ to be the unit speed curve $\alpha\left(s\right)=\left(x\left(s\right),0,y(s),0\right)$ and the rates of rotation $m$ and $n$ as $m=n=1,$ we obtain the surface as follows:
$M:\text{ \ }X\left(s,t\right)=\left(x\left(s\right)\cos t,x\left(s\right)\sin
t,y(s)\cos t,y(s)\sin t\right)$ (7)
Let $M$ be a general rotation surface in $\mathbb{E}^{4}$ given by $(7)$. We
consider the following orthonormal moving frame
$\left\\{e_{1},e_{2},e_{3},e_{4}\right\\}$ on $M$ such that $e_{1},e_{2}$ are
tangent to $M$ and $e_{3},e_{4}$ are normal to $M:$
$\displaystyle e_{1}$ $\displaystyle=$
$\displaystyle\frac{1}{\sqrt{x^{2}\left(s\right)+y^{2}(s)}}\left(-x\left(s\right)\sin
t,x\left(s\right)\cos t,-y(s)\sin t,y(s)\cos t\right)$ $\displaystyle e_{2}$
$\displaystyle=$ $\displaystyle\left(x^{\prime}\left(s\right)\cos
t,x^{\prime}\left(s\right)\sin t,y^{\prime}(s)\cos t,y^{\prime}(s)\sin
t\right)$ $\displaystyle e_{3}$ $\displaystyle=$
$\displaystyle\left(-y^{\prime}(s)\cos t,-y^{\prime}(s)\sin
t,x^{\prime}\left(s\right)\cos t,x^{\prime}\left(s\right)\sin t\right)$
$\displaystyle e_{4}$ $\displaystyle=$
$\displaystyle\frac{1}{\sqrt{x^{2}\left(s\right)+y^{2}(s)}}\left(-y(s)\sin
t,y(s)\cos t,x\left(s\right)\sin t,-x\left(s\right)\cos t\right)$
where
$e_{1}=\frac{1}{\sqrt{x^{2}\left(s\right)+y^{2}(s)}}\frac{\partial}{\partial
t}$ and $e_{2}=\frac{\partial}{\partial s}$. Then we have the dual 1-forms as:
$\omega_{1}=\sqrt{x^{2}\left(s\right)+y^{2}(s)}dt\text{ \ \ \ \ and \ \ \ \
}\omega_{2}=ds$
By a direct computation we have components of the second fundamental form and
the connection forms as:
$h_{11}^{3}=b(s),\text{ \ }h_{12}^{3}=0,\text{ \ }h_{22}^{3}=c(s),$
$h_{11}^{4}=0,\text{ \ }h_{12}^{4}=-b(s),\text{ \ }h_{22}^{4}=0,$
$\displaystyle\omega_{12}$ $\displaystyle=$
$\displaystyle-a(s)\omega_{1},\text{ \ \ }\omega_{13}=b(s)\omega_{1},\text{ \
\ }\omega_{14}=-b(s)\omega_{2}$ $\displaystyle\omega_{23}$ $\displaystyle=$
$\displaystyle c(s)\omega_{2},\text{ \ \ }\omega_{24}=-b(s)\omega_{1},\text{ \
\ }\omega_{34}=-a(s)\omega_{1}.$
By covariant differentiation with respect to $e_{1}$ and $e_{2}$ a
straightforward calculation gives:
$\displaystyle\tilde{\nabla}_{e_{1}}e_{1}$ $\displaystyle=$
$\displaystyle-a(s)e_{2}+b(s)e_{3},$ (8)
$\displaystyle\tilde{\nabla}_{e_{2}}e_{1}$ $\displaystyle=$
$\displaystyle-b(s)e_{4},$ $\displaystyle\tilde{\nabla}_{e_{1}}e_{2}$
$\displaystyle=$ $\displaystyle a(s)e_{1}-b(s)e_{4},$
$\displaystyle\tilde{\nabla}_{e_{2}}e_{2}$ $\displaystyle=$ $\displaystyle
c(s)e_{3},$ $\displaystyle\tilde{\nabla}_{e_{1}}e_{3}$ $\displaystyle=$
$\displaystyle-b(s)e_{1}-a(s)e_{4},$
$\displaystyle\tilde{\nabla}_{e_{2}}e_{3}$ $\displaystyle=$
$\displaystyle-c(s)e_{2},$ $\displaystyle\tilde{\nabla}_{e_{1}}e_{4}$
$\displaystyle=$ $\displaystyle b(s)e_{2}+a(s)e_{3},$
$\displaystyle\tilde{\nabla}_{e_{2}}e_{4}$ $\displaystyle=$ $\displaystyle
b(s)e_{1},$
where
$a(s)=\frac{x(s)x^{\prime}(s)+y(s)y^{\prime}(s)}{x^{2}\left(s\right)+y^{2}(s)},$
(9) $\text{\
}b(s)=\frac{x(s)y^{\prime}(s)-x^{\prime}(s)y(s)}{x^{2}\left(s\right)+y^{2}(s)},$
(10) $c(s)=x^{\prime}(s)y^{\prime\prime}(s)-x^{\prime\prime}(s)y^{\prime}(s).$ (11)
The Gaussian curvature is obtained by
$K=\det\left(h_{ij}^{3}\right)+\det\left(h_{ij}^{4}\right)=b(s)c(s)-b^{2}(s).$
(12)
If the surface $M$ is flat, from $(12)$ we get
$b(s)c(s)-b^{2}(s)=0.$ (13)
Furthermore, by using the equations of Gauss and Codazzi after some
computation we obtain
$a^{\prime}\left(s\right)+a^{2}\left(s\right)=b^{2}(s)-b(s)c(s)$ (14)
and
$b^{\prime}\left(s\right)=-2a(s)b(s)+a(s)c(s),$ (15)
respectively.
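As a concrete illustration of $(9)$-$(15)$ (ours, not part of the original argument), one may take the unit-speed circular profile $x(s)=\lambda\cos(b_{0}s)$, $y(s)=\lambda\sin(b_{0}s)$ with $b_{0}\lambda=1$, which reappears later in this section; then $a=0$, $b=c=b_{0}$, so the flatness condition $(13)$ and the relations $(14)$-$(15)$ hold. A numerical check:

```python
# Check of eqs.(9)-(15) for the unit-speed circular profile curve
# x = lam*cos(b0*s), y = lam*sin(b0*s) with b0*lam = 1 (values assumed).
import numpy as np

b0 = 0.5
lam = 1.0 / b0
s = np.linspace(0.1, 5.0, 7)

x,  y   =  lam * np.cos(b0 * s),            lam * np.sin(b0 * s)
xp, yp  = -lam * b0 * np.sin(b0 * s),       lam * b0 * np.cos(b0 * s)
xpp, ypp = -lam * b0**2 * np.cos(b0 * s),  -lam * b0**2 * np.sin(b0 * s)

a = (x * xp + y * yp) / (x**2 + y**2)      # eq.(9)
b = (x * yp - xp * y) / (x**2 + y**2)      # eq.(10)
c = xp * ypp - xpp * yp                    # eq.(11)
K = b * c - b**2                           # eq.(12)

print(np.allclose(a, 0), np.allclose(b, b0), np.allclose(c, b0), np.allclose(K, 0))
```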
By using $\left(4\right)$ and $\left(8\right)$ and a straightforward computation, the Laplacian $\Delta G$ of the Gauss map $G$ can be expressed as
$\displaystyle\Delta G$ $\displaystyle=$
$\displaystyle\left(3b^{2}\left(s\right)+c^{2}\left(s\right)\right)\left(e_{1}\wedge
e_{2}\right)+\left(2a(s)b(s)-a(s)c(s)-c^{\prime}\left(s\right)\right)\left(e_{1}\wedge
e_{3}\right)$ (16)
$\displaystyle+\left(-3a(s)b(s)-b^{\prime}(s)\right)\left(e_{2}\wedge
e_{4}\right)+\left(2b^{2}(s)-2b(s)c(s)\right)\left(e_{3}\wedge e_{4}\right).$
###### Remark 1.
Computations similar to the above are given for tensor product surfaces in [4] and for general rotational surfaces in [9].
Now we investigate the flat rotation surface with pointwise 1-type Gauss map. From $(13)$, we obtain that $b(s)=0$ or $b(s)=c(s).$ We first assume that $b(s)\neq c(s).$ Then $b(s)$ is equal to zero and $(15)$ implies that $a(s)c(s)=0.$ Since $b(s)\neq c(s),$ it follows that $c(s)$ is not equal to zero, and hence $a(s)=0.$ In that case, by using $(9)$ and $(10)$ we obtain that $\alpha\left(s\right)=\left(x\left(s\right),0,y(s),0\right)$ is a constant vector. This is a contradiction. Therefore $b(s)=c(s)$ for all $s.$
From $(14)$, we get
$a^{\prime}\left(s\right)+a^{2}\left(s\right)=0$ (17)
whose trivial and non-trivial solutions are
$a(s)=0$
and
$a(s)=\frac{1}{s+c},$
respectively. We assume that $a(s)=0.$ By $(15)$ $b=b_{0}$ is a constant and
so is $c$. In that case by using $(9),(10)$ and $(11)$, $x$ and $y$ satisfy
the following differential equations
$x^{2}\left(s\right)+y^{2}(s)=\lambda^{2}\text{ \ \ }\lambda\text{ is a non-
zero constant,}$ (18) $x(s)y^{\prime}(s)-x^{\prime}(s)y(s)=b_{0}\lambda^{2},$
(19) $x^{\prime}(s)y^{\prime\prime}-x^{\prime\prime}y^{\prime}(s)=b_{0}.$ (20)
From $(18)$ we may put
$x\left(s\right)=\lambda\cos\theta\left(s\right),\text{ \ \
}y\left(s\right)=\lambda\sin\theta\left(s\right),$ (21)
where $\theta\left(s\right)$ is some angle function. Differentiating $(21)$
with respect to $s,$ we have
$x^{\prime}(s)=-\theta^{\prime}(s)y\left(s\right)\text{ \ and \
}y^{\prime}(s)=\theta^{\prime}(s)x\left(s\right).$ (22)
By substituting $(21)$ and $(22)$ into $(19)$, we get
$\theta\left(s\right)=b_{0}s+d\text{, \ \ }d=const.$
And since the curve $\alpha$ is a unit speed curve, we have
$b_{0}^{2}\lambda^{2}=1.$
Then we can write components of the curve $\alpha$ as:
$x\left(s\right)=\lambda\cos\left(b_{0}s+d\right)\text{ \ \ and \ \
}y\left(s\right)=\lambda\sin\left(b_{0}s+d\right),\text{ \ \ \
}b_{0}^{2}\lambda^{2}=1.$
On the other hand, by using $(16)$ we can rewrite the Laplacian of the Gauss
map $G$ with $a(s)=0$ and $b=c=b_{0}$ as follows:
$\Delta G=4b_{0}^{2}\left(e_{1}\wedge e_{2}\right)$
that is, the flat surface $M$ has pointwise 1-type Gauss map with the function $f=4b_{0}^{2}$ and $C=0.$ In fact, it is a pointwise 1-type Gauss map of the first kind.
Now we assume that $a(s)=\frac{1}{s+c}.$ Since $b(s)$ is equal to $c(s),$ from
$(15)$ we get
$b^{\prime}\left(s\right)=-a(s)b(s)$
or we can write
$b^{\prime}\left(s\right)=-\frac{b(s)}{s+c},$
whose solution is
$b(s)=\mu a(s),\text{ \ \ \ }\mu\text{ is a constant.}$
By using $(16)$ we can rewrite the Laplacian of the Gauss map $G$ with
$c(s)=b(s)=\mu a(s)$ as:
$\Delta G=\left(4\mu^{2}a^{2}\left(s\right)\right)\left(e_{1}\wedge
e_{2}\right)+2\mu a^{2}(s)\left(e_{1}\wedge e_{3}\right)-2\mu
a^{2}(s)\left(e_{2}\wedge e_{4}\right).$ (23)
We suppose that the flat rotational surface has pointwise 1-type Gauss map. From $(1)$ and $(23)$, we get
$4\mu^{2}a^{2}\left(s\right)=f+f\left\langle C,e_{1}\wedge e_{2}\right\rangle$
(24) $2\mu a^{2}(s)=f\left\langle C,e_{1}\wedge e_{3}\right\rangle$ (25)
$-2\mu a^{2}(s)=f\left\langle C,e_{2}\wedge e_{4}\right\rangle$ (26)
Then, we have
$\left\langle C,e_{1}\wedge e_{4}\right\rangle=0,\text{ }\left\langle
C,e_{2}\wedge e_{3}\right\rangle=0,\text{ }\left\langle C,e_{3}\wedge
e_{4}\right\rangle=0$ (27)
By using $(25)$ and $(26)$ we obtain
$\left\langle C,e_{1}\wedge e_{3}\right\rangle+\left\langle C,e_{2}\wedge
e_{4}\right\rangle=0$ (28)
By differentiating the first equation in $(27)$ with respect to $e_{1}$ and by
using $(8)$, the third equation in $(27)$ and $(28)$, we get
$2a(s)\left\langle C,e_{1}\wedge e_{3}\right\rangle+\mu a(s)\left\langle
C,e_{1}\wedge e_{2}\right\rangle=0$ (29)
Combining $(24),(25)$ and $(29)$ we then have
$f=4\left(a^{2}\left(s\right)+\mu^{2}a^{2}\left(s\right)\right)$ (30)
that is, a smooth function $f$ depends only on $s.$ By differentiating $f$
with respect to $s$ and by using the equality
$a^{\prime}\left(s\right)=-a^{2}\left(s\right)$, we get
$f^{\prime}=-2a(s)f$ (31)
By differentiating $(25)$ with respect to s and by using $(8),(24)$, the third
equation in $(27),(30),(31)$ and the equality
$a^{\prime}\left(s\right)=-a^{2}\left(s\right)$, we have
$\mu a^{3}=0$
Since $a(s)\neq 0$, it follows that $\mu=0.$ Then we obtain that $b=c=0$, and the surface $M$ is a part of a plane.
Thus we can give the following theorem and corollary.
###### Theorem 1.
Let $M$ be the flat rotation surface given by the parametrization (7). Then
$M$ has pointwise 1-type Gauss map if and only if $M$ is either totally
geodesic or parametrized by
$X\left(s,t\right)=\left(\begin{array}[]{c}\lambda\cos\left(b_{0}s+d\right)\cos
t,\lambda\cos\left(b_{0}s+d\right)\sin t,\\\
\lambda\sin\left(b_{0}s+d\right)\cos t,\lambda\sin\left(b_{0}s+d\right)\sin
t\end{array}\right)\text{,\ \ \ }b_{0}^{2}\lambda^{2}=1$ (32)
where $b_{0},$ $\lambda$ and $d$ are real constants.
###### Corollary 1.
Let $M$ be a flat rotation surface given by the parametrization (7). If $M$ has pointwise 1-type Gauss map, then the Gauss map $G$ on $M$ is of 1-type.
## 4 The general rotation surface and Lie group
In this section, we determine the profile curve of the general rotation
surface which has a group structure with the bicomplex number product.
Let the hyperquadric $P$ be given by
$P=\left\\{x=\left(x_{1},x_{2},x_{3},x_{4}\right)\neq 0\text{; \ \ \
}x_{1}x_{4}=x_{2}x_{3}\right\\}.$
We consider $P$ as the set of bicomplex number
$P=\left\\{x=x_{1}1+x_{2}i+x_{3}j+x_{4}ij\text{ };\text{\
}x_{1}x_{4}=x_{2}x_{3},\text{ }x\neq 0\right\\}.$
The elements of $P$ are easily represented in matrix form via the bicomplex number multiplication:
$\tilde{P}=\left\\{M_{x}=\left(\begin{array}[]{cccc}x_{1}&-x_{2}&-x_{3}&x_{4}\\\
x_{2}&x_{1}&-x_{4}&-x_{3}\\\ x_{3}&-x_{4}&x_{1}&-x_{2}\\\
x_{4}&x_{3}&x_{2}&x_{1}\end{array}\right);\text{\
}x_{1}x_{4}=x_{2}x_{3},\text{\ }x\neq 0\right\\}.$
###### Theorem 2.
The set $P$ together with the bicomplex number product is a Lie group.
###### Proof.
$\tilde{P}$ is a differentiable manifold and at the same time a group with
group operation given by matrix multiplication. The group function
$.:\tilde{P}\times\tilde{P}\rightarrow\tilde{P}$
defined by $\left(x,y\right)\rightarrow x.y$ is differentiable. So $(P,.)$ can be made a Lie group such that $g$ is an isomorphism [15]. ∎
###### Remark 2.
The surface $M$ given by the parametrization (7) is a subset of $P.$
###### Proposition 1.
Let $M$ be a rotation surface given by the parametrization (7). If $x(s)$ and
$y(s)$ satisfy the following equations then $M$ is a Lie subgroup of $P$.
$x\left(s_{1}\right)x\left(s_{2}\right)-y\left(s_{1}\right)y\left(s_{2}\right)=x\left(s_{1}+s_{2}\right)$
(33)
$x\left(s_{1}\right)y\left(s_{2}\right)+x\left(s_{2}\right)y\left(s_{1}\right)=y\left(s_{1}+s_{2}\right)$
(34)
$\frac{x\left(s\right)}{x^{2}\left(s\right)+y^{2}\left(s\right)}=x\left(-s\right)$
(35)
$-\frac{y\left(s\right)}{x^{2}\left(s\right)+y^{2}\left(s\right)}=y\left(-s\right)$
(36)
###### Proof.
Let $\alpha(s)=\left(x(s),0,y(s),0\right)$ be a profile curve of the rotation
surface given by the parametrization (7) such that $x(s)$ and $y(s)$ satisfy
the equations $\left(33\right),$ $\left(34\right),$ $\left(35\right)$ and
$\left(36\right)$. In that case we obtain that the inverse of
$X\left(s,t\right)$ is $X\left(-s,-t\right)$ and
$X\left(s_{1},t_{1}\right)\times
X\left(s_{2},t_{2}\right)=X\left(s_{1}+s_{2},t_{1}+t_{2}\right).$ This
completes the proof. ∎
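As an illustration (ours, not part of the paper's argument), the profile $x(s)=\cos s$, $y(s)=\sin s$ satisfies $(33)$-$(36)$, and the group law $X(s_{1},t_{1})\times X(s_{2},t_{2})=X(s_{1}+s_{2},t_{1}+t_{2})$ can be confirmed numerically with the bicomplex product:

```python
# Numerical check of Proposition 1 for x(s) = cos(s), y(s) = sin(s):
# the surface points form a group under the bicomplex product.
import numpy as np

def bc_mult(x, y):
    x1, x2, x3, x4 = x
    y1, y2, y3, y4 = y
    return np.array([x1*y1 - x2*y2 - x3*y3 + x4*y4,
                     x1*y2 + x2*y1 - x3*y4 - x4*y3,
                     x1*y3 + x3*y1 - x2*y4 - x4*y2,
                     x1*y4 + x4*y1 + x2*y3 + x3*y2])

def X(s, t):   # parametrization (7) with x = cos s, y = sin s
    return np.array([np.cos(s)*np.cos(t), np.cos(s)*np.sin(t),
                     np.sin(s)*np.cos(t), np.sin(s)*np.sin(t)])

s1, t1, s2, t2 = 0.3, 1.1, -0.7, 2.4
print(np.allclose(bc_mult(X(s1, t1), X(s2, t2)), X(s1 + s2, t1 + t2)))  # True
print(np.allclose(bc_mult(X(s1, t1), X(-s1, -t1)), X(0.0, 0.0)))        # inverse
```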
###### Proposition 2.
Let $\alpha(s)=\left(x(s),0,y(s),0\right)$ be a profile curve of the rotation
surface given by the parametrization (7) such that $x(s)$ and $y(s)$ satisfy
the equation $x^{2}\left(s\right)+y^{2}\left(s\right)=\lambda^{2},$ where
$\lambda$ is a non-zero constant. If $M$ is a subgroup of $P$ then the profile
curve $\alpha$ is a unit circle.
###### Proof.
We assume that $x(s)$ and $y(s)$ satisfy the equation
$x^{2}\left(s\right)+y^{2}\left(s\right)=\lambda^{2}.$ Then we can put
$x(s)=\lambda\cos\theta\left(s\right)\text{ and
}y(s)=\lambda\sin\theta\left(s\right)$ (37)
where $\lambda$ is a real constant and $\theta\left(s\right)$ is a smooth
function. Since $M$ is a group, each element of $M$ has one and only one inverse. In that case the inverse of $X\left(s,t\right)$ is given by
$X^{-1}\left(s,t\right)=\left(\begin{array}[]{c}\frac{x\left(s\right)}{x^{2}\left(s\right)+y^{2}\left(s\right)}\cos\left(-t\right),\frac{x\left(s\right)}{x^{2}\left(s\right)+y^{2}\left(s\right)}\sin\left(-t\right),\\\
-\frac{y\left(s\right)}{x^{2}\left(s\right)+y^{2}\left(s\right)}\cos\left(-t\right),-\frac{y\left(s\right)}{x^{2}\left(s\right)+y^{2}\left(s\right)}\sin\left(-t\right)\end{array}\right)$
where
$\displaystyle\frac{x\left(s\right)}{x^{2}\left(s\right)+y^{2}\left(s\right)}$
$\displaystyle=$ $\displaystyle x\left(f\left(s\right)\right),\text{ }$
$\displaystyle-\frac{y\left(s\right)}{x^{2}\left(s\right)+y^{2}\left(s\right)}$
$\displaystyle=$ $\displaystyle y\left(f\left(s\right)\right)\text{, }f\text{
is a smooth function.}$ (38)
By using $(38)$, we get
$x\left(s\right)=\lambda^{2}x\left(f\left(s\right)\right)$ (39)
and
$y\left(s\right)=-\lambda^{2}y\left(f\left(s\right)\right)$ (40)
By summing the squares of both sides of $(39)$ and $(40)$ and using $(37)$, we obtain that $\lambda^{2}=1.$ This completes the proof. ∎
###### Proposition 3.
Let $\alpha(s)=\left(x(s),0,y(s),0\right)$ be a profile curve of the rotation surface given by the parametrization (7) such that $x(s)$ and $y(s)$ are given by $x(s)=\lambda\cos\theta\left(s\right)$ and $y(s)=\lambda\sin\theta\left(s\right).$ If $\lambda=1$ and $\theta$ is a linear function, then $M$ is a Lie subgroup of $P$.
###### Proof.
We assume that $\lambda=1$ and $\theta$ is a linear function. Then we can write
$x(s)=\cos\eta s\text{ and }y(s)=\sin\eta s$
and in that case $x(s)$ and $y(s)$ satisfy the equations $\left(33\right),$ $\left(34\right),$ $\left(35\right)$ and $\left(36\right).$ Thus from Proposition (1) $M$ is a subgroup of $P.$ Also, it is a submanifold of $P.$ ∎
###### Proposition 4.
Let $\alpha(s)=\left(x(s),0,y(s),0\right)$ be a profile curve of the rotation surface given by the parametrization (7) such that $x(s)$ and $y(s)$ are given by $x(s)=u(s)\cos\theta\left(s\right)$ and $y(s)=u(s)\sin\theta\left(s\right).$ If $u:$ $\left(\mathbb{R},+\right)\rightarrow\left(\mathbb{R}^{+},.\right)$ is a group homomorphism and $\theta$ is a linear function, then $M$ is a Lie subgroup of $P$.
###### Proof.
Let $x(s)$ and $y(s)$ be given by $x(s)=u(s)\cos\theta\left(s\right)$ and
$y(s)=u(s)\sin\theta\left(s\right)$ and let $u:$
$\left(\mathbb{R},+\right)\rightarrow\left(\mathbb{R}^{+},.\right)$ be a group
homomorphism and $\theta$ be a linear function. In that case $x(s)$ and $y(s)$
satisfy the equations $\left(33\right),$ $\left(34\right),$ $\left(35\right)$
and $\left(36\right).$ Thus from Proposition (1) $M$ is a subgroup of $P.$
Also, it is a submanifold of $P.$ So it is a Lie subgroup of $P.$ ∎
###### Corollary 2.
Let $\alpha(s)=\left(x(s),0,y(s),0\right)$ be a profile curve of the rotation surface given by the parametrization (7) such that $x(s)$ and $y(s)$ are given by $x(s)=\lambda\cos\theta\left(s\right)$ and $y(s)=\lambda\sin\theta\left(s\right)$ for a linear function $\theta$. If $M$ is a Lie subgroup, then $\lambda=1$.
###### Proof.
We assume that $M$ is a group and $\lambda\neq 1.$ From Proposition (2) we obtain that $\lambda=-1.$ On the other hand, for $\lambda=-1$ and $\theta$ a linear function the closure property is not satisfied on $M.$ This is a contradiction. Hence $\lambda=1$. ∎
###### Remark 3.
Let $M$ be a Vranceanu surface. If the surface $M$ is flat then it is given by
$X\left(s,t\right)=\left(e^{ks}\cos s\cos t,e^{ks}\cos s\sin t,e^{ks}\sin
s\cos t,e^{ks}\sin s\sin t\right)$
where $k$ is a real constant. In that case we can say that the flat Vranceanu surface is a Lie subgroup of $P$ with the bicomplex multiplication. Also, the flat Vranceanu surface with pointwise 1-type Gauss map is a Clifford torus, given by
$X\left(s,t\right)=\left(\cos s\cos t,\cos s\sin t,\sin s\cos t,\sin s\sin t\right)$
and the Clifford torus is a Lie subgroup of $P$ with the bicomplex multiplication. See [1] for more details.
###### Theorem 3.
Let $M$ be non-planar flat rotation surface with pointwise 1-type Gauss map
given by the parametrization (32) with $d=2k\pi$. Then $M$ is a Lie group with
bicomplex multiplication if and only if it is a Clifford torus.
###### Proof.
We assume that $M$ is a Lie group with bicomplex multiplication then from
Corollary (2) we get that $\lambda=1.$ Since $b_{0}^{2}\lambda^{2}=1,$ it
follows that $b_{0}=\varepsilon$, where $\varepsilon=\pm 1.$ In that case the surface $M$ is given by
$X\left(s,t\right)=\left(\cos\varepsilon s\cos t,\cos\varepsilon s\sin t,\sin\varepsilon s\cos t,\sin\varepsilon s\sin t\right)$
and $M$ is a Clifford torus, that is, the product of two plane circles with the same radius.
Conversely, the Clifford torus is a flat rotational surface with pointwise 1-type Gauss map which can be obtained from the parametrization $(32)$, and it is a Lie group with the bicomplex multiplication. This completes the proof. ∎
## References
* [1] Aksoyak F. K. and Yaylı Y., Homothetic motions and surfaces in $\mathbb{E}^{4}$, Bull. Malays. Math. Sci. Soc. (accepted)
* [2] Arslan K., Bayram, B.K., Bulca, B., Kim, Y.H., Murathan, C. and Öztürk, G. Rotational embeddings in $E^{4}$ with pointwise 1-type Gauss map, Turk. J. Math. 35, 493-499, 2011.
* [3] Arslan K., Bayram B.K., Kim, Y.H., Murathan, C. and Öztürk, G. Vranceanu surface in $E^{4}$ with pointwise 1-type Gauss map, Indian J. Pure. Appl. Math. 42, 41-51, 2011.
* [4] Arslan K., Bulca B., Kılıç B., Kim Y.H., Murathan C. and Öztürk G. Tensor Product Surfaces with Pointwise 1-Type Gauss Map. Bull. Korean Math. Soc. 48, 601-609, 2011.
* [5] Chen, B.Y. Choi, M. and Kim, Y.H. Surfaces of revolution with pointwise 1-type Gauss map, J. Korean Math. 42, 447-455, 2005.
* [6] Chen, B.Y. and Piccinni, P. Submanifolds with Finite Type-Gauss map, Bull. Austral. Math. Soc., 35, 161-186, 1987.
* [7] Choi, M. and Kim, Y.H. Characterization of the helicoid as ruled surfaces with pointwise 1-type Gauss map, Bull. Korean Math. Soc. 38, 753-761, 2001.
* [8] Choi, M., Kim, D.S., Kim Y.H, Helicoidal surfaces with pointwise 1-type Gauss map, J. Korean Math. Soc. 46, 215-223, 2009.
* [9] Dursun, U. and Turgay, N.C., General rotational surfaces in Euclidean space $E^{4}$ with pointwise 1-type Gauss map, Math. Commun. 17, 71-81, 2012.
* [10] Dursun, U., Hypersurfaces with pointwise 1-type Gauss map, Taiwanese J. Math. 11, 1407-1416, 2007.
* [11] Dursun, U., Flat surfaces in the Euclidean space $E^{3}$ with pointwise 1-type Gauss map, Bull. Malays. Math. Sci. Soc. 33, 469-478, 2010.
* [12] Dursun, U. and Arsan, G.G. Surfaces in the Euclidean space $E^{4}$ with pointwise 1-type Gauss map, Hacet. J. Math. Stat. 40, 617-625, 2011.
* [13] Kim, Y.H. and Yoon, D.W. Ruled surfaces with pointwise 1-type Gauss map, J. Geom. Phys. 34, 191-205, 2000
* [14] Kim, Y.H. and Yoon, D.W. Classification of rotation surfaces in pseudo Euclidean space, J. Korean Math. 41, 379-396, 2004.
* [15] Özkaldi S., Yaylı Y., Tensor product surfaces in $\mathbb{R}^{4}$ and Lie groups, Bull. Malays. Math. Sci.Soc. (2) 33, no. 1, 69-77, 2010.
* [16] Moore C.L.E, Surfaces of rotation in a space of four dimensions, Ann. of Math. 21, 81-93, 1919.
* [17] Niang, A. Rotation surfaces with 1-type Gauss map, Bull. Korean Math. Soc. 42, 23-27, 2005
* [18] Yoon, D.W. Rotation surfaces with finite type Gauss map in $E^{4},$ Indian J. Pure. Appl. Math. 32, 1803-1808, 2001.
* [19] Yoon, D.W. Some properties of the Clifford torus as rotation surface, Indian J. Pure. Appl. Math. 34, 907-915, 2003.
|
arxiv-papers
| 2013-02-12T14:27:10 |
2024-09-04T02:49:41.659190
|
{
"license": "Public Domain",
"authors": "Ferda\\u{g} Kahraman Aksoyak and Yusuf Yayl{\\i}",
"submitter": "Ferda\\u{g} Kahraman Aksoyak",
"url": "https://arxiv.org/abs/1302.2804"
}
|
1302.2864
|
EUROPEAN ORGANIZATION FOR NUCLEAR RESEARCH (CERN)
CERN-PH-EP-2013-009 LHCb-PAPER-2012-041 12 February 2013
Prompt charm production in $pp$ collisions at
$\sqrt{s}=7\mathrm{\,Te\kern-2.07413ptV}$
The LHCb collaboration†††Authors are listed on the following pages.
Charm production at the LHC in $pp$ collisions at
$\sqrt{s}=7\mathrm{\,Te\kern-1.00006ptV}$ is studied with the LHCb detector.
The decays $D^{0}\\!\rightarrow K^{-}\pi^{+}$, $D^{+}\\!\rightarrow
K^{-}\pi^{+}\pi^{+}$, $D^{*+}\\!\rightarrow D^{0}(K^{-}\pi^{+})\pi^{+}$,
$D^{+}_{s}\\!\rightarrow\phi(K^{-}K^{+})\pi^{+}$, $\Lambda_{c}^{+}\\!\rightarrow pK^{-}\pi^{+}$, and their charge conjugates
are analysed in a data set corresponding to an integrated luminosity of
$15\mbox{\,nb}^{-1}$. Differential cross-sections
${\mathrm{d}{\sigma}}/{\mathrm{d}{\mbox{$p_{\rm T}$}}}$ are measured for
prompt production of the five charmed hadron species in bins of transverse
momentum and rapidity in the region $0<\mbox{$p_{\rm
T}$}<8{\mathrm{\,Ge\kern-1.00006ptV\\!/}c}$ and $2.0<y<4.5$. Theoretical
predictions are compared to the measured differential cross-sections. The
integrated cross-sections of the charm hadrons are computed in the above
$p_{\rm T}$-$y$ range, and their ratios are reported. A combination of the
five integrated cross-section measurements gives
$\sigma(c\overline{c})_{p_{\mathrm{T}}<8\mathrm{\,GeV\\!/}c,\,2.0<y<4.5}=1419\pm 12\,\mathrm{(stat)}\pm 116\,\mathrm{(syst)}\pm 65\,\mathrm{(frag)}{\rm\,\upmu b},$
where the uncertainties are statistical, systematic, and due to the
fragmentation functions.
Submitted to Nuclear Physics B
© CERN on behalf of the LHCb collaboration, license CC-BY-3.0.
LHCb collaboration
R. Aaij38, C. Abellan Beteta33,n, A. Adametz11, B. Adeva34, M. Adinolfi43, C.
Adrover6, A. Affolder49, Z. Ajaltouni5, J. Albrecht9, F. Alessio35, M.
Alexander48, S. Ali38, G. Alkhazov27, P. Alvarez Cartelle34, A.A. Alves
Jr22,35, S. Amato2, Y. Amhis7, L. Anderlini17,f, J. Anderson37, R.
Andreassen56, R.B. Appleby51, O. Aquines Gutierrez10, F. Archilli18, A.
Artamonov 32, M. Artuso53, E. Aslanides6, G. Auriemma22,m, S. Bachmann11, J.J.
Back45, C. Baesso54, V. Balagura28, W. Baldini16, R.J. Barlow51, C.
Barschel35, S. Barsuk7, W. Barter44, Th. Bauer38, A. Bay36, J. Beddow48, I.
Bediaga1, S. Belogurov28, K. Belous32, I. Belyaev28, E. Ben-Haim8, M.
Benayoun8, G. Bencivenni18, S. Benson47, J. Benton43, A. Berezhnoy29, R.
Bernet37, M.-O. Bettler44, M. van Beuzekom38, A. Bien11, S. Bifani12, T.
Bird51, A. Bizzeti17,h, P.M. Bjørnstad51, T. Blake35, F. Blanc36, C. Blanks50,
J. Blouw11, S. Blusk53, A. Bobrov31, V. Bocci22, A. Bondar31, N. Bondar27, W.
Bonivento15, S. Borghi51, A. Borgia53, T.J.V. Bowcock49, E. Bowen37, C.
Bozzi16, T. Brambach9, J. van den Brand39, J. Bressieux36, D. Brett51, M.
Britsch10, T. Britton53, N.H. Brook43, H. Brown49, I. Burducea26, A.
Bursche37, J. Buytaert35, S. Cadeddu15, O. Callot7, M. Calvi20,j, M. Calvo
Gomez33,n, A. Camboni33, P. Campana18,35, A. Carbone14,c, G. Carboni21,k, R.
Cardinale19,i, A. Cardini15, H. Carranza-Mejia47, L. Carson50, K. Carvalho
Akiba2, G. Casse49, M. Cattaneo35, Ch. Cauet9, M. Charles52, Ph.
Charpentier35, P. Chen3,36, N. Chiapolini37, M. Chrzaszcz 23, K. Ciba35, X.
Cid Vidal34, G. Ciezarek50, P.E.L. Clarke47, M. Clemencic35, H.V. Cliff44, J.
Closier35, C. Coca26, V. Coco38, J. Cogan6, E. Cogneras5, P. Collins35, A.
Comerma-Montells33, A. Contu15, A. Cook43, M. Coombes43, G. Corti35, B.
Couturier35, G.A. Cowan36, D. Craik45, S. Cunliffe50, R. Currie47, C.
D’Ambrosio35, P. David8, P.N.Y. David38, I. De Bonis4, K. De Bruyn38, S. De
Capua51, M. De Cian37, J.M. De Miranda1, L. De Paula2, W. De Silva56, P. De
Simone18, D. Decamp4, M. Deckenhoff9, H. Degaudenzi36,35, L. Del Buono8, C.
Deplano15, D. Derkach14, O. Deschamps5, F. Dettori39, A. Di Canto11, J.
Dickens44, H. Dijkstra35, P. Diniz Batista1, M. Dogaru26, F. Domingo
Bonal33,n, S. Donleavy49, F. Dordei11, A. Dosil Suárez34, D. Dossett45, A.
Dovbnya40, F. Dupertuis36, R. Dzhelyadin32, A. Dziurda23, A. Dzyuba27, S.
Easo46,35, U. Egede50, V. Egorychev28, S. Eidelman31, D. van Eijk38, S.
Eisenhardt47, U. Eitschberger9, R. Ekelhof9, L. Eklund48, I. El Rifai5, Ch.
Elsasser37, D. Elsby42, A. Falabella14,e, C. Färber11, G. Fardell47, C.
Farinelli38, S. Farry12, V. Fave36, D. Ferguson47, V. Fernandez Albor34, F.
Ferreira Rodrigues1, M. Ferro-Luzzi35, S. Filippov30, C. Fitzpatrick35, M.
Fontana10, F. Fontanelli19,i, R. Forty35, O. Francisco2, M. Frank35, C.
Frei35, M. Frosini17,f, S. Furcas20, E. Furfaro21, A. Gallas Torreira34, D.
Galli14,c, M. Gandelman2, P. Gandini52, Y. Gao3, J. Garofoli53, P. Garosi51,
J. Garra Tico44, L. Garrido33, C. Gaspar35, R. Gauld52, E. Gersabeck11, M.
Gersabeck51, T. Gershon45,35, Ph. Ghez4, V. Gibson44, V.V. Gligorov35, C.
Göbel54, D. Golubkov28, A. Golutvin50,28,35, A. Gomes2, H. Gordon52, M.
Grabalosa Gándara5, R. Graciani Diaz33, L.A. Granado Cardoso35, E. Graugés33,
G. Graziani17, A. Grecu26, E. Greening52, S. Gregson44, O. Grünberg55, B.
Gui53, E. Gushchin30, Yu. Guz32, T. Gys35, C. Hadjivasiliou53, G. Haefeli36,
C. Haen35, S.C. Haines44, S. Hall50, T. Hampson43, S. Hansmann-Menzemer11, N.
Harnew52, S.T. Harnew43, J. Harrison51, P.F. Harrison45, T. Hartmann55, J.
He7, V. Heijne38, K. Hennessy49, P. Henrard5, J.A. Hernando Morata34, E. van
Herwijnen35, E. Hicks49, D. Hill52, M. Hoballah5, C. Hombach51, P. Hopchev4,
W. Hulsbergen38, P. Hunt52, T. Huse49, N. Hussain52, D. Hutchcroft49, D.
Hynds48, V. Iakovenko41, P. Ilten12, R. Jacobsson35, A. Jaeger11, E. Jans38,
F. Jansen38, P. Jaton36, F. Jing3, M. John52, D. Johnson52, C.R. Jones44, B.
Jost35, M. Kaballo9, S. Kandybei40, M. Karacson35, T.M. Karbach35, I.R.
Kenyon42, U. Kerzel35, T. Ketel39, A. Keune36, B. Khanji20, O. Kochebina7, I.
Komarov36,29, R.F. Koopman39, P. Koppenburg38, M. Korolev29, A. Kozlinskiy38,
L. Kravchuk30, K. Kreplin11, M. Kreps45, G. Krocker11, P. Krokovny31, F.
Kruse9, M. Kucharczyk20,23,j, V. Kudryavtsev31, T. Kvaratskheliya28,35, V.N.
La Thi36, D. Lacarrere35, G. Lafferty51, A. Lai15, D. Lambert47, R.W.
Lambert39, E. Lanciotti35, G. Lanfranchi18,35, C. Langenbruch35, T. Latham45,
C. Lazzeroni42, R. Le Gac6, J. van Leerdam38, J.-P. Lees4, R. Lefèvre5, A.
Leflat29,35, J. Lefrançois7, O. Leroy6, Y. Li3, L. Li Gioi5, M. Liles49, R.
Lindner35, C. Linn11, B. Liu3, G. Liu35, J. von Loeben20, J.H. Lopes2, E.
Lopez Asamar33, N. Lopez-March36, H. Lu3, J. Luisier36, H. Luo47, F.
Machefert7, I.V. Machikhiliyan4,28, F. Maciuc26, O. Maev27,35, S. Malde52, G.
Manca15,d, G. Mancinelli6, N. Mangiafave44, U. Marconi14, R. Märki36, J.
Marks11, G. Martellotti22, A. Martens8, L. Martin52, A. Martín Sánchez7, M.
Martinelli38, D. Martinez Santos39, D. Martins Tostes2, A. Massafferri1, R.
Matev35, Z. Mathe35, C. Matteuzzi20, M. Matveev27, E. Maurice6, A.
Mazurov16,30,35,e, J. McCarthy42, R. McNulty12, B. Meadows56,52, F. Meier9, M.
Meissner11, M. Merk38, D.A. Milanes13, M.-N. Minard4, J. Molina Rodriguez54,
S. Monteil5, D. Moran51, P. Morawski23, R. Mountain53, I. Mous38, F. Muheim47,
K. Müller37, R. Muresan26, B. Muryn24, B. Muster36, P. Naik43, T. Nakada36, R.
Nandakumar46, I. Nasteva1, M. Needham47, N. Neufeld35, A.D. Nguyen36, T.D.
Nguyen36, C. Nguyen-Mau36,o, M. Nicol7, V. Niess5, R. Niet9, N. Nikitin29, T.
Nikodem11, A. Nomerotski52, A. Novoselov32, A. Oblakowska-Mucha24, V.
Obraztsov32, S. Oggero38, S. Ogilvy48, O. Okhrimenko41, R. Oldeman15,d,35, M.
Orlandea26, J.M. Otalora Goicochea2, P. Owen50, B.K. Pal53, A. Palano13,b, M.
Palutan18, J. Panman35, A. Papanestis46, M. Pappagallo48, C. Parkes51, C.J.
Parkinson50, G. Passaleva17, G.D. Patel49, M. Patel50, G.N. Patrick46, C.
Patrignani19,i, C. Pavel-Nicorescu26, A. Pazos Alvarez34, A. Pellegrino38, G.
Penso22,l, M. Pepe Altarelli35, S. Perazzini14,c, D.L. Perego20,j, E. Perez
Trigo34, A. Pérez-Calero Yzquierdo33, P. Perret5, M. Perrin-Terrin6, G.
Pessina20, K. Petridis50, A. Petrolini19,i, A. Phan53, E. Picatoste Olloqui33,
B. Pietrzyk4, T. Pilař45, D. Pinci22, S. Playfer47, M. Plo Casasus34, F.
Polci8, G. Polok23, A. Poluektov45,31, E. Polycarpo2, D. Popov10, B.
Popovici26, C. Potterat33, A. Powell52, J. Prisciandaro36, V. Pugatch41, A.
Puig Navarro36, W. Qian4, J.H. Rademacker43, B. Rakotomiaramanana36, M.S.
Rangel2, I. Raniuk40, N. Rauschmayr35, G. Raven39, S. Redford52, M.M. Reid45,
A.C. dos Reis1, S. Ricciardi46, A. Richards50, K. Rinnert49, V. Rives
Molina33, D.A. Roa Romero5, P. Robbe7, E. Rodrigues51, P. Rodriguez Perez34,
G.J. Rogers44, S. Roiser35, V. Romanovsky32, A. Romero Vidal34, J. Rouvinet36,
T. Ruf35, H. Ruiz33, G. Sabatino22,k, J.J. Saborido Silva34, N. Sagidova27, P.
Sail48, B. Saitta15,d, C. Salzmann37, B. Sanmartin Sedes34, M. Sannino19,i, R.
Santacesaria22, C. Santamarina Rios34, E. Santovetti21,k, M. Sapunov6, A.
Sarti18,l, C. Satriano22,m, A. Satta21, M. Savrie16,e, D. Savrina28,29, P.
Schaack50, M. Schiller39, H. Schindler35, S. Schleich9, M. Schlupp9, M.
Schmelling10, B. Schmidt35, O. Schneider36, A. Schopper35, M.-H. Schune7, R.
Schwemmer35, B. Sciascia18, A. Sciubba18,l, M. Seco34, A. Semennikov28, K.
Senderowska24, I. Sepp50, N. Serra37, J. Serrano6, P. Seyfert11, M. Shapkin32,
I. Shapoval40,35, P. Shatalov28, Y. Shcheglov27, T. Shears49,35, L.
Shekhtman31, O. Shevchenko40, V. Shevchenko28, A. Shires50, R. Silva
Coutinho45, T. Skwarnicki53, N.A. Smith49, E. Smith52,46, M. Smith51, K.
Sobczak5, M.D. Sokoloff56, F.J.P. Soler48, F. Soomro18,35, D. Souza43, B.
Souza De Paula2, B. Spaan9, A. Sparkes47, P. Spradlin48, F. Stagni35, S.
Stahl11, O. Steinkamp37, S. Stoica26, S. Stone53, B. Storaci37, M.
Straticiuc26, U. Straumann37, V.K. Subbiah35, S. Swientek9, V. Syropoulos39,
M. Szczekowski25, P. Szczypka36,35, T. Szumlak24, S. T’Jampens4, M.
Teklishyn7, E. Teodorescu26, F. Teubert35, C. Thomas52, E. Thomas35, J. van
Tilburg11, V. Tisserand4, M. Tobin37, S. Tolk39, D. Tonelli35, S. Topp-
Joergensen52, N. Torr52, E. Tournefier4,50, S. Tourneur36, M.T. Tran36, M.
Tresch37, A. Tsaregorodtsev6, P. Tsopelas38, N. Tuning38, M. Ubeda Garcia35,
A. Ukleja25, D. Urner51, U. Uwer11, V. Vagnoni14, G. Valenti14, R. Vazquez
Gomez33, P. Vazquez Regueiro34, S. Vecchi16, J.J. Velthuis43, M. Veltri17,g,
G. Veneziano36, M. Vesterinen35, B. Viaud7, D. Vieira2, X. Vilasis-
Cardona33,n, A. Vollhardt37, D. Volyanskyy10, D. Voong43, A. Vorobyev27, V.
Vorobyev31, C. Voß55, H. Voss10, R. Waldi55, R. Wallace12, S. Wandernoth11, J.
Wang53, D.R. Ward44, N.K. Watson42, A.D. Webber51, D. Websdale50, M.
Whitehead45, J. Wicht35, J. Wiechczynski23, D. Wiedner11, L. Wiggers38, G.
Wilkinson52, M.P. Williams45,46, M. Williams50,p, F.F. Wilson46, J. Wishahi9,
M. Witek23, S.A. Wotton44, S. Wright44, S. Wu3, K. Wyllie35, Y. Xie47,35, F.
Xing52, Z. Xing53, Z. Yang3, R. Young47, X. Yuan3, O. Yushchenko32, M.
Zangoli14, M. Zavertyaev10,a, F. Zhang3, L. Zhang53, W.C. Zhang12, Y. Zhang3,
A. Zhelezov11, A. Zhokhov28, L. Zhong3, A. Zvyagin35.
1Centro Brasileiro de Pesquisas Físicas (CBPF), Rio de Janeiro, Brazil
2Universidade Federal do Rio de Janeiro (UFRJ), Rio de Janeiro, Brazil
3Center for High Energy Physics, Tsinghua University, Beijing, China
4LAPP, Université de Savoie, CNRS/IN2P3, Annecy-Le-Vieux, France
5Clermont Université, Université Blaise Pascal, CNRS/IN2P3, LPC, Clermont-
Ferrand, France
6CPPM, Aix-Marseille Université, CNRS/IN2P3, Marseille, France
7LAL, Université Paris-Sud, CNRS/IN2P3, Orsay, France
8LPNHE, Université Pierre et Marie Curie, Université Paris Diderot,
CNRS/IN2P3, Paris, France
9Fakultät Physik, Technische Universität Dortmund, Dortmund, Germany
10Max-Planck-Institut für Kernphysik (MPIK), Heidelberg, Germany
11Physikalisches Institut, Ruprecht-Karls-Universität Heidelberg, Heidelberg,
Germany
12School of Physics, University College Dublin, Dublin, Ireland
13Sezione INFN di Bari, Bari, Italy
14Sezione INFN di Bologna, Bologna, Italy
15Sezione INFN di Cagliari, Cagliari, Italy
16Sezione INFN di Ferrara, Ferrara, Italy
17Sezione INFN di Firenze, Firenze, Italy
18Laboratori Nazionali dell’INFN di Frascati, Frascati, Italy
19Sezione INFN di Genova, Genova, Italy
20Sezione INFN di Milano Bicocca, Milano, Italy
21Sezione INFN di Roma Tor Vergata, Roma, Italy
22Sezione INFN di Roma La Sapienza, Roma, Italy
23Henryk Niewodniczanski Institute of Nuclear Physics Polish Academy of
Sciences, Kraków, Poland
24AGH University of Science and Technology, Kraków, Poland
25National Center for Nuclear Research (NCBJ), Warsaw, Poland
26Horia Hulubei National Institute of Physics and Nuclear Engineering,
Bucharest-Magurele, Romania
27Petersburg Nuclear Physics Institute (PNPI), Gatchina, Russia
28Institute of Theoretical and Experimental Physics (ITEP), Moscow, Russia
29Institute of Nuclear Physics, Moscow State University (SINP MSU), Moscow,
Russia
30Institute for Nuclear Research of the Russian Academy of Sciences (INR RAN),
Moscow, Russia
31Budker Institute of Nuclear Physics (SB RAS) and Novosibirsk State
University, Novosibirsk, Russia
32Institute for High Energy Physics (IHEP), Protvino, Russia
33Universitat de Barcelona, Barcelona, Spain
34Universidad de Santiago de Compostela, Santiago de Compostela, Spain
35European Organization for Nuclear Research (CERN), Geneva, Switzerland
36Ecole Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
37Physik-Institut, Universität Zürich, Zürich, Switzerland
38Nikhef National Institute for Subatomic Physics, Amsterdam, The Netherlands
39Nikhef National Institute for Subatomic Physics and VU University Amsterdam,
Amsterdam, The Netherlands
40NSC Kharkiv Institute of Physics and Technology (NSC KIPT), Kharkiv, Ukraine
41Institute for Nuclear Research of the National Academy of Sciences (KINR),
Kyiv, Ukraine
42University of Birmingham, Birmingham, United Kingdom
43H.H. Wills Physics Laboratory, University of Bristol, Bristol, United
Kingdom
44Cavendish Laboratory, University of Cambridge, Cambridge, United Kingdom
45Department of Physics, University of Warwick, Coventry, United Kingdom
46STFC Rutherford Appleton Laboratory, Didcot, United Kingdom
47School of Physics and Astronomy, University of Edinburgh, Edinburgh, United
Kingdom
48School of Physics and Astronomy, University of Glasgow, Glasgow, United
Kingdom
49Oliver Lodge Laboratory, University of Liverpool, Liverpool, United Kingdom
50Imperial College London, London, United Kingdom
51School of Physics and Astronomy, University of Manchester, Manchester,
United Kingdom
52Department of Physics, University of Oxford, Oxford, United Kingdom
53Syracuse University, Syracuse, NY, United States
54Pontifícia Universidade Católica do Rio de Janeiro (PUC-Rio), Rio de
Janeiro, Brazil, associated to 2
55Institut für Physik, Universität Rostock, Rostock, Germany, associated to 11
56University of Cincinnati, Cincinnati, OH, United States, associated to 53
aP.N. Lebedev Physical Institute, Russian Academy of Science (LPI RAS),
Moscow, Russia
bUniversità di Bari, Bari, Italy
cUniversità di Bologna, Bologna, Italy
dUniversità di Cagliari, Cagliari, Italy
eUniversità di Ferrara, Ferrara, Italy
fUniversità di Firenze, Firenze, Italy
gUniversità di Urbino, Urbino, Italy
hUniversità di Modena e Reggio Emilia, Modena, Italy
iUniversità di Genova, Genova, Italy
jUniversità di Milano Bicocca, Milano, Italy
kUniversità di Roma Tor Vergata, Roma, Italy
lUniversità di Roma La Sapienza, Roma, Italy
mUniversità della Basilicata, Potenza, Italy
nLIFAELS, La Salle, Universitat Ramon Llull, Barcelona, Spain
oHanoi University of Science, Hanoi, Viet Nam
pMassachusetts Institute of Technology, Cambridge, MA, United States
## 1 Introduction
Measurements of the production cross-sections of charmed hadrons test the
predictions of quantum chromodynamic (QCD) fragmentation and hadronisation
models. Perturbative calculations of charmed hadron production cross-sections
at next-to-leading order using the Generalized Mass Variable Flavour Number
Scheme (GMVFNS) [1, 2, 3, 4, 5, 6] and at fixed order with next-to-leading-log
resummation (FONLL) [7, 8, 9, 10] reproduce the cross-sections measured in the
central rapidity region ($|y|\leq 1$) in $p\bar{p}$ collisions at $\sqrt{s}=1.96\,\mathrm{TeV}$ at the Fermilab Tevatron collider [11] and the cross-sections measured in the central rapidity region ($|y|<0.5$) in $pp$ collisions at $\sqrt{s}=2.76\,\mathrm{TeV}$ [12] and at $\sqrt{s}=7\,\mathrm{TeV}$ [13, 14] at the CERN
Large Hadron Collider (LHC). The LHCb detector at the LHC provides unique
access to the forward rapidity region at these energies with a detector that
is tailored for flavour physics. This paper presents measurements with the
LHCb detector of $D^{0}$, $D^{+}$, $D^{+}_{s}$, $D^{*+}$, and $\Lambda_{c}^{+}$ production in the forward rapidity region $2.0<y<4.5$ in $pp$ collisions at a centre-of-mass energy of $7\,\mathrm{TeV}$. Throughout this article, references to
specific decay modes or specific charmed hadrons also imply the charge
conjugate mode. The measurements are based on $15\mbox{\,nb}^{-1}$ of $pp$
collisions recorded with the LHCb detector in 2010 with approximately $1.1$
visible interactions per triggered bunch crossing.
Charmed hadrons may be produced at the $pp$ collision point either directly or
as feed-down from the instantaneous decays of excited charm resonances. They
may also be produced in decays of $b$-hadrons. In this paper, the first two
sources (direct production and feed-down) are referred to as prompt. Charmed
particles from $b$-hadron decays are called secondary charmed hadrons. The
measurements described here are the production cross-sections of prompt
charmed hadrons. Secondary charmed hadrons are treated as backgrounds. No
attempt is made to distinguish between the two sources of prompt charmed
hadrons.
## 2 Experimental conditions
The LHCb detector [15] is a single-arm forward spectrometer covering the
pseudorapidity range $2<\eta<5$, designed for the study of particles
containing $b$ or $c$ quarks. The detector includes a high precision tracking
system consisting of a silicon-strip vertex detector surrounding the $pp$
interaction region, a large-area silicon-strip detector located upstream of a
dipole magnet with a bending power of about $4{\rm\,Tm}$, and three stations
of silicon-strip detectors and straw drift-tubes placed downstream. The
combined tracking system has a momentum resolution ($\Delta p/p$) that varies
from 0.4% at 5${\mathrm{\,Ge\kern-1.00006ptV\\!/}c}$ to 0.6% at
100${\mathrm{\,Ge\kern-1.00006ptV\\!/}c}$ and an impact parameter
($\mathrm{IP}$) resolution of 20$\,\upmu{\rm m}$ for tracks with high
transverse momentum. Charged hadrons are identified using two ring-imaging
Cherenkov detectors. Photon, electron, and hadron candidates are identified by
a calorimeter system consisting of scintillating-pad and pre-shower detectors,
an electromagnetic calorimeter, and a hadronic calorimeter. Muons are
identified by a system composed of alternating layers of iron and multiwire
proportional chambers. The trigger consists of a hardware stage, based on
information from the calorimeter and muon systems, followed by a software
stage that applies a full event reconstruction.
During the considered data taking period, the rate of bunch crossings at the
LHCb interaction point was sufficiently small that the software stage of the
trigger could process all bunch crossings. Candidate events passed through the
hardware stage of the trigger without filtering. The software stage of the
trigger accepted bunch crossings for which at least one track was
reconstructed in either the silicon-strip vertex detector or the downstream
tracking stations. The sample is divided into two periods of data collection.
In the first $1.9\pm 0.1\mbox{\,nb}^{-1}$ all bunch crossings satisfying these
criteria were retained. In the subsequent $13.1\pm 0.5\mbox{\,nb}^{-1}$ the
trigger retention rate was limited to a randomly selected $(24.0\pm 0.2)\%$ of
all bunch crossings.
For simulated events, $pp$ collisions are generated using Pythia 6.4 [16] with
a specific LHCb configuration [17] that employs the CTEQ6L1 parton densities
[18]. Decays of hadronic particles are described by EvtGen [19] in which final
state radiation is generated using Photos [20]. The interaction of the
generated particles with the detector and its response are implemented using
the Geant4 toolkit [21, *Agostinelli:2002hh] as described in Ref. [23].
## 3 Analysis strategy
The analysis is based on fully reconstructed decays of charmed hadrons in the
following decay modes: $D^{0}\!\rightarrow K^{-}\pi^{+}$, $D^{+}\!\rightarrow K^{-}\pi^{+}\pi^{+}$, $D^{*+}\!\rightarrow D^{0}(K^{-}\pi^{+})\pi^{+}$, $D^{+}_{s}\!\rightarrow\phi(K^{-}K^{+})\pi^{+}$, and $\Lambda_{c}^{+}\!\rightarrow pK^{-}\pi^{+}$. Formally, the $D^{0}\!\rightarrow K^{-}\pi^{+}$ sample contains the sum of the Cabibbo-favoured decays $D^{0}\!\rightarrow K^{-}\pi^{+}$ and the doubly Cabibbo-suppressed decays $\overline{D}{}^{0}\!\rightarrow K^{-}\pi^{+}$. For simplicity, we will refer to the combined sample by its dominant component.
The measurements are performed in two-dimensional bins of the transverse
momentum ($p_{\rm T}$) and rapidity ($y$) of the reconstructed hadrons,
measured with respect to the beam axis in the $pp$ centre-of-mass (CM) frame.
For the $D^{0}$, $D^{+}$, $D^{*+}$, and $D^{+}_{s}$ measurements, we use eight
bins of uniform width in the range $0<\mbox{$p_{\rm
T}$}<8{\mathrm{\,Ge\kern-1.00006ptV\\!/}c}$ and five bins of uniform width in
the range $2.0<y<4.5$. For the $\Lambda_{c}^{+}$ measurement, we
partition the data in two ways: six uniform $p_{\rm T}$ bins in
$2<\mbox{$p_{\rm T}$}<8{\mathrm{\,Ge\kern-1.00006ptV\\!/}c}$ with a single
$2.0<y<4.5$ bin and a single $2<\mbox{$p_{\rm
T}$}<8{\mathrm{\,Ge\kern-1.00006ptV\\!/}c}$ bin with five uniform $y$ bins in
$2.0<y<4.5$.
### 3.1 Selection criteria
The selection criteria were tuned independently for each decay. The same
selection criteria are used for $D^{0}\\!\rightarrow K^{-}\pi^{+}$ candidates
in the $D^{0}$ and $D^{*+}$ cross-section measurements. We use only events
that have at least one reconstructed primary interaction vertex (PV). Each
final state kaon, pion, or proton candidate used in the reconstruction of a
$D^{0}$, $D^{+}$, $D^{+}_{s}$, or $\Lambda_{c}^{+}$ candidate
must be positively identified. Because of the relatively long lifetimes of the
$D^{0}$, $D^{+}$, $D^{+}_{s}$, and $\Lambda_{c}^{+}$ hadrons,
the trajectories of their decay products will not, in general, point directly
back to the PV at which the charmed hadron was produced. To exploit this
feature, the selections for these decays require that each final state
candidate has a minimum impact parameter $\chi^{2}$ ($\mathrm{IP}\,\chi^{2}$)
with respect to the PV. The $\mathrm{IP}\,\chi^{2}$ is defined as the
difference between the $\chi^{2}$ of the PV reconstructed with and without the
considered particle. For the $D^{0}$ and $\Lambda_{c}^{+}$
reconstruction, a common $\mathrm{IP}\,\chi^{2}$ requirement is imposed on all
final state particles. For the $D^{+}$ and $D^{+}_{s}$ candidates,
progressively stricter limits are used for the three daughters. Final-state
decay products of charmed hadrons have transverse momenta that are generally
larger than those of stable charged particles produced at the PV. Applying
lower limits on the $p_{\rm T}$ of the final state tracks suppresses
combinatorial backgrounds in the selections of $D^{0}$, $D^{+}$, and
$\Lambda_{c}^{+}$ samples.
The selections of candidate charmed hadron decays are further refined by
studying properties of the combinations of the selected final state particles.
Candidate $D^{+}_{s}\\!\rightarrow\phi(K^{-}K^{+})\pi^{+}$ decays are required
to have a $K^{-}K^{+}$ invariant mass within $\pm
20{\mathrm{\,Me\kern-1.00006ptV\\!/}c^{2}}$ of the $\phi(1020)$ mass [24]. The
decay products for each candidate charmed hadron must be consistent with
originating from a common vertex with a good quality fit. The significant
lifetimes of $D^{0}$, $D^{+}$, $D^{+}_{s}$, and $\Lambda_{c}^{+}$ hadrons are exploited by requiring that the fitted decay vertices are significantly displaced from the PV. The trajectory of a prompt
charmed hadron should point back to the PV in which it was produced. For
$D^{0}$ candidates this is exploited as a requirement that
$\mathrm{IP}\,\chi^{2}<100$. For $D^{0}$ decays, we use one additional
discriminating variable: the angle between the momentum of the $D^{0}$
candidate in the laboratory frame and the momentum of the pion candidate from
its decay evaluated in the $D^{0}$ rest frame. The cosine of this angle has a
flat distribution for $D^{0}$ decays but peaks strongly in the forward
direction for combinatorial backgrounds. Candidate $D^{*+}$ decays are
reconstructed from $D^{0}$ and slow pion candidates. Figures 1–3 show the
invariant mass distributions and the $\log_{10}(\mathrm{IP}\,\chi^{2})$
distributions of the selected charmed hadron candidates.
Figure 1: Mass and $\log_{10}(\mathrm{IP}\,\chi^{2})$ distributions for selected $D^{0}\!\rightarrow K^{-}\pi^{+}$ and $D^{+}\!\rightarrow K^{-}\pi^{+}\pi^{+}$ candidates showing (a) the masses of the $D^{0}$ candidates, (b) the $\log_{10}(\mathrm{IP}\,\chi^{2})$ distribution of $D^{0}$ candidates for a mass window of $\pm 16\,\mathrm{MeV}/c^{2}$ (approximately $\pm 2\sigma$) around the fitted $m(K^{-}\pi^{+})$ peak, (c) the masses of the $D^{+}$ candidates, and (d) the $\log_{10}(\mathrm{IP}\,\chi^{2})$ distribution of $D^{+}$ candidates for a mass window of $\pm 11\,\mathrm{MeV}/c^{2}$ (approximately $\pm 2\sigma$) around the fitted $m(K^{-}\pi^{+}\pi^{+})$ peak. Projections of likelihood fits to the full data samples are shown with components as indicated in the legends.
Figure 2: Mass and $\log_{10}(\mathrm{IP}\,\chi^{2})$ distributions for selected $D^{*+}\!\rightarrow D^{0}(K^{-}\pi^{+})\pi^{+}$ candidates showing (a) the masses of the $D^{0}$ candidates for a window of $\pm 1.6\,\mathrm{MeV}/c^{2}$ (approximately $\pm 2\sigma$) around the fitted $\Delta m$ peak, (b) the differences between the $D^{*+}$ and $D^{0}$ candidate masses for a mass window of $\pm 16\,\mathrm{MeV}/c^{2}$ (approximately $\pm 2\sigma$) around the fitted $m(K^{-}\pi^{+})$ peak, and (c) the $\log_{10}(\mathrm{IP}\,\chi^{2})$ distribution of the $D^{0}$ candidate for a mass signal box of $\pm 16\,\mathrm{MeV}/c^{2}$ around the fitted $m(K^{-}\pi^{+})$ peak and $\pm 1.6\,\mathrm{MeV}/c^{2}$ around the fitted $\Delta m$ peak. Projections of a likelihood fit to the full data sample are shown with components as indicated in the legend. The ‘$D^{0}$ backgrounds’ component is the sum of the secondary, prompt random slow pion, and secondary random slow pion backgrounds.
Figure 3: Mass and $\log_{10}(\mathrm{IP}\,\chi^{2})$ distributions for selected $D^{+}_{s}\!\rightarrow\phi(K^{-}K^{+})\pi^{+}$ and $\Lambda_{c}^{+}\!\rightarrow pK^{-}\pi^{+}$ candidates showing (a) the masses of the $D^{+}_{s}$ candidates, (b) the $\log_{10}(\mathrm{IP}\,\chi^{2})$ distribution of $D^{+}_{s}$ candidates for a mass window of $\pm 8\,\mathrm{MeV}/c^{2}$ (approximately $\pm 2\sigma$) around the fitted $m(\phi(K^{-}K^{+})\pi^{+})$ peak, (c) the masses of the $\Lambda_{c}^{+}$ candidates, and (d) the $\log_{10}(\mathrm{IP}\,\chi^{2})$ distribution of $\Lambda_{c}^{+}$ candidates for a mass window of $\pm 12\,\mathrm{MeV}/c^{2}$ (approximately $\pm 2\sigma$) around the fitted $m(pK^{-}\pi^{+})$ peak. Projections of likelihood fits to the full data samples are shown with components as indicated in the legends.
We factorise the efficiencies for reconstructing and selecting signal decays
into components that are measured with independent studies. The particle
identification (PID) efficiencies for pions, kaons, and protons are measured
in data in bins of track $p_{\rm T}$ and pseudorapidity, $\eta$, using high
purity samples of pions, kaons, and protons from $K^{0}_{\rm S}$, $\phi(1020)$, and $\Lambda$ decays. The effective total PID
efficiency for each $(\mbox{$p_{\rm T}$},y)$ bin of each charmed hadron decay
mode is determined by calculating the average efficiency over the bin using
these final state PID efficiencies and the final state $(\mbox{$p_{\rm
T}$},\eta)$ distributions from simulated decays. The efficiencies of the
remaining selection criteria are determined from studies with the full event
simulation.
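The averaging step described above can be illustrated with a short sketch. This is not LHCb code; the efficiency map, bin edges, and candidate format below are invented for illustration only: per-track PID efficiencies measured in $(p_{\rm T},\eta)$ bins are multiplied for each simulated candidate and then averaged over the candidates falling in a given bin of the charmed hadron.

```python
import numpy as np

def track_eff(pt, eta, eff_map, pt_edges, eta_edges):
    """Look up a per-track PID efficiency in a 2D (pT, eta) map."""
    i = np.clip(np.searchsorted(pt_edges, pt) - 1, 0, eff_map.shape[0] - 1)
    j = np.clip(np.searchsorted(eta_edges, eta) - 1, 0, eff_map.shape[1] - 1)
    return eff_map[i, j]

def bin_pid_efficiency(candidates, eff_map, pt_edges, eta_edges):
    """Average the product of per-track efficiencies over simulated candidates.

    `candidates` is a list of decays, each a list of (pT, eta) tuples for its
    final-state tracks (a hypothetical input format used only in this sketch).
    """
    per_cand = [np.prod([track_eff(pt, eta, eff_map, pt_edges, eta_edges)
                         for pt, eta in tracks])
                for tracks in candidates]
    return np.mean(per_cand)

# Toy example: a 2x2 efficiency map and two simulated two-body decays.
eff_map = np.array([[0.90, 0.92], [0.95, 0.97]])
pt_edges, eta_edges = [0, 2, 8], [2, 3.5, 5]
cands = [[(1.0, 2.5), (3.0, 4.0)], [(2.5, 3.0), (4.0, 4.5)]]
print(bin_pid_efficiency(cands, eff_map, pt_edges, eta_edges))
```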
### 3.2 Determination of signal yields
We use multidimensional extended maximum likelihood fits to the mass and
$\log_{10}(\mathrm{IP}\,\chi^{2})$ distributions to determine the prompt
signal yields. For the $D^{*+}\\!\rightarrow D^{0}\pi^{+}$ mode the
$\log_{10}(\mathrm{IP}\,\chi^{2})$ of the daughter $D^{0}$ is used. The
selected candidates contain secondary backgrounds from signal decays produced
in decays of $b$-hadrons and combinatorial backgrounds. The
$D^{*+}\\!\rightarrow D^{0}\pi^{+}$ decay has two additional sources of
background from $D^{0}$ decays combined with unrelated slow pion candidates:
prompt random slow pion backgrounds in which the $D^{0}$ mesons are produced
at the PV and secondary random slow pion backgrounds in which the $D^{0}$
mesons are produced in decays of $b$-hadrons. The combinatorial backgrounds
are separated from the remaining components with the reconstructed $D^{0}$,
$D^{+}$, $D^{+}_{s}$, and $\Lambda_{c}^{+}$ mass distributions. Analysis of the $\log_{10}(\mathrm{IP}\,\chi^{2})$ distributions allows
separation of the prompt signal and secondary backgrounds. The additional
random slow pion backgrounds in the $D^{*+}\\!\rightarrow
D^{0}(K^{-}\pi^{+})\pi^{+}$ mode are identified in the distribution of the
difference $\Delta m$ between the masses of the $D^{*+}$ and $D^{0}$
candidates. Thus the prompt signal yields for $D^{0}$, $D^{+}$, $D^{+}_{s}$,
and $\Lambda_{c}^{+}$ decays are measured with two-dimensional
fits to the mass and $\log_{10}(\mathrm{IP}\,\chi^{2})$, and the prompt signal
yields for $D^{*+}$ decays are determined with three-dimensional fits to the
$D^{0}$ candidate mass, $\Delta m$, and $\log_{10}(\mathrm{IP}\,\chi^{2})$.
The extended likelihood functions are constructed from multidimensional
probability density functions (PDFs). For each class of events, the
multidimensional PDF is the product of an appropriate one-dimensional PDF in
each variable:
Prompt signal:
The mass distributions are represented by Crystal Ball functions [25] for
$D^{0}$ decays (both direct and from $D^{*+}$ mesons), double Gaussian
functions for the $D^{+}$ and $D^{+}_{s}$ modes, and a single Gaussian
function for the $\Lambda_{c}^{+}$ mode. The $\Delta m$
distribution for the $D^{*+}$ mode is represented by a Crystal Ball function.
The $\log_{10}(\mathrm{IP}\,\chi^{2})$ distributions are represented by
bifurcated Gaussian functions with exponential tails defined as
$f_{\mathrm{BG}}\left(x;\mu,\sigma,\varepsilon,\rho_{L},\rho_{R}\right)=\begin{cases}\exp\left(\frac{\rho_{L}^{2}}{2}+\frac{x-\mu}{\sigma\cdot(1-\varepsilon)}\cdot\rho_{L}\right)&\text{if }x<\mu-\rho_{L}\cdot\sigma\cdot(1-\varepsilon),\\ \exp\left(-\frac{(x-\mu)^{2}}{2\cdot\sigma^{2}\cdot(1-\varepsilon)^{2}}\right)&\text{if }\mu-\rho_{L}\cdot\sigma\cdot(1-\varepsilon)<x<\mu,\\ \exp\left(-\frac{(x-\mu)^{2}}{2\cdot\sigma^{2}\cdot(1+\varepsilon)^{2}}\right)&\text{if }\mu<x<\mu+\rho_{R}\cdot\sigma\cdot(1+\varepsilon),\\ \exp\left(\frac{\rho_{R}^{2}}{2}-\frac{x-\mu}{\sigma\cdot(1+\varepsilon)}\cdot\rho_{R}\right)&\text{if }\mu+\rho_{R}\cdot\sigma\cdot(1+\varepsilon)<x,\end{cases}$ (1)
where $\mu$ is the mode of the distribution, $\sigma$ is the average of the
left and right Gaussian widths, $\varepsilon$ is the asymmetry of the left and
right Gaussian widths, and $\rho_{L(R)}$ is the exponential coefficient for
the left (right) tail (a short numerical sketch of this shape is given after the list of event classes below).
Secondary backgrounds:
The functions representing the mass (and $\Delta m$) distributions are
identical to those used for the prompt signal in each case. The
$\log_{10}(\mathrm{IP}\,\chi^{2})$ distributions are represented by
$f_{\mathrm{BG}}$ functions.
Combinatorial backgrounds:
The mass distributions are represented by first order polynomials. The
$\log_{10}(\mathrm{IP}\,\chi^{2})$ distributions are represented by
$f_{\mathrm{BG}}$ functions. The $\Delta m$ distribution for the $D^{*+}$ mode
is represented by a power-law function $C\left(\mbox{$\Delta
m$}-M_{\pi}\right)^{p}$ where the exponent $p$ is a free parameter; $M_{\pi}$
is the pion mass and $C$ is a normalisation constant.
Prompt random slow pion backgrounds
($D^{*+}$ only): The functions representing the mass and
$\log_{10}(\mathrm{IP}\,\chi^{2})$ distributions are identical to those used
for the prompt signal. The function representing the $\Delta m$ distribution
is the same power law function as that used for the combinatorial backgrounds.
Secondary random slow pion backgrounds
($D^{*+}$ only): The functions representing the mass and
$\log_{10}(\mathrm{IP}\,\chi^{2})$ distributions are identical to those used
for the secondary backgrounds. The function representing the $\Delta m$
distribution is the same power law function as that used for the combinatorial
backgrounds.
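As a concrete illustration of the shape in Eq. (1), the following is a minimal sketch, not the collaboration's fit code: the normalisation is omitted and the parameter values in the example are invented, since in the extended likelihood fit the shape would be normalised and its parameters determined from data or simulation.

```python
import numpy as np

def f_bg(x, mu, sigma, eps, rho_l, rho_r):
    """Unnormalised bifurcated Gaussian with exponential tails, Eq. (1)."""
    x = np.asarray(x, dtype=float)
    sig_l = sigma * (1.0 - eps)   # left-side Gaussian width
    sig_r = sigma * (1.0 + eps)   # right-side Gaussian width
    lo = mu - rho_l * sig_l       # boundary between left tail and Gaussian core
    hi = mu + rho_r * sig_r       # boundary between Gaussian core and right tail
    return np.select(
        [x < lo, x < mu, x < hi],
        [np.exp(0.5 * rho_l**2 + (x - mu) / sig_l * rho_l),  # left exponential tail
         np.exp(-0.5 * ((x - mu) / sig_l) ** 2),             # left Gaussian half
         np.exp(-0.5 * ((x - mu) / sig_r) ** 2)],            # right Gaussian half
        default=np.exp(0.5 * rho_r**2 - (x - mu) / sig_r * rho_r))  # right tail

# Example: evaluate the shape on a grid of log10(IP chi^2) values
# with purely illustrative parameters.
grid = np.linspace(-4.0, 4.0, 9)
print(f_bg(grid, mu=0.0, sigma=1.0, eps=0.1, rho_l=1.5, rho_r=2.0))
```

The two Gaussian halves and the exponential tails match at the boundaries by construction, so the function is continuous for any parameter choice.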
Shape parameters for the $\log_{10}(\mathrm{IP}\,\chi^{2})$ distributions of
combinatorial backgrounds are fixed based on fits to the mass sidebands. Those
of the prompt signal, secondary backgrounds, and random slow pion backgrounds
are fixed based on fits to simulated events. Figures 1–3 show the results of
single fits to the full $0<\mbox{$p_{\rm
T}$}<8{\mathrm{\,Ge\kern-1.00006ptV\\!/}c}$, $2.0<y<4.5$ kinematic region.
The extended maximum likelihood fits are performed for each $p_{\rm T}$-$y$
bin. We simultaneously fit groups of adjacent bins constraining to the same
value several parameters that are expected to vary slowly across the kinematic
region. The secondary background component in the $\Lambda_{c}^{+}$ mode is too small to be measured reliably. We set its
yield to zero when performing the fits and adopt a systematic uncertainty of
3% to account for the small potential contamination from secondary production.
### 3.3 Systematic uncertainties
There are three classes of systematic uncertainties: globally correlated
sources, sources that are correlated between bins but uncorrelated between
decay modes, and sources that are uncorrelated between bins and decay modes.
The globally correlated contributions are the uncertainty on the measured
luminosity and the uncertainty on the tracking efficiency. The former is a
uniform $3.5\%$ for each mode. The latter is $3\%$ per final state track in
the $D^{0}$, $D^{+}$, $D^{+}_{s}$, and $\Lambda_{c}^{+}$
measurements and $4\%$ for the slow pion in the $D^{*+}$ measurement. We adopt
the uncertainty of the branching fractions as a bin-correlated systematic
uncertainty. Systematic uncertainties of the reconstruction and selection
efficiencies include contributions from the limited size of the simulated
samples, failures in the association between generated and reconstructed
particles in the simulation, differences between the observed and simulated
distributions of selection variables, and differences between the simulated
and actual resonance models in the $D^{+}$ and $\Lambda_{c}^{+}$
measurements. The yield determination includes uncertainties from the fit
models, from peaking backgrounds due to mis-reconstructed charm cross-feed,
and from potential variations in the yields of secondary backgrounds. Where
possible, the sizes of the systematic uncertainties are evaluated
independently for each bin. The sources of systematic uncertainties are
uncorrelated, and the total systematic uncertainty in each bin of each mode is
determined by adding the systematic uncertainties in quadrature. Table 1
summarises the systematic uncertainties.
Table 1: Overview of systematic uncertainties and their values, expressed as relative fractions of the cross-section measurements in percent (%). Uncertainties that are computed bin-by-bin are expressed as ranges giving the minimum to maximum values of the bin uncertainties. The correlated and uncorrelated uncertainties are shown as discussed in the text. Source | $D^{0}$ | $D^{*+}$ | $D^{+}$ | $D^{+}_{s}$ | $\Lambda_{c}^{+}$
---|---|---|---|---|---
Selection and reconstruction (correlated) | 1.6 | 2.6 | 4.3 | 5.3 | 0.4
Selection and reconstruction (uncorrelated) | 1–12 | 3–9 | 1–10 | 4–9 | 5–17
Yield determination (correlated) | 2.5 | 2.5 | 0.5 | 1.0 | 3.0
Yield determination (uncorrelated) | – | – | 1–5 | 2–14 | 4–9
PID efficiency | 1–5 | 1–5 | 6–19 | 1–15 | 5–9
Tracking efficiency | 6 | 10 | 9 | 9 | 9
Branching fraction | 1.3 | 1.5 | 2.1 | 5.8 | 26.0
Luminosity | 3.5 | 3.5 | 3.5 | 3.5 | 3.5
As cross-checks, additional cross-section measurements are performed with the
decay modes $D^{0}\\!\rightarrow K^{-}\pi^{+}\pi^{-}\pi^{+}$ and
$D^{+}\\!\rightarrow\phi(K^{-}K^{+})\pi^{+}$ and with a selection of
$D^{0}\\!\rightarrow K^{-}\pi^{+}$ decays that does not use particle
identification information. Their results are in agreement with the results
from our nominal measurements.
## 4 Cross-section measurements
The signal yields determined from the data allow us to measure the
differential cross-sections as functions of $p_{\rm T}$ and $y$ in the range
$0<\mbox{$p_{\rm T}$}<8{\mathrm{\,Ge\kern-1.00006ptV\\!/}c}$ and $2.0<y<4.5$.
The differential cross-section for producing hadron species $H_{c}$ or its
charge conjugate in bin $i$,
${\mathrm{d}{\sigma_{i}(H_{c})}}/{\mathrm{d}{\mbox{$p_{\rm T}$}}}$, integrated
over the $y$ range of the bin is calculated with the relation
$\frac{\mathrm{d}\sigma_{i}(H_{c})}{\mathrm{d}p_{\rm T}}=\frac{1}{\Delta p_{\rm T}}\cdot\frac{N_{i}(H_{c}\!\rightarrow f+\mathrm{c.c.})}{\varepsilon_{i,\mathrm{tot}}(H_{c}\!\rightarrow f)\cdot{\cal B}(H_{c}\!\rightarrow f)\cdot\mathcal{L}_{\mathrm{int}}},$ (2)
where $\Delta\mbox{$p_{\rm T}$}$ is the width in $p_{\rm T}$ of bin $i$,
typically $1{\mathrm{\,Ge\kern-1.00006ptV\\!/}c}$,
$N_{i}(\mbox{$H_{c}\\!\rightarrow f$}+\mbox{c.c.})$ is the measured yield of
$H_{c}$ and their charge conjugate decays in bin $i$, ${\cal
B}(\mbox{$H_{c}\\!\rightarrow f$})$ is the branching fraction of the decay,
$\varepsilon_{i,\mathrm{tot}}(\mbox{$H_{c}\\!\rightarrow f$})$ is the total
efficiency for observing the signal decay in bin $i$, and
$\mathcal{L}_{\mathrm{int}}=15.0\pm 0.5\mbox{\,nb}^{-1}$ is the integrated
luminosity of the sample. The following branching fractions from Ref. [24] are
used: ${\cal B}(D^{+}\!\rightarrow K^{-}\pi^{+}\pi^{+})=(9.13\pm 0.19)\%$, ${\cal B}(D^{*+}\!\rightarrow D^{0}(K^{-}\pi^{+})\pi^{+})=(2.63\pm 0.04)\%$, ${\cal B}(\Lambda_{c}^{+}\!\rightarrow pK^{-}\pi^{+})=(5.0\pm 1.3)\%$, and ${\cal B}((D^{0}+\overline{D}{}^{0})\!\rightarrow K^{-}\pi^{+})=(3.89\pm 0.05)\%$, where the last is the sum of Cabibbo-favoured and doubly Cabibbo-suppressed branching fractions. For the $D^{+}_{s}$ measurement we use the branching fraction of $D^{+}_{s}\!\rightarrow K^{-}K^{+}\pi^{+}$ in a $\pm 20\,\mathrm{MeV}/c^{2}$ window around the $\phi(1020)$ mass: ${\cal B}(D^{+}_{s}\!\rightarrow\phi(K^{-}K^{+})\pi^{+})=(2.24\pm 0.13)\%$ [26]. The measured differential cross-sections are tabulated in the
appendix. Bins with a sample size insufficient to produce a measurement with a
total relative uncertainty of less than $50\%$ are discarded.
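For orientation, Eq. (2) can be evaluated with a few lines of code. The yield and efficiency below are invented numbers used only to show the arithmetic and the unit conversion; only the branching fraction and the integrated luminosity are the values quoted above.

```python
def dsigma_dpt(n_signal, eff_tot, branching, lumi_nb_inv, dpt_gev):
    """Differential cross-section of Eq. (2) in ub/(GeV/c).

    n_signal    -- fitted prompt yield (signal + charge conjugate) in the bin
    eff_tot     -- total reconstruction and selection efficiency in the bin
    branching   -- branching fraction of the reconstructed decay
    lumi_nb_inv -- integrated luminosity in nb^-1 (1 nb^-1 = 1e3 ub^-1)
    dpt_gev     -- pT width of the bin in GeV/c
    """
    lumi_ub_inv = lumi_nb_inv * 1.0e3          # convert nb^-1 to ub^-1
    return n_signal / (eff_tot * branching * lumi_ub_inv * dpt_gev)

# Hypothetical bin: 500 signal decays, 1% total efficiency, 1 GeV/c bin width,
# the D0 -> K- pi+ branching fraction, and the 15 nb^-1 sample of this analysis.
print(dsigma_dpt(n_signal=500, eff_tot=0.01, branching=0.0389,
                 lumi_nb_inv=15.0, dpt_gev=1.0))   # roughly 86 ub/(GeV/c)
```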
Theoretical expectations for the production cross-sections of charmed hadrons
have been calculated by Kniehl et al. using the GMVFNS scheme [1, 2, 3, 4, 5,
6] and Cacciari et al. using the FONLL approach [7, 8, 9, 10]. Both groups
have provided differential cross-sections as functions of $p_{\rm T}$ and
integrated over bins in $y$.
The FONLL calculations use the CTEQ 6.6 [27] parameterisation of the parton
densities. They include estimates of theoretical uncertainties due to the
charm quark mass and the renormalisation and factorisation scales. However, we
display only the central values in Figs. 4–5. The theoretical calculations
assume unit transition probabilities from a primary charm quark to the
exclusive hadron state. The actual transition probabilities that we use to
convert the predictions to measurable cross-sections are those quoted by Ref.
[28], based on measurements from $e^{+}e^{-}$ colliders close to the
$\Upsilon(4S)$ resonance: $f(\mbox{$c\rightarrow D^{0}$})=0.565\pm 0.032$,
$f(\mbox{$c\rightarrow D^{+}$})=0.246\pm 0.020$, $f(\mbox{$c\rightarrow
D^{*+}$})=0.224\pm 0.028$, $f(\mbox{$c\rightarrow D^{+}_{s}$})=0.080\pm
0.017$, and $f(c\rightarrow\Lambda_{c}^{+})=0.094\pm
0.035$. Note that the transition probabilities do not sum up to unity, since,
e.g., $f(\mbox{$c\rightarrow D^{0}$})$ has an overlapping contribution from
$f(\mbox{$c\rightarrow D^{*+}$})$. No dedicated calculation for $D^{+}_{s}$
production is available. The respective prediction was obtained by scaling the
kinematically similar $D^{*+}$ prediction by the ratio $f(\mbox{$c\rightarrow
D^{+}_{s}$})/f(\mbox{$c\rightarrow D^{*+}$})$.
The GMVFNS calculations include theoretical predictions for all hadrons
studied in our analysis. Results were provided for $\mbox{$p_{\rm
T}$}>3{\mathrm{\,Ge\kern-1.00006ptV\\!/}c}$. The uncertainties from scale
variations were determined only for the case of $D^{0}$ production. The
relative sizes of the uncertainties for the other hadron species are assumed
to be the same as those for the $D^{0}$. Here the CTEQ 6.5 [29] set of parton
densities was used. Predictions for $D^{0}$ mesons were also provided using
the CTEQ 6.5c2 [30] parton densities with intrinsic charm. As shown in Fig.
4a, in the phase space region of the present measurement the effect of
intrinsic charm is predicted to be small. The GMVFNS theoretical framework
includes the convolution with fragmentation functions describing the
transition $c\rightarrow H_{c}$ that are normalised to the respective total
transition probabilities [4]. The fragmentation functions are results of a fit
to production measurements at $e^{+}e^{-}$ colliders, where no attempt was
made in the fit to separate direct production and feed-down from higher
resonances.
To compare the theoretical calculations to our measurements, the theoretical
differential cross-sections were integrated over the $p_{\rm T}$ bins and then
divided by the bin width $\Delta\mbox{$p_{\rm T}$}$. The integration was
performed numerically with a third-order spline interpolation of the
differential cross-sections.
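The bin-averaging step can be reproduced with standard tools; the sketch below uses a cubic spline (third-order interpolation) on an invented theory curve, so the numbers are illustrative only.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical theory points: dsigma/dpT in ub/(GeV/c) at pT values spanning 0-8 GeV/c.
pt_theory = np.linspace(0.0, 8.0, 9)
dsig_theory = np.array([95.0, 140.0, 80.0, 38.0, 17.0, 8.5, 4.5, 2.3, 1.2])

spline = CubicSpline(pt_theory, dsig_theory)   # third-order spline interpolation
bin_edges = np.arange(0.0, 9.0, 1.0)           # 1 GeV/c wide bins

for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
    avg = spline.integrate(lo, hi) / (hi - lo)  # integral over the bin / bin width
    print(f"{lo:.0f}-{hi:.0f} GeV/c: {avg:6.1f} ub/(GeV/c)")
```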
The measured cross-sections compared to the theoretical predictions are shown
in Figs. 4–5. For better visibility, theoretical predictions are displayed as
smooth curves such that the value at the bin centre corresponds to the
differential cross-section calculated in that bin. The data points with their
uncertainties, which are always drawn at the bin centre, thus can be directly
compared with theory. The predictions agree well with our measurements,
generally bracketing the observed values between the FONLL and GMVFNS
calculations.
Figure 4: Differential cross-sections for (a) $D^{0}$, (b) $D^{+}$, (c) $D^{*+}$, and (d) $D^{+}_{s}$ meson production compared to theoretical predictions. The cross-sections for different $y$ regions are shown as functions of $p_{\rm T}$. The $y$ ranges are shown as separate curves and associated sets of points scaled by factors $10^{-m}$, where the exponent $m$ is shown on the plot with the $y$ range. The error bars associated with the data points show the sum in quadrature of the statistical and total systematic uncertainty. The shaded regions show the range of theoretical uncertainties for the GMVFNS prediction.
Figure 5: Differential cross-sections for $\Lambda_{c}^{+}$ baryon production compared to the theoretical prediction from the GMVFNS scheme. The error bars associated with the data points show the sum in quadrature of the statistical and total systematic uncertainty. The shaded region shows the range of theoretical uncertainty for the theoretical prediction.
## 5 Production ratios and integrated cross-sections
Charmed hadron production ratios and total cross-sections are determined for
the kinematic range $0<\mbox{$p_{\rm
T}$}<8{\mathrm{\,Ge\kern-1.00006ptV\\!/}c}$ and $2.0<y<4.5$. Bins where the
relative uncertainty on the yield exceeds $50\%$ (left blank in Tables 5–10 of
the appendix) are not used. Instead, the cross-sections are extrapolated from
the remaining bins with predictions obtained from Pythia 6.4. The
extrapolation factors are computed as the ratios of the predicted cross-
sections integrated over $0<\mbox{$p_{\rm
T}$}<8{\mathrm{\,Ge\kern-1.00006ptV\\!/}c}$ and $2.0<y<4.5$ to the predicted
cross-sections integrated over the well measured bins for each of four tunes
of Pythia 6.4: LHCb-tune [17], Perugia 0, Perugia NOCR, and Perugia 2010 [31].
The mean of these four ratios is used as a multiplicative factor to
extrapolate the sum of the well measured bins to the full kinematic range
under study. The root mean square of the four ratios is taken as a systematic
uncertainty associated with the extrapolation. We confirm that this procedure
gives uncertainties of appropriate size by examining the variance of the
ratios for individual well measured bins. The resulting integrated cross-
sections for each hadron species are given in Table 2.
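The extrapolation just described can be summarised in a few lines; the per-tune predictions and the measured sum below are invented numbers, so the sketch only illustrates how the factor, its uncertainty, and the extrapolated cross-section are formed.

```python
import numpy as np

# Hypothetical predicted cross-sections (arbitrary units) per Pythia tune:
# (full 0 < pT < 8 GeV/c, 2.0 < y < 4.5 range, well measured bins only).
predictions = {
    "LHCb-tune":    (1000.0, 950.0),
    "Perugia 0":    (1020.0, 960.0),
    "Perugia NOCR": ( 990.0, 930.0),
    "Perugia 2010": (1010.0, 955.0),
}

ratios = np.array([full / measured for full, measured in predictions.values()])
factor = ratios.mean()        # mean ratio used as the multiplicative factor
factor_unc = ratios.std()     # RMS of the ratios taken as its systematic uncertainty

measured_sum = 630.0          # hypothetical sum of the well measured bins (ub)
print(f"extrapolation factor = {factor:.3f} +- {factor_unc:.3f}")
print(f"extrapolated cross-section = {measured_sum * factor:.1f} ub")
```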
Table 2: Open charm production cross-sections in the kinematic range $0<\mbox{$p_{\rm T}$}<8{\mathrm{\,Ge\kern-0.90005ptV\\!/}c}$ and $2.0<y<4.5$. The computation of the extrapolation factors is described in the text. The first uncertainty is statistical, the second is systematic, and the third is the contribution from the extrapolation factor. | Extrapolation factor | Cross-section (${\rm\upmu b}$)
---|---|---
$D^{0}$ | | $1.003$ ± | $0.001$ | | 1661 $\pm$ | 16 $\pm$ | 128 $\pm$ | 2
$D^{+}$ | | $1.067$ ± | $0.013$ | | 645 $\pm$ | 11 $\pm$ | 72 $\pm$ | 8
$D^{*+}$ | | $1.340$ ± | $0.037$ | | 677 $\pm$ | 26 $\pm$ | 77 $\pm$ | 19
$D^{+}_{s}$ | | $1.330$ ± | $0.056$ | | 197 $\pm$ | 14 $\pm$ | 26 $\pm$ | 8
$\Lambda_{c}^{+}$ | | $1.311$ ± | $0.077$ | | 233 $\pm$ | 26 $\pm$ | 71 $\pm$ | 14
Accounting for the correlations among the sources of systematic uncertainty,
we obtain the correlation matrix for the total uncertainties of the integrated
cross-section measurements shown in Table 3. The ratios of the production
cross-sections in the kinematic range $0<\mbox{$p_{\rm
T}$}<8{\mathrm{\,Ge\kern-1.00006ptV\\!/}c}$ and $2.0<y<4.5$ are given in Table
4.
Table 3: Correlation matrix of the uncertainties of the integrated open charm production cross-sections in the kinematic range $0<\mbox{$p_{\rm T}$}<8{\mathrm{\,Ge\kern-0.90005ptV\\!/}c}$ and $2.0<y<4.5$. The first column restates measured values of the integrated cross-sections. | $\sigma(D^{0})$ | $\sigma(D^{+})$ | $\sigma(D^{*+})$ | $\sigma(D^{+}_{s})$
---|---|---|---|---
$\sigma(D^{0})$ = | 1661 $\pm$ | 129${\rm\,\upmu b}$ | | | |
$\sigma(D^{+})$ = | 645 $\pm$ | 74${\rm\,\upmu b}$ | $0.76$ | | |
$\sigma(D^{*+})$ = | 677 $\pm$ | 83${\rm\,\upmu b}$ | $0.77$ | $0.73$ | |
$\sigma(D^{+}_{s})$ = | 197 $\pm$ | 31${\rm\,\upmu b}$ | $0.55$ | $0.52$ | $0.53$ |
$\sigma(\Lambda_{c}^{+})$ = | 233 $\pm$ | 77${\rm\,\upmu b}$ | $0.26$ | $0.25$ | $0.25$ | $0.18$
Table 4: Cross-section ratios for open charm production in the kinematic range $0<\mbox{$p_{\rm T}$}<8{\mathrm{\,Ge\kern-0.90005ptV\\!/}c}$ and $2.0<y<4.5$. The numbers in the table are the ratios of the respective row/column. | $\sigma(D^{0})$ | $\sigma(D^{+})$ | $\sigma(D^{*+})$ | $\sigma(D^{+}_{s})$
---|---|---|---|---
$\sigma(D^{+})$ | $0.389\pm 0.029$ | | |
$\sigma(D^{*+})$ | $0.407\pm 0.033$ | $1.049\pm 0.092$ | |
$\sigma(D^{+}_{s})$ | $0.119\pm 0.016$ | $0.305\pm 0.042$ | $0.291\pm 0.041$ |
$\sigma(\Lambda_{c}^{+})$ | $0.140\pm 0.045$ | $0.361\pm 0.116$ | $0.344\pm 0.111$ | $1.183\pm 0.402$
Finally, we determine the total charm cross-section contributing to charmed
hadron production inside the acceptance of this study, $0<p_{\rm T}<8\,\mathrm{GeV}/c$ and $2.0<y<4.5$. Combining our measurements $\sigma(H_{c})$ with the corresponding fragmentation functions $f(c\rightarrow H_{c})$ from Ref. [28] gives five estimates of $\sigma(c\bar{c})=\sigma(H_{c})/(2f(c\rightarrow H_{c}))$. The factor of $2$ appears in the denominator because we have defined $\sigma(H_{c})$ to be the cross-section to produce either $H_{c}$ or its charge conjugate. A combination of all five measurements taking correlations into account gives
$\sigma(c\bar{c})_{p_{\mathrm{T}}<8\,\mathrm{GeV}/c,\,2.0<y<4.5}=1419\pm 12\,\mathrm{(stat)}\pm 116\,\mathrm{(syst)}\pm 65\,\mathrm{(frag)}{\rm\,\upmu b},$
where the final uncertainty is due to the fragmentation functions.
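As a cross-check of the scale of this combination, the five per-species estimates can be formed directly from the integrated cross-sections of Table 2 and the fragmentation fractions quoted in Sect. 4. The naive unweighted mean at the end ignores the correlations and uncertainties that the combination above accounts for, so it is only an illustration.

```python
# sigma(ccbar) = sigma(H_c) / (2 * f(c -> H_c)) for each species.
sigma_ub = {"D0": 1661.0, "D+": 645.0, "D*+": 677.0, "Ds+": 197.0, "Lc+": 233.0}
frag =     {"D0": 0.565,  "D+": 0.246, "D*+": 0.224, "Ds+": 0.080, "Lc+": 0.094}

estimates = {h: sigma_ub[h] / (2.0 * frag[h]) for h in sigma_ub}
for h, est in estimates.items():
    print(f"sigma(ccbar) from {h:>4}: {est:7.1f} ub")
print(f"naive mean: {sum(estimates.values()) / len(estimates):.1f} ub")
```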
## 6 Summary
A measurement of charm production in $pp$ collisions at a centre-of-mass
energy of 7$\mathrm{\,Te\kern-1.00006ptV}$ has been performed with the LHCb
detector, based on an integrated luminosity of
$\mathcal{L}_{\mathrm{int}}=15\mbox{\,nb}^{-1}$. Cross-section measurements
with total uncertainties below 20% have been achieved. The shape and absolute
normalisation of the differential cross-sections for $D^{0}$/$\overline{D}{}^{0}$, $D^{\pm}$, $D^{*\pm}$, $D^{\pm}_{s}$, and $\Lambda_{c}^{\pm}$ hadrons are found to be in agreement with theoretical predictions. The ratios of the production cross-sections for the five species under study have been measured. The $c\bar{c}$ cross-section for producing a charmed hadron in the range $0<p_{\rm T}<8\,\mathrm{GeV}/c$ and $2.0<y<4.5$ is found to be $1419\pm 12\,\mathrm{(stat)}\pm 116\,\mathrm{(syst)}\pm 65\,\mathrm{(frag)}{\rm\,\upmu b}$.
## Acknowledgements
The authors are grateful to H. Spiesberger, B. A. Kniehl, G. Kramer, and I.
Schienbein for providing theoretical cross-section predictions from the
Generalized Mass Variable Flavour Number Scheme (GMVFNS). We thank M. Mangano,
M. Cacciari, S. Frixione, P. Nason, and G. Ridolfi for supplying theoretical
cross-section predictions using the Fixed Order Next to Leading Logarithm
(FONLL) approach.
We express our gratitude to our colleagues in the CERN accelerator departments
for the excellent performance of the LHC. We thank the technical and
administrative staff at the LHCb institutes. We acknowledge support from CERN
and from the national agencies: CAPES, CNPq, FAPERJ and FINEP (Brazil); NSFC
(China); CNRS/IN2P3 and Region Auvergne (France); BMBF, DFG, HGF and MPG
(Germany); SFI (Ireland); INFN (Italy); FOM and NWO (The Netherlands); SCSR
(Poland); ANCS/IFA (Romania); MinES, Rosatom, RFBR and NRC “Kurchatov
Institute” (Russia); MinECo, XuntaGal and GENCAT (Spain); SNSF and SER
(Switzerland); NAS Ukraine (Ukraine); STFC (United Kingdom); NSF (USA). We
also acknowledge the support received from the ERC under FP7. The Tier1
computing centres are supported by IN2P3 (France), KIT and BMBF (Germany),
INFN (Italy), NWO and SURF (The Netherlands), PIC (Spain), GridPP (United
Kingdom). We are thankful for the computing resources put at our disposal by
Yandex LLC (Russia), as well as to the communities behind the multiple open
source software packages that we depend on.
Appendix
## Measured open charm cross-sections
Table 5 shows the production cross-sections for $\Lambda_{c}^{+}$ baryons integrated over $2<p_{\rm T}<8\,\mathrm{GeV}/c$ and over the rapidity range of the
$y$ bins. The differential production cross-section values (integrated over
the $y$ range of the respective bin) plotted in Figs. 4–5 are given in Tables
6–10.
Table 5: Bin-integrated production cross-sections in ${\rm\,\upmu b}$ for prompt $\Lambda_{c}^{+}$ $+$ c.c. baryons in bins of $y$ integrated over the range $2<p_{\rm T}<8\,\mathrm{GeV}/c$. The first uncertainty is statistical, and the second is the total systematic. $p_{\rm T}$ | $y$
---|---
(${\mathrm{Ge\kern-1.00006ptV\\!/}c}$) | $(2.0,2.5)$ | $(2.5,3.0)$ | $(3.0,3.5)$ | $(3.5,4.0)$
$(2,8)$ | $21.4\pm 8.1\pm 7.2$ | $49.9\pm 11.6\pm 15.6$ | $62.9\pm 7.0\pm 18.8$ | $44.2\pm 8.6\pm 13.2$
Table 6: Differential production cross-sections, $\mathrm{d}\sigma/\mathrm{d}p_{\rm T}$, in ${\rm\,\upmu b}/(\mathrm{GeV}/c)$ for prompt $\Lambda_{c}^{+}$ $+$ c.c. baryons in bins of $p_{\rm T}$ integrated over the rapidity range $2.0<y<4.5$. The first uncertainty is statistical, and the second is the total systematic. $p_{\rm T}$ | $y$
---|---
(${\mathrm{Ge\kern-1.00006ptV\\!/}c}$) | $(2.0,4.5)$
$(2,3)$ | $89.6$ ± | $17.8$ ± | $32.6$
$(3,4)$ | $49.8$ ± | $7.9$ ± | $15.3$
$(4,5)$ | $22.5$ ± | $3.1$ ± | $6.9$
$(5,6)$ | $8.5$ ± | $1.4$ ± | $2.6$
$(6,7)$ | $4.9$ ± | $0.9$ ± | $1.5$
$(7,8)$ | $2.4$ ± | $0.6$ ± | $0.8$
Table 7: Differential production cross-sections,
$\mathrm{d}\sigma/\mathrm{d}\mbox{$p_{\rm T}$}$, in ${\rm\,\upmu
b}/({\mathrm{Ge\kern-0.90005ptV\\!/}c})$ for prompt $D^{0}$ $+$ c.c. mesons in
bins of $(\mbox{$p_{\rm T}$},y)$. The first uncertainty is statistical, and
the second is the total systematic.
$p_{\rm T}$ | $y$
---|---
(${\mathrm{Ge\kern-1.00006ptV\\!/}c}$) | $(2.0,2.5)$ | $(2.5,3.0)$ | $(3.0,3.5)$ | $(3.5,4.0)$ | $(4.0,4.5)$
$(0,1)$ | $113.58$ ± | $5.45$ ± | $10.45$ | $96.51$ ± | $3.49$ ± | $8.10$ | $90.99$ ± | $3.67$ ± | $7.24$ | $80.41$ ± | $4.19$ ± | $6.30$ | $57.37$ ± | $5.37$ ± | $5.10$
$(1,2)$ | $147.06$ ± | $5.78$ ± | $12.45$ | $146.54$ ± | $4.08$ ± | $12.16$ | $129.43$ ± | $3.89$ ± | $10.19$ | $112.64$ ± | $4.52$ ± | $8.95$ | $81.57$ ± | $5.20$ ± | $7.02$
$(2,3)$ | $85.95$ ± | $3.18$ ± | $6.80$ | $82.07$ ± | $2.10$ ± | $6.58$ | $68.48$ ± | $1.90$ ± | $5.40$ | $58.25$ ± | $2.02$ ± | $4.70$ | $39.87$ ± | $2.56$ ± | $3.78$
$(3,4)$ | $41.79$ ± | $1.78$ ± | $3.82$ | $34.86$ ± | $1.10$ ± | $2.82$ | $31.30$ ± | $1.05$ ± | $2.47$ | $22.65$ ± | $1.00$ ± | $2.13$ | $15.50$ ± | $1.29$ ± | $1.51$
$(4,5)$ | $18.61$ ± | $0.98$ ± | $1.73$ | $16.11$ ± | $0.67$ ± | $1.49$ | $14.36$ ± | $0.66$ ± | $1.15$ | $9.89$ ± | $0.62$ ± | $0.94$ | $5.69$ ± | $0.87$ ± | $0.60$
$(5,6)$ | $9.35$ ± | $0.66$ ± | $0.90$ | $8.85$ ± | $0.48$ ± | $0.84$ | $6.23$ ± | $0.41$ ± | $0.60$ | $4.88$ ± | $0.43$ ± | $0.48$ | $3.22$ ± | $0.98$ ± | $0.46$
$(6,7)$ | $4.92$ ± | $0.51$ ± | $0.49$ | $4.31$ ± | $0.38$ ± | $0.43$ | $2.99$ ± | $0.33$ ± | $0.30$ | $2.33$ ± | $0.34$ ± | $0.25$ |
$(7,8)$ | $2.34$ ± | $0.42$ ± | $0.26$ | $2.41$ ± | $0.36$ ± | $0.26$ | $1.25$ ± | $0.27$ ± | $0.14$ | $1.14$ ± | $0.35$ ± | $0.16$ |
Table 8: Differential production cross-sections,
$\mathrm{d}\sigma/\mathrm{d}\mbox{$p_{\rm T}$}$, in ${\rm\,\upmu
b}/({\mathrm{Ge\kern-0.90005ptV\\!/}c})$ for prompt $D^{+}$ $+$ c.c. mesons in
bins of $(\mbox{$p_{\rm T}$},y)$. The first uncertainty is statistical, and
the second is the total systematic.
$p_{\rm T}$ | $y$
---|---
(${\mathrm{Ge\kern-1.00006ptV\\!/}c}$) | $(2.0,2.5)$ | $(2.5,3.0)$ | $(3.0,3.5)$ | $(3.5,4.0)$ | $(4.0,4.5)$
$(0,1)$ | | $42.11$ ± | $2.92$ ± | $7.21$ | $34.00$ ± | $1.78$ ± | $6.29$ | $29.32$ ± | $1.89$ ± | $5.52$ | $24.01$ ± | $2.94$ ± | $5.45$
$(1,2)$ | $\phantom{1}55.56$ ± | $6.79$ ± | $9.89$ | $\phantom{11}52.72$ ± | $2.27$ ± | $8.31$ | $\phantom{11}50.74$ ± | $1.66$ ± | $7.68$ | $\phantom{11}45.26$ ± | $1.70$ ± | $7.56$ | $32.87$ ± | $2.47$ ± | $6.59$
$(2,3)$ | $29.86$ ± | $2.38$ ± | $4.40$ | $31.79$ ± | $1.09$ ± | $4.57$ | $29.03$ ± | $0.87$ ± | $3.99$ | $23.09$ ± | $0.84$ ± | $3.45$ | $15.79$ ± | $1.17$ ± | $3.43$
$(3,4)$ | $14.97$ ± | $1.04$ ± | $2.14$ | $15.69$ ± | $0.57$ ± | $2.10$ | $13.53$ ± | $0.48$ ± | $1.71$ | $10.15$ ± | $0.45$ ± | $1.49$ | $5.84$ ± | $0.55$ ± | $1.25$
$(4,5)$ | $7.26$ ± | $0.54$ ± | $1.01$ | $7.44$ ± | $0.33$ ± | $0.96$ | $5.89$ ± | $0.27$ ± | $0.74$ | $4.12$ ± | $0.26$ ± | $0.65$ | $2.31$ ± | $0.32$ ± | $0.50$
$(5,6)$ | $3.37$ ± | $0.31$ ± | $0.58$ | $3.51$ ± | $0.21$ ± | $0.46$ | $2.81$ ± | $0.18$ ± | $0.36$ | $1.90$ ± | $0.16$ ± | $0.31$ | $0.64$ ± | $0.18$ ± | $0.14$
$(6,7)$ | $1.93$ ± | $0.21$ ± | $0.31$ | $1.73$ ± | $0.14$ ± | $0.23$ | $1.81$ ± | $0.14$ ± | $0.36$ | $0.80$ ± | $0.10$ ± | $0.17$ |
$(7,8)$ | $1.22$ ± | $0.17$ ± | $0.22$ | $0.94$ ± | $0.11$ ± | $0.13$ | $0.70$ ± | $0.09$ ± | $0.14$ | $0.32$ ± | $0.07$ ± | $0.07$ |
Table 9: Differential production cross-sections,
$\mathrm{d}\sigma/\mathrm{d}\mbox{$p_{\rm T}$}$, in ${\rm\,\upmu
b}/({\mathrm{Ge\kern-0.90005ptV\\!/}c})$ for prompt $D^{*+}$ $+$ c.c. mesons
in bins of $(\mbox{$p_{\rm T}$},y)$. The first uncertainty is statistical, and
the second is the total systematic.
$p_{\rm T}$ | $y$
---|---
(${\mathrm{Ge\kern-1.00006ptV\\!/}c}$) | $(2.0,2.5)$ | $(2.5,3.0)$ | $(3.0,3.5)$ | $(3.5,4.0)$ | $(4.0,4.5)$
$(0,1)$ | | | $26.17$ ± | $5.17$ ± | $3.25$ | $36.67$ ± | $6.02$ ± | $4.53$ | $46.60$ ± | $12.77$ ± | $6.88$
$(1,2)$ | | $62.56$ ± | $8.42$ ± | $7.91$ | $49.02$ ± | $3.13$ ± | $5.73$ | $39.27$ ± | $3.15$ ± | $4.62$ | $32.40$ ± | $4.41$ ± | $4.06$
$(2,3)$ | | $30.60$ ± | $2.85$ ± | $3.66$ | $24.93$ ± | $1.54$ ± | $2.91$ | $24.11$ ± | $1.77$ ± | $2.86$ | $18.55$ ± | $2.37$ ± | $2.45$
$(3,4)$ | $15.31$ ± | $3.11$ ± | $2.12$ | $17.11$ ± | $1.37$ ± | $2.04$ | $13.90$ ± | $0.93$ ± | $1.63$ | $10.44$ ± | $0.91$ ± | $1.34$ | $5.13$ ± | $1.06$ ± | $0.70$
$(4,5)$ | $\phantom{14}9.90$ ± | $1.61$ ± | $1.35$ | $\phantom{114}6.28$ ± | $0.66$ ± | $0.81$ | $\phantom{112}6.20$ ± | $0.57$ ± | $0.74$ | $\phantom{11}4.51$ ± | $0.53$ ± | $0.59$ | $3.41$ ± | $1.02$ ± | $0.52$
$(5,6)$ | $3.92$ ± | $0.84$ ± | $0.55$ | $3.81$ ± | $0.47$ ± | $0.50$ | $3.43$ ± | $0.42$ ± | $0.45$ | $1.96$ ± | $0.35$ ± | $0.27$ |
$(6,7)$ | $2.40$ ± | $0.59$ ± | $0.36$ | $1.78$ ± | $0.32$ ± | $0.24$ | $1.05$ ± | $0.25$ ± | $0.15$ | $0.68$ ± | $0.24$ ± | $0.10$ |
$(7,8)$ | $1.74$ ± | $0.58$ ± | $0.30$ | $1.10$ ± | $0.31$ ± | $0.17$ | | |
Table 10: Differential production cross-sections, $\mathrm{d}\sigma/\mathrm{d}p_{\rm T}$, in $\mu\mathrm{b}/(\mathrm{GeV}/c)$ for prompt $D^{+}_{s}$ $+$ c.c. mesons in bins of $(p_{\rm T},y)$. The first uncertainty is statistical, and the second is the total systematic.
$p_{\rm T}$ | $y$
---|---
($\mathrm{GeV}/c$) | $(2.0,2.5)$ | $(2.5,3.0)$ | $(3.0,3.5)$ | $(3.5,4.0)$ | $(4.0,4.5)$
$(0,1)$ | | | $11.23$ ± | $3.64$ ± | $2.48$ | |
$(1,2)$ | $22.50$ ± | $7.79$ ± | $6.09$ | $20.41$ ± | $3.07$ ± | $3.53$ | $12.04$ ± | $2.10$ ± | $2.36$ | $11.00$ ± | $3.09$ ± | $2.61$ |
$(2,3)$ | $6.03$ ± | $1.88$ ± | $1.43$ | $8.34$ ± | $1.17$ ± | $1.17$ | $10.37$ ± | $1.18$ ± | $1.46$ | $7.34$ ± | $1.31$ ± | $1.22$ | $5.89$ ± | $2.22$ ± | $1.42$
$(3,4)$ | $3.38$ ± | $0.92$ ± | $0.66$ | $5.57$ ± | $0.73$ ± | $0.81$ | $4.78$ ± | $0.69$ ± | $0.79$ | $3.83$ ± | $0.68$ ± | $0.65$ | $2.08$ ± | $0.90$ ± | $0.49$
$(4,5)$ | $1.79$ ± | $0.50$ ± | $0.31$ | $2.18$ ± | $0.37$ ± | $0.30$ | $1.49$ ± | $0.29$ ± | $0.21$ | $1.62$ ± | $0.39$ ± | $0.26$ |
$(5,6)$ | $0.91$ ± | $0.34$ ± | $0.20$ | $1.11$ ± | $0.24$ ± | $0.17$ | $0.88$ ± | $0.21$ ± | $0.13$ | $0.67$ ± | $0.21$ ± | $0.13$ |
$(6,7)$ | $0.68$ ± | $0.23$ ± | $0.15$ | $0.51$ ± | $0.16$ ± | $0.08$ | $0.62$ ± | $0.18$ ± | $0.10$ | |
$(7,8)$ | $0.60$ ± | $0.21$ ± | $0.14$ | | $0.23$ ± | $0.10$ ± | $0.04$ | |
|
arxiv-papers
| 2013-02-12T17:18:01 |
2024-09-04T02:49:41.667765
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "LHCb collaboration: R. Aaij, C. Abellan Beteta, A. Adametz, B. Adeva,\n M. Adinolfi, C. Adrover, A. Affolder, Z. Ajaltouni, J. Albrecht, F. Alessio,\n M. Alexander, S. Ali, G. Alkhazov, P. Alvarez Cartelle, A.A. Alves Jr, S.\n Amato, Y. Amhis, L. Anderlini, J. Anderson, R. Andreassen, R.B. Appleby, O.\n Aquines Gutierrez, F. Archilli, A. Artamonov, M. Artuso, E. Aslanides, G.\n Auriemma, S. Bachmann, J.J. Back, C. Baesso, V. Balagura, W. Baldini, R.J.\n Barlow, C. Barschel, S. Barsuk, W. Barter, Th. Bauer, A. Bay, J. Beddow, I.\n Bediaga, S. Belogurov, K. Belous, I. Belyaev, E. Ben-Haim, M. Benayoun, G.\n Bencivenni, S. Benson, J. Benton, A. Berezhnoy, R. Bernet, M.-O. Bettler, M.\n van Beuzekom, A. Bien, S. Bifani, T. Bird, A. Bizzeti, P.M. Bj{\\o}rnstad, T.\n Blake, F. Blanc, C. Blanks, J. Blouw, S. Blusk, A. Bobrov, V. Bocci, A.\n Bondar, N. Bondar, W. Bonivento, S. Borghi, A. Borgia, T.J.V. Bowcock, E.\n Bowen, C. Bozzi, T. Brambach, J. van den Brand, J. Bressieux, D. Brett, M.\n Britsch, T. Britton, N.H. Brook, H. Brown, I. Burducea, A. Bursche, J.\n Buytaert, S. Cadeddu, O. Callot, M. Calvi, M. Calvo Gomez, A. Camboni, P.\n Campana, A. Carbone, G. Carboni, R. Cardinale, A. Cardini, H. Carranza-Mejia,\n L. Carson, K. Carvalho Akiba, G. Casse, M. Cattaneo, Ch. Cauet, M. Charles,\n Ph. Charpentier, P. Chen, N. Chiapolini, M. Chrzaszcz, K. Ciba, X. Cid Vidal,\n G. Ciezarek, P.E.L. Clarke, M. Clemencic, H.V. Cliff, J. Closier, C. Coca, V.\n Coco, J. Cogan, E. Cogneras, P. Collins, A. Comerma-Montells, A. Contu, A.\n Cook, M. Coombes, G. Corti, B. Couturier, G.A. Cowan, D. Craik, S. Cunliffe,\n R. Currie, C. D'Ambrosio, P. David, P.N.Y. David, I. De Bonis, K. De Bruyn,\n S. De Capua, M. De Cian, J.M. De Miranda, L. De Paula, W. De Silva, P. De\n Simone, D. Decamp, M. Deckenhoff, H. Degaudenzi, L. Del Buono, C. Deplano, D.\n Derkach, O. Deschamps, F. Dettori, A. Di Canto, J. Dickens, H. Dijkstra, P.\n Diniz Batista, M. Dogaru, F. Domingo Bonal, S. Donleavy, F. Dordei, A. Dosil\n Su\\'arez, D. Dossett, A. Dovbnya, F. Dupertuis, R. Dzhelyadin, A. Dziurda, A.\n Dzyuba, S. Easo, U. Egede, V. Egorychev, S. Eidelman, D. van Eijk, S.\n Eisenhardt, U. Eitschberger, R. Ekelhof, L. Eklund, I. El Rifai, Ch.\n Elsasser, D. Elsby, A. Falabella, C. F\\\"arber, G. Fardell, C. Farinelli, S.\n Farry, V. Fave, D. Ferguson, V. Fernandez Albor, F. Ferreira Rodrigues, M.\n Ferro-Luzzi, S. Filippov, C. Fitzpatrick, M. Fontana, F. Fontanelli, R.\n Forty, O. Francisco, M. Frank, C. Frei, M. Frosini, S. Furcas, E. Furfaro, A.\n Gallas Torreira, D. Galli, M. Gandelman, P. Gandini, Y. Gao, J. Garofoli, P.\n Garosi, J. Garra Tico, L. Garrido, C. Gaspar, R. Gauld, E. Gersabeck, M.\n Gersabeck, T. Gershon, Ph. Ghez, V. Gibson, V.V. Gligorov, C. G\\\"obel, D.\n Golubkov, A. Golutvin, A. Gomes, H. Gordon, M. Grabalosa G\\'andara, R.\n Graciani Diaz, L.A. Granado Cardoso, E. Graug\\'es, G. Graziani, A. Grecu, E.\n Greening, S. Gregson, O. Gr\\\"unberg, B. Gui, E. Gushchin, Yu. Guz, T. Gys, C.\n Hadjivasiliou, G. Haefeli, C. Haen, S.C. Haines, S. Hall, T. Hampson, S.\n Hansmann-Menzemer, N. Harnew, S.T. Harnew, J. Harrison, P.F. Harrison, T.\n Hartmann, J. He, V. Heijne, K. Hennessy, P. Henrard, J.A. Hernando Morata, E.\n van Herwijnen, E. Hicks, D. Hill, M. Hoballah, C. Hombach, P. Hopchev, W.\n Hulsbergen, P. Hunt, T. Huse, N. Hussain, D. Hutchcroft, D. Hynds, V.\n Iakovenko, P. Ilten, R. Jacobsson, A. Jaeger, E. Jans, F. Jansen, P. Jaton,\n F. Jing, M. John, D. Johnson, C.R. Jones, B. Jost, M. Kaballo, S. 
Kandybei,\n M. Karacson, T.M. Karbach, I.R. Kenyon, U. Kerzel, T. Ketel, A. Keune, B.\n Khanji, O. Kochebina, I. Komarov, R.F. Koopman, P. Koppenburg, M. Korolev, A.\n Kozlinskiy, L. Kravchuk, K. Kreplin, M. Kreps, G. Krocker, P. Krokovny, F.\n Kruse, M. Kucharczyk, V. Kudryavtsev, T. Kvaratskheliya, V.N. La Thi, D.\n Lacarrere, G. Lafferty, A. Lai, D. Lambert, R.W. Lambert, E. Lanciotti, G.\n Lanfranchi, C. Langenbruch, T. Latham, C. Lazzeroni, R. Le Gac, J. van\n Leerdam, J.-P. Lees, R. Lef\\`evre, A. Leflat, J. Lefran\\c{c}ois, O. Leroy, Y.\n Li, L. Li Gioi, M. Liles, R. Lindner, C. Linn, B. Liu, G. Liu, J. von Loeben,\n J.H. Lopes, E. Lopez Asamar, N. Lopez-March, H. Lu, J. Luisier, H. Luo, F.\n Machefert, I.V. Machikhiliyan, F. Maciuc, O. Maev, S. Malde, G. Manca, G.\n Mancinelli, N. Mangiafave, U. Marconi, R. M\\\"arki, J. Marks, G. Martellotti,\n A. Martens, L. Martin, A. Mart\\'in S\\'anchez, M. Martinelli, D. Martinez\n Santos, D. Martins Tostes, A. Massafferri, R. Matev, Z. Mathe, C. Matteuzzi,\n M. Matveev, E. Maurice, A. Mazurov, J. McCarthy, R. McNulty, B. Meadows, F.\n Meier, M. Meissner, M. Merk, D.A. Milanes, M.-N. Minard, J. Molina Rodriguez,\n S. Monteil, D. Moran, P. Morawski, R. Mountain, I. Mous, F. Muheim, K.\n M\\\"uller, R. Muresan, B. Muryn, B. Muster, P. Naik, T. Nakada, R. Nandakumar,\n I. Nasteva, M. Needham, N. Neufeld, A.D. Nguyen, T.D. Nguyen, C. Nguyen-Mau,\n M. Nicol, V. Niess, R. Niet, N. Nikitin, T. Nikodem, A. Nomerotski, A.\n Novoselov, A. Oblakowska-Mucha, V. Obraztsov, S. Oggero, S. Ogilvy, O.\n Okhrimenko, R. Oldeman, M. Orlandea, J.M. Otalora Goicochea, P. Owen, B.K.\n Pal, A. Palano, M. Palutan, J. Panman, A. Papanestis, M. Pappagallo, C.\n Parkes, C.J. Parkinson, G. Passaleva, G.D. Patel, M. Patel, G.N. Patrick, C.\n Patrignani, C. Pavel-Nicorescu, A. Pazos Alvarez, A. Pellegrino, G. Penso, M.\n Pepe Altarelli, S. Perazzini, D.L. Perego, E. Perez Trigo, A. P\\'erez-Calero\n Yzquierdo, P. Perret, M. Perrin-Terrin, G. Pessina, K. Petridis, A.\n Petrolini, A. Phan, E. Picatoste Olloqui, B. Pietrzyk, T. Pila\\v{r}, D.\n Pinci, S. Playfer, M. Plo Casasus, F. Polci, G. Polok, A. Poluektov, E.\n Polycarpo, D. Popov, B. Popovici, C. Potterat, A. Powell, J. Prisciandaro, V.\n Pugatch, A. Puig Navarro, W. Qian, J.H. Rademacker, B. Rakotomiaramanana,\n M.S. Rangel, I. Raniuk, N. Rauschmayr, G. Raven, S. Redford, M.M. Reid, A.C.\n dos Reis, S. Ricciardi, A. Richards, K. Rinnert, V. Rives Molina, D.A. Roa\n Romero, P. Robbe, E. Rodrigues, P. Rodriguez Perez, G.J. Rogers, S. Roiser,\n V. Romanovsky, A. Romero Vidal, J. Rouvinet, T. Ruf, H. Ruiz, G. Sabatino,\n J.J. Saborido Silva, N. Sagidova, P. Sail, B. Saitta, C. Salzmann, B.\n Sanmartin Sedes, M. Sannino, R. Santacesaria, C. Santamarina Rios, E.\n Santovetti, M. Sapunov, A. Sarti, C. Satriano, A. Satta, M. Savrie, D.\n Savrina, P. Schaack, M. Schiller, H. Schindler, S. Schleich, M. Schlupp, M.\n Schmelling, B. Schmidt, O. Schneider, A. Schopper, M.-H. Schune, R.\n Schwemmer, B. Sciascia, A. Sciubba, M. Seco, A. Semennikov, K. Senderowska,\n I. Sepp, N. Serra, J. Serrano, P. Seyfert, M. Shapkin, I. Shapoval, P.\n Shatalov, Y. Shcheglov, T. Shears, L. Shekhtman, O. Shevchenko, V.\n Shevchenko, A. Shires, R. Silva Coutinho, T. Skwarnicki, N.A. Smith, E.\n Smith, M. Smith, K. Sobczak, M.D. Sokoloff, F.J.P. Soler, F. Soomro, D.\n Souza, B. Souza De Paula, B. Spaan, A. Sparkes, P. Spradlin, F. Stagni, S.\n Stahl, O. Steinkamp, S. Stoica, S. Stone, B. Storaci, M. Straticiuc, U.\n Straumann, V.K. Subbiah, S. 
Swientek, V. Syropoulos, M. Szczekowski, P.\n Szczypka, T. Szumlak, S. T'Jampens, M. Teklishyn, E. Teodorescu, F. Teubert,\n C. Thomas, E. Thomas, J. van Tilburg, V. Tisserand, M. Tobin, S. Tolk, D.\n Tonelli, S. Topp-Joergensen, N. Torr, E. Tournefier, S. Tourneur, M.T. Tran,\n M. Tresch, A. Tsaregorodtsev, P. Tsopelas, N. Tuning, M. Ubeda Garcia, A.\n Ukleja, D. Urner, U. Uwer, V. Vagnoni, G. Valenti, R. Vazquez Gomez, P.\n Vazquez Regueiro, S. Vecchi, J.J. Velthuis, M. Veltri, G. Veneziano, M.\n Vesterinen, B. Viaud, D. Vieira, X. Vilasis-Cardona, A. Vollhardt, D.\n Volyanskyy, D. Voong, A. Vorobyev, V. Vorobyev, C. Vo\\ss, H. Voss, R. Waldi,\n R. Wallace, S. Wandernoth, J. Wang, D.R. Ward, N.K. Watson, A.D. Webber, D.\n Websdale, M. Whitehead, J. Wicht, J. Wiechczynski, D. Wiedner, L. Wiggers, G.\n Wilkinson, M.P. Williams, M. Williams, F.F. Wilson, J. Wishahi, M. Witek,\n S.A. Wotton, S. Wright, S. Wu, K. Wyllie, Y. Xie, F. Xing, Z. Xing, Z. Yang,\n R. Young, X. Yuan, O. Yushchenko, M. Zangoli, M. Zavertyaev, F. Zhang, L.\n Zhang, W.C. Zhang, Y. Zhang, A. Zhelezov, A. Zhokhov, L. Zhong, A. Zvyagin",
"submitter": "Patrick Spradlin",
"url": "https://arxiv.org/abs/1302.2864"
}
|
1302.2880
|
# Minimal immersions of Riemannian manifolds in products of space forms
Fernando Manfio and Feliciano Vitório
###### Abstract
In this note, we give natural extensions to cylinders and tori of a classical
result due to T. Takahashi [9] about minimal immersions into spheres. More
precisely, we deal with isometric immersions into Euclidean space $\mathbb{R}^{N}$ whose projections satisfy a spectral condition on their Laplacian.
MSC 2000: 53C42, 53A10.
Key words: Minimal immersions, isometric immersions, Riemannian product of
space forms.
## 1 Introduction
An isometric immersion $f:M^{m}\to N^{n}$ of a Riemannian manifold $M$ in
another Riemannian manifold $N$ is said to be minimal if its mean curvature
vector field $H$ vanishes. The study of minimal surfaces is one of the oldest
subjects in differential geometry, having its origin with the work of Euler
and Lagrange. In the last century, a series of works was devoted to the study of properties of minimal immersions whose ambient space has constant sectional curvature. In particular, minimal immersions in the sphere $\mathbb{S}^{n}$ play an important role in the theory, as exemplified by the famous paper of J. Simons [8].
Let $f:M^{m}\to\mathbb{R}^{n}$ be an isometric immersion of an $m$-dimensional manifold $M$ into the Euclidean space $\mathbb{R}^{n}$. Associated with the induced metric on $M$ is the Laplace operator $\Delta$ acting on $\mathrm{C}^{\infty}(M)$. This Laplacian can be extended in a natural way to
the immersion $f$. A well-known result by J. Eells and J. H. Sampson [4]
asserts that the immersion $f$ is minimal if and only if $\Delta f=0$. The
following result, due to T. Takahashi [9], states that the immersion $f$
realizes a minimal immersion in a sphere if and only if its coordinate
functions are eigenfunctions of the Laplace operator with the same nonzero
eigenvalue.
###### Theorem 1.
Let $F:M^{m}\to\mathbb{R}^{n+1}$ be an isometric immersion such that
$\Delta F=-mcF$
for some constant $c\neq 0$. Then $c>0$ and there exists a minimal isometric
immersion $f:M^{m}\to\mathbb{S}^{n}_{c}$ such that $F=i\circ f$.
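A standard illustration (added here for context, not from the original note): for the canonical inclusion $F=i:\mathbb{S}^{n}_{c}\to\mathbb{R}^{n+1}$ one has
$\Delta F=-ncF,$
so the hypothesis holds with $m=n$, and Theorem 1 recovers the identity map of $\mathbb{S}^{n}_{c}$ as the associated minimal immersion $f$.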
O. Garay generalized Theorem 1 to hypersurfaces $f:M^{n}\to\mathbb{R}^{n+1}$ satisfying $\Delta f=Af$, where $A$ is a constant $(n+1)\times(n+1)$ diagonal matrix. He proved in [5] that such a hypersurface is either minimal or an open subset of a sphere or of a cylinder. In this direction, J. Park [7] classified the hypersurfaces in a space form or in Lorentzian space whose immersion $f$ satisfies $\Delta f=Af+B$, where $A$ is a constant square matrix and $B$ is a constant vector. Similar results were obtained in [1], where the authors study and classify pseudo-Riemannian hypersurfaces in pseudo-Riemannian space forms which satisfy the condition $\Delta f=Af+B$, where $A$ is an endomorphism and $B$ is a constant vector. In a somewhat different direction, B.-Y. Chen, in a series of papers, discusses the problem of determining the geometric structure of a submanifold from some simple analytic information (see for example [2], [3]).
In this work we shall deal with an isometric immersion $f:M^{m}\to\mathbb{R}^{N}$ of a Riemannian manifold $M^{m}$ into the Euclidean space $\mathbb{R}^{N}$. If the submanifold $f(M)$ is contained in a cylinder $\mathbb{S}^{n}\times\mathbb{R}^{k}\subset\mathbb{R}^{N}$ or in a torus $\mathbb{S}^{n}\times\mathbb{S}^{k}\subset\mathbb{R}^{N}$, we shall say that the immersion $f$ realizes an immersion into a cylinder or into a torus, respectively. Motivated by recent works on submanifold theory in products of space forms [6], we obtain theorems that give necessary and sufficient conditions for an isometric immersion $f:M^{m}\to\mathbb{R}^{N}$ to realize a minimal immersion into a cylinder or into a torus (cf. Theorems 4 and 8).
## 2 Preliminaries
Let $M^{m}$ be a Riemannian manifold and $h\in\text{C}^{\infty}(M)$. The
hessian of $h$ is the symmetric section of $\text{Lin}(TM\times TM)$ defined
by
$\mathrm{Hess}\,h(X,Y)=XY(h)-\nabla_{X}Y(h)$
for all $X,Y\in TM$. Equivalently,
$\mathrm{Hess}\,h(X,Y)=\langle\nabla_{X}\mathrm{grad}\,h,Y\rangle$
where $X,Y\in TM$ and $\mathrm{grad}\,h$ is the gradient of $h$. The Laplacian
$\Delta h$ of a function $h\in\text{C}^{\infty}(M)$ at the point $p\in M$ is
defined as
$\Delta
h(p)=\mathrm{trace}\,\mathrm{Hess}\,h(p)=\mathrm{div}\,\mathrm{grad}\,h(p).$
Consider now an isometric immersion $f:M^{m}\to\mathbb{R}^{n}$. For a fixed
$v\in\mathbb{R}^{n}$, let $h\in\text{C}^{\infty}(M)$ be the height function
with respect to the hyperplane normal to $v$, given by $h(p)=\langle
f(p),v\rangle$. Then
$\displaystyle\mathrm{Hess}\,h(X,Y)=\langle\alpha_{f}(X,Y),v\rangle$ (1)
for any $X,Y\in TM$. For an isometric immersion $f:M^{m}\to\mathbb{R}^{n}$, by
$\Delta f(p)$ at the point $p\in M$ we mean the vector
$\Delta f(p)=(\Delta f_{1}(p),\ldots,\Delta f_{n}(p)),$
where $f=(f_{1},\ldots,f_{n})$. Taking traces in (1) we obtain
$\displaystyle\Delta f(p)=mH(p),$ (2)
where $H(p)$ is the mean curvature vector of $f$ at $p\in M$.
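Spelling out the step "taking traces in (1)": for a local orthonormal frame $\{X_{1},\ldots,X_{m}\}$ of $TM$ and the height function $h=\langle f,v\rangle$, one has $\Delta h=\langle\Delta f,v\rangle$ and
$\Delta h(p)=\sum_{\alpha=1}^{m}\mathrm{Hess}\,h(X_{\alpha},X_{\alpha})=\sum_{\alpha=1}^{m}\langle\alpha_{f}(X_{\alpha},X_{\alpha}),v\rangle=m\langle H(p),v\rangle.$
Since $v\in\mathbb{R}^{n}$ is arbitrary, this is exactly (2).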
## 3 Minimal submanifolds in $\mathbb{S}^{n}_{c}\times\mathbb{R}^{k}$
Let $\mathbb{S}^{n}_{c}$ denote the sphere with constant sectional curvature
$c>0$ and dimension $n$. We use the fact that $\mathbb{S}^{n}_{c}$ admits a
canonical isometric embedding in $\mathbb{R}^{n+1}$ as
$\mathbb{S}^{n}_{c}=\left\\{X\in\mathbb{R}^{n+1}:\langle
X,X\rangle=1/c\right\\}.$
Thus, $\mathbb{S}^{n}_{c}\times\mathbb{R}^{k}$ admits a canonical isometric
embedding
$i:\mathbb{S}^{n}_{c}\times\mathbb{R}^{k}\to\mathbb{R}^{n+k+1}.$
Denote by $\pi:\mathbb{R}^{n+k+1}\to\mathbb{R}^{n+1}$ the canonical
projection. Then, the normal space of $i$ at each point
$z\in\mathbb{S}^{n}_{c}\times\mathbb{R}^{k}$ is spanned by $N(z)=c\,(\pi\circ i)(z)$, and the second fundamental form of $i$ is given by
$\alpha_{i}(X,Y)=-c\langle\pi X,Y\rangle\pi\circ i.$
If we consider a parallel orthonormal frame $E_{1},\ldots,E_{n+k+1}$ of
$\mathbb{R}^{n+k+1}$ such that
$\displaystyle\mathbb{R}^{k}=\mathrm{span}\\{E_{n+2},\ldots,E_{n+k+1}\\},$ (3)
we can express the second fundamental form $\alpha_{i}$ as
$\displaystyle\alpha_{i}(X,Y)=-c\left(\langle
X,Y\rangle-\sum_{i=n+2}^{n+k+1}\langle X,E_{i}\rangle\langle
Y,E_{i}\rangle\right)\pi\circ i.$ (4)
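To make the derivation of (4) explicit: for any vector $X$ tangent to $\mathbb{S}^{n}_{c}\times\mathbb{R}^{k}$ one has $\pi X=X-\sum_{i=n+2}^{n+k+1}\langle X,E_{i}\rangle E_{i}$, hence
$\langle\pi X,Y\rangle=\langle X,Y\rangle-\sum_{i=n+2}^{n+k+1}\langle X,E_{i}\rangle\langle Y,E_{i}\rangle,$
and substituting this into the expression for $\alpha_{i}$ above gives (4).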
The following result shows that minimal immersions of a $m$-dimensional
Riemannian manifold into the cylinder $\mathbb{S}^{n}_{c}\times\mathbb{R}^{k}$
are precisely those immersions whose first $n+1$ coordinate functions in
$\mathbb{R}^{n+k+1}$ are eigenfunctions of the Laplace operator in the induced
metric.
###### Proposition 2.
Let $f:M^{m}\to\mathbb{S}^{n}_{c}\times\mathbb{R}^{k}$ be an isometric
immersion and set $F=i\circ f$, where
$i:\mathbb{S}^{n}_{c}\times\mathbb{R}^{k}\to\mathbb{R}^{n+k+1}$ is the
canonical inclusion. Let $E_{1},\ldots,E_{n+k+1}$ be a parallel orthonormal
frame of $\mathbb{R}^{n+k+1}$ as in (3). Then $f$ is a minimal immersion if
and only if
$\displaystyle\Delta
F=-c\left(m-\sum_{j=n+2}^{n+k+1}\|T_{j}\|^{2}\right)\pi\circ F,$ (5)
where $T_{j}$ denotes the orthogonal projection of $E_{j}$ onto $TM$.
###### Proof.
The second fundamental forms of $f$ and $F$ are related by
$\alpha_{F}(X,Y)=i_{\ast}\alpha_{f}(X,Y)+\alpha_{i}(f_{\ast}X,f_{\ast}Y)$
for all $X,Y\in TM$. From (4) we get that
$\alpha_{F}(X,Y)=i_{\ast}\alpha_{f}(X,Y)-c\left(\langle
X,Y\rangle-\sum_{j=n+2}^{n+k+1}\langle X,T_{j}\rangle\langle
Y,T_{j}\rangle\right)\pi\circ F,$
where $T_{j}$ denotes the orthogonal projection of $E_{j}$ onto $TM$. Taking
traces and using (2) yields
$\Delta
F=mi_{\ast}H^{f}-c\left(m-\sum_{j=n+2}^{n+k+1}\|T_{j}\|^{2}\right)\pi\circ F,$
and the conclusion follows. ∎
###### Remark 3.
In the case $f:M^{m}\to\mathbb{S}^{n}_{c}\times\mathbb{R}$, a tangent vector field
$T$ on $M$ and a normal vector field $\eta$ along $f$ are defined by
$\frac{\partial}{\partial t}=f_{\ast}T+\eta,$
where $\frac{\partial}{\partial t}$ is a unit vector field tangent to $\mathbb{R}$. In this case, $f$ is a minimal immersion if and only if
$\Delta F=-c(m-\|T\|^{2})\pi\circ F.$
The next result states that any isometric immersion of a Riemannian manifold
$M^{m}$ into Euclidean space $\mathbb{R}^{n+k+1}$, whose Laplacian satisfies a
condition as in (5), arises from a minimal isometric immersion of $M$ into some
cylinder $\mathbb{S}^{n}_{c}\times\mathbb{R}^{k}$.
###### Theorem 4.
Let $F:M^{m}\to\mathbb{R}^{n+k+1}$ be an isometric immersion and let
$E_{1},\ldots,E_{n+k+1}$ be a parallel orthonormal frame in
$\mathbb{R}^{n+k+1}$ such that
$\Delta
F=-c\left(m-\sum_{j=n+2}^{n+k+1}\|T_{j}\|^{2}\right)\left(F-\sum_{j=n+2}^{n+k+1}\langle
F,E_{j}\rangle E_{j}\right),$
for some constant $c\neq 0$, where $T_{j}$ denotes the orthogonal projection
of $E_{j}$ onto the tangent bundle $TM$. Then $c>0$ and there exists a minimal
isometric immersion $f:M^{m}\to\mathbb{S}^{n}_{c}\times\mathbb{R}^{k}$ such
that $F=i\circ f$.
###### Proof.
Since $\Delta F=mH$ by (2), the assumption implies that the vector field
$N=F-\sum_{j=n+2}^{n+k+1}\langle F,E_{j}\rangle E_{j}$
is normal to $F$. On the other hand,
$\langle N,E_{j}\rangle=\left\langle F-\sum_{l=n+2}^{n+k+1}\langle F,E_{l}\rangle E_{l},E_{j}\right\rangle=\langle F,E_{j}\rangle-\langle F,E_{j}\rangle=0$
for all $n+2\leq j\leq n+k+1$. Hence, for any $X\in TM$ we have
$X\langle N,N\rangle=2\left\langle F_{\ast}X-\sum_{j=n+2}^{n+k+1}\langle
F_{\ast}X,E_{j}\rangle E_{j},N\right\rangle=0,$
and it follows that $\langle N,N\rangle=r^{2}$ for some constant $r$. Now we
claim that
$\displaystyle\Delta\|F\|^{2}=2\sum_{j=n+2}^{n+k+1}\|T_{j}\|^{2}.$ (6)
To see this, fix a point $p\in M$ and consider a local geodesic frame
$\\{X_{1},\ldots,X_{m}\\}$ in $p$. Then
$\mathrm{grad}\,\|F\|^{2}=\sum_{\alpha=1}^{m}X_{\alpha}(\|F\|^{2})X_{\alpha}=2\sum_{\alpha=1}^{m}\langle
F_{\ast}X_{\alpha},F\rangle X_{\alpha}=2F^{T}.$
Since $N$ is normal to $F$, we have
$F^{T}=\sum_{j=n+2}^{n+k+1}\langle F,E_{j}\rangle
T_{j}=\sum_{j=n+2}^{n+k+1}\sum_{\alpha=1}^{m}\langle F,E_{j}\rangle\langle
E_{j},X_{\alpha}\rangle X_{\alpha},$
and it follows that
$\mathrm{grad}\,\|F\|^{2}=2\sum_{j=n+2}^{n+k+1}\sum_{\alpha=1}^{m}\langle
F,E_{j}\rangle\langle E_{j},X_{\alpha}\rangle X_{\alpha}.$
Therefore,
$\displaystyle\begin{aligned}\Delta\|F\|^{2}=&\ \sum_{\beta=1}^{m}\left\langle\nabla_{X_{\beta}}\mathrm{grad}\,\|F\|^{2},X_{\beta}\right\rangle\\\ =&\ 2\sum_{\alpha,\beta=1}^{m}\sum_{j=n+2}^{n+k+1}\left\langle\nabla_{X_{\beta}}\langle F,E_{j}\rangle\langle E_{j},X_{\alpha}\rangle X_{\alpha},X_{\beta}\right\rangle\\\ =&\ 2\sum_{\alpha,\beta=1}^{m}\sum_{j=n+2}^{n+k+1}\langle F_{\ast}X_{\beta},E_{j}\rangle\langle E_{j},X_{\alpha}\rangle\langle X_{\alpha},X_{\beta}\rangle\\\ =&\ 2\sum_{\alpha=1}^{m}\sum_{j=n+2}^{n+k+1}\langle X_{\alpha},T_{j}\rangle^{2}=2\sum_{j=n+2}^{n+k+1}\|T_{j}\|^{2},\end{aligned}$
and this proves our claim. Finally, using the fact that
$\Delta\|F\|^{2}=2(\langle\Delta F,F\rangle+m),$
we get that
$\displaystyle\begin{aligned}\sum_{j=n+2}^{n+k+1}\|T_{j}\|^{2}=&\ \langle\Delta F,F\rangle+m=\left\langle-c\left(m-\sum_{j=n+2}^{n+k+1}\|T_{j}\|^{2}\right)N,N\right\rangle+m\\\ =&\ -\left(m-\sum_{j=n+2}^{n+k+1}\|T_{j}\|^{2}\right)cr^{2}+m,\end{aligned}$
and the equality above implies that $c=1/r^{2}$. We conclude that there exists
an isometric immersion $f:M^{m}\to\mathbb{S}^{n}_{c}\times\mathbb{R}^{k}$ such
that $F=i\circ f$, and minimality of $f$ follows from Proposition 2. ∎
As an application of Theorem 4 we construct an example of a minimal immersion into $\mathbb{S}^{2n-1}\times\mathbb{R}$.
###### Example 5.
For each real number $a$, with $\sqrt{n-1}<a\leq\sqrt{n}$, we claim that there
exists a real number $b$ such that the immersion
$f:\mathbb{R}^{n}\to\mathbb{R}^{2n+1}$, given by
$f(x_{1},\ldots,x_{n})=\frac{1}{\sqrt{n}}\left(e^{iax_{1}},\ldots,e^{iax_{n}},b\sum_{j=1}^{n}x_{j}\right),$
is a minimal immersion into $\mathbb{S}^{2n-1}\times\mathbb{R}$. In fact, we need to check the hypotheses of Theorem 4 for a suitable choice of $(a,b)\in\mathbb{R}^{2}$. First observe that
$\Delta f=-a^{2}\left(f-\left\langle f,\frac{\partial}{\partial
t}\right\rangle\frac{\partial}{\partial t}\right),$
where $\frac{\partial}{\partial t}$ denotes a unit vector field tangent to the
factor $\mathbb{R}$. Now, if we denote by $T$ the orthogonal projection of $\frac{\partial}{\partial t}$ onto $TM$, a direct computation gives us
$\displaystyle\|T\|^{2}=\frac{nb^{2}}{a^{2}+nb^{2}}.$ (7)
On the other hand, it follows from (6) that
$\displaystyle\|T\|^{2}=\langle\Delta f,f\rangle+n=n-a^{2}.$ (8)
It follows from (7) and (8) that
$b^{2}=\frac{a^{2}(n-a^{2})}{n(a^{2}-n+1)}.$
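As a quick numerical sanity check (not in the original note), one can verify that with this value of $b^{2}$ the two expressions (7) and (8) for $\|T\|^{2}$ agree; the values of $n$ and $a$ below are arbitrary admissible choices.

```python
import math

def b_squared(n, a):
    # b^2 from Example 5: b^2 = a^2 (n - a^2) / (n (a^2 - n + 1))
    return a**2 * (n - a**2) / (n * (a**2 - n + 1))

def tangent_norms(n, a):
    b2 = b_squared(n, a)
    from_7 = n * b2 / (a**2 + n * b2)   # ||T||^2 according to (7)
    from_8 = n - a**2                   # ||T||^2 according to (8)
    return from_7, from_8

for n in (2, 3, 5):
    a = math.sqrt(n) - 0.05             # any a with sqrt(n-1) < a <= sqrt(n)
    lhs, rhs = tangent_norms(n, a)
    print(n, round(a, 3), lhs, rhs, abs(lhs - rhs) < 1e-12)
```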
## 4 Minimal submanifolds in the product $\mathbb{S}^{n}\times\mathbb{S}^{k}$
Let $\mathbb{S}^{n}$ and $\mathbb{S}^{k}$ denote the spheres of dimension $n$
and $k$, respectively. Using the fact that the spheres admit a canonical
isometric embedding $\mathbb{S}^{n}\subset\mathbb{R}^{n+1}$ and
$\mathbb{S}^{k}\subset\mathbb{R}^{k+1}$, the product
$\mathbb{S}^{n}\times\mathbb{S}^{k}$ admits a canonical isometric embedding
$\displaystyle i:\mathbb{S}^{n}\times\mathbb{S}^{k}\to\mathbb{R}^{n+k+2}.$ (9)
Denote by $\pi_{1}:\mathbb{R}^{n+k+2}\to\mathbb{R}^{n+1}$ and
$\pi_{2}:\mathbb{R}^{n+k+2}\to\mathbb{R}^{k+1}$ the canonical projections.
Then, the normal space of $i$ at each point
$z\in\mathbb{S}^{n}\times\mathbb{S}^{k}$ is spanned by
$N_{1}(z)=\pi_{1}(i(z))$ and $N_{2}(z)=\pi_{2}(i(z))$, and the second
fundamental form of $i$ is given by
$\alpha_{i}(X,Y)=-\langle\pi_{1}X,Y\rangle N_{1}-\langle\pi_{2}X,Y\rangle
N_{2}$
for all $X,Y\in T_{z}\mathbb{S}^{n}\times\mathbb{S}^{k}$.
Now, let $f:M^{m}\to\mathbb{S}^{n}\times\mathbb{S}^{k}$ be an isometric
immersion of a Riemannian manifold. Then, writing $F=i\circ f$, the unit
vector fields $N_{1}=\pi_{1}\circ F$ and $N_{2}=\pi_{2}\circ F$ are normal to
$F$. Consider a parallel orthonormal frame $E_{1},\ldots,E_{n+k+2}$ of
$\mathbb{R}^{n+k+2}$ such that
$\displaystyle\mathbb{R}^{n+1}=\mathrm{span}\\{E_{1},\ldots,E_{n+1}\\}\quad\text{and}\quad\mathbb{R}^{k+1}=\mathrm{span}\\{E_{n+2},\ldots,E_{n+k+2}\\}.$
(10)
In terms of this frame, we can express the vector fields $N_{1}$ and $N_{2}$
as
$\displaystyle N_{1}=F-\sum_{j=n+2}^{n+k+2}\langle F,E_{j}\rangle
E_{j}\quad\text{and}\quad N_{2}=F-\sum_{l=1}^{n+1}\langle F,E_{l}\rangle
E_{l}.$ (11)
###### Proposition 6.
Let $f:M^{m}\to\mathbb{S}^{n}\times\mathbb{S}^{k}$ be an isometric immersion
and set $F=i\circ f$, where
$i:\mathbb{S}^{n}\times\mathbb{S}^{k}\to\mathbb{R}^{n+k+2}$ is the canonical
inclusion. Let $E_{1},\ldots,E_{n+k+2}$ be a parallel orthonormal frame of
$\mathbb{R}^{n+k+2}$ as in (10). Then $f$ is a minimal immersion if and only
if
$\displaystyle\Delta
F=-\left(m-\sum_{j=n+2}^{n+k+2}\|T_{j}\|^{2}\right)N_{1}-\left(m-\sum_{l=1}^{n+1}\|T_{l}\|^{2}\right)N_{2},$
(12)
where $T_{j}$ denotes the orthogonal projection of $E_{j}$ onto $TM$.
###### Proof.
The second fundamental forms of $f$ and $F$ are related by
$\alpha_{F}(X,Y)=i_{\ast}\alpha_{f}(X,Y)+\alpha_{i}(f_{\ast}X,f_{\ast}Y)$
for all $X,Y\in TM$. Let $E_{1},\ldots,E_{n+k+2}$ be a parallel orthonormal
frame of $\mathbb{R}^{n+k+2}$ as in (10). Given $X\in TM$, we can write
$\pi_{1}X=X-\sum_{j=n+2}^{n+k+2}\langle X,T_{j}\rangle E_{j}\quad\text{and}\quad\pi_{2}X=X-\sum_{l=1}^{n+1}\langle X,T_{l}\rangle E_{l},$
and so, we have
$\langle\pi_{1}X,Y\rangle=\langle X,Y\rangle-\sum_{j=n+2}^{n+k+2}\langle X,T_{j}\rangle\langle Y,T_{j}\rangle$
and
$\langle\pi_{2}X,Y\rangle=\langle X,Y\rangle-\sum_{l=1}^{n+1}\langle
X,T_{l}\rangle\langle Y,T_{l}\rangle.$
Then the second fundamental form of $F$ can be expressed by
$\displaystyle\begin{aligned}\alpha_{F}(X,Y)=&\ i_{\ast}\alpha_{f}(X,Y)-\left(\langle X,Y\rangle-\sum_{j=n+2}^{n+k+2}\langle X,T_{j}\rangle\langle Y,T_{j}\rangle\right)N_{1}\\\ &-\left(\langle X,Y\rangle-\sum_{l=1}^{n+1}\langle X,T_{l}\rangle\langle Y,T_{l}\rangle\right)N_{2}.\end{aligned}$
Taking traces and using (2) yields
$\Delta F=mH^{F}=mi_{\ast}H^{f}-\left(m-\sum_{j=n+2}^{n+k+2}\|T_{j}\|^{2}\right)N_{1}-\left(m-\sum_{l=1}^{n+1}\|T_{l}\|^{2}\right)N_{2}$
and the conclusion follows. ∎
###### Remark 7.
Observe that an isometric immersion
$f:M^{m}\to\mathbb{S}^{n}\times\mathbb{S}^{k}$ can be seen as an isometric
immersion $\widetilde{f}=\iota\circ f:M^{m}\to\mathbb{S}_{\kappa}^{n+k+1}$
into the sphere with constant sectional curvature $\kappa=1/2$, where
$\iota:\mathbb{S}^{n}\times\mathbb{S}^{k}\to\mathbb{S}_{\kappa}^{n+k+1}$
denotes the canonical inclusion.
The next result states that any isometric immersion of a Riemannian manifold
$M^{m}$ into the sphere $\mathbb{S}_{\kappa}^{N-1}$ with constant sectional
curvature $\kappa=1/2$, whose Laplacian of coordinate functions satisfies a
condition as in (12), arises from a minimal isometric immersion of $M^{m}$ into
a product of spheres
$\mathbb{S}^{n}\times\mathbb{S}^{k}\subset\mathbb{R}^{N}$.
###### Theorem 8.
Let $F:M^{m}\to\mathbb{S}_{\kappa}^{N-1}$ be an isometric immersion. Given a choice of two integers $n$ and $k$, with $N=n+k+2$, let $E_{1},\ldots,E_{N}$
be a parallel orthonormal frame in $\mathbb{R}^{N}$ as in (10) such that
$\Delta\widetilde{F}=-\left(m-\sum_{j=n+2}^{n+k+2}\|T_{j}\|^{2}\right)N_{1}-\left(m-\sum_{l=1}^{n+1}\|T_{l}\|^{2}\right)N_{2},$
where $\widetilde{F}=h\circ F$, $h:\mathbb{S}_{\kappa}^{N-1}\to\mathbb{R}^{N}$
is the umbilical inclusion, $T_{i}$ denotes the orthogonal projection of
$E_{i}$ onto $TM$ and $N_{1}$ and $N_{2}$ as in (11). Then there exists a
minimal isometric immersion $f:M^{m}\to\mathbb{S}^{n}\times\mathbb{S}^{k}$
such that $F=i\circ f$.
###### Proof.
We first prove that $N_{1}$ and $N_{2}$ are normal to $F$. In fact, in terms
of an orthonormal frame $\\{X_{1},\ldots,X_{m}\\}$ of $TM$, we have
$\displaystyle\sum_{i=1}^{N}\|T_{i}\|^{2}=\sum_{l=1}^{n+1}\|T_{l}\|^{2}+\sum_{j=n+2}^{N}\|T_{j}\|^{2}=m.$
(13)
Then, as $\widetilde{F}=N_{1}+N_{2}$, we can write:
$\displaystyle\begin{aligned}
\Delta\widetilde{F}=&-\left(\sum_{l=1}^{n+1}\|T_{l}\|^{2}\right)N_{1}-\left(\sum_{j=n+2}^{N}\|T_{j}\|^{2}\right)N_{2}\\\
=&-\left(\sum_{l=1}^{n+1}\|T_{l}\|^{2}\right)\widetilde{F}+\left(\sum_{l=1}^{n+1}\|T_{l}\|^{2}-\sum_{j=n+2}^{N}\|T_{j}\|^{2}\right)N_{2}.\end{aligned}$
(14)
If
$\sum_{l=1}^{n+1}\|T_{l}\|^{2}=\sum_{j=n+2}^{N}\|T_{j}\|^{2},$
we have, by using (13), that
$\Delta\widetilde{F}=-\frac{1}{2}m\widetilde{F}.$
Thus, it follows from Theorem 1 that $F:M^{m}\to\mathbb{S}_{\kappa}^{N-1}$ is
a minimal isometric immersion. Suppose from now on that
$\displaystyle\sum_{l=1}^{n+1}\|T_{l}\|^{2}\neq\sum_{j=n+2}^{N}\|T_{j}\|^{2}.$
(15)
As $\Delta\widetilde{F}=mH$ and $\widetilde{F}$ is normal to $M$, we conclude
from (14) that $N_{2}$ is normal to $M$. Similarly we obtain that $N_{1}$ is
normal to $M$. Now, for $n+2\leq j\leq n+k+2$, we have
$\langle N_{1},E_{j}\rangle=\left\langle\widetilde{F}-\sum_{i=n+2}^{N}\langle\widetilde{F},E_{i}\rangle E_{i},E_{j}\right\rangle=\langle\widetilde{F},E_{j}\rangle-\langle\widetilde{F},E_{j}\rangle=0.$
Hence, for any $X\in TM$ we have
$X\langle
N_{1},N_{1}\rangle=2\left\langle\widetilde{F}_{\ast}X-\sum_{j=n+2}^{N}\langle\widetilde{F}_{\ast}X,E_{j}\rangle
E_{j},N_{1}\right\rangle=0,$
and it follows that $\langle N_{1},N_{1}\rangle=r^{2}$ for some constant $r$.
The same argument gives $\langle N_{2},N_{2}\rangle=s^{2}$ for some constant
$s$. Since $\widetilde{F}=N_{1}+N_{2}$ and
$\Delta\|\widetilde{F}\|^{2}=2(\langle\Delta\widetilde{F},\widetilde{F}\rangle+m)$,
we have
$\displaystyle\begin{aligned}0=&\ \frac{1}{2}\Delta\|\widetilde{F}\|^{2}=\langle\Delta\widetilde{F},\widetilde{F}\rangle+m\\\ =&\ -\left(m-\sum_{j=n+2}^{N}\|T_{j}\|^{2}\right)r^{2}-\left(m-\sum_{l=1}^{n+1}\|T_{l}\|^{2}\right)s^{2}+m\\\ =&\ -\left(\sum_{l=1}^{n+1}\|T_{l}\|^{2}\right)r^{2}-\left(\sum_{j=n+2}^{N}\|T_{j}\|^{2}\right)s^{2}+m.\end{aligned}$
Since $r^{2}+s^{2}=2$, we can rewrite the above equation as
$\left(\sum_{l=1}^{n+1}\|T_{l}\|^{2}-\sum_{j=n+2}^{N}\|T_{j}\|^{2}\right)s^{2}=2\sum_{l=1}^{n+1}\|T_{l}\|^{2}-m=\sum_{l=1}^{n+1}\|T_{l}\|^{2}-\sum_{j=n+2}^{N}\|T_{j}\|^{2}.$
As we are assuming (15), we obtain $s^{2}=1$, and therefore $r^{2}=1$. We
conclude that there exists an isometric immersion
$f:M^{m}\to\mathbb{S}^{n}\times\mathbb{S}^{k}$ such that $F=i\circ f$, and
minimality of $f$ follows from Proposition 6.
∎
## References
* [1] L. J. Alías, A. Ferrández, P. Lucas, Hypersurfaces in space forms satisfying the condition $\Delta x=Ax+B$, Trans. Amer. Math. Soc., 347 (1995), 1793–1801.
* [2] B.-Y. Chen., A report on submanifolds of finite type, Soochow J. Math. 22 (1996), 117–337.
* [3] B.-Y. Chen, O. J. Garay, $\delta(2)$-ideal null $2$-type hypersurfaces of Euclidean space are spherical cylinders, Kodai Math. J. 35 (2012), 382–391.
* [4] J. Eells, J. H. Sampson, Harmonic mappings of Riemannian manifolds, Amer. J. Math. 86 (1964), 109–160.
* [5] O. J. Garay, An extension of Takahashi’s theorem, Geom. Dedicata 34 (1990), 105–112.
* [6] J. H. Lira, R. Tojeiro, F. Vitório, A Bonnet theorem for isometric immersions into products of space forms, Archiv der Math. 95 (2010), 469–479.
* [7] J. Park, Hypersurfaces satisfying the equation $\Delta x=Rx+b$, Proc. Amer. Math. Soc. 120 (1994), 317–328.
* [8] J. Simons, Minimal varieties in Riemannian manifolds, Ann. of Math. 88 (1968), 62–105.
  * [9] T. Takahashi, Minimal immersions of Riemannian manifolds, J. Math. Soc. Japan 18 (1966), 380–385.
| Fernando Manfio
---
ICMC, Universidade de São Paulo
São Carlos-SP, 13561-060
Brazil
`[email protected]`
| | Feliciano Vitório
---
IM, Universidade Federal de Alagoas
Maceió-AL, 57072-900
Brazil
`[email protected]`
|
arxiv-papers
| 2013-02-12T18:18:57 |
2024-09-04T02:49:41.677712
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Fernando Manfio and Feliciano Vit\\'orio",
"submitter": "Fernando Manfio",
"url": "https://arxiv.org/abs/1302.2880"
}
|
1302.2893
|
\- 11institutetext: Physikalisches Institut, Ruprecht-Karls-Universität
Heidelberg, Germany
# Charm mixing at LHCb
Angelo Di Canto 11
###### Abstract
We report a measurement of the time-dependent ratio of $D^{0}\to K^{+}\pi^{-}$
to $D^{0}\to K^{-}\pi^{+}$ decay rates in $D^{*+}$-tagged events using
$1.0\mbox{\,fb}^{-1}$ of integrated luminosity recorded by the LHCb
experiment. We measure the mixing parameters $x^{\prime 2}=(-0.9\pm 1.3)\times
10^{-4}$, $y^{\prime}=(7.2\pm 2.4)\times 10^{-3}$ and the ratio of doubly-
Cabibbo-suppressed to Cabibbo-favored decay rates $R_{D}=(3.52\pm 0.15)\times
10^{-3}$. The result excludes the no-mixing hypothesis with a probability
corresponding to $9.1$ standard deviations and represents the first
observation of charm mixing from a single measurement.
## 1 Introduction
Quantum-mechanical mixing between neutral meson particle and antiparticle
flavour eigenstates provides important information about electroweak
interactions and the Cabibbo-Kobayashi-Maskawa matrix, as well as the virtual
particles that are exchanged in the mixing process itself. For this reason,
the mixing of neutral mesons is generally considered a powerful probe to
discover physics beyond the standard model. Meson-antimeson mixing has been
observed in the $K^{0}-\overline{K}{}^{0}$ Lande:1956pf , $B^{0}-\overline{B}{}^{0}$ Albrecht:1987dr and $B^{0}_{s}-\overline{B}{}^{0}_{s}$ Abulencia:2006ze systems, all with rates in agreement with standard model expectations. Evidence of mixing
in the charm system has been reported by three experiments using different
$D^{0}$ decay channels Aubert:2007wf ; Staric:2007dt ; Aaltonen:2007ac ;
Aubert:2007en ; Aubert:2008zh ; Aubert:2009ai . Only the combination of these
measurements provides confirmation of $D^{0}-\kern
1.99997pt\overline{\kern-1.99997ptD}{}^{0}$ mixing with more than $5\sigma$
significance hfag . While it is accepted that charm mixing occurs, a clear
observation of the phenomenon from a single measurement is needed to establish
it conclusively.
Thanks to the large charm production cross-section available in $pp$
collisions at $\sqrt{s}=7\,\mathrm{TeV}$ and to its flexible
trigger on hadronic final states, the LHCb experiment Alves:2008zz collected
the world’s largest sample of fully reconstructed hadronic charm decays during
the 2011 LHC run. In the following we present the first search for
$D^{0}-\overline{D}{}^{0}$ mixing using this
data sample, which corresponds to $1.0\,\text{fb}^{-1}$ of integrated
luminosity. More details on the analysis can be found in Ref. preprint .
## 2 Measurement of charm mixing in the $D^{0}\to K^{+}\pi^{-}$ channel
In this analysis a search for $D^{0}-\overline{D}{}^{0}$ mixing is performed by measuring
the time-dependent ratio of $D^{0}\to K^{+}\pi^{-}$ to $D^{0}\to K^{-}\pi^{+}$
decay rates. (Charge-conjugated modes are implied unless otherwise stated.)
The $D^{0}$ flavour at production time is determined using the charge of the
soft (low-momentum) pion, $\pi_{\rm s}^{+}$, in the strong $D^{*+}\to
D^{0}\pi_{\rm s}^{+}$ decay. The $D^{*+}\to D^{0}(\to K^{-}\pi^{+})\pi_{\rm
s}^{+}$ process is referred to as right-sign (RS), whereas the $D^{*+}\to
D^{0}(\to K^{+}\pi^{-})\pi_{\rm s}^{+}$ is designated as wrong-sign (WS). The
RS process is dominated by a Cabibbo-favored (CF) decay amplitude, whereas the
WS amplitude includes contributions from both the doubly-Cabibbo-suppressed
(DCS) $D^{0}\to K^{+}\pi^{-}$ decay, as well as $D^{0}-\overline{D}{}^{0}$ mixing followed by the favored $\overline{D}{}^{0}\to K^{+}\pi^{-}$ decay. In
the limit of small mixing ($|x|,|y|\ll 1$), and assuming negligible $C\!P$
violation, the time-dependent ratio, $R(t)$, of WS to RS decay rates is
approximated by
$R(t)\approx R_{D}+\sqrt{R_{D}}\ y^{\prime}\ \frac{t}{\tau}+\frac{x^{\prime
2}+y^{\prime 2}}{4}\left(\frac{t}{\tau}\right)^{2},$ (1)
where $t/\tau$ is the decay time expressed in units of the average $D^{0}$
lifetime $\tau$, $R_{D}$ is the ratio of DCS to CF decay rates, and $x^{\prime}$ and $y^{\prime}$ are the mixing parameters rotated by the strong phase difference between the DCS and CF amplitudes. In the case of no mixing, $x^{\prime}=y^{\prime}=0$, the WS/RS ratio would be independent of decay time.
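As a rough numerical illustration (added here, not part of the original proceedings), Eq. (1) can be evaluated with the central fit values reported later in table 1; the short snippet below simply prints the expected WS/RS ratio at a few decay times.

```python
import math

# Central values from table 1 (mixing fit); purely illustrative.
R_D  = 3.52e-3          # ratio of DCS to CF decay rates
y_p  = 7.2e-3           # y'
x_p2 = -0.09e-3         # x'^2 (the fit allows negative values)

def ws_rs_ratio(t_over_tau):
    """WS/RS ratio R(t) of Eq. (1), with t in units of the D0 lifetime tau."""
    return (R_D
            + math.sqrt(R_D) * y_p * t_over_tau
            + 0.25 * (x_p2 + y_p**2) * t_over_tau**2)

for t in (0.0, 2.0, 5.0, 10.0):
    print(f"t/tau = {t:4.1f}  ->  R(t) = {ws_rs_ratio(t):.5f}")
```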
### 2.1 Outline of the analysis
The candidate reconstruction exploits the capabilities of the silicon vertex
locator (VELO), to identify the displaced two-track vertices of the $D^{0}$
decay products with decay-time resolution $\Delta t\approx 0.1\tau$; of the
tracking system, which measures charged particles with momentum resolution
$\Delta p/p$ that varies from $0.4\%$ at $5\,\mathrm{GeV}/c$ to $0.6\%$ at $100\,\mathrm{GeV}/c$, corresponding to a typical mass resolution of approximately $8\,\mathrm{MeV}/c^{2}$ for a two-body charm-meson
decay; and of the ring imaging Cherenkov detectors, which are used to
distinguish between pions and kaons and to suppress the contamination from
misidentified two-body charm decays in the sample.
Figure 1: Time-integrated $D^{0}\pi_{\rm s}^{+}$ mass distributions for the
selected RS $D^{0}\to K^{-}\pi^{+}$ (left) and WS $D^{0}\to K^{+}\pi^{-}$
(right) candidates with fit projections overlaid. The bottom plots show the
normalized residuals between the data points and the fits.
We reconstruct approximately $3.6\times 10^{4}$ WS and $8.4\times 10^{6}$ RS
decays as shown in figure 1, where the $M(D^{0}\pi_{\rm s}^{+})$ distribution
for the selected RS and WS candidates is fitted to separate the $D^{*+}$
signal component, with a mass resolution of about
$0.3\,\mathrm{MeV}/c^{2}$, from the background
component, which is dominated by associations of real $D^{0}$ decays and
random pions. Similar fits are used to determine the signal yields for the RS
and WS samples in thirteen $D^{0}$ decay time bins, chosen to have a similar
number of candidates in each bin. The shape parameters and the yields of the
two components, signal and random pion background, are left free to vary in
the different decay time bins. We further assume that the $M(D^{0}\pi_{\rm
s}^{+})$ signal shape for RS and WS decays is the same and therefore first
perform a fit to the more abundant and cleaner RS sample to determine the
signal shape and yield, and then use those shape parameters with fixed values
when fitting for the WS signal yield. The signal yields from the thirteen bins
are used to calculate the WS/RS ratios and the mixing parameters are
determined in a binned $\chi^{2}$ fit to the observed decay-time dependence.
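The following sketch illustrates this last step with a hand-rolled binned $\chi^{2}$ fit of Eq. (1); the bin centres, ratios and uncertainties are invented placeholders (not LHCb data), and the real fit additionally includes the secondary and peaking-background terms discussed in Sect. 2.2.

```python
import numpy as np
from scipy.optimize import minimize

# Placeholder inputs: 13 decay-time bins (in units of tau), toy WS/RS ratios
# and uncertainties; these are NOT the measured LHCb values.
t_bins = np.array([0.8, 1.2, 1.6, 2.0, 2.5, 3.0, 3.6, 4.3, 5.2, 6.3, 7.6, 9.5, 12.0])
r_obs  = 3.5e-3 + 4.0e-4 * t_bins
sigma  = np.full_like(t_bins, 2.0e-4)

def model(t, R_D, y_p, x_p2):
    # Eq. (1); abs() guards the square root while the minimiser explores.
    return R_D + np.sqrt(abs(R_D)) * y_p * t + 0.25 * (x_p2 + y_p**2) * t**2

def chi2(params):
    R_D, y_p, x_p2 = params
    return np.sum(((r_obs - model(t_bins, R_D, y_p, x_p2)) / sigma) ** 2)

fit = minimize(chi2, x0=[3.5e-3, 5.0e-3, 0.0], method="Nelder-Mead",
               options={"xatol": 1e-10, "fatol": 1e-10, "maxiter": 20000})
print("fitted (R_D, y', x'^2):", fit.x)
```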
### 2.2 Systematic uncertainties
Since WS and RS events are expected to have the same decay-time acceptance and
$M(D^{0}\pi_{\rm s}^{+})$ distributions, most systematic uncertainties
affecting the determination of the signal yields as a function of decay time
cancel in the ratio between WS and RS events. Residual biases from non-cancelling instrumental and production effects, such as asymmetries in
detection efficiencies or in production, are found to modify the WS/RS ratio
only by a relative fraction of $\mathcal{O}(10^{-4})$ and are neglected.
Uncertainties in the distance between VELO sensors can lead to a bias of the
decay-time scale. The effect has been estimated to be less than 0.1% of the
measured time and translates into relative systematic biases of $0.1\%$ and
$0.2\%$ on $y^{\prime}$ and $x^{\prime 2}$, respectively. At the current level
of statistical precision, such small effects are negligible.
The main sources of systematic uncertainty are those which could alter the
observed decay-time dependence of the WS/RS ratio. Two such sources have been
identified: $(1)$ $D$ mesons from $b$-hadron decays, and $(2)$ peaking
backgrounds from charm decays reconstructed with the wrong particle
identification assignments. These effects, discussed below, are expected to
depend on the true value of the mixing parameters and are accounted for in the
time-dependent fit.
Figure 2: Background-subtracted $\chi^{2}(\text{IP})$ distributions for RS
$D^{0}$ candidates in two different decay-time bins. The dashed line indicates
the cut used in the analysis; the solid histograms represent the estimated
secondary components.
Figure 3: Measured fraction of secondary decays entering the final RS sample
as a function of decay time (points), with overlaid the projection of a fit to
a sigmoid-like function (solid line).
A contamination of charm mesons produced in $b$-hadron decays (secondary $D$
decays) could bias the time-dependent measurement, as the reconstructed decay
time is calculated with respect to primary vertex, which, for these
candidates, does not coincide with the $D^{0}$ production vertex. When the
secondary component is not subtracted, the measured WS/RS ratio can be written
as $R(t)\left[1-\Delta_{B}(t)\right]$, where $R(t)$ is the ratio of promptly-produced candidates according to Eq. (1), and $\Delta_{B}(t)$ is a time-dependent bias due to the secondary contamination. Since $R(t)$ is measured to
be monotonically non-decreasing hfag and the reconstructed decay time for
secondary decays overestimates the true decay time of the $D^{0}$ meson, it is
possible to bound $\Delta_{B}(t)$, for all decay times, as
$0\leqslant\Delta_{B}(t)\leqslant f_{B}^{\rm
RS}(t)\left[1-\frac{R_{D}}{R(t)}\right],$ (2)
where $f_{B}^{\rm RS}(t)$ is the fraction of secondary decays in the RS sample
at decay time $t$. In this analysis most of the secondary $D$ decays are
removed by requiring the $\chi^{2}(\text{IP})$ of the $D^{0}$ to be smaller
than $9$. (The $\chi^{2}(\text{IP})$ is defined as the difference between the
$\chi^{2}$ of the primary vertex reconstructed with and without the considered
particle, and is a measure of consistency with the hypothesis that the
particle originates from the primary vertex.) A residual $(2.7\pm 0.2)\%$
contamination survives. To include the corresponding systematic uncertainty,
we modify the fitting function for the mixing hypothesis assuming the largest
possible bias from equation (2). The value of $f_{B}^{\rm RS}(t)$ is
constrained, within uncertainties, to the measured value, obtained by fitting
the $\chi^{2}(\text{IP})$ distribution of the RS $D^{0}$ candidates in bins of
decay time (see figure 2). In this fit, the promptly-produced component is
described by a time-independent $\chi^{2}(\text{IP})$ shape, which is derived
from data using the candidates with $t<0.8\tau$. The $\chi^{2}(\text{IP})$
shape of the secondary component (represented by the solid histograms in
figure 2), and its dependence on decay time, is also determined from data by
studying the sub-sample of candidates that are reconstructed, in combination
with other tracks in the events, as $B\to D^{*}(3)\pi$, $B\to D^{*}\mu X$ or
$B\to D^{0}\mu X$. The measured value of $f_{B}^{\rm RS}(t)$ is shown in
figure 3. Assuming the maximum bias could induce an over-correction, which results in a shift in the estimated mixing parameters. We checked on pseudo-experiments, before fitting the data, and then also on data, that such a shift
is always much smaller than the corresponding increase in the uncertainty when
the secondary bias is included in the fit.
Figure 4: Decay-time evolution of the number of doubly misidentified RS events
observed in the $D^{0}$ mass sidebands of the WS sample normalized to the RS
signal yield. The solid (dashed) line is the result of a fit assuming linear
(constant) decay-time dependence.
Peaking background in $M(D^{0}\pi_{\rm s}^{+})$, that is not accounted for in
our mass fit, arises from $D^{*+}$ decays for which the correct soft pion is
found but the $D^{0}$ is partially reconstructed or misidentified. This
background is suppressed by the use of tight particle identification and two-body mass requirements. From studies of the events in the $D^{0}$ mass
sidebands, we find that the dominant source of peaking background leaking into
our signal region is from RS events which are doubly misidentified as a WS
candidate; they are estimated to constitute $(0.4\pm 0.2)\%$ of the WS signal.
From the same events, we also derive a bound on the possible time dependence
of this background (see figure 4), which is included in the fit in a similar
manner to the secondary background. Contamination from peaking background due
to partially reconstructed $D^{0}$ decays is found to be much smaller than
$0.1\%$ of the WS signal and neglected in the fit.
### 2.3 Results
Figure 5 shows the observed decay-time evolution of the WS to RS ratio, with
the projection of the best fit result overlaid (solid line). The estimated
values of the parameters $R_{D}$, $y^{\prime}$ and $x^{\prime 2}$ are listed
in table 1.
Figure 5: Decay-time evolution of the ratio, $R$, of WS $D^{0}\to
K^{+}\pi^{-}$ to RS $D^{0}\to K^{-}\pi^{+}$ yields (points) with the
projection of the mixing allowed (solid line) and no-mixing (dashed line) fits
overlaid.
Figure 6: Estimated confidence-level (CL) regions in the $(x^{\prime 2},y^{\prime})$ plane for $1-\text{CL}=0.317$ ($1\sigma$), $2.7\times 10^{-3}$ ($3\sigma$) and $5.73\times 10^{-7}$ ($5\sigma$). Systematic uncertainties are included. The cross indicates the no-mixing point.
Table 1: Results of the time-dependent fit to the data. The uncertainties include statistical and systematic sources; ndf indicates the number of degrees of freedom.
Fit type ($\chi^{2}$/ndf) | Parameter | Fit result ($10^{-3}$)
---|---|---
Mixing ($9.5/10$) | $R_{D}$ | $3.52\pm 0.15$
| $y^{\prime}$ | $7.2\pm 2.4$
| $x^{\prime 2}$ | $-0.09\pm 0.13$
No mixing ($98.1/12$) | $R_{D}$ | $4.25\pm 0.04$
To evaluate the significance of this mixing result we determine the change in
the fit $\chi^{2}$ when the data are described under the assumption of the no-mixing hypothesis (dashed line in figure 5). Under the assumption that the
$\chi^{2}$ difference, $\Delta\chi^{2}$, follows a $\chi^{2}$ distribution for
two degrees of freedom, $\Delta\chi^{2}=88.6$ corresponds to a $p$-value of
$5.7\times 10^{-20}$, which excludes the no-mixing hypothesis at $9.1$
standard deviations. This is also illustrated in figure 6 where the $1\sigma$,
$3\sigma$ and $5\sigma$ confidence regions for $x^{\prime 2}$ and $y^{\prime}$
are shown.
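For reference, the quoted $p$-value follows directly from $\Delta\chi^{2}=88.6$ with two degrees of freedom; a one-line check (assuming SciPy is available) is:

```python
from scipy.stats import chi2

p_value = chi2.sf(88.6, df=2)   # survival function: P(chi2 >= 88.6) for 2 d.o.f.
print(p_value)                  # ~ 5.7e-20, quoted in the text as 9.1 standard deviations
```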
## 3 Conclusions
We measure the decay time dependence of the ratio between $D^{0}\to
K^{+}\pi^{-}$ and $D^{0}\to K^{-}\pi^{+}$ decays using $1.0\mbox{\,fb}^{-1}$
of data and exclude the no-mixing hypothesis at $9.1$ standard deviations.
This is the first observation of $D^{0}-\overline{D}{}^{0}$ oscillations in a single
measurement. The measured values of the mixing parameters are compatible with
and have better precision than those from previous measurements Aubert:2007wf
; Aaltonen:2007ac ; Zhang:2006dp , as shown in table 2.
Table 2: Comparison of our result with recent measurements from other experiments. The uncertainties include statistical and systematic components.
Exp. | $R_{D}$ ($10^{-3}$) | $y^{\prime}$ ($10^{-3}$) | $x^{\prime 2}$ ($10^{-3}$)
---|---|---|---
LHCb 2011 | $3.52\pm 0.15$ | $7.2\pm 2.4$ | $-0.09\pm 0.13$
BaBar Aubert:2007wf | $3.03\pm 0.19$ | $9.7\pm 5.4$ | $-0.22\pm 0.37$
Belle Zhang:2006dp | $3.64\pm 0.17$ | $0.6^{+4.0}_{-3.9}$ | $0.18^{+0.21}_{-0.23}$
CDF Aaltonen:2007ac | $3.04\pm 0.55$ | $8.5\pm 7.6$ | $-0.12\pm 0.35$
The result is still statistically limited and can be improved thanks to the
already available $2.1\,\text{fb}^{-1}$ sample of $pp$ collisions recorded by
LHCb at $\sqrt{s}=8\,\mathrm{TeV}$ during 2012. This additional sample will also allow a search for $C\!P$ violation in charm mixing with sensitivities never achieved before.
## References
* (1) K. Lande et al., Phys. Rev. 103, 1901 (1956).
* (2) H. Albrecht et al. (ARGUS Collaboration), Phys. Lett. B 192, 245 (1987).
* (3) A. Abulencia et al. (CDF Collaboration), Phys. Rev. Lett. 97, 242003 (2006).
* (4) B. Aubert et al. (BaBar Collaboration), Phys. Rev. Lett. 98, 211802 (2007).
* (5) M. Staric et al. (Belle Collaboration), Phys. Rev. Lett. 98, 211803 (2007).
* (6) T. Aaltonen et al. (CDF Collaboration), Phys. Rev. Lett. 100, 121802 (2008).
* (7) B. Aubert et al. (BaBar Collaboration), Phys. Rev. D 78, 011105 (2008).
* (8) B. Aubert et al. (BaBar Collaboration), Phys. Rev. Lett. 103, 211801 (2009).
* (9) B. Aubert et al. (BaBar Collaboration), Phys. Rev. D 80, 071103 (2009).
* (10) Y. Amhis et al. (Heavy Flavour Averaging Group), arXiv:1207.1158 and online update at http://www.slac.stanford.edu/xorg/hfag.
* (11) A. A. Alves Jr. et al. (LHCb Collaboration), JINST 3, S08005 (2008).
* (12) R. Aaij et al. (LHCb Collaboration), arXiv:1211.1230 [hep-ex].
* (13) L. M. Zhang et al. (Belle Collaboration), Phys. Rev. Lett. 96, 151801 (2006).
|
arxiv-papers
| 2013-02-12T19:19:33 |
2024-09-04T02:49:41.683395
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Angelo Di Canto",
"submitter": "Angelo Di Canto",
"url": "https://arxiv.org/abs/1302.2893"
}
|
1302.3005
|
# Entanglement of Tripartite States with Decoherence in Noninertial frames
Salman Khan [email protected] Department of Physics, Quaid-i-Azam
University, Islamabad 45320, Pakistan
(Jun 8 2011)
###### Abstract
The one-tangle and $\pi$-tangle are used to quantify the entanglement of a tripartite GHZ state in noninertial frames when the system interacts with a noisy environment in the form of a phase damping, phase flip or bit flip channel. It is shown that the two-tangles behave as in a closed system. The one-tangle and $\pi$-tangle have different behaviors in the three channels. In the case of the phase damping channel, depending on the kind of coupling, the sudden death of both the one-tangle and the $\pi$-tangle may or may not happen, whereas in the case of the phase flip channel the sudden death cannot be avoided. The effect of decoherence may be ignored in the limit of infinite acceleration when the system interacts with a bit flip channel. Furthermore, a sudden rebirth of the one-tangle and $\pi$-tangle occurs in the case of the phase flip channel, and it may be delayed when collective coupling is switched on.
PACS: 03.65.Ud; 03.65.Yz; 03.67.Mn;04.70.Dy
Keywords: Entanglement; Tripartite GHZ state; Decoherence; Noninertial frames
Entanglement; Tripartite GHZ state; Decoherence; Noninertial frames
###### pacs:
03.65.Ud; 03.65.Yz; 03.67.Mn;04.70.Dy
††preprint:
## I Introduction
Entanglement is not only one of the most striking properties of quantum
mechanics but also the core concept of quantum information and quantum
computation springer . The structure of all the subfields of quantum
information theory, such as teleportation of unknown states Bennett , quantum
key distribution Ekert , quantum cryptography Bennett2 and quantum
computation Grover ; Vincenzo , stand on quantum entanglement. The dynamics of
entanglement in various bipartite qubit and qutrit states have been
extensively studied under various circumstances. The study of entanglement of
various fields in the accelerated frames has taken into account recently and
valuable results about the behavior of entanglement have been obtained Alsing
; Ling ; Gingrich ; Pan ; Schuller ; Terashima . The effect of decoherence on
the behavior of entanglement under various quantum channels in noninertial
frames have been studied in Refs. Wang ; Salman . However, these studies are
limited only to bipartite qubit systems in the accelerated frames. Recently,
the dynamics of entanglement in tripartite qubit systems in noninertial frames
are studied in Refs. Hwang ; Wang2 ; Shamirzai . These studies show that the
degree of entanglement is degraded by the acceleration of the frames and, like
the two-tangles in inertial frames, the two-tangles are zero when one or two
observers are in the accelerated frames.
In this paper, I investigate the effect of decoherence on the tripartite
entanglement of Dirac field in accelerated frames by using a phase damping
channel, a phase flip channel and a bit flip channel. The effect of amplitude
damping channel and depolarizing channel on tripartite entanglement of Dirac
field in noninertial frames was recently studied in Ref. Winpin . I consider
three observers Alice, Bob and Charlie that initially share a GHZ tripartite
state. One of the observers, say Alice, stays stationary and the other two
observers move with a constant acceleration. I work out the effect of
acceleration and of decoherence by using the three channels on the initial
entanglement of the shared state between the observers. I consider different
kinds of coupling of each channel with the system. For example, in one case
each qubit interacts locally with the noisy environment. In the second case,
all the three qubits are influenced collectively by the same environment. I
show that the entanglement sudden death (ESD) Yu1 ; Yu2 can either be
completely avoided or slowed down, depending on how the system is coupled to a particular channel. Furthermore, I also show that the ESD can
happen faster and becomes independent of the acceleration when the system
interacts with a phase flip channel.
Table 1: Single-qubit Kraus operators for the phase damping, phase flip and bit flip channels.
channel | Kraus operators
---|---
phase damping | $E_{0}=\left(\begin{array}[c]{cc}1&0\\\ 0&\sqrt{1-p}\end{array}\right),\qquad E_{1}=\left(\begin{array}[c]{cc}0&0\\\ 0&\sqrt{p}\end{array}\right)$
phase flip | $E_{0}=\sqrt{1-p}\left(\begin{array}[c]{cc}1&0\\\ 0&1\end{array}\right),\qquad E_{1}=\sqrt{p}\left(\begin{array}[c]{cc}1&0\\\ 0&-1\end{array}\right)$
bit flip | $E_{0}=\sqrt{1-p}\left(\begin{array}[c]{cc}1&0\\\ 0&1\end{array}\right),\qquad E_{1}=\sqrt{p}\left(\begin{array}[c]{cc}0&1\\\ 1&0\end{array}\right)$
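For concreteness, the operators of Table 1 can be written down numerically. The following is a minimal sketch (assuming numpy; the helper name `kraus_ops` is illustrative and not part of the paper) that also checks the completeness relation $\sum_{n}K_{n}^{\dagger}K_{n}=I$ used below.

```python
import numpy as np

def kraus_ops(channel, p):
    """Single-qubit Kraus operators of Table 1 for decoherence parameter p."""
    identity = np.eye(2)
    if channel == "phase damping":
        return [np.diag([1.0, np.sqrt(1.0 - p)]), np.diag([0.0, np.sqrt(p)])]
    if channel == "phase flip":
        return [np.sqrt(1.0 - p) * identity, np.sqrt(p) * np.diag([1.0, -1.0])]
    if channel == "bit flip":
        return [np.sqrt(1.0 - p) * identity,
                np.sqrt(p) * np.array([[0.0, 1.0], [1.0, 0.0]])]
    raise ValueError("unknown channel: " + channel)

# completeness relation sum_n K_n^dagger K_n = I holds for every channel and every p
for ch in ("phase damping", "phase flip", "bit flip"):
    for p in (0.0, 0.3, 1.0):
        ops = kraus_ops(ch, p)
        assert np.allclose(sum(K.conj().T @ K for K in ops), np.eye(2))
```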
## II The system in noisy environment
The evolution of a density matrix of a system in a noisy environment is
described in terms of the Kraus operator formalism. The Kraus operators for a
single qubit system of the three channels that I use in this paper are given
in Table $1$. Each channel is parameterized by the decoherence parameter $p$
that has values between $0$ and $1$. For the lower limit of $p$, a channel has
no effect on the system and is said to be undecohered whereas for the upper
limit it introduces maximum error and is said to be fully decohered. The final
density matrix of a composite system when it evolves in a noisy environment is
given by the following equation
$\rho_{f}=\sum_{i,j,k,...}K_{i}K_{j}K_{k}...\rho...K_{k}^{{\dagger}}K_{j}^{{\dagger}}K_{i}^{{\dagger}},$
(1)
where $\rho$ is the initial density matrix of the system and $K_{n}$ are the
Kraus operators that satisfy the completeness relation
$\sum_{n}K_{n}^{{\dagger}}K_{n}=I$. I consider that Alice, Bob and Charlie
initially share the following maximally entangled GHZ state
$|\psi\rangle_{ABC}=\frac{1}{\sqrt{2}}\left(|000\rangle_{A,B,C}+|111\rangle_{A,B,C}\right),$
(2)
where the three letters label the three observers and the entries in each ket correspond to the three observers in alphabetical order. From the
perspective of an inertial observer, the Dirac fields can be expressed as a
superposition of Minkowski monochromatic modes
$|0\rangle_{M}=\bigoplus_{i}|0_{\omega_{i}}\rangle_{M}$ and
$|1\rangle_{M}=\bigoplus_{i}|1_{\omega_{i}}\rangle_{M}$ $\forall i$, with
AsPach ; Martin
$\displaystyle|0_{\omega_{i}}\rangle_{M}$ $\displaystyle=\cos r_{i}|0_{\omega_{i}}\rangle_{I}|0_{\omega_{i}}\rangle_{II}+\sin r_{i}|1_{\omega_{i}}\rangle_{I}|1_{\omega_{i}}\rangle_{II},$
$\displaystyle|1_{\omega_{i}}\rangle_{M}$ $\displaystyle=|1_{\omega_{i}}\rangle_{I}|0_{\omega_{i}}\rangle_{II},$ (3)
where $\cos r_{i}=\left(e^{-2\pi\omega_{i}c/a_{i}}+1\right)^{-1/2}$. The
parameters $\omega_{i}$, $c$ and $a_{i}$, in the exponential stand,
respectively, for Dirac particle’s frequency, speed of light in vacuum and
acceleration of the $i$th observer. In Eq. (3) the subscripts $I$ and $II$ of
the kets represent the Rindler modes in region $I$ and $II$, respectively, in
the Rindler spacetime diagram. The Minkowski mode that defines the Minkowski
vacuum is related to a highly nonmonochromatic Rindler mode rather than a
single mode with the same frequency Martin ; Bruschi . Let the angular
frequencies $\omega_{i}$ for the three observers be, respectively, given by
$\omega_{a}$, $\omega_{b}$ and $\omega_{c}$ then with respect to an inertial
observer $|0_{\omega_{a(b,c)}}\rangle_{A(B,C)}$ and
$|1_{\omega_{a(b,c)}}\rangle_{A(B,C)}$ are the vacuum and the first excited
states. Furthermore, let us suppose that each observer is equipped with a monochromatic detector that is sensitive only to his or her respective mode. In order to save space and present the relations in a simple form, I drop the frequency subscripts from each entry of the kets. Then substituting the
Rindler modes from Eq. (3) for the Minkowski modes in Eq. (2) for the two
observers in the accelerated frames gives
$\displaystyle|\psi\rangle_{ABC}$ $\displaystyle=\frac{1}{\sqrt{2}}(\cos
r_{b}\cos r_{c}|00000\rangle_{A,BI,BII,CI,CII}$ $\displaystyle+\cos r_{b}\sin
r_{c}|00011\rangle_{A,BI,BII,CI,CII}$ $\displaystyle+\sin r_{b}\cos
r_{c}|01100\rangle_{A,BI,BII,CI,CII}$ $\displaystyle+\sin r_{b}\sin
r_{c}|01111\rangle_{A,BI,BII,CI,CII}$
$\displaystyle+|11010\rangle_{A,BI,BII,CI,CII}).$ (4)
Let the noninertial frames of Bob and Charlie move with the same acceleration, so that $r_{b}=r_{c}=r$. Since the Rindler modes in region $II$ are inaccessible, tracing over those modes leaves the following initial mixed density matrix
$\displaystyle\rho$
$\displaystyle=\frac{1}{2}[\cos^{2}r_{b}\cos^{2}r_{c}|000\rangle\langle
000|+\cos^{2}r_{b}\sin^{2}r_{c}|001\rangle\langle 001|$
$\displaystyle+\sin^{2}r_{b}\cos^{2}r_{c}|010\rangle\langle
010|+\sin^{2}r_{b}\sin^{2}r_{c}|011\rangle\langle 011|$ $\displaystyle+\cos
r_{b}\cos r_{c}(|000\rangle\langle 111|+|111\rangle\langle
000|)+|111\rangle\langle 111|].$ (5)
Note that I have dropped the subscript $I$ that indicates the Rindler modes in
region $I$. In the rest of the paper, all calculations correspond to the
Rindler modes in region $I$.
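As a numerical cross-check of Eq. (5), one can build the five-mode state of Eq. (4) and trace out the region-$II$ modes directly. The following is a minimal sketch (assuming numpy; the helpers `ket` and `rho_initial` are illustrative names, not part of the paper):

```python
import numpy as np

def ket(bits):
    """Computational-basis ket |b_1 b_2 ... b_n> as a length-2**n vector."""
    v = np.zeros(2 ** len(bits))
    v[int("".join(str(b) for b in bits), 2)] = 1.0
    return v

def rho_initial(rb, rc):
    """Mixed state of Eq. (5): build Eq. (4) on (A, BI, BII, CI, CII) and trace out BII and CII."""
    cb, sb, cc, sc = np.cos(rb), np.sin(rb), np.cos(rc), np.sin(rc)
    psi = (cb * cc * ket([0, 0, 0, 0, 0]) + cb * sc * ket([0, 0, 0, 1, 1])
           + sb * cc * ket([0, 1, 1, 0, 0]) + sb * sc * ket([0, 1, 1, 1, 1])
           + ket([1, 1, 0, 1, 0])) / np.sqrt(2)
    rho = np.outer(psi, psi).reshape([2] * 10)   # axes: (A,BI,BII,CI,CII) kets, then bras
    rho = np.trace(rho, axis1=2, axis2=7)        # trace out BII
    rho = np.trace(rho, axis1=3, axis2=7)        # trace out CII (axes shifted after the first trace)
    return rho.reshape(8, 8)                     # density matrix on (A, BI, CI)

# example: for r_b = r_c = pi/6 the coherence <000|rho|111> equals cos(r_b)cos(r_c)/2
r = np.pi / 6
rho = rho_initial(r, r)
assert np.isclose(rho[0, 7], 0.5 * np.cos(r) ** 2)
assert np.isclose(rho[7, 7], 0.5)
```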
The degree of entanglement in bipartite qubit states is quantified by using
the concept of negativity Peres ; Horodecki1 . It is given by
$\mathcal{N}_{AB}=\left\|\rho_{AB}^{T_{B}}\right\|-1,$ (6)
where $T_{B}$ is the partial transpose over the second qubit $B$ and
$\left\|.\right\|$ gives the trace norm of a matrix. For a bipartite density
matrix $\rho_{m\nu,n\mu}$, the partial transpose over the second qubit $B$ is
given by $\rho_{m\mu,n\nu}^{T_{B}}=\rho_{m\nu,n\mu}$ and for the first qubit,
it can similarly be defined. On the other hand, the entanglement of a
tripartite state $|\psi\rangle_{ABC}$ is quantified by using the concept of
$\pi$-tangle, which is given by
$\pi_{ABC}=\frac{1}{3}(\pi_{A}+\pi_{B}+\pi_{C}),$ (7)
where $\pi_{A(BC)}$ is the residual entanglement and is given by
$\pi_{A}=\mathcal{N}_{A(BC)}^{2}-\mathcal{N}_{AB}^{2}-\mathcal{N}_{AC}^{2}.$
(8)
In Eq. (8), $\mathcal{N}_{AB(AC)}$ is a two-tangle and is given as the
negativity of the mixed density matrix
$\rho_{AB(AC)}=Tr_{C(B)}|\psi\rangle_{ABC}\langle\psi|$ whereas
$\mathcal{N}_{A(BC)}$ is a one-tangle and is defined as
$\mathcal{N}_{A(BC)}=\left\|\rho_{ABC}^{T_{A}}\right\|-1$.
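These definitions are straightforward to evaluate numerically. The following sketch (assuming numpy; the function names are illustrative and not from the paper) reproduces, for the inertial GHZ state of Eq. (2), the rest-frame values quoted below: $\mathcal{N}_{A(BC)}=1$, vanishing two-tangles and $\pi_{ABC}=1$.

```python
import numpy as np

def partial_transpose(rho, sys, dims):
    """Partial transpose of a density matrix over subsystem `sys` with local dimensions `dims`."""
    n = len(dims)
    t = rho.reshape(dims + dims)        # axes (i_1..i_n, j_1..j_n)
    t = np.swapaxes(t, sys, n + sys)    # transpose only the chosen subsystem
    return t.reshape(rho.shape)

def negativity(rho, sys, dims):
    """Eq. (6): ||rho^{T_sys}||_1 - 1; for a Hermitian matrix the trace norm is the sum of |eigenvalues|."""
    evals = np.linalg.eigvalsh(partial_transpose(rho, sys, dims))
    return float(np.sum(np.abs(evals)) - 1.0)

def trace_out(rho, sys, dims):
    """Reduced density matrix obtained by tracing out subsystem `sys`."""
    n = len(dims)
    t = np.trace(rho.reshape(dims + dims), axis1=sys, axis2=n + sys)
    d = int(np.prod([x for k, x in enumerate(dims) if k != sys]))
    return t.reshape(d, d)

dims = (2, 2, 2)
psi = np.zeros(8)
psi[0] = psi[7] = 1.0 / np.sqrt(2)                          # GHZ state of Eq. (2)
rho = np.outer(psi, psi)

one_tangles = [negativity(rho, s, dims) for s in range(3)]  # each equals 1
N_AB = negativity(trace_out(rho, 2, dims), 1, (2, 2))       # trace out Charlie; equals 0
N_AC = negativity(trace_out(rho, 1, dims), 1, (2, 2))       # trace out Bob; equals 0
pi_A = one_tangles[0] ** 2 - N_AB ** 2 - N_AC ** 2          # Eq. (8), equals 1
```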
Now, in the following I use the three channels to investigate the behavior of
entanglement of the initially entangled tripartite density matrix of Eq. (5)
in a noisy environment.
### II.1 Phase damping channel
When all the three qubits are locally coupled to a phase damping channel, then
Eq. (1) for the evolution of a tripartite state can be written as
$\rho_{f}=\sum_{i,j,k}(K_{i}K_{j}K_{k})\rho\left(K_{k}^{{\dagger}}K_{j}^{{\dagger}}K_{i}^{{\dagger}}\right).$
(9)
The Kraus operators in Eq. (9) with subscripts $i$, $j$, $k$ act,
respectively, on Alice’s, Bob’s and Charlie’s qubits. These operators are
formed from the single qubit Kraus operators of a phase damping channel and
are, respectively, given by $K_{i}=(E_{i}\otimes I\otimes I)$,
$K_{j}=(I\otimes E_{j}\otimes I)$, $K_{k}=(I\otimes I\otimes E_{k})$ with
$i,j,k=0,1$. Using the initial density matrix of Eq. (5) in Eq. (9), the
final density matrix of the system is given as
$\displaystyle\rho_{f}$ $\displaystyle=\frac{1}{2}\cos^{4}r|000\rangle\langle
000|+\frac{1}{8}\sin^{2}2r(|001\rangle\langle 001|+|010\rangle\langle 010|)$
$\displaystyle+\frac{1}{2}\sin^{4}r|011\rangle\langle
011|+\frac{1}{2}|111\rangle\langle 111|$
$\displaystyle+\frac{1}{2}\sqrt{(1-p_{0})(1-p_{1})(1-p_{2})}\cos^{2}r(|000\rangle\langle
111|+|111\rangle\langle 000|),$ (10)
where $p_{0}$, $p_{1}$ and $p_{2}$ are the decoherence parameters of the
locally coupled channels with the qubits of Alice, Bob and Charlie,
respectively. The three one-tangles that can straightforwardly be calculated
by using its definition are given as
$\displaystyle\mathcal{N}_{A(BC)}$
$\displaystyle=\frac{1}{4}(-2+2\cos^{4}r+2\sqrt{(1-p_{0})(1-p_{1})(1-p_{2})\cos^{4}r}$
$\displaystyle+2\sqrt{(1-p_{0})(1-p_{1})(1-p_{2})\cos^{4}r+\sin^{8}r}+\sin^{2}2r),$
(11) $\displaystyle\mathcal{N}_{B(AC)}$
$\displaystyle=\mathcal{N}_{C(AB)}=\frac{1}{16}(-1+8\sqrt{(1-p_{0})(1-p_{1})(1-p_{2})\cos^{4}r}$
$\displaystyle+\cos
4r+2\sqrt{16(1-p_{0})(1-p_{1})(1-p_{2})\cos^{4}r+\sin^{4}2r}).$ (12)
Figure 1: (color online) (a) The one-tangles, for $r=\pi/6$, are plotted
against the decoherence parameter $p$ when only one qubit is coupled to the
phase damping channel. (b) The one-tangles, for $r=\pi/6$, are plotted against $p$ when all the three qubits are influenced collectively by the same phase damping channel.
Note that $\mathcal{N}_{B(AC)}=\mathcal{N}_{C(AB)}$ shows that the two
accelerated subsystems are symmetrical for any value of the acceleration. It
can easily be checked that all the one-tangles reduce to
$[(1-p_{0})(1-p_{1})(1-p_{2})]^{1/2}$ when $r=0$ and for $p_{i}=r=0,$ the
result is $1$, which is the result obtained in the rest frames both for Dirac
and scalar fields Hwang ; Wang2 . It is also clear that for the $r=0$ case, the one-tangles go to zero when any of the three qubits is coupled to a fully decohered channel. Moreover, for $r=\pi/4$ all three one-tangles become
indistinguishable, that is,
$\mathcal{N}_{A(BC)}=\mathcal{N}_{B(AC)}=\mathcal{N}_{C(AB)}$ irrespective of
the value of decoherence parameter and of which qubit is coupled to the
channel. Also, in this case the one-tangle is lost only when either of the
three local channels is fully decohered. However, the initial entanglement is smaller than in the case $r=0$. For example, when only one qubit is locally coupled to the channel, the initial value of the one-tangle is $0.4$ for $p_{i}=0$. This shows the strong dependence of the one-tangle on the acceleration. The one-tangles are different for other values of the acceleration; to see their behavior, I plot them against the decoherence parameters for $r=\pi/6$ in Fig. $1$($a,b$). Fig. $1a$ shows the behavior of the one-tangles
when only one qubit is locally influenced by the channel. This behavior does
not change by switching the coupling of the channel from one qubit to another,
that is, whether it’s plotted against $p_{0}$ or $p_{1}$($p_{2}$), which means
that the one-tangles are symmetrical in the phase damping environment. It can
be seen from the figure that both $\mathcal{N}_{A(BC)}$ and $\mathcal{N}_{B(AC)}$($\mathcal{N}_{C(AB)}$) vary identically with increasing decoherence parameter; however, the acceleration affects them differently. As in the cases $r=0$ and $r=\pi/4$, the one-tangles go to zero only when the channel is fully decohered, which shows
that no sudden death occurs. The behavior of one-tangles when all the three
qubits are under the influence of collective environment
($p_{0}=p_{1}=p_{2}=p$) is shown in Fig. $1b$ for $r=\pi/6$. In this case, the
one-tangles are more heavily damped by the decoherence parameter than in Fig. $1a$ and become indistinguishable at $p=0.85$.
Next, I find the two-tangles according to the definition given above. Taking
the partial trace of the final density matrix of Eq. (10) over Bob’s or Charlie’s qubit leads to the following mixed state
$\rho_{AB(AC)}=\mathrm{Tr}_{C(B)}[\rho_{f}]=\frac{1}{2}(\cos^{2}r|00\rangle\langle 00|+\sin^{2}r|01\rangle\langle 01|+|11\rangle\langle 11|).$ (13)
Since none of the elements of the density matrix depends on the decoherence
parameter, the two-tangles are unaffected by the noisy environment. Similarly,
one can straightforwardly prove that tracing over Alice’s qubit leads to a
density matrix whose every element is independent of the decoherence
parameter. From these matrices it can easily be verified that
$\mathcal{N}_{AB}=\mathcal{N}_{AC}=\mathcal{N}_{BC}=0$. The zero value of all
the two-tangles verifies that no entanglement exists between any two
subsystems of the tripartite state.
Figure 2: (color online) The $\pi$-tangle is plotted against the decoherence
parameter for $r=\pi/6$ and $r=\pi/4$ both for one qubit coupled with the
phase damping channel and for all the three qubits coupled collectively with
the same phase damping channel.
I am now in a position to find the $\pi$-tangle by using its defining Eq. (7).
Since all the two-tangles are zero, the $\pi$-tangle simply becomes
$\displaystyle\pi_{ABC}$
$\displaystyle=\frac{1}{3}\left(\mathcal{N}_{A(BC)}^{2}+\mathcal{N}_{B(AC)}^{2}+\mathcal{N}_{C(AB)}^{2}\right)$
$\displaystyle=\frac{1}{384}[8(-2+2\cos^{4}r+2\sqrt{(1-p_{0})(1-p_{1})(1-p_{2})\cos^{4}r}$
$\displaystyle+2\sqrt{(1-p_{0})(1-p_{1})(1-p_{2})\cos^{4}r+\sin^{8}r}+\sin^{2}2r)^{2}$
$\displaystyle+(-1+8\sqrt{(1-p_{0})(1-p_{1})(1-p_{2})\cos^{4}r}+\cos 4r$
$\displaystyle+2\sqrt{16(1-p_{0})(1-p_{1})(1-p_{2})\cos^{4}r+\sin^{4}2r})^{2}].$
(14)
From Eq. (14), it can easily be verified that for the case of inertial frames
the $\pi$-tangle reduces to $(1-p_{i})$ when only one qubit is locally coupled
to the noisy environment. It is easy to check that this result goes to zero
only when the decoherence parameter is $1$. Moreover, for any acceleration,
the $\pi$-tangle does not depend on whether the stationary or the accelerated
qubit is coupled to the environment. In Fig. $2$, I plot the $\pi$-tangle for
different accelerations when only one qubit is coupled to the environment. It
can be seen that the $\pi$-tangle is lost only when the channel is fully
decohered and thus no sudden death occurs in this case as well. I have also
plotted the $\pi$-tangle for the case of collective environment in the same
figure. The graphs show that the sudden death can happen when the system is
influenced by collective environment.
### II.2 Phase flip channel
When the three qubits are locally coupled to a phase flip channel, the final
density matrix of the system is again given by Eq. (9). The subscripts and the
number of Kraus operators are the same as described for the previous case,
however, these are now made from single qubit phase flip Kraus operators. The
final density matrix of the system in this case becomes
$\displaystyle\rho_{f}$ $\displaystyle=\frac{1}{2}\cos^{4}r|000\rangle\langle
000|+\frac{1}{8}\sin^{2}2r(|001\rangle\langle 001|+|010\rangle\langle 010|)$
$\displaystyle+\frac{1}{2}(1-2p_{0})(1-2p_{1})(1-2p_{2})\cos^{2}r(|000\rangle\langle
111|+|111\rangle\langle 000|)$
$\displaystyle+\frac{1}{2}\sin^{4}r|011\rangle\langle
011|+\frac{1}{2}|111\rangle\langle 111|.$ (15)
Using the definition of one-tangle, the three one-tangles in this case become
$\displaystyle\mathcal{N}_{A(BC)}$
$\displaystyle=1/4(-2+2\cos^{2}r(\left|(1-2p_{0})(1-2p_{1})(1-2p_{2})\right|+\cos^{2}r)$
$\displaystyle+2\sqrt{(1-2p_{0})^{2}(1-2p_{1})^{2}(1-2p_{2})^{2}\cos^{4}r+\sin^{8}r}+\sin^{2}2r)$
$\displaystyle\mathcal{N}_{B(AC)}$
$\displaystyle=\mathcal{N}_{C(AB)}=\frac{1}{8}(-4+4\left|(1-2p_{0})(1-2p_{1})(1-2p_{2})\right|\cos^{2}r+4\cos^{4}r$
$\displaystyle+4\sin^{4}r+\sin^{2}2r+\sqrt{16(1-2p_{0})^{2}(1-2p_{1})^{2}(1-2p_{2})^{2}\cos^{4}r+\sin^{4}2r})$
(16)
Figure 3: (color online) (a) The one-tangles, for $r=\pi/6$, are plotted
against the decoherence parameter $p$ when only one qubit is coupled to the
phase flip channel. (b) The one-tangles, for $r=\pi/6$, are plotted against $p$ when all the three qubits are influenced collectively by the phase flip channel.
where $\left|.\right|$ represents the absolute value. Once again, the equality between the one-tangles for the accelerated frames shows the symmetrical nature of the problem. Also, for $r=0$, the three one-tangles are indistinguishable, as expected, and are given by $\left|(1-2p_{0})(1-2p_{1})(1-2p_{2})\right|$. This shows that the one-tangles vanish at $p_{i}=0.5$ irrespective of which qubit is locally coupled to the noisy environment. For the undecohered case, this reduces to the result of the static frames. Again, as in the case of the phase damping channel, all the one-tangles become indistinguishable at $r=\pi/4$; however, unlike the phase damping channel, they go to zero at $p_{i}=0.5$. This means that the phase flip channel destroys all the one-tangles earlier; in other words, the tripartite entanglement suffers sudden death. The one-tangles are shown in Fig. $3a$
for $r=\pi/6$ when only one qubit is locally influenced by the phase flip
channel. It is important to mention here that the channel effect is locally
symmetrical for both inertial and noninertial qubits. One can see that
initially the one-tangles are acceleration dependent and this dependence on
acceleration gradually decreases as the decoherence parameter increases and
the difference vanishes near $p_{i}=0.5$. This shows that the one-tangles are
heavily damped by the decoherence parameter in comparison to the damping
caused by the acceleration. Hence one can ignore the acceleration effect near
a $50\%$ decoherence level. All the one-tangles become zero at $p_{i}=0.5$,
which in fact happens for any acceleration. Interestingly enough, the one-
tangles’ sudden rebirth takes place for values of $p_{i}>0.5$ irrespective of
the acceleration of the frames. However, the regrowing rate is different for
different values of acceleration. This means that at $50\%$ decoherence level
all the tripartite entanglement first shifts to the environment and then part
of it shifts back to the system at values of decoherence parameter higher than
this value. In Fig. $3b$, the effect of collective environment on all the one-
tangles is shown for $r=\pi/6$ and $r=\pi/4$. It can be seen that the damping
caused by the collective environment is heavier than the damping in the one-qubit local coupling case, and sudden death prior to a $50\%$ decoherence level may happen. Moreover, the sudden rebirth of the one-tangles is also delayed.
One can further verify that taking the partial trace of Eq. (15) over Bob’s or Charlie’s qubit leads to the same result as given in Eq. (13). This means that no bipartite entanglement exists in this case either, because all the two-tangles are zero as before. The $\pi$-tangle can be found straightforwardly as
done in the previous case, which is one-third of the sum of the squares of the
three one-tangles. It is given by
$\displaystyle\pi_{ABC}$
$\displaystyle=\frac{1}{96}[2(-2+2\cos^{2}r(\left|(1-2p_{0})(1-2p_{1})(1-2p_{2})\right|+\cos^{2}r)$
$\displaystyle+2\sqrt{(1-2p_{0})^{2}(1-2p_{1})^{2}(1-2p_{2})^{2}\cos^{4}r+\sin^{8}r}+\sin^{2}2r)^{2}$
$\displaystyle+(-4+4\left|1-2p_{0}\right|\left|1-2p_{1}\right|\left|1-2p_{2}\right|\cos^{2}r+4\cos^{4}r+4\sin^{4}r$
$\displaystyle+\sin^{2}2r+\sqrt{16(1-2p_{0})^{2}(1-2p_{1})^{2}(1-2p_{2})^{2}\cos^{4}r+\sin^{4}2r})^{2}].$
(17)
Figure 4: (color online) The $\pi$-tangle is plotted against the decoherence
parameter for $r=\pi/6$ and $r=\pi/4$ both for one qubit coupled with the
phase flip channel and for all the three qubits coupled collectively with the
same phase flip channel.
I plot it for the one-qubit local coupling case for $r=\pi/6$, $\pi/4$ in Fig.
$4$. It can be seen that the $\pi$-tangle is strongly acceleration dependent
for the lower limit of the decoherence parameter. However, as the decoherence
parameter increases, the acceleration dependence decreases and it goes to
zero, irrespective of the value of acceleration, at $p_{i}=0.5$. The sudden
rebirth of the $\pi$-tangle is not as quick as in the case of one-tangles.
This behavior of the $\pi$-tangle does not depend on which one qubit is
coupled to the environment. I have also plotted the $\pi$-tangle for the case
of collective environment for $r=\pi/6$, $\pi/4$. The figure shows that the
sudden death is faster when the system evolves under collective environment
and the rebirth is quite delayed.
Figure 5: (color online) (a) The one-tangles, for $r=0$ and $r=\pi/4$, are
plotted against the decoherence parameter $p$ when only Alice’s qubit is
coupled to the bit flip channel. (b) For the same two values of acceleration,
the one-tangles are plotted against the decoherence parameter when the
collective environment is switched on.
### II.3 Bit flip channel
Finally, I investigate the effect of bit flip channel on the entanglement of
the tripartite system. When each qubit is locally coupled to the channel, the
final density matrix of the system is again given by Eq. (9) and the Kraus
operators have the same meaning as before. However, these are now made from
the single qubit bit flip channel in the same way as mentioned earlier.
Instead of writing the mathematical relations for one-tangles, which are quite
lengthy, I prefer to see the effect of decoherence only for special cases.
Unlike the two previous cases, the one-tangles for the accelerated observers
are not always equal. Similarly, for the $r=0$ case, the equality of the one-tangles
depends on which qubit is locally coupled to the noisy environment. For
example, when only Alice’s qubit is locally coupled to the channel, then
$\mathcal{N}_{B(AC)}=\mathcal{N}_{C(AB)}=1$ and
$\mathcal{N}_{A(BC)}=-1+2\sqrt{1-2p_{0}+2p_{0}^{2}}$ and if Bob’s qubit is
under the influence of the channel then
$\mathcal{N}_{A(BC)}=\mathcal{N}_{C(AB)}=1$ and
$\mathcal{N}_{B(AC)}=-1+2\sqrt{1-2p_{1}+2p_{1}^{2}}$. A similar relation
exists when only Charlie’s qubit is coupled to the environment. On the other
hand, all the three one-tangles are equal when the system is coupled to
collective environment and are given by
$-1+2\sqrt{2}\sqrt{(-1+p)^{2}p^{2}}+2\sqrt{1-6p+16p^{2}-20p^{3}+10p^{4}}$. A
similar situation arises for $r=\pi/4$ as well. That is, two one-tangles are
always equal and like for $r=0$ case, the equality between two one-tangles
depends on which qubit is under the action of the channel. However, unlike the $r=0$ case, none of the one-tangles is always equal to $1$; that is, every one-tangle depends on the decoherence parameter. Fig. $5a$ shows the dynamics of the
one-tangles for $r=0$, $\pi/4$ when only Alice’s qubit is locally coupled to
the channel. The acceleration dependence of the one-tangles can easily be seen
as compared to the $r=0$ case. Also, $\mathcal{N}_{B(AC)}=\mathcal{N}_{C(AB)}$ for
the whole range of decoherence parameter. Even though the behavior of each
one-tangle is symmetrical around the $50\%$ decoherence level, the damping is stronger for $\mathcal{N}_{A(BC)}$. The interesting features are the symmetrical
increase beyond $50\%$ decoherence level of each one-tangle back to its
initial value for a fully decohered channel and the occurrence of no sudden
death even in the limit of infinite acceleration. The behavior of one-tangles
for the case of the collective environment is shown in Fig. $5b$. Even in this case
of strongly decohered system, there is no sudden death of any one-tangle. The
second remarkable feature is the three regions where the subsystems become
indistinguishable, that is, at the lower limit, in the intermediate range and at the upper limit of the decoherence parameter. In Fig. $6$, the results of the one-tangles
are plotted for $r=\pi/6$ against the decoherence parameter when Alice’s qubit
interacts with the channel. Again, each one-tangle is symmetrical around
$p_{1}=0.5$; however, they are affected differently. It can be seen that there are particular pairs of values of the decoherence parameter at which all three one-tangles are the same and the subsystems become indistinguishable. Interestingly, in between these points, the one-tangles of the accelerated observers are less affected than the one-tangle of the static observer.
Figure 6: (color online) The one-tangles are plotted against the decoherence
parameter for $r=\pi/6$ when only Alice’s qubit is locally coupled to the bit
flip channel.
Like in the previous two cases, all the two-tangles are zero in this case as
well. The $\pi$-tangle can be found similar to the previous two cases. To see
how it is affected by the noisy environment, I plot it in Fig. $7$ against the
decoherence parameter for different values of the acceleration. The figure
shows the behavior of $\pi$-tangle for the case when only one qubit is
influenced by the channel. This behavior of the $\pi$-tangle does not change
by switching the coupling of the channel from one qubit to another, which
shows that the $\pi$-tangle is invariant with respect to the local coupling of
the channel with a single qubit. The acceleration dependence of the
$\pi$-tangle is quite clear from the figure. It can also be seen that the
damping caused by the decoherence parameter is heavy for small values of the
acceleration as compared to the case of large values of the acceleration. In
other words, the effect of decoherence diminishes as the acceleration
increases. This means that in the limit of infinite acceleration the
decoherence effect may be ignored. The behavior of $\pi$-tangle when the
system is under the action of collective environment has also been plotted in
Fig. $7$ for two different values of the acceleration. In comparison to single
qubit coupling, the damping is heavier, however, no sudden death occurs.
Figure 7: (color online) The $\pi$-tangle is plotted against the decoherence
parameter for $r=\pi/6$ and $r=\pi/4$ both for one (Alice) qubit coupled with
the bit flip channel and for all the three qubits coupled collectively with the
same bit flip channel.
## III Summary
In conclusion, the effects of different channels on the tripartite
entanglement of GHZ state in noninertial frames by using one-tangle and
$\pi$-tangle as the entanglement quantifiers are investigated. It is shown that none of the reduced density matrices of any two subsystems depends on the
decoherence parameter irrespective of the channel used. That is, the reduced
density matrices of the subsystems behave as closed systems for which the two-
tangles are always zero. This means that no entanglement exists between two
subsystems. In other words, the entanglement resource cannot be utilized by
any two observers without the cooperation of the third one. Under the
influence of phase damping and phase flip channels, the one-tangles of the
accelerated observers are indistinguishable for the whole range of
acceleration and decoherence parameters. Similarly, in the limit of infinite
acceleration all the three one-tangles are equal which shows that all the
subsystems are equally entangled. On the other hand, the situation is
different when the system is coupled to a bit flip channel. That is, the
equality between any two one-tangles depends on which qubit is coupled to the
channel. In other words, the share of entanglement between subsystems is
dependent on the coupling of a particular qubit to the channel. This asymmetry
of the one tangles can be used to identify the frame of the observer coupled
to the noisy environment. Also, under no circumstances do the three one-tangles become equal over the whole range of the decoherence parameter, except for a couple of values, when the system is coupled to a bit flip channel. In the case of
phase damping channel, no sudden death of any one-tangle happens under local
coupling of any qubit with the channel. However, it goes to zero when the
channel is fully decohered. The sudden death of one-tangles may happen when
the system interacts with collective phase damping channel. Under the action
of the phase flip channel, sudden death of the one-tangles occurs for every acceleration, followed by a sudden rebirth of the one-tangles that grows as the decoherence parameter increases. In the case of the bit flip channel, the one-
tangle always survives and the effect of decoherence may be ignored in the
range of large acceleration. The sudden death of the $\pi$-tangle may or may not happen under the action of the phase damping channel, whereas its sudden death cannot be avoided when the system is influenced by the phase flip channel. The $\pi$-tangle is never lost when the system is coupled to a bit flip channel, which means that the entanglement of the tripartite GHZ state is robust against bit flip noise. The fact that both the one-tangles and the $\pi$-tangle are never lost in the bit flip channel may be useful for faithful communication in noninertial
frames. Finally, it needs to be pointed out that the presence of a noisy
environment does not violate the CKW inequality Coffman
$\mathcal{N}_{AB}^{2}+\mathcal{N}_{AC}^{2}\leq\mathcal{N}_{A(BC)}^{2}$ for GHZ
initial state in the noninertial frames.
## References
* (1) The Physics of Quantum Information, D. Bouwmeester, A. Ekert, A. Zeilinger (Springer-Verlag, Berlin, 2000)
* (2) C. H. Bennett, G. Brassard, C. Crepeau, R. Jozsa, A. Peres, and W. K. Wootters, Phys. Rev. Lett. 70, 1895 (1993).
* (3) A. K. Ekert, Phys. Rev. Lett. 67, 661 (1991).
* (4) C. H. Bennett, G. Brassard and N. D. Mermin, Phys. Rev. Lett. 68, 557 (1992).
* (5) L. K. Grover, Phys. Rev. Lett. 79, 325 (1997).
* (6) D. P. DiVincenzo, Science 270, 255 (1995).
* (7) P. M. Alsing, I. Fuentes-Schuller, R. B. Mann, and T. E. Tessier, Phys. Rev. A 74, 032326 (2006).
* (8) Y. Ling et al., J. Phys. A: Math. Theor. 40, 9025 (2007).
* (9) R. M. Gingrich and C. Adami, Phys. Rev. Lett. 89, 270402 (2002).
* (10) Q. Pan and J. Jing, Phys. Rev. A 77, 024302 (2008).
* (11) I. Fuentes-Schuller and R. B. Mann, Phys. Rev. Lett. 95, 120404 (2005).
* (12) H. Terashima and M. Ueda, Int. J. Quantum Inf. 1, 93 (2003).
* (13) J. Wang and J. Jing, arXiv:1005.2865 (2010).
* (14) S. Khan and M. K. Khan, J. Phys. A: Math. Theor. 44, 045305 (2011).
* (15) M. R. Hwang, D. Park, and E. Jung, Phys. Rev. A 83, 012111 (2010).
* (16) J. Wang and J. Jing, Phys. Rev. A 83, 022314 (2010).
* (17) M. Shamirzai, B. N. Esfahani, and M. Soltani, arXiv:1103.0258.
* (18) W. Zhang and J. Jing, arXiv:1103.4903.
* (19) T. Yu and J. H. Eberly, Phys. Rev. B 66, 193306 (2002); 68, 165322 (2003).
* (20) T. Yu and J. H. Eberly, Phys. Rev. Lett. 93, 140404 (2004).
* (21) M. Aspachs, G. Adesso, and I. Fuentes, Phys. Rev. Lett. 105, 151301 (2010).
* (22) E. Martín-Martínez, L. J. Garay, and J. León, Phys. Rev. D 82, 064006 (2010); Phys. Rev. D 82, 064028 (2010).
* (23) D. E. Bruschi, J. Louko, E. Martín-Martínez, A. Dragan, and I. Fuentes, Phys. Rev. A 82, 042332 (2010).
* (24) A. Peres, Phys. Rev. Lett. 77, 1413 (1996).
* (25) M. Horodecki, P. Horodecki, and R. Horodecki, Phys. Lett. A 223, 1 (1996).
* (26) V. Coffman, J. Kundu, and W. K. Wootters, Phys. Rev. A 61, 052306 (2000).
|
arxiv-papers
| 2013-02-13T07:48:06 |
2024-09-04T02:49:41.691302
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Salman Khan",
"submitter": "Salman Khan",
"url": "https://arxiv.org/abs/1302.3005"
}
|
1302.3274
|
# Disentangling the effects of geographic and ecological isolation on genetic
differentiation
Gideon S. Bradburd1,a, Peter L. Ralph2,b, Graham M. Coop1,c
###### Abstract
Populations can be genetically isolated both by geographic distance and by
differences in their ecology or environment that decrease the rate of
successful migration. Empirical studies often seek to investigate the
relationship between genetic differentiation and some ecological variable(s)
while accounting for geographic distance, but common approaches to this
problem (such as the partial Mantel test) have a number of drawbacks. In this
article, we present a Bayesian method that enables users to quantify the
relative contributions of geographic distance and ecological distance to
genetic differentiation between sampled populations or individuals. We model
the allele frequencies in a set of populations at a set of unlinked loci as
spatially correlated Gaussian processes, in which the covariance structure is
a decreasing function of both geographic and ecological distance. Parameters
of the model are estimated using a Markov chain Monte Carlo algorithm. We call
this method Bayesian Estimation of Differentiation in Alleles by Spatial
Structure and Local Ecology (_BEDASSLE_), and have implemented it in a user-
friendly format in the statistical platform R. We demonstrate its utility with
a simulation study and empirical applications to human and teosinte datasets.
1Center for Population Biology, Department of Evolution and Ecology,
University of California, Davis, CA 95616
2Department of Molecular and Computational Biology, University of Southern
California, Los Angeles, CA 90089
[email protected]; [email protected]; [email protected]
Key Words: isolation by distance, isolation by ecology, partial Mantel test,
landscape genetics
## Introduction
The level of genetic differentiation between populations is determined by the
homogenizing action of gene flow balanced against differentiating processes
such as local adaptation, different adaptive responses to shared environments,
and random genetic drift. Geography often limits dispersal, so that the rate
of migration is higher between nearby populations and lower between more
distant populations. The combination of local genetic drift and distance-
limited migration results in local differences in allele frequencies, the
magnitude of which increases with geographic distance, resulting in a pattern
of isolation by distance (Wright, 1943). Extensive theoretical work has
described expected patterns of isolation by distance under a variety of models
of genetic drift and migration (Charlesworth et al., 2003) in both equilibrium
populations in which migration and drift reach a balance, and under non-
equilibrium demographic models, such as population expansion or various
scenarios of colonization (Slatkin, 1993). A range of theoretical approaches
have been applied, with authors variously computing probabilities of identity
of gene lineages (e.g. Malécot, 1975; Rousset, 1997) or correlations in allele
frequencies (e.g. Slatkin and Maruyama, 1975; Weir and Cockerham, 1984), or
working with the structured coalescent (e.g. Hey, 1991; Nordborg and Krone,
2002). Although these approaches differ somewhat in detail, their expectations
can all be described by a pattern in which allele frequencies are more similar
between nearby populations than between distant ones.
In addition to geographic distance, populations can also be isolated by
ecological and environmental differences if processes such as dispersal
limitations (Wright, 1943), biased dispersal (e.g. Edelaar and Bolnick, 2012),
or selection against migrants due to local adaptation (Wright, 1943; Hendry,
2004) decrease the rate of successful migration. Thus, in an environmentally
heterogeneous landscape, genome-wide differentiation may increase between
populations as either geographic distance or ecological distance increase. The
relevant ecological distance may be distance along a single environmental
axis, such as difference in average annual rainfall, or distance along a
discrete axis describing some landscape or ecological feature not captured by
pairwise geographic distance, such as being on serpentine versus non-
serpentine soil, or being on different host plants.
Isolation by distance has been observed in many species (Vekemans and Hardy,
2004; Meirmans, 2012), with a large literature focusing on identifying other
ecological and environmental correlates of genomic differentiation. The goals
of these empirical studies are generally 1) to determine whether an ecological
factor is playing a role in generating the observed pattern of genetic
differentiation between populations and, 2) if it is, to determine the
strength of that factor relative to that of geographic distance. The vast
majority of this work makes use of the partial Mantel test to assess the
association between pairwise genetic distance and ecological distance while
accounting for geographic distance (Smouse et al., 1986).
A number of valid objections have been raised to the reliability and
interpretability of the partial Mantel (e.g. Legendre and Fortin, 2010;
Guillot and Rousset, 2013). First, because the test statistic of the Mantel
test is a matrix correlation, it assumes a linear dependence between the
distance variables, and will therefore behave poorly if there is a nonlinear
relationship (Legendre and Fortin, 2010). Second, the Mantel and partial
Mantel tests can exhibit high false positive rates when the variables measured
are spatially autocorrelated (e.g., when an environmental attribute, such as
serpentine soil, is patchily distributed on a landscape), since this structure
is not accommodated by the permutation procedure used to assess significance
(Guillot and Rousset, 2013). Finally, in our view the greatest limitation of
the partial Mantel test in its application to landscape genetics may be that
it is only able to answer the first question posed above — whether an
ecological factor plays a role in generating a pattern of genetic
differentiation between populations — rather than the first _and_ the second —
the strength of that factor relative to that of geographic distance. By
attempting to control for the effect of geographic distance with matrix
regressions, the partial Mantel test makes it hard to simultaneously infer the
effect sizes of geography and ecology on genetic differentiation, and because
the correlation coefficients are inferred for the matrices of post-regression
residuals, the inferred effects of both variables are not comparable — they
are not in a common currency. We perceive this to be a crucial lacuna in the
populations genetics methods toolbox, as studies quantifying the effects of
local adaptation (e.g. Rosenblum and Harmon, 2011), host-associated
differentiation (e.g. Drès and Mallet, 2002; Gómez-Díaz et al., 2010), or
isolation over ecological distance (e.g. Andrew et al., 2012; Mosca et al.,
2012) all require rigorous comparisons to the effect of isolation by
geographic distance.
In this article, we present a method that enables users to quantify the
relative contributions of geographic distance and ecological distance to
genetic differentiation between sampled populations or individuals. To do
this, we borrow tools from geostatistics (Diggle et al., 1998) and model the
allele frequencies at a set of unlinked loci as spatial Gaussian processes. We
use statistical machinery similar to that employed by the Smooth and
Continuous AssignmenTs (SCAT) program designed by Wasser et al. (2004) and the BayEnv and BayEnv2 programs designed by Coop et al. (2010) and Günther and Coop (2013). Under this model, the allele frequency of a local population
deviates away from a global mean allele frequency specific to that locus, and
populations covary, to varying extent, in their deviation from this global
mean. We model the strength of the covariance between two populations as a
decreasing function of the geographic and ecological distance between them, so
that populations that are closer in space or more similar in ecology tend to
have more similar allele frequencies. We note that this model is not an
explicit population genetics model, but a statistical model – we fit the
observed spatial pattern of genetic variation, rather than modeling the
processes that generated it. Informally, we can think of this model as
representing the simplistic scenario of a set of spatially homogeneous
populations at migration-drift equilibrium under isolation by distance.
The parameters of this model are estimated in a Bayesian framework using a
Markov chain Monte Carlo algorithm (Metropolis et al., 1953; Hastings, 1970).
We demonstrate the utility of this method with two previously published
datasets. The first is a dataset from several subspecies of Zea mays, known
collectively as teosinte (Fang et al., 2012), in which we examine the
contribution of difference in elevation to genetic differentiation between
populations. The second is a subset of the Human Genome Diversity Panel (HGDP; Conrad et al., 2006; Li et al., 2008), for which we quantify the effect size
of the Himalaya mountain range on genetic differentiation between human
populations. We have coded this method — Bayesian Estimation of
Differentiation in Alleles by Spatial Structure and Local Ecology (_BEDASSLE_)
— in a user-friendly format in the statistical platform R (R Development Core
Team, 2013), and have made the code available for download at genescape.org.
## Methods
### Data
Our data consist of $L$ unlinked biallelic single nucleotide polymorphisms
(SNPs) in $K$ populations; a matrix of pairwise geographic distance between
the sampled populations ($D$); and one or more environmental distance matrices
($E$). The elements of our environmental distance matrix may be binary (e.g.,
same or opposite side of a hypothesized barrier to gene flow) or continuous
(e.g., difference in elevation or average annual rainfall between two sampled
populations). The matrices $D$ and $E$ can be arbitrary, so long as they are
nonnegative definite, a constraint satisfied if they are each matrices of
distances with respect to some metric. We summarize the genetic data as a set
of allele counts ($C$) and sample sizes ($S$). We use $C_{\ell,k}$ to denote
the number of observations of one of the two alleles at biallelic locus $\ell$
in population $k$ out of a total sample size of $S_{\ell,k}$ alleles. The
designation of which allele is counted (for convenience, we denote the counted
allele as allele ‘1’), is arbitrary, but must be consistent among populations
at the same locus.
### Likelihood Function
We model the data as follows. The $C_{\ell,k}$ observed ‘1’ alleles in
population $k$ at locus $\ell$ result from randomly sampling a number
$S_{\ell,k}$ of alleles from an underlying population in which allele 1 is at
frequency $f_{\ell,k}$. These population frequencies $f_{\ell,k}$ are
themselves random variables, independent between loci but correlated between
populations in a way that depends on pairwise geographic and ecological
distance. A flexible way to model these correlations is to assume that the
allele frequencies $f_{\ell,k}$ are multivariate normal random variables,
inverse logit-transformed to lie between 0 and 1. In other words, we assume
that $f_{\ell,k}$ is obtained by adding a deviation $\theta_{\ell,k}$ to the
global value $\mu_{\ell}$, and transforming:
$f_{\ell,k}=f(\theta_{\ell,k}+\mu_{\ell})=\frac{1}{1+\exp(-(\theta_{\ell,k}+\mu_{\ell}))}.$
(1)
Under this notation, $\mu_{\ell}$ is the transformed mean allele frequency at
locus $\ell$ and $\theta_{\ell,k}$ is the population- and locus-specific
deviation from that transformed mean. We can then write the binomial
probability of seeing $C_{\ell,k}$ of allele ’1’ at locus $\ell$ in population
$k$ as
$P\big{(}C_{\ell,k}|S_{\ell,k},f_{\ell,k}\big{)}=\binom{S_{\ell,k}}{C_{\ell,k}}f_{\ell,k}^{C_{\ell,k}}(1-f_{\ell,k})^{S_{\ell,k}-C_{\ell,k}}.$
(2)
In doing so, we are assuming that the individuals are outbred, so that the
$S_{\ell,k}$ alleles represent independent draws from this population
frequency. We will return to relax this assumption later.
To model the covariance of the allele frequencies across populations, we
assume that $\theta_{\ell,k}$ are multivariate normally distributed, with mean
zero and a covariance matrix $\Omega$ that is a function of the pairwise
geographic and ecological distances between the sampled populations. We model
the covariance between populations $i$ and $j$ as
$\Omega_{i,j}=\frac{1}{\alpha_{0}}\exp{\left(-(\alpha_{D}D_{i,j}+\alpha_{E}E_{i,j})^{\alpha_{2}}\right)},$
(3)
where $D_{i,j}$ and $E_{i,j}$ are the pairwise geographic and ecological
distances between populations $i$ and $j$, respectively, and $\alpha_{D}$ and
$\alpha_{E}$ are the effect sizes of geographic distance and ecological
distance, respectively. The parameter $\alpha_{0}$ controls the variance of
population specific deviate $\theta$ (i.e. at $D_{i,j}+E_{i,j}=0$), and
$\alpha_{2}$ controls the shape of the decay of the covariance with distance.
As alluded to above, as many separate ecological distance variables may be
included as desired, each with its own $\alpha_{E_{x}}$ effect size parameter,
but here we restrict discussion to a model with one.
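To illustrate equation (3), a minimal sketch of the covariance construction follows (written in Python for illustration; the _BEDASSLE_ package itself is implemented in R, and the variable names here are ours):

```python
import numpy as np

def covariance(D, E, a0, aD, aE, a2):
    """Equation (3): Omega_ij = (1/a0) * exp(-(aD*D_ij + aE*E_ij)^a2)."""
    return np.exp(-(aD * D + aE * E) ** a2) / a0

# toy example: 4 populations on a line with a binary "barrier" variable
coords = np.array([0.0, 1.0, 2.0, 3.0])
D = np.abs(coords[:, None] - coords[None, :])          # pairwise geographic distance
side = np.array([0, 0, 1, 1])                          # which side of a putative barrier
E = (side[:, None] != side[None, :]).astype(float)     # pairwise ecological distance
Omega = covariance(D, E, a0=0.5, aD=1.0, aE=0.5, a2=1.0)
# Omega is symmetric, has 1/a0 on the diagonal, and decays with both D and E
assert np.allclose(np.diag(Omega), 1.0 / 0.5)
```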
With this model, writing
$\alpha=(\alpha_{0},\alpha_{D},\alpha_{E},\alpha_{2})$, the likelihood of the
SNP counts observed at locus $\ell$ in all sampled populations can now be
expressed as
$P\big{(}C_{\ell},\theta_{\ell}|S_{\ell},\mu_{\ell},\alpha\big{)}=P\big{(}\theta_{\ell}|\Omega(\alpha)\big{)}\prod_{k=1}^{K}P\big{(}C_{\ell,k}|S_{\ell,k},f(\theta_{\ell},\mu_{\ell})\big{)}$
(4)
where we drop subscripts to indicate a vector (e.g. $C_{\ell}=(C_{\ell
1},\ldots,C_{\ell K})$), and $P(\theta_{\ell}|\Omega)$ is the multivariate
normal density with mean zero and covariance matrix $\Omega$.
The joint likelihood of the SNP counts $C$ and the transformed population
allele frequencies $\theta$ across all $L$ unlinked loci in the sampled
populations is just the product across loci:
$P\big{(}C,\theta|S,\mu,\alpha\big{)}=\prod_{\ell=1}^{L}P\big{(}\theta_{\ell}|\Omega(\alpha)\big{)}\prod_{k=1}^{K}P\big{(}C_{\ell,k}|S_{\ell,k},f(\theta_{\ell},\mu_{\ell})\big{)}.$
(5)
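The log of equation (5) can then be evaluated directly by combining the inverse-logit transform of equation (1), the binomial term of equation (2) and the multivariate normal density of the $\theta_{\ell}$. The following is an illustrative Python sketch (not the authors' R implementation; the names are ours):

```python
import numpy as np
from scipy.stats import binom, multivariate_normal

def log_likelihood(C, S, theta, mu, Omega):
    """Log of equation (5). C, S, theta are (L x K) arrays, mu has length L, Omega is (K x K)."""
    f = 1.0 / (1.0 + np.exp(-(theta + mu[:, None])))                  # equation (1)
    ll = binom.logpmf(C, S, f).sum()                                  # equation (2), over all loci and populations
    ll += sum(multivariate_normal.logpdf(theta[l], mean=np.zeros(Omega.shape[0]), cov=Omega)
              for l in range(theta.shape[0]))                         # P(theta_l | Omega(alpha))
    return ll

# toy usage with simulated data: 5 populations, 100 loci
rng = np.random.default_rng(0)
K, L = 5, 100
Omega = 0.5 * np.exp(-np.abs(np.arange(K)[:, None] - np.arange(K)[None, :]))   # any valid covariance
theta = rng.multivariate_normal(np.zeros(K), Omega, size=L)
mu = rng.normal(0.0, 1.0, size=L)
S = np.full((L, K), 20)
C = rng.binomial(S, 1.0 / (1.0 + np.exp(-(theta + mu[:, None]))))
print(log_likelihood(C, S, theta, mu, Omega))
```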
### Posterior Probability
We take a Bayesian approach to inference on this problem, and specify priors
on each of our parameters. We place exponential priors on $\alpha_{D}$ and
$\alpha_{E}$, each with mean 1; and a gamma prior on $\alpha_{0}$, with shape
and rate parameters both equal to 1. We took the prior on $\alpha_{2}$ to be
uniform between 0.1 and 2. Finally, we chose a Gaussian prior for each
$\mu_{\ell}$, with mean $0$, variance $1/\beta$, and a gamma distributed
hyper-prior on $\beta$ with shape and rate both equal to 0.001. For a
discussion of the rationale for these priors, please see the Appendix.
The full expression for the joint posterior density, including all priors, is
therefore given by
$\displaystyle
P(\theta,\mu,\alpha_{0},\alpha_{D},\alpha_{E},\alpha_{2},\beta|C,S)\propto\begin{split}\left(\prod_{\ell=1}^{L}P(\theta_{\ell,k}|\Omega)P(\mu_{\ell}|\beta)\prod_{k=1}^{K}P(C_{\ell,k}|S_{\ell,k},f_{\ell,k})\right)\\\
\qquad\qquad\times
P(\beta)P(\alpha_{0})P(\alpha_{D})P(\alpha_{E})P(\alpha_{2})\end{split}$ (6)
where the various $P$ denote the appropriate marginal densities, and the
proportionality is up to the normalization constant given by the right-hand
side integrated over all parameters.
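For concreteness, the log of the prior terms appearing in equation (6), with the hyperparameter values stated above, can be written as follows (an illustrative Python sketch assuming scipy; the actual implementation is in R):

```python
import numpy as np
from scipy.stats import expon, gamma, norm, uniform

def log_prior(alpha0, alphaD, alphaE, alpha2, mu, beta):
    """Sum of the log prior densities described above; mu may be an array of length L."""
    lp = expon.logpdf(alphaD) + expon.logpdf(alphaE)                 # Exp(mean 1) on alpha_D and alpha_E
    lp += gamma.logpdf(alpha0, a=1.0, scale=1.0)                     # Gamma(shape = rate = 1) on alpha_0
    lp += uniform.logpdf(alpha2, loc=0.1, scale=1.9)                 # Uniform(0.1, 2) on alpha_2
    lp += norm.logpdf(mu, loc=0.0, scale=1.0 / np.sqrt(beta)).sum()  # mu_l ~ Normal(0, 1/beta)
    lp += gamma.logpdf(beta, a=0.001, scale=1000.0)                  # Gamma(shape = rate = 0.001) on beta
    return lp
```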
### Markov chain Monte Carlo
We wish to estimate the posterior distribution of our parameters, particularly
$\alpha_{D}$ and $\alpha_{E}$ (or at least, their ratio). As the integral of
the posterior density given above cannot be solved analytically, we use Markov
chain Monte Carlo (MCMC) to sample from the distribution. We wrote a custom
MCMC sampler in the statistical platform R (R Development Core Team, 2013).
The details of our MCMC procedure are given in the Appendix.
### Model Adequacy
Our model is a simplification of the potentially complex relationships present
in the data, and there are likely other correlates of differentiation not
included in the model. Therefore, it is important to test the model’s fit to
the data, and to highlight features of the data that the model fails to
capture. To do this, we use posterior predictive sampling, using the set of
pairwise population $F_{ST}$ values as a summary statistic (Weir and Hill,
2002), as we are primarily interested in the fit to the differentiation
between pairs of populations. In posterior predictive sampling, draws of
parameters are taken from the posterior and used to simulate new datasets,
summaries of which can be compared to those observed in the original datasets
(Gelman et al., 1996).
Our posterior predictive sampling scheme proceeds as follows. For each
replicate of the simulations we
1. 1.
Take a set of values of $\beta$ and all $\alpha$ parameters from their joint
posterior (i.e. our MCMC output).
2. 2.
Compute a covariance matrix $\Omega$ from this set of $\alpha$ and the
pairwise geographic and ecological distance matrices from the observed data.
3. 3.
Use $\Omega$ to generate $L$ multivariate normally distributed $\theta$, and
use $\beta$ to generate a set of normally distributed $\mu$. These $\theta$
and $\mu$ are transformed using equation (1) into allele frequencies for each
population-locus combination, and binomially distributed allele counts are
sampled using those frequencies and the per-population sample sizes from the
observed data.
4. 4.
Calculate $F_{ST}$ between each pair of populations across all loci using the
count data. Specifically we use the $F_{ST}$ estimator defined by the equation
given on the top of page 730 in Weir and Hill (2002).
We then use various visualizations of $F_{ST}(i,j)$, e.g. plotted against
distance between $i$ and $j$, to compare the patterns in the observed dataset
to the patterns in the simulated datasets. This functions as a powerful and
informative visual summary of the ability of the model to describe the
observed data. Since $F_{ST}$ is a good measure of genetic differentiation,
users can assess how well the method is able to pick up general trends in the
data (e.g., increasing genetic differentiation with ecological or geographic
distance) and how well those general trends in the model match the slope of
their observed counterparts, and also identify specific pairwise population
comparisons that the model is doing a poor job describing. These latter may
help reveal other important processes that are generating genetic
differentiation between populations, such as unmeasured ecological variables,
or heterogeneity in population demography.
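As an illustration of the summary statistic in step 4 above, the sketch below computes pairwise $F_{ST}$ from count data. Note that, for brevity, it uses a simple Hudson-type ratio-of-averages estimator as a stand-in for the Weir and Hill (2002) estimator used in the paper, and is written in Python rather than R:

```python
import numpy as np

def pairwise_fst(C, S):
    """Pairwise Fst between populations from allele counts C and sample sizes S (both L x K).

    Uses a Hudson-type ratio-of-averages estimator across loci (assumes S > 1 everywhere);
    an illustrative stand-in for the Weir and Hill (2002) estimator used in the text.
    """
    L, K = C.shape
    p = C / S                                        # sample allele frequencies
    fst = np.zeros((K, K))
    for i in range(K):
        for j in range(i + 1, K):
            num = ((p[:, i] - p[:, j]) ** 2
                   - p[:, i] * (1 - p[:, i]) / (S[:, i] - 1)
                   - p[:, j] * (1 - p[:, j]) / (S[:, j] - 1))
            den = p[:, i] * (1 - p[:, j]) + p[:, j] * (1 - p[:, i])
            fst[i, j] = fst[j, i] = num.sum() / den.sum()
    return fst
```

The resulting matrix from each simulated dataset can then be plotted against the geographic or ecological distances, alongside the observed values, as described above.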
### Accounting for overdispersion
A consequence of the form of the covariance given in equation (3) is that all
populations have the same variance of allele frequencies about the global mean
(and this is $\Omega_{ii}=1/\alpha_{0}$). This will be the case in a
homogeneous landscape, but is not expected under many scenarios, such as those
characterized by local differences in population size, inbreeding rate,
historical bottlenecks, or population substructure. In practice, this leads to
overdispersion – particular populations deviating more from global means than
others. Indeed, in both empirical datasets examined in this paper, there are
clearly populations with much greater deviation in allele frequencies from the
global mean than predicted from their geographical and ecological distances.
To account for this, we will explicitly model the within-population
correlations in allelic identity due to varying histories. In so doing, we
simultaneously keep outlier populations from having an undue influence on our
estimates of $\alpha_{D}$ and $\alpha_{E}$, the effect sizes of the distance
variables measured, and highlight those populations that the model is
describing poorly. Introducing correlations accounts for overdispersion
because a population whose allele frequencies differ more from its predicted
frequencies across loci has individuals whose allelic identities are more
correlated (and the converse is also true). To see this, observe that, for
instance, if one completely selfing population and one outbred population each
have a given allele at frequency $p$, then the variance in sampled allele
frequency will be twice as high in the selfing population, since the number of
effective independent draws from the pool of alleles is half as large.
To introduce within-population correlations we assume that the allele
frequencies from which the allele counts $C_{\ell,k}$ are drawn are not fixed
at $f_{\ell,k}$, but rather randomly distributed, with mean given by
$f_{\ell,k}$ and variance controlled by another parameter. Specifically, given
$\mu_{\ell}$ and $\theta_{\ell,k}$, we suppose that the allele frequency at
locus $\ell$ in population $k$ is beta-distributed with parameters
$\Phi_{k}f_{\ell,k}$ and $\Phi_{k}(1-f_{\ell,k})$, where
$f_{\ell,k}=f(\mu_{\ell},\theta_{\ell,k})$ as before, and $\Phi_{k}$ is a
population-specific parameter, estimated separately in each population, that
controls the extent of allelic correlations between draws from individuals in
population $k$. To see why this introduces allelic correlations, consider the
following equivalent description of the distribution of $C_{\ell,k}$. We
sample the alleles one at a time; if we have drawn $n$ alleles; then the
$(n+1)^{\mathrm{st}}$ allele is either: a new draw with probability
$\Phi_{k}/(\Phi_{k}+n)$ (in which case it is of type ‘1’ with probability
$f_{\ell,k}$ and of type ’0’ with probability $1-f_{\ell,k}$); otherwise, it
is of the same type as a previously sampled allele, randomly chosen from the
$n$ sampled so far. Conceptually, each allele is either a “close relative” of
an allele already sampled, or else a “new draw” from the “ancestral
population” with allele frequency $f_{\ell,k}$. Smaller values of $\Phi_{k}$
lead to increased allelic correlations, which in turn increase the variance of
population allele frequencies.
Conveniently, the random frequency integrates out, so that the likelihood of
the count data becomes
$P(C_{\ell,k}|S_{\ell,k},f_{\ell,k}=f(\theta_{\ell,k},\mu_{\ell}))=\binom{S_{\ell,k}}{C_{\ell,k}}\frac{B(C_{\ell,k}+\Phi_{k}f_{\ell,k},S_{\ell,k}-C_{\ell,k}+\Phi_{k}(1-f_{\ell,k}))}{B(\Phi_{k}f_{\ell,k},\Phi_{k}(1-f_{\ell,k}))},$
(7)
where $B(x,y)$ is the beta function. This is known as the “beta-binomial”
model (Williams, 1975), and is used in a population genetics context by
Balding and Nichols (1995, 1997); see Balding (2003) for a review.
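In code, the log of equation (7) can be evaluated with log-beta and log-gamma functions. The following Python sketch (ours, not the R package) also checks that the binomial likelihood of equation (2) is recovered in the limit of large $\Phi_{k}$:

```python
import numpy as np
from scipy.special import betaln, gammaln
from scipy.stats import binom

def log_beta_binomial(C, S, f, phi):
    """Log of equation (7): beta-binomial log-likelihood of C counts out of S draws,
    with mean frequency f and population-specific dispersion parameter phi."""
    log_binom_coeff = gammaln(S + 1) - gammaln(C + 1) - gammaln(S - C + 1)
    return (log_binom_coeff
            + betaln(C + phi * f, S - C + phi * (1.0 - f))
            - betaln(phi * f, phi * (1.0 - f)))

# as phi -> infinity the beta-binomial approaches the binomial of equation (2)
assert np.isclose(log_beta_binomial(3, 10, 0.4, 1e7), binom.logpmf(3, 10, 0.4), atol=1e-3)
```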
The parameter $\Phi_{k}$ can be related to one of Wright’s $F$-statistics
(Wright, 1943). As derived in previous work (Balding and Nichols, 1995, 1997),
if we define $F_{k}$ by $F_{k}=1/(1+\Phi_{k})$ (equivalently $\Phi_{k}=(1-F_{k})/F_{k}$, with $0<F_{k}<1$), then $F_{k}$ is analogous to the inbreeding coefficient for population $k$ relative to the spatially predicted population frequencies (Cockerham and
Weir, 1986; Balding, 2003), with higher $F_{k}$ corresponding to higher
allelic correlation in population $k$, as one would expect given increased
drift (inbreeding) in that population. However, it is important to note that
$F_{k}$ cannot solely be taken as an estimate of the past strength of drift,
since higher $F_{k}$ would also be expected in populations that simply fit the
model less well. We report values of $F_{k}$ in the output and results, and
discuss the interpretation of this parameter further in the discussion.
We have coded this beta-binomial approach as an alternative to the basic model
(see Results for a comparison of both approaches on empirical data). To
combine estimation of this overdispersion model into our inference framework,
we place an inverse exponential prior on $\Phi_{k}$ (that is,
$1/\Phi_{k}\sim\mathop{\mbox{Exp}}(5)$). This prior and the beta-binomial
probability density function are incorporated into the posterior.
### Simulation Study
We conducted two simulation studies to evaluate the performance of the method.
In the first, we simulated data under the inference model, and in the second,
we simulated under a spatially explicit coalescent model.
For the datasets simulated under the model, each simulated dataset consisted
of 30 populations, each with 10 diploid individuals sequenced at 1000
polymorphic bi-allelic loci. Separately for each dataset, the geographic
locations of the populations were sampled uniformly from the unit square, and
geographic distances ($D_{i,j}$) were calculated as the Euclidean distance
between them. We also simulated geographically autocorrelated environmental
variables, some continuous, some discrete (see Figure 1a and c). For both
discrete and continuous variables we simulated datasets in which ecological
distance had no effect on genetic differentiation between populations; these
simulations tested whether our method avoids the false positive issues of the
partial Mantel test. We also simulated datasets with an effect of both
geographic and ecological distance on genetic distance across a range of
relative effect sizes (varying the ratio $\alpha_{E}/\alpha_{D}$) to test our
power to detect their relative effects. The study thus consisted of four
sections, each comprised of 50 datasets: discrete and continuous ecological
variables, with or without an effect of ecology.
Figure 1: a) Populations simulated in the unit square, colored by their value
of a continuous ecological variable. b) Pairwise $F_{ST}$ between simulated
populations from (a), colored by difference in their values of the continuous
ecological variable. c) Populations simulated in the unit square, colored by
their value of a binary ecological variable. d) Pairwise $F_{ST}$ between
simulated populations from (c), colored by difference in their values of the
binary ecological variable.
For each dataset, we set $\alpha_{0}=0.5$, and sampled $\alpha_{D}$ and
$\alpha_{2}$ from uniform distributions ($U(0.2,4)$ and $U(0.1,2)$
respectively); the choice of $\alpha_{E}$ varied, depending on the specific
scenario (described below). These parameters were chosen to give a range of
pairwise population $F_{ST}$ spanning an order of magnitude between
approximately 0.02 and 0.2, and a realistic allele frequency spectrum. The
covariance matrix $\Omega$ was calculated using these $\alpha$ and the
pairwise geographic and ecological distance matrices (normalized by their
standard deviations), and $\Omega$ was used to generate the multivariate,
normally distributed $\theta$. Values of $\mu$ were drawn from a normal
distribution with variance $1/\beta$, where $\beta=0.09$. Allele frequencies at each locus
were calculated for each population from the $\theta$ and $\mu$ using equation
(1), and SNP counts at each locus in each population were drawn from binomial
distributions parameterized by that allele frequency with the requirement that
all loci be polymorphic. We simulated under the following ecological
scenarios.
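Before turning to the specific ecological scenarios, the following R sketch illustrates this data-generation step under stated assumptions: the covariance form is the one implied by the Appendix ($\chi=\alpha_{0}\Omega$), a logistic transform stands in for equation (1), and the specific $\alpha$ values are example values from the ranges described above.

```r
# Hedged sketch of simulating one dataset under the inference model.
set.seed(2)
K <- 30; n_loci <- 1000; n_chrom <- 20               # populations, loci, allele copies per pop
coords <- matrix(runif(2 * K), ncol = 2)             # population locations in the unit square
d <- as.matrix(dist(coords))                         # pairwise Euclidean distances
D <- d / sd(d[upper.tri(d)])                         # normalized geographic distances
E <- matrix(0, K, K)                                 # ecological distances (zero-effect scenario)
alpha0 <- 0.5; alphaD <- 1; alphaE <- 0; alpha2 <- 1 # example values within the stated ranges
Omega <- (1 / alpha0) * exp(-(alphaD * D + alphaE * E)^alpha2)

Z <- chol(Omega)                                     # t(Z) %*% Z = Omega
theta <- matrix(rnorm(n_loci * K), n_loci, K) %*% Z  # rows are MVN(0, Omega) draws
mu <- rnorm(n_loci, 0, sqrt(1 / 0.09))               # beta = 0.09
f <- plogis(theta + mu)                              # stand-in for equation (1)
counts <- matrix(rbinom(n_loci * K, n_chrom, f), n_loci, K)  # binomial allele counts
# (In the study, loci that came out monomorphic would not be retained.)
```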
##### 1\. Continuous, Autocorrelated Ecological Variable
For the continuous case, we simulated the values of an ecological variable
across populations by sampling from a multivariate normal distribution with
mean zero and covariance between population $i$ and population $j$ equal to
$\mathop{\mbox{Cov}}(E(i),E(j))=\exp(-D_{i,j}/a_{c})$, where $a_{c}$
determines the scale of the autocorrelation (following Guillot and Rousset,
2013). For all simulations, we set $a_{c}=0.7$, to represent a reasonably
distributed ecological variable on a landscape.
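A corresponding sketch of this step, reusing the pairwise distance matrix `d` and population number `K` from the sketch above, might look as follows; the object names are ours.

```r
# Spatially autocorrelated continuous ecological variable:
# MVN(0, Sigma_E) with Sigma_E[i,j] = exp(-D[i,j] / a_c) and a_c = 0.7.
a_c <- 0.7
Sigma_E <- exp(-d / a_c)
env <- drop(rnorm(K) %*% chol(Sigma_E))   # one realization across the K populations
E <- abs(outer(env, env, "-"))            # pairwise ecological distances
```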
##### 2\. Binary Ecological Variable
A binary variable was produced by declaring the horizontal midline (the
‘equator’) of the unit square a barrier to dispersal, so that all populations on the
same side of the barrier were separated by an ecological distance of zero, and
all population pairs that spanned the equator were separated by an ecological
distance of 1.
##### A. Zero Effect Size
For each type of ecological variable, we produced 50 simulated datasets with
$\alpha_{E}=0$, so that ecological distance had no effect on the covariance of
$\theta$, and hence on genetic differentiation between populations. For each
of these simulated datasets, we performed a partial Mantel test in R using the
package ecodist (Goslee and Urban, 2007) with 1,000,000 permutations.
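A hedged example of this test, assuming ecodist's formula interface in which terms after the first predictor are partialled out; `gen_dist`, `eco_dist`, and `geo_dist` are placeholder names for one dataset's pairwise genetic, ecological, and geographic distance matrices.

```r
library(ecodist)
# Partial Mantel test of ecological distance on genetic distance,
# controlling for geographic distance, with 1,000,000 permutations.
pm <- mantel(as.dist(gen_dist) ~ as.dist(eco_dist) + as.dist(geo_dist),
             nperm = 1000000)
pm   # Mantel r and permutation p-values
```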
##### B. Varying Effect Size
We also produced 50 simulated datasets for each type of ecological variable by
simulating ten datasets for each value of $\alpha_{E}/\alpha_{D}$ from 0.2 to
1.0 in intervals of 0.2 (see Figure 1b and d). (As above, values of
$\alpha_{D}$ were drawn from a uniform distribution ($U(0.2,4)$), so this
determines $\alpha_{E}$.)
Figure 2: Populations simulated using a spatially explicit coalescent model
in the unit square. All simulated populations are indicated with black dots,
while populations that were sampled for inclusion in each dataset are
indicated by large black dots. All pairwise migration is indicated with gray
arrows. The barrier to dispersal is given by the red dotted line, across which
the standard migration rate was divided by a barrier effect size, which we
varied.
For the datasets simulated using a spatially explicit coalescent process,
allelic count data were simulated on a fixed lattice using the program ms
(Hudson, 2002). A total of 49 populations were simulated, evenly spaced in a
seven-by-seven grid, of which a subset of 25 populations were sampled to make
the final dataset; these 25 sampled populations were arranged in a five-by-
five grid, as shown in Figure 2. Each population consisted of 10 chromosomes
sampled at 1,000 polymorphic, unlinked, biallelic loci. Migration occurred
between neighboring populations (with no diagonal migration) at a rate of
$4Nm_{i,j}=4$. In all simulations, a longitudinal potential barrier to gene
flow was included just to the east of the central line (see Figure 2).
Migration rate between populations that were separated by this barrier was
diminished by dividing by some barrier effect size, which varied between
simulation sets. For 40 datasets, the barrier effect size was set to 1, so
that the barrier had no effect on genetic differentiation across it. The
barrier effect size was set to 5, 10, and 15, for 20 datasets each, for a
total of 100 datasets simulated under the spatial coalescent. For all
datasets, geographic distance was measured as the pairwise Euclidean distance
between populations on the lattice, and ecological distance was defined as
zero between populations on the same side of the barrier, and 1 between
populations on opposite sides.
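The exact ms command lines are not reproduced here, but as a hedged sketch, the symmetric migration matrix implied by this design (which could, for instance, be supplied to ms via its -ma option) might be built as follows; placing the barrier between the fourth and fifth columns is our reading of "just to the east of the central line".

```r
# Nearest-neighbor migration at 4Nm = 4 on a 7x7 lattice, with rates across
# the longitudinal barrier divided by a barrier effect size.
n_side <- 7
barrier_effect <- 10                      # one of the values used: 1, 5, 10, 15
pop_x <- rep(1:n_side, times = n_side)    # east-west position
pop_y <- rep(1:n_side, each  = n_side)    # north-south position
M <- matrix(0, n_side^2, n_side^2)
for (i in 1:n_side^2) {
  for (j in 1:n_side^2) {
    if (abs(pop_x[i] - pop_x[j]) + abs(pop_y[i] - pop_y[j]) == 1) {  # lattice neighbors
      rate <- 4
      crosses_barrier <- (pop_x[i] <= 4) != (pop_x[j] <= 4)          # assumed barrier position
      if (crosses_barrier) rate <- rate / barrier_effect
      M[i, j] <- rate
    }
  }
}
```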
All analyses on the simulated datasets were run for 1,000,000 MCMC iterations,
which appeared sufficient in most cases for convergence on the stationary
distribution. The chain was sampled every 1,000 generations, and all summary
statistics from the simulation study were calculated after a burn-in of 20%.
The metrics of method performance used on the datasets simulated under the
inference model were precision, accuracy, and coverage of the
$\alpha_{E}:\alpha_{D}$ ratio. We defined _precision_ as breadth of the 95%
credible set of the marginal posterior distribution; _accuracy_ as the
absolute value of the difference between the median value of the marginal
posterior distributions and the values used to simulate the data in each
dataset; and _coverage_ as the proportion of analyses for which the value used
to simulate the data fell within the 95% credible set of the marginal
posterior distribution for that parameter. For the datasets simulated under
the spatial coalescent process, we wished to assess the ability of the method
to accurately recover the relative strength of the barrier to gene flow.
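As a minimal sketch of how these three metrics could be computed for one analysis, given a vector `post` of post-burn-in posterior samples of the $\alpha_{E}:\alpha_{D}$ ratio and the true value `truth` used to simulate that dataset (both placeholders):

```r
performance <- function(post, truth) {
  ci <- quantile(post, c(0.025, 0.975))         # 95% credible set
  c(precision = unname(diff(ci)),               # breadth of the credible set
    accuracy  = abs(median(post) - truth),      # |posterior median - true value|
    covered   = as.numeric(truth >= ci[1] & truth <= ci[2]))
}
# Coverage for a simulation study is the mean of `covered` across its datasets.
```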
For approximately 30% of all analyses, the MCMC runs displayed obvious
difficulty with convergence within the first 1,000,000 generations. The signs
of potentially poor single-chain MCMC behavior that we looked for included:
acceptance rates that are too low or too high (generally 20-70% acceptance
rates are thought to be optimal); parameter trace plots that exhibit high
autocorrelation times; acceptance rates that have not plateaued by the end of
the analysis; and marginal distributions that are multimodal, or not
approximately normal (for a more complete discussion on MCMC diagnosis, please
see Gilks et al. (1996); for plots of example MCMC output, see Figures S5, S6,
and S7). In some cases, this was because the naive scales of the various
tuning parameters of the random-walk proposal mechanisms were inappropriate
for the particular dataset, and mixing was too slow over the number of
generations initially specified (as diagnosed by visualizing the parameter
acceptance rates of MCMC generations). This was addressed by re-running
analyses on those datasets using different random-walk tuning parameters, or
by increasing the number of generations over which the MCMC ran. In the other
cases, failure to converge was due to poor performance of the MCMC in regions
of parameter space too near the prior boundaries. Specifically, when the chain
was randomly started at values of some $\alpha$ parameters too close to zero,
it was unable to mix out of that region of parameter space. This problem was
addressed by re-running the analyses using different, randomly chosen initial
values for the $\alpha$ parameters. In our R package release of the code we
provide simple diagnostic tools for the MCMC output, and further guidance for
their use.
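For readers who wish to reproduce these checks on their own output, a hedged sketch of the single-chain diagnostics listed above, applied to placeholder objects `trace` (sampled values of one parameter) and `accepted` (a logical vector of proposal acceptances), is:

```r
mean(accepted)                                   # overall acceptance rate (roughly 20-70% is desirable)
running_acc <- cumsum(accepted) / seq_along(accepted)
plot(running_acc, type = "l")                    # has the acceptance rate plateaued?
plot(trace, type = "l")                          # trace plot: look for slow mixing
acf(trace)                                       # autocorrelation of sampled values
hist(trace, breaks = 50)                         # multimodal or strongly non-normal?
```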
### Empirical Data
To demonstrate the utility of this method, we applied it to two empirical
datasets: one consisting of populations of teosinte (Zea mays), the wild
progenitor of maize, and one consisting of human populations from the HGDP
panel. Both processed datasets are available for download at _genescape.org_.
See Tables S1 and S2 in the Supplementary Materials for names and metadata of
populations used.
The teosinte dataset consisted of 63 populations of between 2 and 30 diploid
individuals genotyped at 978 biallelic, variant SNP loci (Fang et al., 2012).
Each population was associated with a latitude, longitude, and elevation at
the point of sampling (see Figure S2 and Table S1). Pairwise geographic great-
circle distances and ecological distances were calculated for all pairs of
populations, where ecological distance was defined as the difference in
elevation between populations. Both pairwise distance variables were
normalized by their standard deviations.
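A hedged sketch of this distance construction (the haversine formula for great-circle distance is our choice; `lat`, `lon`, and `elev` are placeholder vectors of population coordinates and elevations):

```r
haversine_km <- function(lat1, lon1, lat2, lon2, R = 6371) {
  to_rad <- pi / 180
  dlat <- (lat2 - lat1) * to_rad
  dlon <- (lon2 - lon1) * to_rad
  a <- sin(dlat / 2)^2 + cos(lat1 * to_rad) * cos(lat2 * to_rad) * sin(dlon / 2)^2
  2 * R * asin(sqrt(a))                       # great-circle distance in kilometers
}
n <- length(lat)
geo <- outer(1:n, 1:n, function(i, j) haversine_km(lat[i], lon[i], lat[j], lon[j]))
eco <- abs(outer(elev, elev, "-"))            # pairwise differences in elevation
geo <- geo / sd(geo[upper.tri(geo)])          # normalize by their standard deviations
eco <- eco / sd(eco[upper.tri(eco)])
```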
The human dataset was the Eurasian subset of that available from the HGDP
(Conrad et al., 2006; Li et al., 2008), consisting of 33 populations of
between 6 and 45 individuals genotyped at 1000 biallelic, variant SNP loci
(see Figure S3 and Table S2). Pairwise geographic great-circle distances and
ecological distances were calculated for all pairs of populations, where
ecological distance was defined as 0 or 1 if the populations were on the same
or opposite side of the Himalaya mountain range, respectively. For the
purposes of our analysis the western edge of the Himalaya was defined at
$75^{\circ}$ East.
For comparison, the method was run on each of the two datasets both with and
without the beta-binomial overdispersion model. MCMC marginal traces were
examined visually to assess convergence on a stationary distribution. The
chain was thinned by sampling every 1000 generations, and the median and 95%
credible sets were reported on the marginal distribution after a burn-in of
20%. The MCMC analysis for the teosinte dataset without the overdispersion
model was run for 10 million generations; the analysis with the overdispersion
model was run for 15 million generations. For the HGDP dataset, the numbers of
generations were 25 million and 35 million, for the analyses without and with
the overdispersion model, respectively.
## Results
### Simulation Results
As described above, we conducted two simulation studies. The performance of
the method in inference of the parameters of greatest interest is given below.
First we note that, consistent with the results of Guillot and Rousset
(2013), the spatial autocorrelation in our ecological variable caused the
partial Mantel test to have a high false positive rate when $\alpha_{E}=0$, which
suggests that the partial Mantel test is not well calibrated to assess the
significance of ecological distance on patterns of genetic differentiation. At
a significance level of $p=0.05$, the false positive rate for the datasets
simulated under the inference model with a binary ecological distance variable
was 8%, and for the continuous ecological variable, the false positive error
rate was 24%. For the datasets simulated under the spatial coalescent process
with a barrier effect size of 1 (meaning that the barrier had no effect on
genetic differentiation across it), the false positive error rate was 37.5%
(see Figure S4).
The precision and accuracy results for the datasets simulated under the model
with a continuous and a discrete ecological variable are visualized in Figures
3a and 3b, respectively, across the six simulated values of the ratio
$\alpha_{E}/\alpha_{D}$. Median precision, accuracy, and coverage are reported
in Table 1.
The performance of the method on the datasets simulated using the spatial
coalescent model is given in Figure 4, which shows the posterior distributions
of $\alpha_{E}:\alpha_{D}$ ratio from each analyzed dataset over the four
barrier effect sizes.
Figure 3: a) Performance of the method for the 100 datasets simulated with a continuous ecological distance variable. b) Performance of the method for the 100 datasets simulated with a binary ecological distance variable. In each, the left panel depicts performance on the 50 datasets for which $\alpha_{E}$ was fixed at 0, and the right panel depicts performance on the 50 datasets for which $\alpha_{E}$ varied.

 | Sim Study 1A | Sim Study 1B | Sim Study 2A | Sim Study 2B
---|---|---|---|---
Precision | 0.041 | 0.30 | 0.15 | 0.96
Accuracy | 0.013 | 0.0066 | 0.031 | 0.033
Coverage | NA | 94% | NA | 94%
Table 1: Simulation Studies 1A and 1B were conducted with a continuous
ecological variable and $\alpha_{E}=0$ and $\alpha_{E}>0$, respectively.
Simulation Studies 2A and 2B were conducted with a binary ecological variable
and $\alpha_{E}=0$ and $\alpha_{E}>0$, respectively. Precision, accuracy, and
coverage are reported on inference of the $\alpha_{E}$ : $\alpha_{D}$ ratio.
_Precision_ is breadth of the 95% credible set of the marginal posterior
distribution (smaller values indicate better method performance). _Accuracy_
is the absolute value of the difference between the median value of the
marginal posterior distributions and the values used to simulate the data
(smaller values indicate better method performance). _Coverage_ is the
proportion of analyses for which the value used to simulate the data fell
within the 95% credible set of the marginal posterior distribution for that
parameter (higher values indicate better method performance). Coverage is not
reported for the simulations in which the effect size of the ecological
distance variable was fixed to zero ($\alpha_{E}=0$), as the parameter value
used to generate the data is on the prior bound on $\alpha_{E}$, and coverage
was therefore zero.

Figure 4: The marginal distributions on the
$\alpha_{E}/\alpha_{D}$ ratio from the analyses performed on the datasets
simulated using a spatially explicit coalescent process. The migration rate
between populations separated by the barrier was divided by a barrier effect
size, which varied among simulations. Inset: Pairwise $F_{ST}$, colored by
whether populations were on the same or opposite sides of a barrier to
dispersal, plotted against pairwise geographic distance for example datasets
for each of the 4 barrier effect sizes. a) Barrier effect size of 1 (n=40); b)
Barrier effect size of 5 (n=20); c) Barrier effect size of 10 (n=20); d)
Barrier effect size of 15 (n=20).
### Empirical Results
#### Teosinte Results
For the _Zea mays_ SNP dataset analysis, the mean and median of the posterior
ratio of the effect size of pairwise difference in elevation to the effect
size of pairwise geographic distance (i.e., the $\alpha_{E}:\alpha_{D}$ ratio)
were both 0.153, and the 95% credible set was 0.137 to 0.171 (see Figure S10a). The
interpretation of this ratio is that one thousand meters of elevation
difference between two populations has a similar impact on genetic
differentiation as around 150 (137–171) kilometers of lateral distance.
Accounting for overdispersion (using the beta-binomial model), we obtain
slightly different results, with a mean and median $\alpha_{E}:\alpha_{D}$
ratio of 0.205, and a 95% credible set from 0.180 to 0.233 (1,000 meters
difference in elevation $\approx$ 205 kilometers lateral distance, see Figure
S10b). Values of our $F$ statistics $F_{k}$ estimated across populations
ranged from $2\times 10^{-4}$ to 0.53, and are shown in Supplemental Figure
S2.
Posterior predictive sampling indicates that incorporating overdispersion with the
beta-binomial extension results in a better fit to the data (see Figure 5a and
b): the mean Pearson’s product moment correlation between the posterior
predictive datasets and the observed data without the beta-binomial extension
was 0.64, while the mean correlation with the beta-binomial model was 0.86
(see Figure S1a). The ability of the model to predict specific pairwise
population $F_{ST}$ is shown in Figure S8.
Figure 5: Posterior predictive sampling with 1,000 simulated datasets, using
pairwise $F_{ST}$ as a summary statistic of the allelic count data for: a) the
teosinte dataset, using the standard model; b) the teosinte dataset, using the
overdispersion model; c) HGDP dataset, standard model. d) HGDP dataset,
overdispersion model.
#### HGDP Results
For the human (HGDP) SNP dataset analysis, the mean posterior
$\alpha_{E}:\alpha_{D}$ ratio was $5.13\times 10^{4}$, the median was
$5.00\times 10^{4}$, and the 95% credible set was $3.09\times 10^{4}$ to
$7.85\times 10^{4}$ (see Figure S11a). However, this result seems to be
sensitive to outlier populations, as the beta-binomial extension of this
method on the same dataset yields significantly different results, with a mean
$\alpha_{E}:\alpha_{D}$ ratio of $1.35\times 10^{4}$, a median of $1.34\times
10^{4}$, and a 95% credible set from $1.09\times 10^{4}$ to $1.65\times
10^{4}$ (see Figure S11b). This latter result is broadly consistent with that
of Rosenberg (2011), who found an effect size ratio of $9.52\times 10^{3}$ in
a linear regression analysis that treated pairwise population comparisons as
independent observations. The interpretation of our result is that being on
the opposite side of the Himalaya mountain range has the impact of between
approximately 11 and 16 thousand kilometers of extra pairwise geographic
distance on genetic differentiation.
Under our beta-binomial extension, values of $F_{k}$ estimated across
populations ranged from $3.2\times 10^{-4}$ to 0.06. Population values of
$F_{k}$ are shown on the map in Figure S3.
Posterior predictive sampling again indicates a better fit to the data
including overdispersion (see Figure 5c and d): the mean Pearson’s product
moment correlation between the posterior predictive datasets and the observed
data without the beta-binomial extension was 0.88, while the mean correlation
with the beta-binomial model was 0.91 (see Figure S1b). The ability of the
model to predict specific pairwise population $F_{ST}$ is shown in Figure S9.
## Discussion
In this paper, we have presented a method that uses raw allelic count data to
infer the relative contribution of geographic and ecological distance to
genetic differentiation between sampled populations. The method performs quite
well: we have shown in simulations that it reliably and accurately recovers the
parameter values used to generate the data, and produces sensible models that give a
good fit to observed patterns of differentiation in real datasets. We feel
that our method has broad utility to the field of landscape genetics and to
studies of local adaptation, and holds a number of advantages over existing
methods (although see Wang et al. (2012) for another recent approach). It
allows users to simultaneously quantify effect sizes of geographic distance
and ecological distance (rather than assessing the significance of a
correlation once the effect of geography has been removed, as in the partial
Mantel test). Explicitly modeling the covariance in allele frequencies allows
users to accommodate non-independence in the data, and the method’s Bayesian
framework naturally accommodates uncertainty and provides a means of
evaluating model adequacy. The inclusion of overdispersion allows fit to a set
of populations with heterogeneous demographic histories. In addition, the
basic model presented here – a parametric model of spatial covariance in
allele frequencies – is extremely versatile, allowing for the inclusion of
multiple ecological or geographic distance variables, as well as great
flexibility in the function used to model the covariance.
### Simulation Study
Our method performed well in both simulation studies (see Figure 3, Table 1,
and Figure 4), and was able to effectively recognize and indicate when an
ecological variable contributes significantly to genetic differentiation. This
is in contrast to the partial Mantel test, which has a high false positive rate in
the presence of spatial autocorrelation of environmental variables (see Figure
S4).
For datasets simulated under the inference model, coverage, accuracy, and
precision were all satisfactory (see Table 1). The precision of our estimator
of $\alpha_{E}$ was generally lower for the discrete ecological variable,
likely due to its strong spatial structure.
For datasets simulated using the spatial coalescent, there were no true values
for the $\alpha_{E}:\alpha_{D}$ ratio to compare with those inferred by the
method. However, we note that the $\alpha_{E}:\alpha_{D}$ ratios estimated
across analyzed simulated datasets tracked the barrier effect sizes used to
simulate them, and that when the barrier had no effect on migration, the
marginal distributions on the $\alpha_{E}:\alpha_{D}$ ratio estimated were
stacked up against the prior bound at zero and had very low median values. The
width of the 95% credible set of the marginal posteriors grew with the barrier
effect size as a result of the flattening of the posterior probability surface
as the true parameter value increased. Overall, the method performed well on the
datasets simulated under a model different from that used for inference (and
presumably closer to reality).
An issue we observed in practice is that at some parameter values, different
combinations of $\alpha$ are essentially nonidentifiable — the form of the
covariance given in equation (3) sometimes allows equally reasonable fits at
different values of $\alpha_{2}$, or at different combinations of
$\alpha_{0}$, $\alpha_{D}$, and $\alpha_{E}$. (In other cases, all four
parameters can be well-estimated.) Even when this is the case, the
$\alpha_{E}$ : $\alpha_{D}$ ratio, which is the real parameter of interest,
remains constant across the credible region, even as $\alpha_{E}$ and
$\alpha_{D}$ change together to compensate for changes in $\alpha_{2}$ and
$\alpha_{0}$. Such ‘ridges’ in the likelihood surface are readily diagnosed by
viewing the trace plots and joint marginals of the $\alpha$ parameters (see
Figures S5 and S6).
### Empirical Results
#### Teosinte
The application of our method to the teosinte SNP dataset indicated that
difference in elevation has a potentially substantial contribution to genetic
differentiation between teosinte populations. Difference in elevation could be
correlated with another, as yet unmeasured ecological variable, so we cannot
claim to report a causal link, but these results are certainly suggestive,
especially in the light of the research on morphological adaptations in
teosinte to high altitude (Eagles and Lothrop, 1994).
The analysis of the teosinte SNP data with the beta-binomial extension of our
method shows a much better model fit, and highlights a number of populations
with particularly high $F_{k}$ values. These populations (highlighted in
Figure S2) all belong to the subspecies Zea mays mexicana, which primarily
occurs at higher altitudes and is hypothesized to have undergone significant
drift due to small effective population sizes or bottlenecks (Fukunaga et al.,
2005). In addition, a number of these populations occur in putative hybrid
zones between Zea mays mexicana and Zea mays parviglumis, a separate, co-
occurring subspecies (Heerwaarden et al., 2011). Like drift, admixture would
have the effect of increasing the variance in observed allele frequencies
around the expectation derived from the strict geographic/ecological distance
model, and would drive up the inferred $F_{k}$ parameters for admixed
populations.
#### HGDP
In the Human Genome Diversity Panel data we find a strong effect of separation
by the Himalayas on genetic differentiation, confirming previous results (e.g.
Rosenberg et al., 2005). To obtain a good fit to the data it is necessary to
model overdispersion (with the beta-binomial extension). This lack of model
fit of the basic model can be seen in the posterior predictive sampling in
Figure 5c and d, which highlights the importance of assessing model adequacy
during analysis. Under the beta-binomial extension the $\alpha_{E}/\alpha_{D}$
ratio estimates an effect of the Himalayas far greater than the distance
required simply to circumnavigate the Himalayas. We think this likely reflects
the fact that Eurasian populations are away from migration-selection
equilibrium, reflecting past large-scale population expansions (Keinan et al.,
2007).
With overdispersion included, the model appears to describe the data
reasonably well, suggesting that there is substantial heterogeneity between the
sampled populations beyond that dictated by geographic distance and separation
by the Himalayas. A number of populations stand out in their $F_{k}$ values, in
particular the Kalash, the Lahu, the Mozabites, the Hazara, and the Uygur
(highlighted in Figure S3). This is consistent with the known history of these
populations and previous work on these samples (Rosenberg et al., 2002), which
suggests that these populations are unusual for their geographic position
(that is, they depart from expectations of their covariance in allele
frequencies with their neighbors). The Hazara and Uygur populations are known
to be recently admixed between Central Asian and East Asian ancestral
populations. The Mozabite population has substantial recent admixture from
Sub-Saharan African populations (Rosenberg et al., 2002; Rosenberg, 2011). The
Kalash, who live in northwest Pakistan, are an isolated population with low
heterozygosity, suggesting a historically small effective population size.
Finally the Lahu have unusually low heterozygosity compared to the other East
Asian populations, suggesting that they too may have had an unusually low
effective population size. Thus our beta-binomial model, in addition to
improving the fit to the data, is successfully highlighting populations that
are outliers from simple patterns of isolation by distance.
#### Population-specific variance
As noted above, in both empirical datasets analyzed, the beta-binomial
extension to the basic model offers substantially better model fit. This could
in part reflect ecological variables not included in the analyses, in addition
to heterogeneity in demographic processes, both of which could shape genetic
variation in these populations by pushing population allele frequencies away
from their expectations under our simple isolation by distance and ecology
model. Our $F_{k}$ statistic provides a useful way to highlight populations
that show the strongest deviations away from our model, and to prevent these
deviations from obscuring environmental correlations or causing spurious
correlations. Therefore, we recommend that the extended model be used as the
default model for analyses.
### Limitations
The flexibility of this statistical model is accompanied by computational
expense. Depending on the number of loci and populations in a dataset, as well
as the number of MCMC generations required to accurately describe the
stationary distribution, analyses can take anywhere from hours to days.
Speedups could be obtained by parallelization or porting code to C. In
addition, as with any method that employs an MCMC algorithm, users should take
care to assess MCMC performance to ensure that the chain is mixing well, has
been run for a sufficient number of generations, and has converged on a
stationary distribution (Gilks et al., 1996). Users are well advised to run
multiple independent chains from random initial locations in parameter space,
and to compare the output of those analyses to confirm that all are describing
the same stationary distributions.
Our model rests on a number of assumptions, principal among which is that
population allele frequencies are well-represented by a spatially homogeneous
process, such as is obtained under mutation-migration equilibrium. That is,
we assume that current patterns of gene flow between populations are solely
responsible for observed patterns of genetic differentiation. Some examples of
biological situations that may violate the assumptions of our model include:
two populations that have higher genetic differentiation than expected based
on their pairwise geographic distance because they arrived in nearby locations
as part of separate waves of colonization; or two populations that have been
recently founded on either side of some landscape element that truly does act
as a barrier to gene flow, but that do not exhibit strong genetic
differentiation yet, because the system is not in equilibrium. In reality, we
expect that very few natural populations will conform perfectly to the
assumptions of our model; however, we feel that the method will provide valid
approximations of the patterns for many systems, and that it will be a useful
tool for teasing apart patterns of genetic variation in populations across
heterogeneous landscapes.
### Extensions
The flexibility of this method translates well into extendability. Among a
number of natural extensions the community might be interested in
implementing, we highlight a few here.
One natural extension is to incorporate different definitions of the
ecological distance between our populations. Even if two populations do not
differ in the value of an ecological variable, there may still be great
heterogeneity in the landscape between them. For example, a pair
of populations separated by the Grand Canyon might have nearly identical
elevations, but the cost that elevation imposes on migrants moving between them may
well be significant.
barrier variable, or to calculate least-cost paths between populations, and
use those distances in lieu of geographic distance. A more elegant solution
would be to use “isolation by resistance” distances, obtained by rasterizing
landscapes and employing results relating mean passage rates of random walks
in a heterogeneous environment to quantities from circuit theory in order to
calculate the conductance (ease of migration) between nodes on that landscape
(McRae and Beier, 2007). This method has the advantage of integrating over all
possible pathways between populations. Currently, users must specify the
resistance of landscape elements _a priori_ , but those resistance parameters
could be incorporated into our parametric covariance function, and estimated
along with the other parameters of our model in the same MCMC. This approach
carries great appeal, as it combines the conceptual rigor of accommodating
multiple migration paths with the methodological rigor of statistically
estimated, rather than user-specified, parameter values.
Another extension is the further relaxation of the assumption of process
homogeneity in decay of allelic covariance over geographic and ecological
distance. Specifically, the method currently requires that a single unit of
pairwise ecological distance translate into the same extent of pairwise
genetic differentiation between all population pairs. This assumption is
unlikely to be realistic in most empirical examples, especially if populations
are locally adapted. For example, individuals from populations adapted to high
elevation may be able to migrate more easily over topography than individuals
from populations adapted to low elevations. Such heterogeneity could be
accommodated by using different covariance functions for different, pre-
specified population pairings.
A final extension that could be integrated into this method is a model
selection framework, in which models with and without an ecological distance
variable, or with different combinations of ecological distance variables, can
be rigorously compared. Because our method is implemented in a Bayesian
framework, we could select between models by calculating Bayes factors (the
ratio of the marginal likelihoods of the data under two competing hypotheses)
(Dickey, 1971; Verdinelli and Wasserman, 1995). This approach would seem to
offer the best of both worlds: robust parameter inference that accommodates
uncertainty in addition to output that could be interpreted as definitive
evidence for or against the association of an ecological variable of interest
with genetic differentiation between populations.
### Conclusion
In closing, we present a tool that can be useful in a wide variety of
contexts, allowing a description of the landscape as viewed by the movements
of genetic material between populations. We urge users to be cautious in their
interpretation of results generated with this model. A correlation between
genetic differentiation and an ecological distance variable does not guarantee
a causal relationship, especially because unmeasured ecological variables may
be highly correlated with those included in an analysis. In addition, evidence
of a correlation between genetic differentiation and an ecological variable
may not be evidence of local adaptation or selection against migrants, as both
neutral and selective forces can give rise to an association between genetic
divergence and ecological distance.
Finally, we are making this method available online at genescape.org, and we
hope that users elaborate on the framework we present here to derive new
models that are better able to describe empirical patterns of isolation by
distance — both geographic and ecological.
### Acknowledgements
We thank Yaniv Brandvain, Marjorie Weber, Luke Mahler, Will Wetzel, B. Moore
and the Coop lab for their counsel, Jeff Ross-Ibarra and Torsten Günther for
their help with empirical datasets, J. Novembre and D. Davison for their code,
and Jon Wilkins and two anonymous reviewers for their comments on previous
drafts. This material is based upon work supported by the National Science
Foundation under Grant No. 1262645 (PR and GC), NSF GRFP No. 1148897 (GB), a
NIH Ruth L. Kirschstein NRSA fellowship F32GM096686 (PR), and a Sloan
Foundation fellowship (GC).
## Appendix
### Priors
We denote a gamma distribution with given shape and rate parameters as
$\Gamma(\mbox{shape},\mbox{rate})$, a normal distribution with given mean and
variance parameters as $N(\mbox{mean},\mbox{variance})$, an exponential
distribution with given rate parameter $\mathop{\mbox{Exp}}(\mbox{rate})$, and
a uniform distribution between given upper and lower boundaries as
$U(\mbox{lower},\mbox{upper})$. The priors specified on the parameters of this
model are: $\alpha_{0}\sim\Gamma(0.001,0.001)$;
$\alpha_{D}\sim\mathop{\mbox{Exp}}(1)$;
$\alpha_{E}\sim\mathop{\mbox{Exp}}(1)$; $\alpha_{2}\sim U(0.1,2)$; and
$\mu_{\ell}\sim N(0,1/\beta)$, with a hyper-prior
$\beta\sim\Gamma(0.001,0.001)$.
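For reference, drawing a single set of parameter values from these priors in R (with `n_loci` a placeholder for the number of loci) would look like:

```r
alpha0 <- rgamma(1, shape = 0.001, rate = 0.001)
alphaD <- rexp(1, rate = 1)
alphaE <- rexp(1, rate = 1)
alpha2 <- runif(1, 0.1, 2)
beta   <- rgamma(1, shape = 0.001, rate = 0.001)
mu     <- rnorm(n_loci, mean = 0, sd = sqrt(1 / beta))
```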
The priors on $\alpha_{D}$ and $\alpha_{E}$ were chosen to reflect the
assumption that there is some, and potentially very great, effect of isolation
by geography and ecology. The priors on $\alpha_{2}$, $\alpha_{0}$, and
$\beta$ were the same as those used by Wasser et al. (2004), and, in the case
of the latter two (on $\beta$ and $\alpha_{0}$), were chosen because they are
conjugate to the likelihood, so their parameters could be updated by
a Gibbs sampling step.
In early implementations of our method, we experimented with uniform priors on
$\alpha_{D}$ and $\alpha_{E}$ ($U(0,4)$), as used by Wasser et al. (2004)
(although they did not have a parameter analogous to $\alpha_{E}$). We
replaced these uniform priors with exponentials to reflect the fact that we
have no prior belief that there should be any upper bound to the effects
geographic or ecological distance may have on genetic differentiation. In
practice, we found that for all simulated and empirical datasets tested, there
was sufficient information in the data for the likelihood function to swamp
the effect of the priors — whether uniform or exponential — on $\alpha_{D}$
and $\alpha_{E}$.
However, in all analyses, we encourage users to visualize the marginal
distributions of each parameter at the end of a run and compare it to its
prior. If the marginal distribution looks exactly like the prior, there may be
insufficient information in the data to parameterize the model effectively,
and the prior may be having an unduly large impact on analysis. If the
marginal distribution for a parameter shows that it is “piling up” against its
prior’s hard bound (e.g., the marginal distribution on $\alpha_{E}$ has a
median of $10^{-3}$, close to its hard bound at 0), that may suggest that the
current form of the prior is not describing the natural distribution of the
parameter for that particular dataset well (e.g., $\alpha_{E}$ “wants” to be
zero, but the prior is constraining it). In both cases (the marginal posterior
and the prior have significant overlap; the prior is exhibiting an edge
effect), we suggest that the user experiment with different priors and/or
model parameterizations to see what effect they are having on inference.
### MCMC
Our MCMC scheme proceeds as follows. The chain is initiated at maximum
likelihood estimates (MLEs) for $\theta$ and $\mu$, and, for
$\alpha_{0},\alpha_{D},\alpha_{E},$ and $\alpha_{2}$, at values drawn randomly
from their priors. The multiplicative inverse of the empirical variance of the
MLEs of $\mu$ is used as the initial value of $\beta$.
In each generation one of
$\\{\mu,\beta,\theta,\alpha_{0},\alpha_{D},\alpha_{E},\alpha_{2}\\}$ is
selected at random to be updated.
The priors on $\beta$ and $\alpha_{0}$ are conjugate to their marginal
posteriors, and each is updated via a Gibbs sampling step. The updated value
of $\beta$ given the current $\mu$ is drawn from
$\beta\;|\;\mu_{1},\cdots,\mu_{L}\sim\Gamma\left(0.001+\frac{L}{2},~{}0.001+\frac{1}{2}\sum\limits_{\ell=1}^{L}\mu_{\ell}^{2}\right),$
(8)
and the updated value of $\alpha_{0}$ conditional on the current set of
$\theta$ is drawn from
$\alpha_{0}\;|\;\theta_{1},\cdots,\theta_{L}\sim\Gamma\left(1+\frac{Lk}{2},~{}1+\frac{1}{2}\sum\limits_{\ell=1}^{L}\theta_{\ell,k}\chi^{-1}\theta_{\ell,k}^{T}\right),$
(9)
where $k$ is the number of populations sampled, $L$ is the number of loci
sequenced, and
$\chi=\alpha_{0}\Omega=\exp\left(-(\alpha_{D}D_{i,j}+\alpha_{E}E_{i,j})^{\alpha_{2}}\right)$.
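As a minimal sketch of the Gibbs step for $\beta$ in equation (8) (the analogous step for $\alpha_{0}$ follows equation (9)), given the current vector `mu` of locus-specific means:

```r
gibbs_beta <- function(mu) {
  L <- length(mu)   # number of loci
  rgamma(1, shape = 0.001 + L / 2, rate = 0.001 + 0.5 * sum(mu^2))
}
```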
The remaining parameters are updated by a Metropolis-Hastings step; here we
describe the proposal mechanisms. The proposed updates to $\theta$ do not
affect each other, and so are accepted or rejected independently. Following
Wasser et al. (2004) (derived from (Christensen and Waagepetersen, 2002;
Møller et al., 1998)), the proposal is chosen as
$\theta_{\ell}^{\prime}=\theta_{\ell}+R_{\ell}Z$, where $R_{\ell}$ is a vector
of normally distributed random variables with mean zero and small variance
(controlled by the scale of the tuning parameter on $\theta$) and $Z$ is the
Cholesky decomposition of $\Omega$ (so that $ZZ^{T}=\Omega$). Under this
proposal mechanism, proposed updates to $\theta_{\ell}$ tend to stay within
the region of high posterior probability, so that more updates are accepted
and mixing is improved relative to a scheme in which the $\theta$ in each
population were updated individually.
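A hedged sketch of this proposal for one locus, with `scale_theta` the tuning parameter controlling the proposal variance (the names are ours):

```r
propose_theta <- function(theta_l, Omega, scale_theta) {
  Z <- chol(Omega)                                        # t(Z) %*% Z = Omega
  theta_l + drop(rnorm(length(theta_l), 0, scale_theta) %*% Z)
}
```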
Updates to $\alpha_{D}$, $\alpha_{E}$, and $\alpha_{2}$ are accomplished via a
random-walk sampler (adding a normally distributed random variable with mean
zero and small variance to the current value) (Gilks et al., 1996). Updates to
elements of $\mu_{\ell}$ are also accomplished via a random-walk sampler, and
again the updates to each locus are accepted or rejected independently.
In the overdispersion model, initial values of $\Phi_{k}$ are drawn from the
prior for each population. Updates are proposed one population at a time via a
random-walk step, and are accepted or rejected independently.
Well-suited values of tuning parameters (variances in the proposal
distributions for $\mu,\theta,\alpha_{D},\alpha_{E}$, and $\alpha_{2}$) and
the number of generations required to accurately describe the joint posterior
will vary from dataset to dataset, and so may require adjustment.
## References
* Andrew et al. (2012) Andrew, R. L., K. L. Ostevik, D. P. Ebert, and L. H. Rieseberg, 2012. Adaptation with gene flow across the landscape in a dune sunflower. Molecular ecology 21:2078–91.
* Balding (2003) Balding, D. J., 2003. Likelihood-based inference for genetic correlation coefficients. Theoretical Population Biology 63:221–230.
* Balding and Nichols (1995) Balding, D. J. and R. A. Nichols, 1995. A method for quantifying differentiation between populations at multi-allelic loci and its implications for investigating identity and paternity. Genetica 96:3–12.
* Balding and Nichols (1997) Balding, D. J. and R. A. Nichols, 1997. Significant genetic correlations among Caucasians at forensic DNA loci. Heredity 108:583–9.
* Charlesworth et al. (2003) Charlesworth, B., D. Charlesworth, and N. H. Barton, 2003. The effects of genetic and geographic structure on neutral variation. Annual Review of Ecology, Evolution, and Systematics 34:99–125.
* Christensen and Waagepetersen (2002) Christensen, O. F. and R. Waagepetersen, 2002. Bayesian prediction of spatial count data using generalized linear mixed models. Biometrics 58:280–6.
* Cockerham and Weir (1986) Cockerham, C. C. and B. S. Weir, 1986. Estimation of inbreeding parameters in stratified populations. Annals of Human Genetics 50:271–81.
* Conrad et al. (2006) Conrad, D. F., M. Jakobsson, G. Coop, X. Wen, J. D. Wall, N. A. Rosenberg, and J. K. Pritchard, 2006. A worldwide survey of haplotype variation and linkage disequilibrium in the human genome. Nature Genetics 38:1251–60.
* Coop et al. (2010) Coop, G., D. Witonsky, A. Di Rienzo, and J. K. Pritchard, 2010. Using environmental correlations to identify loci underlying local adaptation. Genetics 185:1411–23.
* Dickey (1971) Dickey, J., 1971. The weighted likelihood ratio, linear hypotheses on normal location parameters. The Annals of Mathematical Statistics 42:204–223.
* Diggle et al. (1998) Diggle, P. J., J. A. Tawn, and R. A. Moyeed, 1998. Model-based geostatistics. Journal of the Royal Statistical Society. Series C (Applied Statistics) 47:299–350.
* Drès and Mallet (2002) Drès, M. and J. Mallet, 2002. Host races in plant-feeding insects and their importance in sympatric speciation. Philosophical transactions of the Royal Society of London. Series B, Biological sciences 357:471–92.
* Eagles and Lothrop (1994) Eagles, H. A. and J. E. Lothrop, 1994. Highland maize from central Mexico-Its origin, characteristics, and use in breeding programs. Crop Science 34:11–19.
* Edelaar and Bolnick (2012) Edelaar, P. and D. I. Bolnick, 2012. Non-random gene flow: an underappreciated force in evolution and ecology. Trends in Ecology & Evolution 27:659–65.
* Fang et al. (2012) Fang, Z., T. Pyhäjärvi, A. L. Weber, R. K. Dawe, J. C. Glaubitz, J. D. Jesus, S. González, C. Ross-Ibarra, J. Doebley, and P. L. Morrell, 2012. Megabase-scale inversion polymorphism in the wild ancestor of maize. Genetics 191:883–894.
* Fukunaga et al. (2005) Fukunaga, K., J. Hill, Y. Vigouroux, Y. Matsuoka, J. Sanchez G., K. Liu, E. S. Buckler, and J. Doebley, 2005. Genetic diversity and population structure of teosinte. Genetics 169:2241–2254.
* Gelman et al. (1996) Gelman, A., X.-l. Meng, and H. Stern, 1996. Posterior predictive assessment of model fitness via realized discrepancies. Statistica Sinica 6:733–807.
* Gilks et al. (1996) Gilks, W., S. Richardson, and D. Spiegelhalter, 1996. Markov Chain Monte Carlo in Practice. Interdisciplinary Statistics. Chapman & Hall.
* Gómez-Díaz et al. (2010) Gómez-Díaz, E., P. F. Doherty Jr, D. Duneau, and K. D. McCoy, 2010. Cryptic vector divergence masks vector-specific patterns of infection: an example from the marine cycle of Lyme borreliosis. Evolutionary Applications 3:391–401.
* Goslee and Urban (2007) Goslee, S. C. and D. L. Urban, 2007. The ecodist package for dissimilarity-based analysis of ecological data. Journal of Statistical Software 22:1–19.
* Guillot and Rousset (2013) Guillot, G. and F. Rousset, 2013. Dismantling the Mantel tests. Methods in Ecology and Evolution 4:336–344.
* Günther and Coop (2013) Günther, T. and G. Coop, 2013. Robust identification of local adaptation from allele frequencies. arXiv:1209.3029v1.
* Hastings (1970) Hastings, W., 1970. Monte Carlo sampling methods using Markov chains and their applications. Biometrika 57:97–109.
* Heerwaarden et al. (2011) Heerwaarden, J. V., J. Doebley, W. H. Briggs, J. C. Glaubitz, M. M. Goodman, J. d. J. S. Gonzalez, and J. Ross-Ibarra, 2011. Genetic signals of origin, spread, and introgression in a large sample of maize landraces. PNAS 108:1088–1092.
* Hendry (2004) Hendry, A. P., 2004. Selection against migrants contributes to the rapid evolution of ecologically dependent reproductive isolation. Evolutionary Ecology Research 6:1219–1236.
* Hey (1991) Hey, J., 1991. A multi-dimensional coalescent process applied to multi-allelic selection models and migration models. Theoretical Population Biology 39:30–48.
* Hudson (2002) Hudson, R. R., 2002. Generating samples under a Wright-Fisher neutral model of genetic variation. Bioinformatics 18:337–338.
* Keinan et al. (2007) Keinan, A., J. C. Mullikin, N. Patterson, and D. Reich, 2007. Measurement of the human allele frequency spectrum demonstrates greater genetic drift in East Asians than in Europeans. Nature genetics 39:1251–5.
* Legendre and Fortin (2010) Legendre, P. and M.-J. Fortin, 2010. Comparison of the Mantel test and alternative approaches for detecting complex multivariate relationships in the spatial analysis of genetic data. Molecular Ecology Resources 10:831–844.
* Li et al. (2008) Li, J. Z., D. M. Absher, H. Tang, A. M. Southwick, A. M. Casto, S. Ramachandran, H. M. Cann, G. S. Barsh, M. Feldman, L. L. Cavalli-Sforza, and R. M. Myers, 2008. Worldwide human relationships inferred from genome-wide patterns of variation. Science 319:1100–4.
* Malécot (1975) Malécot, G., 1975. Heterozygosity and relationship in regularly subdivided populations. Theoretical Population Biology 8:212–241.
* McRae and Beier (2007) McRae, B. H. and P. Beier, 2007. Circuit theory predicts gene flow in plant and animal populations. PNAS 104:19885–90.
* Meirmans (2012) Meirmans, P. G., 2012. The trouble with isolation by distance. Molecular Ecology 21:2839–2846.
* Metropolis et al. (1953) Metropolis, N., A. W. Rosenbluth, M. N. Rosenbluth, A. H. Teller, and E. Teller, 1953. Equation of state calculations by fast computing machines. Journal of Chemical Physics 21:1087–1092.
* Møller et al. (1998) Møller, J., A. R. Syversveen, and R. P. Waagepetersen, 1998. Log Gaussian Cox Processes. Scandinavian Journal of Statistics 25:451–482.
* Mosca et al. (2012) Mosca, E., A. J. Eckert, E. A. Di Pierro, D. Rocchini, and N. La Porta, 2012. The geographical and environmental determinants of genetic diversity for four alpine conifers of the European Alps. Molecular Ecology 21:5530–5545.
* Nordborg and Krone (2002) Nordborg, M. and S. M. Krone, 2002. Separation of time scales and convergence to the coalescent in structured populations. Pp. 130–164, _in_ M. Slatkin and M. Veuille, eds. Modern Developments in Theoretical Populations Genetics. Oxford University Press, Oxford.
* R Development Core Team (2013) R Development Core Team, 2013. R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria. URL http://www.R-project.org. ISBN 3-900051-07-0.
* Rosenberg (2011) Rosenberg, N. A., 2011. A Population-Genetic Perspective on the Similarities and Differences among Worldwide Human Populations. Human Biology 83:659–684.
* Rosenberg et al. (2005) Rosenberg, N. A., S. Mahajan, S. Ramachandran, C. Zhao, J. K. Pritchard, and M. W. Feldman, 2005. Clines, clusters, and the effect of study design on the inference of human population structure. PLoS Genetics 1:660–71.
* Rosenberg et al. (2002) Rosenberg, N. A., J. K. Pritchard, J. L. Weber, H. M. Cann, K. K. Kidd, L. A. Zhivotovsky, and M. W. Feldman, 2002. Genetic structure of human populations. Science 298:2381–5.
* Rosenblum and Harmon (2011) Rosenblum, E. B. and L. J. Harmon, 2011. “Same same but different”: replicated ecological speciation at White Sands. Evolution 65:946–60.
* Rousset (1997) Rousset, F., 1997. Genetic differentiation and estimation of gene flow from F-statistics under isolation by distance. Genetics 145:1219–1228.
* Slatkin and Maruyama (1975) Slatkin, M. and T. Maruyama, 1975. The influence of gene flow on genetic distance. The American Naturalist 109:597–601.
* Slatkin (1993) Slatkin, M., 1993. Isolation by distance in equilibrium and non-equilibrium populations. Evolution 47:264–279.
* Smouse et al. (1986) Smouse, P. E., J. C. Long, and R. R. Sokal, 1986. Multiple regression and correlation extensions of the Mantel test of matrix correspondence. Systematic Zoology 35:627–632.
* Vekemans and Hardy (2004) Vekemans, X. and O. Hardy, 2004. New insights from fine-scale spatial genetic structure analyses in plant populations. Molecular Ecology 13:921–935.
* Verdinelli and Wasserman (1995) Verdinelli, I. and L. Wasserman, 1995. Computing Bayes factors using a generalization of the Savage-Dickey density ratio. Journal of the American Statistical Association 90:614–618.
* Wang et al. (2012) Wang, I. J., R. E. Glor, and J. Losos, 2012. Quantifying the roles of ecology and geography in spatial genetic divergence. Ecology Letters 16:175–182.
* Wasser et al. (2004) Wasser, S. K., A. M. Shedlock, K. Comstock, E. a. Ostrander, B. Mutayoba, and M. Stephens, 2004. Assigning African elephant DNA to geographic region of origin: Applications to the ivory trade. PNAS 101:14847–52.
* Weir and Cockerham (1984) Weir, B. S. and C. C. Cockerham, 1984. Estimating F-statistics for the analysis of population structure. Evolution 38:1358–1370.
* Weir and Hill (2002) Weir, B. S. and W. G. Hill, 2002. Estimating F-statistics. Annual Review of Genetics Pp. 721–750.
* Williams (1975) Williams, D. A., 1975. 394: The analysis of binary responses from toxicological experiments involving reproduction and teratogenicity. Biometrics 31:949–952.
* Wright (1943) Wright, S., 1943. Isolation by distance. Genetics 28:114–138.
## Supplemental material
Figure S1: Distribution of Pearson’s correlations between each posterior predictive simulated dataset and the observed data, highlighting the improved fit of the overdispersion model to describe: a) the teosinte dataset; b) the HGDP dataset.

Figure S2: Map of teosinte populations sampled, colored by their median estimated population-specific overdispersion parameter, $F_{k}$. The five populations with the highest values are noted.

Figure S3: Map of human populations included in the analysis, colored by their median estimated population-specific overdispersion parameter, $F_{k}$. The five populations with the highest values are noted. The dashed line denotes the line of longitude used to delimit the Himalayas.

Figure S4: Histograms of p-values produced by the partial Mantel test (with 1,000,000 permutations) on the 140 datasets for which the true contribution of ecological distance to genetic differentiation was zero. The black column indicates the type I error rate with a significance level of p=0.05 in: a) the datasets with a continuous ecological distance variable; b) the datasets with a binary ecological distance variable; c) the datasets simulated under the spatial coalescent with a barrier that had no effect on genetic differentiation.

Figure S5: Trace plots of the $\alpha$ parameters of the covariance matrix $\Omega$. Note the partial non-identifiability of the separate $\alpha$ parameters compared to the stability of the joint parameter, the $\alpha_{E}:\alpha_{D}$ ratio.

Figure S6: Joint marginal plots of the $\alpha$ parameters of the covariance matrix $\Omega$, colored by the MCMC generation in which they were sampled.

Figure S7: Acceptance rates for the parameters of the model that are updated with random-walk samplers, plotted over the duration of an individual MCMC run. Dashed green lines indicate the bounds of acceptance rates that indicate optimal mixing: 20%-70%.

Figure S8: Heatmapped matrices showing the performance of the model at all pairwise population comparisons. The posterior predictive p-value was defined as $1-2\times|0.5-ecdf(F_{ST_{obs}})|$, in which $ecdf(F_{ST_{obs}})$ is the empirical cumulative probability of the observed $F_{ST}$ between two populations from a distribution defined by the posterior predictive sample for that population comparison, representing the p-value of a two-tailed t-test. Higher p-values indicate better model fit. Populations are enumerated on the margins, and may be referenced in SuppMat Table 1. a) The standard model. b) The overdispersion model.

Figure S9: Heatmapped matrices indicating the performance of the model at all pairwise population comparisons. The posterior predictive p-value was defined as $1-2\times|0.5-ecdf(F_{ST_{obs}})|$, in which $ecdf(F_{ST_{obs}})$ is the empirical cumulative probability of the observed $F_{ST}$ between two populations from a distribution defined by the posterior predictive sample for that population comparison, representing the p-value of a two-tailed t-test. Higher p-values indicate better model fit. Populations are enumerated on the margins, and may be referenced in SuppMat Table 2. a) The standard model. b) The overdispersion model.
Figure S10: Trace plots of the marginal posterior estimates for the
$\alpha_{E}/\alpha_{D}$ ratio from MCMC analysis of the teosinte dataset.
Inset figures give the marginal densities and 95% credible set for the samples
after a burn-in of 20%. a) The standard model. b) The overdispersion model.
Figure S11: Trace plots of the marginal posterior estimates for the $\alpha_{E}/\alpha_{D}$ ratio from MCMC analysis of the HGDP dataset. Inset figures give the marginal densities and 95% credible set for the samples after a burn-in of 20%. a) The standard model. b) The overdispersion model.

– | Population name (sample size) | Latitude | Longitude | Elevation | Subspecies
---|---|---|---|---|---
1– | Km 1 El Crustel-Teloloapan (44) | 18.383 | 18.383 | 985 | parviglumis
2– | Amates Grandes (50) | 18.388 | 18.388 | 1110 | parviglumis
3– | Km 3 Amates Grandes-Teloloapan (48) | 18.394 | 18.394 | 1210 | parviglumis
4– | Km 72 Iguala-Arcelia (Km Alcholoa-Arcelia) (56) | 18.414 | 18.414 | 1506 | parviglumis
5– | Rincón del Sauce (56) | 18.35 | 18.35 | 1624 | parviglumis
6– | Ahuacatitlán (km 1.5 del entronque) (38) | 18.356 | 18.356 | 1528 | parviglumis
7– | Km 80 Huetamo-Villa Madero (50) | 19.063 | 19.063 | 832 | parviglumis
8– | Puerto de la Cruz (Km 119 Huetamo-V.Madero) (40) | 18.963 | 18.963 | 870 | parviglumis
9– | El Zapote (km 122 Huetamo-Caracuaro) (50) | 18.938 | 18.938 | 915 | parviglumis
10– | Puerto El Coyote (40) | 18.916 | 18.916 | 727 | parviglumis
11– | Km 135-136 Huetamo-Villa Madero (40) | 18.9 | 18.9 | 677 | parviglumis
12– | Cuirindalillo (km 142 Huetamo-Caracuaro) (42) | 18.883 | 18.883 | 697 | parviglumis
13– | Crucero Puertas de Chiripio (50) | 18.794 | 18.794 | 653 | parviglumis
14– | Quenchendio (km 151.5 Zitácuaro-Huetamo) (54) | 18.805 | 18.805 | 635 | parviglumis
15– | El Potrero (km 145.5 Zitácuaro-Huetamo) (40) | 18.82 | 18.82 | 654 | parviglumis
16– | La Crucita (km 135 Zitácuaro-Huetamo) (58) | 18.858 | 18.858 | 609 | parviglumis
17– | El Guayabo (km 132.5 Zitácuaro-Huetamo) (54) | 18.862 | 18.862 | 555 | parviglumis
18– | Km 107-108 Toluca-Altamirano (50) | 18.899 | 18.899 | 1422 | parviglumis
19– | Km 112 Toluca-Altamirano (46) | 18.895 | 18.895 | 1355 | parviglumis
20– | Km 119 Toluca-Altamirano (38) | 18.854 | 18.854 | 1015 | parviglumis
21– | Salitre-Monte de Dios (46) | 18.842 | 18.842 | 958 | parviglumis
22– | Taretan (La Perimera) (36) | 19.344 | 19.344 | 1170 | parviglumis
23– | Los Guajes (km 43 Zitácuaro-Huetamo) (54) | 19.231 | 19.231 | 985 | parviglumis
24– | 1 Km Norte de Santa Ana (54) | 19.281 | 19.281 | 1332 | parviglumis
25– | Km 8 Zuluapan-Tingambato (58) | 19.148 | 19.148 | 1178 | parviglumis
26– | Km 4 Zuluapan-Tingambato (60) | 19.146 | 19.146 | 1346 | parviglumis
27– | K2 Zacazonapan-Otzoloapan (56) | 19.079 | 19.079 | 1468 | parviglumis
28– | K22 Zacazonapan-Luvianos (EL Puente) (56) | 19.039 | 19.039 | 1085 | parviglumis
29– | Acatitlán-El Puente (50) | 19.029 | 19.029 | 1075 | parviglumis
30– | Queretanillo (56) | 19.551 | 19.551 | 1342 | parviglumis
31– | Km 33.5 Temascal-Huetamo (56) | 19.483 | 19.483 | 1100 | parviglumis
32– | Km 37 Temascal-Huetamo (40) | 19.464 | 19.464 | 1030 | parviglumis
33– | Casa Blanca (km 62 Huetamo-Villa Madero) (54) | 19.161 | 19.161 | 1268 | parviglumis
34– | San Antonio Tecomitl (4) | 19.217 | 19.217 | 2400 | mexicana
35– | Ozumba (4) | 19.05 | 19.05 | 2340 | mexicana
36– | Temamatla (6) | 19.183 | 19.183 | 2400 | mexicana
37– | Zoquiapan (4) | 19.317 | 19.317 | 2270 | mexicana
38– | Los Reyes La Paz (6) | 19.4 | 19.4 | 2200 | mexicana
39– | Miraflores (4) | 19.217 | 19.217 | 2200 | mexicana
40– | Tepetlixpa (4) | 19.017 | 19.017 | 2320 | mexicana
41– | El Pedregal (4) | 19.267 | 19.267 | 2500 | mexicana
42– | Mexicaltzingo (4) | 19.217 | 19.217 | 2600 | mexicana
43– | Santa Cruz (4) | 19.083 | 19.083 | 2425 | mexicana
44– | San Antonio (4) | 19.067 | 19.067 | 2440 | mexicana
45– | San Salvador (4) | 19.133 | 19.133 | 2425 | mexicana
46– | Tlachichuca (4) | 19.167 | 19.167 | 2355 | mexicana
47– | K3 San Salvador El Seco-Coatepec (4) | 19.117 | 19.117 | 2425 | mexicana
48– | San Nicolas B. Aires (4) | 19.167 | 19.167 | 2355 | mexicana
49– | San Felipe (4) | 19.517 | 19.517 | 2250 | mexicana
50– | 4 miles N of Hidalgo, Arroyo Zarco (4) | 19.7 | 19.7 | 2040 | mexicana
51– | 5-7 km SW Cojumatlan (4) | 20.1 | 20.1 | 1700 | mexicana
52– | Puente Gavilanes (4) | 24.017 | 24.017 | 1950 | mexicana
53– | La Estancia (4) | 21.5 | 21.5 | 1920 | mexicana
54– | Moroleon (4) | 20.083 | 20.083 | 2100 | mexicana
55– | Pinicuaro (8) | 20.05 | 20.05 | 2087.5 | mexicana
56– | Puruandiro (4) | 20.083 | 20.083 | 2000 | mexicana
57– | km 2 Puruandiro-Las Tortugas (4) | 20.117 | 20.117 | 1880 | mexicana
58– | 10 km S of Degollado (4) | 20.367 | 20.367 | 1625 | mexicana
59– | Ayotlan (4) | 20.417 | 20.417 | 1520 | mexicana
60– | Churitzio (8) | 20.175 | 20.175 | 1780 | mexicana
61– | El Salitre 1-2 km SE (4) | 20.183 | 20.183 | 1530 | mexicana
62– | Rancho El Tejocote (4) | 20.167 | 20.167 | 1750 | mexicana
63– | Villa Escalante (6) | 19.4 | 19.4 | 2320 | mexicana
Table S1: Metadata for populations used in the teosinte dataset.
– | Population name (sample size) | Latitude | Longitude | Side of the Himalayas
---|---|---|---|---
1– | Adygei (30) | 44 | 39 | W
2– | Basque (36) | 43 | 0 | W
3– | Italian (20) | 46 | 10 | W
4– | French (52) | 46 | 2 | W
5– | Orcadian (28) | 59 | -3 | W
6– | Russian (46) | 61 | 40 | W
7– | Sardinian (46) | 40 | 9 | W
8– | Tuscan (10) | 43 | 11 | W
9– | Bedouin (86) | 31 | 35 | W
10– | Druze (78) | 32 | 35 | W
11– | Mozabite (50) | 32 | 3 | W
12– | Palestinian (88) | 32 | 35 | W
13– | Balochi (44) | 30.5 | 66.5 | W
14– | Brahui (46) | 30.5 | 66.5 | W
15– | Burusho (48) | 36.5 | 74 | W
16– | Hazara (40) | 33.5 | 70 | W
17– | Kalash (44) | 36 | 71.5 | W
18– | Makrani (48) | 26 | 64 | W
19– | Pathan (40) | 33.5 | 70.5 | W
20– | Sindhi (44) | 25.5 | 69 | W
21– | Cambodian (16) | 12 | 105 | E
22– | Dai (18) | 21 | 100 | E
23– | Daur (14) | 48.5 | 124 | E
24– | Han (64) | 32.5 | 114 | E
25– | Hezhen (16) | 47.5 | 133.5 | E
26– | Japanese (50) | 38 | 138 | E
27– | Lahu (12) | 22 | 100 | E
28– | Miao (10) | 28 | 109 | E
29– | Mongola (18) | 48.5 | 119 | E
30– | Naxi (14) | 26 | 100 | E
31– | Oroqen (16) | 50.5 | 126.5 | E
32– | She (18) | 27 | 119 | E
33– | Tu (18) | 36 | 101 | E
34– | Tujia (18) | 29 | 109 | E
35– | Uygur (18) | 44 | 81 | E
36– | Xibo (16) | 43.5 | 81.5 | E
37– | Yakut (46) | 63 | 129.5 | E
38– | Yi (18) | 28 | 103 | E
Table S2: Metadata for populations used from the HGDP dataset.
|
arxiv-papers
| 2013-02-13T23:50:23 |
2024-09-04T02:49:41.707046
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Gideon Bradburd, Peter Ralph, Graham Coop",
"submitter": "Gideon Bradburd",
"url": "https://arxiv.org/abs/1302.3274"
}
|
1302.3340
|
# How to reduce the crack density in drying colloidal material?
François Boulogne1,2, Frédérique Giorgiutti-Dauphiné1, Ludovic Pauchard1
###### Abstract
The drying of a colloidal dispersion can result in a gel phase defined as a
porous matrix saturated in solvent. During the drying process, high mechanical
stresses are generated. When these stresses exceed the strength of the
material, they can be released in the formation of cracks. This process
strongly depends on both the mechanical properties of the material and the way
the gel consolidates. In this report, we give experimental evidence that the number of cracks formed in the consolidating film depends on the drying rate, the nature of the solvent, and the mechanical properties of the colloidal particles.
_1 UPMC Univ Paris 06, Univ Paris-Sud, CNRS, F-91405. Lab FAST, Bat 502,
Campus Univ, Orsay, F-91405, France.
2 [email protected] _
## 1 Introduction
Coatings are usually made through the deposition of a volatile liquid that
contains dispersed colloidal particles. The dry coating is obtained through
evaporation of the volatile liquid. However, crack patterns often affect the
final quality of the coating and often need to be avoided. Numerous studies of cracks induced by the drying of colloidal suspensions have been carried out over the last two decades. Nevertheless, the relation between the stress release and
the resulting crack pattern remains unclear. In particular, during the first stage of the drying process, particles concentrate until the formation of a porous matrix saturated with solvent. Capillary shrinkage at the evaporation surface, limited by adhesion on a substrate, results in a tensile stress high enough to compete with the material cohesion (i.e. the critical stress for fracture). When the stress reaches a threshold value, the material
responds by a re-organisation of its structure at different scales in order to
release the increasing stress. At the mesoscopic scale, the stress release can
lead to the formation of various spatial patterns, like cracks. The resulting
crack patterns depend notably on the following parameters:
* •
the thickness of the layer in which the elastic energy of the system is
stored[11],
* •
the permeability of the porous media, e.g. pore size, controlling the
capillary shrinkage[3],
* •
the presence of surfactants possibly reducing the capillary pressure[3, 10],
* •
the friction on the substrate[8],
* •
the drying kinetics,
* •
the nature of the solvent flowing through the porous matrix[3, 2],
* •
the mechanical properties of the porous matrix[13], and thereby the interaction between particles[14], the particle toughness, size, and polydispersity, which affect the elasto-plastic and viscoelastic behaviour.
In the following we focus particularly on the effect of (i) the drying rate,
(ii) the nature of the solvent in the porous media, and (iii) the mechanical
properties of the colloidal particles on the resulting crack patterns.
## 2 Materials and methods
The first system is an aqueous suspension made of silica particles, Ludox
HS-40 purchased from Sigma-Aldrich. The radius of the particles is $9\pm 2$ nm
and the density $2.36\times 10^{3}$ kg/m3. The mass fraction of the initial
dispersion was estimated to $40.5$% by a dry extract. The $pH$ is about $9.0$,
the particle surface bears a high negative charge density and DLVO (Derjaguin,
Landau, Verwey and Overbeek) theory is expected to apply.
Two different cosolvents are added to the aqueous dispersion: glycerol
(purity: $99.0$%, from Merck) and glucose (purity: $99.5$%, from Sigma). These
solvents exhibit high miscibility properties in water and low volatility[5,
12] compared to water; thus, we assume glycerol and glucose as non-volatile
compounds. In addition, these cosolvents exhibit intrinsic diffusion coefficients that differ strongly from that of water (the intrinsic diffusion coefficient of glycerol in water is $\sim 50\times$ the intrinsic diffusion coefficient of water in glycerol[6]). The cosolvent/water weight ratio is prepared using the following
process[2]: firstly, a quantity of water is evaporated from the aqueous
dispersion by stirring and heating at $35^{\circ}$C in an Erlenmeyer. The
resulting solution is filtered at $0.5$ $\mu$m to eliminate any aggregates; the mass fraction is estimated by a dry extract at $140^{\circ}$C.
Then, the mass of water removed is replaced by the same mass of the cosolvent.
We note $\kappa$ the ratio between the mass of cosolvent added and the mass of
the resulting dispersion.
The second system is aqueous dispersions of nanolatex particles, stable
without evaporation, provided by Rhodia Recherche, Aubervilliers, France. Two
different types of particles are considered: rigid ones and deformable ones
with two different glass transition temperatures, $T_{g}$. At room
temperature, rigid particles are characterized by $T_{g}=100^{\circ}$C and
soft particles by $T_{g}=0^{\circ}$C; consequently the latter can deform more easily than the rigid ones. The radius of the particles is $15$ nm and the
density of pure dry material of rigid and soft particles are respectively
$1.08\times 10^{3}$ kg/m3 and $1.10\times 10^{3}$ kg/m3. The volume fraction
of both dispersions is $30\%$. The proportion $\phi_{s}$ in soft particles is
defined as $\phi_{s}=V_{s}/(V_{s}+V_{h})$, where $V_{s}$, $V_{h}$ are the
volume of soft and rigid particles, respectively. Binary mixtures are
magnetically stirred for 15 min at room temperature, and then sonicated with a $1$ s on/off pulse interval for $300$ s.
A circular container (inner diameter $\sim 15$ mm, height specified below) is filled with a given amount of dispersion (sample in Figure 1). The contact line of the dispersion is pinned at the upper edge of the wall and remains pinned throughout the drying process. At the final stage, the layer close to the border of the container exhibits a thickness gradient, which results in radial cracks. In contrast, in the center of the container we obtain a layer of approximately constant thickness. In this region, covering about $70\%$ of the total surface area, the evaporation is assumed to be uniform: measurements of crack spacing are performed there. The substrate is a non-porous glass plate, carefully cleaned with pure water then ethanol before being dried in a heat chamber at $100^{\circ}$C.
Figure 1: Setup. Dry or moist air is produced by an air flow from the ambient atmosphere through desiccant or water, respectively, into the scale enclosure. Depending on the humidity measured by the humidity sensor inside the scale, a solenoid valve is actuated to drive the humidity toward the desired value.
In our experiments, the transfer of water in air is limited by diffusion and
therefore controlled by the relative humidity, $RH$, using a homemade PID
controller, at room temperature (Figure 1). The drying kinetics is obtained
using a scale (Sartorius) with a precision of $0.01$ mg. The drying rate is
deduced from mass loss as a function of time (Figure 2). At the final stage of
the drying process, the material is still transparent which allows us to
observe easily the dynamics of crack patterns in the layer. The crack
morphology is recorded with a camera positioned on the top of the sample.
The macroscopic elastic response of the colloidal material is characterized
using the CSM Instruments Micro Indentation testing (MHT). The indenter tip
(Vickers) is driven into the sample by applying an external force, constant in
time. The penetration depth is recorded as a function of time.
Figure 2: Measurements of a sample mass, $m$, during the drying process of a
dispersion of Ludox HS-40: the arrow indicates the time for crack formation.
Inset: mass variations with time, $\frac{dm}{dt}$, calculated from the mass
measurements. Sketch at left: colloidal film during the drying; sketch at
right: solid network saturated with solvent (curvature of the solvent/air
menisci occurs at the evaporation surface during solvent loss).
## 3 Formation of a crack network in the plane of the layer
Different stages take place during the drying process of a colloidal film[4].
In a first stage, the evaporation rate is constant, close to the evaporation
rate of pure solvent, and hence mainly controlled by the external conditions
of relative humidity and temperature in the surroundings. This stage is
usually named Constant Rate Period. During this regime, the density of
particles increases until the formation of a close packing network saturated
with solvent (Figure 2). The boundary conditions imposed by both the interface
of the film with the substrate and the interface of the film with the air
result in a capillary shrinkage at the top layer limited by the adhesion on
the substrate that prevents the material from shrinking[3, 18]. This results
in high tension that progressively builds up in the film.
The maximum capillary pressure in the porous media can reach
$P_{cap}=\frac{a\gamma_{s,a}\cos(\theta)}{r_{p}}$, where $\gamma_{s,a}$ is the
solvent-air surface tension, $\theta$ the liquid/solid contact angle, $r_{p}$
the pore radius and $a$ is a geometrical constant ($\sim 10$)[3]. As a result
the tension in the liquid pore compresses the matrix and induces flow from the
bulk to the interface. In the first stage of the drying process, for the
meniscus to remain at the surface of the porous media, the evaporation rate
$V_{E}$ has to equal the flux of liquid $J_{flow}$ at the surface, in
accordance with the Darcy law:
$J_{flow}=\frac{k}{\eta}\nabla P\mid_{surface}=V_{E}$ (1)
where $k$ is the porous media permeability, $\eta$ is the viscosity of the
liquid and $\nabla P$ the pressure gradient in the liquid pore.
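To fix orders of magnitude, the following short sketch (not from the original study; the pore radius is an assumed fraction of the particle radius and the water/air surface tension is a standard textbook value) evaluates the maximum capillary pressure for a Ludox HS-40 packing.

```python
# Minimal sketch: order-of-magnitude estimate of the maximum capillary
# pressure P_cap = a * gamma * cos(theta) / r_p for a Ludox HS-40 packing.
import math

a = 10.0                   # geometrical constant (~10, from the text)
gamma_sa = 0.072           # water/air surface tension [N/m] (assumed textbook value)
theta = 0.0                # liquid/solid contact angle [rad] (assumed perfect wetting)
r_particle = 9e-9          # Ludox HS-40 particle radius [m]
r_pore = 0.2 * r_particle  # assumed pore radius, a fraction of the particle size

P_cap = a * gamma_sa * math.cos(theta) / r_pore
print(f"maximum capillary pressure ~ {P_cap/1e6:.0f} MPa")  # ~ 400 MPa
```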
When the drying stress reaches a threshold value, cracks form and invade the
material. This takes place at the final stage of the Constant Rate Period, at
time $t_{cracking}$ (see Figure 2). The crack network is formed
hierarchically: the successive generations of cracks are shown in Figure 3.
Finally, cracks delimit adjacent fragments of typical size $\delta$, defined
as the square root of the surface area of the fragment (Figure 3d). At the
final stage (Figure 3d), the pattern is related to the history of the
formation since the first crack is the longest, the second crack generation
connects to the first one and so on… Consequently the temporal order of the
successive crack generations can be approximately reconstructed [1].
Figure 3: Photographs of the formation of a crack pattern. The geometrical
orders of the cracks are shown from left to right: first, second, third and
fourth order. The cracks of lower orders are drawn in black. In $b$ the arrows
highlight the connection of the cracks in white to the existing crack in
black. In $c$ the typical size $\delta$ is defined as the square root of the
surface area of the fragment colored in red. Photograph size $\sim 500\mu$m.
## 4 Effect of the drying rate on the crack patterns
An amount of colloidal dispersion (Ludox-HS40) is deposited in a circular
container and dries at room temperature under a relative humidity $RH$. This
experiment is repeated at different $RH$. Images in Figure 4 show the final
aspect of the films. For films dried at low relative humidity, numerous cracks are observed, corresponding to the formation of successive generations of cracks (Figure 4a). For films dried at higher relative humidity, the number of cracks at the final stage of drying is reduced; consequently, for increasing $RH$ the crack spacing increases, as shown in the graph in Figure 4. Moreover, drying at $RH\sim 95\%$ results in a crack-free film (Figure 4d).
Figure 4: Final crack patterns in layers of thickness $h=2$ mm of Ludox HS-40
dried at different relative humidities, $RH$: (a) $RH=25\%$; (b) $RH=50\%$; (c)
$RH=75\%$; (d) $RH=95\%$ (the layer is crack-free and a de-adhesion process
can be observed between the colloidal material and the border of the circular
container). Graph: ratio between the crack spacing, $\delta$, and the final
layer thickness, $h$, plotted as a function of $RH$ ($T=23^{\circ}$C; the
initial weight deposited in each trough is the same); dashed line is a guide
for the eye.
Under our experimental conditions (the transfer of water in air is limited by
diffusion) and during the Constant Rate Period, the evaporation rate depends
on the relative humidity as follows:
$V_{E}=D_{w}\frac{1}{L}\frac{n_{wsat}}{n_{1}}(1-(RH/100))$ (2)
where $D_{w}=2.6\times 10^{-5}m^{2}.s^{-1}$ is the diffusion coefficient of
water into air, $n_{wsat}$ is the water concentration in air at equilibrium
with the film (practically $n_{wsat}$ is almost the same as for pure water,
$0.91mol/m^{3}$), $n_{1}$ the number of moles of water per unit volume in liquid
water ($55000mol/m^{3}$). The length $L$ corresponds to the typical length of
the vapor pressure gradient [7]. In our geometry, this length is approximately
given by the container radius since the vapor concentration profile is like a
hemispherical cap sitting on the container. The drying timescale is defined
as:
$t_{D}=\frac{h}{V_{E}}$ (3)
Time $t_{D}$ gives an order of magnitude of the time needed to dry completely
a pure solvent film of thickness $h$. Variations of $t_{D}$ from equation (3)
and $t_{cracking}$, obtained from measurements, are plotted as a function of
$RH$ in Figure 5. For high $RH$, the evaporation rate is low and the stress development is slow, resulting in a long period before cracking.
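A minimal numerical sketch of equations (2) and (3) is given below; all constants are those quoted in the text, except the gradient length $L$, which is assumed here to be the container radius ($\sim 7.5$ mm) as suggested above.

```python
# Sketch of equations (2) and (3): evaporation rate V_E and drying
# timescale t_D as a function of relative humidity.
D_w = 2.6e-5      # diffusion coefficient of water vapour in air [m^2/s]
n_wsat = 0.91     # saturated water vapour concentration in air [mol/m^3]
n_1 = 55000.0     # molar concentration of liquid water [mol/m^3]
L = 7.5e-3        # length of the vapour pressure gradient ~ container radius [m] (assumption)
h = 2e-3          # initial layer thickness [m]

def drying_timescale(RH):
    """Return (V_E [m/s], t_D [s]) for a relative humidity RH in percent."""
    V_E = D_w / L * (n_wsat / n_1) * (1.0 - RH / 100.0)
    return V_E, h / V_E

for RH in (25, 50, 75, 95):
    V_E, t_D = drying_timescale(RH)
    print(f"RH = {RH:2d}%  V_E = {V_E:.2e} m/s  t_D = {t_D/3600:.1f} h")
```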
Figure 5: Characteristic drying time, calculated from $RH$, and characteristic cracking time (measurements) as a function of the relative humidity $RH$ for
Ludox HS-40 (initial thickness $h=2$ mm). The black line is the theoretical
drying timescale from equations 2 and 3.
In the following we consider drying conditions at room temperature and $RH\sim
50\%$.
## 5 Effect of the nature of the solvent on the crack patterns
The addition of a cosolvent to the colloidal dispersion of Ludox HS-40
results, during the drying process, in a gel saturated by a binary solvent mixture. The presence of glycerol reduces the number of cracks
(Figure 6 a, b, c, d). Above a critical concentration in glycerol close to
$10$ %, we obtained a crack-free material (Figure 6 d).
Similarly, the addition of glucose increases the crack spacing (Figure 6).
However, glucose does not suppress cracks, as glycerol does for concentrations higher than $10\%$. The crack spacing seems to tend to a maximum value. Our results indicate that there might be chemical effects (hydrogen bonds) between polyols and the silanol groups covering the surface of silica particles. Gulley et al.[9] measured the aggregation rate of silica particles with polyols. They reported that the shorter the polyol molecule, the higher the stabilizing effect. They also remarked that polysaccharides are not stabilizers. They suggested that, owing to their chemical structure, these molecules coat the surface of the colloids differently, until the molecules desorb because of the weakness of the hydrogen bonds and the high pressure between particles.
From the recording of the crack dynamics, we observed that neither the evaporation time $t_{D}$ nor the cracking time $t_{cracking}$ is significantly modified by the presence of the cosolvent (glycerol or glucose).
Figure 6: Final crack patterns in films in the presence of a cosolvent added to the dispersion of Ludox HS-40 (initial thickness $h=2$ mm). The cosolvent is
glycerol in a,b,c,d : (a) Without any additional glycerol; (b) $\kappa=5.1\%$;
(c) $\kappa=8.9\%$; (d) $\kappa=10.2\%$ (the layer is crack-free). The
cosolvent is glucose in a’,b’,c’,d’ : (a’) $\kappa=3.1\%$ ; (b’)
$\kappa=10.4\%$; (c’) $\kappa=14.3\%$; (d’) $\kappa=16.8\%$. Graph: ratio
between the crack spacing, $\delta$, and the final layer thickness, $h$,
plotted as a function of the concentration in glycerol and in glucose (drying
conditions $RH=50\%$ and $T=23^{\circ}$C; the initial weight deposited in each
trough is the same); dashed lines are a guide for the eye.
## 6 Effect of the mechanical properties of the particles network on the
crack patterns
The global mechanical properties of the material can be modified by changing
the mechanical properties of the particles themselves. Final films made of
rigid particles are found to contain a large number of cracks as shown
previously. On the contrary, films made of soft particles may be
homogeneous[15]. Mixtures of rigid and soft particles result in various crack patterns depending on the composition. Images in Figure 7 show the final aspect of films composed with various volume fractions of soft particles.
Above a threshold proportion in soft particles, the final film is crack-free
(Figure 7d).
Figure 7: Final crack patterns in films made of both rigid and soft
nanoparticles (initial thickness $h=0.5$ mm). a, b, c, d: the proportion
$\phi_{s}$ in soft particles: (a) $\phi_{s}=0$; (b) $\phi_{s}=40\%$; (c) $\phi_{s}=50\%$; (d) $\phi_{s}=60\%$ (the layer is crack-free). Graph: ratio
between the crack spacing, $\delta$, and the final layer thickness, $h$,
plotted as a function of the proportion $\phi_{s}$ in soft particles (drying
conditions $RH=50\%$ and $T=23^{\circ}$C; the initial weight deposited in each
trough is the same); dashed line is a guide for the eye.
## 7 Discussion
Crack formation is due to a mismatch between the mechanical stress in the solid and its strength. Indeed, if drying stresses exceed the strength of the porous material, they are released through the formation of cracks. In particular, the drying rate, the nature of the solvent and the mechanical properties of the particle network readily modify the macroscopic response of the solid and hence the critical stress for fracture. The expression of the critical stress derived from the model by
Tirumkudulu et al. [17] is:
$\sigma_{c}\sim\bar{G}^{1/3}\big{(}\frac{\gamma}{h}\big{)}^{2/3}$, where $\gamma$ is the solid/air surface energy and $\bar{G}$ corresponds to the macroscopic response of the solid film, expressed as $\bar{G}\sim\phi_{m}MG$, where $G$ is the shear modulus of the particles, $\phi_{m}$ is the final close-packed volume fraction and $M$ is the coordination number of the network.
* •
The drying rate can affect the organization of the particle network (particularly the coordination number $M$). We suggest that a slow drying process of the porous material results in a structure capable of supporting the mechanical drying stresses more efficiently. Under this assumption, a low drying rate could increase the resistance to cracking. In addition, a slow drying rate tends to
reduce the tension in the liquid near the drying surface of the layer in
accordance with equation (1). Consequently the tendency to crack is reduced as
shown in section 4.
* •
The presence of an additional cosolvent has two consequences for the porous material: a chemical effect and a physical one. The latter effect is due to a combination of the flow driven by the pressure gradient (Darcy law) and diffusion mechanisms (Fick law); in accordance with Scherer's work (1989)[16], this combination of processes affects the drying. Indeed, the pressure gradient in the liquid pore, $\nabla P$, is a key parameter in the development of the tensile stress distribution in the porous material: if the pressure in the liquid pore were uniform, the shrinkage of the porous matrix would be uniform too and the cracking process would be inhibited. Conversely, the steeper $\nabla P$ is, the greater the difference in shrinkage rate between the interior and the exterior of the layer (Figure 8). The addition of the non-volatile cosolvent results in a flattening of the pressure gradient in the liquid pore (Figure 8)[2]. The distribution of the drying stress is then more uniform and consequently cracks are inhibited, as shown in section 5.
Figure 8: Sketch of the pressure gradient in liquid pores.
* •
In the case of a network made of mixtures of rigid and soft particles, the macroscopic response of the material is a function of the shear moduli of the rigid ($G_{r}$) and soft ($G_{s}$) particles: $\bar{G}=f(G_{r},G_{s})$[13]. The stress can be released by internal modification of the structure. The presence of soft particles releases a part of this stress and consequently reduces crack formation[13], as shown in section 6.
Moreover, the presence of an additional cosolvent or of soft particles in the material modifies its mechanical properties. The mechanical properties are usually characterized by (i) the instantaneous response to an applied force and (ii) the time dependence of the stress relaxation of the layer. Orders of
magnitude of both parameters can be investigated using indentation testing.
Case (i) is related to elastic modulus measurement and case (ii) is obtained
by creep measurements. The most common method to measure creep behavior of a
material is to maintain the applied force at a constant maximum value, $F$,
and measure the change in depth of the indenter, $p$, as a function of time.
Figure 9 gives a comparison of creep behaviors for three films.
Modeling the layer by a Kelvin-Voigt two-element model (a purely viscous
damper and purely elastic spring connected in parallel), the creep response to
an external force $F$ could be expressed as:
$\delta^{2}(t)=\frac{\pi\cot(\alpha)}{2}\frac{F}{E}\left[1-e^{-t/\tau}\right]$ (4)
where $\alpha$ is the indenter cone semi-angle, $E$ and $\tau$ are fit
parameters representing an elastic modulus of the material (corresponding to
the spring element), and a constant that quantifies the time dependent
property of the material, respectively. The values for the two parameters are
given in Table 1.
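As an illustration, the following sketch evaluates the creep law of equation (4) with the fitted parameters of Table 1 ($\alpha=68^{\circ}$, $F=0.05$ N); the evaluation times are arbitrary choices, not measurements.

```python
# Sketch of the Kelvin-Voigt creep response, equation (4):
# delta(t)^2 = (pi*cot(alpha)/2) * (F/E) * (1 - exp(-t/tau)).
import math

alpha = math.radians(68.0)   # indenter cone semi-angle
F = 0.05                     # applied force [N]

def penetration_depth(t, E, tau):
    """Indenter depth delta(t) [m] for an elastic modulus E [Pa] and time constant tau [s]."""
    delta_sq = (math.pi / (2.0 * math.tan(alpha))) * (F / E) * (1.0 - math.exp(-t / tau))
    return math.sqrt(delta_sq)

# Fitted parameters from Table 1 (E converted to Pa, tau in s).
systems = {
    "50% soft / 50% rigid latex": (5e6, 1210.0),
    "silica in water-glycerol (90/10)": (40e6, 825.0),
    "silica in pure water": (100e6, 120.0),
}
for name, (E, tau) in systems.items():
    print(f"{name}: delta(3*tau) ~ {penetration_depth(3 * tau, E, tau) * 1e6:.1f} um")
```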
According to Figure 9, a layer of an aqueous suspension made of rigid particles is stiffer than a mixture of soft and hard particles; correspondingly, the relaxation time of the softer materials under an external force is larger.
The characteristic time $\tau$ marks the transition from the viscous to the
elastic regime. The larger this relaxation time is, the longer the matrix can reorganize to relax stresses. This process competes with relaxation by the formation of cracks. As a result, from the data in Table 1, we can argue that the addition of glycerol or of soft particles enhances stress relaxation, leading to fewer cracks in the dry material.
Figure 9: Creep comparison of different dried layers: the change in depth, $p$, of the indenter tip (Vickers tip) was measured as a function of time from $F=50$ mN indents reached with a $50$ mN/min loading rate. Three systems are investigated: silica Ludox HS-40, silica Ludox HS-40 with an added cosolvent content, and a binary mixture of soft and rigid nanolatex particles. For each layer the drying process was carried out at $RH=50\%$. Insets: sketch of the indentation testing and print left at the surface of the layer after removing the indenter tip (bar=$25\mu$m).
System | $E$ (MPa) | $\tau$ (s)
---|---|---
$50\%$ soft / $50\%$ rigid | $5\pm 2$ | $1210$
silica particles in water-glycerol mixture ($90$%/$10$%) | $40\pm 8$ | $825$
silica particles in pure water | $100\pm 110$ | $120$
Table 1: Creep behavior modeled by a Kelvin-Voigt two-element model: $\delta^{2}(t)=\frac{\pi\cot(\alpha)}{2}\frac{F}{E}\left[1-e^{-t/\tau}\right]$, where $\alpha=68^{\circ}$ and $F=0.05$ N.
## 8 Conclusion
During the drying of colloidal suspensions, cracks propagate in the material due to two antagonistic mechanisms: the retraction induced by the loss of solvent, and the adhesion of the layer on the substrate that limits this retraction. Via the pressure gradient in the liquid pores, these two mechanisms are responsible for the build-up of a tensile stress in the material.
In this article, we point out that the resulting crack patterns are affected by various parameters and we studied three of them: the drying rate through the relative humidity, the effect of the solvent volatility, and the presence of soft particles among rigid ones. Using silica nanoparticles in water, a strong dependence of the crack spacing on the drying kinetics has been observed. Thus, a control of the drying conditions (relative humidity, temperature) is required to ensure reproducible observations. The volatility of the solvent has been investigated by adding non-volatile compounds (polyols). As water evaporates, Fickian mixing fluxes emerge in the porous medium, resulting in a flattened pressure gradient and an increase of the crack spacing. Moreover, the crack spacing is quantitatively affected by the non-volatile compound. Molecular structures are believed to modify silanol-polyol interactions. Finally, we also observed a reduction of cracks in mixtures of soft and hard particles: soft particles can deform during the drying, storing the mechanical energy.
For both of the latter systems, we checked that the drying kinetics is not affected by the addition of the extra compound, but the characteristic relaxation time is significantly increased. This explains a reduction of the tensile stress, leading to a decrease in the number of cracks.
Further investigations of drying in the presence of molecules in the solvent are necessary for a better understanding of the final state of the materials. In particular, it would be interesting to study the influence of molecular structures in order to tune the adsorption-desorption energy.
## 9 Acknowledgment
The authors thank Mikolaj Swider for his participation in the relative humidity experiment and Alban Aubertin for designing the PID controller.
## References
* [1] S. Bohn, L. Pauchard, and Y. Couder. Hierarchical crack pattern as formed by successive domain divisions. Phys. Rev. E, 71:046214, 2005.
* [2] F. Boulogne, L. Pauchard, and F. Giorgiutti-Dauphiné. Effect of a non-volatile cosolvent on crack patterns induced by desiccation of a colloidal gel. Soft Matter, 8:8505–8510, 2012.
* [3] C.J Brinker and G.W Scherer. Sol-Gel Science : The Physics and Chemistry of Sol-Gel Processing. Academic Press, 1990.
* [4] P. Coussot. Scaling approach of the convective drying of a porous medium. Eur. Phys. J. B, 15, 2000.
* [5] T. E. Daubert and R. P. Danner. Physical and thermodynamic properties of pure chemicals: data compilation. Hemisphere Publishing Corp New York, 1989.
* [6] G. D’Errico, O. Ortona, F. Capuano, and V. Vitagliano. Diffusion coefficients for the binary system glycerol + water at 25 °c. a velocity correlation study. Journal of Chemical & Engineering Data, 49:1665–1670, 2004.
* [7] E. R. Dufresne, E. I. Corwin, N. A. Greenblatt, J. Ashmore, D. Y. Wang, A. D. Dinsmore, J. X. Cheng, X. S. Xie, J. W. Hutchinson, and D. A. Weitz. Flow and fracture in drying nanoparticle suspensions. Phys. Rev. Lett., 91:224501, 2003.
* [8] A. Groisman and E. Kaplan. An experimental study of cracking induced by desiccation. EPL (Europhysics Letters), 25:415, 1994.
* [9] G. Gulley and J. Martin. Stabilization of colloidal silica using polyols. Journal of Colloid and Interface Science, 241:340–345, 2001.
* [10] S. Kowalski and K. Kulczyński. Reduction of fractures in dried clay-like materials due to specific surfactants. Chemical Engineering Research and Design, pages –, 2012.
* [11] V. Lazarus and L. Pauchard. From craquelures to spiral crack patterns: influence of layer thickness on the crack patterns induced by desiccation. Soft Matter, 7:2552, 2011.
* [12] W.B Neely and G.E Blau. Environmental exposure from chemicals. Volume 1. CRC Press, Inc., Boca Raton, FL, 1985.
* [13] L. Pauchard, B. Abou, and K. Sekimoto. Influence of mechanical properties of nanoparticles on macrocrack formation. Langmuir, 25:6672–6677, 2009.
* [14] L. Pauchard, F. Parisse, and C. Allain. Influence of salt content on crack patterns formed through colloidal suspension desiccation. Phys. Rev. E, 59:3737–3740, 1999.
* [15] T. Provder, M. Winnik, and M. Urban. Film Formation in waterborne coatings. ACS Symposium, 1996.
* [16] G. Scherer. Drying gels vii. diffusion during drying. Journal of Non-Crystalline Solids, 107:135–148, 1989.
* [17] M.S Tirumkudulu and W.B Russel. Cracking in drying latex films. Langmuir, 21:4938–4948, 2005.
* [18] P. Xu, A. S. Mujumdar, and B. Yu. Drying-induced cracks in thin film fabricated from colloidal dispersions. Drying Technology: An International Journal, 27:636–652, 2009.
|
arxiv-papers
| 2013-02-14T08:46:28 |
2024-09-04T02:49:41.721304
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Fran\\c{c}ois Boulogne and Fr\\'ed\\'erique Giorgiutti-Dauphin\\'e and\n Ludovic Pauchard",
"submitter": "Fran\\c{c}ois Boulogne",
"url": "https://arxiv.org/abs/1302.3340"
}
|
1302.3356
|
# Characterization of affine automorphisms and ortho-order automorphisms of
quantum probabilistic maps
Zhaofang Bai School of Mathematical Sciences, Xiamen University, Xiamen,
361000, P. R. China. and Shuanping Du∗ School of Mathematical Sciences,
Xiamen University, Xiamen, 361000, P. R. China. [email protected]
###### Abstract.
In quantum mechanics, it is often important for the representation of a quantum system to study the structure-preserving bijective maps of the system.
Such maps are also called isomorphisms or automorphisms. In this note, using
the Uhlhorn-type of Wigner’s theorem, we characterize all affine automorphisms
and ortho-order automorphisms of quantum probabilistic maps.
PACS numbers: 11.30.-j, 03.65.-w, 02.10.-v.
2000 Mathematical Subject Classification. Primary: 81Q10, 47N50.
Key words and phrases. Density operator, affine automorphism, ortho-order
automorphism
∗ Corresponding author
## 1\. Introduction
In quantum physics, of particular importance for the representation of a physical system and its symmetries are the structure-preserving bijective maps of the system. Such maps are also called isomorphisms or automorphisms. Automorphisms or isomorphisms are frequently amenable to mathematical formulation and can be exploited to simplify many physical problems. By now, they have been extensively studied for different quantum systems, and systematic theories have been achieved [10]. Recently, some of the deepest results in this field have been obtained by Lajos Molnár in a series of articles [13, 14, 15, 16, 17, 18]. An overview of recent results can be found in [5, 19].
Let us now fix the notations and set the problem in mathematical terms. Let
$H$ be a separable Hilbert space with dimension at least 3 and inner product
$<.>$. Let ${\mathcal{B}_{1}}(H)$ be the complex Banach space of the trace
class operators on $H$, with trace $tr(T)$ and trace norm $\|T\|_{1}=tr(|T|)$,
$|T|=\sqrt{T^{*}T}$, $T\in{\mathcal{B}_{1}}(H)$. The self-adjoint part of
${\mathcal{B}_{1}}(H)$ is denoted by ${\mathcal{B}_{1r}}(H)$ which is a real
Banach space. By ${\mathcal{B}_{1r}}^{+}(H)$ we denote the positive cone of
${\mathcal{B}_{1r}}(H)$. As usual, the unit ball of
${\mathcal{B}_{1r}}^{+}(H)$ is denoted by
$S_{1}(H)=\\{T\in{\mathcal{B}_{1r}}^{+}(H):tr(T)=\|T\|_{1}\leq 1\\}$, the
surface of $S_{1}(H)$ by $V=\\{T\in{\mathcal{B}_{1r}}^{+}(H):tr(T)=1\\}$. With
reference to the quantum physical applications, ${\mathcal{B}_{1r}}(H)$ is
called state space, the elements of ${\mathcal{B}_{1r}}^{+}(H)$ and $V$ are
called density operators and states, respectively (see [7], [10]). Naturally,
$S_{1}(H)$ can be equipped with several algebraic operations. Clearly,
$S_{1}(H)$ is a convex set, so one can consider the convex combinations on it.
Furthermore, one can define a partial addition on it. Namely, if $T,S\in
S_{1}(H)$ and $T+S\in S_{1}(H)$, then one can set $T\oplus S=T+S$. Moreover,
as for a multiplicative operation on $S_{1}(H)$, note that in general, $T,S\in
S_{1}(H)$ does not imply that $TS\in S_{1}(H)$. However, we always have $TST\in S_{1}(H)$, since $TST\in{\mathcal{B}_{1r}}^{+}(H)$ and $tr(TST)=\|TST\|_{1}\leq\|T\|_{1}\|S\|_{1}\|T\|_{1}\leq 1$. This multiplication is a nonassociative operation, sometimes called the Jordan triple product, which also appears in infinite dimensional holomorphy as well as in connection with the geometrical properties of $C^{*}$-algebras. Finally, there
is a natural partial order $\leq$ on $S_{1}(H)$ which is induced by the usual
order between selfadjoint operators on $H$. So, for any $T,S\in S_{1}(H)$ we
write $T\leq S$ if and only if $<Tx,x>\leq<Sx,x>$ holds for every $x\in H$.
Physically, the most interesting order may be spectral order (see [20]). The
detailed definition is as follows. Let $T,S\in S_{1}(H)$ and consider their
spectral measures $E_{T},E_{S}$ defined on the Borel subsets of
${\mathbb{R}}$. We write
$T\preceq S\mbox{\hskip 7.22743ptif and only if \hskip
7.22743pt}E_{T}(-\infty,t]\geq E_{S}(-\infty,t]\hskip
7.22743pt(t\in{\mathbb{R}}).$
The spectral order has a natural interpretation in quantum mechanics. In fact,
the spectral projection $E_{T}(-\infty,t]$ represents the probability that a
measurement of $T$ detects its value in the interval $(-\infty,t]$. Hence for
$T,S\in S_{1}(H)$ the relation $T\preceq S$ means that for every $t\in[0,1]$ we have $E_{T}(-\infty,t]\geq E_{S}(-\infty,t]$ in each state of the system, i.e., the corresponding distribution functions are pointwise ordered.
Because of the importance of $S_{1}(H)$, it is a natural problem to study the
automorphisms of the mentioned structures. The aim of this paper is to
contribute to these investigations. In [2], the automorphisms of $S_{1}(H)$
with the partial addition and Jordan triple product were characterized. In
this paper, we are aimed to characterize the affine automorphisms and ortho-
order automorphisms of $S_{1}(H)$. The core of the proof is to reduce the
problem to using the Uhlhorn-type of Wigner’s theorem (see [22]).
Now, let us give the concrete definitions of affine automorphism and ortho-
order automorphism. A bijective map $\Phi:S_{1}(H)\rightarrow S_{1}(H)$ is an
affine automorphism if
$\Phi(\lambda T+(1-\lambda)S)=\lambda\Phi(T)+(1-\lambda)\Phi(S)\hskip
7.22743pt\mbox{for all}\hskip 7.22743ptT,S\in S_{1}(H),0\leq\lambda\leq 1.$
A bijective map $\Phi:S_{1}(H)\rightarrow S_{1}(H)$ is called an ortho-order
automorphism if
$\displaystyle{\rm(i)}$ $\displaystyle\hskip
7.22743ptTS=0\Leftrightarrow\Phi(T)\Phi(S)=0\hskip 7.22743pt\mbox{for
all}\hskip 7.22743ptT,S\in S_{1}(H),$ $\displaystyle{\rm(ii)}$
$\displaystyle\hskip 7.22743ptT\preceq
S\Leftrightarrow\Phi(T)\preceq\Phi(S)\hskip 7.22743pt\mbox{for all}\hskip
7.22743ptT,S\in S_{1}(H).$
Here, it is worth mentioning that affine automorphisms have an intimate relationship with the so-called operations on ${\mathcal{B}}_{1}(H)$ (see [6, 9]), a fundamental notion in quantum theory. Recall that an operation
$\Phi$ is a completely positive linear mapping on ${\mathcal{B}}_{1}(H)$ such
that $0\leq tr(\Phi(T))\leq 1$ for every $T\in V$. An operation represents a
probabilistic state transformation. Namely, if $\Phi$ is applied on an input
state $T$, then the state transformation $T\rightarrow\Phi(T)$ occurs with the
probability $tr(\Phi(T))$, in which case the output state is
$\frac{\Phi(T)}{tr(\Phi(T))}$. By the Kraus representation theorem (see [3,
9]), $\Phi$ is an operation if and only if there exists a countable set of
bounded linear operators $\\{A_{k}\\}$ such that $\sum_{k}A_{k}^{*}A_{k}\leq
I$ and $\Phi(T)=\sum_{k}A_{k}TA_{k}^{*}$ holds for all
$T\in{\mathcal{B}}_{1}(H)$. This is very important in describing dynamics, measurements, quantum channels, quantum interactions, quantum error-correcting codes, etc. [21]. Since an operation $\Phi$ is linear and $0\leq tr(\Phi(T))\leq 1$ for every $T\in V$, it is evident that such a $\Phi$ maps $S_{1}(H)$ into $S_{1}(H)$ and satisfies the affine condition mentioned in the definition of an affine automorphism. Thus operations on ${\mathcal{B}}_{1}(H)$ can be reduced to maps on $S_{1}(H)$. Furthermore, if the restriction of an operation to $S_{1}(H)$ is bijective, an explicit description can be given by our Theorem 2.1, even without the complete positivity assumption.
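As a hedged numerical illustration (a finite-dimensional toy model with $\dim H=3$ and arbitrarily chosen Kraus operators, not taken from the paper), the sketch below builds an operation $\Phi(T)=\sum_{k}A_{k}TA_{k}^{*}$ with $\sum_{k}A_{k}^{*}A_{k}\leq I$ and checks that it sends an element of $S_{1}(H)$ to an element of $S_{1}(H)$.

```python
# Toy check: an operation Phi(T) = sum_k A_k T A_k^* with sum_k A_k^* A_k <= I
# maps positive operators of trace <= 1 to positive operators of trace <= 1.
import numpy as np

rng = np.random.default_rng(0)

# Two Kraus operators, rescaled so that A_1^* A_1 + A_2^* A_2 <= I.
A = [rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3)) for _ in range(2)]
total = sum(a.conj().T @ a for a in A)
scale = np.sqrt(np.linalg.eigvalsh(total).max())
A = [a / scale for a in A]

def phi(T):
    return sum(a @ T @ a.conj().T for a in A)

# A random density operator T in S_1(H): positive with trace 1.
B = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
T = B @ B.conj().T
T /= np.trace(T).real

out = phi(T)
print("output positive:", bool(np.all(np.linalg.eigvalsh(out) >= -1e-12)))
print("output trace   :", np.trace(out).real)  # <= 1, the "success probability"
```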
## 2\. Affine automorphisms on $S_{1}(H)$
In this section, we present a structure theorem of affine automorphisms on
$S_{1}(H)$. The following are the main results of this section.
Theorem 2.1. If $\Phi:S_{1}(H)\rightarrow S_{1}(H)$ is an affine automorphism,
then there exists an either unitary or antiunitary operator $U$ on $H$ such
that $\Phi(T)=UTU^{*}$ for all $T\in S_{1}(H)$.
Corollary 2.2. If $\dim H<+\infty$ and $\Phi$ is a $\|.\|_{1}$-isometric
automorphism of $S_{1}(H)$ (which is a bijection of $S_{1}(H)$ and satisfies
$\|T-S\|_{1}=\|\Phi(T)-\Phi(S)\|_{1}$ for all $T,S\in S_{1}(H)$), then there
exists an either unitary or antiunitary operator $U$ on $H$ such that
$\Phi(T)=UTU^{*}$ for all $T\in S_{1}(H)$.
We remark that, in the above two results, the bijectivity assumption is
indispensable to obtain a nice form of $\Phi$. To show this, an example originating from the Kraus representation theorem will be given after the proofs of Theorem 2.1 and Corollary 2.2.
Before the proof of Theorem 2.1, let us recall the general structure of
density operators (see for instance [1]). For $T\in{\mathcal{B}}_{1r}^{+}(H)$,
there exists an orthonormal basis $\\{e_{n}\\}_{n\in{\mathbb{N}}}$ of $H$ and
numbers $\lambda_{n}>0$ such that
$T=\sum_{n=1}^{+\infty}\lambda_{n}P_{n}$
or
$Tx=\sum_{n=1}^{+\infty}\lambda_{n}<x,e_{n}>e_{n},\forall x\in H\hskip
7.22743pt\mbox{and}\hskip
7.22743pt0<tr(T)=\sum_{n=1}^{+\infty}\lambda_{n}<+\infty,$
where $P_{n}$ is the one dimensional projection onto the eigenspace spanned by
the eigenvector $e_{n}$. Let ${\mathcal{P}}_{1}(H)$ stand for the set of all
one dimensional projections on $H$. With reference to the quantum physical
applications, the elements of ${\mathcal{P}}_{1}(H)$ are called pure states.
Now, we are in a position to prove our first theorem.
Proof of Theorem 2.1. We will finish the proof by checking 3 claims.
Claim 1. $\Phi$ is continuous in the trace norm.
To see the continuity of $\Phi$, consider the affine transformation
$\Psi:S_{1}(H)\longmapsto{\mathcal{B}_{1r}}(H)$ defined by
$\Psi(T)=\Phi(T)-\Phi(0)$ for every $T\in S_{1}(H)$. It is easy to see that
$\Psi$ is injective. In the following, we will prove $\Psi$ has a unique
linear extension from $S_{1}(H)$ to ${\mathcal{B}}_{1r}(H)$. Since $\Psi$ is
affine and $\Psi(0)=0$, for each $\lambda\in[0,1]$ and every $T\in S_{1}(H)$,
$\Psi(\lambda T)=\lambda\Psi(T)$. For $T\in{\mathcal{B}}_{1r}^{+}(H)$, a
natural extension of $\Psi$ from $S_{1}(H)$ to ${\mathcal{B}}_{1r}^{+}(H)$ is
to define
$\widetilde{\Psi}(T)=\|T\|_{1}\Psi(\frac{T}{\|T\|_{1}}).$
Then for any $\lambda\geq 0$, one gets $\widetilde{\Psi}(\lambda
T)=\lambda\widetilde{\Psi}(T)$, which is the positive homogeneity. For
$T,S\in{\mathcal{B}}_{1r}^{+}(H)$, suppose
$\widetilde{\Psi}(T)=\widetilde{\Psi}(S)$, without loss of generality, assume
further $\|T\|_{1}\leq\|S\|_{1}$. Then
$\widetilde{\Psi}(\frac{T}{\|S\|_{1}})=\widetilde{\Psi}(\frac{S}{\|S\|_{1}}),\frac{T}{\|S\|_{1}},\frac{S}{\|S\|_{1}}\in
S_{1}(H)$. By the injectivity of $\Psi$, $T=S$ and thus $\widetilde{\Psi}$ is
injective. For $T_{1},T_{2}\in{\mathcal{B}}_{1r}^{+}(H)$, we can rewrite
$T_{1}+T_{2}$ in the form
$T_{1}+T_{2}=(\|T_{1}\|_{1}+\|T_{2}\|_{1})(\frac{\|T_{1}\|_{1}}{\|T_{1}\|_{1}+\|T_{2}\|_{1}}\frac{T_{1}}{\|T_{1}\|_{1}}+\frac{\|T_{2}\|_{1}}{\|T_{1}\|_{1}+\|T_{2}\|_{1}}\frac{T_{2}}{\|T_{2}\|_{1}}).$
The positive homogeneity of $\widetilde{\Psi}$ and the affine property of
$\Psi$ yield the additivity of $\widetilde{\Psi}$, that is,
$\widetilde{\Psi}(T_{1}+T_{2})=\widetilde{\Psi}(T_{1})+\widetilde{\Psi}(T_{2})$.
Next for $T\in{\mathcal{B}}_{1r}(H)$, write $T=T^{+}-T^{-}$, where
$T^{+}=\frac{1}{2}(|T|+T),T^{-}=\frac{1}{2}(|T|-T),|T|=(T^{*}T)^{\frac{1}{2}}$.
Let
$\widehat{\Psi}(T)=\widetilde{\Psi}(T^{+})-\widetilde{\Psi}(T^{-}).$
Also if $T=T_{1}-T_{2}$ for some other
$T_{1},T_{2}\in{\mathcal{B}}_{1r}^{+}(H)$, then $T^{+}+T_{2}=T^{-}+T_{1}$, by
the additivity of $\widetilde{\Psi}$,
$\widetilde{\Psi}(T^{+})-\widetilde{\Psi}(T^{-})=\widetilde{\Psi}(T_{1})-\widetilde{\Psi}(T_{2})$,
which shows $\widehat{\Psi}$ is well defined. Furthermore, for
$T\in{\mathcal{B}}_{1r}(H)$, it is easy to see
$\widehat{\Psi}(-T)=-\widehat{\Psi}(T)$, combining the homogeneity of
$\widetilde{\Psi}$ over non-negative real number, we know $\widehat{\Psi}$ is
linear. Assume $\widehat{\Psi}(T)=0$, from the definition of $\widehat{\Psi}$,
$\widehat{\Psi}(T^{+})=\widehat{\Psi}(T^{-})$, i.e.,
$\widetilde{\Psi}(T^{+})=\widetilde{\Psi}(T^{-})$. Now, the injectivity of
$\widetilde{\Psi}$ implies $T^{+}=T^{-}$, so $T=0$ and $\widehat{\Psi}$ is
injective. If $\Gamma:{\mathcal{B}}_{1r}(H)\rightarrow{\mathcal{B}}_{1r}(H)$
is another linear map which extends $\Psi$, then for any
$T\in{\mathcal{B}}_{1r}(H)$,
$\displaystyle\Gamma(T)$
$\displaystyle=\Gamma(T^{+}-T^{-})=\Gamma(T^{+})-\Gamma(T^{-})$
$\displaystyle=\|T^{+}\|_{1}\Gamma(\frac{T^{+}}{\|T^{+}\|_{1}})-\|T^{-}\|_{1}\Gamma(\frac{T^{-}}{\|T^{-}\|_{1}})$
$\displaystyle=\|T^{+}\|_{1}\Psi(\frac{T^{+}}{\|T^{+}\|_{1}})-\|T^{-}\|_{1}\Psi(\frac{T^{-}}{\|T^{-}\|_{1}})$
$\displaystyle=\widehat{\Psi}(T^{+})-{\widehat{\Psi}}(T^{-})={\widehat{\Psi}}(T).$
This shows the extension is unique, as desired.
Now, ${\widehat{\Psi}}:{\mathcal{B}}_{1r}(H)\rightarrow{\mathcal{B}}_{1r}(H)$
is a linear injection. We assert that ${\widehat{\Psi}}$ is continuous in the
trace norm $\|.\|_{1}$. For any $T\in S_{1}(H)$, clearly,
$\|{\widehat{\Psi}}(T)\|_{1}=\|\Phi(T)-\Phi(0)\|_{1}\leq 2$. For arbitrary
$T\in{\mathcal{B}}_{1r}(H),\|T\|_{1}\leq 1$, it is easy to see $T^{+}\in
S_{1}(H),T^{-}\in S_{1}(H)$. Thus
$\|{\widehat{\Psi}}(T)\|_{1}=\|{\widehat{\Psi}}(T^{+})-{\widehat{\Psi}}(T^{-})\|_{1}\leq\|{\widehat{\Psi}}(T^{+})\|_{1}+\|{\widehat{\Psi}}(T^{-})\|_{1}\leq
4$. It follows that ${\widehat{\Psi}}$ is bounded on the unit ball of
${\mathcal{B}}_{1r}(H)$, hence ${\widehat{\Psi}}$ is continuous. Note that
$\Psi$ is the restriction of ${\widehat{\Psi}}$ on $S_{1}(H)$, therefore
$\Psi$ is continuous and so $\Phi$ is continuous on $S_{1}(H)$, as desired.
Claim 2. $\Phi(0)=0$, $\Phi({\mathcal{P}}_{1}(H))={\mathcal{P}}_{1}(H)$.
Clearly, $\Phi$ preserves the extreme points of $S_{1}(H)$ which are exactly
the one dimensional projections and zero in $S_{1}(H)$ (see [2, Lemma 1]).
Since $\Phi^{-1}$ has the same properties as $\Phi$,
$\Phi({\mathcal{P}}_{1}(H)\cup\\{0\\})={\mathcal{P}}_{1}(H)\cup\\{0\\}$. We
claim that $\Phi(0)=0$. Assume on the contrary, $\Phi(0)\neq 0$, then there
exists one dimensional projection $P$ such that $\Phi(P)=0$. Note that for
$P_{1},P_{2}\in{\mathcal{P}}_{1}(H)$,
$\|P_{1}-P_{2}\|_{1}=2\sqrt{1-tr(P_{1}P_{2})}$. Therefore we can choose a
sequence $\\{P_{n}\\}_{n=1}^{\infty}$ in ${\mathcal{P}}_{1}(H)$ such that
$\|P_{n}-P\|_{1}\rightarrow 0\ (n\rightarrow+\infty)$. By Claim 1, $\|\Phi(P_{n})-\Phi(P)\|_{1}\rightarrow 0\ (n\rightarrow+\infty)$; but each $\Phi(P_{n})$ is a rank one projection and $\Phi(P)=0$, so $\|\Phi(P_{n})-\Phi(P)\|_{1}=1$ for every $n$, a contradiction. This tells us $\Phi(0)=0$ and so
$\Phi({\mathcal{P}}_{1}(H))={\mathcal{P}}_{1}(H)$.
Claim 3. There exists an either unitary or antiunitary operator $U$ on $H$
such that $\Phi(T)=UTU^{*}$ for all $T\in S_{1}(H)$.
From Claim 2, $\Phi=\Psi$, so we denote $\widehat{\Psi}=\widehat{\Phi}$. From
the proof of Claim 1, one easily gets that $\widehat{\Phi}$ is also surjective. Thus $\Phi$ has a unique positive linear bijective extension from $S_{1}(H)$ to ${\mathcal{B}}_{1r}(H)$. In addition, since $\Phi^{-1}$ has the same
properties as $\Phi$, a direct computation shows
$\widehat{\Phi^{-1}}={\widehat{\Phi}}^{-1}$.
In the following, we will prove ${\widehat{\Phi}}$ is trace norm preserving.
Firstly, it will be shown that ${\widehat{\Phi}}$ is trace preserving, i.e.
$tr(T)=tr({\widehat{\Phi}}(T))$ for every $T\in{\mathcal{B}}_{1r}(H)$. Assume
$T\in{\mathcal{B}}_{1r}^{+}(H)$, and
$T=\lambda_{1}P_{1}+\lambda_{2}P_{2}+\cdots+\lambda_{n}P_{n}$, where
$\\{P_{i}\\}_{i=1}^{n}$ are mutually orthogonal one dimensional projections,
$\lambda_{i}>0,i=1,2,\cdots,n$. Then
$\displaystyle{\widehat{\Phi}}(T)$
$\displaystyle=\lambda_{1}{\widehat{\Phi}}(P_{1})+\lambda_{2}{\widehat{\Phi}}(P_{2})+\cdots+\lambda_{n}{\widehat{\Phi}}(P_{n})$
$\displaystyle=\lambda_{1}\Phi(P_{1})+\lambda_{2}\Phi(P_{2})+\cdots+\lambda_{n}\Phi(P_{n}).$
By Claim 2, we can obtain
$tr(T)=\Sigma_{i=1}^{n}\lambda_{i}=tr({\widehat{\Phi}}(T))$. For any
$T\in{\mathcal{B}}_{1r}^{+}(H)$, by the spectral theorem of positive
operators, there exists a monotone increasing sequence
$\\{T_{n}=\Sigma_{i=1}^{n}\lambda_{i}P_{i}\\}_{n=1}^{\infty}$ such that
$\|T_{n}-T\|_{1}=tr(T-T_{n})=tr(T)-tr(T_{n})\rightarrow
0(n\rightarrow\infty)$, where $\\{P_{i}\\}_{i=1}^{n}$ are mutually orthogonal
one dimensional projections, $\lambda_{i}>0,i=1,2,\cdots,n$. Since
${\widehat{\Phi}}$ is positive preserving and continuous,
$\\{{\widehat{\Phi}}(T_{n})\\}_{n=1}^{\infty}$ is monotone increasing and
$\|{\widehat{\Phi}}(T_{n})-{\widehat{\Phi}}(T)\|_{1}=tr({\widehat{\Phi}}(T))-tr({\widehat{\Phi}}(T_{n}))\rightarrow
0(n\rightarrow\infty)$. Note that $tr({\widehat{\Phi}}(T_{n}))=tr(T_{n})$, so
for every $T\in{\mathcal{B}}_{1r}^{+}(H),tr(T)=tr({\widehat{\Phi}}(T))$. For
any $T\in{\mathcal{B}}_{1r}(H)$,
$tr({\widehat{\Phi}}(T))=tr({\widehat{\Phi}}(T^{+}))-tr({\widehat{\Phi}}(T^{-}))=tr(T^{+})-tr(T^{-})=tr(T),$
So ${\widehat{\Phi}}:{\mathcal{B}}_{1r}(H)\rightarrow{\mathcal{B}}_{1r}(H)$ is
positive and trace preserving.
Next, we will show that ${\widehat{\Phi}}$ preserves the trace norm. In fact,
for any $T\in{\mathcal{B}}_{1r}(H)$, we have
$\displaystyle\|{\widehat{\Phi}}(T)\|_{1}$
$\displaystyle=\|{\widehat{\Phi}}(T^{+}-T^{-})\|_{1}=\|{\widehat{\Phi}}(T^{+})-{\widehat{\Phi}}(T^{-})\|_{1}$
$\displaystyle\leq\|{\widehat{\Phi}}(T^{+})\|_{1}+\|{\widehat{\Phi}}(T^{-})\|_{1}=tr({\widehat{\Phi}}(T^{+}))+tr({\widehat{\Phi}}(T^{-}))$
$\displaystyle=tr(T^{+})+tr(T^{-})=tr(T^{+}+T^{-})=tr(|T|)=\|T\|_{1}.$
So ${\widehat{\Phi}}:{\mathcal{B}}_{1r}(H)\rightarrow{\mathcal{B}}_{1r}(H)$ is
contractive, i.e., for
$T\in{\mathcal{B}}_{1r}(H),\|{\widehat{\Phi}}(T)\|_{1}\leq\|T\|_{1}$. Since
${\widehat{\Phi}}^{-1}$ has the same properties as ${\widehat{\Phi}}$, we have
$\|{\widehat{\Phi}}(T)\|_{1}\geq\|T\|_{1}$ and thus
$\|{\widehat{\Phi}}(T)\|_{1}=\|T\|_{1}$, that is ${\widehat{\Phi}}$ is a
$\|\cdot\|_{1}$-isometry of ${\mathcal{B}}_{1r}(H)$.
Note that, for $P,Q\in{\mathcal{P}}_{1}(H),\|P-Q\|_{1}=2\sqrt{1-tr(PQ)}$. Thus
$PQ=0\Leftrightarrow\|P-Q\|_{1}=2$. Since ${\widehat{\Phi}}$ is trace norm
preserving, we have
$PQ=0\Leftrightarrow{\widehat{\Phi}}(P){\widehat{\Phi}}(Q)=0$. By Claim 2,
${\widehat{\Phi}}|_{{\mathcal{P}}_{1}(H)}:{\mathcal{P}}_{1}(H)\rightarrow{\mathcal{P}}_{1}(H)$
is a bijection with the property
$PQ=0\Leftrightarrow{\widehat{\Phi}}(P){\widehat{\Phi}}(Q)=0,P,Q\in{\mathcal{P}}_{1}(H)$.
Using the well-known Uhlhorn-type of Wigner’s theorem (see [22]), we have
${\widehat{\Phi}}(P)=UPU^{*}(P\in{\mathcal{P}}_{1}(H))$ for some unitary or
antiunitary operator $U$ on $H$. By the spectral theorem of selfadjoint
operators and the continuity of ${\widehat{\Phi}}$, for all
$T\in{\mathcal{B}}_{1r}(H)$, ${\widehat{\Phi}}(T)=UTU^{*}$, therefore
$\Phi(T)=UTU^{*}$ for all $T\in S_{1}(H)$, as desired.
Based on Theorem 2.1, we can prove Corollary 2.2.
Proof of Corollary 2.2. Firstly, we recall a nice result of Mankiewicz, namely [11, Theorem 5], which states that a bijective isometry between convex sets with nonempty interiors in normed linear spaces can be uniquely extended to a bijective affine isometry between the whole spaces. Clearly, in the finite dimensional case, the convex set $S_{1}(H)$ has nonempty interior in the normed linear space ${\mathcal{B}_{1r}}(H)$ (in fact, the interior of $S_{1}(H)$ consists of all invertible positive operators of trace strictly less than 1). Consequently, applying the result of Mankiewicz, we know that $\Phi$ is automatically affine. Combining this with Theorem 2.1, we get the desired conclusion.
Remark 2.3. Now, in order to illustrate that the bijectivity assumption is indispensable in Theorem 2.1 and Corollary 2.2, we give an example; the idea comes from the Kraus representation theorem (see [9]): Suppose that $H$ is a
complex separable infinite dimensional Hilbert space such that $H$ can be
presented as a direct sum of mutually orthogonal closed subspaces,
$H=(\oplus_{k=1}^{N}H_{k})\oplus H_{0}$, $N\in{\mathbb{N}}$, $\dim H_{k}=\dim
H$, $k=1,2,\cdots,N$. Let $U_{k}:H\rightarrow H_{k}$ be unitary or antiunitary
operators,
$\lambda_{1},\lambda_{2},\cdots,\lambda_{N}\in(0,1),\sum_{k=1}^{N}{\lambda_{k}}=1$.
Let $\Phi(T)=\sum_{k=1}^{N}\lambda_{k}U_{k}TU_{k}^{*},\hskip 7.22743pt\forall
T\in S_{1}(H)$. Then
$tr(\Phi(T))=\sum_{k=1}^{N}\lambda_{k}tr(U_{k}TU_{k}^{*})=tr(T)$, the last
equality being due to $U_{k}^{*}U_{k}=I$. This implies that $\Phi$ is indeed a
mapping which maps $S_{1}(H)$ into $S_{1}(H)$. Furthermore, it is easy to see
that $\Phi(\lambda T+(1-\lambda)S)=\lambda\Phi(T)+(1-\lambda)\Phi(S)\hskip
7.22743pt\mbox{for all}\hskip 7.22743ptT,S\in S_{1}(H),0\leq\lambda\leq 1$.
Finally, all $\Phi_{k}:T\rightarrow U_{k}TU_{k}^{*}$ are isometric and for all
$T\in
S_{1}(H),|\Phi_{k}(T)|\bot|\Phi_{l}(T)|(\mbox{i.e.},|\Phi_{k}(T)||\Phi_{l}(T)|=0)$
if $k\neq l$. Thus
$\|\Phi(T)-\Phi(S)\|_{1}=\|\sum_{k=1}^{N}\lambda_{k}U_{k}(T-S)U_{k}^{*}\|_{1}=\sum_{k=1}^{N}\lambda_{k}(\|T-S\|_{1})=\|T-S\|_{1}$
for all $T,S\in S_{1}(H)$. But, in general, such a $\Phi$ is not a bijection and does not have the nice form of Theorem 2.1 and Corollary 2.2.
## 3\. Ortho-order automorphisms on $S_{1}(H)$
The purpose of this section is to characterize the ortho-order automorphisms of $S_{1}(H)$, that is, the bijective maps $\Phi$ that preserve the spectral order in both directions and preserve orthogonality in both directions. The
following is the main result.
Theorem 3.1. If $\Phi:S_{1}(H)\rightarrow S_{1}(H)$ is an ortho-order
automorphism, then there exists an either unitary or antiunitary operator $U$
on $H$, a strictly increasing continuous bijection $f:[0,1]\longmapsto[0,1]$
such that $\Phi(T)=Uf(T)U^{*}$ for all $T\in S_{1}(H)$, where $f(T)$ is
obtained from the continuous function calculus.
Before embarking on the proof of Theorem 3.1, we need some terminologies and
facts about spectral order.
First of all, by a resolution of the identity we mean a function from ${\mathbb{R}}$ into the lattice $({\mathcal{P}}(H),\leq)$ of all projections on $H$ which is increasing, right-continuous, takes the value $0$ for all small enough real numbers, and takes the value $I$ (the identity operator on $H$) for large enough real numbers. It is well-known that there is a one-to-one correspondence between the compactly supported spectral measures on the Borel sets of ${\mathbb{R}}$ and the resolutions of the identity (see [8, Page 360]). In fact, every resolution of the identity is of the form $t\mapsto E(-\infty,t]$. If $T\in S_{1}(H)$, the resolution of the identity
corresponding to $E_{T}$ is called the spectral resolution of $T$. Next, the
spectral order implies the usual order: if $T,S\in S_{1}(H)$ and $T\preceq S$,
then $T\leq S$. Furthermore, $T\preceq S$ if and only if $T^{n}\leq S^{n}$ for
every $n\in{\mathbb{N}}$. For commuting $T,S\in S_{1}(H)$, by the spectral
theorem of positive operators, it is easy to see $T\preceq S$ if and only if
$T\leq S$. Finally, for $T,S\in S_{1}(H)$, the supremum of the set $\\{T,S\\}$
in this structure denoted by $T\vee S$ exists. Similarly, the infimum of the
set $\\{T,S\\}$ denoted by $T\wedge S$ also exists. For details, one can see
[20].
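For intuition, the following small numerical sketch (a toy $2\times 2$ example with illustrative matrices; the paper itself assumes $\dim H\geq 3$) checks the spectral order by comparing the spectral projections $E_{T}(-\infty,t]$ and $E_{S}(-\infty,t]$ at the eigenvalue break points, and shows that the usual order $T\leq S$ does not imply $T\preceq S$ for noncommuting operators.

```python
# Numerical sketch of the spectral order for small positive matrices.
import numpy as np

def spectral_projection(M, t):
    """Projection E_M(-infty, t] onto the eigenspaces of eigenvalues <= t."""
    w, V = np.linalg.eigh(M)
    cols = V[:, w <= t + 1e-12]
    return cols @ cols.conj().T

def is_psd(M, tol=1e-10):
    return bool(np.all(np.linalg.eigvalsh((M + M.conj().T) / 2) >= -tol))

def spectral_leq(T, S):
    """True if E_T(-infty,t] >= E_S(-infty,t] for all t (checked at eigenvalue break points)."""
    ts = np.union1d(np.linalg.eigvalsh(T), np.linalg.eigvalsh(S))
    return all(is_psd(spectral_projection(T, t) - spectral_projection(S, t)) for t in ts)

# Noncommuting example: S - T is positive semidefinite, yet T is not below S
# in the spectral order (indeed T^2 <= S^2 already fails).
T = np.array([[1.0, 1.0], [1.0, 1.0]]) / 5.0
S = np.array([[2.0, 1.0], [1.0, 1.0]]) / 5.0

print("S - T positive semidefinite (usual order):", is_psd(S - T))       # True
print("T preceq S in the spectral order         :", spectral_leq(T, S))  # False
```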
After these preparations, we turn to the proof of Theorem 3.1.
Proof of Theorem 3.1. The proof is divided into 3 claims.
Claim 1. $\Phi$ preserves the rank of operators.
Let $\Phi$ be an ortho-order automorphism of $S_{1}(H)$. Since $0=\wedge
S_{1}(H)$, it follows that $\Phi(0)=0$. For $T\in S_{1}(H)$, we denote by
$\\{T\\}^{\bot}=\\{S\in S_{1}(H):TS=0\\}$, i.e., the set of all elements of
$S_{1}(H)$ which are orthogonal to $T$. By the spectral theorem of positive
operators, it is easy to see that $T\in S_{1}(H)$ is of rank $n$ if and only if
$\\{T\\}^{\bot\bot}$ contains $n$ pairwise orthogonal nonzero elements but it
does not contain more. As $\Phi$ preserves orthogonality in both directions,
it is now clear that $\Phi$ preserves the rank of operators.
Claim 2. There exists a strictly increasing continuous bijection
$f:[0,1]\longmapsto[0,1]$ such that $\Phi(\lambda P)=f(\lambda)\Phi(P)$ for
all $P\in{\mathcal{P}}_{1}(H)$.
By Claim 1, $\Phi$ preserves the rank of operators. In particular, $\Phi$
preserves the rank one elements of $S_{1}(H)$. Since $\Phi^{-1}$ has the same
properties as $\Phi$, we have $\Phi$ preserves the rank one elements in both
directions, i.e., $T\in S_{1}(H)$ is rank one if and only if $\Phi(T)$ is rank
one. Note that the rank one projections are exactly the maximal elements of
the set of all rank one elements in $S_{1}(H)$. This implies $\Phi$ preserves
rank one projections in both directions, i.e.,
$\Phi({\mathcal{P}}_{1}(H))={\mathcal{P}}_{1}(H)$.
Let $P$ be a rank one projection. For $\lambda\in[0,1]$, we have $\lambda P\preceq P$ and so $\Phi(\lambda P)\preceq\Phi(P)$. This implies that
there is a scalar $f_{P}(\lambda)\in[0,1]$ such that
$\Phi(\lambda P)=f_{P}(\lambda)\Phi(P).$
It follows from the properties of $\Phi$ that $f_{P}$ is a strictly increasing
continuous bijection of $[0,1]$. Now, $\Phi(0)=0$ together with $\Phi$
preserves rank one projection implies $f_{P}(0)=0,f_{P}(1)=1$.
In the following, we will prove that $f_{P}$ does not depend on $P$.
Let $E,F,E\neq F$, be rank one projections and $0<\lambda\leq\mu\leq 1$.
Computing the spectral resolution of $\lambda E$ and $\mu F$, we have
$E_{\lambda E}(-\infty,t]=\left\\{\begin{array}[]{cc}0&t<0;\\\ I-E&0\leq
t<\lambda;\\\ I&\lambda\leq t,\end{array}\right.$ $E_{\mu
F}(-\infty,t]=\left\\{\begin{array}[]{cc}0&t<0;\\\ I-F&0\leq t<\mu;\\\
I&\mu\leq t.\end{array}\right.$
From [20], we know that $E_{\lambda E\vee\mu F}(-\infty,t]=E_{\lambda
E}(-\infty,t]\wedge E_{\mu F}(-\infty,t]$, and so
$E_{\lambda E\vee\mu F}(-\infty,t]=\left\\{\begin{array}[]{cc}0&t<0;\\\
(I-E)\wedge(I-F)&0\leq t<\lambda;\\\ I-F&\lambda\leq t<\mu;\\\ I&\mu\leq
t.\end{array}\right.$
Note that $(I-E)\wedge(I-F)=I-E\vee F$, thus we have
$\lambda E\vee\mu F=\lambda(E\vee F-F)+\mu F.$
This tells us that the nonzero eigenvalues of the operator $\lambda E\vee\mu
F$ are $\lambda$ and $\mu$.
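As a check of this identity, the following toy numpy computation (added here
for illustration only) builds two distinct rank one projections in
$\mathbf{C}^{3}$, forms $\lambda(E\vee F-F)+\mu F$, and confirms that its
nonzero eigenvalues are exactly $\lambda$ and $\mu$.

```python
import numpy as np

def proj(vectors):
    # orthogonal projection onto the span of the given vectors
    Q, _ = np.linalg.qr(np.column_stack(vectors))
    return Q @ Q.conj().T

e = np.array([1.0, 0.0, 0.0])
f = np.array([1.0, 1.0, 0.0]) / np.sqrt(2)
E, F = proj([e]), proj([f])
EvF = proj([e, f])                        # E v F: projection onto span{e, f}

lam, mu = 0.3, 0.7
sup = lam * (EvF - F) + mu * F            # = lambda*E v mu*F by the computation above
print(np.round(np.linalg.eigvalsh(sup), 6))   # eigenvalues 0, lambda, mu
```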
Let $R$ be a rank two projection of $H$ and pick $0<\lambda\leq\frac{1}{2}$;
then $\lambda R\in S_{1}(H)$. Since $\Phi$ preserves the rank of operators,
$\Phi(\lambda R)$ is a rank two operator and hence it can be written in
the form
$\Phi(\lambda R)=\alpha P^{\prime}+\beta Q^{\prime}$
with mutually orthogonal rank one projections $P^{\prime},Q^{\prime}$ and
$0<\alpha\leq\beta<1$. Pick any two different rank one subprojections $P,Q$ of
$R$. Then we compute
$\Phi(\lambda R)=\Phi(\lambda P\vee\lambda Q)=\Phi(\lambda P)\vee\Phi(\lambda
Q)=f_{P}(\lambda)\Phi(P)\vee f_{Q}(\lambda)\Phi(Q).$
It follows that
$\\{\alpha,\beta\\}=\\{f_{P}(\lambda),f_{Q}(\lambda)\\}.$
We claim that $\alpha=\beta$. Suppose on the contrary that $\alpha\neq\beta$.
Without loss of generality, we may assume that $f_{P}(\lambda)=\alpha$ and
$f_{Q}(\lambda)=\beta$. Pick a third rank one subprojection $R_{0}$ of $R$
which is different from $P$ and $Q$. Then repeating the above argument for the
pair $P,R_{0}$, we have $f_{R_{0}}(\lambda)=\beta$. Similarly, for the pair
$R_{0},Q$, we have $f_{R_{0}}(\lambda)=\alpha$. This contradiction yields that
$\alpha=\beta$ and so $f_{P}(\lambda)=f_{Q}(\lambda)$ for all
$\lambda\in[0,\frac{1}{2}]$. Thus we can denote $f_{P}(\lambda)=f(\lambda)$
for all $\lambda\in[0,\frac{1}{2}]$.
It remains to prove that $f_{P}(\lambda)$ does not depend on $P$ for every
$\lambda\in(\frac{1}{2},1)$. Let $Q$ be a rank one projection such that
$PQ=0$. We compute
$\Phi(\lambda P+(1-\lambda)Q)=\Phi(\lambda P\vee(1-\lambda)Q)=\Phi(\lambda
P)\vee\Phi((1-\lambda)Q).$
Since $\Phi(\lambda P)$ and $\Phi((1-\lambda)Q)$ are orthogonal, it follows
that
$\Phi(\lambda P+(1-\lambda)Q)=f_{P}(\lambda)\Phi(P)+f(1-\lambda)\Phi(Q).$
For $T\in S_{1}(H)$, by the spectral theorem for positive operators,
$\mathrm{tr}(T)=1$ if and only if there does not exist $S\in S_{1}(H)$ with
$S\neq T$ such that $T\preceq S$. By the properties of $\Phi$ and
$\Phi^{-1}$, we have $\Phi(V)=V$; recall that $V$ denotes the surface of
$S_{1}(H)$, i.e., the set of elements of trace one. Combining this with
$\Phi(\lambda P+(1-\lambda)Q)=f_{P}(\lambda)\Phi(P)+f(1-\lambda)\Phi(Q)$, we
obtain $f_{P}(\lambda)+f(1-\lambda)=1$. Hence $f_{P}(\lambda)=1-f(1-\lambda)$,
and so $f_{P}(\lambda)$ does not depend on $P$ for every
$\lambda\in(\frac{1}{2},1)$. This completes the proof of this claim.
Claim 3. There exist a unitary or antiunitary operator $U$ on $H$ and a
strictly increasing continuous bijection $f:[0,1]\rightarrow[0,1]$ such that
$\Phi(T)=Uf(T)U^{*}$ for all $T\in S_{1}(H)$, where $f(T)$ is obtained from
the continuous functional calculus.
Now $\Phi:{\mathcal{P}}_{1}(H)\rightarrow{\mathcal{P}}_{1}(H)$ is a bijection
which preserves orthogonality in both directions. By Uhlhorn's version of
Wigner's theorem (see [22]), there exists a unitary or antiunitary operator
$U$ on $H$ such that $\Phi(P)=UPU^{*}$ for all $P\in{\mathcal{P}}_{1}(H)$.
For $T_{n}=\lambda_{1}P_{1}+\lambda_{2}P_{2}+\cdots+\lambda_{n}P_{n}$ with
$\lambda_{i}\in(0,1]$ $(i=1,2,\cdots,n)$,
$\sum_{i=1}^{n}\lambda_{i}\in(0,1]$, and $P_{i}P_{j}=0$
$(i\neq j,\ i,j=1,2,\cdots,n)$, we compute
$\displaystyle\Phi(T_{n})$
$\displaystyle=\Phi(\lambda_{1}P_{1}+\lambda_{2}P_{2}+\cdots+\lambda_{n}P_{n})$
$\displaystyle=\Phi(\lambda_{1}P_{1}\vee\lambda_{2}P_{2}\vee\cdots\vee\lambda_{n}P_{n})$
$\displaystyle=\Phi(\lambda_{1}P_{1})\vee\Phi(\lambda_{2}P_{2})\vee\cdots\vee\Phi(\lambda_{n}P_{n})$
$\displaystyle=f(\lambda_{1})UP_{1}U^{*}+\cdots+f(\lambda_{n})UP_{n}U^{*}$
$\displaystyle=Uf(T_{n})U^{*},$
where $f(T_{n})$ is obtained from the continuous functional calculus. For
every $T\in S_{1}(H)$, by the spectral theorem for positive operators, there
exists a monotonically increasing sequence $\\{T_{n}\\}_{n=1}^{+\infty}$ of
operators of the above form in $S_{1}(H)$ such that
$T=\vee_{n=1}^{+\infty}T_{n}$. Since $\Phi$ preserves the spectral order of
operators in both directions, it follows that $\Phi(T)=Uf(T)U^{*}$
for every $T\in S_{1}(H)$.
Acknowledgments
This work was partially supported by the National Natural Science Foundation
of China (10771175, 11071201), the Youth National Natural Science Foundation
of China (11001230), the Fundamental Research Funds for the Central
Universities (2010121001), and the Youth Talented Natural Science Foundation
of Fujian (2008F3103).
## References
* [1] Ph. Blanchard, E. Brüning, Mathematical Methods in Physics: Distributions, Hilbert Space Operators and Variational Methods, Progress in Mathematical Physics, Birkhäuser, Boston, Basel, Berlin, 2003.
* [2] Z.F. Bai, S.P. Du, Characterization of sum automorphisms and Jordan triple automorphisms of quantum probabilistic maps, J. Phys. A: Math. Theor., 43(2010), 165210.
* [3] E. Brüning, F. Petruccione, Nonlinear positive mappings for density matrices, Physica E, 42(2009), 436-438.
* [4] G. Cassinelli, E. De Vito, P.J. Lahti, A. Levrero, Symmetry groups in quantum mechanics and the theorem of Wigner on the symmetry transformations, Rev. Math. Phys., 9(1997), 921-941.
* [5] G. Cassinelli, E. De Vito, P.J. Lahti, A. Levrero, The theory of symmetry actions in quantum mechanics, Berlin, Heidelberg, Springer, 2004.
* [6] T. Heinosaari, Daniel Reitzner, P. Stano and M. Ziman, Coexistence of quantum operations, J. Phys. A: Math. Theor., 42(2009), 365302.
* [7] R.V. Kadison, Transformations of states in operator theory and dynamics, Topology 3(Suppl. 2)(1965), 177-198.
* [8] R.V. Kadison and J.R. Ringrose, Fundamentals of the theory of operator algebras, Vol. I, Academic Press, New York, 1983; Vol. II, Academic Press, New York, 1986.
* [9] K. Kraus, States, Effects, and Operations, Berlin, Springer-Verlag, 1983.
* [10] G. Ludwig, Foundations of Quantum Mechanics, Vol. I, Berlin-Heidelberg-New York: Springer-Verlag, 1983.
* [11] P. Mankiewicz, On extension of isometries in normed linear space, Bull. Acad. Pol. Sci., Ser. Sci. Math. Astron. Phys., 20(1972), 367-371.
* [12] L. Molnar, On some automorphisms of the set of effects on Hilbert space, Lett. Math. Phys., 51(2000), 37-45.
* [13] L. Molnar, Z. Pales, $\bot$-order automorphisms of Hilbert space effect algebras: The two dimensional case, J. Math. Phys., 42(2001), 1907-1912.
* [14] L. Molnar, Order-automorphisms of the set of bounded observables, J. Math. Phys., 42(2001), 5904-5909.
* [15] L. Molnar, Characterizations of the automorphisms of Hilbert space effect algebras, Commun. Math. Phys., 223(2001),437-450.
* [16] L. Molnar, P. Semrl, Spectral order automorphisms of the spaces of Hilbert space effects and observables, Lett. Math. Phys., 80(2007), 239-255.
* [17] L. Molnar, Maps on states preserving the relative entropy, J. Math. Phys., 49(2008), 032114.
* [18] L. Molnar, P. Szokol, Maps on states preserving relative entropy II, Linear Alg. Appl., 432(2010), 3343-3350.
* [19] L. Molnar, Selected Preserver Problems on Algebraic Structures of Linear Operators and on Function Spaces, Berlin, Springer-Verlag, 2007.
* [20] M.P. Olson, The selfadjoint operators of a von Neumann algebra form a conditionally complete lattice, Proc. Am. Math. Soc., 28(1971), 537-544.
* [21] M. Nielsen, I. Chuang, Quantum Computation and Quantum Information, Cambridge: Cambridge University Press, 2002.
* [22] U. Uhlhorn, Representation of symmetry transformations in quantum mechanics, Ark. Fysik, 23(1963), 307-340.
|
arxiv-papers
| 2013-02-14T10:19:15 |
2024-09-04T02:49:41.728220
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Zhaofang Bai and Shuanping Du",
"submitter": "Zhaofang Baii",
"url": "https://arxiv.org/abs/1302.3356"
}
|
1302.3412
|
# Secrecy capacities of compound quantum wiretap channels and applications
Holger Boche4 Minglai Cai1 Ning Cai2 Christian Deppe1
4Lehrstuhl für Theoretische Informationstechnik, Technische Universität
München, München, Germany, [email protected] 1Department of Mathematics,
University of Bielefeld, Bielefeld, Germany, {mlcai,cdeppe}@math.uni-
bielefeld.de 2The State Key Laboratory of Integrated Services Networks,
University of Xidian, Xian, China, [email protected]
###### Abstract
We determine the secrecy capacity of the compound channel with quantum
wiretapper and channel state information at the transmitter. Moreover, we
derive a lower bound on the secrecy capacity of this channel without channel
state information and determine the secrecy capacity of the compound
classical-quantum wiretap channel with channel state information at the
transmitter. We use this result to give a new proof of the achievability of a
lower bound on the entanglement generating capacity of the compound quantum
channel. We also give a new proof of the formula for the entanglement
generating capacity of the compound quantum channel with channel state
information at the encoder which was given in [7].
###### Index Terms:
Compound channel; Wiretap channels; Quantum channels; Entanglement generation
## I Introduction
Our goal is to analyze information transmission over a set of indexed
channels, which is called a compound channel. The indices are referred to as
channel states. Only one channel in this set is actually used for the
information transmission, but the users cannot control which channel in the
set will be used. The capacity of the compound channel was determined in [9].
A compound channel with an eavesdropper is called a compound wiretap channel.
It is defined as a family of pairs of channels
$\\{(W_{t},V_{t}):t=1,\cdots,T\\}$ with a common input alphabet and possibly
different output alphabets, connecting a sender with two receivers, a legal
one and a wiretapper, where $t$ stands for the state of the channel pair
$(W_{t},V_{t})$. The legitimate receiver accesses the output of the first
channel $W_{t}$ in the pair $(W_{t},V_{t})$, and the wiretapper observes the
output of the second part $V_{t}$ in the pair $(W_{t},V_{t})$, respectively,
when a state $t$ governs the channel. A code for the channel conveys
information to the legal receiver such that the wiretapper’s knowledge of the
transmitted information can be kept sufficiently small. This is a
generalization of Wyner’s wiretap channel [32] to a case of multiple channel
states. In [32] the author required that the wiretapper cannot detect the
message using a weak security criterion (cf. Remark 1).
We deal with two communication scenarios. In the first one, only the sender is
informed about the index $t$, or in other words, he has CSI, where CSI is the
abbreviation for “channel state information”. In the second one, both the
sender and the receiver do not have any information about that index at all.
The compound wiretap channels were introduced in [19]. A lower bound of the
secrecy capacity was obtained under the condition that the sender does not
have any knowledge about the CSI. In [19] the authors required that the
receiver’s average error goes to zero and that the wiretapper is not able to
detect the message using the same security criterion as in [32]. The result of
[19] was improved in [8] by using a stronger condition for the limit of
legitimate receiver’s error, i.e., the maximal error should go to zero, as
well as a stronger condition for the security criterion (c.f. Remark 1).
Furthermore, the secrecy capacity was determined for the case when the sender
had knowledge about the CSI.
In this paper we analyze two variants of the compound wiretap quantum channel.
A quantum channel can transmit both classical and quantum information. We
consider the capacity of quantum channels carrying classical information. This
is equivalent to considering the capacity of classical-quantum channels, where
the classical-quantum channels are quantum channels whose sender’s inputs are
classical variables. In general, there are two ways to represent a quantum
channel with linear algebraic tools (cf. e.g. Section VII), either as a sum of
several transformations, or as a single unitary transformation which
explicitly includes the unobserved environment. We use the latter one for our
result in the entanglement generating capacity. These two representations can
both be used to determine the entanglement generating capacity for quantum
channels, but it is unknown if this holds for the entanglement generating
capacity of compound quantum channels. The classical capacity of quantum
channels has been determined by Holevo in [15], [16], and by Schumacher and
Westmoreland in [25]. The first variant is called the classical compound
channel with quantum wiretapper. In this channel model we assume that the
wiretap channels are quantum channels, while the legal transmission channels
are classical channels. The second variant is called the compound classical-
quantum wiretap channel. In this channel model, we assume that both families
of channels are quantum channels, while the sender transmits classical
information.
Our results are summarized as follows. Under the condition that the sender has
knowledge about the CSI, the secrecy capacity for these two channel models is
derived. Additionally, when the sender does not have any knowledge about the
CSI, we determine the secrecy capacity of the compound classical-quantum
wiretap channel, and give a lower bound for the secrecy capacity of the
classical compound channel with quantum wiretapper.
As an application of the above results, we turn to the question: “What is the
maximal amount of entanglement that we can generate or transmit over a given
compound quantum channel?” For the sender and the receiver, the objective is
to share a nearly maximally entangled state on a $2^{nR}\times 2^{nR}$
dimensional Hilbert space by using a large number $n$ of instances of the
compound quantum channel. The entanglement generating capacity of the quantum
channel has been determined in [13], [20] and [4]. The entanglement generating
capacities of the compound quantum channel with and without CSI have been
determined in [6] and [7]. In our paper we derive a lower bound on the
entanglement generating capacity of the compound quantum channel by using an
alternative technique to the method in [6] and [7] (cf. Section VII).
Furthermore, we derive the entanglement generating capacity of the compound
quantum channel with CSI at the encoder using the alternative technique.
The main definitions are given in Section II.
In Section III we present some known results for classical compound wiretap
channel which are used for the proof of the results in Section IV.
In Section IV we discuss the classical compound channel with a quantum
wiretapper. For the case when the sender has the full knowledge about the CSI,
we derive the secrecy capacity. For the case when the sender does not know the
CSI, we give a lower bound for the secrecy capacity. In this channel model,
the wiretapper uses classical-quantum channels.
In Section V we derive the secrecy capacity of the compound classical-quantum
wiretap channel with CSI. In this model, both the receiver and the wiretapper
use classical quantum channels and the set of the channel states may be finite
or infinite.
In Section VI we use the results of Section V to derive a lower bound on the
entanglement generating capacity for the compound quantum channel. The
entanglement generating capacity of the compound quantum channel with CSI at
the encoder is also derived.
In Section VII we discuss the two ways to represent a quantum channel with
linear algebraic tools.
## II Preliminaries
For finite sets $A$ and $B$ we define a (discrete) classical channel
$\mathsf{V}$: $A\rightarrow P(B)$, $A\ni x\rightarrow\mathsf{V}(x)\in P(B)$ to
be a system characterized by a probability transition matrix
$\mathsf{V}(\cdot|\cdot)$. Here for a set $B$, $P(B)$ stands for the sets of
probability distributions on $B$. For $x\in A$, $y\in B$, $\mathsf{V}(y|x)$
expresses the probability of the output symbol $y$ when we send the symbol $x$
through the channel. The channel is said to be memoryless if the probability
distribution of the output depends only on the input at that time and is
conditionally independent of previous channel inputs and outputs.
Let $n\in\mathbb{N}$. For a finite set $A$ and a finite-dimensional complex
Hilbert space $H$, we define $A^{n}:=\\{(a_{1},\cdots,a_{n}):a_{i}\in A\text{
}\forall i\in\\{1,\cdots,n\\}\\}$, and we denote by $H^{\otimes n}$ the space
spanned by $\\{\rho_{1}\otimes\cdots\otimes\rho_{n}:\rho_{i}\in H\text{
}\forall i\in\\{1,\cdots,n\\}\\}$. We also write $a^{n}$ and
$\rho^{\otimes n}$ for the elements of $A^{n}$ and $H^{\otimes n}$,
respectively.
For a discrete random variable $X$ on $A$, and a discrete random variable $Y$
on $B$, we denote the Shannon entropy by $H(X)$ and the mutual information
between $X$ and $Y$ by $I(X;Y)$ (cf. [30]).
For a probability distribution $P$ on $A$, a conditional stochastic matrix
$\Lambda$, and a positive constant $\delta$, we denote the set of typical
sequences by $\mathcal{T}^{n}_{P,\delta}$ and the set of conditionally typical
sequences by $\mathcal{T}^{n}_{\Lambda,\delta}(x^{n})$ (here we use the strong
condition) (cf. [30]).
For finite-dimensional complex Hilbert spaces $G$ and $G^{\prime}$, a quantum
channel $V$: $\mathcal{S}(G)\rightarrow\mathcal{S}(G^{\prime})$,
$\mathcal{S}(G)\ni\rho\rightarrow V(\rho)\in\mathcal{S}(G^{\prime})$ is
represented by a completely positive trace preserving map, which accepts input
quantum states in $\mathcal{S}(G)$ and produces output quantum states in
$\mathcal{S}(G^{\prime})$. Here $\mathcal{S}(G)$ and $\mathcal{S}(G^{\prime})$
stand for the space of density operators on $G$ and $G^{\prime}$,
respectively.
If the sender wishes to transmit a classical message $m\in M$ to the receiver
using a quantum channel, his encoding procedure will include a classical-to-
quantum encoder $M\rightarrow\mathcal{S}(G)$ to prepare a quantum message
state $\rho\in\mathcal{S}(G)$ suitable as an input for the channel. If the
sender’s encoding is restricted to transmitting an indexed finite set of
orthogonal quantum states $\\{\rho_{x}:x\in A\\}\subset\mathcal{S}(G)$, then
we can consider the choice of the signal quantum states $\rho_{x}$ to be a
component of the channel. Thus, we obtain a channel with classical inputs
$x\in A$ and quantum outputs: $\sigma_{x}:=V(\rho_{x})$, which we call a
classical-quantum channel. This is a map $\mathtt{V}$:
$A\rightarrow\mathcal{S}(G^{\prime})$, $A\ni
x\rightarrow\mathtt{V}(x)\in\mathcal{S}(G^{\prime})$ which is represented by
the set of $|A|$ possible output quantum states
$\left\\{\sigma_{x}=\mathtt{V}(x):=V(\rho_{x}):x\in
A\right\\}\subset\mathcal{S}(G^{\prime})$, meaning that each classical input
of $x\in A$ leads to a distinct quantum output
$\sigma_{x}\in\mathcal{S}(G^{\prime})$.
Following [30], we define the $n$-th memoryless extension of the stochastic
matrix $\mathsf{V}$ by $\mathsf{V}^{n}$, i.e., for
$x^{n}=(x_{1},\cdots,x_{n})\in A^{n}$ and $y^{n}=(y_{1},\cdots,y_{n})\in
B^{n}$, $\mathsf{V}^{n}(y^{n}|x^{n})=\prod_{i=1}^{n}\mathsf{V}(y_{i}|x_{i})$.
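As a toy illustration (added here; the binary symmetric channel used as the
example is a hypothetical choice), the memoryless extension is simply a
product of single-letter transition probabilities:

```python
import numpy as np

V = np.array([[0.9, 0.1],
              [0.1, 0.9]])               # V[x, y]: a binary symmetric channel

def Vn(yn, xn):
    # memoryless n-th extension: V^n(y^n | x^n) = prod_i V(y_i | x_i)
    return float(np.prod([V[x, y] for x, y in zip(xn, yn)]))

print(Vn((0, 1, 1), (0, 1, 0)))           # 0.9 * 0.9 * 0.1 = 0.081
```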
Following [30], we define the $n$-th extension of quantum channel and
classical-quantum channel as follows. Associated with $V$ and $\mathtt{V}$ are
the channel maps on an n-block $V^{\otimes n}$: $\mathcal{S}(G^{\otimes
n})\rightarrow\mathcal{S}({G^{\prime}}^{\otimes n})$ and $\mathtt{V}^{\otimes
n}$: $A^{n}\rightarrow\mathcal{S}({G^{\prime}}^{\otimes n})$, such that for
any $\rho^{\otimes
n}=\rho_{1}\otimes\cdots\otimes\rho_{n}\in\mathcal{S}({G}^{\otimes n})$ and
any $x^{n}=(x_{1},\cdots,x_{n})\in A^{n}$, $V^{\otimes n}(\rho^{\otimes
n})=V(\rho_{1})\otimes\cdots\otimes V(\rho_{n})$, and $\mathtt{V}^{\otimes
n}(x^{n})=\mathtt{V}(x_{1})\otimes\cdots\otimes\mathtt{V}(x_{n})$,
respectively.
For a quantum state $\rho\in\mathcal{S}(G)$, we denote the von Neumann entropy
of $\rho$ by
$S(\rho)=-\mathrm{tr}(\rho\log\rho)\text{ .}$
Let $\mathfrak{P}$ and $\mathfrak{Q}$ be quantum systems. We denote the
Hilbert spaces of $\mathfrak{P}$ and $\mathfrak{Q}$ by $G^{\mathfrak{P}}$ and
$G^{\mathfrak{Q}}$, respectively. Let $\phi$ be a quantum state in
$\mathcal{S}(G^{\mathfrak{PQ}})$; we denote
$\rho:=\mathrm{tr}_{\mathfrak{Q}}(\phi)\in\mathcal{S}(G^{\mathfrak{P}})$ and
$\sigma:=\mathrm{tr}_{\mathfrak{P}}(\phi)\in\mathcal{S}(G^{\mathfrak{Q}})$.
The conditional quantum entropy of $\sigma$ given $\rho$ is defined as
$S(\sigma\mid\rho):=S(\phi)-S(\rho)\text{ .}$
For quantum states $\rho$ and $\sigma\in\mathcal{S}(G)$, we denote the
fidelity of $\rho$ and $\sigma$ by
$F(\rho,\sigma):=\|\sqrt{\rho}\sqrt{\sigma}\|_{1}^{2}\text{ ,}$
where $\|\cdot\|_{1}$ stands for the trace norm.
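The quantities above can be computed directly for small matrices; the
following numpy sketch (an added illustration, with arbitrarily chosen toy
states) evaluates the fidelity as defined here for two commuting qubit
states, where it reduces to the classical overlap
$(\sum_{i}\sqrt{p_{i}q_{i}})^{2}$.

```python
import numpy as np

def psd_sqrt(A):
    # square root of a positive semidefinite matrix via its eigendecomposition
    w, V = np.linalg.eigh(A)
    w = np.clip(w, 0.0, None)
    return (V * np.sqrt(w)) @ V.conj().T

def fidelity(rho, sigma):
    # F(rho, sigma) = || sqrt(rho) sqrt(sigma) ||_1^2
    M = psd_sqrt(rho) @ psd_sqrt(sigma)
    return float(np.sum(np.linalg.svd(M, compute_uv=False)) ** 2)

rho = np.diag([0.7, 0.3])
sigma = np.diag([0.5, 0.5])
print(round(fidelity(rho, sigma), 4))     # (sqrt(0.35) + sqrt(0.15))^2 ≈ 0.9583
```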
We denote the identity operator on a space $G$ by $\mathrm{id}_{G}$.
For a quantum state $\rho\in\mathcal{S}(G)$ and a quantum channel $V$:
$\mathcal{S}(G)\rightarrow\mathcal{S}(G^{\prime})$, the coherent information
is defined as
$I_{C}(\rho,V):=S(V(\rho))-S\left((\mathrm{id}_{G}\otimes
V)(|\psi\rangle\langle\psi|)\right)\text{ ,}$
where $|\psi\rangle\langle\psi|$ is an arbitrary purification of $\rho$ in
$\mathcal{S}(G\otimes G)$. Let $\Phi:=\\{\rho_{x}:x\in A\\}$ be a
set of quantum states labeled by elements of $A$. For a probability
distribution $P$ on $A$, the Holevo $\chi$ quantity is defined as
$\chi(P;\Phi):=S\left(\sum_{x\in A}P(x)\rho_{x}\right)-\sum_{x\in
A}P(x)S\left(\rho_{x}\right)\text{ .}$
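Both quantities can be evaluated numerically for small examples. The sketch
below is added here for illustration; the channels and the ensemble are
hypothetical toy choices, not taken from the paper. It computes $I_{C}$ for a
qubit channel specified by Kraus operators and $\chi$ for a two-state qubit
ensemble, with entropies in bits.

```python
import numpy as np

def S(rho):
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-(ev * np.log2(ev)).sum())

def coherent_info(rho, kraus):
    # I_C(rho, V) = S(V(rho)) - S((id (x) V)(|psi><psi|)), V given by Kraus operators
    w, U = np.linalg.eigh(rho)
    d = len(w)
    psi = sum(np.sqrt(max(wi, 0.0)) * np.kron(np.eye(d)[:, i], U[:, i])
              for i, wi in enumerate(w))                     # purification of rho
    Psi = np.outer(psi, psi.conj())
    joint = sum(np.kron(np.eye(d), K) @ Psi @ np.kron(np.eye(d), K).conj().T
                for K in kraus)
    marg = sum(K @ rho @ K.conj().T for K in kraus)
    return S(marg) - S(joint)

def holevo_chi(probs, states):
    avg = sum(p * r for p, r in zip(probs, states))
    return S(avg) - sum(p * S(r) for p, r in zip(probs, states))

rho = np.eye(2) / 2
print(round(coherent_info(rho, [np.eye(2)]), 4))                 # 1.0, noiseless channel
print(round(coherent_info(rho, [np.diag([1.0, 0.0]),
                                np.diag([0.0, 1.0])]), 4))       # 0.0, complete dephasing

ket0 = np.array([1.0, 0.0])
ketp = np.array([1.0, 1.0]) / np.sqrt(2)
print(round(holevo_chi([0.5, 0.5], [np.outer(ket0, ket0),
                                    np.outer(ketp, ketp)]), 4))  # about 0.6009 bits
```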
For $\rho\in\mathcal{S}(H)$ and $\alpha>0$ there exists an orthogonal subspace
projector $\Pi_{\rho,\alpha}$ commuting with $\rho^{\otimes n}$ and satisfying
$\mathrm{tr}\left(\rho^{\otimes n}\Pi_{\rho,\alpha}\right)\geq
1-\frac{d}{4n\alpha^{2}}\text{ ,}$ (1)
$\mathrm{tr}\left(\Pi_{\rho,\alpha}\right)\leq
2^{nS(\rho)+Kd\alpha\sqrt{n}}\text{ ,}$ (2)
$\Pi_{\rho,\alpha}\cdot\rho^{\otimes n}\cdot\Pi_{\rho,\alpha}\leq
2^{-nS(\rho)+Kd\alpha\sqrt{n}}\Pi_{\rho,\alpha}\text{ ,}$ (3)
where $d:=\dim H$, and $K$ is a positive constant.
For $P\in P(A)$, $\alpha>0$ and $x^{n}\in\mathcal{T}^{n}_{P}$ there exists an
orthogonal subspace projector $\Pi_{\mathtt{V},\alpha}(x^{n})$ commuting with
$\mathtt{V}^{\otimes n}_{x^{n}}$ and satisfying
$\mathrm{tr}\left(\mathtt{V}^{\otimes
n}(x^{n})\Pi_{\mathtt{V},\alpha}(x^{n})\right)\geq
1-\frac{ad}{4n\alpha^{2}}\text{ ,}$ (4)
$\mathrm{tr}\left(\Pi_{\mathtt{V},\alpha}(x^{n})\right)\leq
2^{nS(\mathtt{V}|P)+Kad\alpha\sqrt{n}}\text{ ,}$ (5)
$\displaystyle\Pi_{\mathtt{V},\alpha}(x^{n})\cdot\mathtt{V}^{\otimes
n}(x^{n})\cdot\Pi_{\mathtt{V},\alpha}(x^{n})$ $\displaystyle\leq
2^{-nS(\mathtt{V}|P)+Kad\alpha\sqrt{n}}\Pi_{\mathtt{V},\alpha}(x^{n})\text{
,}$ (6) $\mathrm{tr}\left(\mathtt{V}^{\otimes
n}(x^{n})\cdot\Pi_{P\mathtt{V},\alpha\sqrt{a}}\right)\geq
1-\frac{ad}{4n\alpha^{2}}\text{ ,}$ (7)
where $a:=|A|$, and $K$ is a positive constant (cf. [30]). Here
$S(\mathtt{V}|P)=\sum_{x\in A}P(x)S(\mathtt{V}(x))$ is the conditional entropy
of the channel for the input distribution $P$. In (7),
$\Pi_{P\mathtt{V},\alpha\sqrt{a}}$ is defined analogously to
$\Pi_{\rho,\alpha}$ in (1), (2), and (3), where $P\mathtt{V}$ is the resulting
quantum state at the outcome of $\mathtt{V}$ when the input is sent according
to $P$.
Let $A$, $B$, and $C$ be finite sets. Let $H$, $H^{\prime}$, and
$H^{\prime\prime}$ be complex Hilbert spaces. Let $\mathfrak{P}$ and
$\mathfrak{Q}$ be quantum systems, we denote the Hilbert space of
$\mathfrak{P}$ and $\mathfrak{Q}$ by $H^{\mathfrak{P}}$ and
$H^{\mathfrak{Q}}$, respectively. Let $\theta$ := $\\{1,\cdots,T\\}$ be a
finite set. For every $t\in\theta$ let
$\mathsf{W}_{t}$ be a classical channel $A\rightarrow P(B)$;
$\mathsf{V}_{t}$ be a classical channel $A\rightarrow P(C)$;
$\mathtt{V}_{t}$ be a classical-quantum channel $A\rightarrow\mathcal{S}(H)$;
$W_{t}$ be a quantum channel
$\mathcal{S}(H^{\prime})\rightarrow\mathcal{S}(H^{\prime\prime})$;
$V_{t}$ be a quantum channel
$\mathcal{S}(H^{\prime})\rightarrow\mathcal{S}(H)$;
$N_{t}$ be a quantum channel
$\mathcal{S}(H^{\mathfrak{P}})\rightarrow\mathcal{S}(H^{\mathfrak{Q}})$.
We call the set of the classical channel pairs
$(\mathsf{W}_{t},\mathsf{V}_{t})_{t\in\theta}$ a (classical) compound wiretap
channel. When the channel state is $t$, and the sender inputs a sequence
$x^{n}\in A^{n}$ into the channel, the receiver receives the output $y^{n}\in
B^{n}$ with probability $\mathsf{W}_{t}^{n}(y^{n}|x^{n})$, while the
wiretapper receives the output $z^{n}\in C^{n}$ with probability
$\mathsf{V}_{t}^{n}(z^{n}|x^{n})$.
We call the set of the classical channel/classical-quantum channel pairs
$(\mathsf{W}_{t},\mathtt{V}_{t})_{t\in\theta}$ a compound channel with quantum
wiretapper. When the channel state is $t$ and the sender inputs a sequence
$x^{n}\in A^{n}$ into the channel, the receiver receives the output $y^{n}\in
B^{n}$ with probability $\mathsf{W}_{t}^{n}(y^{n}|x^{n})$, while the
wiretapper receives an output quantum state $\mathtt{V}_{t}^{\otimes
n}(x^{n})\in\mathcal{S}(H^{\otimes n})$.
We call the set of the quantum channel pairs $(W_{t},V_{t})_{t\in\theta}$ a
quantum compound wiretap channel. When the channel state is $t$ and the sender
inputs a quantum state $\rho^{\otimes n}\in\mathcal{S}({H^{\prime}}^{\otimes
n})$ into the channel, the receiver receives an output quantum state
$W_{t}^{\otimes n}(\rho^{\otimes n})\in\mathcal{S}({H^{\prime\prime}}^{\otimes
n})$, while the wiretapper receives an output quantum state $V_{t}^{\otimes
n}(\rho^{\otimes n})\in\mathcal{S}(H^{\otimes n})$.
We call the set of the quantum channel $(N_{t})_{t\in\theta}$ a quantum
compound channel. When the channel state is $t$ and the sender inputs a
quantum state
$\rho^{\mathfrak{P}^{n}}\in\mathcal{S}({H^{\mathfrak{P}}}^{\otimes n})$ into
the channel, the receiver receives an output quantum state $N_{t}^{\otimes
n}(\rho^{\mathfrak{P}^{n}})\in\mathcal{S}({H^{\mathfrak{Q}}}^{\otimes n})$.
We distinguish two different scenarios according to the sender’s knowledge of
the channel state:
* •
the sender has the CSI, i.e. he knows which $t$ the channel state actually is,
* •
the sender does not have any CSI.
In both cases, we assume that the receiver does not have any CSI, but the
wiretapper always has the full knowledge of the CSI. Of course we also have
the case where both the sender and the receiver have the CSI, but this case is
equivalent to the case when we only have one pair of channels $(W_{t},V_{t})$,
instead of a family of pairs of channels $\\{(W_{t},V_{t}):t=1,\cdots,T\\}$.
An $(n,J_{n})$ code for the compound wiretap channel
$(\mathsf{W}_{t},\mathsf{V}_{t})_{t\in\theta}$ consists of a stochastic
encoder $E$ : $\\{1,\cdots,J_{n}\\}\rightarrow P(A^{n})$ specified by a matrix
of conditional probabilities $E(\cdot|\cdot)$, and a collection of mutually
disjoint sets $\left\\{D_{j}\subset B^{n}:j\in\\{1,\cdots,J_{n}\\}\right\\}$
(decoding sets).
If the sender has the CSI, then instead of using a single code for all channel
states, we may use the following strategy. For every $t\in\theta$, the sender
and the receiver build an $(n,J_{n})$ code
$(E_{t},\\{D_{j}:j=1,\cdots,J_{n}\\})$ such that all codes in
$\Bigl{\\{}(E_{t},\\{D_{j}:j=1,\cdots,J_{n}\\}):t\in\theta\Bigr{\\}}$ share
the same decoding sets $\\{D_{j}:j=1,\cdots,J_{n}\\}$, which do not depend on
$t$, to transmit the message.
A non-negative number $R$ is an achievable secrecy rate for the compound
wiretap channel $(\mathsf{W}_{t},\mathsf{V}_{t})$ having CSI at the encoder,
if for every positive $\varepsilon$, $\delta$, every $t\in\theta$, and a
sufficiently large $n$ there is an $(n,J_{n})$ code
$(E_{t},\\{D_{j}:j=1,\cdots,J_{n}\\})$, such that $\frac{1}{n}\log J_{n}\geq
R-\delta$, and
$\max_{t\in\theta}\max_{j\in\\{1,\cdots,J_{n}\\}}\sum_{x^{n}\in
A^{n}}E_{t}(x^{n}|j)\mathsf{W}_{t}^{n}(D_{j}^{c}|x^{n})\leq\varepsilon\text{
,}$ (8) $\max_{t\in\theta}I(X_{uni};K_{t}^{n})\leq\varepsilon\text{ ,}$ (9)
where $X_{uni}$ is a random variable uniformly distributed on
$\\{1,\cdots,J_{n}\\}$, and $K_{t}^{n}$ are the resulting random variables at
the output of wiretap channels $\mathsf{V}_{t}^{n}$. Here for a set $\Xi$,
$\Xi^{c}$ denotes its complement.
###### Remark 1
A weaker and widely used security criterion, e.g. in [19] (also cf. [32] for
wiretap channel’s security criterion), is obtained if we replace (9) with
$\max_{t\in\theta}\frac{1}{n}I(X_{uni};K_{t}^{n})\leq\varepsilon\text{ .}$ In
this paper we will follow [8] and use (9).
A non-negative number $R$ is an achievable secrecy rate for the compound
wiretap channel $(\mathsf{W}_{t},\mathsf{V}_{t})$ having no CSI at the
encoder, if for every positive $\varepsilon$, $\delta$ and a sufficiently
large $n$ there is an $(n,J_{n})$ code $(E,\\{D_{j}:j=1,\cdots,J_{n}\\})$ such
that $\frac{1}{n}\log J_{n}\geq R-\delta$, and
$\max_{t\in\theta}\max_{j\in\\{1,\cdots,J_{n}\\}}\sum_{x^{n}\in
A^{n}}E(x^{n}|j)\mathsf{W}_{t}^{n}(D_{j}^{c}|x^{n})\leq\varepsilon\text{ ,}$
(10) $\max_{t\in\theta}I(X_{uni};K_{t}^{n})\leq\varepsilon\text{ .}$ (11)
An $(n,J_{n})$ code for the compound channel with quantum wiretapper
$(\mathsf{W}_{t},\mathtt{V}_{t})_{t\in\theta}$ consists of a stochastic
encoder $E$ : $\\{1,\cdots,J_{n}\\}\rightarrow P(A^{n})$ and a collection of
mutually disjoint sets $\left\\{D_{j}\subset
B^{n}:j\in\\{1,\cdots,J_{n}\\}\right\\}$ (decoding sets).
A non-negative number $R$ is an achievable secrecy rate for the compound
channel with quantum wiretapper $(\mathsf{W}_{t},\mathtt{V}_{t})_{t\in\theta}$
having CSI at the encoder, if for every positive $\varepsilon$, $\delta$,
every $t\in\theta$, and a sufficiently large $n$, there is an $(n,J_{n})$ code
$(E_{t},\\{D_{j}:j=1,\cdots,J_{n}\\})$ such that $\frac{1}{n}\log J_{n}\geq
R-\delta$, and
$\max_{t\in\theta}\max_{j\in\\{1,\cdots,J_{n}\\}}\sum_{x^{n}\in
A^{n}}E_{t}(x^{n}|j)\mathsf{W}_{t}^{n}(D_{j}^{c}|x^{n})\leq\varepsilon\text{
,}$ (12) $\max_{t\in\theta}\chi(X_{uni};Z_{t}^{\otimes
n})\leq\varepsilon\text{ .}$ (13)
Here $Z_{t}^{\otimes n}$ are the resulting quantum states at the output of the
wiretap channels $\mathtt{V}_{t}^{\otimes n}$.
A non-negative number $R$ is an achievable secrecy rate for the compound
channel with quantum wiretapper $(\mathsf{W}_{t},\mathtt{V}_{t})_{t\in\theta}$
having no CSI at the encoder, if for every positive $\varepsilon$, $\delta$
and a sufficiently large $n$, there is an $(n,J_{n})$ code
$(E,\\{D_{j}:j=1,\cdots,J_{n}\\})$ such that $\frac{1}{n}\log J_{n}\geq
R-\delta$, and
$\max_{t\in\theta}\max_{j\in\\{1,\cdots,J_{n}\\}}\sum_{x^{n}\in
A^{n}}E(x^{n}|j)\mathsf{W}_{t}^{n}(D_{j}^{c}|x^{n})\leq\varepsilon\text{ ,}$
(14) $\max_{t\in\theta}\chi(X_{uni};Z_{t}^{\otimes n})\leq\varepsilon\text{
.}$ (15)
An $(n,J_{n})$ code carrying classical information for the compound quantum
wiretap channel $(W_{t},V_{t})_{t\in\theta}$ consists of a family of vectors
$\\{w(j):j=1,\cdots,J_{n}\\}\subset\mathcal{S}({H^{\prime}}^{\otimes n})$ and
a collection of positive semi-definite operators
$\left\\{D_{j}:j\in\\{1,\cdots,J_{n}\\}\right\\}\subset\mathcal{S}({H^{\prime\prime}}^{\otimes
n})$ which is a partition of the identity, i.e.
$\sum_{j=1}^{J_{n}}D_{j}=\mathrm{id}_{{H^{\prime\prime}}^{\otimes n}}$.
A non-negative number $R$ is an achievable secrecy rate with classical input
for the compound quantum wiretap channel $(W_{t},V_{t})_{t\in\theta}$ having
CSI at the encoder with average error, if for every positive $\varepsilon$,
$\delta$, every $t\in\theta$, and a sufficiently large $n$, there is an
$(n,J_{n})$ code carrying classical information
$(\\{w_{t}(j):j\\},\\{D_{j}:j\\})$ such that $\frac{1}{n}\log J_{n}\geq
R-\delta$, and
$\max_{t\in\theta}\frac{1}{J_{n}}\sum_{j=1}^{J_{n}}\mathrm{tr}\left((\mathrm{id}_{{H^{\prime\prime}}^{\otimes
n}}-D_{j})W_{t}^{\otimes n}\left(w_{t}(j)\right)\right)\leq\varepsilon\text{
,}$ (16) $\max_{t\in\theta}\chi(X_{uni};Z_{t}^{\otimes
n})\leq\varepsilon\text{ .}$ (17)
A non-negative number $R$ is an achievable secrecy rate with classical input
for the compound quantum wiretap channel $(W_{t},V_{t})_{t\in\theta}$ having
no CSI at the encoder, if for every positive $\varepsilon$, $\delta$, and a
sufficiently large $n$, there is an $(n,J_{n})$ code carrying classical
information $(\\{w(j):j\\},\\{D_{j}:j\\})$ such that $\frac{1}{n}\log
J_{n}\geq R-\delta$, and
$\max_{t\in\theta}\max_{j\in\\{1,\cdots,J_{n}\\}}\mathrm{tr}\left((\mathrm{id}_{{H^{\prime\prime}}^{\otimes
n}}-D_{j})W_{t}^{\otimes n}\left(w(j)\right)\right)\leq\varepsilon\text{ ,}$
(18) $\max_{t\in\theta}\chi(X_{uni};Z_{t}^{\otimes n})\leq\varepsilon\text{
.}$ (19)
Instead of “achievable secrecy rate with classical input for the compound
quantum wiretap channel ”, we say $R$ is an achievable secrecy rate for the
compound classical-quantum wiretap channel $(W_{t},V_{t})_{t\in\theta}$.
An $(n,J_{n})$ code carrying quantum information for the compound quantum
channel $\left(N_{t}^{\otimes n}\right)_{t\in\theta}$ consists of a Hilbert
space $H^{\mathfrak{A}}$ such that $\dim H^{\mathfrak{A}}=J_{n}$, and a
general decoding quantum operation $D$, i.e. a completely positive, trace
preserving map
$D:\mathcal{S}(H^{\mathfrak{Q}^{n}})\rightarrow\mathcal{S}(H^{\mathfrak{M}})$,
where $H^{\mathfrak{M}}$ is a Hilbert space such that $\dim
H^{\mathfrak{M}}=J_{n}$. The code can be used for the entanglement generation
in the following way. The sender prepares a pure bipartite quantum state
$|\psi\rangle^{\mathfrak{AP}^{n}}$, defined on $H^{\mathfrak{A}}\otimes
H^{\mathfrak{P}^{n}}$, and sends the $\mathfrak{P}^{n}$ portion of it through
the channel $N_{t}^{\otimes n}$. The receiver performs the general decoding
quantum operation on the channel output
$D:\mathcal{S}(H^{\mathfrak{Q}^{n}})\rightarrow\mathcal{S}(H^{\mathfrak{M}})$.
The sender and the receiver share the resulting quantum state
$\Omega^{\mathfrak{AM}}_{t}:=[\mathrm{id}^{\mathfrak{A}}\otimes(D\circ
N_{t}^{\otimes
n})]\left(|\psi\rangle\langle\psi|^{\mathfrak{AP}^{n}}\right)\text{ .}$ (20)
A non-negative number $R$ is an achievable entanglement generating rate for
the compound quantum channel $\left(N_{t}^{\otimes n}\right)_{t\in\theta}$ if
for every positive $\varepsilon$, $\delta$, and a sufficiently large $n$,
there is an $(n,J_{n})$ code carrying quantum information
$\left(H^{\mathfrak{A}},D\right)$ such that $\frac{1}{n}\log J_{n}\geq
R-\delta$, and
$\min_{t\in\theta}F\left(\Omega^{\mathfrak{AM}}_{t},|\Phi_{K}\rangle\langle\Phi_{K}|^{\mathfrak{AM}}\right)\geq
1-\varepsilon\text{ ,}$ (21)
where
$|\Phi_{K}\rangle^{\mathfrak{AM}}:=\sqrt{\frac{1}{J_{n}}}\sum_{j=1}^{J_{n}}|j\rangle^{\mathfrak{A}}|j\rangle^{\mathfrak{M}}\text{
,}$
which is the standard maximally entangled state shared by the sender and the
receiver. $\\{|j\rangle^{\mathfrak{A}}\\}$ and
$\\{|j\rangle^{\mathfrak{M}}\\}$ are orthonormal bases for $H^{\mathfrak{A}}$
and $H^{\mathfrak{M}}$, respectively.
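For illustration (added here; the noise model is a hypothetical isotropic
mixture), the sketch below builds the maximally entangled test state for
$J_{n}=4$ and evaluates the fidelity appearing in criterion (21) against a
slightly noisy shared state; for a pure target state the fidelity reduces to
the overlap $\langle\Phi_{K}|\Omega|\Phi_{K}\rangle$.

```python
import numpy as np

Jn = 4
basis = np.eye(Jn)
phi = sum(np.kron(basis[:, j], basis[:, j]) for j in range(Jn)) / np.sqrt(Jn)
Phi = np.outer(phi, phi)                  # |Phi_K><Phi_K| for J_n = 4

eps = 0.05
Omega = (1 - eps) * Phi + eps * np.eye(Jn ** 2) / Jn ** 2   # a slightly noisy shared state
fidelity = float(phi @ Omega @ phi)       # F(Omega, |Phi_K><Phi_K|) for a pure target
print(round(fidelity, 4))                 # (1 - eps) + eps / Jn^2 = 0.9531
```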
The largest achievable secrecy rate is called the secrecy capacity. The
largest achievable entanglement generating rate is called the entanglement
generating capacity.
## III Classical Compound Wiretap Channels
Let $A$, $B$, $C$, $\theta$, and
$(\mathsf{W}_{t},\mathsf{V}_{t})_{t\in\theta}$ be defined as in Section II.
For every $t\in\theta$, we fix a probability distribution $p_{t}$ on $A^{n}$.
Let
$p^{\prime}_{t}(x^{n}):=\begin{cases}\frac{p_{t}^{n}(x^{n})}{p_{t}^{n}(\mathcal{T}^{n}_{p_{t},\delta})}\text{
,}&\text{if }x^{n}\in\mathcal{T}^{n}_{p_{t},\delta}\\\ 0\text{
,}&\text{else}\end{cases}$
and
$X^{(t)}:=\\{X_{j,l}^{(t)}\\}_{j\in\\{1,\cdots,J_{n}\\},l\in\\{1,\cdots,L_{n,t}\\}}$
be a family of random matrices whose entries are i.i.d. according to
$p^{\prime}_{t}$, where $L_{n,t}$ is a natural number, which will be specified
later.
It was shown in [8] that for any positive $\omega$, if we set
$J_{n}=\lfloor 2^{n(\min_{t\in\theta}(I(p_{t};\mathsf{W}_{t})-\frac{1}{n}\log
L_{n,t})-\mu)}\rfloor\text{ ,}$
where $\mu$ is a positive constant which does not depend on $j$, $t$, and can
be arbitrarily small when $\omega$ goes to $0$, then there are such
$\\{D_{j}:j=1,\cdots,J_{n}\\}$ that for all $t\in\theta$ and for all
$L_{n,t}\in\mathbb{N}$
$\displaystyle
Pr\left(\max_{j\in\\{1,\cdots,J_{n}\\}}\sum_{l=1}^{L_{n,t}}\frac{1}{L_{n,t}}\mathsf{W}_{t}^{n}(D_{j}^{c}|X_{j,l}^{(t)})>\sqrt{T}2^{-n\omega/2}\right)$
$\displaystyle\leq\sqrt{T}2^{-n\omega/2}\text{ .}$ (22)
Since only the error of the legitimate receiver is analyzed here, the result
(22) involves just the channels $\mathsf{W}_{t}$ and not those of the
wiretapper. Here, for every $j\in\\{1,\cdots,J_{n}\\}$,
$l\in\\{1,\cdots,L_{n,t}\\}$, and $t\in\theta$,
$\mathsf{W}_{t}^{n}(D_{j}^{c}|X_{j,l}^{(t)})$ is a random variable taking
values in $[0,1]$, which depends on $X_{j,l}^{(t)}$, since we defined
$X_{j,l}^{(t)}$ as a random variable with values in $A^{n}$.
In view of (22), by choosing $L_{n,t}=\left\lfloor
2^{n[I(p_{t};\mathsf{V}_{t})+\tau]}\right\rfloor$ for any positive constant $\tau$,
the authors of [8] showed that $C_{S,CSI}$, the secrecy capacity of the
compound wiretap channel with CSI at the transmitter is given by
$C_{S,CSI}=\min_{t\in\theta}\max_{\mathcal{U}\rightarrow
A\rightarrow(BK)_{t}}(I(\mathcal{U};B_{t})-I(\mathcal{U};K_{t}))\text{ ,}$
(23)
where $B_{t}$ are the resulting random variables at the output of legal
receiver channels. $K_{t}$ are the resulting random variables at the output of
wiretap channels. The maximum is taken over all random variables that satisfy
the Markov chain relationships $\mathcal{U}\rightarrow A\rightarrow(BK)_{t}$.
Here $A\rightarrow(BK)_{t}$ means $A\rightarrow B_{t}\times K_{t}$, where
$A\rightarrow B_{t}$ means $A\xrightarrow{\mathsf{W}_{t}}B_{t}$ and
$A\rightarrow K_{t}$ means $A\xrightarrow{\mathsf{V}_{t}}K_{t}$.
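To make the quantity inside (23) concrete, the following added toy computation
evaluates $I(\mathcal{U};B_{t})-I(\mathcal{U};K_{t})$ for the trivial choice
$\mathcal{U}=A$ uniform and a hypothetical pair of binary symmetric channels
(a stronger main channel and a noisier wiretap channel); the capacity formula
additionally optimizes over the auxiliary variable $\mathcal{U}$ and takes the
minimum over $t$.

```python
import numpy as np

def mutual_info(px, W):
    # I(X;Y) in bits for input distribution px and channel matrix W[x, y]
    pxy = px[:, None] * W
    py = pxy.sum(axis=0)
    mask = pxy > 0
    return float((pxy[mask] * np.log2(pxy[mask] / (px[:, None] * py[None, :])[mask])).sum())

def bsc(p):
    return np.array([[1 - p, p], [p, 1 - p]])

px = np.array([0.5, 0.5])
main, wiretap = bsc(0.05), bsc(0.2)
print(round(mutual_info(px, main) - mutual_info(px, wiretap), 4))   # ≈ 0.4355 bits
```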
In the case without CSI, the idea is similar.
Fix a probability distribution $p$ on $A^{n}$. Let
$p^{\prime}(x^{n}):=\begin{cases}\frac{p^{n}(x^{n})}{p^{n}(\mathcal{T}^{n}_{p,\delta})}&\text{if
}x^{n}\in\mathcal{T}^{n}_{p,\delta}\\\ 0&\text{else}\end{cases}$
and let
$X^{n}:=\\{X_{j,l}\\}_{j\in\\{1,\cdots,J_{n}\\},l\in\\{1,\cdots,L_{n}\\}}$
be a family of random matrices whose entries are i.i.d. according to
$p^{\prime}$, where $L_{n}$ is a natural number that will be specified later.
For any $\omega>0$, we define
$J_{n}=\lfloor 2^{n(\min_{t\in\theta}(I(p;\mathsf{W}_{t})-\frac{1}{n}\log
L_{n})-\mu)}\rfloor\text{ ,}$
where $\mu$ is a positive constant which does not depend on $j$ and $t$, and
which can be arbitrarily small when $\omega$ goes to $0$. Then there are
$\\{D_{j}:j=1,\cdots,J_{n}\\}$ such that for all $t\in\theta$ and for all
$L_{n}\in\mathbb{N}$
$\displaystyle
Pr\left(\max_{j\in\\{1,\cdots,J_{n}\\}}\sum_{l=1}^{L_{n}}\frac{1}{L_{n}}\mathsf{W}_{t}^{n}(D_{j}^{c}|X_{j,l})>\sqrt{T}2^{-n\omega/2}\right)$
$\displaystyle\leq\sqrt{T}2^{-n\omega/2}\text{ .}$ (24)
In view of (24), by choosing $L_{n}=\left\lfloor
2^{n[\max_{t}I(p;\mathsf{V}_{t})+\frac{\tau}{4}]}\right\rfloor$, where $\tau$ is a
positive constant, the authors of [8] showed that $C_{S}$, the secrecy
capacity of the compound wiretap channel without CSI at the transmitter, is
lower bounded as follows
$C_{S}\geq\max_{\mathcal{U}\rightarrow
A\rightarrow(BK)_{t}}(\min_{t\in\theta}I(\mathcal{U};B_{t})-\max_{t\in\theta}I(\mathcal{U};K_{t}))\text{
.}$ (25)
## IV Compound Channels with Quantum Wiretapper
Let $A$, $B$, $H$, $\theta$, and
$(\mathsf{W}_{t},\mathtt{V}_{t})_{t\in\theta}$ be defined as in Section II.
###### Theorem 1
The secrecy capacity of the compound channel with quantum wiretapper
$(\mathsf{W}_{t},\mathtt{V}_{t})_{t\in\theta}$ in the case with CSI at the
transmitter $C_{S,CSI}$ is given by
$C_{S,CSI}=\min_{t\in\theta}\max_{\mathcal{U}\rightarrow
A\rightarrow(BZ)_{t}}(I(\mathcal{U};B_{t})-\limsup_{n\rightarrow\infty}\frac{1}{n}\chi(\mathcal{U};Z_{t}^{\otimes
n}))\text{ .}$ (26)
Respectively, in the case without CSI, the secrecy capacity of the compound
channel with quantum wiretapper $(\mathsf{W}_{t},\mathtt{V}_{t})_{t\in\theta}$
$C_{S}$ is lower bounded as follows
$C_{S}\geq\max_{\mathcal{U}\rightarrow
A\rightarrow(BZ)_{t}}(\min_{t\in\theta}I(\mathcal{U};B_{t})-\max_{t}\chi(\mathcal{U};Z_{t}))\text{
,}$ (27)
where $B_{t}$ are the resulting random variables at the output of legal
receiver channels, and $Z_{t}$ are the resulting random quantum states at the
output of wiretap channels.
###### Remark 2
We have only the multi-letter formulas (26) and (27), since a single-letter
formula is not known even for a quantum channel which is neither compound nor
wiretapped.
###### Proof:
1) Lower bound for case with CSI
For every $t\in\theta$, fix a probability distribution $p_{t}$ on $A^{n}$. Let
$J_{n}=\lfloor 2^{n(\min_{t\in\theta}(I(p_{t};\mathsf{W}_{t})-\frac{1}{n}\log
L_{n,t})-\mu)}\rfloor\text{ ,}$
where $L_{n,t}$ is a natural number that will be specified below, and $\mu$ is
defined as in Section III. Let $p^{\prime}_{t}$, $X^{(t)}$, and $D_{j}$ be
defined as in the classical case. Then (22) still holds, since the sender
transmits through a classical channel to the legitimate receiver.
Let
$Q_{t}(x^{n}):=\Pi_{p_{t}\mathtt{V}_{t},\alpha\sqrt{a}}\Pi_{\mathtt{V}_{t},\alpha}(x^{n})\cdot\mathtt{V}_{t}^{\otimes
n}(x^{n})\cdot\Pi_{\mathtt{V}_{t},\alpha}(x^{n})\Pi_{p_{t}\mathtt{V}_{t},\alpha\sqrt{a}}\text{
,}$
where $\alpha$ will be defined later.
###### Lemma 1 ( [31])
Let $\rho$ be a quantum state and $X$ be a positive operator with
$X\leq\mathrm{id}$ and $1-\mathrm{tr}(\rho X)\leq\lambda\leq 1$. Then
$\|\rho-\sqrt{X}\rho\sqrt{X}\|\leq\sqrt{8\lambda}\text{ .}$ (28)
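A quick numerical spot check of Lemma 1 (added for illustration; the state and
the operator $X$ are arbitrary toy choices, and the norm is taken to be the
trace norm) is the following.

```python
import numpy as np

def trace_norm(A):
    # trace norm of a Hermitian matrix: sum of absolute values of its eigenvalues
    return float(np.abs(np.linalg.eigvalsh(A)).sum())

rng = np.random.default_rng(2)
G = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
rho = G @ G.conj().T
rho /= np.trace(rho).real

X = np.diag([1.0, 1.0, 1.0, 0.02])        # a positive operator with X <= id
lam = 1 - np.trace(rho @ X).real          # so that 1 - tr(rho X) <= lambda <= 1
disturbed = np.sqrt(X) @ rho @ np.sqrt(X) # X is diagonal, so entrywise sqrt is its matrix sqrt
print(trace_norm(rho - disturbed) <= np.sqrt(8 * lam))   # True
```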
In view of the fact that $\Pi_{p_{t}\mathtt{V}_{t},\alpha\sqrt{a}}$ and
$\Pi_{\mathtt{V}_{t},\alpha}(x^{n})$ are both projection matrices, by (1),
(7), and Lemma 1 for any $t$ and $x^{n}$, it holds that
$\|Q_{t}(x^{n})-\mathtt{V}_{t}^{\otimes
n}(x^{n})\|\leq\sqrt{\frac{2(ad+d)}{n\alpha^{2}}}\text{ .}$ (29)
We set
$\Theta_{t}:=\sum_{x^{n}\in\mathcal{T}^{n}_{p_{t},\delta}}{p^{\prime}}_{t}^{n}(x^{n})Q_{t}(x^{n})$.
For given $z^{n}$ and $t$, $\langle z^{n}|\Theta_{t}|z^{n}\rangle$ is the
expected value of $\langle z^{n}|Q_{t}(x^{n})|z^{n}\rangle$ under the
condition $x^{n}\in\mathcal{T}^{n}_{p_{t},\delta}$.
###### Lemma 2 ([2])
Let $\mathcal{V}$ be a finite dimensional Hilbert space. Let
$\mathcal{E}\subset\mathcal{S}(\mathcal{V})$ be a collection of density
operators such that $\sigma\leq\mu\cdot\mathrm{id}_{\mathcal{V}}$ for all
$\sigma\in\mathcal{E}$, and let $p$ be a probability distribution on
$\mathcal{E}$. For any positive $\lambda$, we define a sequence of i.i.d.
random variables $X_{1},\cdots,X_{L}$, taking values in $\mathcal{E}$ such
that for all $\sigma\in\mathcal{E}$ we have
$p(\sigma)=Pr\left\\{X_{i}=\Pi_{\rho,\lambda}^{\prime}\cdot\sigma\cdot\Pi_{\rho,\lambda}^{\prime}\right\\}$,
where $\rho:=\sum_{\sigma\in\mathcal{E}}p(\sigma)\sigma$, and
$\Pi_{\rho,\lambda}^{\prime}$ is the projector onto the subspace spanned by
the eigenvectors of $\rho$ whose corresponding eigenvalues are greater than
$\frac{\lambda}{\dim\mathcal{V}}$. For any $\epsilon\in]0,1[$, the following
inequality holds
$\displaystyle Pr\left(\lVert
L^{-1}\sum_{i=1}^{L}X_{i}-\Pi_{\rho,\lambda}^{\prime}\cdot\rho\cdot\Pi_{\rho,\lambda}^{\prime}\rVert>\epsilon\right)$
$\displaystyle\leq
2\cdot(\dim\mathcal{V})\text{exp}\left(-L\frac{\epsilon^{2}\lambda}{2\ln
2(\dim\mathcal{V})\mu}\right)\text{ .}$ (30)
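Lemma 2 is an operator Chernoff-type bound. The added sketch below does not
reproduce the projected setting of the lemma, but it illustrates the
underlying concentration: the sample mean of i.i.d. random density operators
drawn from a small hypothetical ensemble approaches the ensemble average, with
the operator-norm deviation shrinking roughly like $L^{-1/2}$.

```python
import numpy as np

rng = np.random.default_rng(3)
ensemble = [np.diag([1.0, 0.0]),
            np.array([[0.5, 0.5], [0.5, 0.5]]),
            np.diag([0.25, 0.75])]
p = np.array([0.2, 0.5, 0.3])
mean_state = sum(pi * s for pi, s in zip(p, ensemble))

for L in (10, 100, 1000, 10000):
    draws = rng.choice(len(ensemble), size=L, p=p)
    avg = sum(ensemble[i] for i in draws) / L
    print(L, round(float(np.linalg.norm(avg - mean_state, 2)), 4))   # operator-norm deviation
```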
Let $\mathcal{V}$ be the range space of
$\Pi_{p_{t}\mathtt{V}_{t},\alpha\sqrt{a}}$. By (2) we have
$\dim\mathcal{V}\leq 2^{nS(p_{t})+Kd\alpha\sqrt{an}}\text{ .}$
Furthermore, for all $x^{n}$
$\displaystyle Q_{t}(x^{n})$
$\displaystyle=\Pi_{p_{t}\mathtt{V}_{t},\alpha\sqrt{a}}\Pi_{\mathtt{V}_{t},\alpha}(x^{n})\cdot\mathtt{V}_{t}^{\otimes
n}(x^{n})\cdot\Pi_{\mathtt{V}_{t},\alpha}(x^{n})\Pi_{p_{t}\mathtt{V}_{t},\alpha\sqrt{a}}$
$\displaystyle\leq
2^{-n(S(\mathtt{V}_{t}|p_{t})+Kad\alpha\sqrt{n})}\Pi_{p_{t}\mathtt{V}_{t},\alpha\sqrt{a}}\Pi_{\mathtt{V}_{t},\alpha}(x^{n})\Pi_{p_{t}\mathtt{V}_{t},\alpha\sqrt{a}}$
$\displaystyle\leq 2^{-n\cdot
S(\mathtt{V}_{t}|p_{t})+Kad\alpha\sqrt{n}}\cdot\Pi_{p_{t}\mathtt{V}_{t},\alpha\sqrt{a}}$
$\displaystyle\leq 2^{-n\cdot
S(\mathtt{V}_{t}|p_{t})+Kad\alpha\sqrt{n}}\cdot\mathrm{id}_{\mathcal{V}}\text{
.}$ (31)
The first inequality follows from (6). The second inequality holds because
$\Pi_{\mathtt{V}_{t},\alpha}$ and $\Pi_{p_{t}\mathtt{V}_{t},\alpha\sqrt{a}}$
are projection matrices. The third inequality holds because
$\Pi_{p_{t}\mathtt{V}_{t},\alpha\sqrt{a}}$ is a projection matrix onto
$\mathcal{V}$.
Let $\lambda=\epsilon$. By applying Lemma 2, where we set $\mu:=2^{-n\cdot
S(\mathtt{V}_{t}|p_{t})+Kad\alpha\sqrt{n}}$ in (30) in view of (31), if $n$ is
large enough we have
$\displaystyle
Pr\left(\lVert\sum_{l=1}^{L_{n,t}}\frac{1}{L_{n,t}}Q_{t}(X_{j,l})-\Theta_{t}\rVert>\epsilon\right)$
$\displaystyle\leq 2^{n(S(p_{t})+Kd\alpha\sqrt{an})}$ (32)
$\displaystyle\cdot\text{exp}\left(-L_{n,t}\frac{\epsilon^{2}}{2\ln
2}\lambda\cdot
2^{n(S(\mathtt{V}_{t}|p_{t})-S(p_{t}))+Kd\alpha\sqrt{n}(\sqrt{a}-1)}\right)$
$\displaystyle=2^{n(S(p_{t})+Kd\alpha\sqrt{an})}$
$\displaystyle\cdot\text{exp}\left(-L_{n,t}\frac{\epsilon^{2}}{2\ln
2}\lambda\cdot 2^{n(-\chi(p_{t};Z_{t}))+Kd\alpha\sqrt{n}(\sqrt{a}-1)}\right)$
$\displaystyle\leq\text{exp}\left(-L_{n,t}\cdot
2^{-n(\chi(p_{t};Z_{t})+\zeta)}\right)\text{ ,}$ (33)
where $\zeta$ is some suitable positive constant which does not depend on $j$,
$t$, and can be arbitrarily small when $\epsilon$ is close to $0$. The
equality in the last line holds since
$\displaystyle S(p_{t})-S(\mathtt{V}_{t}|p_{t})$
$\displaystyle=S\left(\sum_{j}p_{t}(j)\sum_{l}\frac{1}{L_{n,t}}\mathtt{V}_{t}^{\otimes
n}(X^{(t)}_{j,l})\right)$
$\displaystyle-\sum_{j}p_{t}(j)S\left(\sum_{l}\frac{1}{L_{n,t}}\mathtt{V}_{t}^{\otimes
n}(X^{(t)}_{j,l})\right)$ $\displaystyle=\chi(p_{t};Z_{t})\text{ .}$
Let $L_{n,t}=\lceil 2^{n(\chi(p_{t};Z_{t})+2\zeta)}\rceil$ and let $n$ be large
enough; then by (33), for all $j$ it holds that
$Pr\left(\lVert\sum_{l=1}^{L_{n,t}}\frac{1}{L_{n,t}}Q_{t}(X^{(t)}_{j,l})-\Theta_{t}\rVert>\epsilon\right)\leq\text{exp}(-2^{n\zeta})$
(34)
and
$\displaystyle
Pr\left(\lVert\sum_{l=1}^{L_{n,t}}\frac{1}{L_{n,t}}Q_{t}(X^{(t)}_{j,l})-\Theta_{t}\rVert\leq\epsilon\text{
}\forall t\text{ }\forall j\right)$
$\displaystyle=1-Pr\left(\bigcup_{t}\bigcup_{j}\\{\lVert\sum_{l=1}^{L_{n,t}}\frac{1}{L_{n,t}}Q_{t}(X^{(t)}_{j,l})-\Theta_{t}\rVert>\epsilon\\}\right)$
$\displaystyle\geq 1-TJ_{n}\text{exp}(-2^{n\zeta})$ $\displaystyle\geq
1-T2^{n(\min_{t\in\theta}(I(p_{t};\mathsf{W}_{t})-\frac{1}{n}\log
L_{n,t})}\text{exp}(-2^{n\zeta})$ $\displaystyle\geq 1-2^{-n\upsilon}\text{
,}$ (35)
where $\upsilon$ is some suitable positive constant which does not depend on
$j$ and $t$.
###### Remark 3
Since $\text{exp}(-2^{n\zeta})$ converges to zero double exponentially
quickly, the inequality (35) remains true even if $T$ depends on $n$ and is
exponentially large in $n$, i.e., we can still achieve an exponentially
small error.
From (22) and (35) it follows: For any $\epsilon>0$, if $n$ is large enough
then the event
$\displaystyle\left(\bigcap_{t}\left\\{\max_{j\in\\{1,\cdots,J_{n}\\}}\sum_{l=1}^{L_{n,t}}\frac{1}{L_{n,t}}\mathsf{W}_{t}^{n}(D_{j}^{c}(\mathcal{X})|X_{j,l}^{(t)})\leq\epsilon\right\\}\right)$
$\displaystyle\cap\left(\left\\{\lVert\sum_{l=1}^{L_{n,t}}\frac{1}{L_{n,t}}Q_{t}(X_{j,l}^{(t)})-\Theta_{t}\rVert\leq\epsilon\text{
}\forall t\text{ }\forall j\right\\}\right)$
has a positive probability. This means that we can find a realization
$x_{j,l}^{(t)}$ of $X_{j,l}^{(t)}$ with a positive probability such that for
all $t\in\theta$ and $j\in\\{1,\cdots,J_{n}\\}$, we have
$\sum_{l=1}^{L_{n,t}}\frac{1}{L_{n,t}}\mathsf{W}_{t}^{n}(D_{j}^{c}|x_{j,l}^{(t)})\leq\epsilon\text{
,}$ (36)
and
$\lVert\sum_{l=1}^{L_{n,t}}\frac{1}{L_{n,t}}Q_{t}(x_{j,l}^{(t)})-\Theta_{t}\rVert\leq\epsilon\text{
.}$ (37)
For an arbitrary $\gamma>0$ let
$R:=\min_{t\in\theta}\max_{\mathcal{U}\rightarrow
A\rightarrow(BZ)_{t}}(I(\mathcal{U};B_{t})-\limsup_{n\rightarrow\infty}\frac{1}{n}\chi(\mathcal{U};Z_{t}^{\otimes
n}))-\gamma\text{ .}$
Choose $\mu<\frac{1}{2}\gamma$, then for every $t\in\theta$, there is an
$(n,J_{n})$ code
$\left((x_{j,l}^{(t)})_{j=1,\cdots,J_{n},l=1,\cdots,L_{n,t}},\\{D_{j}:j=1,\cdots,J_{n}\\}\right)$
such that
$\frac{1}{n}\log J_{n}\geq R\text{ ,}$ (38)
$\lim_{n\rightarrow\infty}\max_{t\in\theta}\max_{j\in\\{1,\cdots,J_{n}\\}}\sum_{l=1}^{L_{n,t}}\frac{1}{L_{n,t}}\mathsf{W}_{t}^{n}(D_{j}^{c}|x_{j,l}^{(t)})=0\text{
.}$ (39)
Choose a suitable $\alpha$ in (29) such that for all $j$, it holds
$\lVert\mathtt{V}_{t}^{\otimes
n}(x_{j,l}^{(t)})-Q_{t}(x_{j,l}^{(t)})\rVert<\epsilon\text{ .}$ For any given
$j^{\prime}\in\\{1,\cdots,J_{n}\\}$, (29) and (37) yield
$\displaystyle\lVert\sum_{l=1}^{L_{n,t}}\frac{1}{L_{n,t}}\mathtt{V}_{t}^{\otimes
n}(x_{j^{\prime},l}^{(t)})-\Theta_{t}\rVert$
$\displaystyle\leq\lVert\sum_{l=1}^{L_{n,t}}\frac{1}{L_{n,t}}\mathtt{V}_{t}^{\otimes
n}(x_{j^{\prime},l}^{(t)})-\sum_{l=1}^{L_{n,t}}\frac{1}{L_{n,t}}Q_{t}(x_{j^{\prime},l}^{(t)})\rVert$
$\displaystyle+\lVert\sum_{l=1}^{L_{n,t}}\frac{1}{L_{n,t}}Q_{t}(x_{j^{\prime},l}^{(t)})-\Theta_{t}\rVert$
$\displaystyle\leq\sum_{l=1}^{L_{n,t}}\frac{1}{L_{n,t}}\lVert\mathtt{V}_{t}^{\otimes
n}(x_{j^{\prime},l}^{(t)})-Q_{t}(x_{j^{\prime},l}^{(t)})\rVert$
$\displaystyle+\lVert\sum_{l=1}^{L_{n,t}}\frac{1}{L_{n,t}}Q_{t}(x_{j^{\prime},l}^{(t)})-\Theta_{t}\rVert$
$\displaystyle\leq 2\epsilon\text{ ,}$ (40)
and
$\|\sum_{j=1}^{J_{n}}\frac{1}{J_{n}}\sum_{l=1}^{L_{n,t}}\frac{1}{L_{n,t}}\mathtt{V}_{t}^{\otimes
n}(x_{j,l}^{(t)})-\Theta_{t}\|\leq\epsilon$.
###### Lemma 3 (Fannes inequality (cf. [31]))
Let $\Phi$ and $\Psi$ be two quantum states in a $d$-dimensional complex
Hilbert space and $\|\Phi-\Psi\|\leq\mu<\frac{1}{e}$, then
$|S(\Phi)-S(\Psi)|\leq\mu\log d-\mu\log\mu\text{ .}$ (41)
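A rough numerical illustration of the Fannes bound (41) (added here; it takes
$\|\cdot\|$ to be the trace norm and the logarithms to base 2, consistent with
the paper's other formulas) is:

```python
import numpy as np

def vn_entropy(rho):
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-(ev * np.log2(ev)).sum())

def trace_norm(A):
    return float(np.abs(np.linalg.eigvalsh(A)).sum())   # A Hermitian

d = 4
rng = np.random.default_rng(1)
X = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
rho = X @ X.conj().T
rho /= np.trace(rho).real
sigma = 0.95 * rho + 0.05 * np.diag([1.0, 0.0, 0.0, 0.0])   # a nearby density matrix

mu = trace_norm(rho - sigma)                                # mu < 1/e here
lhs = abs(vn_entropy(rho) - vn_entropy(sigma))
rhs = mu * np.log2(d) - mu * np.log2(mu)
print(lhs <= rhs, round(lhs, 4), round(rhs, 4))
```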
By Lemma 3 and the inequality (40), for a uniformly distributed random
variable $X_{uni}$ with values in $\\{1,\cdots,J_{n}\\}$, we have
$\displaystyle\chi(X_{uni};Z_{t}^{\otimes n})$
$\displaystyle=S\left(\sum_{j=1}^{J_{n}}\frac{1}{J_{n}}\sum_{l=1}^{L_{n,t}}\frac{1}{L_{n,t}}\mathtt{V}_{t}^{\otimes
n}(x_{j,l}^{(t)})\right)$
$\displaystyle-\sum_{j=1}^{J_{n}}\frac{1}{J_{n}}S\left(\sum_{l=1}^{L_{n,t}}\frac{1}{L_{n,t}}\mathtt{V}_{t}^{\otimes
n}(x_{j,l}^{(t)})\right)$
$\displaystyle\leq\left|S\left(\sum_{j=1}^{J_{n}}\frac{1}{J_{n}}\sum_{l=1}^{L_{n,t}}\frac{1}{L_{n,t}}\mathtt{V}_{t}^{\otimes
n}(x_{j,l}^{(t)})\right)-S\left(\Theta_{t}\right)\right|$
$\displaystyle+\left|S(\Theta_{t})-\sum_{j=1}^{J_{n}}\frac{1}{J_{n}}S\left(\sum_{l=1}^{L_{n,t}}\frac{1}{L_{n,t}}\mathtt{V}_{t}^{\otimes
n}(x_{j,l}^{(t)})\right)\right|$ $\displaystyle\leq\epsilon\log
d-\epsilon\log\epsilon$
$\displaystyle+\left|\sum_{j=1}^{J_{n}}\frac{1}{J_{n}}\left[S(\Theta_{t})-S\left(\sum_{l=1}^{L_{n,t}}\frac{1}{L_{n,t}}\mathtt{V}_{t}^{\otimes
n}(x_{j,l}^{(t)})\right)\right]\right|$ $\displaystyle\leq 3\epsilon\log
d-\epsilon\log\epsilon-2\epsilon\log 2\epsilon\text{ .}$ (42)
By (42), for any positive $\lambda$ if $n$ is sufficiently large, we have
$\max_{t\in\theta}\chi(X_{uni};Z_{t}^{\otimes n})\leq\lambda\text{ .}$ (43)
For every $t\in\theta$ we define an $(n,J_{n})$ code
$(E_{t},\\{D_{j}:j=1,\cdots,J_{n}\\})$, where $E_{t}$ is so built that
$Pr\left(E_{t}(j)=x_{j,l}^{(t)}\right)=\frac{1}{L_{n,t}}$ for
$l\in\\{1,\cdots,L_{n,t}\\}$. Combining (39) and (43) we obtain
$\displaystyle C_{S,CSI}\geq\min_{t\in\theta}\max_{\mathcal{U}\rightarrow
A\rightarrow(BZ)_{t}}(I(\mathcal{U};B_{t})-\limsup_{n\rightarrow\infty}\frac{1}{n}\chi(\mathcal{U};Z_{t}^{\otimes
n}))\text{ .}$ (44)
Thus, we have shown the “$\geq$” part of (26).
2) Upper bound for case with CSI
Let $(\mathcal{C}_{n})$ be a sequence of $(n,J_{n})$ codes such that
$\max_{t\in\theta}\max_{j\in\\{1,\cdots,J_{n}\\}}\sum_{x^{n}\in
A^{n}}E(x^{n}|j)\mathsf{W}_{t}^{n}(D_{j}^{c}|x^{n})=:\epsilon_{1,n}\text{ ,}$
(45) $\max_{t\in\theta}\chi(J;Z_{t}^{\otimes n})=:\epsilon_{2,n}\text{ ,}$
(46)
where $\lim_{n\to\infty}\epsilon_{1,n}=0$ and
$\lim_{n\to\infty}\epsilon_{2,n}=0$, and $J$ denotes the random variable which is
uniformly distributed on the message set $\\{1,\ldots,J_{n}\\}$.
We denote the secrecy capacity of the wiretap channel
$(\mathsf{W}_{t},\mathtt{V}_{t})$ in the sense of [30] by
$C(\mathsf{W}_{t},\mathtt{V}_{t})$. Choose $t^{\prime}\in\theta$ such that
$C(\mathsf{W}_{t^{\prime}},\mathtt{V}_{t^{\prime}})=\min_{t\in\theta}C(\mathsf{W}_{t},\mathtt{V}_{t})$.
It is known (cf. [22]) that even in the case without wiretapper (we have only
one classical channel $\mathsf{W}_{t^{\prime}}$), the capacity cannot exceed
$I(X_{uni};B_{t^{\prime}})+\xi$ for any constant $\xi>0$. Thus for any
$\epsilon>0$ choose $\xi=\frac{1}{2}\epsilon$, if $n$ is sufficiently large
the capacity of a classical channel with quantum wiretapper
$(\mathsf{W}_{t^{\prime}},\mathtt{V}_{t^{\prime}})$ cannot be greater than
$\displaystyle I(X_{uni};B_{t^{\prime}})+\xi$
$\displaystyle\leq[I(X_{uni};B_{t^{\prime}})-\limsup_{n\rightarrow\infty}\chi(X_{uni};Z_{t^{\prime}}^{\otimes
n})]+\xi+\epsilon_{2,n}$
$\displaystyle\leq[I(X_{uni};B_{t^{\prime}})-\limsup_{n\rightarrow\infty}\frac{1}{n}\chi(X_{uni};Z_{t^{\prime}}^{\otimes
n})]+\epsilon\text{ .}$
Since we cannot exceed the secrecy capacity of the worst wiretap channel, we
have
$C_{S,CSI}\leq\min_{t\in\theta}\max_{\mathcal{U}\rightarrow
A\rightarrow(BZ)_{t}}(I(\mathcal{U};B_{t})-\limsup_{n\rightarrow\infty}\frac{1}{n}\chi(\mathcal{U};Z_{t}^{\otimes
n}))\text{ .}$ (47)
Combining (47) and (44) we obtain (26).
3) Lower bound for case without CSI
Fix a probability distribution $p$ on $A^{n}$. Let
$J_{n}=\lfloor 2^{\min_{t\in\theta}(nI(p;\mathsf{W}_{t})-\log
L_{n})-n\mu}\rfloor\text{ ,}$
where $L_{n}$ is a natural number that will be defined below, and $\mu$ is
defined as in Section III. Let $p^{\prime}$, $X^{n}$, and $D_{j}$ be defined
as in the classical case, then (24) still holds.
For a positive $\alpha$ we define
$Q_{t}(x^{n}):=\Pi_{p\mathtt{V}_{t},\alpha\sqrt{a}}\Pi_{\mathtt{V}_{t},\alpha}(x^{n})\cdot\mathtt{V}_{t}^{\otimes
n}(x^{n})\cdot\Pi_{\mathtt{V}_{t},\alpha}(x^{n})\Pi_{p\mathtt{V}_{t},\alpha\sqrt{a}}$
and
$\Theta_{t}:=\sum_{x^{n}\in\mathcal{T}^{n}_{p,\delta}}{p^{\prime}}^{n}(x^{n})Q_{t}(x^{n})$.
For any positive $\delta$ let $L_{n}=\lceil
2^{n\max_{t}(\chi(p;Z_{t})+\delta)}\rceil$ and $n$ be large enough, in the
same way as our proof of (35) for the case with CSI at the encoder, there is a
positive constant $\upsilon$ so that
$Pr\left(\lVert\sum_{l=1}^{L_{n}}\frac{1}{L_{n}}Q_{t}(X_{j,l})-\Theta_{t}\rVert\leq\epsilon\text{
}\forall t\text{ }\forall j\right)\geq 1-2^{-n\upsilon}\text{ .}$ (48)
For any positive $\epsilon$ we choose a suitable $\alpha$; by (24) and (48)
there is a realization $x_{j,l}$ of $X_{j,l}$ with a positive probability such
that: For all $t\in\theta$ and all $j\in\\{1,\cdots J_{n}\\}$, we have
$\sum_{l=1}^{L_{n}}\frac{1}{L_{n}}\mathsf{W}_{t}^{n}(D_{j}^{c}|x_{j,l})\leq\epsilon\text{
,}$
$\lVert\sum_{l=1}^{L_{n}}\frac{1}{L_{n}}Q_{t}(x_{j,l})-\Theta_{t}\rVert\leq\epsilon\text{
.}$
For any $\gamma>0$ let
$R:=\max_{\mathcal{U}\rightarrow
A\rightarrow(BZ)_{t}}\left(\min_{t\in\theta}I(\mathcal{U};B_{t})-\max_{t}\chi(\mathcal{U};Z_{t})\right)-\gamma\text{
.}$
Then there is an $(n,J_{n})$ code
$\left(E,\\{D_{j}:j=1,\cdots,J_{n}\\}\right)$, where $E$ is so built that
$Pr\left(E(j)=x_{j,l}\right)=\frac{1}{L_{n}}$ for
$l\in\\{1,\cdots,L_{n}\\}$, such that
$\liminf_{n\rightarrow\infty}\frac{1}{n}\log J_{n}\geq R$, and
$\lim_{n\rightarrow\infty}\max_{t\in\theta}\max_{j\in\\{1,\cdots,J_{n}\\}}\sum_{l=1}^{L_{n}}\frac{1}{L_{n}}\mathsf{W}_{t}^{n}(D_{j}^{c}|x_{j,l})=0\text{
.}$ (49)
In the same way as our proof of (43) for the case with CSI at the encoder,
$\max_{t\in\theta}\chi(X_{uni};Z_{t}^{\otimes n})\leq\epsilon\text{ ,}$ (50)
for any uniformly distributed random variable $X_{uni}$ with values
in $\\{1,\cdots,J_{n}\\}$.
Combining (49) and (50) we obtain
$C_{S}\geq\max_{\mathcal{U}\rightarrow
A\rightarrow(BZ)_{t}}(\min_{t\in\theta}I(\mathcal{U};B_{t})-\max_{t\in\theta}\chi(\mathcal{U};Z_{t}))\text{
.}$
∎
## V Compound Classical-Quantum Wiretap Channel
Let $A$, $H$, $H^{\prime}$, $H^{\prime\prime}$, $\theta$, and
$(W_{t},V_{t})_{t\in\theta}$ be defined as in Section II.
###### Theorem 2
The secrecy capacity of the compound classical-quantum wiretap channel in the
case with CSI is given by
$C_{CSI}=\lim_{n\rightarrow\infty}\min_{t\in\theta}\max_{P_{inp},w_{t}}\frac{1}{n}(\chi(P_{inp};B_{t}^{\otimes
n})-\chi(P_{inp};Z_{t}^{\otimes n}))\text{ ,}$ (51)
where $B_{t}$ are the resulting random quantum states at the output of legal
receiver channels and $Z_{t}$ are the resulting random quantum states at the
output of wiretap channels. The supremum is taken over all probability
distributions $P_{inp}$ on the input quantum states $w_{t}$.
Assume that the sender’s encoding is restricted to transmitting an indexed
finite set of orthogonal quantum states $\\{\rho_{x}:x\in
A\\}\subset\mathcal{S}({H^{\prime}}^{\otimes n})$, then the secrecy capacity
of the compound classical-quantum wiretap channel in the case with no CSI at
the encoder is given by
$\displaystyle C_{S}$
$\displaystyle=\lim_{n\rightarrow\infty}\max_{\mathcal{U}\rightarrow
A\rightarrow(BZ)_{t}}\frac{1}{n}\biggl{(}\min_{t\in\theta}\chi(\mathcal{U};B_{t}^{\otimes
n})$ $\displaystyle-\max_{t\in\theta}\chi(\mathcal{U};Z_{t}^{\otimes
n})\biggr{)}\text{ .}$ (52)
###### Proof:
At first we are going to prove (51). Our idea is to send the information in
two parts. First, we send the channel state information with finite blocks of
finite bits with a code $C_{1}$ to the receiver, and then, depending on $t$,
we send the message with a code $C_{2}^{(t)}$ in the second part.
1.1) Sending channel state information with finite bits
We don’t require that the first part should be secure against the wiretapper,
since we assume that the wiretapper already has the full knowledge of the CSI.
By ignoring the security against the wiretapper, we consider only the compound
channel $(W_{t})_{t\in\theta}$. Let $W=(W_{t})_{t}$ be an arbitrary compound
classical quantum channel. Then by [5], for each $\lambda\in(0,1)$, the
$\lambda$-capacity $C(W,\lambda)$ equals
$C(W,\lambda)=\max_{P_{inp}\in P(A)}\min_{t}\chi(P_{inp};W_{t})\text{ .}$ (53)
If $\max_{P_{inp}}\min_{t}\chi(P_{inp};W_{t})>0$ holds, then the sender can
build a code $C_{1}$ such that the CSI can be sent to the legal receiver with
a block of length $l\leq\frac{\log T}{\min_{t}\max_{P_{inp}}\chi(P_{inp};W_{t})}-\epsilon$. If
$\max_{P_{inp}}\min_{t}\chi(P_{inp};W_{t})=0$ holds, we cannot build a code
$C_{1}$ such that the CSI can be sent to the legal receiver. But this does not
cause any problem for our proof, since if
$\max_{P_{inp}}\min_{t}\chi(P_{inp};W_{t})=0$ the right hand side of (51) is
zero. This means that in this case we need to do nothing.
1.2) Message transmission when both the sender and the legal receiver know
CSI
If both the sender and the legal receiver have the full knowledge of $t$, then
we only have to look at the single wiretap channel $(W_{t},V_{t})$.
In [12] and [13] it was shown that if $n$ is sufficiently large, there exists
an $(n,J_{n})$ code for the quantum wiretap channel $(W,V)$ with
$\log J_{n}=\max_{P_{inp},w}(\chi(P_{inp};B^{\otimes
n})-\chi(P_{inp};Z^{\otimes n}))-\epsilon\text{ ,}$ (54)
for any positive $\epsilon$ and positive $\delta$, where $B$ is the resulting random quantum state at the output of the legal receiver’s channel and $Z$ the resulting random quantum state at the output of the wiretap channel.
When the sender and the legal receiver both know $t$, they can build an
$(n,J_{n,t})$ code $C_{2}^{(t)}$ where
$\log J_{n,t}=\max_{P_{inp},w_{t}}(\chi(P_{inp};B_{t}^{\otimes
n})-\chi(P_{inp};Z_{t}^{\otimes n}))-\epsilon\text{ .}$ (55)
Thus,
$C_{CSI}\geq\lim_{n\rightarrow\infty}\min_{t\in\theta}\max_{P_{inp},w_{t}}\frac{1}{n}(\chi(P_{inp};B_{t}^{\otimes
n})-\chi(P_{inp};Z_{t}^{\otimes n}))\text{ .}$ (56)
###### Remark 4
For the construction of the second part of our code, we use random coding and
request that the randomization can be sent (cf. [12]). However, it was shown
in [8] that the randomization could not always be sent if we require that we
use one unique code which is secure against the wiretapper and suitable for
every channel state, i.e., it does not depend on $t$. This is not a
counterexample to our results above, neither to the construction of $C_{1}$
nor to the construction of $C_{2}^{(t)}$, because of the following facts.
The first part of our code does not need to be secure. For our second part,
the legal transmitters can use the following strategy: At first they build a
code $C_{1}=(E,\\{D_{t}:t=1,\cdots,|\theta|\\})$ and a code
$C_{2}^{(t)}=(E^{(t)},\\{D^{(t)}_{j}:j=1,\cdots,J_{n}\\})$ for every
$t\in\theta$. If the sender wants to send the CSI $t^{\prime}\in\theta$ and
the message $j$, he encodes $t^{\prime}$ with $E$ and $j$ with
$E^{(t^{\prime})}$, then he sends both parts together through the channel.
After receiving both parts, the legal receiver decodes the first part with
$\\{D_{t}:t\\}$, and chooses the right decoders
$\\{D^{(t^{\prime})}_{j}:j\\}\in\left\\{\\{D^{(t)}_{j}:j\\}:t\in\theta\right\\}$
to decode the second part. With this strategy, we can avoid using one unique
code which is suitable for every channel state.
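To make this two-part strategy concrete, here is a minimal toy sketch (the channel states, codeword strings, and the three-element message set are hypothetical stand-ins, not the random codes $C_{1}$ and $C_{2}^{(t)}$ constructed in the proof):

```python
# Toy illustration of the two-part strategy from Remark 4 (all objects are
# hypothetical stand-ins, not the random codes constructed in the proof).
theta = ["t1", "t2"]                      # channel states
messages = [0, 1, 2]                      # message set, here 0-based

# Code C_1: one codeword per channel state, decoded by {D_t : t}.
E1 = {t: f"csi-word-{t}" for t in theta}
D1 = {E1[t]: t for t in theta}

# Codes C_2^{(t)}: one codebook per channel state, decoded by {D_j^{(t)} : j}.
E2 = {t: {j: f"msg-word-{t}-{j}" for j in messages} for t in theta}
D2 = {t: {E2[t][j]: j for j in messages} for t in theta}

def send(t_prime, j):
    """Sender knows the CSI t' and the message j; both parts are transmitted."""
    return (E1[t_prime], E2[t_prime][j])

def receive(first_part, second_part):
    """Receiver first decodes the CSI, then picks the matching decoder."""
    t_prime = D1[first_part]
    return D2[t_prime][second_part]

assert receive(*send("t2", 1)) == 1
```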
1.3) Upper bound for the case with CSI at the encoder
For any $\epsilon>0$, we choose $t^{\prime}\in\theta$ such that
$C(W_{t^{\prime}},V_{t^{\prime}})\leq\inf_{t\in\theta}C(W_{t},V_{t})+\epsilon$.
From [12] and [13] we know that the secrecy capacity of the quantum wiretap
channel $(W_{t^{\prime}},V_{t^{\prime}})$ cannot be greater than
$\lim_{n\rightarrow\infty}\max_{P_{inp},w_{t^{\prime}}}\frac{1}{n}(\chi(P_{inp};B_{t^{\prime}}^{\otimes
n})-\chi(P_{inp};Z_{t^{\prime}}^{\otimes n}))\text{ .}$
Since we cannot exceed the capacity of the worst wiretap channel, we have
$C_{CSI}\leq\lim_{n\rightarrow\infty}\min_{t\in\theta}\max_{P_{inp},w_{t}}\frac{1}{n}(\chi(P_{inp};B_{t}^{\otimes
n})-\chi(P_{inp};Z_{t}^{\otimes n})){.}$ (57)
This together with (56) completes the proof of (51).
###### Remark 5
In [29] it was shown that if for a given $t$ and any $n\in\mathbb{N}$,
$\chi(P_{inp};B_{t}^{\otimes n})\geq\chi(P_{inp};Z_{t}^{\otimes n})$
holds for all $P_{inp}\in P(A)$ and $\\{w_{t}(j):j=1,\cdots,J_{n}\\}\subset
S(H^{\otimes n})$, then
$\displaystyle\lim_{n\rightarrow\infty}\max_{P_{inp},w_{t}}\frac{1}{n}(\chi(P_{inp};B_{t}^{\otimes
n})-\chi(P_{inp};Z_{t}^{\otimes n}))$
$\displaystyle=\max_{P_{inp},w_{t}}(\chi(P_{inp};B_{t})-\chi(P_{inp};Z_{t}))\text{
.}$
Thus if for every $t\in\theta$ and $n\in\mathbb{N}$,
$\chi(P_{inp};B_{t}^{\otimes n})\geq\chi(P_{inp};Z_{t}^{\otimes n})$
holds for all $P_{inp}\in P(A)$ and $\\{w_{t}(j):j=1,\cdots,J_{n}\\}\subset
S(H^{\otimes n})$, we have
$C_{CSI}=\min_{t\in\theta}\max_{P_{inp},w_{t}}(\chi(P_{inp};B_{t})-\chi(P_{inp};Z_{t}))\text{
.}$
Now we are going to prove (52).
2.1) Lower bound for the case without CSI
Fix a probability distribution $p$ on $A^{n}$. Let
$J_{n}=\lfloor 2^{\min_{t\in\theta}\chi(p;B_{t}^{\otimes
n})-\max_{t\in\theta}\chi(p;Z_{t}^{\otimes n})-2n\mu}\rfloor\text{ ,}$
$L_{n}=\lceil 2^{\max_{t}\chi(p;Z_{t}^{\otimes n})+n\mu}\rceil\text{ ,}$
and let $p^{\prime}$ and $X^{n}=\\{X_{j,l}:j,l\\}$ be defined as in the
classical case (cf. Section III). Since $J_{n}\cdot L_{n}\leq
2^{\min_{t}\chi(p;B_{t}^{\otimes n})-n\mu}$, in [11] it was shown that if $n$
is sufficiently large, there exist a collection of quantum states
$\\{\rho_{x^{n}}:x^{n}\in A^{n}\\}\subset\mathcal{S}({H^{\prime}}^{\otimes
n})$, a collection of positive semi-definite operators
$\\{D_{t,x^{n}}:t\in\theta,x^{n}\in A^{n}\\}$, and a positive constant
$\beta$, such that for any
$(t,j,l)\in\theta\times\\{1,\cdots,J_{n}\\}\times\\{1,\cdots,L_{n}\\}$ it
holds
$Pr\left[\mathrm{tr}\left(W_{t}^{n}(\rho_{X_{j,l}}^{\otimes
n})D_{t,X_{j,l}}\right)\geq 1-2^{-n\beta}\right]>1-2^{-n\beta}\text{ ,}$ (58)
and for any realization $\\{x_{j,l}:j,l\\}$ of $\\{X_{j,l}:j,l\\}$ it holds
that
$\sum_{t\in\theta}\sum_{j=1}^{J_{n}}\sum_{l=1}^{L_{n}}D_{t,x_{j,l}}\leq\mathrm{id}\text{
.}$
We define
$Q_{t}(\rho_{x^{n}}):=\Pi_{pV_{t},\alpha\sqrt{a}}\Pi_{V_{t},\alpha}(x^{n})\cdot
V_{t}^{\otimes
n}(\rho_{x^{n}})\cdot\Pi_{V_{t},\alpha}(x^{n})\Pi_{pV_{t},\alpha\sqrt{a}}\text{
,}$
and
$\Theta_{t}:=\sum_{x^{n}\in\mathcal{T}^{n}_{p,\delta}}{p^{\prime}}^{n}(x^{n})Q_{t}(\rho_{x^{n}})$.
Choosing $n$ sufficiently large, in the same way as our proof of (35) for the
classical compound channel with quantum wiretapper, there is a positive
constant $\upsilon$ such that
$Pr\left(\lVert\sum_{l=1}^{L_{n}}\frac{1}{L_{n}}Q_{t}(\rho_{X^{(t)}_{j,l}})-\Theta_{t}\rVert\leq\epsilon\text{
}\forall t\text{ }\forall j\right)\geq 1-2^{-n\upsilon}\text{ .}$ (59)
We choose a suitable $\alpha$. If $n$ is sufficiently large, we can find a
realization $x_{j,l}$ of $X_{j,l}$ with a positive probability such that: For
all $j\in\\{1,\cdots J_{n}\\}$, we have
$\min_{t\in\theta}\mathrm{tr}\left(W_{t}^{n}(\rho_{x_{j,l}}^{\otimes
n})D_{t,x_{j,l}}\right)\geq 1-2^{-n\beta}$
and
$\max_{t\in\theta}\lVert\sum_{l=1}^{L_{n}}\frac{1}{L_{n}}Q_{t}(\rho_{x_{j,l}})-\Theta_{t}\rVert\leq\epsilon\text{
.}$
We define $D_{j}:=\sum_{t\in\theta}\sum_{l=1}^{L_{n}}D_{t,x_{j,l}}$, then
$\sum_{j=1}^{J_{n}}D_{j}=\sum_{t\in\theta}\sum_{j=1}^{J_{n}}\sum_{l=1}^{L_{n}}D_{t,x_{j,l}}\leq\mathrm{id}$.
Furthermore, for all $t^{\prime}\in\theta$ and
$l^{\prime}\in\\{1,\cdots,L_{n}\\}$ we have
$\displaystyle\mathrm{tr}\left(W_{t^{\prime}}^{n}(\rho_{x_{j,l^{\prime}}}^{\otimes
n})D_{j}\right)$
$\displaystyle=\sum_{t\in\theta}\sum_{l=1}^{L_{n}}\mathrm{tr}\left(W_{t^{\prime}}^{n}(\rho_{x_{j,l^{\prime}}}^{\otimes
n})D_{t,x_{j,l}}\right)$
$\displaystyle\geq\mathrm{tr}\left(W_{t^{\prime}}^{n}(\rho_{x_{j,l^{\prime}}}^{\otimes
n})D_{t^{\prime},x_{j,l^{\prime}}}\right)$ $\displaystyle\geq
1-2^{-n\beta}\text{ ,}$
the inequality in the third line holds because for two positive semi-definite
matrices $M_{1}$ and $M_{2}$, we always have
$\mathrm{tr}\left(M_{1}M_{2}\right)=\mathrm{tr}\left(\sqrt{M_{1}}M_{2}\sqrt{M_{1}}\right)\geq
0$.
For any $\gamma>0$ let
$R:=\max_{\mathcal{U}\rightarrow
A\rightarrow(BZ)_{t}}\frac{1}{n}\left[\min_{t\in\theta}\chi(p;B_{t}^{\otimes
n})-\max_{t\in\theta}\chi(p;Z_{t}^{\otimes n})\right]-\gamma\text{ .}$
Then for any positive $\lambda$, there is an $(n,J_{n},\lambda)$ code
$\biggl{(}\\{w(j):=\sum_{l=1}^{L_{n}}\frac{1}{L_{n}}\rho_{x_{j,l}}^{\otimes
n}:j=1,\cdots,J_{n},\\},\\{D_{j}:j=1,\cdots,J_{n}\\}\biggr{)}$, such that
$\liminf_{n\rightarrow\infty}\frac{1}{n}\log J_{n}\geq R$,
$\max_{t\in\theta}\max_{j\in\\{1,\cdots,J_{n}\\}}\mathrm{tr}\left(\mathrm{id}_{{H^{\prime\prime}}^{\otimes
n}}-W_{t}^{\otimes n}\left(w(j)\right)D_{j}\right)\leq\lambda\text{ ,}$ (60)
and in the same way as our proof of (43) for the classical compound channel
with quantum wiretapper,
$\max_{t\in\theta}\chi(X_{uni};Z_{t}^{\otimes n})\leq\lambda\text{ ,}$ (61)
for any uniformly distributed random variable $X_{uni}$ with value in
$\\{1,\cdots,J_{n}\\}$.
Combining (60) and (61) we obtain
$C_{S}\geq\lim_{n\rightarrow\infty}\max_{\mathcal{U}\rightarrow
A\rightarrow(BZ)_{t}}\frac{1}{n}\left(\min_{t\in\theta}\chi(\mathcal{U};B_{t}^{\otimes
n})-\max_{t\in\theta}\chi(\mathcal{U};Z_{t}^{\otimes n})\right)\text{ .}$ (62)
2.2) Upper bound for the case without CSI
Let $(\mathcal{C}_{n})=(\\{\rho_{j}^{(n)}:j\\},\\{D_{j}^{(n)}:j\\})$ be a sequence of $(n,J_{n},\lambda_{n})$ codes such that
$\max_{t\in\theta}\max_{j\in\\{1,\cdots,J_{n}\\}}\mathrm{tr}\left(\mathrm{id}-W_{t}^{\otimes
n}\left(\rho_{j}^{(n)}\right)D_{j}^{(n)}\right)\leq\lambda_{n}\text{ ,}$ (63)
$\max_{t\in\theta}\chi(X_{uni};Z_{t}^{\otimes n})=:\epsilon_{2,n}\text{ ,}$
(64)
where $\lim_{n\to\infty}\lambda_{n}=0$ and $\lim_{n\to\infty}\epsilon_{2,n}=0$, and $X_{uni}$ denotes the random variable which is uniformly distributed on the message set $\\{1,\ldots,J_{n}\\}$.
We denote the classical capacity of the quantum channel $W_{t}$ in the sense
of [30] by $C(W_{t})$. Choose $t^{\prime}\in\theta$ such that
$C(W_{t^{\prime}})=\min_{t\in\theta}C(W_{t})$.
It is known (cf. [22]) that $C(W_{t^{\prime}})$ cannot exceed $\frac{1}{n}\chi(X_{uni};B_{t^{\prime}}^{\otimes n})+\xi$ for any constant $\xi>0$. Since the secrecy capacity of a compound wiretap channel cannot exceed the capacity of the worst channel without wiretapper, for any $\epsilon>0$ we choose $\xi=\frac{1}{2}\epsilon$; if $n$ is large enough, the secrecy rate of $(\mathcal{C}_{n})$ cannot be greater than
$\displaystyle\frac{1}{n}\chi(X_{uni};B_{t^{\prime}}^{\otimes n})+\xi$
$\displaystyle=\min_{t\in\theta}\frac{1}{n}\chi(X_{uni};B_{t}^{\otimes
n})+\xi$
$\displaystyle\leq\min_{t\in\theta}\frac{1}{n}\chi(X_{uni};B_{t}^{\otimes
n})-\max_{t\in\theta}\frac{1}{n}\chi(X_{uni};Z_{t}^{\otimes
n})+\xi+\frac{1}{n}\epsilon_{2,n}$
$\displaystyle\leq\frac{1}{n}\left(\min_{t\in\theta}\chi(X_{uni};B_{t}^{\otimes
n})-\max_{t\in\theta}\chi(X_{uni};Z_{t}^{\otimes n})\right)+\epsilon\text{ .}$
(65)
Thus
$C_{S}\leq\lim_{n\rightarrow\infty}\max_{\mathcal{U}\rightarrow
A\rightarrow(BZ)_{t}}\frac{1}{n}\left(\min_{t\in\theta}\chi(\mathcal{U};B_{t}^{\otimes
n})-\max_{t\in\theta}\chi(\mathcal{U};Z_{t}^{\otimes n})\right)\text{ .}$ (66)
Combining (66) and (62) we obtain (52). ∎
So far we have assumed that $|\theta|$, the number of the channels, is finite; therefore we can send the CSI with finite bits to the receiver in the case where the sender has CSI. Now we look at the case where $|\theta|$ can be arbitrary. Of course we are not allowed to send the CSI with finite bits if $|\theta|=\infty$, but in this case we may use a “finite approximation” to obtain the following corollary.
###### Corollary 1
For an arbitrary set $\theta$ we have
$C_{S,CSI}=\lim_{n\rightarrow\infty}\inf_{t\in\theta}\max_{P_{inp},w_{t}}\frac{1}{n}(\chi(P_{inp};B_{t}^{\otimes
n})-\chi(P_{inp};Z_{t}^{\otimes n}))\text{ .}$ (67)
###### Proof:
Let $W:\mathcal{S}(H^{\prime})\rightarrow\mathcal{S}(H^{\prime\prime})$ be a
linear map, then let
$\|W\|_{\lozenge}:=\sup_{n\in\mathbb{N}}\max_{a\in S(\mathbb{C}^{n}\otimes
H^{\prime}),\|a\|_{1}=1}\|(\mathrm{id}_{n}\otimes W)(a)\|_{1}\text{ .}$ (68)
It is known [23] that this norm is multiplicative, i.e. $\|W\otimes
W^{\prime}\|_{\lozenge}=\|W\|_{\lozenge}\cdot\|W^{\prime}\|_{\lozenge}$.
A $\tau$-net in the space of the completely positive trace preserving maps
$\mathcal{S}(H^{\prime})\rightarrow\mathcal{S}(H^{\prime\prime})$ is a finite
set $\left({W^{(k)}}\right)_{k=1}^{K}$ of completely positive trace preserving
maps $\mathcal{S}(H^{\prime})\rightarrow\mathcal{S}(H^{\prime\prime})$ with
the property that for each completely positive trace preserving map
$W:\mathcal{S}(H^{\prime})\rightarrow\mathcal{S}(H^{\prime\prime})$, there is
at least one $k\in\\{1,\cdots,K\\}$ with $\|W-W^{(k)}\|_{\lozenge}<\tau$.
###### Lemma 4 ($\tau-$net [21])
Let $H^{\prime}$ and $H^{\prime\prime}$ be finite-dimensional complex Hilbert
spaces. For any $\tau\in(0,1]$, there is a $\tau$-net of quantum-channels
$\left(W^{(k)}\right)_{k=1}^{K}$ in the space of the completely positive trace
preserving maps
$\mathcal{S}(H^{\prime})\rightarrow\mathcal{S}(H^{\prime\prime})$ with
$K\leq(\frac{3}{\tau})^{2{d^{\prime}}^{4}}$, where $d^{\prime}=\dim
H^{\prime}$.
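For orientation, the net size guaranteed by Lemma 4 grows very quickly with the input dimension; the following minimal sketch merely evaluates the bound $K\leq(\frac{3}{\tau})^{2{d^{\prime}}^{4}}$ for a few hypothetical values of $\tau$ and $d^{\prime}$ (illustration only):

```python
from math import log2

def log2_tau_net_bound(tau: float, d_prime: int) -> float:
    # log2 of the bound K <= (3/tau)^(2*d'^4) from Lemma 4
    return 2 * d_prime ** 4 * log2(3.0 / tau)

for d_prime in (2, 3):
    for tau in (0.1, 0.01):
        print(f"d'={d_prime}, tau={tau}: K <= 2^{log2_tau_net_bound(tau, d_prime):.1f}")
```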
If $|\theta|$ is arbitrary, then for any $\xi>0$ let
$\tau=\frac{\xi}{-\log\xi}$. By Lemma 4 there exists a finite set
$\theta^{\prime}$ with
$|\theta^{\prime}|\leq(\frac{3}{\tau})^{2{d^{\prime}}^{4}}$ and $\tau$-nets
$\left(W_{t^{\prime}}\right)_{t^{\prime}\in\theta^{\prime}}$,
$\left(V_{t^{\prime}}\right)_{t^{\prime}\in\theta^{\prime}}$ such that for
every $t\in\theta$ we can find a $t^{\prime}\in\theta^{\prime}$ with
$\left\|W_{t}-W_{t^{\prime}}\right\|_{\lozenge}\leq\tau$ and
$\left\|V_{t}-V_{t^{\prime}}\right\|_{\lozenge}\leq\tau$. For every
$t^{\prime}\in\theta^{\prime}$, the legal transmitters build a code
$C_{2}^{(t^{\prime})}=\\{w_{t^{\prime}},\\{D_{t^{\prime},j}:j\\}\\}$. Since by
[12], the error probability of the code $C_{2}^{(t^{\prime})}$ decreases exponentially in its length, there is an $N=O(-\log\xi)$ such that for all
$t^{\prime\prime}\in\theta^{\prime}$ it holds
$\frac{1}{J_{N}}\sum_{j=1}^{J_{N}}\mathrm{tr}\left(W_{t^{\prime\prime}}^{\otimes
N}\left(w_{t^{\prime\prime}}(j)\right)D_{t^{\prime\prime},j}\right)\geq
1-\lambda-\xi\text{ ,}$ (69) $\chi(X_{uni};Z_{t^{\prime}}^{\otimes
N})\leq\xi\text{ .}$ (70)
Then, if the sender obtains the channel state information “$t$” , he chooses a
“$t^{\prime}$” $\in\theta^{\prime}$ such that
$\left\|W_{t}-W_{t^{\prime}}\right\|_{\lozenge}\leq\tau$ and
$\left\|V_{t}-V_{t^{\prime}}\right\|_{\lozenge}\leq\tau$. He can send
“$t^{\prime}$” to the legal receiver in the first part with finite bits, and
then they build a code $C_{2}^{(t^{\prime})}$ that fulfills (69) and (70) to
transmit the message.
For every ${t^{\prime}}$ and $j$ let
$|\psi_{t^{\prime}}(j)\rangle\langle\psi_{t^{\prime}}(j)|\in\mathcal{S}({H^{\prime}}^{\otimes
N}\otimes{H^{\prime}}^{\otimes N})$ be an arbitrary purification of the
quantum state $w_{t^{\prime}}(j)$, then $\mathrm{tr}\left[\left(W_{t}^{\otimes
N}-W_{t^{\prime}}^{\otimes
N}\right)(w_{t^{\prime}}(j))\right]=\mathrm{tr}\left(\mathrm{tr}_{{H^{\prime}}^{\otimes
N}}\left[\mathrm{id}_{H^{\prime}}^{\otimes N}\otimes(W_{t}^{\otimes
N}-W_{t^{\prime}}^{\otimes
N})\left(|\psi_{t^{\prime}}(j)\rangle\langle\psi_{t^{\prime}}(j)|\right)\right]\right)$.
We have
$\displaystyle\mathrm{tr}\left[\left(W_{t}^{\otimes N}-W_{t^{\prime}}^{\otimes
N}\right)(w_{t^{\prime}}(j))\right]$
$\displaystyle=\mathrm{tr}\left(\mathrm{tr}_{{H^{\prime}}^{\otimes
N}}\left[\mathrm{id}_{H^{\prime}}^{\otimes N}\otimes(W_{t}^{\otimes
N}-W_{t^{\prime}}^{\otimes
N})\left(|\psi_{t^{\prime}}(j)\rangle\langle\psi_{t^{\prime}}(j)|\right)\right]\right)$
$\displaystyle=\mathrm{tr}\left[\mathrm{id}_{H^{\prime}}^{\otimes
N}\otimes(W_{t}^{\otimes n}-W_{t^{\prime}}^{\otimes
N})\left(|\psi_{t^{\prime}}(j)\rangle\langle\psi_{t^{\prime}}(j)|\right)\right]$
$\displaystyle\leq
d^{\prime}d^{\prime\prime}N^{2}\left\|\mathrm{id}_{H^{\prime}}^{\otimes
N}\otimes(W_{t}^{\otimes N}-W_{t^{\prime}}^{\otimes
N})\left(|\psi_{t^{\prime}}(j)\rangle\langle\psi_{t^{\prime}}(j)|\right)\right\|_{1}$
$\displaystyle\leq d^{\prime}d^{\prime\prime}N^{2}\|W_{t}^{\otimes
N}-W_{t^{\prime}}^{\otimes
N}\|_{\lozenge}\cdot\left\|\left(|\psi_{t^{\prime}}(j)\rangle\langle\psi_{t^{\prime}}(j)|\right)\right\|_{1}$
$\displaystyle\leq d^{\prime}d^{\prime\prime}N^{3}\tau\text{ ,}$
where $d^{\prime\prime}=\dim H^{\prime\prime}$. The second equality follows
from the definition of trace. The first inequality follows from the fact that
for any matrix $(a_{i,j})_{i,j=1,\cdots,n}$ it holds that $n\cdot\|(a_{i,j})_{i,j}\|_{1}=n\cdot\max_{1\leq j\leq n}\sum_{i=1}^{n}|a_{ij}|\geq\sum_{j}\sum_{i}|a_{ij}|\geq\sum_{j}|a_{jj}|\geq|\sum_{j}a_{jj}|=|\mathrm{tr}(a_{i,j})_{i,j}|$. The second inequality follows from the definition of $\|\cdot\|_{\lozenge}$. The third inequality follows from the facts that $\|\left(|\psi_{t^{\prime}}(j)\rangle\langle\psi_{t^{\prime}}(j)|\right)\|_{1}=1$ and $\left\|W_{t}^{\otimes N}-W_{t^{\prime}}^{\otimes N}\right\|_{\lozenge}\leq N\cdot\left\|W_{t}-W_{t^{\prime}}\right\|_{\lozenge}$, which follows from a telescoping sum, since $\|\cdot\|_{\lozenge}$ is multiplicative and every completely positive trace preserving map has diamond norm one.
It follows that
$\displaystyle\biggl{|}\frac{1}{J_{N}}\sum_{j=1}^{J_{N}}\mathrm{tr}\left(W_{t}^{\otimes
N}\left(w_{t^{\prime}}(j)\right)D_{t^{\prime},j}\right)$
$\displaystyle-\frac{1}{J_{N}}\sum_{j=1}^{J_{N}}\mathrm{tr}\left(W_{t^{\prime}}^{\otimes
N}\left(w_{t^{\prime}}(j)\right)D_{t^{\prime},j}\right)\biggr{|}\allowdisplaybreaks$
$\displaystyle\leq\frac{1}{J_{N}}\sum_{j=1}^{J_{N}}\left|\mathrm{tr}\left[\left(W_{t}^{\otimes
N}-W_{t^{\prime}}^{\otimes
N}\right)\left(w_{t^{\prime}}(j)\right)D_{t^{\prime},j}\right]\right|\allowdisplaybreaks$
$\displaystyle\leq\frac{1}{J_{N}}\sum_{j=1}^{J_{N}}\left|\mathrm{tr}\left[\left(W_{t}^{\otimes
N}-W_{t^{\prime}}^{\otimes
N}\right)\left(w_{t^{\prime}}(j)\right)\right]\right|\allowdisplaybreaks$
$\displaystyle\leq\frac{1}{J_{N}}J_{N}d^{\prime}d^{\prime\prime}N^{3}\tau\allowdisplaybreaks$
$\displaystyle=d^{\prime}d^{\prime\prime}N^{3}\tau\text{ .}$ (71)
$d^{\prime}d^{\prime\prime}N^{3}\tau$ can be made arbitrarily small when $\xi$ is close to zero, since $N=O(-\log\xi)$ and $\tau=\frac{\xi}{-\log\xi}$.
Let $X_{uni}$ be a random variable uniformly distributed on
$\\{1,\cdots,J_{N}\\}$, and $\\{\rho(j):j=1,\cdots,J_{N}\\}$ be a set of quantum states labeled by elements of $\\{1,\cdots,J_{N}\\}$. We have
$\displaystyle\lvert\chi(X_{uni};V_{t})-\chi(X_{uni};V_{t^{\prime}})\rvert$
$\displaystyle\leq\left\lvert
S\left(\sum_{j=1}^{J_{N}}\frac{1}{J_{N}}V_{t}(\rho(j))\right)-S\left(\sum_{j=1}^{J_{N}}\frac{1}{J_{N}}V_{t^{\prime}}(\rho(j))\right)\right\rvert$
$\displaystyle+\left\lvert\sum_{j=1}^{J_{N}}\frac{1}{J_{N}}S\left(V_{t}(\rho(j))\right)-\sum_{j=1}^{J_{N}}\frac{1}{J_{N}}S\left(V_{t^{\prime}}(\rho(j))\right)\right\rvert$
$\displaystyle\leq 2\tau\log d-2\tau\log\tau\text{ ,}$ (72)
where $d=\dim H$. The inequality in the last line holds by Lemma 3 and because
$\left\|V_{t}(\rho)-V_{t^{\prime}}(\rho)\right\|_{1}\leq\tau$ for all
$\rho\in\mathcal{S}(H)$ when
$\left\|V_{t}-V_{t^{\prime}}\right\|_{\lozenge}\leq\tau$.
By (71) and (72) we have
$\inf_{t\in\theta}\frac{1}{J_{N}}\sum_{j=1}^{J_{N}}\mathrm{tr}\left(W_{t}^{\otimes N}\left(w_{t^{\prime}}(j)\right)D_{t^{\prime},j}\right)\geq 1-\lambda-\xi-d^{\prime}d^{\prime\prime}N^{3}\tau\text{ ,}$
$\sup_{t\in\theta}\chi(X_{uni};Z_{t}^{\otimes N})\leq\xi+2\tau\log d-2\tau\log\tau\text{ .}$
Since $\xi+d^{\prime}d^{\prime\prime}N^{3}\tau$ and $2\tau\log d-2\tau\log\tau$ can be made arbitrarily small when $\xi$ is close to zero, we have
$\inf_{t\in\theta}\frac{1}{J_{N}}\sum_{j=1}^{J_{N}}\mathrm{tr}\left(W_{t}^{\otimes N}\left(w_{t^{\prime}}(j)\right)D_{t^{\prime},j}\right)\geq 1-\lambda\text{ ,}$ $\sup_{t\in\theta}\chi(X_{uni};Z_{t}^{\otimes N})\leq\epsilon\text{ .}$
The number of bits that the sender uses to transmit the CSI is large but constant, so it is still negligible compared to the length of the second part. We obtain
$C_{CSI}\geq\lim_{n\rightarrow\infty}\inf_{t\in\theta}\max_{P_{inp},w_{t}}\frac{1}{n}(\chi(P_{inp};B_{t}^{\otimes
n})-\chi(P_{inp};Z_{t}^{\otimes n}))\text{ .}$ (73)
The proof of the converse is similar to the one given in the proof of Theorem 2, where we consider a worst $t^{\prime}$. ∎
###### Remark 6
In (51) and Corollary 1 we have only required that the legal receiver can
decode the correct message with a high probability if $n$ is sufficiently
large. We have not specified how fast the error probability tends to zero when
the code length goes to infinity. If we analyze the relation between the error
probability $\varepsilon$ and the code length, then we have the following
facts.
In the case of finite $\theta$, let $\varepsilon_{1}$ denote the error
probability of the first part of the code (i.e. the legal receiver does not
decode the correct CSI), and let $\varepsilon_{2}$ denote the error
probability of the second part of the code (i.e. the legal receiver decodes
the correct CSI, but does not decode the message). Since the length of the
first part of the code is $l\cdot\log\mathit{c}\cdot
c^{\prime}=O(\log\varepsilon_{1})$, we have $\varepsilon_{1}^{-1}$ is
$O(\text{exp}(l\cdot\log\mathit{c}\cdot c^{\prime}))=O(\text{exp}(n))$, where
$n$ stands for the length of the first part of the code. For the second part of the code, $\varepsilon_{2}$ decreases exponentially in the length of the second part, as proven in [12]. Thus, the error probability $\varepsilon=\max\\{\varepsilon_{1},\varepsilon_{2}\\}$ decreases exponentially in the code length in the case of finite $\theta$.
If $\theta$ is infinite, let $\varepsilon_{1}$ denote the error probability of the first part of the code. Here we have to build two $\tau$-nets for a suitable $\tau$, each containing $O((\frac{-\log\varepsilon_{1}}{\varepsilon_{1}})^{2{d^{\prime}}^{4}})$ channels. If we want to send the CSI of these $\tau$-nets, the length of the first part $l$ will be $O(-2{d^{\prime}}^{4}\cdot\log(\varepsilon_{1}\log\varepsilon_{1}))$, which means that here $\varepsilon_{1}^{-1}$ will be $O(\text{exp}(\frac{n}{4{d^{\prime}}^{4}}))=O(\text{exp}(n))$. Thus we can still achieve an error probability that decreases exponentially in the code length in the case of infinite $\theta$.
## VI Entanglement Generation over compound quantum channels
Let $\mathfrak{P}$, $\mathfrak{Q}$, $H^{\mathfrak{P}}$, $H^{\mathfrak{Q}}$,
$\theta$, and $\left(N_{t}^{\otimes n}\right)_{t\in\theta}$ be defined as in
section II (i.e. we assume that $\theta$ is finite).
We denote $\dim H^{\mathfrak{P}}$ by $a$, and denote
$\mathcal{X}:=\\{1,\cdots,a\\}$. Consider the eigen-decomposition of
$\rho^{\mathfrak{P}}$ into the orthonormal pure quantum state ensemble
$\\{p(x),|\phi_{x}\rangle^{\mathfrak{P}}:x\in\mathcal{X}\\}$,
$\sum_{x\in\mathcal{X}}p(x)|\phi_{x}\rangle\langle\phi_{x}|^{\mathfrak{P}}=\rho^{\mathfrak{P}}\text{
.}$
The distribution $p$ defines a random variable $X$.
###### Theorem 3
The entanglement generating capacity of $\left(N_{t}\right)_{t\in\theta}$ is
bounded as follows
$A\geq\max_{p}\left(\min_{t\in\theta}\chi(p;Q_{t})-\max_{t\in\theta}\chi(p;E_{t})\right)\text{
,}$ (74)
where $Q_{t}$ stands for the quantum outputs that the receiver observes at the
channel state $t$, and $E_{t}$ the quantum outputs at the environment.
(Theorem 3 is weaker than the result in [7]; the reason is that we use a different quantum channel representation for our proof. For details and the result in [7] cf. Section VII.)
###### Proof:
Let $\rho^{\mathfrak{P}}\rightarrow U_{N_{t}}\rho^{\mathfrak{P}}U_{N_{t}}^{*}$
be a unitary transformation which represents $N_{t}$ (cf. Section VII), where
$U_{N_{t}}$ is a linear operator $\mathcal{S}(H^{\mathfrak{P}})$ $\rightarrow$
$\mathcal{S}(H^{\mathfrak{QE}})$, and $\mathfrak{E}$ is the quantum system of
the environment. Fix a $\rho^{\mathfrak{P}}$ with eigen-decomposition
$\sum_{x\in\mathcal{X}}p(x)|\phi_{x}\rangle^{\mathfrak{P}}\langle\phi_{x}|^{\mathfrak{P}}$.
If the channel state is $t$, the local output density matrix seen by the
receiver is
$\mathrm{tr}_{\mathfrak{E}}\left(\sum_{x}p(x)U_{N_{t}}|\phi_{x}\rangle\langle\phi_{x}|^{\mathfrak{P}}U_{N_{t}}^{*}\right)\text{
,}$
and the local output density matrix seen by the environment (which we
interpret as the wiretapper) is
$\mathrm{tr}_{\mathfrak{Q}}\left(\sum_{x}p(x)U_{N_{t}}|\phi_{x}\rangle\langle\phi_{x}|^{\mathfrak{P}}U_{N_{t}}^{*}\right)\text{
.}$
Therefore $\left(N_{t}\right)_{t\in\theta}$ defines a compound classical-
quantum wiretap channel $(W_{N_{t}},V_{N_{t}})_{t\in\theta}$, where
$W_{N_{t}}:H^{\mathfrak{P}}\rightarrow H^{\mathfrak{Q}}$,
$\sum_{x\in\mathcal{X}}p(x)|\phi_{x}\rangle\langle\phi_{x}|^{\mathfrak{P}}$
$\rightarrow$
$\mathrm{tr}_{\mathfrak{E}}\left(\sum_{x}p(x)U_{N_{t}}|\phi_{x}\rangle\langle\phi_{x}|^{\mathfrak{P}}U_{N_{t}}^{*}\right)$,
and $V_{N_{t}}:H^{\mathfrak{P}}\rightarrow H^{\mathfrak{E}}$, $\sum_{x\in\mathcal{X}}p(x)|\phi_{x}\rangle\langle\phi_{x}|^{\mathfrak{P}}$ $\rightarrow$ $\mathrm{tr}_{\mathfrak{Q}}\left(\sum_{x}p(x)U_{N_{t}}|\phi_{x}\rangle\langle\phi_{x}|^{\mathfrak{P}}U_{N_{t}}^{*}\right)$.
1) Building the encoder and the first part of the decoding operator
Let
$J_{n}=\lceil
2^{n[\min_{t}\chi(X;Q_{t})-\max_{t}\chi(X;E_{t})-2\delta]}\rceil\text{ ,}$
and
$L_{n}=\lceil 2^{n(\max_{t}\chi(X;E_{t})+\delta)}\rceil\text{ .}$
For the compound classical-quantum wiretap channel
$(W_{N_{t}},V_{N_{t}})_{t\in\theta}$, since
$\displaystyle\\#\\{(j,l):j=1,\cdots,J_{n},l=1,\cdots,L_{n}\\}$
$\displaystyle=J_{n}\cdot L_{n}\leq 2^{n\min_{t}[\chi(X;Q_{t})-\delta]}\text{
,}$
if $n$ is large enough, by Theorem 2 and [11], the following holds. There is a
collection of quantum states
$\\{\rho_{x_{j,l}}^{\mathfrak{P}^{n}}:j=1,\cdots,J_{n},l=1,\cdots,L_{n}\\}\subset\mathcal{S}(H^{\mathfrak{P}^{n}})$,
a collection of positive semi-definite operators
$\\{D_{t,j,l}:=D_{t,x_{j,l}}:t\in\theta,j=1,\cdots,J_{n},l=1,\cdots,L_{n}\\}$,
a positive constant $\beta$, and a quantum state $\xi_{t}^{\mathfrak{E}^{n}}$
on $H^{\mathfrak{E}^{n}}$, such that
$\mathrm{tr}\left((D_{t,x_{j,l}}^{\mathfrak{Q}^{n}}\otimes\mathrm{id}^{\mathfrak{E}^{n}})U_{N_{t}}\rho_{x_{j,l}}^{\mathfrak{P}^{n}}U_{N_{t}}^{*}\right)\geq
1-2^{-n\beta}\text{ ,}$ (75)
and
$\|\omega_{j,t}^{\mathfrak{E}^{n}}-\xi_{t}^{\mathfrak{E}^{n}}\|_{1}<\epsilon\text{
,}$ (76)
where
$\omega_{j,t}^{\mathfrak{E}^{n}}:=\frac{1}{L_{n}}\sum_{l=1}^{L_{n}}\mathrm{tr}_{\mathfrak{Q}^{n}}\left(U_{N_{t}}\rho_{x_{j,l}}^{\mathfrak{P}^{n}}U_{N_{t}}^{*}\right)$.
Now the quantum state $\rho_{x_{j,l}}^{\mathfrak{P}^{n}}$ may be pure or
mixed. Assume $\rho_{x_{j,l}}^{\mathfrak{P}^{n}}$ is a mixed quantum state
$\sum_{i=1}^{n}{p^{\prime}}_{j,l}(i)|\varkappa_{x_{j,l}}^{(i)}\rangle\langle\varkappa_{x_{j,l}}^{(i)}|^{\mathfrak{P}^{n}}$,
then
$\displaystyle\sum_{i=1}^{n}{p^{\prime}}_{j,l}(i)\mathrm{tr}\left((D_{t,x_{j,l}}^{\mathfrak{Q}^{n}}\otimes\mathrm{id}^{\mathfrak{E}^{n}})U_{N_{t}}|\varkappa_{x_{j,l}}^{(i)}\rangle\langle\varkappa_{x_{j,l}}^{(i)}|^{\mathfrak{P}^{n}}U_{N_{t}}^{*}\right)$
$\displaystyle=\mathrm{tr}\left((D_{t,x_{j,l}}^{\mathfrak{Q}^{n}}\otimes\mathrm{id}^{\mathfrak{E}^{n}})U_{N_{t}}(\sum_{i=1}^{n}{p^{\prime}}_{j,l}(i)|\varkappa_{x_{j,l}}^{(i)}\rangle\langle\varkappa_{x_{j,l}}^{(i)}|^{\mathfrak{P}^{n}})U_{N_{t}}^{*}\right)$
$\displaystyle\geq 1-2^{-n\beta}\text{ .}$
Thus, for all $i$ such that
${p^{\prime}}_{j,l}(i)\geq\frac{2^{-n\beta}}{1-2^{-n\beta}}$ it must hold
$\mathrm{tr}\left((D_{t,x_{j,l}}^{\mathfrak{Q}^{n}}\otimes\mathrm{id}^{\mathfrak{E}^{n}})U_{N_{t}}|\varkappa_{x_{j,l}}^{(i)}\rangle\langle\varkappa_{x_{j,l}}^{(i)}|^{\mathfrak{P}^{n}}U_{N_{t}}^{*}\right)\geq
1-2^{-n\beta}\text{ .}$
If $n$ is large enough, then there is at least one
$i_{l,j}\in\\{1,\cdots,n\\}$ such that
${p^{\prime}}_{j,l}(i_{l,j})\geq\frac{2^{-n\beta}}{1-2^{-n\beta}}$. By Theorem
2, there is a $\xi_{t}^{\mathfrak{E}^{n}}$ on $H^{\mathfrak{E}^{n}}$, such
that
$\|\frac{1}{L_{n}}\sum_{l=1}^{L_{n}}\mathrm{tr}_{\mathfrak{Q}^{n}}\left(U_{N_{t}}|\varkappa_{x_{j,l}}^{(i_{l,j})}\rangle\langle\varkappa_{x_{j,l}}^{(i_{l,j})}|^{\mathfrak{P}^{n}}U_{N_{t}}^{*}\right)-\xi_{t}^{\mathfrak{E}^{n}}\|_{1}<\epsilon\text{
.}$
Thus,
$\left(\\{|\varkappa_{x_{j,l}}^{(i_{l,j})}\rangle\langle\varkappa_{x_{j,l}}^{(i_{l,j})}|^{\mathfrak{P}^{n}}:j,l\\},\\{D_{t,x_{j,l}}^{\mathfrak{Q}^{n}}:j,l,t\\}\right)$
is a code with the same security rate as
$\left(\\{\rho_{x_{j,l}}^{\mathfrak{P}^{n}}:j,l\\},\\{D_{t,x_{j,l}}^{\mathfrak{Q}^{n}}:j,l,t\\}\right)\text{
.}$
Hence we may assume that $\rho_{x_{j,l}}^{\mathfrak{P}^{n}}$ is a pure quantum
state.
Assume
$\rho_{x_{j,l}}^{\mathfrak{P}^{n}}=|\varkappa_{j,l}\rangle\langle\varkappa_{j,l}|^{\mathfrak{P}^{n}}$.
Let $H^{\mathfrak{M}}$ be a $J_{n}$-dimensional Hilbert space with an
orthonormal basis $\\{|j\rangle^{\mathfrak{M}}:j=1,\cdots,J_{n}\\}$,
$H^{\mathfrak{L}}$ be an $L_{n}$-dimensional Hilbert space with an orthonormal basis $\\{|l\rangle^{\mathfrak{L}}:l=1,\cdots,L_{n}\\}$, and $H^{\theta}$ be a $|\theta|$-dimensional Hilbert space with an orthonormal basis
$\\{|t\rangle^{\theta}:t\in\theta\\}$. Let
$|0\rangle^{\mathfrak{M}}|0\rangle^{\mathfrak{L}}|0\rangle^{\theta}$ be the
ancillas on $H^{\mathfrak{M}}$, $H^{\mathfrak{L}}$, and $H^{\theta}$,
respectively, that the receiver adds. We can (cf. [22]) define a unitary
matrix $V^{\mathfrak{Q}^{n}\mathfrak{ML}\theta}$ on
$H^{\mathfrak{Q}^{n}\mathfrak{ML}\theta}$ such that for any given quantum
state $\rho^{\mathfrak{Q}^{n}}\in\mathcal{S}(H^{\mathfrak{Q}^{n}})$ we have
$\displaystyle
V^{\mathfrak{Q}^{n}\mathfrak{ML}\theta}\biggl{(}\rho^{\mathfrak{Q}^{n}}\otimes|0\rangle\langle
0|^{\mathfrak{M}}\otimes|0\rangle\langle
0|^{\mathfrak{L}}\otimes|0\rangle\langle
0|^{\theta}\biggr{)}(V^{\mathfrak{Q}^{n}\mathfrak{ML}\theta})^{*}$
$\displaystyle=\sum_{t}\sum_{j}\sum_{l}\left(D_{t,x_{j,l}}^{\mathfrak{Q}^{n}}\rho^{\mathfrak{Q}^{n}}\right)\otimes|j\rangle\langle
j|^{\mathfrak{\mathfrak{M}}}|l\rangle\langle l|^{\mathfrak{L}}|t\rangle\langle
t|^{\theta}\text{ .}$
We denote
$\displaystyle{\psi}_{j,l,t}^{\mathfrak{Q}^{n}\mathfrak{E}^{n}\mathfrak{ML}\theta}$
$\displaystyle:=\left(\mathrm{id}^{\mathfrak{E}^{n}}\otimes
V^{\mathfrak{Q}^{n}\mathfrak{ML}\theta}\right)\left(U_{N_{t}}\otimes\mathrm{id}^{\mathfrak{M}\mathfrak{L}\theta}\right)\Bigl{[}|\varkappa_{j,l}\rangle\langle\varkappa_{j,l}|^{\mathfrak{P}^{n}}$
$\displaystyle\otimes|0\rangle\langle 0|^{\mathfrak{M}}\otimes|0\rangle\langle
0|^{\mathfrak{L}}\otimes|0\rangle\langle
0|^{\theta}\Bigr{]}\left(U_{N_{t}}\otimes\mathrm{id}^{\mathfrak{M}\mathfrak{L}\theta}\right)^{*}$
$\displaystyle\left(\mathrm{id}^{\mathfrak{E}^{n}}\otimes
V^{\mathfrak{Q}^{n}\mathfrak{ML}\theta}\right)^{*}\text{ ,}$
in view of (75), we have
$\displaystyle
F\left(\mathrm{tr}_{\mathfrak{Q}^{n}\mathfrak{E}^{n}}\left({\psi}_{j,l,t}^{\mathfrak{Q}^{n}\mathfrak{E}^{n}\mathfrak{M}\mathfrak{L}\theta}\right),|j\rangle\langle
j|^{\mathfrak{M}}\otimes|l\rangle\langle
l|^{\mathfrak{L}}\otimes|t\rangle\langle t|^{\theta}\right)$
$\displaystyle\geq 1-\epsilon\text{ .}$ (77)
By Uhlmann’s theorem (cf. e.g. [30]) we can find a
$|\zeta_{j,l,t}\rangle^{\mathfrak{Q}^{n}\mathfrak{E}^{n}}$ on
$H^{\mathfrak{Q}^{n}\mathfrak{E}^{n}}$, such that
$\displaystyle\langle 0|^{\theta}\langle 0|^{\mathfrak{L}}\langle
0|^{\mathfrak{M}}\langle\varkappa_{j,l}|^{\mathfrak{P}^{n}}\left(U_{N_{t}}\otimes\mathrm{id}^{\mathfrak{M}\mathfrak{L}\theta}\right)^{*}$
$\displaystyle\left(\mathrm{id}^{\mathfrak{E}^{n}}\otimes
V^{\mathfrak{Q}^{n}\mathfrak{ML}\theta}\right)^{*}|\zeta_{j,l,t}\rangle^{\mathfrak{Q}^{n}\mathfrak{E}^{n}}|j\rangle^{\mathfrak{M}}|l\rangle^{\mathfrak{L}}|t\rangle^{\theta}$
$\displaystyle=F\biggl{(}{\psi}_{j,l,t}^{\mathfrak{Q}^{n}\mathfrak{E}^{n}\mathfrak{M}\mathfrak{L}\theta},|\zeta_{j,l,t}\rangle\langle\zeta_{j,l,t}|^{\mathfrak{Q}^{n}\mathfrak{E}^{n}}$
$\displaystyle\otimes|j\rangle\langle j|^{\mathfrak{M}}\otimes|l\rangle\langle
l|^{\mathfrak{L}}\otimes|t\rangle\langle t|^{\theta}\biggr{)}$
$\displaystyle\geq 1-\epsilon\text{ .}$ (78)
2) Building the second part of the decoding operator
We define
$|a_{j,l}\rangle^{\mathfrak{P}^{n}\mathfrak{M}\mathfrak{L}\theta}:=|\varkappa_{j,l}\rangle^{\mathfrak{P}^{n}}|0\rangle^{\mathfrak{M}}|0\rangle^{\mathfrak{L}}|0\rangle^{\theta}\text{
,}$
and
$\displaystyle|b_{j,l,t}\rangle^{\mathfrak{P}^{n}\mathfrak{M}\mathfrak{L}\theta}:=\left(U_{N_{t}}\otimes\mathrm{id}^{\mathfrak{M}\mathfrak{L}\theta}\right)^{*}\left(\mathrm{id}^{\mathfrak{E}^{n}}\otimes
V^{\mathfrak{Q}^{n}\mathfrak{ML}\theta}\right)^{*}$
$\displaystyle|\zeta_{j,l,t}\rangle^{\mathfrak{Q}^{n}\mathfrak{E}^{n}}|j\rangle^{\mathfrak{M}}|l\rangle^{\mathfrak{L}}|t\rangle^{\theta}\text{
.}$
For every $j$, $l$, and $t$, we have $\langle
a_{j,l}|b_{j,l,t}\rangle^{\mathfrak{P}^{n}\mathfrak{M}\mathfrak{L}\theta}\geq
1-\epsilon$.
We define
$|\hat{a}_{j,k}\rangle^{\mathfrak{P}^{n}\mathfrak{M}\mathfrak{L}\theta}:=\frac{1}{\sqrt{L_{n}}}\sum_{l=1}^{L_{n}}e^{2\pi
il\frac{k}{L_{n}}}|a_{j,l}\rangle^{\mathfrak{P}^{n}\mathfrak{M}\mathfrak{L}\theta}\text{
,}$
$|\hat{b}_{j,k,t}\rangle^{\mathfrak{P}^{n}\mathfrak{M}\mathfrak{L}\theta}:=\frac{1}{\sqrt{L_{n}}}\sum_{l=1}^{L_{n}}e^{2\pi
il\frac{k}{L_{n}}}|b_{j,l,t}\rangle^{\mathfrak{P}^{n}\mathfrak{M}\mathfrak{L}\theta}\text{
,}$
and
$|\overline{b}_{j,k}\rangle^{\mathfrak{P}^{n}\mathfrak{M}\mathfrak{L}\theta}:=\frac{1}{|\theta|}\sum_{t=1}^{|\theta|}|\hat{b}_{j,k,t}\rangle^{\mathfrak{P}^{n}\mathfrak{M}\mathfrak{L}\theta}\text{
.}$
For every $j\in\\{1,\cdots,J_{n}\\}$, by (78) it holds
$\displaystyle\frac{1}{L_{n}}\sum_{k=1}^{L_{n}}\langle\hat{a}_{j,k}|\overline{b}_{j,k}\rangle^{\mathfrak{P}^{n}\mathfrak{M}\mathfrak{L}\theta}$
$\displaystyle=\frac{1}{|\theta|}\frac{1}{L_{n}}\sum_{t=1}^{|\theta|}\sum_{k=1}^{L_{n}}\langle\hat{a}_{j,k}|\hat{b}_{j,k,t}\rangle^{\mathfrak{P}^{n}\mathfrak{M}\mathfrak{L}\theta}$
$\displaystyle=\frac{1}{|\theta|}\frac{1}{L_{n}}\sum_{t=1}^{|\theta|}\sum_{l=1}^{L_{n}}\langle
a_{j,l}|b_{j,l,t}\rangle^{\mathfrak{P}^{n}\mathfrak{M}\mathfrak{L}\theta}$
$\displaystyle\geq 1-\epsilon\text{ .}$ (79)
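The second equality in (79) is just the unitarity of the discrete Fourier transform: averaging the overlaps of the Fourier-weighted superpositions over $k$ reproduces the average of the original overlaps over $l$. A minimal numerical sketch with random vectors (the vectors are hypothetical stand-ins for $|a_{j,l}\rangle$ and $|b_{j,l,t}\rangle$, not the states constructed in the proof):

```python
import numpy as np

rng = np.random.default_rng(0)
L, dim = 8, 5

# Random vectors standing in for |a_{j,l}> and |b_{j,l,t}> (illustration only).
a = rng.normal(size=(L, dim)) + 1j * rng.normal(size=(L, dim))
b = rng.normal(size=(L, dim)) + 1j * rng.normal(size=(L, dim))

# Fourier-weighted superpositions |a_hat_k> = L^{-1/2} sum_l e^{2 pi i l k / L} |a_l>.
l_idx = np.arange(1, L + 1)
M = np.exp(2j * np.pi * np.outer(l_idx, l_idx) / L)   # M[l, k] = e^{2 pi i l k / L}
a_hat = (M.T @ a) / np.sqrt(L)
b_hat = (M.T @ b) / np.sqrt(L)

# Averaging the overlaps over k equals the average of the original overlaps over l.
lhs = np.mean([np.vdot(a_hat[k], b_hat[k]) for k in range(L)])
rhs = np.mean([np.vdot(a[l], b[l]) for l in range(L)])
assert np.allclose(lhs, rhs)
```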
Hence, for every $j$, there is at least one $k_{j}\in\\{1,\cdots,L_{n}\\}$ such that
$\displaystyle 1-\epsilon$ $\displaystyle\leq
e^{is_{k_{j}}}\langle\hat{a}_{j,k_{j}}|\overline{b}_{j,k_{j}}\rangle^{\mathfrak{P}^{n}\mathfrak{M}\mathfrak{L}\theta}$
$\displaystyle=\frac{1}{|\theta|}\sum_{t=1}^{|\theta|}e^{is_{k_{j}}}\langle\hat{a}_{j,k_{j}}|\hat{b}_{j,k_{j},t}\rangle^{\mathfrak{P}^{n}\mathfrak{M}\mathfrak{L}\theta}\text{
,}$
for a suitable phase $s_{k_{j}}$. Since for all $t$ it holds
$\left|e^{is_{k_{j}}}\langle\hat{a}_{j,k_{j}}|\hat{b}_{j,k_{j},t}\rangle^{\mathfrak{P}^{n}\mathfrak{M}\mathfrak{L}\theta}\right|\leq
1$, we have
$\min_{t\in\theta}\left|e^{is_{k_{j}}}\langle\hat{a}_{j,k_{j}}|\hat{b}_{j,k_{j},t}\rangle^{\mathfrak{P}^{n}\mathfrak{M}\mathfrak{L}\theta}\right|\geq
1-|\theta|\epsilon\text{ .}$
Therefore, there is a suitable phase $r_{k_{j}}$ such that for all
$t\in\theta$,
$\displaystyle 1-|\theta|\epsilon$
$\displaystyle\leq\left|e^{is_{k_{j}}}\langle\hat{a}_{j,k_{j}}|\hat{b}_{j,k_{j},t}\rangle^{\mathfrak{P}^{n}\mathfrak{M}\mathfrak{L}\theta}\right|$
$\displaystyle=e^{ir_{k_{j}}}\langle\hat{a}_{j,k_{j}}|\hat{b}_{j,k_{j},t}\rangle^{\mathfrak{P}^{n}\mathfrak{M}\mathfrak{L}\theta}$
$\displaystyle=e^{ir_{k_{j}}}\frac{1}{L_{n}}\left(\sum_{l=1}^{L_{n}}e^{2\pi
il\frac{k_{j}}{L_{n}}}\langle
a_{j,l}|^{\mathfrak{P}^{n}\mathfrak{M}\mathfrak{L}\theta}\right)$
$\displaystyle\left(\sum_{l=1}^{L_{n}}e^{2\pi
il\frac{k_{j}}{L_{n}}}|b_{j,l,t}\rangle^{\mathfrak{P}^{n}\mathfrak{M}\mathfrak{L}\theta}\right)\text{
.}$ (80)
For every $t\in\theta$, we set
$|\varpi_{j,t}\rangle^{\mathfrak{Q}^{n}\mathfrak{E}^{n}\mathfrak{L}}:=\sqrt{\frac{1}{L_{n}}}\sum_{l=1}^{L_{n}}e^{2\pi
i(l\frac{k_{j}}{L_{n}}+r_{k_{j}})}|\zeta_{j,l,t}\rangle^{\mathfrak{Q}^{n}\mathfrak{E}^{n}}\otimes|l\rangle^{\mathfrak{L}}$
and
$\displaystyle|\vartheta_{j,t}\rangle^{\mathfrak{Q}^{n}\mathfrak{E}^{n}\mathfrak{M}\mathfrak{L}\theta}:=\sqrt{\frac{1}{L_{n}}}\sum_{l=1}^{L_{n}}e^{2\pi
il\frac{k_{j}}{L_{n}}}\left[\mathrm{id}^{\mathfrak{E}^{n}}\otimes
V^{\mathfrak{Q}^{n}\mathfrak{M}\mathfrak{L}\theta}\right]$
$\displaystyle(U_{N_{t}}^{\otimes n}|\varkappa_{j,l}\rangle^{\mathfrak{P}^{n}})|0\rangle^{\mathfrak{M}}|0\rangle^{\mathfrak{L}}|0\rangle^{\theta}\text{
.}$
For all $t\in\theta$ and $j\in\\{1,\cdots J_{n}\\}$ it holds by (80)
$\displaystyle
F\biggl{(}|\vartheta_{j,t}\rangle\langle\vartheta_{j,t}|^{\mathfrak{Q}^{n}\mathfrak{E}^{n}\mathfrak{M}\mathfrak{L}\theta},$
$\displaystyle|\varpi_{j,t}\rangle\langle\varpi_{j,t}|^{\mathfrak{Q}^{n}\mathfrak{E}^{n}\mathfrak{L}}\otimes|j\rangle\langle
j|^{\mathfrak{M}}\otimes|t\rangle\langle t|^{\theta}\biggr{)}$
$\displaystyle=\left|\langle\vartheta_{j,t}|^{\mathfrak{Q}^{n}\mathfrak{E}^{n}\mathfrak{M}\mathfrak{L}\theta}|\varpi_{j,t}\rangle^{\mathfrak{Q}^{n}\mathfrak{E}^{n}\mathfrak{L}}|j\rangle^{M}|t\rangle^{\theta}\right|$
$\displaystyle=\frac{1}{L_{n}}\left(\sum_{l=1}^{L_{n}}e^{2\pi
il\frac{k_{j}}{L_{n}}}\langle
a_{j,l}|^{\mathfrak{P}^{n}\mathfrak{ML}\theta}\right)$
$\displaystyle\left(\sum_{l=1}^{L_{n}}e^{2\pi
il\frac{k_{j}}{L_{n}}}e^{ir_{{k}_{j}}}|b_{j,l,t}\rangle^{\mathfrak{P}^{n}\mathfrak{ML}\theta}\right)$
$\displaystyle\geq 1-|\theta|\epsilon\text{ .}$ (81)
Furthermore, since (76) holds there is a quantum state
$\xi_{t}^{\mathfrak{E}^{n}}$, which does not depend on $j$ and $l$, on
$H^{\mathfrak{E}^{n}}$ such that
$\left\|\xi_{t}^{\mathfrak{E}^{n}}-\mathrm{tr}_{\mathfrak{Q}^{n}}\left(U_{N_{t}}|\varkappa_{j,l}\rangle\langle\varkappa_{j,l}|^{\mathfrak{P}^{n}}U_{N_{t}}^{*}\right)\right\|_{1}\leq\epsilon\text{
.}$ (82)
By monotonicity of fidelity, for any $l\in\\{1,\cdots,L_{n}\\}$
$\displaystyle\left\|\mathrm{tr}_{\mathfrak{Q}^{n}}\left(U_{N_{t}}|\varkappa_{j,l}\rangle\langle\varkappa_{j,l}|^{\mathfrak{P}^{n}}U_{N_{t}}^{*}\right)-\mathrm{tr}_{\mathfrak{Q}^{n}}\left(|\zeta_{j,l,t}\rangle\langle\zeta_{j,l,t}|^{\mathfrak{Q}^{n}\mathfrak{E}^{n}}\right)\right\|_{1}$
$\displaystyle\leq
2\biggl{[}1-F\biggl{(}\mathrm{tr}_{\mathfrak{Q}^{n}}\left(U_{N_{t}}|\varkappa_{j,l}\rangle\langle\varkappa_{j,l}|^{\mathfrak{P}^{n}}U_{N_{t}}^{*}\right),$
$\displaystyle\mathrm{tr}_{\mathfrak{Q}^{n}}\left(|\zeta_{j,l,t}\rangle\langle\zeta_{j,l,t}|^{\mathfrak{Q}^{n}\mathfrak{E}^{n}}\right)\biggr{)}\biggr{]}^{\frac{1}{2}}$
$\displaystyle\leq
2\biggl{[}1-F\biggl{(}{\psi}_{j,l,t}^{\mathfrak{Q}^{n}\mathfrak{E}^{n}\mathfrak{M}\mathfrak{L}\theta},|\zeta_{j,l,t}\rangle\langle\zeta_{j,l,t}|^{\mathfrak{Q}^{n}\mathfrak{E}^{n}}$
$\displaystyle\otimes|j\rangle\langle j|^{\mathfrak{M}}\otimes|l\rangle\langle
l|^{\mathfrak{L}}\otimes|t\rangle\langle
t|^{\theta}\biggr{)}\biggr{]}^{\frac{1}{2}}$ $\displaystyle\leq
2\sqrt{\epsilon}\text{ ,}$ (83)
the first inequality holds because for two quantum states $\varrho$ and
$\eta$, we have
$\frac{1}{2}\|\varrho-\eta\|_{1}\leq\sqrt{1-F(\varrho,\eta)^{2}}$.
By (82) and (83)
$\displaystyle\left\|\mathrm{tr}_{\mathfrak{Q}^{n}\mathfrak{L}}\left(|\varpi_{j,t}\rangle\langle\varpi_{j,t}|^{\mathfrak{Q}^{n}\mathfrak{E}^{n}\mathfrak{L}}\right)-\xi_{t}^{\mathfrak{E}^{n}}\right\|_{1}$
$\displaystyle=\left\|\frac{1}{L_{n}}\sum_{l=1}^{L_{n}}\mathrm{tr}_{\mathfrak{Q}^{n}}\left(|\zeta_{j,l,t}\rangle\langle\zeta_{j,l,t}|^{\mathfrak{Q}^{n}\mathfrak{E}^{n}}\right)-\xi_{t}^{\mathfrak{E}^{n}}\right\|_{1}$
$\displaystyle\leq\frac{1}{L_{n}}\sum_{l=1}^{L_{n}}\biggl{\|}\mathrm{tr}_{\mathfrak{Q}^{n}}\left(U_{N_{t}}|\varkappa_{j,l}\rangle\langle\varkappa_{j,l}|^{\mathfrak{P}^{n}}U_{N_{t}}^{*}\right)$
$\displaystyle-\mathrm{tr}_{\mathfrak{Q}^{n}}\left(|\zeta_{j,l,t}\rangle\langle\zeta_{j,l,t}|^{\mathfrak{Q}^{n}\mathfrak{E}^{n}}\right)\biggr{\|}_{1}$
$\displaystyle+\left\|\xi_{t}^{\mathfrak{E}^{n}}-\mathrm{tr}_{\mathfrak{Q}^{n}}\left(U_{N_{t}}|\varkappa_{j,l}\rangle\langle\varkappa_{j,l}|^{\mathfrak{P}^{n}}U_{N_{t}}^{*}\right)\right\|_{1}$
$\displaystyle\leq 2\sqrt{\epsilon}+\epsilon\text{ ,}$ (84)
holds for all $t\in\theta$ and $j\in\\{1,\cdots,J_{n}\\}$.
In [26] (cf. also [13]) it was shown that when (84) holds, for every
$t\in\theta$ we can find a unitary operator
$U^{\mathfrak{Q}^{n}\mathfrak{ML}}_{(t)}$ such that if we set
$\displaystyle\chi^{\mathfrak{Q}^{n}\mathfrak{E}^{n}\mathfrak{ML}}_{j,t}:=\left(U^{\mathfrak{Q}^{n}\mathfrak{ML}}_{(t)}\otimes\mathrm{id}^{\mathfrak{E}^{n}}\right)$
$\displaystyle\left(|\varpi_{j,t}\rangle\langle\varpi_{j,t}|^{\mathfrak{Q}^{n}\mathfrak{E}^{n}\mathfrak{L}}\otimes|j\rangle\langle
j|^{\mathfrak{M}}\right)\left(U^{\mathfrak{Q}^{n}\mathfrak{ML}}_{(t)}\otimes\mathrm{id}^{\mathfrak{E}^{n}}\right)^{*}\text{
,}$
then
$F\left(|\xi_{t}\rangle\langle\xi_{t}|^{\mathfrak{Q}^{n}\mathfrak{E}^{n}\mathfrak{L}}\otimes|j\rangle\langle
j|^{\mathfrak{M}},\chi^{\mathfrak{Q}^{n}\mathfrak{E}^{n}\mathfrak{ML}}_{j,t}\right)\geq
1-4\epsilon-4\sqrt{\epsilon}\text{ ,}$ (85)
where $|\xi_{t}\rangle^{\mathfrak{Q}^{n}\mathfrak{E}^{n}\mathfrak{L}}$ is
chosen so that
$|\xi_{t}\rangle\langle\xi_{t}|^{\mathfrak{Q}^{n}\mathfrak{E}^{n}\mathfrak{L}}$
is a purification of $\xi_{t}^{\mathfrak{E}^{n}}$ on
$H^{\mathfrak{Q}^{n}\mathfrak{E}^{n}\mathfrak{L}}$.
3) Defining the code
We can now define our entanglement generating code. Let $t^{\prime}$ be
arbitrary in $\theta$. The sender prepares the quantum state
$\displaystyle\frac{1}{J_{n}}\frac{1}{L_{n}}\left(\sum_{j=1}^{J_{n}}\sum_{l=1}^{L_{n}}e^{2\pi
il\frac{k_{j}}{L_{n}}}|\varkappa_{j,l}\rangle^{\mathfrak{P}^{n}}|j\rangle^{\mathfrak{A}}\right)$
$\displaystyle\left(\sum_{j=1}^{J_{n}}\sum_{l=1}^{L_{n}}e^{2\pi
il\frac{k_{j}}{L_{n}}}\langle
j|^{\mathfrak{A}}\langle\varkappa_{j,l}|^{\mathfrak{P}^{n}}\right)\text{ ,}$
(86)
keeps the system $\mathfrak{A}$, and sends the system $\mathfrak{P}^{n}$
through the channel $N_{t^{\prime}}^{\otimes n}$, i.e., the resulting quantum
state is
$\displaystyle\frac{1}{J_{n}}\frac{1}{L_{n}}\left(\mathrm{id}^{\mathfrak{A}}\otimes
U_{N_{t^{\prime}}}^{\otimes
n}\right)\biggl{[}\left(\sum_{j=1}^{J_{n}}\sum_{l=1}^{L_{n}}e^{2\pi
il\frac{k_{j}}{L_{n}}}|j\rangle^{A}|\varkappa_{j,l}\rangle^{\mathfrak{P}^{n}}\right)$
$\displaystyle\left(\sum_{j=1}^{J_{n}}\sum_{l=1}^{L_{n}}e^{2\pi
il\frac{k_{j}}{L_{n}}}\langle\varkappa_{j,l}|^{\mathfrak{P}^{n}}\langle
j|^{\mathfrak{A}}\right)\biggr{]}\left(\mathrm{id}^{\mathfrak{A}}\otimes
U_{N_{t^{\prime}}}^{\otimes n}\right)^{*}$
$\displaystyle=\frac{1}{J_{n}}\frac{1}{L_{n}}\left[\sum_{j=1}^{J_{n}}|j\rangle^{\mathfrak{A}}\left(\sum_{l=1}^{L_{n}}e^{2\pi
il\frac{k_{j}}{L_{n}}}U_{N_{t^{\prime}}}^{\otimes
n}|\varkappa_{j,l}\rangle^{\mathfrak{P}^{n}}\right)\right]$
$\displaystyle\left[\sum_{j=1}^{J_{n}}\left(\sum_{l=1}^{L_{n}}e^{2\pi
il\frac{k_{j}}{L_{n}}}\langle\varkappa_{j,l}|^{\mathfrak{P}^{n}}(U_{N_{t^{\prime}}}^{\otimes
n})^{*}\right)\langle j|^{\mathfrak{A}}\right]\text{ .}$
The receiver subsequently applies the decoding operator
$\displaystyle\tau^{\mathfrak{Q}^{n}}\rightarrow\mathrm{tr}_{\mathfrak{Q}^{n}\mathfrak{L}\theta}\biggl{[}\left(\sum_{t\in\theta}U^{\mathfrak{Q}^{n}\mathfrak{ML}}_{(t)}\otimes|t\rangle\langle
t|^{\theta}\right)V^{\mathfrak{Q}^{n}\mathfrak{ML}\theta}$
$\displaystyle\left(\tau^{\mathfrak{Q}^{n}}\otimes|0\rangle\langle
0|^{\mathfrak{M}}\otimes|0\rangle\langle
0|^{\mathfrak{L}}\otimes|0\rangle\langle 0|^{\theta}\right)$
$\displaystyle{V^{\mathfrak{Q}^{n}\mathfrak{M}\mathfrak{L}\theta}}^{*}\left(\sum_{t\in\theta}U^{\mathfrak{Q}^{n}\mathfrak{M}\mathfrak{L}}_{(t)}\otimes|t\rangle\langle
t|^{\theta}\right)^{*}\biggr{]}\text{ ,}$ (87)
to his outcome.
3.1) The resulting quantum state after performing the decoding operator
We define
$\displaystyle\iota^{\mathfrak{A}\mathfrak{Q}^{n}\mathfrak{E}^{n}\mathfrak{M}\mathfrak{L}\theta}_{t^{\prime}}$
$\displaystyle:=\left(\sum_{t\in\theta}U^{\mathfrak{Q}^{n}\mathfrak{M}\mathfrak{L}}_{(t)}\otimes\mathrm{id}^{\mathfrak{A}\mathfrak{E}^{n}}\otimes|t\rangle\langle
t|^{\theta}\right)(V^{\mathfrak{Q}^{n}\mathfrak{M}\mathfrak{L}\theta}\otimes\mathrm{id}^{\mathfrak{A}\mathfrak{E}^{n}})$
$\displaystyle\Biggl{(}\frac{1}{J_{n}}\frac{1}{L_{n}}\left[\sum_{j=1}^{J_{n}}|j\rangle^{\mathfrak{A}}\left(\sum_{l=1}^{L_{n}}e^{2\pi
il\frac{k_{j}}{L_{n}}}U_{N_{t^{\prime}}}^{\otimes
n}|\varkappa_{j,l}\rangle^{P^{n}}\right)\right]$
$\displaystyle\left[\sum_{j=1}^{J_{n}}\left(\sum_{l=1}^{L_{n}}e^{2\pi
il\frac{k_{j}}{L_{n}}}\langle\varkappa_{j,l}|^{P^{n}}(U_{N_{t^{\prime}}}^{\otimes
n})^{*}\right)\langle j|^{\mathfrak{A}}\right]$
$\displaystyle\otimes|0\rangle\langle 0|^{\mathfrak{M}}\otimes|0\rangle\langle
0|^{\mathfrak{L}}\otimes|0\rangle\langle
0|^{\theta}\biggl{)}(V^{\mathfrak{Q}^{n}\mathfrak{M}\mathfrak{L}\theta}\otimes\mathrm{id}^{\mathfrak{A}\mathfrak{E}^{n}})^{*}$
$\displaystyle\left(\sum_{t\in\theta}U^{\mathfrak{Q}^{n}\mathfrak{M}\mathfrak{L}}_{(t)}\otimes\mathrm{id}^{\mathfrak{A}\mathfrak{E}^{n}}\otimes|t\rangle\langle
t|^{\theta}\right)^{*}$
$\displaystyle=\left(\sum_{t\in\theta}U^{\mathfrak{Q}^{n}\mathfrak{M}\mathfrak{L}}_{(t)}\otimes\mathrm{id}^{\mathfrak{A}\mathfrak{E}^{n}}\otimes|t\rangle\langle
t|^{\theta}\right)$
$\displaystyle\left(\frac{1}{J_{n}}(\sum_{j=1}^{J_{n}}|j\rangle^{\mathfrak{A}}|\vartheta_{j,t^{\prime}}\rangle^{\mathfrak{Q}^{n}\mathfrak{E}^{n}\mathfrak{M}\mathfrak{L}\theta})(\sum_{j=1}^{J_{n}}\langle\vartheta_{j,t^{\prime}}|^{\mathfrak{Q}^{n}\mathfrak{E}^{n}\mathfrak{M}\mathfrak{L}\theta}\langle
j|^{\mathfrak{A}})\right)$
$\displaystyle\left(\sum_{t\in\theta}U^{\mathfrak{Q}^{n}\mathfrak{M}\mathfrak{L}}_{(t)}\otimes\mathrm{id}^{\mathfrak{A}\mathfrak{E}^{n}}\otimes|t\rangle\langle
t|^{\theta}\right)^{*}\text{ ,}$ (88)
then the resulting quantum state after performing the decoding operator is
$\mathrm{tr}_{\mathfrak{Q}^{n}\mathfrak{E}^{n}\mathfrak{L}\theta}(\iota^{\mathfrak{A}\mathfrak{Q}^{n}\mathfrak{E}^{n}\mathfrak{M}\mathfrak{L}\theta}_{t^{\prime}})$.
3.2) The fidelity of
$\frac{1}{J_{n}}\sum_{j=1}^{J_{n}}\chi^{\mathfrak{Q}^{n}\mathfrak{E}^{n}\mathfrak{M}\mathfrak{L}}_{j,t^{\prime}}\otimes|j\rangle\langle
j|^{\mathfrak{A}}\otimes|t^{\prime}\rangle\langle t^{\prime}|^{\theta}$ and
the actual quantum state
Because
$\displaystyle\left(\sum_{t\in\theta}U^{\mathfrak{Q}^{n}\mathfrak{M}\mathfrak{L}}_{(t)}\otimes\mathrm{id}^{\mathfrak{A}\mathfrak{E}^{n}}\otimes|t\rangle\langle
t|^{\theta}\right)$
$\displaystyle\left(\sum_{t\in\theta}U^{\mathfrak{Q}^{n}\mathfrak{M}\mathfrak{L}}_{(t)}\otimes\mathrm{id}^{\mathfrak{E}^{n}}\otimes|t\rangle\langle
t|^{\theta}\right)^{*}$
$\displaystyle=\mathrm{id}^{\mathfrak{A}\mathfrak{E}^{n}}\otimes\sum_{t\in\theta}U^{\mathfrak{Q}^{n}\mathfrak{M}\mathfrak{L}}_{(t)}(U^{\mathfrak{Q}^{n}\mathfrak{M}\mathfrak{L}}_{(t)})^{*}\otimes|t\rangle\langle
t|^{\theta}$
$\displaystyle=\mathrm{id}^{\mathfrak{A}\mathfrak{Q}^{n}\mathfrak{E}^{n}\mathfrak{M}\mathfrak{L}\theta}\text{
,}$
$\sum_{t\in\theta}U^{\mathfrak{Q}^{n}\mathfrak{M}\mathfrak{L}}_{(t)}\otimes\mathrm{id}^{\mathfrak{E}^{n}}\otimes|t\rangle\langle
t|^{\theta}$ is unitary.
Because of this unitarity and by (81)
$\displaystyle
F\left(\iota^{\mathfrak{A}\mathfrak{Q}^{n}\mathfrak{E}^{n}\mathfrak{M}\mathfrak{L}\theta}_{t^{\prime}},\frac{1}{J_{n}}\sum_{j=1}^{J_{n}}\chi^{\mathfrak{Q}^{n}\mathfrak{E}^{n}\mathfrak{M}\mathfrak{L}}_{j,t^{\prime}}\otimes|j\rangle\langle
j|^{\mathfrak{A}}\otimes|t^{\prime}\rangle\langle t^{\prime}|^{\theta}\right)$
$\displaystyle=F\Biggl{(}\frac{1}{J_{n}}(\sum_{j=1}^{J_{n}}|j\rangle^{\mathfrak{A}}|\vartheta_{j,t^{\prime}}\rangle^{\mathfrak{Q}^{n}\mathfrak{E}^{n}\mathfrak{M}\mathfrak{L}\theta})(\sum_{j=1}^{J_{n}}\langle\vartheta_{j,t^{\prime}}|^{\mathfrak{Q}^{n}\mathfrak{E}^{n}\mathfrak{M}\mathfrak{L}\theta}\langle
j|^{\mathfrak{A}}),\allowdisplaybreaks$
$\displaystyle\frac{1}{J_{n}}(\sum_{j=1}^{J_{n}}|\varpi_{j,t^{\prime}}\rangle^{\mathfrak{Q}^{n}\mathfrak{E}^{n}\mathfrak{L}}\otimes|j\rangle^{\mathfrak{A}}\otimes|j\rangle^{\mathfrak{M}})\allowdisplaybreaks$
$\displaystyle(\sum_{j=1}^{J_{n}}\langle j|^{\mathfrak{M}}\otimes\langle
j|^{\mathfrak{A}}\otimes\langle\varpi_{j,t^{\prime}}|^{\mathfrak{Q}^{n}\mathfrak{E}^{n}\mathfrak{L}})\otimes|t^{\prime}\rangle\langle
t^{\prime}|^{\theta}\Biggr{)}\allowdisplaybreaks$
$\displaystyle=\frac{1}{J_{n}}\biggl{|}\biggl{(}\sum_{j=1}^{J_{n}}\langle\vartheta_{j,t^{\prime}}|^{\mathfrak{Q}^{n}\mathfrak{E}^{n}\mathfrak{M}\mathfrak{L}\theta}\biggr{)}\allowdisplaybreaks$
$\displaystyle\biggl{(}\sum_{j=1}^{J_{n}}|\varpi_{j,t^{\prime}}\rangle^{\mathfrak{Q}^{n}\mathfrak{E}^{n}\mathfrak{L}}\otimes|j\rangle^{\mathfrak{M}}\otimes|t^{\prime}\rangle^{\theta}\biggr{)}\biggr{|}\allowdisplaybreaks$
$\displaystyle\geq 1-|\theta|\epsilon\text{ .}$ (89)
3.3) The fidelity of
$\frac{1}{J_{n}}\sum_{j=1}^{J_{n}}\chi^{\mathfrak{Q}^{n}\mathfrak{E}^{n}\mathfrak{M}\mathfrak{L}}_{j,t^{\prime}}\otimes|j\rangle\langle
j|^{\mathfrak{A}}\otimes|t^{\prime}\rangle\langle t^{\prime}|^{\theta}$ and
the standard maximally entangled state
By (85) we have
$\displaystyle
F\biggl{(}\frac{1}{J_{n}}\sum_{j=1}^{J_{n}}\chi^{\mathfrak{Q}^{n}\mathfrak{E}^{n}\mathfrak{M}\mathfrak{L}}_{j,t^{\prime}}\otimes|t^{\prime}\rangle\langle
t^{\prime}|^{\theta}\otimes|j\rangle\langle j|^{\mathfrak{A}},$
$\displaystyle\frac{1}{J_{n}}\sum_{j=1}^{J_{n}}|\xi_{t^{\prime}}\rangle\langle\xi_{t^{\prime}}|^{\mathfrak{Q}^{n}\mathfrak{E}^{n}\mathfrak{L}}\otimes|j\rangle\langle
j|^{\mathfrak{A}}\otimes|j\rangle\langle
j|^{\mathfrak{M}}\otimes|t^{\prime}\rangle\langle
t^{\prime}|^{\theta}\biggr{)}$ $\displaystyle\geq
1-4\epsilon-4\sqrt{\epsilon}\text{ .}$ (90)
3.4) The fidelity of the actual quantum state and the standard maximally entangled state
Since for two quantum states $\varrho$ and $\eta$, it holds
$1-F(\varrho,\eta)\leq\frac{1}{2}\|\varrho-\eta\|_{1}\leq\sqrt{1-F(\varrho,\eta)^{2}}\text{
,}$
for three quantum states $\varrho$, $\eta$, and $\upsilon$, we have
$\displaystyle F(\varrho,\eta)$ $\displaystyle\geq
1-\frac{1}{2}\|\varrho-\eta\|_{1}$ $\displaystyle\geq
1-\frac{1}{2}\|\varrho-\upsilon\|_{1}-\frac{1}{2}\|\upsilon-\eta\|_{1}$
$\displaystyle\geq
1-\sqrt{1-F(\varrho,\upsilon)^{2}}-\sqrt{1-F(\upsilon,\eta)^{2}}\text{ .}$
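These are the Fuchs-van de Graaf relations between fidelity and trace distance; a minimal numerical sketch for two random qubit states (illustration only, assuming the fidelity convention $F(\varrho,\eta)=\mathrm{tr}\sqrt{\sqrt{\varrho}\eta\sqrt{\varrho}}$):

```python
import numpy as np
from scipy.linalg import sqrtm

rng = np.random.default_rng(1)

def random_state(d: int) -> np.ndarray:
    """Random density matrix of dimension d (Ginibre construction)."""
    g = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = g @ g.conj().T
    return rho / np.trace(rho)

def fidelity(rho: np.ndarray, eta: np.ndarray) -> float:
    s = sqrtm(rho)
    return float(np.real(np.trace(sqrtm(s @ eta @ s))))

def trace_distance(rho: np.ndarray, eta: np.ndarray) -> float:
    eigs = np.linalg.eigvalsh(rho - eta)
    return 0.5 * float(np.sum(np.abs(eigs)))

rho, eta = random_state(2), random_state(2)
F, D = fidelity(rho, eta), trace_distance(rho, eta)
# Check 1 - F <= (1/2)||rho - eta||_1 <= sqrt(1 - F^2).
assert 1 - F <= D + 1e-9 and D <= np.sqrt(1 - F**2) + 1e-9
```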
Combining (89) and (90), for all $t^{\prime}\in\theta$ we have
$\displaystyle
F\left(\mathrm{tr}_{\mathfrak{Q}^{n}\mathfrak{E}^{n}\mathfrak{L}\theta}(\iota^{\mathfrak{A}\mathfrak{Q}^{n}\mathfrak{E}^{n}\mathfrak{M}\mathfrak{L}\theta}_{t^{\prime}}),\sum_{j=1}^{J_{n}}|j\rangle\langle
j|^{\mathfrak{A}}\otimes|j\rangle\langle j|^{\mathfrak{M}}\right)$
$\displaystyle\geq
F\biggl{(}\iota^{\mathfrak{A}\mathfrak{Q}^{n}\mathfrak{E}^{n}\mathfrak{M}\mathfrak{L}\theta}_{t^{\prime}},\frac{1}{J_{n}}\sum_{j=1}^{J_{n}}|\xi_{t^{\prime}}\rangle\langle\xi_{t^{\prime}}|^{\mathfrak{Q}^{n}\mathfrak{E}^{n}\mathfrak{L}}$
$\displaystyle\otimes|j\rangle\langle j|^{\mathfrak{A}}\otimes|j\rangle\langle
j|^{\mathfrak{M}}\otimes|t^{\prime}\rangle\langle
t^{\prime}|^{\theta}\biggr{)}$ $\displaystyle\geq
1-\sqrt{2|\theta|\epsilon-|\theta|^{2}\epsilon^{2}}-\sqrt{8\sqrt{\epsilon}-16\epsilon^{2}-32\epsilon\sqrt{\epsilon}-8\epsilon}$
(91) $\displaystyle\geq
1-\sqrt{2|\theta|}\sqrt{\epsilon}-\sqrt{8}\sqrt[4]{\epsilon}\text{ .}$ (92)
This means that if $n$ is large enough, then for any positive $\delta$ and
$\epsilon$, there is an
$(n,\sqrt{2|\theta|}\sqrt{\epsilon}+\sqrt{8}\sqrt[4]{\epsilon})$ code with
rate
$\min_{t}\chi(X;Q_{t})-\max_{t}\chi(X;E_{t})-2\delta\text{ .}$
∎
###### Proposition 1
The entanglement generating capacity of $\left(N_{t}\right)_{t\in\theta}$ with
CSI at the encoder is
$A_{CSI}=\lim_{n\rightarrow\infty}\frac{1}{n}\min_{t\in\theta}\max_{\rho\in\mathcal{S}(H)^{\mathfrak{Q}^{n}}}I_{C}(\rho;{N_{t}}^{\otimes
n})\text{ .}$ (93)
###### Proof:
As the authors of [11] showed, after receiving a dummy code word as the first block, the receiver also can have CSI. Then we have the case where both the sender and the receiver have CSI. But this case is equivalent to the case where we only have one channel $(N_{t})$ instead of a family of channels $\\{(N_{t}):t=1,\cdots,|\theta|\\}$, and we may assume it is the worst channel. The number of bits that we use to detect the CSI is large but constant, so it is negligible compared to the rest. By [13], the entanglement generating
capacity of the quantum channel $N_{t}$ is
$\lim_{n\rightarrow\infty}\frac{1}{n}\max_{\rho\in\mathcal{S}(H)^{\mathfrak{Q}^{n}}}I_{C}(\rho;N_{t}^{\otimes
n})\text{ .}$
The proof of the converse is similar to the one given in the proof of Theorem 2, where we consider a worst $t^{\prime}$. ∎
###### Proposition 2
The entanglement generating capacity of $\left(N_{t}\right)_{t\in\theta}$ with
feedback is bounded as follows
$A_{feed}\geq\lim_{n\rightarrow\infty}\frac{1}{n}\min_{t\in\theta}\max_{\rho\in\mathcal{S}(H)^{\mathfrak{Q}^{n}}}I_{C}(\rho;{N_{t}}^{\otimes
n})\text{ .}$ (94)
###### Proof:
As the authors of [11] showed, the receiver can detect the channel state $t$
correctly after receiving a dummy word as the first block. Then he can send
$t$ back to the sender via feedback. ∎
###### Remark 7
For a one-way entanglement distillation protocol using secret key, see [14].
For the arbitrarily varying classical-quantum wiretap channel, which is a generalization of the compound classical-quantum wiretap channel, see [10].
## VII Further Notes
Let $\mathfrak{P}$, $\mathfrak{Q}$, $H^{\mathfrak{P}}$, and $H^{\mathfrak{Q}}$
be defined as in Section II. Let $N$ be a quantum channel
$\mathcal{S}(H^{\mathfrak{P}})\rightarrow\mathcal{S}(H^{\mathfrak{Q}})$. In
general, there are two ways to represent a quantum channel, i. e. a completely
positive trace preserving map
$\mathcal{S}(H^{\mathfrak{P}})\rightarrow\mathcal{S}(H^{\mathfrak{Q}})$, with
linear algebraic tools.
1\. Operator Sum Decomposition (Kraus Representation)
$N(\rho)=\sum_{i=1}^{K}A_{i}\rho{A_{i}}^{*}\text{ ,}$ (95)
where $A_{1},\cdots,A_{K}$ (Kraus operators) are linear operators
$\mathcal{S}(H^{\mathfrak{P}})$ $\rightarrow$ $\mathcal{S}(H^{\mathfrak{Q}})$
(cf.[18], [3], and [22]). They satisfy the completeness relation
$\sum_{i=1}^{K}{A_{i}}^{*}A_{i}=\mathrm{id}_{H^{\mathfrak{P}}}$. The
representation of a quantum channel $N$ according to (95) is not unique. Let
$A_{1},\cdots,A_{K}$ and $B_{1},\cdots,B_{K^{\prime}}$ be two sets of Kraus
operators (by appending zero operators to the shorter list of operation
elements we may ensure that $K^{\prime}=K$). Suppose $A_{1},\cdots,A_{K}$
represents $N$, then $B_{1},\cdots,B_{K}$ also represents $N$ if and only if
there exists a $K\times K$ unitary matrix
$\left(u_{i,j}\right)_{i,j=1,\cdots,K}$ such that for all $i$ we have
$A_{i}=\sum_{j=1}^{K}u_{i,j}B_{j}$ (cf. [22]).
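As a concrete illustration of (95) (a minimal sketch, not taken from the cited references), consider the amplitude damping channel on a qubit with a hypothetical damping parameter $\gamma$; the sketch checks the completeness relation and applies the operator sum decomposition:

```python
import numpy as np

gamma = 0.3  # hypothetical damping parameter of the amplitude damping channel

# Kraus operators of the amplitude damping channel.
A1 = np.array([[1.0, 0.0], [0.0, np.sqrt(1.0 - gamma)]])
A2 = np.array([[0.0, np.sqrt(gamma)], [0.0, 0.0]])
kraus = [A1, A2]

# Completeness relation: sum_i A_i^* A_i = id.
assert np.allclose(sum(A.conj().T @ A for A in kraus), np.eye(2))

def channel(rho: np.ndarray) -> np.ndarray:
    """Operator sum decomposition (95): N(rho) = sum_i A_i rho A_i^*."""
    return sum(A @ rho @ A.conj().T for A in kraus)

rho = np.array([[0.5, 0.5], [0.5, 0.5]])            # the pure state |+><+|
print(channel(rho))                                  # output density matrix
assert np.isclose(np.trace(channel(rho)), 1.0)       # trace preserving
```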
2\. Isometric Extension (Stinespring Dilation)
$N(\rho)=\mathrm{tr}_{\mathfrak{E}}\left(U_{N}\rho U_{N}^{*}\right)\text{ ,}$
(96)
where $U_{N}$ is a linear operator $\mathcal{S}(H^{\mathfrak{P}})$
$\rightarrow$ $\mathcal{S}(H^{\mathfrak{QE}})$ such that
$U_{N}^{*}U_{N}=\mathrm{id}_{H^{\mathfrak{P}}}$, and $\mathfrak{E}$ is the
quantum system of the environment (cf. [27], [3], and also [28] for a more
general Stinespring Dilation Theorem). $H^{\mathfrak{E}}$ can be chosen such
that $\dim H^{\mathfrak{E}}\leq(\dim H^{\mathfrak{P}})^{2}$. The isometric
extension of a quantum channel $N$ according to (96) is not unique either. Let
$U$ and $U^{\prime}$ be two linear operators $\mathcal{S}(H^{\mathfrak{P}})$
$\rightarrow$ $\mathcal{S}(H^{\mathfrak{QE}})$. Suppose $U$ represents $N$;
then $U^{\prime}$ also represents $N$ if and only if $U$ and $U^{\prime}$ are
unitarily equivalent.
Each of these two representations of a quantum channel can be obtained from
the other. Let $A_{1},\cdots,A_{K}$ be a set of Kraus operators which
represents $N$. Let $\\{|j\rangle^{\mathfrak{E}}:j=1,\cdots,K\\}$ be an
orthonormal system on $H^{\mathfrak{E}}$. Then
$U_{N}=\sum_{j=1}^{K}{A_{j}}\otimes|j\rangle^{\mathfrak{E}}$ is an isometric
extension which represents $N$, since
$\left(\sum_{j=1}^{K}{A_{j}}\otimes|j\rangle^{\mathfrak{E}}\right)\rho\left(\sum_{k=1}^{K}{A_{k}}\otimes|k\rangle^{\mathfrak{E}}\right)^{*}=\sum_{j=1}^{K}A_{j}\rho{A_{j}}^{*}$ and
$\left(\sum_{j=1}^{K}{A_{j}}\otimes|j\rangle^{\mathfrak{E}}\right)^{*}\left(\sum_{k=1}^{K}{A_{k}}\otimes|k\rangle^{\mathfrak{E}}\right)=\sum_{j=1}^{K}{A_{j}}^{*}A_{j}$. For the other way around, every isometric
extension $U_{N}$ that represents $N$ can be written in the form
$U_{N}=\sum_{j=1}^{K}{A_{j}}\otimes|j\rangle^{\mathfrak{E}}$, i.e. if the
sender sends $\rho$, and if the environment’s measurement gives
$|i\rangle^{\mathfrak{E}}$, the receiver’s outcome will be
$A_{i}\rho{A_{i}}^{*}$. Here $A_{1},\cdots,A_{K}$ is a set of Kraus operators
which represents $N$, and $\\{|j\rangle^{\mathfrak{E}}:j=1,\cdots,K\\}$ is an
orthonormal system on $H^{\mathfrak{E}}$.
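As a concrete illustration of the relation $U_{N}=\sum_{j=1}^{K}{A_{j}}\otimes|j\rangle^{\mathfrak{E}}$, the following short numerical sketch (our own illustration, not taken from the cited works; the amplitude-damping channel and its parameter are chosen only as an example) builds an isometric extension from a set of Kraus operators and checks that both representations give the same channel output.

```python
# Minimal sketch (assumption: qubit amplitude-damping channel as the example).
import numpy as np

gamma = 0.3  # damping parameter, illustrative only
A = [np.array([[1, 0], [0, np.sqrt(1 - gamma)]]),   # Kraus operator A_1
     np.array([[0, np.sqrt(gamma)], [0, 0]])]        # Kraus operator A_2

# Completeness relation: sum_j A_j^* A_j = id
assert np.allclose(sum(a.conj().T @ a for a in A), np.eye(2))

# Isometric extension U_N = sum_j A_j (x) |j>^E  (a 4x2 isometry here)
K = len(A)
env_basis = np.eye(K)
U = sum(np.kron(a, env_basis[:, [j]]) for j, a in enumerate(A))
assert np.allclose(U.conj().T @ U, np.eye(2))        # U^* U = id

def channel_kraus(rho):
    return sum(a @ rho @ a.conj().T for a in A)

def channel_stinespring(rho):
    big = U @ rho @ U.conj().T                       # state on Q (x) E
    big = big.reshape(2, K, 2, K)                    # indices (q, e, q', e')
    return np.trace(big, axis1=1, axis2=3)           # trace out the environment E

rho = np.array([[0.6, 0.2], [0.2, 0.4]])
assert np.allclose(channel_kraus(rho), channel_stinespring(rho))
```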
Using either of these two representations of a quantum channel, one can show
(cf. [4] and [13]) that the entanglement generating capacity of a quantum channel
$N$ is
$\mathcal{A}(N)=\lim_{n\rightarrow\infty}\frac{1}{n}\max_{\rho\in\mathcal{S}(H)^{\mathfrak{Q}^{n}}}I_{C}(\rho;{N}^{\otimes
n})\text{ .}$ (97)
The advantage of the Kraus representation is that it describes the dynamics of
the principal system without having to explicitly consider properties of the
environment, whose dynamics are often unimportant. All that we need to
consider is the receiver's system alone, which simplifies calculations. In [17], an
explicit construction of a quantum error correction code (both perfect and
approximate information recovery) with the Kraus operators is given. The main
disadvantage of the Kraus representation is that the set of Kraus operators is
not unique. The reason is that the choice of the orthonormal system
$\\{|j\rangle^{\mathfrak{E}}:j=1,\cdots,K\\}$ is not unique, and it is much
more difficult to check if two sets of Kraus operators represent the same
quantum channel than to check if two isometric extensions represent the same
quantum channel.
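One concrete way to carry out such a check, sketched below as an illustration (the bit-flip channel, the mixing unitary, and the helper function `choi` are our own example, not part of the cited works), is to compare Choi matrices: two Kraus sets represent the same channel exactly when their Choi matrices coincide.

```python
# Minimal sketch (assumption): equality of channels via their Choi matrices.
import numpy as np

def choi(kraus_ops, d):
    """Choi matrix J(N) = sum_{i,j} N(E_ij) (x) E_ij for input dimension d."""
    J = np.zeros((d * d, d * d), dtype=complex)
    for i in range(d):
        for j in range(d):
            E_ij = np.zeros((d, d)); E_ij[i, j] = 1.0
            N_Eij = sum(a @ E_ij @ a.conj().T for a in kraus_ops)
            J += np.kron(N_Eij, E_ij)
    return J

# Hypothetical example: bit-flip channel and a unitarily mixed Kraus set for it
p = 0.25
X = np.array([[0, 1], [1, 0]])
A = [np.sqrt(1 - p) * np.eye(2), np.sqrt(p) * X]
u = np.array([[1, 1], [1, -1]]) / np.sqrt(2)          # 2x2 unitary mixing matrix
B = [u[0, 0] * A[0] + u[0, 1] * A[1], u[1, 0] * A[0] + u[1, 1] * A[1]]

assert np.allclose(choi(A, 2), choi(B, 2))            # same quantum channel
```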
In the Stinespring dilation, we have a natural interpretation of the system of
the environment. From the Stinespring dilation, we can conclude that the
receiver can detect almost all quantum information if and only if the channel
releases almost no information to the environment. In [26], an alternative way
to build a quantum error correction code (both perfect and approximate
information recovery) is given using this fact. The disadvantage is that this
approach appears to be suboptimal for calculating the entanglement generating
capacity of a compound quantum channel without CSI at the encoder.
In [7], the entanglement generating capacity of the compound quantum channel
is determined using a quantum error correction code of [17], which is built
from Kraus operators. Their result is the following: the entanglement generating
capacity of a compound quantum channel $N=\left(N_{t}\right)_{t\in\theta}$ is
$\mathcal{A}(N)=\lim_{n\rightarrow\infty}\frac{1}{n}\max_{\rho\in\mathcal{S}(H)^{\mathfrak{Q}^{n}}}\min_{t\in\theta}I_{C}(\rho;{N_{t}}^{\otimes
n})\text{ .}$ (98)
This result is stronger than our result in Theorem 3. This is due to the fact
that we use for our proof a quantum error correction code of [26], which is
based upon the Stinespring dilation. If we use the Kraus operators to
represent a compound quantum channel, we have a bipartite system, and for
calculating the entanglement generating capacity of a compound quantum
channel, we can use the technique which is similar to the case of a single
quantum channel. However, if we use the Stinespring dilation to represent a
compound quantum channel, we have a tripartite system which includes the
sender, the receiver, and, in addition, the environment. Unlike in the case of
a single quantum channel, for a compound quantum channel we have to deal with
uncertainty at the environment. If the sender knows the CSI, the transmitters
can build an $(n,\epsilon)$ code for entanglement generation with rate
$\min_{t}\left[\chi(X;Q_{t})-\chi(X;E_{t})\right]-\delta$ $=$
$\min_{t\in\theta}I_{C}(\rho;{N_{t}})-\delta$ (Proposition 93) for any
positive $\delta$ and $\epsilon$. This result is optimal (cf. [7]). But if the
sender does not know the CSI, he has to build an encoding operator by
considering every possible channel state for the environment. Therefore the
maximal rate that we can achieve is
$\min_{t}\chi(X;Q_{t})-\max_{t}\chi(X;E_{t})$, but not
$\min_{t\in\theta}I_{C}(\rho;{N_{t}})$ $=$
$\min_{t}\left[\chi(X;Q_{t})-\chi(X;E_{t})\right]$. This is only a lower bound
on the entanglement generating capacity. It is unknown whether we can achieve the
stronger result (98) using the Stinespring dilation.
## Acknowledgment
Support by the Bundesministerium für Bildung und Forschung (BMBF) via grants
16BQ1050 and 16BQ1052, and by the National Natural Science Foundation of China
via grant 61271174 is gratefully acknowledged.
## References
* [1] R. Ahlswede, Elimination of correlation in random codes for arbitrarily varying channels, Z. Wahrscheinlichkeitstheorie und verw. Geb., Vol. 44, 159-185, 1978.
* [2] R. Ahlswede and A. Winter, Strong converse for identification via quantum channels, IEEE Trans. Inform. Theory, Vol. 48, No. 3, 569-579, 2002. Addendum: IEEE Trans. Inform. Theory, Vol. 49, No. 1, 346, 2003.
* [3] H. Barnum, M. A. Nielsen, and B. Schumacher, Information transmission through a noisy quantum channel, Phys. Rev. A, Vol. 57, 4153, 1998.
* [4] C. H. Bennett, P. W. Shor, J. A. Smolin, and A. V. Thapliyal, Entanglement-assisted capacity of a quantum channel and the reverse Shannon theorem. IEEE Trans. Inform. Theory, Vol. 48, 2637-2655, 2002.
* [5] I. Bjelaković and H. Boche, Classical capacities of averaged and compound quantum channels. IEEE Trans. Inform. Theory, Vol. 57, No. 7, 3360-3374, 2009.
* [6] I. Bjelaković, H. Boche, and J. Nötzel, Entanglement transmission and generation under channel uncertainty: universal quantum channel coding, Communications in Mathematical Physics, Vol. 292, No. 1, 55-97, 2009.
* [7] I. Bjelaković, H. Boche, and J. Nötzel, Entanglement transmission capacity of compound channels, Proc. of International Symposium on Information Theory ISIT 2009, Korea, 2009.
* [8] I. Bjelaković, H. Boche, and J. Sommerfeld, Capacity results for compound wiretap channels, Problems of Information Transmission, Vol. 49, No. 1, 83-111, 2011.
* [9] D. Blackwell, L. Breiman, and A. J. Thomasian, The capacity of a class of channels, Ann. Math. Stat. Vol. 30, No. 4, 1229-1241, 1959.
* [10] V. Blinovsky and M. Cai, Classical-quantum arbitrarily varying wiretap channel, H. Aydinian, F. Cicalese, and C. Deppe (Eds.), Ahlswede Festschrift, LNCS 7777, 234-246, 2013.
* [11] M. Cai and N. Cai, Channel state detecting code for compound quantum channel, preprint.
* [12] N. Cai, A. Winter, and R. W. Yeung, Quantum privacy and quantum wiretap channels, Problems of Information Transmission, Vol. 40, No. 4, 318-336, 2004.
* [13] I. Devetak, The private classical information capacity and quantum information capacity of a quantum channel, IEEE Trans. Inform. Theory, Vol. 51, No. 1, 44-55, 2005.
* [14] I. Devetak and A. Winter, Distillation of secret key and entanglement from quantum states, Proc. R. Soc. A, Vol. 461, 207-235, 2005.
* [15] A. Holevo, Statistical problems in quantum physics, Proceedings of the second Japan-USSR Symposium on Probability Theory, ser. Lecture Notes in Mathematics, G. Maruyama and J. V. Prokhorov, Eds., Vol. 330, 104-119, Springer-Verlag, Berlin, 1973.
* [16] A. Holevo, The Capacity of the Quantum Channel with General Signal States, IEEE Trans. on Inf. Theory, Vol. 44, No. 1, 269-273, 1998.
* [17] R. Klesse, Approximate quantum error correction, random codes, and quantum channel capacity, Phys. Rev. A 75, 062315, 2007.
* [18] K. Kraus, States, Effects, and Operations, Springer, Berlin, 1983.
* [19] Y. Liang, G. Kramer, H. Poor, and S. Shamai, Compound wiretap channels, EURASIP Journal on Wireless Communications and Networking, Article ID 142374, 2008.
* [20] S. Lloyd, Capacity of the noisy quantum channel, Physical Review A, Vol. 55, No. 3, 1613-1622, 1997.
* [21] V. D. Milman and G. Schechtman, Asymptotic Theory of Finite Dimensional Normed Spaces. Lecture Notes in Mathematics 1200, Springer-Verlag, corrected second printing, Berlin, UK, 2001.
* [22] M. Nielsen and I. Chuang, Quantum Computation and Quantum Information, Cambridge University Press, 2000.
* [23] V. Paulsen, Completely Bounded Maps and Operator Algebras, Cambridge Studies in Advanced Mathematics 78, Cambridge University Press, Cambridge, UK, 2002.
* [24] B. Schumacher and M. A. Nielsen, Quantum data processing and error correction, Phys. Rev. A, Vol. 54, 2629, 1996.
* [25] B. Schumacher and M. Westmoreland, Sending Classical Information via Noisy Quantum Channels, Phys. Rev. A, Vol. 56, No. 1, 131-138, 1997.
* [26] B. Schumacher and M. D. Westmoreland, Approximate quantum error correction, Quant. Inf. Proc., Vol. 1, No. 8, 5-12, 2002.
* [27] P. W. Shor, The quantum channel capacity and coherent information, lecture notes, MSRI Workshop on Quantum Computation, 2002.
* [28] W. F. Stinespring, Positive functions on C*-algebras, Proc. Amer. Math. Soc., Vol. 6, 211, 1955.
* [29] S. Watanabe, Private and quantum capacities of more capable and less noisy quantum channels, Phys. Rev., A 85, 012326, 2012.
* [30] M. Wilde, From Classical to Quantum Shannon Theory, arXiv:1106.1445, 2011.
* [31] A. Winter, Coding theorem and strong converse for quantum channels, IEEE Trans. Inform. Theory, Vol. 45, No. 7, 2481-2485, 1999.
* [32] A. D. Wyner, The wire-tap channel, Bell System Technical Journal, Vol. 54, No. 8, 1355-1387, 1975.
# Derivation of an EM algorithm for constrained and unconstrained multivariate
autoregressive state-space (MARSS) models
Elizabeth Eli Holmes (Northwest Fisheries Science Center, NOAA Fisheries,
Seattle, WA 98112, [email protected], http://faculty.washington.edu/eeholmes)
###### Abstract
This report presents an Expectation-Maximization (EM) algorithm for estimation
of the maximum-likelihood parameter values of constrained multivariate
autoregressive Gaussian state-space (MARSS) models. The MARSS model can be
written: x(t)=Bx(t-1)+u+w(t), y(t)=Zx(t)+a+v(t), where w(t) and v(t) are
multivariate normal error-terms with variance-covariance matrices Q and R
respectively. MARSS models are a class of dynamic linear model and of vector
autoregressive state-space model. Shumway and Stoffer presented an
unconstrained EM algorithm for this class of models in 1982, and a number of
researchers have presented EM algorithms for specific types of constrained
MARSS models since then. In this report, I present a general EM algorithm for
constrained MARSS models, where the constraints are on the elements within the
parameter matrices (B,u,Q,Z,a,R). The constraints take the form vec(M)=f+Dm,
where M is the parameter matrix, f is a column vector of fixed values, D is a
matrix of multipliers, and m is the column vector of estimated values. This
allows a wide variety of constrained parameter matrix forms. The presentation
is for a time-varying MARSS model, where time-variation enters through the
fixed (meaning not estimated) f(t) and D(t) matrices for each parameter. The
algorithm allows missing values in y and partially deterministic systems where
0s appear on the diagonals of Q or R.
Keywords: Time-series analysis, Kalman filter, EM algorithm, maximum-likelihood, vector autoregressive model, dynamic linear model, parameter estimation, state-space

Citation: Holmes, E. E. 2012. Derivation of an EM algorithm for constrained and unconstrained multivariate autoregressive state-space (MARSS) models.
## 1 Overview
EM algorithms extend maximum-likelihood estimation to models with hidden
states and are widely used in engineering and computer science applications.
This report presents an EM algorithm for a general class of Gaussian
constrained multivariate autoregressive state-space (MARSS) models, with a
hidden multivariate autoregressive process (state) model and a multivariate
observation model. This is an important class of time-series model used in
many different scientific fields. The reader is referred to McLachlan and
Krishnan (2008) for general background on EM algorithms and to Harvey (1989)
for a discussion of EM algorithms for time-series data. Borman (2009) has a
nice tutorial on the EM algorithm.
Before showing the derivation for the constrained case, I first show a
derivation of the EM algorithm for the unconstrained MARSS model
(“unconstrained” means that each element in the parameter matrix is estimated
and no elements are fixed or shared). This EM algorithm was published by
Shumway and Stoffer (1982), but my derivation is closer to the slightly
different presentation of Ghahramani et al. (Ghahramani and Hinton, 1996;
Roweis and Ghahramani, 1999). One difference between my presentation and all
these previous presentations, however, is that I treat the data as a random
variable throughout; this means that there are no “special” update equations
for the missing-values case. Another difference is that I present the update
equations for both stochastic initial states and fixed initial states. I then
extend the derivation to constrained MARSS models where there are fixed and
shared elements in the parameter matrices and to the case of degenerate MARSS
models where some processes in the model are deterministic rather than
stochastic. See also Wu et al. (1996) and Zuur et al. (2003) for other
examples of the EM algorithm for different classes of constrained MARSS models.
When working with MARSS models, one should be cognizant that misspecification
of the prior on the initial hidden states can have catastrophic and
difficult-to-detect effects on the parameter estimates. There is often no sign
that something is amiss with the MLE estimates output by an EM algorithm.
There has been much work on how to avoid these initial-conditions effects; see
especially the literature on vector autoregressive state-space models in
economics. The trouble often occurs when the prior on the initial
states is inconsistent with the distribution of the initial states that is
implied by the maximum-likelihood model. This often happens when the model
implies a specific covariance structure on the initial states, but since the
maximum-likelihood parameters are unknown, this covariance structure is
unknown. Using a diffuse prior does not help since your diffuse prior still
has some covariance structure (often independence is being imposed). In some
ways the EM algorithm is less sensitive to a mis-specified prior because it
uses the smoothed states conditioned on all the data. However, if the prior is
inconsistent with the model, the EM algorithm will not (cannot) find the MLEs.
It is very possible, however, that it will find parameter estimates that are
closer to what you intend (estimates uninfluenced by the prior), but they will
not be MLEs. The derivation presented here allows one to circumvent these
problems by treating the initial states as fixed (and estimated) parameters.
The problematic initial state variance-covariance matrix is removed from the
model, albeit at the cost of additional estimated parameters.
Finally, when working with MARSS models, one needs to ensure that the model is
identifiable, i.e. a unique solution exists. For a given MARSS model, some of
the parameter elements will need to be fixed (not estimated) in order to
produce a model with one solution. How to do that depends on the MARSS model
being fitted and is up to the user.
### 1.1 The MARSS model
The linear MARSS model with a stochastic initial state is shown in equation 1.
(‘Stochastic’ means the initial state has a distribution rather than a fixed
value. Because the process must start somewhere, one needs to specify the
initial state; in equation 1, I show the initial state specified as a
distribution. However, the derivation will also discuss the case where the
initial state is specified as an unknown fixed parameter.)
$\displaystyle\mbox{$\boldsymbol{x}$}_{t}=\mbox{$\mathbf{B}$}\mbox{$\boldsymbol{x}$}_{t-1}+\mbox{$\mathbf{u}$}+\mbox{$\mathbf{w}$}_{t},\text{
where }\mbox{$\mathbf{W}$}_{t}\sim\,\textup{{MVN}}(0,\mbox{$\mathbf{Q}$})$
(1a)
$\displaystyle\mbox{$\boldsymbol{y}$}_{t}=\mbox{$\mathbf{Z}$}\mbox{$\boldsymbol{x}$}_{t}+\mbox{$\mathbf{a}$}+\mbox{$\mathbf{v}$}_{t},\text{
where }\mbox{$\mathbf{V}$}_{t}\sim\,\textup{{MVN}}(0,\mbox{$\mathbf{R}$})$
(1b)
$\displaystyle\mbox{$\boldsymbol{X}$}_{0}\sim\,\textup{{MVN}}(\mbox{\boldmath$\xi$},\mbox{\boldmath$\Lambda$})$
(1c)
The $\boldsymbol{y}$ equation is called the observation process, and
$\mbox{$\boldsymbol{y}$}_{t}$ is a $n\times 1$ vector. The $\boldsymbol{x}$
equation is called the state or process equation, and
$\mbox{$\boldsymbol{x}$}_{t}$ is a $m\times 1$ vector. The equation for
$\boldsymbol{x}$ describes a multivariate autoregressive process (also called
a random walk or Markov process). $\mathbf{w}$ are the process errors and are
specific realizations of the random variable $\mathbf{W}$; $\mathbf{v}$ is
defined similarly. The initial state can be defined either at $t=0$, as is done
in equation 1, or at $t=1$. When presenting the MARSS model, I use $t=0$, but
the derivations will show the EM algorithm for both cases. $\mathbf{Q}$ and
$\mathbf{R}$ are variance-covariance matrices that specify the stochasticity
in the state and observation equations, respectively.
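To fix ideas, the following short Python/numpy sketch (not part of the derivation; all parameter values are arbitrary illustrative choices) simulates the MARSS model in equation 1 with a stochastic initial state.

```python
# Minimal simulation sketch of equation (1); parameter values are assumptions.
import numpy as np

rng = np.random.default_rng(1)
T, m, n = 100, 2, 3                       # time steps, state dim, obs dim
B = np.array([[0.8, 0.1], [0.0, 0.7]])    # state transition
u = np.array([0.1, -0.05])                # state drift
Q = 0.1 * np.eye(m)                       # process error variance
Z = rng.normal(size=(n, m))               # observation matrix
a = np.zeros(n)                           # observation offset
R = 0.2 * np.eye(n)                       # observation error variance
xi, Lam = np.zeros(m), np.eye(m)          # initial state mean and variance

x = np.zeros((T + 1, m)); y = np.zeros((T + 1, n))
x[0] = rng.multivariate_normal(xi, Lam)   # stochastic initial state X_0
for t in range(1, T + 1):
    x[t] = B @ x[t - 1] + u + rng.multivariate_normal(np.zeros(m), Q)
    y[t] = Z @ x[t] + a + rng.multivariate_normal(np.zeros(n), R)
```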
In the MARSS model, $\boldsymbol{x}$ and $\boldsymbol{y}$ equations describe
two stochastic processes. By tradition, one conditions on observations of
$\boldsymbol{y}$, and $\boldsymbol{x}$ is treated as completely hidden, hence
the name ‘hidden Markov process’ of which a MARSS model is a special type.
However, you could condition on (partial) observations of $\boldsymbol{x}$ and
treat $\boldsymbol{y}$ as a (partially) hidden process, with, as usual, proper
constraints to ensure identifiability. Nonetheless, in this report, I follow
tradition and treat $\boldsymbol{x}$ as hidden and $\boldsymbol{y}$ as
(partially) observed. If $\boldsymbol{x}$ is partially observed then the
update equations stay the same but the expectations shown in section 6 would
be computed conditioned on the partially observed $\boldsymbol{x}$.
The first part of this report will review the derivation of an EM algorithm
for the time-constant MARSS model (equation 1). However the main objective of
this report is to show the derivation of an EM algorithm to solve a much more
general MARSS model (section 4), which is a MARSS model with linear
constraints on time-varying parameters:
$\begin{gathered}\mbox{$\boldsymbol{x}$}_{t}=\mbox{$\mathbf{B}$}_{t}\mbox{$\boldsymbol{x}$}_{t-1}+\mbox{$\mathbf{u}$}_{t}+\mbox{$\mathbf{G}$}_{t}\mbox{$\mathbf{w}$}_{t},\text{
where }\mbox{$\mathbf{W}$}_{t}\sim\mathrm{MVN}(0,\mbox{$\mathbf{Q}$}_{t})\\\
\mbox{$\boldsymbol{y}$}_{t}=\mbox{$\mathbf{Z}$}_{t}\mbox{$\boldsymbol{x}$}_{t}+\mbox{$\mathbf{a}$}_{t}+\mbox{$\mathbf{H}$}_{t}\mbox{$\mathbf{v}$}_{t},\text{
where }\mbox{$\mathbf{V}$}_{t}\sim\mathrm{MVN}(0,\mbox{$\mathbf{R}$}_{t})\\\
\mbox{$\boldsymbol{x}$}_{t_{0}}=\mbox{\boldmath$\xi$}+\mbox{$\mathbf{F}$}\mbox{$\mathbf{l}$},\text{
where
}\mbox{$\mathbf{l}$}\sim\mathrm{MVN}(0,\mbox{\boldmath$\Lambda$})\end{gathered}$
(2)
The linear constraints enter as follows: the vectorization of each parameter
($\mathbf{B}$, $\mathbf{u}$, $\mathbf{Q}$, $\mathbf{Z}$, $\mathbf{a}$,
$\mathbf{R}$, $\xi$, $\Lambda$) is described by the relation
$\,\textup{{vec}}(\mbox{$\mathbf{M}$})=\mbox{$\mathbf{f}$}_{t}+\mbox{$\mathbf{D}$}_{t}\mbox{$\mathbf{m}$}$. This
relation specifies linear constraints of the form
$\beta_{i}+\beta_{a,i}a+\beta_{b,i}b+\dots$ on the elements in each MARSS
parameter matrix. Equation (2) describes a much broader class of MARSS models that
includes MARSS models with exogenous variables (covariates), AR-p models,
moving average models, constrained MARSS models, and models that are
combinations of these. The derivation also includes partially deterministic
systems where $\mbox{$\mathbf{G}$}_{t}$, $\mbox{$\mathbf{H}$}_{t}$ and
$\mathbf{F}$ may have all zero rows.
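As an illustration of the constraint form, the following sketch (an illustrative example only; the particular $\mathbf{B}$ structure is hypothetical) builds a parameter matrix from $\mathbf{f}$, $\mathbf{D}$, and $\mathbf{m}$ via $\,\textup{{vec}}(\mbox{$\mathbf{M}$})=\mbox{$\mathbf{f}$}+\mbox{$\mathbf{D}$}\mbox{$\mathbf{m}$}$.

```python
# Minimal sketch (assumption): a 2x2 B with fixed zero off-diagonals and a
# single shared diagonal value b, i.e. B = [[b, 0], [0, b]].
import numpy as np

f = np.array([0.0, 0.0, 0.0, 0.0])          # fixed part of vec(B)
D = np.array([[1.0], [0.0], [0.0], [1.0]])  # multipliers for the estimated value
m = np.array([0.8])                         # the estimated element b

B = (f + D @ m).reshape(2, 2, order="F")    # un-vec (column-major, as in vec())
# B == [[0.8, 0.0], [0.0, 0.8]]
```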
### 1.2 The joint log-likelihood function
Denote the set of all $y$’s and $x$’s from $t=1$ to $T$ by $\boldsymbol{y}$
and $\boldsymbol{x}$. The joint log-likelihood of $\boldsymbol{y}$ and
$\boldsymbol{x}$ can then be written as follows, where
$\mbox{$\boldsymbol{X}$}_{t}$ denotes the random variable and
$\mbox{$\boldsymbol{x}$}_{t}$ is a realization from that random variable (and
similarly for $\mbox{$\boldsymbol{Y}$}_{t}$). (Note that this is not the log
likelihood output by the Kalman filter; the log likelihood output by the
Kalman filter is $\log\mbox{$\mathbf{L}$}(\mbox{$\boldsymbol{y}$};\Theta)$, in
which $\boldsymbol{x}$ does not appear, and is known as the marginal log
likelihood. The log-likelihood function is shown here for the MARSS model with
non-time-varying parameters, equation 1. To alleviate clutter, I have left off
subscripts on the $f$’s; to emphasize that the $f$’s represent different
density functions, one would often use a subscript showing what parameters are
in the functions, i.e.
$f(\mbox{$\boldsymbol{x}$}_{t}|\mbox{$\boldsymbol{X}$}_{t-1}=\mbox{$\boldsymbol{x}$}_{t-1})$
becomes
$f_{B,u,Q}(\mbox{$\boldsymbol{x}$}_{t}|\mbox{$\boldsymbol{X}$}_{t-1}=\mbox{$\boldsymbol{x}$}_{t-1})$.)
$f(\mbox{$\boldsymbol{y}$},\mbox{$\boldsymbol{x}$})=f(\mbox{$\boldsymbol{y}$}|\mbox{$\boldsymbol{X}$}=\mbox{$\boldsymbol{x}$})f(\mbox{$\boldsymbol{x}$}),$
(3)
where
$\begin{split}f(\mbox{$\boldsymbol{x}$})&=f(\mbox{$\boldsymbol{x}$}_{0})\prod_{t=1}^{T}f(\mbox{$\boldsymbol{x}$}_{t}|\mbox{$\boldsymbol{X}$}_{1}^{t-1}=\mbox{$\boldsymbol{x}$}_{1}^{t-1})\\\
f(\mbox{$\boldsymbol{y}$}|\mbox{$\boldsymbol{X}$}=\mbox{$\boldsymbol{x}$})&=\prod_{t=1}^{T}f(\mbox{$\boldsymbol{y}$}_{t}|\mbox{$\boldsymbol{X}$}=\mbox{$\boldsymbol{x}$})\end{split}$
(4)
Thus,
$\begin{split}f(\mbox{$\boldsymbol{y}$},\mbox{$\boldsymbol{x}$})&=\prod_{t=1}^{T}f(\mbox{$\boldsymbol{y}$}_{t}|\mbox{$\boldsymbol{X}$}=\mbox{$\boldsymbol{x}$})\times
f(\mbox{$\boldsymbol{x}$}_{0})\prod_{t=1}^{T}f(\mbox{$\boldsymbol{x}$}_{t}|\mbox{$\boldsymbol{X}$}_{1}^{t-1}=\mbox{$\boldsymbol{x}$}_{1}^{t-1})\\\
&=\prod_{t=1}^{T}f(\mbox{$\boldsymbol{y}$}_{t}|\mbox{$\boldsymbol{X}$}_{t}=\mbox{$\boldsymbol{x}$}_{t})\times
f(\mbox{$\boldsymbol{x}$}_{0})\prod_{t=1}^{T}f(\mbox{$\boldsymbol{x}$}_{t}|\mbox{$\boldsymbol{X}$}_{t-1}=\mbox{$\boldsymbol{x}$}_{t-1}).\end{split}$
(5)
Here $\mbox{$\boldsymbol{x}$}_{t1}^{t2}$ denotes the set of
$\mbox{$\boldsymbol{x}$}_{t}$ from $t=t1$ to $t=t2$ (and thus $\boldsymbol{x}$
is shorthand for $\mbox{$\boldsymbol{x}$}_{1}^{T}$). The third line follows
because conditioned on $\boldsymbol{x}$, the $\mbox{$\boldsymbol{y}$}_{t}$’s
are independent of each other (because the $\mbox{$\mathbf{v}$}_{t}$ are
independent of each other). In the last line,
$\mbox{$\boldsymbol{x}$}_{1}^{t-1}$ becomes $\mbox{$\boldsymbol{x}$}_{t-1}$
from the Markov property of the equation for $\mbox{$\boldsymbol{x}$}_{t}$
(equation 1a), and $\boldsymbol{x}$ becomes $\mbox{$\boldsymbol{x}$}_{t}$
because $\mbox{$\boldsymbol{y}$}_{t}$ depends only on
$\mbox{$\boldsymbol{x}$}_{t}$ (equation 1b).
Since
$(\mbox{$\boldsymbol{X}$}_{t}|\mbox{$\boldsymbol{X}$}_{t-1}=\mbox{$\boldsymbol{x}$}_{t-1})$
is multivariate normal and
$(\mbox{$\boldsymbol{Y}$}_{t}|\mbox{$\boldsymbol{X}$}_{t}=\mbox{$\boldsymbol{x}$}_{t})$
is multivariate normal (equation 1), we can write down the joint log-
likelihood function using the likelihood function for a multivariate normal
distribution (Johnson and Wichern,, 2007, sec. 4.3).
$\begin{split}&\log\mbox{$\mathbf{L}$}(\mbox{$\boldsymbol{y}$},\mbox{$\boldsymbol{x}$};\Theta)=-\sum_{1}^{T}\frac{1}{2}(\mbox{$\boldsymbol{y}$}_{t}-\mbox{$\mathbf{Z}$}\mbox{$\boldsymbol{x}$}_{t}-\mbox{$\mathbf{a}$})^{\top}\mbox{$\mathbf{R}$}^{-1}(\mbox{$\boldsymbol{y}$}_{t}-\mbox{$\mathbf{Z}$}\mbox{$\boldsymbol{x}$}_{t}-\mbox{$\mathbf{a}$})-\sum_{1}^{T}\frac{1}{2}\log|\mbox{$\mathbf{R}$}|\\\
&\quad-\sum_{1}^{T}\frac{1}{2}(\mbox{$\boldsymbol{x}$}_{t}-\mbox{$\mathbf{B}$}\mbox{$\boldsymbol{x}$}_{t-1}-\mbox{$\mathbf{u}$})^{\top}\mbox{$\mathbf{Q}$}^{-1}(\mbox{$\boldsymbol{x}$}_{t}-\mbox{$\mathbf{B}$}\mbox{$\boldsymbol{x}$}_{t-1}-\mbox{$\mathbf{u}$})-\sum_{1}^{T}\frac{1}{2}\log|\mbox{$\mathbf{Q}$}|\\\
&\quad-\frac{1}{2}(\mbox{$\boldsymbol{x}$}_{0}-\mbox{\boldmath$\xi$})^{\top}\mbox{\boldmath$\Lambda$}^{-1}(\mbox{$\boldsymbol{x}$}_{0}-\mbox{\boldmath$\xi$})-\frac{1}{2}\log|\mbox{\boldmath$\Lambda$}|-\frac{n}{2}\log
2\pi\end{split}$ (6)
$n$ is the number of data points. This is the same as equation 6.64 in Shumway
and Stoffer (2006). The above equation is for the case where
$\mbox{$\boldsymbol{x}$}_{0}$ is stochastic (has a known distribution).
However, if we instead treat $\mbox{$\boldsymbol{x}$}_{0}$ as fixed but
unknown (section 3.4.4 in Harvey, 1989), it is then a parameter and there is
no $\Lambda$. The likelihood is then slightly different:
$\mbox{$\boldsymbol{x}$}_{0}$ is defined as a parameter $\xi$ and
$\begin{split}&\log\mbox{$\mathbf{L}$}(\mbox{$\boldsymbol{y}$},\mbox{$\boldsymbol{x}$};\Theta)=-\sum_{1}^{T}\frac{1}{2}(\mbox{$\boldsymbol{y}$}_{t}-\mbox{$\mathbf{Z}$}\mbox{$\boldsymbol{x}$}_{t}-\mbox{$\mathbf{a}$})^{\top}\mbox{$\mathbf{R}$}^{-1}(\mbox{$\boldsymbol{y}$}_{t}-\mbox{$\mathbf{Z}$}\mbox{$\boldsymbol{x}$}_{t}-\mbox{$\mathbf{a}$})-\sum_{1}^{T}\frac{1}{2}\log|\mbox{$\mathbf{R}$}|\\\
&\quad-\sum_{1}^{T}\frac{1}{2}(\mbox{$\boldsymbol{x}$}_{t}-\mbox{$\mathbf{B}$}\mbox{$\boldsymbol{x}$}_{t-1}-\mbox{$\mathbf{u}$})^{\top}\mbox{$\mathbf{Q}$}^{-1}(\mbox{$\boldsymbol{x}$}_{t}-\mbox{$\mathbf{B}$}\mbox{$\boldsymbol{x}$}_{t-1}-\mbox{$\mathbf{u}$})-\sum_{1}^{T}\frac{1}{2}\log|\mbox{$\mathbf{Q}$}|\end{split}$
(7)
Note that in this case, $\mbox{$\boldsymbol{x}$}_{0}$ is no longer a
realization of a random variable $\mbox{$\boldsymbol{X}$}_{0}$; it is a fixed
(but unknown) parameter. Equation 7 is written as if all the elements of
$\mbox{$\boldsymbol{x}$}_{0}$ are fixed; however, when the general derivation
is presented, it is allowed that some elements of $\mbox{$\boldsymbol{x}$}_{0}$
are fixed ($\Lambda=0$) and others are stochastic.
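For concreteness, the following sketch (illustrative only, not part of the derivation) evaluates the joint log likelihood in equation 6 for given realizations of $\boldsymbol{x}$ and $\boldsymbol{y}$; the constant term is written as in equation 6.

```python
# Minimal sketch (assumption): joint log likelihood of equation (6),
# stochastic-initial-state case.
import numpy as np
from numpy.linalg import inv, slogdet

def joint_loglik(x, y, B, u, Q, Z, a, R, xi, Lam):
    """x: (T+1, m) with x[0] = x_0;  y: (T+1, n), y[0] unused."""
    T = x.shape[0] - 1
    n = y.shape[1]
    ll = 0.0
    for t in range(1, T + 1):
        ry = y[t] - Z @ x[t] - a                 # observation residual
        rx = x[t] - B @ x[t - 1] - u             # state residual
        ll -= 0.5 * (ry @ inv(R) @ ry + slogdet(R)[1])
        ll -= 0.5 * (rx @ inv(Q) @ rx + slogdet(Q)[1])
    r0 = x[0] - xi
    ll -= 0.5 * (r0 @ inv(Lam) @ r0 + slogdet(Lam)[1])
    ll -= 0.5 * n * np.log(2 * np.pi)            # constant term as in equation (6)
    return ll
```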
If $\mathbf{R}$ is constant through time, then
$\sum_{1}^{T}\frac{1}{2}\log|\mbox{$\mathbf{R}$}|$ in the likelihood equation
reduces to $\frac{T}{2}\log|\mbox{$\mathbf{R}$}|$; however, sometimes one needs
to include a time-dependent weighting on $\mathbf{R}$. (If, for example, one
wanted to include a temporally dependent weighting on $\mathbf{R}$, one would
replace $|\mbox{$\mathbf{R}$}|$ with
$|\alpha_{t}\mbox{$\mathbf{R}$}|=\alpha_{t}^{n}|\mbox{$\mathbf{R}$}|$, where
$\alpha_{t}$ is the weighting at time $t$ and is fixed, not estimated.) The
same applies to $\sum_{1}^{T}\frac{1}{2}\log|\mbox{$\mathbf{Q}$}|$.
All bolded elements are column vectors (lower case) and matrices (upper case).
$\mbox{$\mathbf{A}$}^{\top}$ is the transpose of matrix $\mathbf{A}$,
$\mbox{$\mathbf{A}$}^{-1}$ is the inverse of $\mathbf{A}$, and
$|\mbox{$\mathbf{A}$}|$ is the determinant of $\mathbf{A}$. Parameters are
non-italic, while elements that are slanted are realizations of a random
variable ($\boldsymbol{x}$ and $\boldsymbol{y}$ are slanted). In matrix
algebra, a capital bolded letter indicates a matrix. Unfortunately, in
statistics, the capital-letter convention is used for random variables.
Fortunately, this derivation does not need to reference random variables
except indirectly when using expectations. Thus, I use capitals to refer to
matrices, not random variables. The one exception is the reference to
$\boldsymbol{X}$ and $\boldsymbol{Y}$; in this case a bolded slanted capital
is used.
### 1.3 Missing values
In Shumway and Stoffer and other presentations of the EM algorithm for MARSS
models (Shumway and Stoffer, 2006; Zuur et al., 2003), the missing values
case is treated separately from the non-missing values case. In these
derivations, a series of modifications are given for the EM update equations
when there are missing values. In my derivation, I present the missing values
treatment differently, and there is only one set of update equations and these
equations apply in both the missing values and non-missing values cases. My
derivation does this by keeping
$\,\textup{{E}}[\mbox{$\boldsymbol{Y}$}_{t}|\text{data}]$ and
$\,\textup{{E}}[\mbox{$\boldsymbol{Y}$}_{t}\mbox{$\boldsymbol{X}$}_{t}^{\top}|\text{data}]$
in the update equations (much like
$\,\textup{{E}}[\mbox{$\boldsymbol{X}$}_{t}|\text{data}]$ is kept in the
equations) while Shumway and Stoffer replace these expectations involving
$\mbox{$\boldsymbol{Y}$}_{t}$ by their values, which depend on whether or not
the data are a complete observation of $\mbox{$\boldsymbol{Y}$}_{t}$ with no
missing values. Section 6 shows how to compute the expectations involving
$\mbox{$\boldsymbol{Y}$}_{t}$ when the data are an incomplete observation of
$\mbox{$\boldsymbol{Y}$}_{t}$.
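The expectations involving $\mbox{$\boldsymbol{Y}$}_{t}$ rest on standard multivariate normal conditioning. The following sketch (illustrative only; the model-specific forms are derived in section 6) shows the generic conditional-mean computation for a partially observed multivariate normal vector.

```python
# Minimal sketch (assumption): E[y_miss | y_obs] = mu_m + S_mo S_oo^{-1} (y_obs - mu_o)
import numpy as np

def conditional_mean(mu, Sigma, y, observed):
    """mu, Sigma: mean/variance of Y_t; y: data with NaN for missing entries;
    observed: boolean mask of observed elements. Returns E[Y_t | observed part]."""
    o, m = observed, ~observed
    S_oo = Sigma[np.ix_(o, o)]
    S_mo = Sigma[np.ix_(m, o)]
    Ey = y.copy()
    Ey[m] = mu[m] + S_mo @ np.linalg.solve(S_oo, y[o] - mu[o])
    return Ey

mu = np.array([0.0, 1.0, 2.0])
Sigma = np.array([[1.0, 0.5, 0.2], [0.5, 1.0, 0.3], [0.2, 0.3, 1.0]])
y = np.array([0.4, np.nan, 1.5])                  # second element is missing
print(conditional_mean(mu, Sigma, y, ~np.isnan(y)))
```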
## 2 The EM algorithm
The EM algorithm cycles iteratively between an expectation step (the
integration in the equation below) and a maximization step (the arg max in
the equation below):
$\Theta_{j+1}=\arg\underset{\Theta}{\max}\int_{\mbox{$\boldsymbol{x}$}}{\int_{\mbox{$\boldsymbol{y}$}}{\log\mbox{$\mathbf{L}$}(\mbox{$\boldsymbol{x}$},\mbox{$\boldsymbol{y}$};\Theta)f(\mbox{$\boldsymbol{x}$},\mbox{$\boldsymbol{y}$}|\mbox{$\boldsymbol{Y}$}(1)=\mbox{$\boldsymbol{y}$}(1),\Theta_{j})d\mbox{$\boldsymbol{x}$}d\mbox{$\boldsymbol{y}$}}}$
(8)
$\mbox{$\boldsymbol{Y}$}(1)$ indicates those $\boldsymbol{Y}$ that have an
observation and $\mbox{$\boldsymbol{y}$}(1)$ are the actual observations. Note
that $\Theta$ and $\Theta_{j}$ are different. If $\Theta$ consists of multiple
parameters, we can also break this down into smaller steps. Let
$\Theta=\\{\alpha,\beta\\}$, then
$\alpha_{j+1}=\arg\underset{\alpha}{\max}\int_{\mbox{$\boldsymbol{x}$}}{\int_{\mbox{$\boldsymbol{y}$}}{\log\mbox{$\mathbf{L}$}(\mbox{$\boldsymbol{x}$},\mbox{$\boldsymbol{y}$},\beta_{j};\alpha)f(\mbox{$\boldsymbol{x}$},\mbox{$\boldsymbol{y}$}|\mbox{$\boldsymbol{Y}$}(1)=\mbox{$\boldsymbol{y}$}(1),\alpha_{j},\beta_{j})d\mbox{$\boldsymbol{x}$}d\mbox{$\boldsymbol{y}$}}}$
(9)
Now the maximization is only over $\alpha$, the part that appears after the
“;” in the log-likelihood.
Expectation step: The integral that appears in equation (8) is an expectation.
The first step in the EM algorithm is to compute this expectation. This will
involve computing expectations like
$\,\textup{{E}}[\mbox{$\boldsymbol{X}$}_{t}\mbox{$\boldsymbol{X}$}_{t}^{\top}|\mbox{$\boldsymbol{Y}$}_{t}(1)=\mbox{$\boldsymbol{y}$}_{t}(1),\Theta_{j}]$
and
$\,\textup{{E}}[\mbox{$\boldsymbol{Y}$}_{t}\mbox{$\boldsymbol{X}$}_{t}^{\top}|\mbox{$\boldsymbol{Y}$}_{t}(1)=\mbox{$\boldsymbol{y}$}_{t}(1),\Theta_{j}]$.
The $j$ subscript on $\Theta$ denotes that these are the parameters at
iteration $j$ of the algorithm.
Maximization step: A new parameter set $\Theta_{j+1}$ is computed by finding
the parameters that maximize the expected log-likelihood function (the part in
the integral) with respect to $\Theta$. The equations that give the parameters
for the next iteration ($j+1$) are called the update equations and this report
is devoted to the derivation of these update equations.
After one iteration of the expectation and maximization steps, the cycle is
then repeated. New expectations are computed using $\Theta_{j+1}$, and then a
new set of parameters $\Theta_{j+2}$ is generated. This cycle is continued
until the likelihood no longer increases more than a specified tolerance
level. This algorithm is guaranteed to increase in likelihood at each
iteration (if it does not, it means there is an error in one’s update
equations). The algorithm must be started from an initial set of parameter
values $\Theta_{1}$. The algorithm is not particularly sensitive to the
initial conditions, but the likelihood surface can be multi-modal and have
local maxima. See section 11 on using Monte Carlo initialization to ensure
that the global maximum is found.
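The cycle described above can be summarized by the following skeleton (an illustrative sketch, not the report's implementation; e_step, m_step, and marginal_loglik are placeholders for the smoother expectations of section 6, the update equations derived below, and the Kalman-filter log likelihood).

```python
# Minimal EM-cycle skeleton (assumption: the three callables are supplied by the user).
import numpy as np

def em_marss(y, theta0, e_step, m_step, marginal_loglik, max_iter=500, tol=1e-8):
    """e_step(y, theta) returns the smoothed expectations (equation 11);
    m_step(expectations, y, theta) returns the updated parameters;
    marginal_loglik(y, theta) returns log L(y; Theta)."""
    theta, ll_old = theta0, -np.inf
    for _ in range(max_iter):
        expectations = e_step(y, theta)           # expectation ("E") step
        theta = m_step(expectations, y, theta)    # maximization ("M") step
        ll = marginal_loglik(y, theta)
        if ll < ll_old - 1e-10:                   # likelihood must not decrease
            raise RuntimeError("Likelihood decreased; check the update equations.")
        if ll - ll_old < tol:                     # converged
            break
        ll_old = ll
    return theta, ll
```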
### 2.1 The expected log-likelihood function
The function that is maximized in the “M” step is the expected value of the
log-likelihood function. This expectation is conditioned on two things: 1) the
observed $\boldsymbol{Y}$’s which are denoted $\mbox{$\boldsymbol{Y}$}(1)$ and
which are equal to the fixed values $\mbox{$\boldsymbol{y}$}(1)$ and 2) the
parameter set $\Theta_{j}$. Note that since there may be missing values in the
data, $\mbox{$\boldsymbol{Y}$}(1)$ can be a subset of $\boldsymbol{Y}$, that
is, only some $\boldsymbol{Y}$ have a corresponding $\boldsymbol{y}$ value at
time $t$. Mathematically what we are doing is $\,\textup{{E}}_{\text{{\bf
XY}}}[g(\mbox{$\boldsymbol{X}$},\mbox{$\boldsymbol{Y}$})|\mbox{$\boldsymbol{Y}$}(1)=\mbox{$\boldsymbol{y}$}(1),\Theta_{j}]$.
This is a multivariate conditional expectation because
$\mbox{$\boldsymbol{X}$},\mbox{$\boldsymbol{Y}$}$ is multivariate (a $m\times
n\times T$ vector). The function $g(\Theta)$ that we are taking the
expectation of is
$\log\mbox{$\mathbf{L}$}(\mbox{$\boldsymbol{Y}$},\mbox{$\boldsymbol{X}$};\Theta)$.
Note that $g(\Theta)$ is a random variable involving the random variables,
$\boldsymbol{X}$ and $\boldsymbol{Y}$, while
$\log\mbox{$\mathbf{L}$}(\mbox{$\boldsymbol{y}$},\mbox{$\boldsymbol{x}$};\Theta)$
is not a random variable but rather a specific value since $\boldsymbol{y}$
and $\boldsymbol{x}$ are a set of specific values.
We denote this expected log-likelihood by $\Psi$. The goal is to find the
$\Theta$ that maximize $\Psi$ and this becomes the new $\Theta$ for the $j+1$
iteration of the EM algorithm. The equations to compute the new $\Theta$ are
termed the update equations. Using the log likelihood equation (6) and
expanding out all the terms, we can write out $\Psi$ in verbose form as:
$\begin{split}&\,\textup{{E}}_{\text{{\bf
XY}}}[\log\mbox{$\mathbf{L}$}(\mbox{$\boldsymbol{Y}$},\mbox{$\boldsymbol{X}$};\Theta);\mbox{$\boldsymbol{Y}$}(1)=\mbox{$\boldsymbol{y}$}(1),\Theta_{j}]=\Psi=\\\
&\quad-\frac{1}{2}\sum_{1}^{T}\bigg{(}\,\textup{{E}}[\mbox{$\boldsymbol{Y}$}_{t}^{\top}\mbox{$\mathbf{R}$}^{-1}\mbox{$\boldsymbol{Y}$}_{t}]-\,\textup{{E}}[\mbox{$\boldsymbol{Y}$}_{t}^{\top}\mbox{$\mathbf{R}$}^{-1}\mbox{$\mathbf{Z}$}\mbox{$\boldsymbol{X}$}_{t}]-\,\textup{{E}}[(\mbox{$\mathbf{Z}$}\mbox{$\boldsymbol{X}$}_{t})^{\top}\mbox{$\mathbf{R}$}^{-1}\mbox{$\boldsymbol{Y}$}_{t}]-\,\textup{{E}}[\mbox{$\mathbf{a}$}^{\top}\mbox{$\mathbf{R}$}^{-1}\mbox{$\boldsymbol{Y}$}_{t}]-\,\textup{{E}}[\mbox{$\boldsymbol{Y}$}_{t}^{\top}\mbox{$\mathbf{R}$}^{-1}\mbox{$\mathbf{a}$}]\\\
&\quad+\,\textup{{E}}[(\mbox{$\mathbf{Z}$}\mbox{$\boldsymbol{X}$}_{t})^{\top}\mbox{$\mathbf{R}$}^{-1}\mbox{$\mathbf{Z}$}\mbox{$\boldsymbol{X}$}_{t}]+\,\textup{{E}}[\mbox{$\mathbf{a}$}^{\top}\mbox{$\mathbf{R}$}^{-1}\mbox{$\mathbf{Z}$}\mbox{$\boldsymbol{X}$}_{t}]+\,\textup{{E}}[(\mbox{$\mathbf{Z}$}\mbox{$\boldsymbol{X}$}_{t})^{\top}\mbox{$\mathbf{R}$}^{-1}\mbox{$\mathbf{a}$}]+\,\textup{{E}}[\mbox{$\mathbf{a}$}^{\top}\mbox{$\mathbf{R}$}^{-1}\mbox{$\mathbf{a}$}]\bigg{)}-\frac{T}{2}\log|\mbox{$\mathbf{R}$}|\\\
&\quad-\frac{1}{2}\sum_{1}^{T}\bigg{(}\,\textup{{E}}[\mbox{$\boldsymbol{X}$}_{t}^{\top}\mbox{$\mathbf{Q}$}^{-1}\mbox{$\boldsymbol{X}$}_{t}]-\,\textup{{E}}[\mbox{$\boldsymbol{X}$}_{t}^{\top}\mbox{$\mathbf{Q}$}^{-1}\mbox{$\mathbf{B}$}\mbox{$\boldsymbol{X}$}_{t-1}]-\,\textup{{E}}[(\mbox{$\mathbf{B}$}\mbox{$\boldsymbol{X}$}_{t-1})^{\top}\mbox{$\mathbf{Q}$}^{-1}\mbox{$\boldsymbol{X}$}_{t}]\\\
&\quad-\,\textup{{E}}[\mbox{$\mathbf{u}$}^{\top}\mbox{$\mathbf{Q}$}^{-1}\mbox{$\boldsymbol{X}$}_{t}]-\,\textup{{E}}[\mbox{$\boldsymbol{X}$}_{t}^{\top}\mbox{$\mathbf{Q}$}^{-1}\mbox{$\mathbf{u}$}]+\,\textup{{E}}[(\mbox{$\mathbf{B}$}\mbox{$\boldsymbol{X}$}_{t-1})^{\top}\mbox{$\mathbf{Q}$}^{-1}\mbox{$\mathbf{B}$}\mbox{$\boldsymbol{X}$}_{t-1}]\\\
&\quad+\,\textup{{E}}[\mbox{$\mathbf{u}$}^{\top}\mbox{$\mathbf{Q}$}^{-1}\mbox{$\mathbf{B}$}\mbox{$\boldsymbol{X}$}_{t-1}]+\,\textup{{E}}[(\mbox{$\mathbf{B}$}\mbox{$\boldsymbol{X}$}_{t-1})^{\top}\mbox{$\mathbf{Q}$}^{-1}\mbox{$\mathbf{u}$}]+\mbox{$\mathbf{u}$}^{\top}\mbox{$\mathbf{Q}$}^{-1}\mbox{$\mathbf{u}$}\bigg{)}-\frac{T}{2}\log|\mbox{$\mathbf{Q}$}|\\\
&\quad-\frac{1}{2}\bigg{(}\,\textup{{E}}[\mbox{$\boldsymbol{X}$}_{0}^{\top}\mbox{\boldmath$\Lambda$}^{-1}\mbox{$\boldsymbol{X}$}_{0}]-\,\textup{{E}}[\mbox{\boldmath$\xi$}^{\top}\mbox{\boldmath$\Lambda$}^{-1}\mbox{$\boldsymbol{X}$}_{0}]-\,\textup{{E}}[\mbox{$\boldsymbol{X}$}_{0}^{\top}\mbox{\boldmath$\Lambda$}^{-1}\mbox{\boldmath$\xi$}]+\mbox{\boldmath$\xi$}^{\top}\mbox{\boldmath$\Lambda$}^{-1}\mbox{\boldmath$\xi$}\bigg{)}-\frac{1}{2}\log|\mbox{\boldmath$\Lambda$}|-\frac{n}{2}\log 2\pi\end{split}$
(10)
All the $\,\textup{{E}}[\quad]$ appearing here denote
$\,\textup{{E}}_{\text{{\bf
XY}}}[g()|\mbox{$\boldsymbol{Y}$}(1)=\mbox{$\boldsymbol{y}$}(1),\Theta_{j}]$.
In the rest of the derivation, I drop the conditional and the $XY$ subscript
on E to remove clutter, but it is important to remember that whenever E
appears, it refers to a specific conditional multivariate expectation. If
$\mbox{$\boldsymbol{x}$}_{0}$ is treated as fixed, then
$\mbox{$\boldsymbol{X}$}_{0}=\mbox{\boldmath$\xi$}$ and the last two lines
involving $\Lambda$ are dropped.
Keep in mind that $\Theta$ and $\Theta_{j}$ are different. $\Theta$ is a
parameter appearing in function
$g(\mbox{$\boldsymbol{X}$},\mbox{$\boldsymbol{Y}$},\Theta)$ (i.e. the
parameters in equation 6). $\boldsymbol{X}$ and $\boldsymbol{Y}$ are random
variables which means that
$g(\mbox{$\boldsymbol{X}$},\mbox{$\boldsymbol{Y}$},\Theta)$ is a random
variable. We take the expectation of
$g(\mbox{$\boldsymbol{X}$},\mbox{$\boldsymbol{Y}$},\Theta)$, meaning we take
integral over the joint distribution of $\boldsymbol{X}$ and $\boldsymbol{Y}$.
We need to specify what that distribution is and the conditioning on
$\Theta_{j}$ (meaning the $\Theta_{j}$ appearing to the right of the $|$ in
$\,\textup{{E}}(g()|\Theta_{j})$) is specifying this distribution. This
conditioning affects the value of the expectation of
$g(\mbox{$\boldsymbol{X}$},\mbox{$\boldsymbol{Y}$},\Theta)$, but it does not
affect the value of $\Theta$, which are the $\mathbf{R}$, $\mathbf{Q}$,
$\mathbf{u}$, etc. values on the right side of equation (10). We will first
take the expectation of
$g(\mbox{$\boldsymbol{X}$},\mbox{$\boldsymbol{Y}$},\Theta)$ conditioned on
$\Theta_{j}$ (using integration) and then take the differential of that
expectation with respect to $\Theta$.
### 2.2 The expectations used in the derivation
The following expectations appear frequently in the update equations and are
given special names. (This notation is different from what you see in Shumway
and Stoffer (2006), section 6.2. What I call
$\widetilde{\mbox{$\mathbf{V}$}}_{t}$, they refer to as $P_{t}^{n}$, and my
$\widetilde{\mbox{$\mathbf{P}$}}_{t}$ would be
$P_{t}^{n}+\widetilde{\mbox{$\mathbf{x}$}}_{t}\widetilde{\mbox{$\mathbf{x}$}}_{t}^{\prime}$
in their notation.)
$\displaystyle\widetilde{\mbox{$\mathbf{x}$}}_{t}=\,\textup{{E}}_{\text{{\bf
XY}}}[\mbox{$\boldsymbol{X}$}_{t}|\mbox{$\boldsymbol{Y}$}(1)=\mbox{$\boldsymbol{y}$}(1),\Theta_{j}]$
(11a)
$\displaystyle\widetilde{\mbox{$\mathbf{y}$}}_{t}=\,\textup{{E}}_{\text{{\bf
XY}}}[\mbox{$\boldsymbol{Y}$}_{t}|\mbox{$\boldsymbol{Y}$}(1)=\mbox{$\boldsymbol{y}$}(1),\Theta_{j}]$
(11b)
$\displaystyle\widetilde{\mbox{$\mathbf{P}$}}_{t}=\,\textup{{E}}_{\text{{\bf
XY}}}[\mbox{$\boldsymbol{X}$}_{t}\mbox{$\boldsymbol{X}$}_{t}^{\top}|\mbox{$\boldsymbol{Y}$}(1)=\mbox{$\boldsymbol{y}$}(1),\Theta_{j}]$
(11c)
$\displaystyle\widetilde{\mbox{$\mathbf{P}$}}_{t,t-1}=\,\textup{{E}}_{\text{{\bf
XY}}}[\mbox{$\boldsymbol{X}$}_{t}\mbox{$\boldsymbol{X}$}_{t-1}^{\top}|\mbox{$\boldsymbol{Y}$}(1)=\mbox{$\boldsymbol{y}$}(1),\Theta_{j}]$
(11d)
$\displaystyle\widetilde{\mbox{$\mathbf{V}$}}_{t}=\,\textup{{var}}_{XY}[\mbox{$\boldsymbol{X}$}_{t}|\mbox{$\boldsymbol{Y}$}(1)=\mbox{$\boldsymbol{y}$}(1),\Theta_{j}]=\widetilde{\mbox{$\mathbf{P}$}}_{t}-\widetilde{\mbox{$\mathbf{x}$}}_{t}\widetilde{\mbox{$\mathbf{x}$}}_{t}^{\top}$
(11e)
$\displaystyle\widetilde{\mbox{$\mathbf{O}$}}_{t}=\,\textup{{E}}_{\text{{\bf
XY}}}[\mbox{$\boldsymbol{Y}$}_{t}\mbox{$\boldsymbol{Y}$}_{t}^{\top}|\mbox{$\boldsymbol{Y}$}(1)=\mbox{$\boldsymbol{y}$}(1),\Theta_{j}]$
(11f)
$\displaystyle\widetilde{\mbox{$\mathbf{W}$}}_{t}=\,\textup{{var}}_{XY}[\mbox{$\boldsymbol{Y}$}_{t}|\mbox{$\boldsymbol{Y}$}(1)=\mbox{$\boldsymbol{y}$}(1),\Theta_{j}]=\widetilde{\mbox{$\mathbf{O}$}}_{t}-\widetilde{\mbox{$\mathbf{y}$}}_{t}\widetilde{\mbox{$\mathbf{y}$}}_{t}^{\top}$
(11g)
$\displaystyle\widetilde{\mbox{$\mathbf{y}\mathbf{x}$}}_{t}=\,\textup{{E}}_{\text{{\bf
XY}}}[\mbox{$\boldsymbol{Y}$}_{t}\mbox{$\boldsymbol{X}$}_{t}^{\top}|\mbox{$\boldsymbol{Y}$}(1)=\mbox{$\boldsymbol{y}$}(1),\Theta_{j}]$
(11h)
$\displaystyle\widetilde{\mbox{$\mathbf{y}\mathbf{x}$}}_{t,t-1}=\,\textup{{E}}_{\text{{\bf
XY}}}[\mbox{$\boldsymbol{Y}$}_{t}\mbox{$\boldsymbol{X}$}_{t-1}^{\top}|\mbox{$\boldsymbol{Y}$}(1)=\mbox{$\boldsymbol{y}$}(1),\Theta_{j}]$
(11i)
The subscript on the expectation, E, denotes that this is a multivariate
expectation taken over $\boldsymbol{X}$ and $\boldsymbol{Y}$. The right sides
of equations (11e) and (11g) arise from the computational formula for variance
and covariance:
$\displaystyle\,\textup{{var}}[X]$
$\displaystyle=\,\textup{{E}}[XX^{\top}]-\,\textup{{E}}[X]\,\textup{{E}}[X]^{\top}$
(12) $\displaystyle\,\textup{{cov}}[X,Y]$
$\displaystyle=\,\textup{{E}}[XY^{\top}]-\,\textup{{E}}[X]\,\textup{{E}}[Y]^{\top}.$
(13)
Section 6 shows how to compute the expectations in equation 11.
Table 1: Notes on multivariate expectations. For the following examples, let
$\boldsymbol{X}$ be a vector of length three, $X_{1},X_{2},X_{3}$. $f()$ is
the probability distribution function (pdf). $C$ is a constant (not a random
variable).
$\,\textup{{E}}_{X}[g(\mbox{$\boldsymbol{X}$})]=\int{\int{\int{g(\mbox{$\boldsymbol{x}$})f(x_{1},x_{2},x_{3})dx_{1}dx_{2}dx_{3}}}}$
---
$\,\textup{{E}}_{X}[X_{1}]=\int{\int{\int{x_{1}f(x_{1},x_{2},x_{3})dx_{1}dx_{2}dx_{3}}}}=\int{x_{1}f(x_{1})dx_{1}}=\,\textup{{E}}[X_{1}]$
$\,\textup{{E}}_{X}[X_{1}+X_{2}]=\,\textup{{E}}_{X}[X_{1}]+\,\textup{{E}}_{X}[X_{2}]$
$\,\textup{{E}}_{X}[X_{1}+C]=\,\textup{{E}}_{X}[X_{1}]+C$
$\,\textup{{E}}_{X}[CX_{1}]=C\,\textup{{E}}_{X}[X_{1}]$
$\,\textup{{E}}_{X}[\mbox{$\boldsymbol{X}$}|\mbox{$\boldsymbol{X}$}=\mbox{$\boldsymbol{x}$}]=\mbox{$\boldsymbol{x}$}$
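A quick Monte Carlo check of the computational formula (12) (illustrative only; the mean and variance values are arbitrary):

```python
# Minimal sketch: verify var[X] = E[X X^T] - E[X] E[X]^T on samples.
import numpy as np

rng = np.random.default_rng(0)
X = rng.multivariate_normal([1.0, -2.0], [[2.0, 0.3], [0.3, 1.0]], size=200_000)

EX = X.mean(axis=0)
EXX = (X[:, :, None] * X[:, None, :]).mean(axis=0)   # E[X X^T]
var_X = EXX - np.outer(EX, EX)                       # equation (12)
print(np.round(var_X, 2))                            # approx [[2.0, 0.3], [0.3, 1.0]]
```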
## 3 The unconstrained update equations
In this section, I show the derivation of the update equations when all
elements of a parameter matrix are estimated and are all allowed to be
different, i.e. the unconstrained case. These are similar to the update
equations one will see in Shumway and Stoffer, (2006). Section 5 shows the
update equations when there are unestimated (fixed) or estimated but shared
values in the parameter matrices, i.e. the constrained update equations.
To derive the update equations, one must find the $\Theta$, where $\Theta$ is
comprised of the MARSS parameters $\mathbf{B}$, $\mathbf{u}$, $\mathbf{Q}$,
$\mathbf{Z}$, $\mathbf{a}$, $\mathbf{R}$, $\xi$, and $\Lambda$, that maximizes
$\Psi$ (equation 10) by partial differentiation of $\Psi$ with respect to
$\Theta$. However, I will be using the EM equation where one maximizes each
parameter matrix in $\Theta$ one-by-one (equation 9). In this case, the
parameters that are not being maximized are set at their iteration $j$ values,
and then one takes the derivative of $\Psi$ with respect to the parameter of
interest. Then solve for the parameter value that sets the partial derivative
to zero. The partial differentiation is with respect to each individual
parameter element, for example each $u_{i,j}$ in matrix $\mathbf{u}$. The idea
is to single out those terms in equation (10) that involve $u_{i,j}$ (say),
differentiate by $u_{i,j}$, set this to zero and solve for $u_{i,j}$. This
gives the new $u_{i,j}$ that maximizes the partial derivative with respect to
$u_{i,j}$ of the expected log-likelihood. Matrix calculus gives us a way to
jointly maximize $\Psi$ with respect to all elements (not just element $i,j$)
in a parameter matrix.
### 3.1 Matrix calculus needed for the derivation
Before commencing, some definitions from matrix calculus will be needed. The
partial derivative of a scalar ($\Psi$ is a scalar) with respect to some
column vector $\mathbf{b}$ (which has elements $b_{1}$, $b_{2}$ . . .) is
$\frac{\partial\Psi}{\partial\mbox{$\mathbf{b}$}}=\begin{bmatrix}\dfrac{\partial\Psi}{\partial
b_{1}}&\dfrac{\partial\Psi}{\partial
b_{2}}&\cdots&\dfrac{\partial\Psi}{\partial b_{n}}\end{bmatrix}$
Note that the derivative of a column vector $\mathbf{b}$ is a row vector. The
partial derivatives of a scalar with respect to some $n\times n$ matrix
$\mathbf{B}$ is
$\frac{\partial\Psi}{\partial\mbox{$\mathbf{B}$}}=\begin{bmatrix}\dfrac{\partial\Psi}{\partial
b_{1,1}}&\dfrac{\partial\Psi}{\partial
b_{2,1}}&\cdots&\dfrac{\partial\Psi}{\partial b_{n,1}}\\\ \\\
\dfrac{\partial\Psi}{\partial b_{1,2}}&\dfrac{\partial\Psi}{\partial
b_{2,2}}&\cdots&\dfrac{\partial\Psi}{\partial b_{n,2}}\\\ \\\
\cdots&\cdots&\cdots&\cdots\\\ \\\ \dfrac{\partial\Psi}{\partial
b_{1,n}}&\dfrac{\partial\Psi}{\partial
b_{2,n}}&\cdots&\dfrac{\partial\Psi}{\partial b_{n,n}}\\\ \end{bmatrix}$
Note that the indexing is interchanged; $\partial\Psi/\partial
b_{i,j}=\big{[}\partial\Psi/\partial\mbox{$\mathbf{B}$}\big{]}_{j,i}$. For
$\mathbf{Q}$ and $\mathbf{R}$, this is unimportant because they are variance-
covariance matrices and are symmetric. For $\mathbf{B}$ and $\mathbf{Z}$, one
must be careful because these may not be symmetric.
A number of derivatives of a scalar with respect to vectors and matrices will
be needed in the derivation and are shown in table 2. In the table, both the
vectorized and non-vectorized versions are shown. The vectorized version of a
matrix $\mathbf{D}$ with dimension $n\times m$ is
$\displaystyle\,\textup{{vec}}(\mbox{$\mathbf{D}$}_{n,m})\equiv\begin{bmatrix}d_{1,1}\\\
\cdots\\\ d_{n,1}\\\ d_{1,2}\\\ \cdots\\\ d_{n,2}\\\ \cdots\\\ d_{1,m}\\\
\cdots\\\ d_{n,m}\end{bmatrix}$
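For example (illustrative only), in numpy the vec operation is a column-major flattening:

```python
# Minimal sketch: vec(D) stacks the columns of D.
import numpy as np

D = np.array([[1, 4], [2, 5], [3, 6]])        # a 3x2 matrix
vecD = D.reshape(-1, 1, order="F")            # vec(D) = (1, 2, 3, 4, 5, 6)^T
```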
Table 2: Derivatives of a scalar with respect to vectors and matrices. In the following $\mathbf{a}$ and $\mathbf{c}$ are $n\times 1$ column vectors, $\mathbf{b}$ and $\mathbf{d}$ are $m\times 1$ column vectors, $\mathbf{D}$ is a $n\times m$ matrix, $\mathbf{C}$ is a $n\times n$ matrix, and $\mathbf{A}$ is a diagonal $n\times n$ matrix (0s on the off-diagonals). $\mbox{$\mathbf{C}$}^{-1}$ is the inverse of $\mathbf{C}$, $\mbox{$\mathbf{C}$}^{\top}$ is the transpose of $\mathbf{C}$, $\mbox{$\mathbf{C}$}^{-\top}=\big{(}\mbox{$\mathbf{C}$}^{-1}\big{)}^{\top}=\big{(}\mbox{$\mathbf{C}$}^{\top}\big{)}^{-1}$, and $|\mbox{$\mathbf{C}$}|$ is the determinant of $\mathbf{C}$. Note, all the numerators in the differentials reduce to scalars. Although the matrix names may be the same as in the text, these matrices are dummy matrices to show the matrix derivative relations. $\partial(\mbox{$\mathbf{a}$}^{\top}\mbox{$\mathbf{c}$})/\partial\mbox{$\mathbf{a}$}=\partial(\mbox{$\mathbf{c}$}^{\top}\mbox{$\mathbf{a}$})/\partial\mbox{$\mathbf{a}$}=\mbox{$\mathbf{c}$}^{\top}$ | (14)
---|---
$\partial(\mbox{$\mathbf{a}$}^{\top}\mbox{$\mathbf{D}$}\mbox{$\mathbf{b}$})/\partial\mbox{$\mathbf{D}$}=\partial(\mbox{$\mathbf{b}$}^{\top}\mbox{$\mathbf{D}$}^{\top}\mbox{$\mathbf{a}$})/\partial\mbox{$\mathbf{D}$}=\mbox{$\mathbf{b}$}\mbox{$\mathbf{a}$}^{\top}$ | (15)
$\partial(\mbox{$\mathbf{a}$}^{\top}\mbox{$\mathbf{D}$}\mbox{$\mathbf{b}$})/\partial\,\textup{{vec}}(\mbox{$\mathbf{D}$})=\partial(\mbox{$\mathbf{b}$}^{\top}\mbox{$\mathbf{D}$}^{\top}\mbox{$\mathbf{a}$})/\partial\,\textup{{vec}}(\mbox{$\mathbf{D}$})=\big{(}\,\textup{{vec}}(\mbox{$\mathbf{b}$}\mbox{$\mathbf{a}$}^{\top})\big{)}^{\top}$
$\partial(\log|\mbox{$\mathbf{C}$}|)/\partial\mbox{$\mathbf{C}$}=-\partial(\log|\mbox{$\mathbf{C}$}^{-1}|)/\partial\mbox{$\mathbf{C}$}=(\mbox{$\mathbf{C}$}^{\top})^{-1}=\mbox{$\mathbf{C}$}^{-\top}$ | (16)
$\partial(\log|\mbox{$\mathbf{C}$}|)/\partial\,\textup{{vec}}(\mbox{$\mathbf{C}$})=\big{(}\,\textup{{vec}}(\mbox{$\mathbf{C}$}^{-\top})\big{)}^{\top}$
$\partial(\mbox{$\mathbf{b}$}^{\top}\mbox{$\mathbf{D}$}^{\top}\mbox{$\mathbf{C}$}\mbox{$\mathbf{D}$}\mbox{$\mathbf{d}$})/\partial\mbox{$\mathbf{D}$}=\mbox{$\mathbf{d}$}\mbox{$\mathbf{b}$}^{\top}\mbox{$\mathbf{D}$}^{\top}\mbox{$\mathbf{C}$}+\mbox{$\mathbf{b}$}\mbox{$\mathbf{d}$}^{\top}\mbox{$\mathbf{D}$}^{\top}\mbox{$\mathbf{C}$}^{\top}$ | (17)
$\partial(\mbox{$\mathbf{b}$}^{\top}\mbox{$\mathbf{D}$}^{\top}\mbox{$\mathbf{C}$}\mbox{$\mathbf{D}$}\mbox{$\mathbf{d}$})/\partial\,\textup{{vec}}(\mbox{$\mathbf{D}$})=\big{(}\,\textup{{vec}}(\mbox{$\mathbf{d}$}\mbox{$\mathbf{b}$}^{\top}\mbox{$\mathbf{D}$}^{\top}\mbox{$\mathbf{C}$}+\mbox{$\mathbf{b}$}\mbox{$\mathbf{d}$}^{\top}\mbox{$\mathbf{D}$}^{\top}\mbox{$\mathbf{C}$}^{\top})\big{)}^{\top}$
If $\mbox{$\mathbf{b}$}=\mbox{$\mathbf{d}$}$ and $\mathbf{C}$ is symmetric
then the sum reduces to
$2\mbox{$\mathbf{b}$}\mbox{$\mathbf{b}$}^{\top}\mbox{$\mathbf{D}$}^{\top}\mbox{$\mathbf{C}$}$
$\partial(\mbox{$\mathbf{a}$}^{\top}\mbox{$\mathbf{C}$}\mbox{$\mathbf{a}$})/\partial\mbox{$\mathbf{a}$}=\partial(\mbox{$\mathbf{a}$}\mbox{$\mathbf{C}$}^{\top}\mbox{$\mathbf{a}$}^{\top})/\partial\mbox{$\mathbf{a}$}=2\mbox{$\mathbf{a}$}^{\top}\mbox{$\mathbf{C}$}$ | (18)
$\partial(\mbox{$\mathbf{a}$}^{\top}\mbox{$\mathbf{C}$}^{-1}\mbox{$\mathbf{c}$})/\partial\mbox{$\mathbf{C}$}=-\mbox{$\mathbf{C}$}^{-1}\mbox{$\mathbf{a}$}\mbox{$\mathbf{c}$}^{\top}\mbox{$\mathbf{C}$}^{-1}$ | (19)
$\partial(\mbox{$\mathbf{a}$}^{\top}\mbox{$\mathbf{C}$}^{-1}\mbox{$\mathbf{c}$})/\partial\,\textup{{vec}}(\mbox{$\mathbf{C}$})=-\big{(}\,\textup{{vec}}(\mbox{$\mathbf{C}$}^{-1}\mbox{$\mathbf{a}$}\mbox{$\mathbf{c}$}^{\top}\mbox{$\mathbf{C}$}^{-1})\big{)}^{\top}$
### 3.2 The update equation for $\mathbf{u}$ (unconstrained)
Take the partial derivative of $\Psi$ with respect to $\mathbf{u}$, which is an
$m\times 1$ column vector. All parameters other than $\mathbf{u}$ are fixed to
constant values (because partial differentiation is being done). Since the
derivative of a constant is 0, terms not involving $\mathbf{u}$ will equal 0
and drop out. Taking the derivative of equation (10) with respect to
$\mathbf{u}$:
$\begin{split}&\partial\Psi/\partial\mbox{$\mathbf{u}$}=-\frac{1}{2}\sum_{t=1}^{T}\bigg{(}-\partial(\,\textup{{E}}[\mbox{$\boldsymbol{X}$}_{t}^{\top}\mbox{$\mathbf{Q}$}^{-1}\mbox{$\mathbf{u}$}])/\partial\mbox{$\mathbf{u}$}-\partial(\,\textup{{E}}[\mbox{$\mathbf{u}$}^{\top}\mbox{$\mathbf{Q}$}^{-1}\mbox{$\boldsymbol{X}$}_{t}])/\partial\mbox{$\mathbf{u}$}\\\
&\quad+\partial(\,\textup{{E}}[(\mbox{$\mathbf{B}$}\mbox{$\boldsymbol{X}$}_{t-1})^{\top}\mbox{$\mathbf{Q}$}^{-1}\mbox{$\mathbf{u}$}])/\partial\mbox{$\mathbf{u}$}+\partial(\,\textup{{E}}[\mbox{$\mathbf{u}$}^{\top}\mbox{$\mathbf{Q}$}^{-1}\mbox{$\mathbf{B}$}\mbox{$\boldsymbol{X}$}_{t-1}])/\partial\mbox{$\mathbf{u}$}+\partial(\mbox{$\mathbf{u}$}^{\top}\mbox{$\mathbf{Q}$}^{-1}\mbox{$\mathbf{u}$})/\partial\mbox{$\mathbf{u}$}\bigg{)}\end{split}$
(20)
The parameters can be moved out of the expectations and then the matrix
derivative relations (table 2) are used to take the derivative.
$\begin{split}\partial\Psi/\partial\mbox{$\mathbf{u}$}=-\frac{1}{2}\sum_{t=1}^{T}\bigg{(}-\,\textup{{E}}[\mbox{$\boldsymbol{X}$}_{t}]^{\top}\mbox{$\mathbf{Q}$}^{-1}-\,\textup{{E}}[\mbox{$\boldsymbol{X}$}_{t}]^{\top}\mbox{$\mathbf{Q}$}^{-1}+(\mbox{$\mathbf{B}$}\,\textup{{E}}[\mbox{$\boldsymbol{X}$}_{t-1}])^{\top}\mbox{$\mathbf{Q}$}^{-1}+(\mbox{$\mathbf{B}$}\,\textup{{E}}[\mbox{$\boldsymbol{X}$}_{t-1}])^{\top}\mbox{$\mathbf{Q}$}^{-1}+2\mbox{$\mathbf{u}$}^{\top}\mbox{$\mathbf{Q}$}^{-1}\bigg{)}\end{split}$
(21)
This also uses $\mbox{$\mathbf{Q}$}^{-1}=(\mbox{$\mathbf{Q}$}^{-1})^{\top}$.
This can then be reduced to
$\begin{split}&\partial\Psi/\partial\mbox{$\mathbf{u}$}=\sum_{t=1}^{T}\big{(}\,\textup{{E}}[\mbox{$\boldsymbol{X}$}_{t}]^{\top}\mbox{$\mathbf{Q}$}^{-1}-\,\textup{{E}}[\mbox{$\boldsymbol{X}$}_{t-1}]^{\top}\mbox{$\mathbf{B}$}^{\top}\mbox{$\mathbf{Q}$}^{-1}-\mbox{$\mathbf{u}$}^{\top}\mbox{$\mathbf{Q}$}^{-1}\big{)}\end{split}$
(22)
Set the left side to zero (a $1\times m$ row vector of zeros) and transpose the
whole equation. $\mbox{$\mathbf{Q}$}^{-1}$ cancels out by multiplying on the
left by $\mathbf{Q}$ ($\mathbf{Q}$ is a variance-covariance matrix and is
invertible, so $\mbox{$\mathbf{Q}$}^{-1}\mbox{$\mathbf{Q}$}=\mbox{$\mathbf{I}$}$,
the identity matrix; we multiply on the left since the whole equation was just
transposed), giving
$\mathbf{0}=\sum_{t=1}^{T}\big{(}\,\textup{{E}}[\mbox{$\boldsymbol{X}$}_{t}]-\mbox{$\mathbf{B}$}\,\textup{{E}}[\mbox{$\boldsymbol{X}$}_{t-1}]-\mbox{$\mathbf{u}$}\big{)}=\sum_{t=1}^{T}\big{(}\,\textup{{E}}[\mbox{$\boldsymbol{X}$}_{t}]-\mbox{$\mathbf{B}$}\,\textup{{E}}[\mbox{$\boldsymbol{X}$}_{t-1}]\big{)}-\mbox{$\mathbf{u}$}$
(23)
Solving for $\mathbf{u}$ and replacing the expectations with their names from
equation 11, gives us the new $\mathbf{u}$ that maximizes $\Psi$,
$\mbox{$\mathbf{u}$}_{j+1}=\frac{1}{T}\sum_{t=1}^{T}\big{(}\widetilde{\mbox{$\mathbf{x}$}}_{t}-\mbox{$\mathbf{B}$}\widetilde{\mbox{$\mathbf{x}$}}_{t-1}\big{)}$
(24)
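In code, the update in equation (24) is a one-line average of the smoothed residuals (an illustrative sketch; xtT is assumed to hold the smoothed means $\widetilde{\mbox{$\mathbf{x}$}}_{t}$ from the E step):

```python
# Minimal sketch (assumption): unconstrained u update of equation (24).
import numpy as np

def update_u(xtT, B):
    """xtT: (T+1, m) array of smoothed state means with xtT[0] = x~_0."""
    T = xtT.shape[0] - 1
    diffs = [xtT[t] - B @ xtT[t - 1] for t in range(1, T + 1)]
    return np.mean(diffs, axis=0)    # (1/T) sum_t (x~_t - B x~_{t-1})
```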
### 3.3 The update equation for $\mathbf{B}$ (unconstrained)
Take the derivative of $\Psi$ with respect to $\mathbf{B}$. Terms not
involving $\mathbf{B}$ equal 0 and drop out. I have put the E outside the
partials by noting that
$\partial(\,\textup{{E}}[h(\mbox{$\boldsymbol{X}$}_{t},\mbox{$\mathbf{B}$})])/\partial\mbox{$\mathbf{B}$}=\,\textup{{E}}[\partial(h(\mbox{$\boldsymbol{X}$}_{t},\mbox{$\mathbf{B}$}))/\partial\mbox{$\mathbf{B}$}]$
since the expectation is conditioned on $\mbox{$\mathbf{B}$}_{j}$ not
$\mathbf{B}$.
$\begin{split}&\partial\Psi/\partial\mbox{$\mathbf{B}$}=-\frac{1}{2}\sum_{t=1}^{T}\bigg{(}-\,\textup{{E}}[\partial(\mbox{$\boldsymbol{X}$}_{t}^{\top}\mbox{$\mathbf{Q}$}^{-1}\mbox{$\mathbf{B}$}\mbox{$\boldsymbol{X}$}_{t-1})/\partial\mbox{$\mathbf{B}$}]\\\
&\quad-\,\textup{{E}}[\partial((\mbox{$\mathbf{B}$}\mbox{$\boldsymbol{X}$}_{t-1})^{\top}\mbox{$\mathbf{Q}$}^{-1}\mbox{$\boldsymbol{X}$}_{t})/\partial\mbox{$\mathbf{B}$}]+\,\textup{{E}}[\partial((\mbox{$\mathbf{B}$}\mbox{$\boldsymbol{X}$}_{t-1})^{\top}\mbox{$\mathbf{Q}$}^{-1}(\mbox{$\mathbf{B}$}\mbox{$\boldsymbol{X}$}_{t-1}))/\partial\mbox{$\mathbf{B}$}]\\\
&\quad+\,\textup{{E}}[\partial((\mbox{$\mathbf{B}$}\mbox{$\boldsymbol{X}$}_{t-1})^{\top}\mbox{$\mathbf{Q}$}^{-1}\mbox{$\mathbf{u}$})/\partial\mbox{$\mathbf{B}$}]+\,\textup{{E}}[\partial(\mbox{$\mathbf{u}$}^{\top}\mbox{$\mathbf{Q}$}^{-1}\mbox{$\mathbf{B}$}\mbox{$\boldsymbol{X}$}_{t-1})/\partial\mbox{$\mathbf{B}$}]\bigg{)}\\\
&=-\frac{1}{2}\sum_{t=1}^{T}\bigg{(}-\,\textup{{E}}[\partial(\mbox{$\boldsymbol{X}$}_{t}^{\top}\mbox{$\mathbf{Q}$}^{-1}\mbox{$\mathbf{B}$}\mbox{$\boldsymbol{X}$}_{t-1}])/\partial\mbox{$\mathbf{B}$}]\\\
&\quad-\,\textup{{E}}[\partial(\mbox{$\boldsymbol{X}$}_{t-1}^{\top}\mbox{$\mathbf{B}$}^{\top}\mbox{$\mathbf{Q}$}^{-1}\mbox{$\boldsymbol{X}$}_{t})/\partial\mbox{$\mathbf{B}$}]+\,\textup{{E}}[\partial(\mbox{$\boldsymbol{X}$}_{t-1}^{\top}\mbox{$\mathbf{B}$}^{\top}\mbox{$\mathbf{Q}$}^{-1}(\mbox{$\mathbf{B}$}\mbox{$\boldsymbol{X}$}_{t-1}))/\partial\mbox{$\mathbf{B}$}]\\\
&\quad+\,\textup{{E}}[\partial(\mbox{$\boldsymbol{X}$}_{t-1}^{\top}\mbox{$\mathbf{B}$}^{\top}\mbox{$\mathbf{Q}$}^{-1}\mbox{$\mathbf{u}$})/\partial\mbox{$\mathbf{B}$}]+\,\textup{{E}}[\partial(\mbox{$\mathbf{u}$}^{\top}\mbox{$\mathbf{Q}$}^{-1}\mbox{$\mathbf{B}$}\mbox{$\boldsymbol{X}$}_{t-1})/\partial\mbox{$\mathbf{B}$}\bigg{)}]\\\
\end{split}$ (25)
After pulling the constants out of the expectations, we use the matrix
derivative relations in table 2 to take the derivative and note that
$\mbox{$\mathbf{Q}$}^{-1}=(\mbox{$\mathbf{Q}$}^{-1})^{\top}$:
$\begin{split}&\partial\Psi/\partial\mbox{$\mathbf{B}$}=-\frac{1}{2}\sum_{t=1}^{T}\bigg{(}-\,\textup{{E}}[\mbox{$\boldsymbol{X}$}_{t-1}\mbox{$\boldsymbol{X}$}_{t}^{\top}]\mbox{$\mathbf{Q}$}^{-1}-\,\textup{{E}}[\mbox{$\boldsymbol{X}$}_{t-1}\mbox{$\boldsymbol{X}$}_{t}^{\top}]\mbox{$\mathbf{Q}$}^{-1}\\\
&\quad+2\,\textup{{E}}[\mbox{$\boldsymbol{X}$}_{t-1}\mbox{$\boldsymbol{X}$}_{t-1}^{\top}]\mbox{$\mathbf{B}$}^{\top}\mbox{$\mathbf{Q}$}^{-1}+\,\textup{{E}}[\mbox{$\boldsymbol{X}$}_{t-1}]\mbox{$\mathbf{u}$}^{\top}\mbox{$\mathbf{Q}$}^{-1}+\,\textup{{E}}[\mbox{$\boldsymbol{X}$}_{t-1}]\mbox{$\mathbf{u}$}^{\top}\mbox{$\mathbf{Q}$}^{-1}\bigg{)}\\\
\end{split}$ (26)
This can be reduced to
$\partial\Psi/\partial\mbox{$\mathbf{B}$}=-\frac{1}{2}\sum_{t=1}^{T}\bigg{(}-2\,\textup{{E}}[\mbox{$\boldsymbol{X}$}_{t-1}\mbox{$\boldsymbol{X}$}_{t}^{\top}]\mbox{$\mathbf{Q}$}^{-1}+2\,\textup{{E}}[\mbox{$\boldsymbol{X}$}_{t-1}\mbox{$\boldsymbol{X}$}_{t-1}^{\top}]\mbox{$\mathbf{B}$}^{\top}\mbox{$\mathbf{Q}$}^{-1}+2\,\textup{{E}}[\mbox{$\boldsymbol{X}$}_{t-1}]\mbox{$\mathbf{u}$}^{\top}\mbox{$\mathbf{Q}$}^{-1}\bigg{)}$
(27)
Set the left side to zero (an $m\times m$ matrix of zeros), cancel out
$\mbox{$\mathbf{Q}$}^{-1}$ by multiplying by $\mathbf{Q}$ on the right, get
rid of the -1/2, and transpose the whole equation to give
$\begin{split}&\mathbf{0}=\sum_{t=1}^{T}\big{(}\,\textup{{E}}[\mbox{$\boldsymbol{X}$}_{t}\mbox{$\boldsymbol{X}$}_{t-1}^{\top}]-\mbox{$\mathbf{B}$}\,\textup{{E}}[\mbox{$\boldsymbol{X}$}_{t-1}\mbox{$\boldsymbol{X}$}_{t-1}^{\top}]-\mbox{$\mathbf{u}$}\,\textup{{E}}[\mbox{$\boldsymbol{X}$}_{t-1}^{\top}]\big{)}\\\
&\quad=\sum_{t=1}^{T}\big{(}\widetilde{\mbox{$\mathbf{P}$}}_{t,t-1}-\mbox{$\mathbf{B}$}\widetilde{\mbox{$\mathbf{P}$}}_{t-1}-\mbox{$\mathbf{u}$}\widetilde{\mbox{$\mathbf{x}$}}_{t-1}^{\top}\big{)}\end{split}$
(28)
The last line replaced the expectations with their names shown in equation
(11). Solving for $\mathbf{B}$ and noting that
$\widetilde{\mbox{$\mathbf{P}$}}_{t-1}$ is like a variance-covariance matrix
and is invertible, gives us the new $\mathbf{B}$ that maximizes $\Psi$,
$\mbox{$\mathbf{B}$}_{j+1}=\bigg{(}\sum_{t=1}^{T}\big{(}\widetilde{\mbox{$\mathbf{P}$}}_{t,t-1}-\mbox{$\mathbf{u}$}\widetilde{\mbox{$\mathbf{x}$}}_{t-1}^{\top}\big{)}\bigg{)}\bigg{(}\sum_{t=1}^{T}\widetilde{\mbox{$\mathbf{P}$}}_{t-1}\bigg{)}^{-1}$
(29)
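As a sketch of equation (29), assuming the smoother output is held in plain arrays (`xtT` for $\widetilde{\mathbf{x}}_{t}$, `PtT` for $\widetilde{\mathbf{P}}_{t}$, `Pttm1` for $\widetilde{\mathbf{P}}_{t,t-1}$); the names and placeholder values are mine, not a package's.

```python
import numpy as np

m, T = 3, 20
rng = np.random.default_rng(1)
u = rng.standard_normal((m, 1))
xtT = rng.standard_normal((m, T + 1))            # x~_t, t = 0..T (placeholder values)
PtT = [np.eye(m) for _ in range(T + 1)]          # stand-ins for P~_t = E[X_t X_t^T]
Pttm1 = [0.5 * np.eye(m) for _ in range(T + 1)]  # stand-ins for P~_{t,t-1}; index 0 unused

# Equation (29): B_{j+1} = ( sum_t (P~_{t,t-1} - u x~_{t-1}^T) ) ( sum_t P~_{t-1} )^{-1}
num = sum(Pttm1[t] - u @ xtT[:, [t - 1]].T for t in range(1, T + 1))
den = sum(PtT[t - 1] for t in range(1, T + 1))
B_new = num @ np.linalg.inv(den)
```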
Because all the equations above also apply to block-diagonal matrices, the
derivation immediately generalizes to the case where $\mathbf{B}$ is an
unconstrained block diagonal matrix:
$\mbox{$\mathbf{B}$}=\begin{bmatrix}b_{1,1}&b_{1,2}&b_{1,3}&0&0&0&0&0\\\
b_{2,1}&b_{2,2}&b_{2,3}&0&0&0&0&0\\\ b_{3,1}&b_{3,2}&b_{3,3}&0&0&0&0&0\\\
0&0&0&b_{4,4}&b_{4,5}&0&0&0\\\ 0&0&0&b_{5,4}&b_{5,5}&0&0&0\\\
0&0&0&0&0&b_{6,6}&b_{6,7}&b_{6,8}\\\ 0&0&0&0&0&b_{7,6}&b_{7,7}&b_{7,8}\\\
0&0&0&0&0&b_{8,6}&b_{8,7}&b_{8,8}\end{bmatrix}=\begin{bmatrix}\mbox{$\mathbf{B}$}_{1}&0&0\\\
0&\mbox{$\mathbf{B}$}_{2}&0\\\ 0&0&\mbox{$\mathbf{B}$}_{3}\\\ \end{bmatrix}$
For the block diagonal $\mathbf{B}$,
$\mbox{$\mathbf{B}$}_{i,j+1}=\bigg{(}\sum_{t=1}^{T}\big{(}\widetilde{\mbox{$\mathbf{P}$}}_{t,t-1}-\mbox{$\mathbf{u}$}\widetilde{\mbox{$\mathbf{x}$}}_{t-1}^{\top}\big{)}\bigg{)}_{i}\bigg{(}\sum_{t=1}^{T}\widetilde{\mbox{$\mathbf{P}$}}_{t-1}\bigg{)}_{i}^{-1}$
(30)
where the subscript $i$ means to take the parts of the matrices that are
analogous to $\mbox{$\mathbf{B}$}_{i}$; take the whole summed matrix within
the parentheses, not the individual matrices inside the parentheses. If
$\mbox{$\mathbf{B}$}_{i}$ is comprised of rows $a$ to $b$ and columns $c$ to
$d$ of matrix $\mathbf{B}$, then take rows $a$ to $b$ and columns $c$ to $d$
of the matrices subscripted by $i$ in equation (30).
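The subsetting rule can be made concrete with a small sketch: the two summed matrices from equation (30) are computed once, and each block of $\mathbf{B}$ is updated from the corresponding rows and columns of those sums. The matrices and block layout below are placeholders.

```python
import numpy as np

rng = np.random.default_rng(2)
# Summed matrices from equation (30) (placeholder values; in practice from the smoother output)
num = rng.standard_normal((5, 5))
den = 20.0 * np.eye(5) + rng.standard_normal((5, 5))

# Suppose B is block diagonal: a 3x3 block (rows/cols 0-2) and a 2x2 block (rows/cols 3-4)
blocks = [slice(0, 3), slice(3, 5)]
B_new = np.zeros((5, 5))
for blk in blocks:
    # Subset the *summed* matrices to this block's rows/columns, then invert that sub-block
    B_new[blk, blk] = num[blk, blk] @ np.linalg.inv(den[blk, blk])
```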
### 3.4 The update equation for $\mathbf{Q}$ (unconstrained)
The usual way to do this derivation is to use what is known as the “trace
trick” which will pull the $\mbox{$\mathbf{Q}$}^{-1}$ out to the left of the
$\mbox{$\mathbf{c}$}^{\top}\mbox{$\mathbf{Q}$}^{-1}\mbox{$\mathbf{b}$}$ terms
which appear in the likelihood (10). Here I’m showing a less elegant
derivation that plods step by step through each of the likelihood terms. Take
the derivative of $\Psi$ with respect to $\mathbf{Q}$. Terms not involving
$\mathbf{Q}$ equal 0 and drop out. Again the expectations are placed outside
the partials by noting that
$\partial(\,\textup{{E}}[h(\mbox{$\boldsymbol{X}$}_{t},\mbox{$\mathbf{Q}$})])/\partial\mbox{$\mathbf{Q}$}=\,\textup{{E}}[\partial(h(\mbox{$\boldsymbol{X}$}_{t},\mbox{$\mathbf{Q}$}))/\partial\mbox{$\mathbf{Q}$}]$.
$\begin{split}&\partial\Psi/\partial\mbox{$\mathbf{Q}$}=-\frac{1}{2}\sum_{t=1}^{T}\bigg{(}\,\textup{{E}}[\partial(\mbox{$\boldsymbol{X}$}_{t}^{\top}\mbox{$\mathbf{Q}$}^{-1}\mbox{$\boldsymbol{X}$}_{t})/\partial\mbox{$\mathbf{Q}$}]-\,\textup{{E}}[\partial(\mbox{$\boldsymbol{X}$}_{t}^{\top}\mbox{$\mathbf{Q}$}^{-1}\mbox{$\mathbf{B}$}\mbox{$\boldsymbol{X}$}_{t-1})/\partial\mbox{$\mathbf{Q}$}]\\\
&\quad-\,\textup{{E}}[\partial((\mbox{$\mathbf{B}$}\mbox{$\boldsymbol{X}$}_{t-1})^{\top}\mbox{$\mathbf{Q}$}^{-1}\mbox{$\boldsymbol{X}$}_{t})/\partial\mbox{$\mathbf{Q}$}]-\,\textup{{E}}[\partial(\mbox{$\boldsymbol{X}$}_{t}^{\top}\mbox{$\mathbf{Q}$}^{-1}\mbox{$\mathbf{u}$})/\partial\mbox{$\mathbf{Q}$}]\\\
&\quad-\,\textup{{E}}[\partial(\mbox{$\mathbf{u}$}^{\top}\mbox{$\mathbf{Q}$}^{-1}\mbox{$\boldsymbol{X}$}_{t})/\partial\mbox{$\mathbf{Q}$}]+\,\textup{{E}}[\partial((\mbox{$\mathbf{B}$}\mbox{$\boldsymbol{X}$}_{t-1})^{\top}\mbox{$\mathbf{Q}$}^{-1}\mbox{$\mathbf{B}$}\mbox{$\boldsymbol{X}$}_{t-1})/\partial\mbox{$\mathbf{Q}$}]\\\
&\quad+\,\textup{{E}}[\partial((\mbox{$\mathbf{B}$}\mbox{$\boldsymbol{X}$}_{t-1})^{\top}\mbox{$\mathbf{Q}$}^{-1}\mbox{$\mathbf{u}$})/\partial\mbox{$\mathbf{Q}$}]+\,\textup{{E}}[\partial(\mbox{$\mathbf{u}$}^{\top}\mbox{$\mathbf{Q}$}^{-1}\mbox{$\mathbf{B}$}\mbox{$\boldsymbol{X}$}_{t-1})/\partial\mbox{$\mathbf{Q}$}]\\\
&\quad+\partial(\mbox{$\mathbf{u}$}^{\top}\mbox{$\mathbf{Q}$}^{-1}\mbox{$\mathbf{u}$})/\partial\mbox{$\mathbf{Q}$}\bigg{)}-\partial\bigg{(}\frac{T}{2}\log|\mbox{$\mathbf{Q}$}|\bigg{)}/\partial\mbox{$\mathbf{Q}$}\\\
\end{split}$ (31)
The matrix derivative relations (table 2) are used to do the differentiation.
Notice that all the terms in the summation are of the form
$\mbox{$\mathbf{c}$}^{\top}\mbox{$\mathbf{Q}$}^{-1}\mbox{$\mathbf{b}$}$, and
thus after differentiation, all the
$\mbox{$\mathbf{c}$}^{\top}\mbox{$\mathbf{b}$}$ terms can be grouped inside
one set of parentheses. Also there is a minus sign that comes from the
derivative of $\mbox{$\mathbf{c}$}^{\top}\mbox{$\mathbf{Q}$}^{-1}\mbox{$\mathbf{b}$}$
with respect to $\mathbf{Q}$, and it cancels out the minus in front of the
initial $-1/2$.
$\begin{split}&\partial\Psi/\partial\mbox{$\mathbf{Q}$}=\frac{1}{2}\sum_{t=1}^{T}\mbox{$\mathbf{Q}$}^{-1}\bigg{(}\,\textup{{E}}[\mbox{$\boldsymbol{X}$}_{t}\mbox{$\boldsymbol{X}$}_{t}^{\top}]-\,\textup{{E}}[\mbox{$\boldsymbol{X}$}_{t}(\mbox{$\mathbf{B}$}\mbox{$\boldsymbol{X}$}_{t-1})^{\top}]-\,\textup{{E}}[\mbox{$\mathbf{B}$}\mbox{$\boldsymbol{X}$}_{t-1}\mbox{$\boldsymbol{X}$}_{t}^{\top}]-\,\textup{{E}}[\mbox{$\boldsymbol{X}$}_{t}\mbox{$\mathbf{u}$}^{\top}]-\,\textup{{E}}[\mbox{$\mathbf{u}$}\mbox{$\boldsymbol{X}$}_{t}^{\top}]\\\
&\quad+\,\textup{{E}}[\mbox{$\mathbf{B}$}\mbox{$\boldsymbol{X}$}_{t-1}(\mbox{$\mathbf{B}$}\mbox{$\boldsymbol{X}$}_{t-1})^{\top}]+\,\textup{{E}}[\mbox{$\mathbf{B}$}\mbox{$\boldsymbol{X}$}_{t-1}\mbox{$\mathbf{u}$}^{\top}]+\,\textup{{E}}[\mbox{$\mathbf{u}$}(\mbox{$\mathbf{B}$}\mbox{$\boldsymbol{X}$}_{t-1})^{\top}]+\mbox{$\mathbf{u}$}\mbox{$\mathbf{u}$}^{\top}\bigg{)}\mbox{$\mathbf{Q}$}^{-1}-\frac{T}{2}\mbox{$\mathbf{Q}$}^{-1}\end{split}$
(32)
Pulling the parameters out of the expectations and using
$(\mbox{$\mathbf{B}$}\mbox{$\boldsymbol{X}$}_{t})^{\top}=\mbox{$\boldsymbol{X}$}_{t}^{\top}\mbox{$\mathbf{B}$}^{\top}$,
we have
$\begin{split}&\partial\Psi/\partial\mbox{$\mathbf{Q}$}=\frac{1}{2}\sum_{t=1}^{T}\mbox{$\mathbf{Q}$}^{-1}\bigg{(}\,\textup{{E}}[\mbox{$\boldsymbol{X}$}_{t}\mbox{$\boldsymbol{X}$}_{t}^{\top}]-\,\textup{{E}}[\mbox{$\boldsymbol{X}$}_{t}\mbox{$\boldsymbol{X}$}_{t-1}^{\top}]\mbox{$\mathbf{B}$}^{\top}-\mbox{$\mathbf{B}$}\,\textup{{E}}[\mbox{$\boldsymbol{X}$}_{t-1}\mbox{$\boldsymbol{X}$}_{t}^{\top}]-\,\textup{{E}}[\mbox{$\boldsymbol{X}$}_{t}]\mbox{$\mathbf{u}$}^{\top}-\mbox{$\mathbf{u}$}\,\textup{{E}}[\mbox{$\boldsymbol{X}$}_{t}^{\top}]\\\
&\quad+\mbox{$\mathbf{B}$}\,\textup{{E}}[\mbox{$\boldsymbol{X}$}_{t-1}\mbox{$\boldsymbol{X}$}_{t-1}^{\top}]\mbox{$\mathbf{B}$}^{\top}+\mbox{$\mathbf{B}$}\,\textup{{E}}[\mbox{$\boldsymbol{X}$}_{t-1}]\mbox{$\mathbf{u}$}^{\top}+\mbox{$\mathbf{u}$}\,\textup{{E}}[\mbox{$\boldsymbol{X}$}_{t-1}^{\top}]\mbox{$\mathbf{B}$}^{\top}+\mbox{$\mathbf{u}$}\mbox{$\mathbf{u}$}^{\top}\bigg{)}\mbox{$\mathbf{Q}$}^{-1}-\frac{T}{2}\mbox{$\mathbf{Q}$}^{-1}\end{split}$
(33)
The partial derivative is then rewritten in terms of the Kalman smoother
output:
$\begin{split}&\partial\Psi/\partial\mbox{$\mathbf{Q}$}=\frac{1}{2}\sum_{t=1}^{T}\mbox{$\mathbf{Q}$}^{-1}\bigg{(}\widetilde{\mbox{$\mathbf{P}$}}_{t}-\widetilde{\mbox{$\mathbf{P}$}}_{t,t-1}\mbox{$\mathbf{B}$}^{\top}-\mbox{$\mathbf{B}$}\widetilde{\mbox{$\mathbf{P}$}}_{t-1,t}-\widetilde{\mbox{$\mathbf{x}$}}_{t}\mbox{$\mathbf{u}$}^{\top}-\mbox{$\mathbf{u}$}\widetilde{\mbox{$\mathbf{x}$}}_{t}^{\top}\\\
&\quad+\mbox{$\mathbf{B}$}\widetilde{\mbox{$\mathbf{P}$}}_{t-1}\mbox{$\mathbf{B}$}^{\top}+\mbox{$\mathbf{B}$}\widetilde{\mbox{$\mathbf{x}$}}_{t-1}\mbox{$\mathbf{u}$}^{\top}+\mbox{$\mathbf{u}$}\widetilde{\mbox{$\mathbf{x}$}}_{t-1}^{\top}\mbox{$\mathbf{B}$}^{\top}+\mbox{$\mathbf{u}$}\mbox{$\mathbf{u}$}^{\top}\bigg{)}\mbox{$\mathbf{Q}$}^{-1}-\frac{T}{2}\mbox{$\mathbf{Q}$}^{-1}\end{split}$
(34)
Setting this to zero (an $m\times m$ matrix of zeros), we cancel out
$\mbox{$\mathbf{Q}$}^{-1}$ by multiplying by $\mathbf{Q}$ twice, once on the
left and once on the right, and remove the $1/2$:
$\begin{split}T\mbox{$\mathbf{Q}$}=\sum_{t=1}^{T}\bigg{(}\widetilde{\mbox{$\mathbf{P}$}}_{t}-\widetilde{\mbox{$\mathbf{P}$}}_{t,t-1}\mbox{$\mathbf{B}$}^{\top}-\mbox{$\mathbf{B}$}\widetilde{\mbox{$\mathbf{P}$}}_{t-1,t}-\widetilde{\mbox{$\mathbf{x}$}}_{t}\mbox{$\mathbf{u}$}^{\top}-\mbox{$\mathbf{u}$}\widetilde{\mbox{$\mathbf{x}$}}_{t}^{\top}+\mbox{$\mathbf{B}$}\widetilde{\mbox{$\mathbf{P}$}}_{t-1}\mbox{$\mathbf{B}$}^{\top}+\mbox{$\mathbf{B}$}\widetilde{\mbox{$\mathbf{x}$}}_{t-1}\mbox{$\mathbf{u}$}^{\top}+\mbox{$\mathbf{u}$}\widetilde{\mbox{$\mathbf{x}$}}_{t-1}^{\top}\mbox{$\mathbf{B}$}^{\top}+\mbox{$\mathbf{u}$}\mbox{$\mathbf{u}$}^{\top}\bigg{)}\end{split}$
(35)
This gives us the new $\mathbf{Q}$ that maximizes $\Psi$,
$\begin{split}&\mbox{$\mathbf{Q}$}_{j+1}=\frac{1}{T}\sum_{t=1}^{T}\bigg{(}\widetilde{\mbox{$\mathbf{P}$}}_{t}-\widetilde{\mbox{$\mathbf{P}$}}_{t,t-1}\mbox{$\mathbf{B}$}^{\top}-\mbox{$\mathbf{B}$}\widetilde{\mbox{$\mathbf{P}$}}_{t-1,t}-\widetilde{\mbox{$\mathbf{x}$}}_{t}\mbox{$\mathbf{u}$}^{\top}-\mbox{$\mathbf{u}$}\widetilde{\mbox{$\mathbf{x}$}}_{t}^{\top}\\\
&\quad+\mbox{$\mathbf{B}$}\widetilde{\mbox{$\mathbf{P}$}}_{t-1}\mbox{$\mathbf{B}$}^{\top}+\mbox{$\mathbf{B}$}\widetilde{\mbox{$\mathbf{x}$}}_{t-1}\mbox{$\mathbf{u}$}^{\top}+\mbox{$\mathbf{u}$}\widetilde{\mbox{$\mathbf{x}$}}_{t-1}^{\top}\mbox{$\mathbf{B}$}^{\top}+\mbox{$\mathbf{u}$}\mbox{$\mathbf{u}$}^{\top}\bigg{)}\end{split}$
(36)
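A sketch of equation (36), accumulating the terms for each $t$; note that $\widetilde{\mbox{$\mathbf{P}$}}_{t-1,t}=\widetilde{\mbox{$\mathbf{P}$}}_{t,t-1}^{\top}$. The array names and placeholder values are illustrative only; in practice they come from the Kalman smoother.

```python
import numpy as np

m, T = 3, 20
rng = np.random.default_rng(3)
B = 0.8 * np.eye(m)                              # B and u at their current values (placeholders)
u = rng.standard_normal((m, 1))
xtT = rng.standard_normal((m, T + 1))            # x~_t
PtT = [np.eye(m) for _ in range(T + 1)]          # stand-ins for P~_t
Pttm1 = [0.5 * np.eye(m) for _ in range(T + 1)]  # stand-ins for P~_{t,t-1}; P~_{t-1,t} = P~_{t,t-1}^T

# Equation (36), accumulated term by term over t = 1..T
Q_new = np.zeros((m, m))
for t in range(1, T + 1):
    xt, xtm1 = xtT[:, [t]], xtT[:, [t - 1]]
    Q_new += (PtT[t] - Pttm1[t] @ B.T - B @ Pttm1[t].T
              - xt @ u.T - u @ xt.T
              + B @ PtT[t - 1] @ B.T + B @ xtm1 @ u.T + u @ xtm1.T @ B.T + u @ u.T)
Q_new /= T
```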
This derivation immediately generalizes to the case where $\mathbf{Q}$ is a
block diagonal matrix:
$\mbox{$\mathbf{Q}$}=\begin{bmatrix}q_{1,1}&q_{1,2}&q_{1,3}&0&0&0&0&0\\\
q_{1,2}&q_{2,2}&q_{2,3}&0&0&0&0&0\\\ q_{1,3}&q_{2,3}&q_{3,3}&0&0&0&0&0\\\
0&0&0&q_{4,4}&q_{4,5}&0&0&0\\\ 0&0&0&q_{4,5}&q_{5,5}&0&0&0\\\
0&0&0&0&0&q_{6,6}&q_{6,7}&q_{6,8}\\\ 0&0&0&0&0&q_{6,7}&q_{7,7}&q_{7,8}\\\
0&0&0&0&0&q_{6,8}&q_{7,8}&q_{8,8}\end{bmatrix}=\begin{bmatrix}\mbox{$\mathbf{Q}$}_{1}&0&0\\\
0&\mbox{$\mathbf{Q}$}_{2}&0\\\ 0&0&\mbox{$\mathbf{Q}$}_{3}\\\ \end{bmatrix}$
In this case,
$\begin{split}&\mbox{$\mathbf{Q}$}_{i,j+1}=\frac{1}{T}\sum_{t=1}^{T}\bigg{(}\widetilde{\mbox{$\mathbf{P}$}}_{t}-\widetilde{\mbox{$\mathbf{P}$}}_{t,t-1}\mbox{$\mathbf{B}$}^{\top}-\mbox{$\mathbf{B}$}\widetilde{\mbox{$\mathbf{P}$}}_{t-1,t}-\widetilde{\mbox{$\mathbf{x}$}}_{t}\mbox{$\mathbf{u}$}^{\top}-\mbox{$\mathbf{u}$}\widetilde{\mbox{$\mathbf{x}$}}_{t}^{\top}\\\
&\quad+\mbox{$\mathbf{B}$}\widetilde{\mbox{$\mathbf{P}$}}_{t-1}\mbox{$\mathbf{B}$}^{\top}+\mbox{$\mathbf{B}$}\widetilde{\mbox{$\mathbf{x}$}}_{t-1}\mbox{$\mathbf{u}$}^{\top}+\mbox{$\mathbf{u}$}\widetilde{\mbox{$\mathbf{x}$}}_{t-1}^{\top}\mbox{$\mathbf{B}$}^{\top}+\mbox{$\mathbf{u}$}\mbox{$\mathbf{u}$}^{\top}\bigg{)}_{i}\end{split}$
(37)
where the subscript $i$ means take the elements of the matrix (in the big
parentheses) that are analogous to $\mbox{$\mathbf{Q}$}_{i}$; take the whole
summed matrix within the parentheses, not the individual matrices inside the
parentheses. If $\mbox{$\mathbf{Q}$}_{i}$ is comprised of rows $a$ to $b$ and
columns $c$ to $d$ of matrix $\mathbf{Q}$, then take rows $a$ to $b$ and
columns $c$ to $d$ of the matrices subscripted by $i$ in equation (37).
By the way, $\mathbf{Q}$ is never really unconstrained since it is a variance-
covariance matrix and the upper and lower triangles are shared. However,
because the shared values are only the symmetric values in the matrix, the
derivation still works even though it is technically incorrect (Henderson and
Searle, 1979). The constrained update equation for $\mathbf{Q}$ shown in
section 5.8 explicitly deals with the shared lower and upper triangles.
### 3.5 Update equation for $\mathbf{a}$ (unconstrained)
Take the derivative of $\Psi$ with respect to $\mathbf{a}$, where $\mathbf{a}$
is an $n\times 1$ matrix. Terms not involving $\mathbf{a}$ equal 0 and drop
out.
$\begin{split}&\partial\Psi/\partial\mbox{$\mathbf{a}$}=-\frac{1}{2}\sum_{t=1}^{T}\bigg{(}-\partial(\,\textup{{E}}[\mbox{$\boldsymbol{Y}$}_{t}^{\top}\mbox{$\mathbf{R}$}^{-1}\mbox{$\mathbf{a}$}])/\partial\mbox{$\mathbf{a}$}-\partial(\,\textup{{E}}[\mbox{$\mathbf{a}$}^{\top}\mbox{$\mathbf{R}$}^{-1}\mbox{$\boldsymbol{Y}$}_{t}])/\partial\mbox{$\mathbf{a}$}\\\
&\quad+\partial(\,\textup{{E}}[(\mbox{$\mathbf{Z}$}\mbox{$\boldsymbol{X}$}_{t})^{\top}\mbox{$\mathbf{R}$}^{-1}\mbox{$\mathbf{a}$}])/\partial\mbox{$\mathbf{a}$}+\partial(\,\textup{{E}}[\mbox{$\mathbf{a}$}^{\top}\mbox{$\mathbf{R}$}^{-1}\mbox{$\mathbf{Z}$}\mbox{$\boldsymbol{X}$}_{t}])/\partial\mbox{$\mathbf{a}$}+\partial(\,\textup{{E}}[\mbox{$\mathbf{a}$}^{\top}\mbox{$\mathbf{R}$}^{-1}\mbox{$\mathbf{a}$}])/\partial\mbox{$\mathbf{a}$}\bigg{)}\end{split}$
(38)
The expectations around constants can be dropped because
$\,\textup{{E}}_{\text{{\bf XY}}}(C)=C$, where $C$ is a constant. Using the
matrix derivative relations (table 2) and using
$\mbox{$\mathbf{R}$}^{-1}=(\mbox{$\mathbf{R}$}^{-1})^{\top}$, we then have
$\begin{split}&\partial\Psi/\partial\mbox{$\mathbf{a}$}=-\frac{1}{2}\sum_{t=1}^{T}\bigg{(}-\,\textup{{E}}[\mbox{$\boldsymbol{Y}$}_{t}^{\top}\mbox{$\mathbf{R}$}^{-1}]-\,\textup{{E}}[\mbox{$\boldsymbol{Y}$}_{t}^{\top}\mbox{$\mathbf{R}$}^{-1}]+\,\textup{{E}}[(\mbox{$\mathbf{Z}$}\mbox{$\boldsymbol{X}$}_{t})^{\top}\mbox{$\mathbf{R}$}^{-1}]+\,\textup{{E}}[(\mbox{$\mathbf{Z}$}\mbox{$\boldsymbol{X}$}_{t})^{\top}\mbox{$\mathbf{R}$}^{-1}]+2\mbox{$\mathbf{a}$}^{\top}\mbox{$\mathbf{R}$}^{-1}\bigg{)}\end{split}$
(39)
Pull the parameters out of the expectations, use
$(\mbox{$\mathbf{a}$}\mbox{$\mathbf{b}$})^{\top}=\mbox{$\mathbf{b}$}^{\top}\mbox{$\mathbf{a}$}^{\top}$
and $\mbox{$\mathbf{R}$}^{-1}=(\mbox{$\mathbf{R}$}^{-1})^{\top}$ where needed,
and remove the $-1/2$ to get
$\begin{split}&\partial\Psi/\partial\mbox{$\mathbf{a}$}=\sum_{t=1}^{T}\bigg{(}\,\textup{{E}}[\mbox{$\boldsymbol{Y}$}_{t}]^{\top}\mbox{$\mathbf{R}$}^{-1}-\,\textup{{E}}[\mbox{$\boldsymbol{X}$}_{t}]^{\top}\mbox{$\mathbf{Z}$}^{\top}\mbox{$\mathbf{R}$}^{-1}-\mbox{$\mathbf{a}$}^{\top}\mbox{$\mathbf{R}$}^{-1}\bigg{)}\end{split}$
(40)
Set the left side to zero (a $1\times n$ matrix of zeros), take the transpose,
and cancel out $\mbox{$\mathbf{R}$}^{-1}$ by multiplying by $\mathbf{R}$,
giving
$\mathbf{0}=\sum_{t=1}^{T}\big{(}\,\textup{{E}}[\mbox{$\boldsymbol{Y}$}_{t}]-\mbox{$\mathbf{Z}$}\,\textup{{E}}[\mbox{$\boldsymbol{X}$}_{t}]-\mbox{$\mathbf{a}$}\big{)}=\sum_{t=1}^{T}\big{(}\widetilde{\mbox{$\mathbf{y}$}}_{t}-\mbox{$\mathbf{Z}$}\widetilde{\mbox{$\mathbf{x}$}}_{t}-\mbox{$\mathbf{a}$}\big{)}$
(41)
Solving for $\mathbf{a}$ gives us the update equation for $\mathbf{a}$:
$\mbox{$\mathbf{a}$}_{j+1}=\frac{1}{T}\sum_{t=1}^{T}\big{(}\widetilde{\mbox{$\mathbf{y}$}}_{t}-\mbox{$\mathbf{Z}$}\widetilde{\mbox{$\mathbf{x}$}}_{t}\big{)}$
(42)
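A minimal sketch of equation (42); `ytT` holds $\widetilde{\mathbf{y}}_{t}$ for $t=1,\dots,T$ (the data, with any missing values replaced by their expected values) and `xtT` holds $\widetilde{\mathbf{x}}_{t}$. Names and values are placeholders.

```python
import numpy as np

n, m, T = 4, 3, 20
rng = np.random.default_rng(4)
Z = rng.standard_normal((n, m))         # Z at its current value (placeholder)
ytT = rng.standard_normal((n, T))       # y~_t, t = 1..T (placeholder values)
xtT = rng.standard_normal((m, T + 1))   # x~_t, t = 0..T

# Equation (42): a_{j+1} = (1/T) * sum_{t=1}^{T} (y~_t - Z x~_t)
a_new = np.mean(ytT - Z @ xtT[:, 1:], axis=1, keepdims=True)
print(a_new.shape)   # (n, 1)
```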
### 3.6 The update equation for $\mathbf{Z}$ (unconstrained)
Take the derivative of $\Psi$ with respect to $\mathbf{Z}$. Terms not
involving $\mathbf{Z}$ equal 0 and drop out. The expectations around terms
involving only constants have been dropped.
$\begin{split}&\partial\Psi/\partial\mbox{$\mathbf{Z}$}=\text{(note
$\partial\mbox{$\mathbf{Z}$}$ is $m\times n$ while $\mbox{$\mathbf{Z}$}$ is
$n\times m$)}\\\
&\quad-\frac{1}{2}\sum_{t=1}^{T}\bigg{(}-\,\textup{{E}}[\partial(\mbox{$\boldsymbol{Y}$}_{t}^{\top}\mbox{$\mathbf{R}$}^{-1}\mbox{$\mathbf{Z}$}\mbox{$\boldsymbol{X}$}_{t})/\partial\mbox{$\mathbf{Z}$}]-\,\textup{{E}}[\partial((\mbox{$\mathbf{Z}$}\mbox{$\boldsymbol{X}$}_{t})^{\top}\mbox{$\mathbf{R}$}^{-1}\mbox{$\boldsymbol{Y}$}_{t})/\partial\mbox{$\mathbf{Z}$}]+\,\textup{{E}}[\partial((\mbox{$\mathbf{Z}$}\mbox{$\boldsymbol{X}$}_{t})^{\top}\mbox{$\mathbf{R}$}^{-1}\mbox{$\mathbf{Z}$}\mbox{$\boldsymbol{X}$}_{t})/\partial\mbox{$\mathbf{Z}$}]\\\
&\quad+\,\textup{{E}}[\partial((\mbox{$\mathbf{Z}$}\mbox{$\boldsymbol{X}$}_{t})^{\top}\mbox{$\mathbf{R}$}^{-1}\mbox{$\mathbf{a}$})/\partial\mbox{$\mathbf{Z}$}]+\,\textup{{E}}[\partial(\mbox{$\mathbf{a}$}^{\top}\mbox{$\mathbf{R}$}^{-1}\mbox{$\mathbf{Z}$}\mbox{$\boldsymbol{X}$}_{t})/\partial\mbox{$\mathbf{Z}$}]\bigg{)}\\\
&=-\frac{1}{2}\sum_{t=1}^{T}\bigg{(}-\,\textup{{E}}[\partial(\mbox{$\boldsymbol{Y}$}_{t}^{\top}\mbox{$\mathbf{R}$}^{-1}\mbox{$\mathbf{Z}$}\mbox{$\boldsymbol{X}$}_{t})/\partial\mbox{$\mathbf{Z}$}]-\,\textup{{E}}[\partial(\mbox{$\boldsymbol{X}$}_{t}^{\top}\mbox{$\mathbf{Z}$}^{\top}\mbox{$\mathbf{R}$}^{-1}\mbox{$\boldsymbol{Y}$}_{t})/\partial\mbox{$\mathbf{Z}$}]+\,\textup{{E}}[\partial(\mbox{$\boldsymbol{X}$}_{t}^{\top}\mbox{$\mathbf{Z}$}^{\top}\mbox{$\mathbf{R}$}^{-1}\mbox{$\mathbf{Z}$}\mbox{$\boldsymbol{X}$}_{t})/\partial\mbox{$\mathbf{Z}$}]\\\
&\quad+\,\textup{{E}}[\partial(\mbox{$\boldsymbol{X}$}_{t}^{\top}\mbox{$\mathbf{Z}$}^{\top}\mbox{$\mathbf{R}$}^{-1}\mbox{$\mathbf{a}$})/\partial\mbox{$\mathbf{Z}$}]+\,\textup{{E}}[\partial(\mbox{$\mathbf{a}$}^{\top}\mbox{$\mathbf{R}$}^{-1}\mbox{$\mathbf{Z}$}\mbox{$\boldsymbol{X}$}_{t})/\partial\mbox{$\mathbf{Z}$}]\bigg{)}\\\
\end{split}$ (43)
Using the matrix derivative relations (table 2) and using
$\mbox{$\mathbf{R}$}^{-1}=(\mbox{$\mathbf{R}$}^{-1})^{\top}$, we get
$\begin{split}\partial\Psi/\partial\mbox{$\mathbf{Z}$}=-\frac{1}{2}\sum_{t=1}^{T}\bigg{(}-\,\textup{{E}}[\mbox{$\boldsymbol{X}$}_{t}\mbox{$\boldsymbol{Y}$}_{t}^{\top}\mbox{$\mathbf{R}$}^{-1}]-&\,\textup{{E}}[\mbox{$\boldsymbol{X}$}_{t}\mbox{$\boldsymbol{Y}$}_{t}^{\top}\mbox{$\mathbf{R}$}^{-1}]\\\
&+2\,\textup{{E}}[\mbox{$\boldsymbol{X}$}_{t}\mbox{$\boldsymbol{X}$}_{t}^{\top}\mbox{$\mathbf{Z}$}^{\top}\mbox{$\mathbf{R}$}^{-1}]+\,\textup{{E}}[\mbox{$\boldsymbol{X}$}_{t}\mbox{$\mathbf{a}$}^{\top}\mbox{$\mathbf{R}$}^{-1}]+\,\textup{{E}}[\mbox{$\boldsymbol{X}$}_{t}\mbox{$\mathbf{a}$}^{\top}\mbox{$\mathbf{R}$}^{-1}]\bigg{)}\end{split}$
(44)
Pulling the parameters out of the expectations and getting rid of the $-1/2$,
we have
$\begin{split}&\partial\Psi/\partial\mbox{$\mathbf{Z}$}=\sum_{t=1}^{T}\bigg{(}\,\textup{{E}}[\mbox{$\boldsymbol{X}$}_{t}\mbox{$\boldsymbol{Y}$}_{t}^{\top}]\mbox{$\mathbf{R}$}^{-1}-\,\textup{{E}}[\mbox{$\boldsymbol{X}$}_{t}\mbox{$\boldsymbol{X}$}_{t}^{\top}]\mbox{$\mathbf{Z}$}^{\top}\mbox{$\mathbf{R}$}^{-1}-\,\textup{{E}}[\mbox{$\boldsymbol{X}$}_{t}]\mbox{$\mathbf{a}$}^{\top}\mbox{$\mathbf{R}$}^{-1}\bigg{)}\\\
\end{split}$ (45)
Set the left side to zero (an $m\times n$ matrix of zeros), transpose it all,
and cancel out $\mbox{$\mathbf{R}$}^{-1}$ by multiplying by $\mathbf{R}$ on
the left, to give
$\begin{split}\mathbf{0}=\sum_{t=1}^{T}\big{(}\,\textup{{E}}[\mbox{$\boldsymbol{Y}$}_{t}\mbox{$\boldsymbol{X}$}_{t}^{\top}]-\mbox{$\mathbf{Z}$}\,\textup{{E}}[\mbox{$\boldsymbol{X}$}_{t}\mbox{$\boldsymbol{X}$}_{t}^{\top}]-\mbox{$\mathbf{a}$}\,\textup{{E}}[\mbox{$\boldsymbol{X}$}_{t}^{\top}]\big{)}=\sum_{t=1}^{T}\big{(}\widetilde{\mbox{$\mathbf{y}\mathbf{x}$}}_{t}-\mbox{$\mathbf{Z}$}\widetilde{\mbox{$\mathbf{P}$}}_{t}-\mbox{$\mathbf{a}$}\widetilde{\mbox{$\mathbf{x}$}}_{t}^{\top}\big{)}\end{split}$
(46)
Solving for $\mathbf{Z}$ and noting that $\widetilde{\mbox{$\mathbf{P}$}}_{t}$
is invertible, gives us the new $\mathbf{Z}$:
$\mbox{$\mathbf{Z}$}_{j+1}=\bigg{(}\sum_{t=1}^{T}\big{(}\widetilde{\mbox{$\mathbf{y}\mathbf{x}$}}_{t}-\mbox{$\mathbf{a}$}\widetilde{\mbox{$\mathbf{x}$}}_{t}^{\top}\big{)}\bigg{)}\bigg{(}\sum_{t=1}^{T}\widetilde{\mbox{$\mathbf{P}$}}_{t}\bigg{)}^{-1}$
(47)
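A sketch of equation (47). The quantities $\widetilde{\mbox{$\mathbf{y}\mathbf{x}$}}_{t}=\,\textup{{E}}[\mbox{$\boldsymbol{Y}$}_{t}\mbox{$\boldsymbol{X}$}_{t}^{\top}]$ and $\widetilde{\mbox{$\mathbf{P}$}}_{t}$ come from the smoother; the stand-in arrays below are placeholders so the sketch runs.

```python
import numpy as np

n, m, T = 4, 3, 20
rng = np.random.default_rng(5)
a = rng.standard_normal((n, 1))
xtT = rng.standard_normal((m, T + 1))                            # x~_t, t = 0..T
ytT = rng.standard_normal((n, T))                                # y~_t, t = 1..T
PtT = [np.eye(m) for _ in range(T + 1)]                          # stand-ins for P~_t = E[X_t X_t^T]
yx = [ytT[:, [t - 1]] @ xtT[:, [t]].T for t in range(1, T + 1)]  # stand-ins for yx~_t = E[Y_t X_t^T]

# Equation (47): Z_{j+1} = ( sum_t (yx~_t - a x~_t^T) ) ( sum_t P~_t )^{-1}
num = sum(yx[t - 1] - a @ xtT[:, [t]].T for t in range(1, T + 1))
den = sum(PtT[t] for t in range(1, T + 1))
Z_new = num @ np.linalg.inv(den)
```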
### 3.7 The update equation for $\mathbf{R}$ (unconstrained)
Take the derivative of $\Psi$ with respect to $\mathbf{R}$. Terms not
involving $\mathbf{R}$ equal 0 and drop out. The expectations around terms
involving constants have been removed.
$\begin{split}&\partial\Psi/\partial\mbox{$\mathbf{R}$}=-\frac{1}{2}\sum_{t=1}^{T}\bigg{(}\,\textup{{E}}[\partial(\mbox{$\boldsymbol{Y}$}_{t}^{\top}\mbox{$\mathbf{R}$}^{-1}\mbox{$\boldsymbol{Y}$}_{t})/\partial\mbox{$\mathbf{R}$}]-\,\textup{{E}}[\partial(\mbox{$\boldsymbol{Y}$}_{t}^{\top}\mbox{$\mathbf{R}$}^{-1}\mbox{$\mathbf{Z}$}\mbox{$\boldsymbol{X}$}_{t})/\partial\mbox{$\mathbf{R}$}]-\,\textup{{E}}[\partial((\mbox{$\mathbf{Z}$}\mbox{$\boldsymbol{X}$}_{t})^{\top}\mbox{$\mathbf{R}$}^{-1}\mbox{$\boldsymbol{Y}$}_{t})/\partial\mbox{$\mathbf{R}$}]\\\
&\quad-\,\textup{{E}}[\partial(\mbox{$\boldsymbol{Y}$}_{t}^{\top}\mbox{$\mathbf{R}$}^{-1}\mbox{$\mathbf{a}$})/\partial\mbox{$\mathbf{R}$}]-\,\textup{{E}}[\partial(\mbox{$\mathbf{a}$}^{\top}\mbox{$\mathbf{R}$}^{-1}\mbox{$\boldsymbol{Y}$}_{t})/\partial\mbox{$\mathbf{R}$}]+\,\textup{{E}}[\partial((\mbox{$\mathbf{Z}$}\mbox{$\boldsymbol{X}$}_{t})^{\top}\mbox{$\mathbf{R}$}^{-1}\mbox{$\mathbf{Z}$}\mbox{$\boldsymbol{X}$}_{t})/\partial\mbox{$\mathbf{R}$}]\\\
&\quad+\,\textup{{E}}[\partial((\mbox{$\mathbf{Z}$}\mbox{$\boldsymbol{X}$}_{t})^{\top}\mbox{$\mathbf{R}$}^{-1}\mbox{$\mathbf{a}$})/\partial\mbox{$\mathbf{R}$}]+\,\textup{{E}}[\partial(\mbox{$\mathbf{a}$}^{\top}\mbox{$\mathbf{R}$}^{-1}\mbox{$\mathbf{Z}$}\mbox{$\boldsymbol{X}$}_{t})/\partial\mbox{$\mathbf{R}$}]+\partial(\mbox{$\mathbf{a}$}^{\top}\mbox{$\mathbf{R}$}^{-1}\mbox{$\mathbf{a}$})/\partial\mbox{$\mathbf{R}$}\bigg{)}-\partial\big{(}\frac{T}{2}\log|\mbox{$\mathbf{R}$}|\big{)}/\partial\mbox{$\mathbf{R}$}\end{split}$
(48)
We use the matrix derivative relations (table 2) to do the differentiation.
Notice that all the terms in the summation are of the form
$\mbox{$\mathbf{c}$}^{\top}\mbox{$\mathbf{R}$}^{-1}\mbox{$\mathbf{b}$}$, and
thus after differentiation, we group all the
$\mbox{$\mathbf{c}$}^{\top}\mbox{$\mathbf{b}$}$ terms inside one set of
parentheses. Also there is a minus sign that comes from the derivative of
$\mbox{$\mathbf{c}$}^{\top}\mbox{$\mathbf{R}$}^{-1}\mbox{$\mathbf{b}$}$ with
respect to $\mathbf{R}$, and it cancels out the minus in front of the $-1/2$.
$\begin{split}&\partial\Psi/\partial\mbox{$\mathbf{R}$}=\frac{1}{2}\sum_{t=1}^{T}\mbox{$\mathbf{R}$}^{-1}\bigg{(}\,\textup{{E}}[\mbox{$\boldsymbol{Y}$}_{t}\mbox{$\boldsymbol{Y}$}_{t}^{\top}]-\,\textup{{E}}[\mbox{$\boldsymbol{Y}$}_{t}(\mbox{$\mathbf{Z}$}\mbox{$\boldsymbol{X}$}_{t})^{\top}]-\,\textup{{E}}[\mbox{$\mathbf{Z}$}\mbox{$\boldsymbol{X}$}_{t}\mbox{$\boldsymbol{Y}$}_{t}^{\top}]-\,\textup{{E}}[\mbox{$\boldsymbol{Y}$}_{t}\mbox{$\mathbf{a}$}^{\top}]-\,\textup{{E}}[\mbox{$\mathbf{a}$}\mbox{$\boldsymbol{Y}$}_{t}^{\top}]\\\
&\quad+\,\textup{{E}}[\mbox{$\mathbf{Z}$}\mbox{$\boldsymbol{X}$}_{t}(\mbox{$\mathbf{Z}$}\mbox{$\boldsymbol{X}$}_{t})^{\top}]+\,\textup{{E}}[\mbox{$\mathbf{Z}$}\mbox{$\boldsymbol{X}$}_{t}\mbox{$\mathbf{a}$}^{\top}]+\,\textup{{E}}[\mbox{$\mathbf{a}$}(\mbox{$\mathbf{Z}$}\mbox{$\boldsymbol{X}$}_{t})^{\top}]+\mbox{$\mathbf{a}$}\mbox{$\mathbf{a}$}^{\top}\bigg{)}\mbox{$\mathbf{R}$}^{-1}-\frac{T}{2}\mbox{$\mathbf{R}$}^{-1}\end{split}$
(49)
Pulling the parameters out of the expectations and using
$(\mbox{$\mathbf{Z}$}\mbox{$\boldsymbol{X}$}_{t})^{\top}=\mbox{$\boldsymbol{X}$}_{t}^{\top}\mbox{$\mathbf{Z}$}^{\top}$,
we have
$\begin{split}&\partial\Psi/\partial\mbox{$\mathbf{R}$}=\frac{1}{2}\sum_{t=1}^{T}\mbox{$\mathbf{R}$}^{-1}\bigg{(}\,\textup{{E}}[\mbox{$\boldsymbol{Y}$}_{t}\mbox{$\boldsymbol{Y}$}_{t}^{\top}]-\,\textup{{E}}[\mbox{$\boldsymbol{Y}$}_{t}\mbox{$\boldsymbol{X}$}_{t}^{\top}]\mbox{$\mathbf{Z}$}^{\top}-\mbox{$\mathbf{Z}$}\,\textup{{E}}[\mbox{$\boldsymbol{X}$}_{t}\mbox{$\boldsymbol{Y}$}_{t}^{\top}]-\,\textup{{E}}[\mbox{$\boldsymbol{Y}$}_{t}]\mbox{$\mathbf{a}$}^{\top}-\mbox{$\mathbf{a}$}\,\textup{{E}}[\mbox{$\boldsymbol{Y}$}_{t}^{\top}]\\\
&\quad+\mbox{$\mathbf{Z}$}\,\textup{{E}}[\mbox{$\boldsymbol{X}$}_{t}\mbox{$\boldsymbol{X}$}_{t}^{\top}]\mbox{$\mathbf{Z}$}^{\top}+\mbox{$\mathbf{Z}$}\,\textup{{E}}[\mbox{$\boldsymbol{X}$}_{t}]\mbox{$\mathbf{a}$}^{\top}+\mbox{$\mathbf{a}$}\,\textup{{E}}[\mbox{$\boldsymbol{X}$}_{t}^{\top}]\mbox{$\mathbf{Z}$}^{\top}+\mbox{$\mathbf{a}$}\mbox{$\mathbf{a}$}^{\top}\bigg{)}\mbox{$\mathbf{R}$}^{-1}-\frac{T}{2}\mbox{$\mathbf{R}$}^{-1}\end{split}$
(50)
We rewrite the partial derivative in terms of the Kalman smoother output:
$\begin{split}&\partial\Psi/\partial\mbox{$\mathbf{R}$}=\frac{1}{2}\sum_{t=1}^{T}\mbox{$\mathbf{R}$}^{-1}\bigg{(}\widetilde{\mbox{$\mathbf{O}$}}_{t}-\widetilde{\mbox{$\mathbf{y}\mathbf{x}$}}_{t}\mbox{$\mathbf{Z}$}^{\top}-\mbox{$\mathbf{Z}$}\widetilde{\mbox{$\mathbf{y}\mathbf{x}$}}_{t}^{\top}-\widetilde{\mbox{$\mathbf{y}$}}_{t}\mbox{$\mathbf{a}$}^{\top}-\mbox{$\mathbf{a}$}\widetilde{\mbox{$\mathbf{y}$}}_{t}^{\top}\\\
&\quad+\mbox{$\mathbf{Z}$}\widetilde{\mbox{$\mathbf{P}$}}_{t}\mbox{$\mathbf{Z}$}^{\top}+\mbox{$\mathbf{Z}$}\widetilde{\mbox{$\mathbf{x}$}}_{t}\mbox{$\mathbf{a}$}^{\top}+\mbox{$\mathbf{a}$}\widetilde{\mbox{$\mathbf{x}$}}_{t}^{\top}\mbox{$\mathbf{Z}$}^{\top}+\mbox{$\mathbf{a}$}\mbox{$\mathbf{a}$}^{\top}\bigg{)}\mbox{$\mathbf{R}$}^{-1}-\frac{T}{2}\mbox{$\mathbf{R}$}^{-1}\end{split}$
(51)
Setting this to zero (an $n\times n$ matrix of zeros), we cancel out
$\mbox{$\mathbf{R}$}^{-1}$ by multiplying by $\mathbf{R}$ twice, once on the
left and once on the right, and get rid of the $1/2$.
$\begin{split}T\mbox{$\mathbf{R}$}=\sum_{t=1}^{T}\bigg{(}\widetilde{\mbox{$\mathbf{O}$}}_{t}-\widetilde{\mbox{$\mathbf{y}\mathbf{x}$}}_{t}\mbox{$\mathbf{Z}$}^{\top}-\mbox{$\mathbf{Z}$}\widetilde{\mbox{$\mathbf{y}\mathbf{x}$}}_{t}^{\top}-\widetilde{\mbox{$\mathbf{y}$}}_{t}\mbox{$\mathbf{a}$}^{\top}-\mbox{$\mathbf{a}$}\widetilde{\mbox{$\mathbf{y}$}}_{t}^{\top}+\mbox{$\mathbf{Z}$}\widetilde{\mbox{$\mathbf{P}$}}_{t}\mbox{$\mathbf{Z}$}^{\top}+\mbox{$\mathbf{Z}$}\widetilde{\mbox{$\mathbf{x}$}}_{t}\mbox{$\mathbf{a}$}^{\top}+\mbox{$\mathbf{a}$}\widetilde{\mbox{$\mathbf{x}$}}_{t}^{\top}\mbox{$\mathbf{Z}$}^{\top}+\mbox{$\mathbf{a}$}\mbox{$\mathbf{a}$}^{\top}\bigg{)}\end{split}$
(52)
We can then solve for $\mathbf{R}$, giving us the new $\mathbf{R}$ that
maximizes $\Psi$,
$\begin{split}\mbox{$\mathbf{R}$}_{j+1}=\frac{1}{T}\sum_{t=1}^{T}\bigg{(}\widetilde{\mbox{$\mathbf{O}$}}_{t}-\widetilde{\mbox{$\mathbf{y}\mathbf{x}$}}_{t}\mbox{$\mathbf{Z}$}^{\top}-\mbox{$\mathbf{Z}$}\widetilde{\mbox{$\mathbf{y}\mathbf{x}$}}_{t}^{\top}-\widetilde{\mbox{$\mathbf{y}$}}_{t}\mbox{$\mathbf{a}$}^{\top}-\mbox{$\mathbf{a}$}\widetilde{\mbox{$\mathbf{y}$}}_{t}^{\top}+\mbox{$\mathbf{Z}$}\widetilde{\mbox{$\mathbf{P}$}}_{t}\mbox{$\mathbf{Z}$}^{\top}+\mbox{$\mathbf{Z}$}\widetilde{\mbox{$\mathbf{x}$}}_{t}\mbox{$\mathbf{a}$}^{\top}+\mbox{$\mathbf{a}$}\widetilde{\mbox{$\mathbf{x}$}}_{t}^{\top}\mbox{$\mathbf{Z}$}^{\top}+\mbox{$\mathbf{a}$}\mbox{$\mathbf{a}$}^{\top}\bigg{)}\end{split}$
(53)
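A sketch of equation (53), in the same spirit as the $\mathbf{Q}$ sketch above; $\widetilde{\mbox{$\mathbf{O}$}}_{t}$, $\widetilde{\mbox{$\mathbf{y}\mathbf{x}$}}_{t}$, $\widetilde{\mbox{$\mathbf{y}$}}_{t}$, $\widetilde{\mbox{$\mathbf{x}$}}_{t}$ and $\widetilde{\mbox{$\mathbf{P}$}}_{t}$ would come from the smoother, and the stand-ins below are placeholders.

```python
import numpy as np

n, m, T = 4, 3, 20
rng = np.random.default_rng(6)
Z = rng.standard_normal((n, m))
a = rng.standard_normal((n, 1))
xtT = rng.standard_normal((m, T + 1))
ytT = rng.standard_normal((n, T))
PtT = [np.eye(m) for _ in range(T + 1)]                            # stand-ins for P~_t
OtT = [ytT[:, [t]] @ ytT[:, [t]].T + np.eye(n) for t in range(T)]  # stand-ins for O~_t = E[Y_t Y_t^T]
yx = [ytT[:, [t - 1]] @ xtT[:, [t]].T for t in range(1, T + 1)]    # stand-ins for yx~_t

# Equation (53), accumulated term by term over t = 1..T
R_new = np.zeros((n, n))
for t in range(1, T + 1):
    yt, xt = ytT[:, [t - 1]], xtT[:, [t]]
    R_new += (OtT[t - 1] - yx[t - 1] @ Z.T - Z @ yx[t - 1].T
              - yt @ a.T - a @ yt.T
              + Z @ PtT[t] @ Z.T + Z @ xt @ a.T + a @ xt.T @ Z.T + a @ a.T)
R_new /= T
```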
As with $\mathbf{Q}$, this derivation immediately generalizes to a block
diagonal matrix:
$\mbox{$\mathbf{R}$}=\begin{bmatrix}\mbox{$\mathbf{R}$}_{1}&0&0\\\
0&\mbox{$\mathbf{R}$}_{2}&0\\\ 0&0&\mbox{$\mathbf{R}$}_{3}\\\ \end{bmatrix}$
In this case,
$\begin{split}\mbox{$\mathbf{R}$}_{i,j+1}=\frac{1}{T}\sum_{t=1}^{T}\bigg{(}\widetilde{\mbox{$\mathbf{O}$}}_{t}-\widetilde{\mbox{$\mathbf{y}\mathbf{x}$}}_{t}\mbox{$\mathbf{Z}$}^{\top}-\mbox{$\mathbf{Z}$}\widetilde{\mbox{$\mathbf{y}\mathbf{x}$}}_{t}^{\top}-\widetilde{\mbox{$\mathbf{y}$}}_{t}\mbox{$\mathbf{a}$}^{\top}-\mbox{$\mathbf{a}$}\widetilde{\mbox{$\mathbf{y}$}}_{t}^{\top}+\mbox{$\mathbf{Z}$}\widetilde{\mbox{$\mathbf{P}$}}_{t}\mbox{$\mathbf{Z}$}^{\top}+\mbox{$\mathbf{Z}$}\widetilde{\mbox{$\mathbf{x}$}}_{t}\mbox{$\mathbf{a}$}^{\top}+\mbox{$\mathbf{a}$}\widetilde{\mbox{$\mathbf{x}$}}_{t}^{\top}\mbox{$\mathbf{Z}$}^{\top}+\mbox{$\mathbf{a}$}\mbox{$\mathbf{a}$}^{\top}\bigg{)}_{i}\end{split}$
(54)
where the subscript $i$ means we take the elements in the matrix in the big
parentheses that are analogous to $\mbox{$\mathbf{R}$}_{i}$. If
$\mbox{$\mathbf{R}$}_{i}$ is comprised of rows $a$ to $b$ and columns $c$ to
$d$ of matrix $\mathbf{R}$, then we take rows $a$ to $b$ and columns $c$ to
$d$ of the matrix subscripted by $i$ in equation (54).
### 3.8 Update equation for $\xi$ and $\Lambda$ (unconstrained), stochastic
initial state
Shumway and Stoffer (2006) and Ghahramani and Hinton (1996) imply in their
discussion of the EM algorithm that both $\xi$ and $\Lambda$ can be estimated
(though not simultaneously). Harvey (1989), however, discusses that there are
only two allowable cases: $\mbox{$\boldsymbol{x}$}_{0}$ is treated as fixed
($\mbox{\boldmath$\Lambda$}=0$) and equal to the unknown parameter $\xi$, or
$\mbox{$\boldsymbol{x}$}_{0}$ is treated as stochastic with a known mean $\xi$
and variance $\Lambda$. For completeness, we show here the update equation in
the case of $\mbox{$\boldsymbol{x}$}_{0}$ stochastic with unknown mean $\xi$
and variance $\Lambda$ (a case that Harvey (1989) says is not consistent).
We proceed as before and solve for the new $\xi$ by maximizing $\Psi$. Take
the derivative of $\Psi$ with respect to $\xi$. Terms not involving $\xi$
equal 0 and drop out.
$\begin{split}\partial\Psi/\partial\mbox{\boldmath$\xi$}=-\frac{1}{2}\big{(}-\partial(\,\textup{{E}}[\mbox{\boldmath$\xi$}^{\top}\mbox{\boldmath$\Lambda$}^{-1}\mbox{$\boldsymbol{X}$}_{0}])/\partial\mbox{\boldmath$\xi$}-\partial(\,\textup{{E}}[\mbox{$\boldsymbol{X}$}_{0}^{\top}\mbox{\boldmath$\Lambda$}^{-1}\mbox{\boldmath$\xi$}])/\partial\mbox{\boldmath$\xi$}+\partial(\mbox{\boldmath$\xi$}^{\top}\mbox{\boldmath$\Lambda$}^{-1}\mbox{\boldmath$\xi$})/\partial\mbox{\boldmath$\xi$}\big{)}\end{split}$
(55)
Using the matrix derivative relations (table 2) and using
$\mbox{\boldmath$\Lambda$}^{-1}=(\mbox{\boldmath$\Lambda$}^{-1})^{\top}$, we
have
$\partial\Psi/\partial\mbox{\boldmath$\xi$}=-\frac{1}{2}\big{(}-\,\textup{{E}}[\mbox{$\boldsymbol{X}$}_{0}^{\top}\mbox{\boldmath$\Lambda$}^{-1}]-\,\textup{{E}}[\mbox{$\boldsymbol{X}$}_{0}^{\top}\mbox{\boldmath$\Lambda$}^{-1}]+2\mbox{\boldmath$\xi$}^{\top}\mbox{\boldmath$\Lambda$}^{-1}\big{)}$
(56)
Pulling the parameters out of the expectations, we get
$\partial\Psi/\partial\mbox{\boldmath$\xi$}=-\frac{1}{2}\big{(}-2\,\textup{{E}}[\mbox{$\boldsymbol{X}$}_{0}^{\top}]\mbox{\boldmath$\Lambda$}^{-1}+2\mbox{\boldmath$\xi$}^{\top}\mbox{\boldmath$\Lambda$}^{-1}\big{)}$
(57)
We then set the left side to zero, take the transpose, and cancel out $-1/2$
and $\mbox{\boldmath$\Lambda$}^{-1}$ (by noting that it is a variance-
covariance matrix and is invertible).
$\mathbf{0}=\big{(}\mbox{\boldmath$\Lambda$}^{-1}\,\textup{{E}}[\mbox{$\boldsymbol{X}$}_{0}]-\mbox{\boldmath$\Lambda$}^{-1}\mbox{\boldmath$\xi$}\big{)}=(\widetilde{\mbox{$\mathbf{x}$}}_{0}-\mbox{\boldmath$\xi$})$
(58)
Thus,
$\mbox{\boldmath$\xi$}_{j+1}=\widetilde{\mbox{$\mathbf{x}$}}_{0}$ (59)
$\widetilde{\mbox{$\mathbf{x}$}}_{0}$ is the expected value of
$\mbox{$\boldsymbol{X}$}_{0}$ conditioned on the data from $t=1$ to $T$, which
comes from the Kalman smoother recursions with initial conditions defined as
$\,\textup{{E}}[\mbox{$\boldsymbol{X}$}_{0}|\mbox{$\boldsymbol{Y}$}_{0}=\mbox{$\boldsymbol{y}$}_{0}]\equiv\mbox{\boldmath$\xi$}$
and
$\,\textup{{var}}(\mbox{$\boldsymbol{X}$}_{0}\mbox{$\boldsymbol{X}$}_{0}^{\top}|\mbox{$\boldsymbol{Y}$}_{0}=\mbox{$\boldsymbol{y}$}_{0})\equiv\mbox{\boldmath$\Lambda$}$.
A similar set of steps gets us to the update equation for $\Lambda$,
$\mbox{\boldmath$\Lambda$}_{j+1}=\widetilde{\mbox{$\mathbf{V}$}}_{0}$ (60)
$\widetilde{\mbox{$\mathbf{V}$}}_{0}$ is the variance of
$\mbox{$\boldsymbol{X}$}_{0}$ conditioned on the data from $t=1$ to $T$ and is
an output from the Kalman smoother recursions.
If the initial state is defined as at $t=1$ instead of $t=0$, the update
equation is derived in an identical fashion and the update equation is
similar:
$\mbox{\boldmath$\xi$}_{j+1}=\widetilde{\mbox{$\mathbf{x}$}}_{1}$ (61)
$\mbox{\boldmath$\Lambda$}_{j+1}=\widetilde{\mbox{$\mathbf{V}$}}_{1}$ (62)
These are output from the Kalman smoother recursions with initial conditions
defined as
$\,\textup{{E}}[\mbox{$\boldsymbol{X}$}_{1}|\mbox{$\boldsymbol{Y}$}_{0}=\mbox{$\boldsymbol{y}$}_{0}]\equiv\mbox{\boldmath$\xi$}$
and
$\,\textup{{var}}(\mbox{$\boldsymbol{X}$}_{1}\mbox{$\boldsymbol{X}$}_{1}^{\top}|\mbox{$\boldsymbol{Y}$}_{0}=\mbox{$\boldsymbol{y}$}_{0})\equiv\mbox{\boldmath$\Lambda$}$.
Notice that the recursions are initialized slightly differently; you will see
the Kalman filter and smoother equations presented with both types of
initializations depending on whether the author defines the initial state at
$t=0$ or $t=1$.
### 3.9 Update equation for $\xi$ (unconstrained), fixed
$\mbox{$\boldsymbol{x}$}_{0}$
For the case where $\mbox{$\boldsymbol{x}$}_{0}$ is treated as fixed, i.e., as
another parameter, there is no $\Lambda$, and we need to maximize $\Psi$ with
respect to $\xi$ using the slightly different $\Psi$ shown in equation (7).
Now $\xi$ appears in the state equation part of the likelihood.
$\begin{split}&\partial\Psi/\partial\mbox{\boldmath$\xi$}=-\frac{1}{2}\bigg{(}-\,\textup{{E}}[\partial(\mbox{$\boldsymbol{X}$}_{1}^{\top}\mbox{$\mathbf{Q}$}^{-1}\mbox{$\mathbf{B}$}\mbox{\boldmath$\xi$})/\partial\mbox{\boldmath$\xi$}]-\,\textup{{E}}[\partial((\mbox{$\mathbf{B}$}\mbox{\boldmath$\xi$})^{\top}\mbox{$\mathbf{Q}$}^{-1}\mbox{$\boldsymbol{X}$}_{1})/\partial\mbox{\boldmath$\xi$}]+\,\textup{{E}}[\partial((\mbox{$\mathbf{B}$}\mbox{\boldmath$\xi$})^{\top}\mbox{$\mathbf{Q}$}^{-1}(\mbox{$\mathbf{B}$}\mbox{\boldmath$\xi$}))/\partial\mbox{\boldmath$\xi$}]\\\
&\quad+\,\textup{{E}}[\partial((\mbox{$\mathbf{B}$}\mbox{\boldmath$\xi$})^{\top}\mbox{$\mathbf{Q}$}^{-1}\mbox{$\mathbf{u}$})/\partial\mbox{\boldmath$\xi$}]+\,\textup{{E}}[\partial(\mbox{$\mathbf{u}$}^{\top}\mbox{$\mathbf{Q}$}^{-1}\mbox{$\mathbf{B}$}\mbox{\boldmath$\xi$})/\partial\mbox{\boldmath$\xi$}]\bigg{)}\\\
&=-\frac{1}{2}\bigg{(}-\,\textup{{E}}[\partial(\mbox{$\boldsymbol{X}$}_{1}^{\top}\mbox{$\mathbf{Q}$}^{-1}\mbox{$\mathbf{B}$}\mbox{\boldmath$\xi$})/\partial\mbox{\boldmath$\xi$}]-\,\textup{{E}}[\partial(\mbox{\boldmath$\xi$}^{\top}\mbox{$\mathbf{B}$}^{\top}\mbox{$\mathbf{Q}$}^{-1}\mbox{$\boldsymbol{X}$}_{1})/\partial\mbox{\boldmath$\xi$}]+\,\textup{{E}}[\partial(\mbox{\boldmath$\xi$}^{\top}\mbox{$\mathbf{B}$}^{\top}\mbox{$\mathbf{Q}$}^{-1}(\mbox{$\mathbf{B}$}\mbox{\boldmath$\xi$}))/\partial\mbox{\boldmath$\xi$}]\\\
&\quad+\,\textup{{E}}[\partial(\mbox{\boldmath$\xi$}^{\top}\mbox{$\mathbf{B}$}^{\top}\mbox{$\mathbf{Q}$}^{-1}\mbox{$\mathbf{u}$})/\partial\mbox{\boldmath$\xi$}]+\,\textup{{E}}[\partial(\mbox{$\mathbf{u}$}^{\top}\mbox{$\mathbf{Q}$}^{-1}\mbox{$\mathbf{B}$}\mbox{\boldmath$\xi$})/\partial\mbox{\boldmath$\xi$}]\bigg{)}\end{split}$
(63)
After pulling the constants out of the expectations, we use relations (2) and
(2) to take the derivative:
$\begin{split}\partial\Psi/\partial\mbox{\boldmath$\xi$}=-\frac{1}{2}\bigg{(}-\,\textup{{E}}[\mbox{$\boldsymbol{X}$}_{1}]^{\top}\mbox{$\mathbf{Q}$}^{-1}\mbox{$\mathbf{B}$}-\,\textup{{E}}[\mbox{$\boldsymbol{X}$}_{1}]^{\top}\mbox{$\mathbf{Q}$}^{-1}\mbox{$\mathbf{B}$}+2\mbox{\boldmath$\xi$}^{\top}\mbox{$\mathbf{B}$}^{\top}\mbox{$\mathbf{Q}$}^{-1}\mbox{$\mathbf{B}$}+\mbox{$\mathbf{u}$}^{\top}\mbox{$\mathbf{Q}$}^{-1}\mbox{$\mathbf{B}$}+\mbox{$\mathbf{u}$}^{\top}\mbox{$\mathbf{Q}$}^{-1}\mbox{$\mathbf{B}$}\bigg{)}\\\
\end{split}$ (64)
This can be reduced to
$\begin{split}\partial\Psi/\partial\mbox{\boldmath$\xi$}=\,\textup{{E}}[\mbox{$\boldsymbol{X}$}_{1}]^{\top}\mbox{$\mathbf{Q}$}^{-1}\mbox{$\mathbf{B}$}-\mbox{\boldmath$\xi$}^{\top}\mbox{$\mathbf{B}$}^{\top}\mbox{$\mathbf{Q}$}^{-1}\mbox{$\mathbf{B}$}-\mbox{$\mathbf{u}$}^{\top}\mbox{$\mathbf{Q}$}^{-1}\mbox{$\mathbf{B}$}\end{split}$
(65)
To solve for $\xi$, set the left side to zero (an $m\times 1$ matrix of
zeros), transpose the whole equation, and then cancel out
$\mbox{$\mathbf{B}$}^{\top}\mbox{$\mathbf{Q}$}^{-1}\mbox{$\mathbf{B}$}$ by
multiplying by its inverse on the left. This step requires that this inverse
exists.
$\begin{split}\mbox{\boldmath$\xi$}=(\mbox{$\mathbf{B}$}^{\top}\mbox{$\mathbf{Q}$}^{-1}\mbox{$\mathbf{B}$})^{-1}\mbox{$\mathbf{B}$}^{\top}\mbox{$\mathbf{Q}$}^{-1}(\,\textup{{E}}[\mbox{$\boldsymbol{X}$}_{1}]-\mbox{$\mathbf{u}$})\end{split}$
(66)
Thus, in terms of the Kalman filter/smoother output the new $\xi$ for EM
iteration $j+1$ is
$\begin{split}\mbox{\boldmath$\xi$}_{j+1}=(\mbox{$\mathbf{B}$}^{\top}\mbox{$\mathbf{Q}$}^{-1}\mbox{$\mathbf{B}$})^{-1}\mbox{$\mathbf{B}$}^{\top}\mbox{$\mathbf{Q}$}^{-1}(\widetilde{\mbox{$\mathbf{x}$}}_{1}-\mbox{$\mathbf{u}$})\end{split}$
(67)
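A minimal sketch of equation (67), using a linear solve rather than forming $(\mbox{$\mathbf{B}$}^{\top}\mbox{$\mathbf{Q}$}^{-1}\mbox{$\mathbf{B}$})^{-1}$ explicitly; all values are placeholders.

```python
import numpy as np

m = 3
rng = np.random.default_rng(7)
B, Q = 0.8 * np.eye(m), np.eye(m)   # current B and Q (placeholders)
u = rng.standard_normal((m, 1))
xt1 = rng.standard_normal((m, 1))   # x~_1 from the smoother (placeholder)

# Equation (67): xi_{j+1} = (B' Q^{-1} B)^{-1} B' Q^{-1} (x~_1 - u)
Qinv = np.linalg.inv(Q)
xi_new = np.linalg.solve(B.T @ Qinv @ B, B.T @ Qinv @ (xt1 - u))
```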
Note that using $\widetilde{\mbox{$\mathbf{x}$}}_{0}$ output from the Kalman
smoother would not work since $\mbox{\boldmath$\Lambda$}=0$. As a result,
$\mbox{\boldmath$\xi$}_{j+1}\equiv\mbox{\boldmath$\xi$}_{j}$ in the EM
algorithm, and it is impossible to move away from your starting condition for
$\xi$.
This is conceptually similar to using a generalized least squares estimate of
$\xi$ to concentrate it out of the likelihood as discussed in Harvey (1989),
section 3.4.4. However, in the context of the EM algorithm, dealing with the
fixed $\mbox{$\boldsymbol{x}$}_{0}$ case requires nothing special; one simply
takes care to use the likelihood for the case where
$\mbox{$\boldsymbol{x}$}_{0}$ is treated as an unknown parameter (equation 7).
For the other parameters, the update equations are the same whether one uses
the log-likelihood equation with $\mbox{$\boldsymbol{x}$}_{0}$ treated as
stochastic (equation 6) or fixed (equation 7).
If your MARSS model is stationary (meaning the $\boldsymbol{X}$'s have a
stationary distribution) and your data appear stationary, however, equation
(66) is probably not what you want to use. The estimate of $\xi$ will be the
maximum-likelihood value, but it will not be drawn from the stationary
distribution; instead it could be some wildly different value that happens to
give the maximum likelihood. If you are modeling the data as stationary, then
you should probably assume that $\xi$ is drawn from the stationary
distribution of the $\boldsymbol{X}$'s, which is some function of your model
parameters. This would mean that the model parameters would enter the part of
the likelihood that involves $\xi$ and $\Lambda$. Since you probably don't
want to do that (it might start to get circular), you might try an iterative
process to get decent $\xi$ and $\Lambda$, or try fixing $\xi$ and estimating
$\Lambda$ (above). You can fix $\xi$ at, say, zero, by making sure the model
you fit has a stationary distribution with mean zero. You might also need to
demean your data (or estimate the $\mathbf{a}$ term to account for non-zero-
mean data). A second approach is to estimate $\mbox{$\boldsymbol{x}$}_{1}$ as
the initial state instead of $\mbox{$\boldsymbol{x}$}_{0}$.
### 3.10 Update equation for $\xi$ (unconstrained), fixed
$\mbox{$\boldsymbol{x}$}_{1}$
In some cases, the estimate of $\mbox{$\boldsymbol{x}$}_{0}$ from
$\mbox{$\boldsymbol{x}$}_{1}$ using equation (67) will be highly sensitive to
small changes in the parameters. This is particularly the case for certain
$\mathbf{B}$ matrices, even if they are stationary. The result is that your
$\xi$ estimate is wildly different from the data at $t=1$. The estimates are
correct given how you defined the model, just not realistic given the data. In
this case, you can specify $\xi$ as being the value of $\boldsymbol{x}$ at
$t=1$ instead of $t=0$. That way, the data at $t=1$ will constrain the
estimated $\xi$. In this case, we treat $\mbox{$\boldsymbol{x}$}_{1}$ as the
fixed but unknown parameter $\xi$. The likelihood is then:
$\begin{split}&\log\mbox{$\mathbf{L}$}(\mbox{$\boldsymbol{y}$},\mbox{$\boldsymbol{x}$};\Theta)=-\sum_{1}^{T}\frac{1}{2}(\mbox{$\boldsymbol{y}$}_{t}-\mbox{$\mathbf{Z}$}\mbox{$\boldsymbol{x}$}_{t}-\mbox{$\mathbf{a}$})^{\top}\mbox{$\mathbf{R}$}^{-1}(\mbox{$\boldsymbol{y}$}_{t}-\mbox{$\mathbf{Z}$}\mbox{$\boldsymbol{x}$}_{t}-\mbox{$\mathbf{a}$})-\sum_{1}^{T}\frac{1}{2}\log|\mbox{$\mathbf{R}$}|\\\
&\quad-\sum_{2}^{T}\frac{1}{2}(\mbox{$\boldsymbol{x}$}_{t}-\mbox{$\mathbf{B}$}\mbox{$\boldsymbol{x}$}_{t-1}-\mbox{$\mathbf{u}$})^{\top}\mbox{$\mathbf{Q}$}^{-1}(\mbox{$\boldsymbol{x}$}_{t}-\mbox{$\mathbf{B}$}\mbox{$\boldsymbol{x}$}_{t-1}-\mbox{$\mathbf{u}$})-\sum_{2}^{T}\frac{1}{2}\log|\mbox{$\mathbf{Q}$}|\end{split}$
(68)
$\begin{split}&\partial\Psi/\partial\mbox{\boldmath$\xi$}=-\frac{1}{2}\bigg{(}-\,\textup{{E}}[\partial(\mbox{$\boldsymbol{Y}$}_{1}^{\top}\mbox{$\mathbf{R}$}^{-1}\mbox{$\mathbf{Z}$}\mbox{\boldmath$\xi$})/\partial\mbox{\boldmath$\xi$}]-\,\textup{{E}}[\partial((\mbox{$\mathbf{Z}$}\mbox{\boldmath$\xi$})^{\top}\mbox{$\mathbf{R}$}^{-1}\mbox{$\boldsymbol{Y}$}_{1})/\partial\mbox{\boldmath$\xi$}]+\,\textup{{E}}[\partial((\mbox{$\mathbf{Z}$}\mbox{\boldmath$\xi$})^{\top}\mbox{$\mathbf{R}$}^{-1}(\mbox{$\mathbf{Z}$}\mbox{\boldmath$\xi$}))/\partial\mbox{\boldmath$\xi$}]\\\
&\quad+\,\textup{{E}}[\partial((\mbox{$\mathbf{Z}$}\mbox{\boldmath$\xi$})^{\top}\mbox{$\mathbf{R}$}^{-1}\mbox{$\mathbf{a}$})/\partial\mbox{\boldmath$\xi$}]+\,\textup{{E}}[\partial(\mbox{$\mathbf{a}$}^{\top}\mbox{$\mathbf{R}$}^{-1}\mbox{$\mathbf{Z}$}\mbox{\boldmath$\xi$})/\partial\mbox{\boldmath$\xi$}]\bigg{)}\\\
&\quad-\frac{1}{2}\bigg{(}-\,\textup{{E}}[\partial(\mbox{$\boldsymbol{X}$}_{2}^{\top}\mbox{$\mathbf{Q}$}^{-1}\mbox{$\mathbf{B}$}\mbox{\boldmath$\xi$})/\partial\mbox{\boldmath$\xi$}]-\,\textup{{E}}[\partial((\mbox{$\mathbf{B}$}\mbox{\boldmath$\xi$})^{\top}\mbox{$\mathbf{Q}$}^{-1}\mbox{$\boldsymbol{X}$}_{2})/\partial\mbox{\boldmath$\xi$}]+\,\textup{{E}}[\partial((\mbox{$\mathbf{B}$}\mbox{\boldmath$\xi$})^{\top}\mbox{$\mathbf{Q}$}^{-1}(\mbox{$\mathbf{B}$}\mbox{\boldmath$\xi$}))/\partial\mbox{\boldmath$\xi$}]\\\
&\quad+\,\textup{{E}}[\partial((\mbox{$\mathbf{B}$}\mbox{\boldmath$\xi$})^{\top}\mbox{$\mathbf{Q}$}^{-1}\mbox{$\mathbf{u}$})/\partial\mbox{\boldmath$\xi$}]+\,\textup{{E}}[\partial(\mbox{$\mathbf{u}$}^{\top}\mbox{$\mathbf{Q}$}^{-1}\mbox{$\mathbf{B}$}\mbox{\boldmath$\xi$})/\partial\mbox{\boldmath$\xi$}]\bigg{)}\end{split}$
(69)
Note that the second summation starts at $t=2$ and $\xi$ is
$\mbox{$\boldsymbol{x}$}_{1}$ instead of $\mbox{$\boldsymbol{x}$}_{0}$.
After pulling the constants out of the expectations, we use the matrix
derivative relations (table 2) to take the derivative:
$\begin{split}&\partial\Psi/\partial\mbox{\boldmath$\xi$}=-\frac{1}{2}\bigg{(}-\,\textup{{E}}[\mbox{$\boldsymbol{Y}$}_{1}]^{\top}\mbox{$\mathbf{R}$}^{-1}\mbox{$\mathbf{Z}$}-\,\textup{{E}}[\mbox{$\boldsymbol{Y}$}_{1}]^{\top}\mbox{$\mathbf{R}$}^{-1}\mbox{$\mathbf{Z}$}+2\mbox{\boldmath$\xi$}^{\top}\mbox{$\mathbf{Z}$}^{\top}\mbox{$\mathbf{R}$}^{-1}\mbox{$\mathbf{Z}$}+\mbox{$\mathbf{a}$}^{\top}\mbox{$\mathbf{R}$}^{-1}\mbox{$\mathbf{Z}$}+\mbox{$\mathbf{a}$}^{\top}\mbox{$\mathbf{R}$}^{-1}\mbox{$\mathbf{Z}$}\bigg{)}\\\
&\quad-\frac{1}{2}\bigg{(}-\,\textup{{E}}[\mbox{$\boldsymbol{X}$}_{2}]^{\top}\mbox{$\mathbf{Q}$}^{-1}\mbox{$\mathbf{B}$}-\,\textup{{E}}[\mbox{$\boldsymbol{X}$}_{2}]^{\top}\mbox{$\mathbf{Q}$}^{-1}\mbox{$\mathbf{B}$}+2\mbox{\boldmath$\xi$}^{\top}\mbox{$\mathbf{B}$}^{\top}\mbox{$\mathbf{Q}$}^{-1}\mbox{$\mathbf{B}$}+\mbox{$\mathbf{u}$}^{\top}\mbox{$\mathbf{Q}$}^{-1}\mbox{$\mathbf{B}$}+\mbox{$\mathbf{u}$}^{\top}\mbox{$\mathbf{Q}$}^{-1}\mbox{$\mathbf{B}$}\bigg{)}\end{split}$
(70)
This can be reduced to
$\begin{split}&\partial\Psi/\partial\mbox{\boldmath$\xi$}=\,\textup{{E}}[\mbox{$\boldsymbol{Y}$}_{1}]^{\top}\mbox{$\mathbf{R}$}^{-1}\mbox{$\mathbf{Z}$}-\mbox{\boldmath$\xi$}^{\top}\mbox{$\mathbf{Z}$}^{\top}\mbox{$\mathbf{R}$}^{-1}\mbox{$\mathbf{Z}$}-\mbox{$\mathbf{a}$}^{\top}\mbox{$\mathbf{R}$}^{-1}\mbox{$\mathbf{Z}$}+\,\textup{{E}}[\mbox{$\boldsymbol{X}$}_{2}]^{\top}\mbox{$\mathbf{Q}$}^{-1}\mbox{$\mathbf{B}$}-\mbox{\boldmath$\xi$}^{\top}\mbox{$\mathbf{B}$}^{\top}\mbox{$\mathbf{Q}$}^{-1}\mbox{$\mathbf{B}$}-\mbox{$\mathbf{u}$}^{\top}\mbox{$\mathbf{Q}$}^{-1}\mbox{$\mathbf{B}$}\\\
&\quad=-\mbox{\boldmath$\xi$}^{\top}(\mbox{$\mathbf{Z}$}^{\top}\mbox{$\mathbf{R}$}^{-1}\mbox{$\mathbf{Z}$}+\mbox{$\mathbf{B}$}^{\top}\mbox{$\mathbf{Q}$}^{-1}\mbox{$\mathbf{B}$})+\,\textup{{E}}[\mbox{$\boldsymbol{Y}$}_{1}]^{\top}\mbox{$\mathbf{R}$}^{-1}\mbox{$\mathbf{Z}$}-\mbox{$\mathbf{a}$}^{\top}\mbox{$\mathbf{R}$}^{-1}\mbox{$\mathbf{Z}$}+\,\textup{{E}}[\mbox{$\boldsymbol{X}$}_{2}]^{\top}\mbox{$\mathbf{Q}$}^{-1}\mbox{$\mathbf{B}$}-\mbox{$\mathbf{u}$}^{\top}\mbox{$\mathbf{Q}$}^{-1}\mbox{$\mathbf{B}$}\end{split}$
(71)
To solve for $\xi$, set the left side to zero (an $m\times 1$ matrix of
zeros), transpose the whole equation, and solve for $\xi$.
$\begin{split}\mbox{\boldmath$\xi$}=(\mbox{$\mathbf{Z}$}^{\top}\mbox{$\mathbf{R}$}^{-1}\mbox{$\mathbf{Z}$}+\mbox{$\mathbf{B}$}^{\top}\mbox{$\mathbf{Q}$}^{-1}\mbox{$\mathbf{B}$})^{-1}(\mbox{$\mathbf{Z}$}^{\top}\mbox{$\mathbf{R}$}^{-1}(\,\textup{{E}}[\mbox{$\boldsymbol{Y}$}_{1}]-\mbox{$\mathbf{a}$})+\mbox{$\mathbf{B}$}^{\top}\mbox{$\mathbf{Q}$}^{-1}(\,\textup{{E}}[\mbox{$\boldsymbol{X}$}_{2}]-\mbox{$\mathbf{u}$}))\\\
\end{split}$ (72)
Thus, when $\mbox{\boldmath$\xi$}\equiv\mbox{$\boldsymbol{x}$}_{1}$, the new
$\xi$ for EM iteration $j+1$ is
$\begin{split}\mbox{\boldmath$\xi$}_{j+1}=(\mbox{$\mathbf{Z}$}^{\top}\mbox{$\mathbf{R}$}^{-1}\mbox{$\mathbf{Z}$}+\mbox{$\mathbf{B}$}^{\top}\mbox{$\mathbf{Q}$}^{-1}\mbox{$\mathbf{B}$})^{-1}(\mbox{$\mathbf{Z}$}^{\top}\mbox{$\mathbf{R}$}^{-1}(\widetilde{\mbox{$\mathbf{y}$}}_{1}-\mbox{$\mathbf{a}$})+\mbox{$\mathbf{B}$}^{\top}\mbox{$\mathbf{Q}$}^{-1}(\widetilde{\mbox{$\mathbf{x}$}}_{2}-\mbox{$\mathbf{u}$}))\end{split}$
(73)
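A sketch of equation (73) under the same placeholder conventions as the earlier sketches; a linear solve is used instead of an explicit inverse.

```python
import numpy as np

n, m = 4, 3
rng = np.random.default_rng(8)
B, Q = 0.8 * np.eye(m), np.eye(m)
Z, R = rng.standard_normal((n, m)), np.eye(n)
u, a = rng.standard_normal((m, 1)), rng.standard_normal((n, 1))
yt1 = rng.standard_normal((n, 1))   # y~_1 (data at t=1, missing values replaced by expectations)
xt2 = rng.standard_normal((m, 1))   # x~_2 from the smoother

# Equation (73)
Qinv, Rinv = np.linalg.inv(Q), np.linalg.inv(R)
lhs = Z.T @ Rinv @ Z + B.T @ Qinv @ B
rhs = Z.T @ Rinv @ (yt1 - a) + B.T @ Qinv @ (xt2 - u)
xi_new = np.linalg.solve(lhs, rhs)
```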
## 4 The time-varying MARSS model with linear constraints
The first part of this report dealt with the case of a MARSS model (equation
1) where the parameters are time-constant and where all the elements in a
parameter matrix are estimated with no constraints. I will now describe the
derivation of an EM algorithm to solve a much more general MARSS model
(equation 74), which is a time-varying MARSS model where the MARSS parameter
matrices are written as a linear equation
$\mbox{$\mathbf{f}$}+\mbox{$\mathbf{D}$}\mbox{$\mathbf{m}$}$. This is a very
general form of a MARSS model, of which many (most) multivariate
autoregressive Gaussian models are a special case. This general MARSS model
includes as special cases MARSS models with covariates (many VARSS models
with exogenous variables), multivariate AR lag-$p$ models, multivariate
moving average models, and MARSS models with linear constraints placed on the
elements within the model parameters. The objective is to derive one EM
algorithm for the whole class, thus providing a uniform approach to fitting
these models.
The time-varying MARSS model is written:
$\displaystyle\mbox{$\boldsymbol{x}$}_{t}=\mbox{$\mathbf{B}$}_{t}\mbox{$\boldsymbol{x}$}_{t-1}+\mbox{$\mathbf{u}$}_{t}+\mbox{$\mathbf{H}$}_{t}\mbox{$\mathbf{w}$}_{t},\text{
where }\mbox{$\mathbf{W}$}_{t}\sim\,\textup{{MVN}}(0,\mbox{$\mathbf{Q}$}_{t})$
(74a)
$\displaystyle\mbox{$\boldsymbol{y}$}_{t}=\mbox{$\mathbf{Z}$}_{t}\mbox{$\boldsymbol{x}$}_{t}+\mbox{$\mathbf{a}$}_{t}+\mbox{$\mathbf{G}$}_{t}\mbox{$\mathbf{v}$}_{t},\text{
where }\mbox{$\mathbf{V}$}_{t}\sim\,\textup{{MVN}}(0,\mbox{$\mathbf{R}$}_{t})$
(74b)
$\displaystyle\mbox{$\boldsymbol{x}$}_{t_{0}}=\mbox{\boldmath$\xi$}+\mbox{$\mathbf{F}$}\mbox{$\mathbf{l}$},\text{
where }t_{0}=0\text{ or }t_{0}=1$ (74c)
$\displaystyle\mbox{$\mathbf{L}$}\sim\,\textup{{MVN}}(0,\mbox{\boldmath$\Lambda$})$
(74d) $\displaystyle\begin{bmatrix}\mbox{$\mathbf{w}$}_{t}\\\
\mbox{$\mathbf{v}$}_{t}\end{bmatrix}\sim\,\textup{{MVN}}(0,\Sigma),\quad\Sigma=\begin{bmatrix}\mbox{$\mathbf{Q}$}_{t}&0\\\
0&\mbox{$\mathbf{R}$}_{t}\end{bmatrix}$ (74e)
This looks quite similar to the previous non-time varying MARSS model, but now
the model parameters, $\mathbf{B}$, $\mathbf{u}$, $\mathbf{Q}$, $\mathbf{Z}$,
$\mathbf{a}$ and $\mathbf{R}$, have a $t$ subscript and we have a multiplier
matrix on the error terms $\mbox{$\mathbf{v}$}_{t}$,
$\mbox{$\mathbf{w}$}_{t}$, $\mathbf{l}$. The $\mbox{$\mathbf{H}$}_{t}$
multiplier is $m\times s$, so we now have $s$ state errors instead of $m$. The
$\mbox{$\mathbf{G}$}_{t}$ multiplier is $n\times k$, so we now have $k$
observation errors instead of $n$. The $\mathbf{F}$ multiplier is $m\times j$,
so now we can have some initial states ($j$ of them) be stochastic and others
be fixed. I assume that appropriate constraints are put on $\mathbf{G}$ and
$\mathbf{H}$ so that the resulting MARSS model is not under- or over-
constrained (for example, if both $\mathbf{G}$ and $\mathbf{H}$ are column
vectors, then the system is over-constrained and has no solution). The
notation/presentation here was influenced by SJ Koopman's work, esp. Koopman
and Ooms (2011) and Koopman (1993), but in these works,
$\mbox{$\mathbf{Q}$}_{t}$ and $\mbox{$\mathbf{R}$}_{t}$ equal $\mathbf{I}$ and
the variance-covariance structures are instead specified only by
$\mbox{$\mathbf{H}$}_{t}$ and $\mbox{$\mathbf{G}$}_{t}$. I keep
$\mbox{$\mathbf{Q}$}_{t}$ and $\mbox{$\mathbf{R}$}_{t}$ in my formulation as
it seems more intuitive (to me) in the context of the EM algorithm and the
required joint-likelihood function.
We can rewrite this MARSS model using vec relationships (table 3):
$\begin{gathered}\mbox{$\boldsymbol{x}$}_{t}=(\mbox{$\boldsymbol{x}$}_{t-1}^{\top}\otimes\mbox{$\mathbf{I}$}_{m})\,\textup{{vec}}(\mbox{$\mathbf{B}$}_{t})+\,\textup{{vec}}(\mbox{$\mathbf{u}$}_{t})+\mbox{$\mathbf{H}$}_{t}\mbox{$\mathbf{w}$}_{t},\mbox{$\mathbf{W}$}_{t}\sim\,\textup{{MVN}}(0,\mbox{$\mathbf{Q}$}_{t})\\\
\mbox{$\boldsymbol{y}$}_{t}=(\mbox{$\boldsymbol{x}$}_{t}^{\top}\otimes\mbox{$\mathbf{I}$}_{n})\,\textup{{vec}}(\mbox{$\mathbf{Z}$}_{t})+\,\textup{{vec}}(\mbox{$\mathbf{a}$}_{t})+\mbox{$\mathbf{G}$}_{t}\mbox{$\mathbf{v}$}_{t},\mbox{$\mathbf{V}$}_{t}\sim\,\textup{{MVN}}(0,\mbox{$\mathbf{R}$}_{t})\\\
\mbox{$\boldsymbol{x}$}_{t_{0}}=\mbox{\boldmath$\xi$}+\mbox{$\mathbf{F}$}\mbox{$\mathbf{l}$},\mbox{$\mathbf{L}$}\sim\,\textup{{MVN}}(0,\mbox{\boldmath$\Lambda$})\end{gathered}$
(75)
Each model parameter, $\mbox{$\mathbf{B}$}_{t}$, $\mbox{$\mathbf{u}$}_{t}$,
$\mbox{$\mathbf{Q}$}_{t}$, $\mbox{$\mathbf{Z}$}_{t}$,
$\mbox{$\mathbf{a}$}_{t}$, and $\mbox{$\mathbf{R}$}_{t}$, is written as a
time-varying linear model,
$\mbox{$\mathbf{f}$}_{t}+\mbox{$\mathbf{D}$}_{t}\mbox{$\mathbf{m}$}$, where
$\mathbf{f}$ and $\mathbf{D}$ are fully known (not estimated and with no
missing values) and $\mathbf{m}$ is a column vector of the estimated elements
of the parameter matrix:
$\begin{split}\,\textup{{vec}}(\mbox{$\mathbf{B}$}_{t})&=\mbox{$\mathbf{f}$}_{t,b}+\mbox{$\mathbf{D}$}_{t,b}\boldsymbol{\beta}\\\
\,\textup{{vec}}(\mbox{$\mathbf{u}$}_{t})&=\mbox{$\mathbf{f}$}_{t,u}+\mbox{$\mathbf{D}$}_{t,u}\boldsymbol{\upsilon}\\\
\,\textup{{vec}}(\mbox{$\mathbf{Q}$}_{t})&=\mbox{$\mathbf{f}$}_{t,q}+\mbox{$\mathbf{D}$}_{t,q}\mbox{$\mathbf{q}$}\\\
\,\textup{{vec}}(\mbox{$\mathbf{Z}$}_{t})&=\mbox{$\mathbf{f}$}_{t,z}+\mbox{$\mathbf{D}$}_{t,z}\boldsymbol{\zeta}\\\
\,\textup{{vec}}(\mbox{$\mathbf{a}$}_{t})&=\mbox{$\mathbf{f}$}_{t,a}+\mbox{$\mathbf{D}$}_{t,a}\boldsymbol{\alpha}\\\
\,\textup{{vec}}(\mbox{$\mathbf{R}$}_{t})&=\mbox{$\mathbf{f}$}_{t,r}+\mbox{$\mathbf{D}$}_{t,r}\mbox{$\mathbf{r}$}\\\
\,\textup{{vec}}(\mbox{\boldmath$\Lambda$})&=\mbox{$\mathbf{f}$}_{\lambda}+\mbox{$\mathbf{D}$}_{\lambda}\boldsymbol{\lambda}\\\
\,\textup{{vec}}(\mbox{\boldmath$\xi$})&=\mbox{$\mathbf{f}$}_{\xi}+\mbox{$\mathbf{D}$}_{\xi}\mbox{$\mathbf{p}$}\end{split}$
(76)
The estimated parameters are now the column vectors, $\boldsymbol{\beta}$,
$\boldsymbol{\upsilon}$, $\mathbf{q}$, $\boldsymbol{\zeta}$,
$\boldsymbol{\alpha}$, $\mathbf{r}$, $\mathbf{p}$ and $\boldsymbol{\lambda}$.
The time-varying aspect comes from the time-varying $\mathbf{f}$ and
$\mathbf{D}$. Note that variance-covariance matrices must be positive-definite
and we cannot specify a form that cannot be estimated. Fixing the diagonal
terms and estimating the off-diagonals would not be allowed. Thus the
$\mathbf{f}$ and $\mathbf{D}$ terms for $\mathbf{Q}$, $\mathbf{R}$ and
$\Lambda$ are limited. For the other parameters, the forms are fairly
unrestricted, except that the $\mathbf{D}$s need to be full rank so that we
are not specifying an under-constrained model. 'Full rank' implies that we
are not trying to estimate confounded matrix elements; for example, trying to
estimate $a_{1}$ and $a_{2}$ when only $a_{1}+a_{2}$ appears in the model.
The temporally variable MARSS model, equation (75) together with (76), looks
rather different from other temporally variable MARSS models in the
literature, such as a VARSSX or MARSS-with-covariates model. But those models are
special cases of this equation. By deriving an EM algorithm for this more
general (if unfamiliar) form, I then have an algorithm for many different
types of time-varying MARSS models with linear constraints on the parameter
elements. Below I show some examples.
### 4.1 MARSS model with linear constraints
We can use equation (75) to put linear constraints on the elements of the
parameters, $\mathbf{B}$, $\mathbf{u}$, $\mathbf{Q}$, $\mathbf{Z}$,
$\mathbf{a}$, $\mathbf{R}$, $\xi$ and $\Lambda$. Here is an example of a
simple MARSS model with linear constraints:
$\displaystyle\begin{bmatrix}x_{1}\\\
x_{2}\end{bmatrix}_{t}=\begin{bmatrix}a&0\\\
0&2a\end{bmatrix}\begin{bmatrix}x_{1}\\\
x_{2}\end{bmatrix}_{t-1}+\begin{bmatrix}w_{1}\\\
w_{2}\end{bmatrix}_{t},\quad\begin{bmatrix}w_{1}\\\
w_{2}\end{bmatrix}_{t}\sim\,\textup{{MVN}}\begin{pmatrix}\begin{bmatrix}0.1\\\
u+0.1\end{bmatrix},\begin{bmatrix}q_{11}&q_{12}\\\
q_{21}&q_{22}\end{bmatrix}\end{pmatrix}$ $\displaystyle\begin{bmatrix}y_{1}\\\
y_{2}\\\ y_{3}\end{bmatrix}_{t}=\begin{bmatrix}c&3c+2d+1\\\ c&d\\\
c+e+2&e\end{bmatrix}\begin{bmatrix}x_{1}\\\
x_{2}\end{bmatrix}_{t}+\begin{bmatrix}v_{1}\\\ v_{2}\\\
v_{3}\end{bmatrix}_{t},$ $\displaystyle\begin{bmatrix}v_{1}\\\ v_{2}\\\
v_{3}\end{bmatrix}_{t}\sim\,\textup{{MVN}}\begin{pmatrix}\begin{bmatrix}a_{1}\\\
a_{2}\\\ 0\end{bmatrix},\begin{bmatrix}r&0&0\\\ 0&2r&0\\\
0&0&4r\end{bmatrix}\end{pmatrix}$ $\displaystyle\begin{bmatrix}x_{1}\\\
x_{2}\end{bmatrix}_{0}\sim\,\textup{{MVN}}\begin{pmatrix}\begin{bmatrix}\pi\\\
\pi\end{bmatrix},\begin{bmatrix}1&0\\\ 0&1\end{bmatrix}\end{pmatrix}$
Linear constraints mean that elements of a matrix may be fixed to a specific
numerical value or specified as a linear combination of values (which can be
shared within a matrix but not shared between matrices).
Let’s say we have some parameter matrix $\mathbf{M}$ (here $\mathbf{M}$ could
be any of the parameters in the MARSS model) where each matrix element is
written as a linear model of some potentially shared values:
$\mbox{$\mathbf{M}$}=\begin{bmatrix}a+2c+2&0.9&c\\\ -1.2&a&0\\\
0&3c+1&b\end{bmatrix}$
Thus each $i$-th element in $\mathbf{M}$ can be written as
$\beta_{i}+\beta_{a,i}a+\beta_{b,i}b+\beta_{c,i}c$, which is a linear
combination of three estimated values $a$, $b$ and $c$. The matrix
$\mathbf{M}$ can be rewritten in terms of a $\beta_{i}$ part and the part
involving the $\beta_{-,j}$’s:
$\mbox{$\mathbf{M}$}=\begin{bmatrix}2&0.9&0\\\ -1.2&0&0\\\
0&1&0\end{bmatrix}+\begin{bmatrix}a+2c&0&c\\\ 0&a&0\\\
0&3c&b\end{bmatrix}=\mbox{$\mathbf{M}$}_{\text{fixed}}+\mbox{$\mathbf{M}$}_{\text{free}}$
The vec function turns any matrix into a column vector by stacking the columns
on top of each other. Thus,
$\,\textup{{vec}}(\mbox{$\mathbf{M}$})=\begin{bmatrix}a+2c+2\\\ -1.2\\\ 0\\\
0.9\\\ a\\\ 3c+1\\\ c\\\ 0\\\ b\end{bmatrix}$
We can now write $\,\textup{{vec}}(\mbox{$\mathbf{M}$})$ as a linear
combination of
$\mbox{$\mathbf{f}$}=\,\textup{{vec}}(\mbox{$\mathbf{M}$}_{\text{fixed}})$ and
$\mbox{$\mathbf{D}$}\mbox{$\mathbf{m}$}=\,\textup{{vec}}(\mbox{$\mathbf{M}$}_{\text{free}})$.
$\mathbf{m}$ is a $p\times 1$ column vector of the $p$ free values, in this
case $p=3$ and the free values are $a,b,c$. $\mathbf{D}$ is a design matrix
that translates $\mathbf{m}$ into
$\,\textup{{vec}}(\mbox{$\mathbf{M}$}_{\text{free}})$. For example,
$\,\textup{{vec}}(\mbox{$\mathbf{M}$})=\begin{bmatrix}a+2c+2\\\ -1.2\\\ 0\\\
0.9\\\ a\\\ 3c+1\\\ c\\\ 0\\\ b\end{bmatrix}=\begin{bmatrix}2\\\ -1.2\\\ 0\\\
0.9\\\ 0\\\ 1\\\ 0\\\ 0\\\ 0\end{bmatrix}+\begin{bmatrix}1&0&2\\\ 0&0&0\\\
0&0&0\\\ 0&0&0\\\ 1&0&0\\\ 0&0&3\\\ 0&0&1\\\ 0&0&0\\\
0&1&0\end{bmatrix}\begin{bmatrix}a\\\ b\\\
c\end{bmatrix}=\mbox{$\mathbf{f}$}+\mbox{$\mathbf{D}$}\mbox{$\mathbf{m}$}$
There are constraints on $\mathbf{D}$: it must describe a solvable set of
linear equations. Specifically, it must be full rank (rank $p$, where $p$ is
the number of columns in $\mathbf{D}$, i.e. the number of free values you are
trying to estimate), so that each of the $p$ free values can be estimated. For
example, if $a$ and $b$ only ever appeared together as $a+b$, then $a+b$ could
be estimated but not $a$ and $b$ separately. Note that if $\mathbf{M}$ is
entirely fixed, then $\mathbf{D}$ is undefined, but that is fine: in this case
no update equation is needed and you just use the fixed value of $\mathbf{M}$
in the algorithm.
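To make the decomposition concrete, here is a small base-R sketch (illustrative only, not code from the MARSS package) that builds $\mathbf{f}$, $\mathbf{D}$ and $\mathbf{m}$ for the example $\mathbf{M}$ above, using made-up values for $a$, $b$ and $c$, and checks that $\,\textup{{vec}}(\mbox{$\mathbf{M}$})=\mbox{$\mathbf{f}$}+\mbox{$\mathbf{D}$}\mbox{$\mathbf{m}$}$.

```r
# Free values (hypothetical numbers chosen just to check the identity)
a <- 0.5; b <- -0.3; c <- 1.2
m <- matrix(c(a, b, c), ncol = 1)

# The example matrix M = [a+2c+2, 0.9, c; -1.2, a, 0; 0, 3c+1, b]
M <- matrix(c(a + 2*c + 2, -1.2, 0,
              0.9,          a,    3*c + 1,
              c,            0,    b), nrow = 3)

# f = vec(M_fixed): the constants, stacked column by column
f <- matrix(c(2, -1.2, 0, 0.9, 0, 1, 0, 0, 0), ncol = 1)

# D: design matrix mapping m = (a, b, c)' to vec(M_free)
D <- matrix(c(1, 0, 0, 0, 1, 0, 0, 0, 0,   # coefficients on a
              0, 0, 0, 0, 0, 0, 0, 0, 1,   # coefficients on b
              2, 0, 0, 0, 0, 3, 1, 0, 0),  # coefficients on c
            nrow = 9)

# vec() is just column stacking; in R that is as.vector()
all.equal(as.vector(M), as.vector(f + D %*% m))  # TRUE
```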
Table 3: Kronecker and vec relations. Here $\mathbf{A}$ is $n\times m$, $\mathbf{B}$ is $m\times p$, $\mathbf{C}$ is $p\times q$, and $\mathbf{E}$ and $\mathbf{D}$ are $p\times p$. $\mathbf{a}$ is an $m\times 1$ column vector and $\mathbf{b}$ is a $p\times 1$ column vector. The symbol $\otimes$ stands for the Kronecker product: $\mbox{$\mathbf{A}$}\otimes\mbox{$\mathbf{C}$}$ is an $np\times mq$ matrix. The identity matrix, $\mbox{$\mathbf{I}$}_{n}$, is an $n\times n$ diagonal matrix with ones on the diagonal.
$\,\textup{{vec}}(\mbox{$\mathbf{a}$})=\,\textup{{vec}}(\mbox{$\mathbf{a}$}^{\top})=\mbox{$\mathbf{a}$}$ | (77)
The vec of a column vector (or its transpose) is itself.
$\mbox{$\mathbf{a}$}=(\mbox{$\mathbf{a}$}^{\top}\otimes\mbox{$\mathbf{I}$}_{1})$
$\,\textup{{vec}}(\mbox{$\mathbf{A}$}\mbox{$\mathbf{a}$})=(\mbox{$\mathbf{a}$}^{\top}\otimes\mbox{$\mathbf{I}$}_{n})\,\textup{{vec}}(\mbox{$\mathbf{A}$})=\mbox{$\mathbf{A}$}\mbox{$\mathbf{a}$}$ | (78)
$\,\textup{{vec}}(\mbox{$\mathbf{A}$}\mbox{$\mathbf{a}$})=\mbox{$\mathbf{A}$}\mbox{$\mathbf{a}$}$ since $\mathbf{A}$$\mathbf{a}$ is itself an $n\times 1$ column vector.
$\,\textup{{vec}}(\mbox{$\mathbf{A}$}\mbox{$\mathbf{B}$})=(\mbox{$\mathbf{I}$}_{p}\otimes\mbox{$\mathbf{A}$})\,\textup{{vec}}(\mbox{$\mathbf{B}$})=(\mbox{$\mathbf{B}$}^{\top}\otimes\mbox{$\mathbf{I}$}_{n})\,\textup{{vec}}(\mbox{$\mathbf{A}$})$ | (79)
$\,\textup{{vec}}(\mbox{$\mathbf{A}$}\mbox{$\mathbf{B}$}\mbox{$\mathbf{C}$})=(\mbox{$\mathbf{C}$}^{\top}\otimes\mbox{$\mathbf{A}$})\,\textup{{vec}}(\mbox{$\mathbf{B}$})$ | (80)
$(\mbox{$\mathbf{A}$}\otimes\mbox{$\mathbf{B}$})(\mbox{$\mathbf{C}$}\otimes\mbox{$\mathbf{D}$})=(\mbox{$\mathbf{A}$}\mbox{$\mathbf{C}$}\otimes\mbox{$\mathbf{B}$}\mbox{$\mathbf{D}$})$ | (81)
$(\mbox{$\mathbf{a}$}\otimes\mbox{$\mathbf{I}$}_{p})\mbox{$\mathbf{C}$}=(\mbox{$\mathbf{a}$}\otimes\mbox{$\mathbf{C}$})$ | (82)
$\mbox{$\mathbf{C}$}(\mbox{$\mathbf{a}$}^{\top}\otimes\mbox{$\mathbf{I}$}_{q})=(\mbox{$\mathbf{a}$}^{\top}\otimes\mbox{$\mathbf{C}$})$
$\mbox{$\mathbf{E}$}(\mbox{$\mathbf{a}$}^{\top}\otimes\mbox{$\mathbf{D}$})=\mbox{$\mathbf{E}$}\mbox{$\mathbf{D}$}(\mbox{$\mathbf{a}$}^{\top}\otimes\mbox{$\mathbf{I}$}_{p})=(\mbox{$\mathbf{a}$}^{\top}\otimes\mbox{$\mathbf{E}$}\mbox{$\mathbf{D}$})$
$(\mbox{$\mathbf{a}$}\otimes\mbox{$\mathbf{I}$}_{p})\mbox{$\mathbf{C}$}(\mbox{$\mathbf{b}$}^{\top}\otimes\mbox{$\mathbf{I}$}_{q})=(\mbox{$\mathbf{a}$}\mbox{$\mathbf{b}$}^{\top}\otimes\mbox{$\mathbf{C}$})$ | (83)
$(\mbox{$\mathbf{a}$}\otimes\mbox{$\mathbf{a}$})=\,\textup{{vec}}(\mbox{$\mathbf{a}$}\mbox{$\mathbf{a}$}^{\top})$ | (84)
$(\mbox{$\mathbf{a}$}^{\top}\otimes\mbox{$\mathbf{a}$}^{\top})=(\mbox{$\mathbf{a}$}\otimes\mbox{$\mathbf{a}$})^{\top}=(\,\textup{{vec}}(\mbox{$\mathbf{a}$}\mbox{$\mathbf{a}$}^{\top}))^{\top}$
$(\mbox{$\mathbf{A}$}^{\top}\otimes\mbox{$\mathbf{B}$}^{\top})=(\mbox{$\mathbf{A}$}\otimes\mbox{$\mathbf{B}$})^{\top}$ | (85)
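These identities are easy to check numerically. The following base-R snippet (a quick sanity check, not package code) verifies relations (79) and (80) on random matrices; `kronecker()` is the Kronecker product and `as.vector()` plays the role of vec.

```r
set.seed(1)
n <- 3; m <- 4; p <- 2; q <- 5
A <- matrix(rnorm(n * m), n, m)
B <- matrix(rnorm(m * p), m, p)
C <- matrix(rnorm(p * q), p, q)

# Relation (79): vec(AB) = (I_p %x% A) vec(B) = (B' %x% I_n) vec(A)
all.equal(as.vector(A %*% B),
          as.vector(kronecker(diag(p), A) %*% as.vector(B)))
all.equal(as.vector(A %*% B),
          as.vector(kronecker(t(B), diag(n)) %*% as.vector(A)))

# Relation (80): vec(ABC) = (C' %x% A) vec(B)
all.equal(as.vector(A %*% B %*% C),
          as.vector(kronecker(t(C), A) %*% as.vector(B)))
```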
### 4.2 A MARSS model with exogenous variables
The following is a commonly seen MARSS model with covariates
$\mbox{$\mathbf{g}$}_{t}$ and $\mbox{$\mathbf{h}$}_{t}$ appearing as additive
elements:
$\begin{split}\mbox{$\boldsymbol{x}$}_{t}&=\mbox{$\mathbf{B}$}\mbox{$\boldsymbol{x}$}_{t-1}+\mbox{$\mathbf{C}$}\mbox{$\mathbf{g}$}_{t}+\mbox{$\mathbf{w}$}_{t}\\\
\mbox{$\boldsymbol{y}$}_{t}&=\mbox{$\mathbf{Z}$}\mbox{$\boldsymbol{x}$}_{t}+\mbox{$\mathbf{F}$}\mbox{$\mathbf{h}$}_{t}+\mbox{$\mathbf{v}$}_{t}\end{split}$
We would typically want to estimate $\mathbf{C}$ or $\mathbf{F}$ which are the
influence of our covariates on our responses, $\boldsymbol{x}$ or
$\boldsymbol{y}$. Let’s say there are $p$ covariates in
$\mbox{$\mathbf{h}$}_{t}$ and $q$ covariates in $\mbox{$\mathbf{g}$}_{t}$.
Then we can write the above in vec form:
$\begin{split}\mbox{$\boldsymbol{x}$}_{t}&=(\mbox{$\boldsymbol{x}$}_{t-1}^{\top}\otimes\mbox{$\mathbf{I}$}_{m})\,\textup{{vec}}(\mbox{$\mathbf{B}$})+(\mbox{$\mathbf{g}$}_{t}^{\top}\otimes\mbox{$\mathbf{I}$}_{m})\,\textup{{vec}}(\mbox{$\mathbf{C}$})+\mbox{$\mathbf{w}$}_{t}\\\
\mbox{$\boldsymbol{y}$}_{t}&=(\mbox{$\boldsymbol{x}$}_{t}^{\top}\otimes\mbox{$\mathbf{I}$}_{n})\,\textup{{vec}}(\mbox{$\mathbf{Z}$})+(\mbox{$\mathbf{h}$}_{t}^{\top}\otimes\mbox{$\mathbf{I}$}_{n})\,\textup{{vec}}(\mbox{$\mathbf{F}$})+\mbox{$\mathbf{v}$}_{t}\end{split}$
(86)
Let’s say we put no constraints on $\mathbf{B}$, $\mathbf{Z}$, $\mathbf{Q}$,
$\mathbf{R}$, $\xi$, or $\Lambda$. Then in the form of equation (75),
$\begin{split}\mbox{$\boldsymbol{x}$}_{t}&=(\mbox{$\boldsymbol{x}$}_{t-1}^{\top}\otimes\mbox{$\mathbf{I}$}_{m})\,\textup{{vec}}(\mbox{$\mathbf{B}$}_{t})+\,\textup{{vec}}(\mbox{$\mathbf{u}$}_{t})+\mbox{$\mathbf{w}$}_{t}\\\
\mbox{$\boldsymbol{y}$}_{t}&=(\mbox{$\boldsymbol{x}$}_{t}^{\top}\otimes\mbox{$\mathbf{I}$}_{n})\,\textup{{vec}}(\mbox{$\mathbf{Z}$}_{t})+\,\textup{{vec}}(\mbox{$\mathbf{a}$}_{t})+\mbox{$\mathbf{v}$}_{t},\end{split}$
with the parameters defined as follows:
$\begin{split}\,\textup{{vec}}(\mbox{$\mathbf{B}$}_{t})&=\mbox{$\mathbf{f}$}_{t,b}+\mbox{$\mathbf{D}$}_{t,b}\boldsymbol{\beta};\,\mbox{$\mathbf{f}$}_{t,b}=0;\,\mbox{$\mathbf{D}$}_{t,b}=1;\,\boldsymbol{\beta}=\,\textup{{vec}}(\mbox{$\mathbf{B}$})\\\
\,\textup{{vec}}(\mbox{$\mathbf{u}$}_{t})&=\mbox{$\mathbf{f}$}_{t,u}+\mbox{$\mathbf{D}$}_{t,u}\boldsymbol{\upsilon};\,\mbox{$\mathbf{f}$}_{t,u}=0;\,\mbox{$\mathbf{D}$}_{t,u}=(\mbox{$\mathbf{g}$}_{t}^{\top}\otimes\mbox{$\mathbf{I}$}_{m});\,\boldsymbol{\upsilon}=\,\textup{{vec}}(\mbox{$\mathbf{C}$})\\\
\,\textup{{vec}}(\mbox{$\mathbf{Q}$}_{t})&=\mbox{$\mathbf{f}$}_{t,q}+\mbox{$\mathbf{D}$}_{t,q}\mbox{$\mathbf{q}$};\,\mbox{$\mathbf{f}$}_{t,q}=0;\,\mbox{$\mathbf{D}$}_{t,q}=\mbox{$\mathbf{D}$}_{q}\\\
\,\textup{{vec}}(\mbox{$\mathbf{Z}$}_{t})&=\mbox{$\mathbf{f}$}_{t,z}+\mbox{$\mathbf{D}$}_{t,z}\boldsymbol{\zeta};\,\mbox{$\mathbf{f}$}_{t,z}=0;\,\mbox{$\mathbf{D}$}_{t,z}=1;\,\boldsymbol{\zeta}=\,\textup{{vec}}(\mbox{$\mathbf{Z}$})\\\
\,\textup{{vec}}(\mbox{$\mathbf{a}$}_{t})&=\mbox{$\mathbf{f}$}_{t,a}+\mbox{$\mathbf{D}$}_{t,a}\boldsymbol{\alpha};\,\mbox{$\mathbf{f}$}_{t,a}=0;\,\mbox{$\mathbf{D}$}_{t,a}=(\mbox{$\mathbf{h}$}_{t}^{\top}\otimes\mbox{$\mathbf{I}$}_{n});\,\boldsymbol{\alpha}=\,\textup{{vec}}(\mbox{$\mathbf{F}$})\\\
\,\textup{{vec}}(\mbox{$\mathbf{R}$}_{t})&=\mbox{$\mathbf{f}$}_{t,r}+\mbox{$\mathbf{D}$}_{t,r}\mbox{$\mathbf{r}$};\,\mbox{$\mathbf{f}$}_{t,r}=0;\,\mbox{$\mathbf{D}$}_{t,r}=\mbox{$\mathbf{D}$}_{r}\\\
\,\textup{{vec}}(\mbox{\boldmath$\Lambda$})&=\mbox{$\mathbf{f}$}_{\lambda}+\mbox{$\mathbf{D}$}_{\lambda}\boldsymbol{\lambda};\,\mbox{$\mathbf{f}$}_{\lambda}=0\\\
\,\textup{{vec}}(\mbox{\boldmath$\xi$})&=\mbox{\boldmath$\xi$}=\mbox{$\mathbf{f}$}_{\xi}+\mbox{$\mathbf{D}$}_{\xi}\mbox{$\mathbf{p}$};\,\mbox{$\mathbf{f}$}_{\xi}=0;\,\mbox{$\mathbf{D}$}_{\xi}=1\end{split}$
Note that variance-covariance matrices are never truly unconstrained (at a minimum they must be symmetric), so we
use $\mbox{$\mathbf{D}$}_{q}$, $\mbox{$\mathbf{D}$}_{r}$ and
$\mbox{$\mathbf{D}$}_{\lambda}$ to specify the symmetry within the matrix.
The transformation of the simple MARSS model with covariates (equation 86) into
the form of equation (75) may seem a little painful, but the advantage is that a
single EM algorithm can be used for a large class of models. Presumably, the
transformation of the equation will be hidden from users by a wrapper function
that does the reformulation before passing the model to the general EM
algorithm. In the MARSS R package, this reformulation is done in the
`MARSS.marxss` function.
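To illustrate the reformulation, here is a base-R sketch (with made-up dimensions and values, not code from the package) that writes $\mbox{$\mathbf{u}$}_{t}=\mbox{$\mathbf{C}$}\mbox{$\mathbf{g}$}_{t}$ in the form $\mbox{$\mathbf{f}$}_{t,u}+\mbox{$\mathbf{D}$}_{t,u}\boldsymbol{\upsilon}$ with $\mbox{$\mathbf{D}$}_{t,u}=(\mbox{$\mathbf{g}$}_{t}^{\top}\otimes\mbox{$\mathbf{I}$}_{m})$ and $\boldsymbol{\upsilon}=\,\textup{{vec}}(\mbox{$\mathbf{C}$})$:

```r
m <- 2                                  # number of state processes
q <- 3                                  # number of covariates in g_t
C <- matrix(c(0.2, -0.1, 0.5,
              0.3,  0.0, 1.1), m, q)    # covariate effects (the thing we estimate)
g_t <- c(1.5, -0.2, 0.7)                # covariate values at time t (known inputs)

upsilon <- as.vector(C)                 # the free elements, vec(C)
D_tu <- kronecker(t(g_t), diag(m))      # time-varying design matrix, (g_t' %x% I_m)
f_tu <- rep(0, m)                       # no fixed part in this example

# u_t written two ways: directly and via the vec form
all.equal(as.vector(C %*% g_t), as.vector(f_tu + D_tu %*% upsilon))  # TRUE
```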
### 4.3 A general MARSS model with exogenous variables
Let’s imagine now a very general MARSS model with various ‘inputs’. ‘Input’
here just means that it is some fully known matrix rather than something we
are estimating. It could be a sequence of 0s and 1s if, for example, we were
fitting a before/after sort of model. Below, the letters with a $t$ subscript
are the inputs, except $\boldsymbol{x}$, $\boldsymbol{y}$, $\mathbf{w}$ and
$\mathbf{v}$.
$\begin{split}\mbox{$\boldsymbol{x}$}_{t}&=\mbox{$\mathbf{J}$}_{t}\mbox{$\mathbf{B}$}\mbox{$\mathbf{L}$}_{t}\mbox{$\boldsymbol{x}$}_{t-1}+\mbox{$\mathbf{C}$}_{t}\mbox{$\mathbf{U}$}\mbox{$\mathbf{g}$}_{t}+\mbox{$\mathbf{G}$}_{t}\mbox{$\mathbf{w}$}_{t}\\\
\mbox{$\boldsymbol{y}$}_{t}&=\mbox{$\mathbf{M}$}_{t}\mbox{$\mathbf{Z}$}\mbox{$\mathbf{N}$}_{t}\mbox{$\boldsymbol{x}$}_{t}+\mbox{$\mathbf{F}$}_{t}\mbox{$\mathbf{A}$}\mbox{$\mathbf{h}$}_{t}+\mbox{$\mathbf{H}$}_{t}\mbox{$\mathbf{v}$}_{t}\end{split}$
(87)
In vec form, this is:
$\begin{split}\mbox{$\boldsymbol{x}$}_{t}&=(\mbox{$\boldsymbol{x}$}_{t-1}^{\top}\otimes\mbox{$\mathbf{I}$}_{m})(\mbox{$\mathbf{L}$}_{t}^{\top}\otimes\mbox{$\mathbf{J}$}_{t})\,\textup{{vec}}(\mbox{$\mathbf{B}$})+(\mbox{$\mathbf{g}$}_{t}^{\top}\otimes\mbox{$\mathbf{C}$}_{t})\,\textup{{vec}}(\mbox{$\mathbf{U}$})+\mbox{$\mathbf{G}$}_{t}\mbox{$\mathbf{w}$}_{t}\\\
&=(\mbox{$\boldsymbol{x}$}_{t-1}^{\top}\otimes\mbox{$\mathbf{I}$}_{m})(\mbox{$\mathbf{L}$}_{t}^{\top}\otimes\mbox{$\mathbf{J}$}_{t})(\mbox{$\mathbf{f}$}_{b}+\mbox{$\mathbf{D}$}_{b}\boldsymbol{\beta})+(\mbox{$\mathbf{g}$}_{t}^{\top}\otimes\mbox{$\mathbf{C}$}_{t})(\mbox{$\mathbf{f}$}_{u}+\mbox{$\mathbf{D}$}_{u}\boldsymbol{\upsilon})+\mbox{$\mathbf{G}$}_{t}\mbox{$\mathbf{w}$}_{t}\\\
\mbox{$\mathbf{W}$}_{t}&\sim\,\textup{{MVN}}(0,\mbox{$\mathbf{G}$}_{t}\mbox{$\mathbf{Q}$}\mbox{$\mathbf{G}$}_{t}^{\top})\\\
\\\
\mbox{$\boldsymbol{y}$}_{t}&=(\mbox{$\boldsymbol{x}$}_{t}^{\top}\otimes\mbox{$\mathbf{I}$}_{n})(\mbox{$\mathbf{N}$}_{t}^{\top}\otimes\mbox{$\mathbf{M}$}_{t})\,\textup{{vec}}(\mbox{$\mathbf{Z}$})+(\mbox{$\mathbf{h}$}_{t}^{\top}\otimes\mbox{$\mathbf{F}$}_{t})\,\textup{{vec}}(\mbox{$\mathbf{A}$})+\mbox{$\mathbf{H}$}_{t}\mbox{$\mathbf{v}$}_{t}\\\
&=(\mbox{$\boldsymbol{x}$}_{t}^{\top}\otimes\mbox{$\mathbf{I}$}_{n})\mathbb{Z}_{t}(\mbox{$\mathbf{f}$}_{z}+\mbox{$\mathbf{D}$}_{z}\boldsymbol{\zeta})+\mathbb{A}_{t}(\mbox{$\mathbf{f}$}_{a}+\mbox{$\mathbf{D}$}_{a}\boldsymbol{\alpha})+\mbox{$\mathbf{H}$}_{t}\mbox{$\mathbf{v}$}_{t}\\\
\mbox{$\mathbf{V}$}_{t}&\sim\,\textup{{MVN}}(0,\mbox{$\mathbf{H}$}_{t}\mbox{$\mathbf{R}$}\mbox{$\mathbf{H}$}_{t}^{\top})\\\
\\\
\mbox{$\boldsymbol{X}$}_{t_{0}}&\sim\,\textup{{MVN}}(\mbox{$\mathbf{f}$}_{\xi}+\mbox{$\mathbf{D}$}_{\xi}\mbox{$\mathbf{p}$},\mbox{$\mathbf{F}$}\mbox{\boldmath$\Lambda$}\mbox{$\mathbf{F}$}^{\top}),\text{
where
}\,\textup{{vec}}(\mbox{\boldmath$\Lambda$})=\mbox{$\mathbf{f}$}_{\lambda}+\mbox{$\mathbf{D}$}_{\lambda}\boldsymbol{\lambda}\end{split}$
(88)
In the last line of the observation equation above, $\mathbb{Z}_{t}$ is
shorthand for $(\mbox{$\mathbf{N}$}_{t}^{\top}\otimes\mbox{$\mathbf{M}$}_{t})$
and $\mathbb{A}_{t}$ for
$(\mbox{$\mathbf{h}$}_{t}^{\top}\otimes\mbox{$\mathbf{F}$}_{t})$. We could
write down a likelihood function for this model, but written this way
the model presumes that
$\mbox{$\mathbf{H}$}_{t}\mbox{$\mathbf{R}$}\mbox{$\mathbf{H}$}_{t}^{\top}$,
$\mbox{$\mathbf{G}$}_{t}\mbox{$\mathbf{Q}$}\mbox{$\mathbf{G}$}_{t}^{\top}$,
and $\mbox{$\mathbf{F}$}\mbox{\boldmath$\Lambda$}\mbox{$\mathbf{F}$}^{\top}$
are valid variance-covariance matrices. I will actually write this model
differently below because I don’t want to make that assumption.
We define the $\mathbf{f}$ and $\mathbf{D}$ parameters as follows.
$\begin{split}\,\textup{{vec}}(\mbox{$\mathbf{B}$}_{t})&=\mbox{$\mathbf{f}$}_{t,b}+\mbox{$\mathbf{D}$}_{t,b}\boldsymbol{\beta}=(\mbox{$\mathbf{L}$}_{t}^{\top}\otimes\mbox{$\mathbf{J}$}_{t})\mbox{$\mathbf{f}$}_{b}+(\mbox{$\mathbf{L}$}_{t}^{\top}\otimes\mbox{$\mathbf{J}$}_{t})\mbox{$\mathbf{D}$}_{b}\boldsymbol{\beta}\\\
\,\textup{{vec}}(\mbox{$\mathbf{u}$}_{t})&=\mbox{$\mathbf{f}$}_{t,u}+\mbox{$\mathbf{D}$}_{t,u}\boldsymbol{\upsilon}=(\mbox{$\mathbf{g}$}_{t}^{\top}\otimes\mbox{$\mathbf{C}$}_{t})\mbox{$\mathbf{f}$}_{u}+(\mbox{$\mathbf{g}$}_{t}^{\top}\otimes\mbox{$\mathbf{C}$}_{t})\mbox{$\mathbf{D}$}_{u}\boldsymbol{\upsilon}\\\
\,\textup{{vec}}(\mbox{$\mathbf{Q}$}_{t})&=\mbox{$\mathbf{f}$}_{t,q}+\mbox{$\mathbf{D}$}_{t,q}\mbox{$\mathbf{q}$}=(\mbox{$\mathbf{G}$}_{t}\otimes\mbox{$\mathbf{G}$}_{t})\mbox{$\mathbf{f}$}_{q}+(\mbox{$\mathbf{G}$}_{t}\otimes\mbox{$\mathbf{G}$}_{t})\mbox{$\mathbf{D}$}_{q}\mbox{$\mathbf{q}$}\\\
\,\textup{{vec}}(\mbox{$\mathbf{Z}$}_{t})&=\mbox{$\mathbf{f}$}_{t,z}+\mbox{$\mathbf{D}$}_{t,z}\boldsymbol{\zeta}=(\mbox{$\mathbf{N}$}_{t}^{\top}\otimes\mbox{$\mathbf{M}$}_{t})\mbox{$\mathbf{f}$}_{z}+(\mbox{$\mathbf{N}$}_{t}^{\top}\otimes\mbox{$\mathbf{M}$}_{t})\mbox{$\mathbf{D}$}_{z}\boldsymbol{\zeta}\\\
\,\textup{{vec}}(\mbox{$\mathbf{a}$}_{t})&=\mbox{$\mathbf{f}$}_{t,a}+\mbox{$\mathbf{D}$}_{t,a}\boldsymbol{\alpha}=(\mbox{$\mathbf{h}$}_{t}^{\top}\otimes\mbox{$\mathbf{F}$}_{t})\mbox{$\mathbf{f}$}_{a}+(\mbox{$\mathbf{h}$}_{t}^{\top}\otimes\mbox{$\mathbf{F}$}_{t})\mbox{$\mathbf{D}$}_{a}\boldsymbol{\alpha}\\\
\,\textup{{vec}}(\mbox{$\mathbf{R}$}_{t})&=\mbox{$\mathbf{f}$}_{t,r}+\mbox{$\mathbf{D}$}_{t,r}\mbox{$\mathbf{r}$}=(\mbox{$\mathbf{H}$}_{t}\otimes\mbox{$\mathbf{H}$}_{t})\mbox{$\mathbf{f}$}_{r}+(\mbox{$\mathbf{H}$}_{t}\otimes\mbox{$\mathbf{H}$}_{t})\mbox{$\mathbf{D}$}_{r}\mbox{$\mathbf{r}$}\\\
\,\textup{{vec}}(\mbox{\boldmath$\Lambda$})&=\mbox{$\mathbf{f}$}_{\lambda}+\mbox{$\mathbf{D}$}_{\lambda}\boldsymbol{\lambda}=0+\mbox{$\mathbf{D}$}_{\lambda}\boldsymbol{\lambda}\\\
\,\textup{{vec}}(\mbox{\boldmath$\xi$})&=\mbox{\boldmath$\xi$}=\mbox{$\mathbf{f}$}_{\xi}+\mbox{$\mathbf{D}$}_{\xi}\mbox{$\mathbf{p}$}=0+1\mbox{$\mathbf{p}$}\end{split}$
Here, for example $\mbox{$\mathbf{f}$}_{b}$ and $\mbox{$\mathbf{D}$}_{b}$
indicate the linear constraints on $\mathbf{B}$ and
$\mbox{$\mathbf{f}$}_{t,b}$ is
$(\mbox{$\mathbf{L}$}_{t}^{\top}\otimes\mbox{$\mathbf{J}$}_{t})\mbox{$\mathbf{f}$}_{b}$
and $\mbox{$\mathbf{D}$}_{t,b}$ is
$(\mbox{$\mathbf{L}$}_{t}^{\top}\otimes\mbox{$\mathbf{J}$}_{t})\mbox{$\mathbf{D}$}_{b}$.
The elements of $\mathbf{B}$ that are being estimated are $\boldsymbol{\beta}$
arranged as a column vector.
As usual, this reformulation looks cumbersome, but would be hidden from the
user presumably.
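As a small, made-up example of folding the inputs into the $\mathbf{f}$ and $\mathbf{D}$ matrices, the sketch below (illustrative values only) forms $\mbox{$\mathbf{f}$}_{t,b}$ and $\mbox{$\mathbf{D}$}_{t,b}$ from base $\mbox{$\mathbf{f}$}_{b}$, $\mbox{$\mathbf{D}$}_{b}$ and inputs $\mbox{$\mathbf{J}$}_{t}$, $\mbox{$\mathbf{L}$}_{t}$, and checks that $\,\textup{{vec}}(\mbox{$\mathbf{J}$}_{t}\mbox{$\mathbf{B}$}\mbox{$\mathbf{L}$}_{t})=\mbox{$\mathbf{f}$}_{t,b}+\mbox{$\mathbf{D}$}_{t,b}\boldsymbol{\beta}$:

```r
m <- 2
J_t <- diag(c(1, 0))                     # known input matrices (illustrative values)
L_t <- matrix(c(1, 0, 1, 1), m, m)

# Say B = [b1 0; 0 b2]: two free values, no fixed part
beta <- c(0.8, 0.5)
D_b  <- matrix(c(1, 0, 0, 0,  0, 0, 0, 1), nrow = 4)  # vec(B) = D_b %*% beta
f_b  <- rep(0, 4)
B    <- matrix(f_b + D_b %*% beta, m, m)

# Time-varying versions: f_{t,b} = (L_t' %x% J_t) f_b, D_{t,b} = (L_t' %x% J_t) D_b
f_tb <- kronecker(t(L_t), J_t) %*% f_b
D_tb <- kronecker(t(L_t), J_t) %*% D_b

all.equal(as.vector(J_t %*% B %*% L_t), as.vector(f_tb + D_tb %*% beta))  # TRUE
```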
### 4.4 The expected log-likelihood function
As mentioned above, we do not necessarily want to assume that
$\mbox{$\mathbf{G}$}_{t}\mbox{$\mathbf{Q}$}_{t}\mbox{$\mathbf{G}$}_{t}^{\top}$,
$\mbox{$\mathbf{H}$}_{t}\mbox{$\mathbf{R}$}_{t}\mbox{$\mathbf{H}$}_{t}^{\top}$,
and $\mbox{$\mathbf{F}$}\mbox{\boldmath$\Lambda$}\mbox{$\mathbf{F}$}^{\top}$
are valid variance-covariance matrices. This would rule out many MARSS models
that we would like to fit. For example, if $\mbox{$\mathbf{Q}$}=\sigma^{2}$
and $\mbox{$\mathbf{G}$}=\begin{bmatrix}1\\\ 1\\\ 1\end{bmatrix}$,
$\mbox{$\mathbf{G}$}\mbox{$\mathbf{Q}$}\mbox{$\mathbf{G}$}^{\top}$ would be a
rank-deficient, and thus invalid, variance-covariance matrix. However, this is
a valid MARSS model.
Instead I will define
$\Phi_{t}=(\mbox{$\mathbf{G}$}_{t}^{\top}\mbox{$\mathbf{G}$}_{t})^{-1}\mbox{$\mathbf{G}$}_{t}^{\top}$,
$\Xi_{t}=(\mbox{$\mathbf{H}$}_{t}^{\top}\mbox{$\mathbf{H}$}_{t})^{-1}\mbox{$\mathbf{H}$}_{t}^{\top}$,
and
$\Pi=(\mbox{$\mathbf{F}$}^{\top}\mbox{$\mathbf{F}$})^{-1}\mbox{$\mathbf{F}$}^{\top}$.
I then require that the inverses of
$\mbox{$\mathbf{G}$}_{t}^{\top}\mbox{$\mathbf{G}$}_{t}$,
$\mbox{$\mathbf{H}$}_{t}^{\top}\mbox{$\mathbf{H}$}_{t}$, and
$\mbox{$\mathbf{F}$}^{\top}\mbox{$\mathbf{F}$}$ exist and that
$\mbox{$\mathbf{f}$}_{t,q}+\mbox{$\mathbf{D}$}_{t,q}\mbox{$\mathbf{q}$}$,
$\mbox{$\mathbf{f}$}_{t,r}+\mbox{$\mathbf{D}$}_{t,r}\mbox{$\mathbf{r}$}$, and
$\mbox{$\mathbf{f}$}_{\lambda}+\mbox{$\mathbf{D}$}_{\lambda}\boldsymbol{\lambda}$
specify valid variance-covariance matrices. These are much less stringent
restrictions.
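For intuition, $\Phi_{t}$, $\Xi_{t}$ and $\Pi$ act as left inverses of $\mbox{$\mathbf{G}$}_{t}$, $\mbox{$\mathbf{H}$}_{t}$ and $\mathbf{F}$ when those matrices have full column rank, so applying them strips the input matrices off the noise terms. A tiny base-R sketch (illustrative values, not package code):

```r
G <- matrix(c(1, 1, 1,
              0, 1, 2), nrow = 3)    # 3 state equations driven by 2 noise terms
Phi <- solve(t(G) %*% G) %*% t(G)    # (G'G)^{-1} G'
round(Phi %*% G, 10)                 # the identity matrix: Phi is a left inverse of G
```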
For the purpose of writing down the expected log-likelihood, our MARSS model
is now written
$\begin{gathered}\Phi_{t}\mbox{$\boldsymbol{x}$}_{t}=\Phi_{t}(\mbox{$\boldsymbol{x}$}_{t-1}^{\top}\otimes\mbox{$\mathbf{I}$}_{m})\,\textup{{vec}}(\mbox{$\mathbf{B}$}_{t})+\Phi_{t}\,\textup{{vec}}(\mbox{$\mathbf{u}$}_{t})+\mbox{$\mathbf{w}$}_{t},\quad\text{
where }\mbox{$\mathbf{W}$}_{t}\sim\mathrm{MVN}(0,\mbox{$\mathbf{Q}$}_{t})\\\
\Xi_{t}\mbox{$\boldsymbol{y}$}_{t}=\Xi_{t}(\mbox{$\boldsymbol{x}$}_{t}^{\top}\otimes\mbox{$\mathbf{I}$}_{n})\,\textup{{vec}}(\mbox{$\mathbf{Z}$}_{t})+\Xi_{t}\,\textup{{vec}}(\mbox{$\mathbf{a}$}_{t})+\mbox{$\mathbf{v}$}_{t},\quad\text{
where }\mbox{$\mathbf{V}$}_{t}\sim\mathrm{MVN}(0,\mbox{$\mathbf{R}$}_{t})\\\
\Pi\mbox{$\boldsymbol{x}$}_{t_{0}}=\Pi\mbox{\boldmath$\xi$}+\mbox{$\mathbf{l}$},\quad\text{
where
}\mbox{$\mathbf{L}$}\sim\,\textup{{MVN}}(0,\mbox{\boldmath$\Lambda$})\end{gathered}$
(89)
As mentioned before, this relies on $\mathbf{G}$ and $\mathbf{H}$ having forms
that do not lead to over- or under-constrained linear systems.
To derive the EM update equations, we need the expected log-likelihood
function for the time-varying MARSS model. Using equation (89), we get
$\begin{split}&\,\textup{{E}}_{\text{{\bf
XY}}}[\log\mbox{$\mathbf{L}$}(\mbox{$\boldsymbol{Y}$},\mbox{$\boldsymbol{X}$};\Theta)]=-\frac{1}{2}\,\textup{{E}}_{\text{{\bf
XY}}}\bigg{(}\sum_{1}^{T}(\mbox{$\boldsymbol{Y}$}_{t}-(\mbox{$\boldsymbol{X}$}_{t}^{\top}\otimes\mbox{$\mathbf{I}$}_{m})\,\textup{{vec}}(\mbox{$\mathbf{Z}$}_{t})-\,\textup{{vec}}(\mbox{$\mathbf{a}$}_{t}))^{\top}\Xi_{t}^{\top}\mbox{$\mathbf{R}$}_{t}^{-1}\Xi_{t}\\\
&\quad(\mbox{$\boldsymbol{Y}$}_{t}-(\mbox{$\boldsymbol{X}$}_{t}^{\top}\otimes\mbox{$\mathbf{I}$}_{n})\,\textup{{vec}}(\mbox{$\mathbf{Z}$}_{t})-\,\textup{{vec}}(\mbox{$\mathbf{a}$}_{t}))+\sum_{1}^{T}\log|\mbox{$\mathbf{R}$}_{t}|\\\
&\quad+\sum_{t_{0}+1}^{T}(\mbox{$\boldsymbol{X}$}_{t}-(\mbox{$\boldsymbol{X}$}_{t-1}^{\top}\otimes\mbox{$\mathbf{I}$}_{m})\,\textup{{vec}}(\mbox{$\mathbf{B}$}_{t})-\,\textup{{vec}}(\mbox{$\mathbf{u}$}_{t}))^{\top}\Phi_{t}^{\top}\mbox{$\mathbf{Q}$}_{t}^{-1}\Phi_{t}\\\
&\quad(\mbox{$\boldsymbol{X}$}_{t}-(\mbox{$\boldsymbol{X}$}_{t-1}^{\top}\otimes\mbox{$\mathbf{I}$}_{m})\,\textup{{vec}}(\mbox{$\mathbf{B}$}_{t})-\,\textup{{vec}}(\mbox{$\mathbf{u}$}_{t}))+\sum_{t_{0}+1}^{T}\log|\mbox{$\mathbf{Q}$}_{t}|\\\
&\quad+(\mbox{$\boldsymbol{X}$}_{t_{0}}-\,\textup{{vec}}(\mbox{\boldmath$\xi$}))^{\top}\Pi^{\top}\mbox{\boldmath$\Lambda$}^{-1}\Pi(\mbox{$\boldsymbol{X}$}_{t_{0}}-\,\textup{{vec}}(\mbox{\boldmath$\xi$}))+\log|\mbox{\boldmath$\Lambda$}|+\log
2\pi\bigg{)}\end{split}$ (90)
If any $\mbox{$\mathbf{G}$}_{t}$, $\mbox{$\mathbf{H}$}_{t}$ or $\mathbf{F}$ is
all zero, then the line in the likelihood with $\mbox{$\mathbf{Q}$}_{t}$,
$\mbox{$\mathbf{R}$}_{t}$ or $\Lambda$, respectively, does not appear. If any
$\mbox{$\boldsymbol{x}$}_{t_{0}}$ are fixed, meaning the corresponding row in
$\mathbf{F}$ is all zero, then
$\mbox{$\boldsymbol{X}$}_{t_{0}}\equiv\mbox{\boldmath$\xi$}$ anywhere it
appears in the likelihood. The way I have written the general equation, some
$\mbox{$\boldsymbol{x}$}_{t_{0}}$ might be fixed and others stochastic.
The vec of the model parameters are defined as follows:
$\begin{split}\,\textup{{vec}}(\mbox{$\mathbf{B}$}_{t})&=\mbox{$\mathbf{f}$}_{t,b}+\mbox{$\mathbf{D}$}_{t,b}\boldsymbol{\beta}\\\
\,\textup{{vec}}(\mbox{$\mathbf{u}$}_{t})&=\mbox{$\mathbf{f}$}_{t,u}+\mbox{$\mathbf{D}$}_{t,u}\boldsymbol{\upsilon}\\\
\,\textup{{vec}}(\mbox{$\mathbf{Z}$}_{t})&=\mbox{$\mathbf{f}$}_{t,z}+\mbox{$\mathbf{D}$}_{t,z}\boldsymbol{\zeta}\\\
\,\textup{{vec}}(\mbox{$\mathbf{a}$}_{t})&=\mbox{$\mathbf{f}$}_{t,a}+\mbox{$\mathbf{D}$}_{t,a}\boldsymbol{\alpha}\\\
\,\textup{{vec}}(\mbox{$\mathbf{Q}$}_{t})&=\mbox{$\mathbf{f}$}_{t,q}+\mbox{$\mathbf{D}$}_{t,q}\mbox{$\mathbf{q}$}\\\
\,\textup{{vec}}(\mbox{$\mathbf{R}$}_{t})&=\mbox{$\mathbf{f}$}_{t,r}+\mbox{$\mathbf{D}$}_{t,r}\mbox{$\mathbf{r}$}\\\
\,\textup{{vec}}(\mbox{\boldmath$\xi$})&=\mbox{$\mathbf{f}$}_{\xi}+\mbox{$\mathbf{D}$}_{\xi}\mbox{$\mathbf{p}$}\\\
\,\textup{{vec}}(\mbox{\boldmath$\Lambda$})&=\mbox{$\mathbf{f}$}_{\lambda}+\mbox{$\mathbf{D}$}_{\lambda}\boldsymbol{\lambda}\\\
\Phi_{t}&=(\mbox{$\mathbf{G}$}_{t}^{\top}\mbox{$\mathbf{G}$}_{t})^{-1}\mbox{$\mathbf{G}$}_{t}^{\top}\\\
\Xi_{t}&=(\mbox{$\mathbf{H}$}_{t}^{\top}\mbox{$\mathbf{H}$}_{t})^{-1}\mbox{$\mathbf{H}$}_{t}^{\top}\\\
\Pi&=(\mbox{$\mathbf{F}$}^{\top}\mbox{$\mathbf{F}$})^{-1}\mbox{$\mathbf{F}$}^{\top}\end{split}$
## 5 The constrained update equations
The derivation proceeds by taking the partial derivative of equation 90 with
respect to the estimated terms ($\boldsymbol{\zeta}$,
$\boldsymbol{\alpha}$, etc.), setting the derivative to zero, and solving for
those estimated terms. Conceptually, the algebraic steps in the derivation are
similar to those in the unconstrained derivation.
### 5.1 The general $\mathbf{u}$ update equations
We take the derivative of $\Psi$ (equation 90) with respect to
$\boldsymbol{\upsilon}$.
$\begin{split}\partial\Psi/\partial\boldsymbol{\upsilon}&=-\frac{1}{2}\sum_{t=1}^{T}\bigg{(}-\partial(\,\textup{{E}}[\mbox{$\boldsymbol{X}$}_{t}^{\top}\mathbb{Q}_{t}\mbox{$\mathbf{D}$}_{t,u}\boldsymbol{\upsilon}])/\partial\boldsymbol{\upsilon}-\partial(\,\textup{{E}}[\boldsymbol{\upsilon}^{\top}\mbox{$\mathbf{D}$}_{t,u}^{\top}\mathbb{Q}_{t}\mbox{$\boldsymbol{X}$}_{t}])/\partial\boldsymbol{\upsilon}\\\
&+\partial(\,\textup{{E}}[((\mbox{$\boldsymbol{X}$}_{t-1}^{\top}\otimes\mbox{$\mathbf{I}$}_{m})\,\textup{{vec}}(\mbox{$\mathbf{B}$}_{t}))^{\top}\mathbb{Q}_{t}\mbox{$\mathbf{D}$}_{t,u}\boldsymbol{\upsilon}])/\partial\boldsymbol{\upsilon}+\partial(\,\textup{{E}}[\boldsymbol{\upsilon}^{\top}\mbox{$\mathbf{D}$}_{t,u}^{\top}\mathbb{Q}_{t}(\mbox{$\boldsymbol{X}$}_{t-1}^{\top}\otimes\mbox{$\mathbf{I}$}_{m})\,\textup{{vec}}(\mbox{$\mathbf{B}$}_{t})])/\partial\boldsymbol{\upsilon}\\\
&+\partial(\boldsymbol{\upsilon}^{\top}\mbox{$\mathbf{D}$}_{t,u}^{\top}\mathbb{Q}_{t}\mbox{$\mathbf{D}$}_{t,u}\boldsymbol{\upsilon})/\partial\boldsymbol{\upsilon}+\partial(\,\textup{{E}}[\mbox{$\mathbf{f}$}_{t,u}^{\top}\mathbb{Q}_{t}\mbox{$\mathbf{D}$}_{t,u}\boldsymbol{\upsilon}])/\partial\boldsymbol{\upsilon}+\partial(\,\textup{{E}}[\boldsymbol{\upsilon}^{\top}\mbox{$\mathbf{D}$}_{t,u}^{\top}\mathbb{Q}_{t}\mbox{$\mathbf{f}$}_{t,u}])/\partial\boldsymbol{\upsilon}\bigg{)}\end{split}$
(91)
where $\mathbb{Q}_{t}=\Phi_{t}^{\top}\mbox{$\mathbf{Q}$}_{t}^{-1}\Phi_{t}$.
Since $\boldsymbol{\upsilon}$ is to the far left or right in each term, the
derivative is simple using the derivative terms in table 3.1.
$\partial\Psi/\partial\boldsymbol{\upsilon}$ becomes:
$\begin{split}\partial\Psi/\partial\boldsymbol{\upsilon}&=-\frac{1}{2}\sum_{t=1}^{T}\bigg{(}-2\,\textup{{E}}[\mbox{$\boldsymbol{X}$}_{t}^{\top}\mathbb{Q}_{t}\mbox{$\mathbf{D}$}_{t,u}]+2\,\textup{{E}}[((\mbox{$\boldsymbol{X}$}_{t-1}^{\top}\otimes\mbox{$\mathbf{I}$}_{m})\,\textup{{vec}}(\mbox{$\mathbf{B}$}_{t}))^{\top}\mathbb{Q}_{t}\mbox{$\mathbf{D}$}_{t,u}]\\\
&\quad+2(\boldsymbol{\upsilon}^{\top}\mbox{$\mathbf{D}$}_{t,u}^{\top}\mathbb{Q}_{t}\mbox{$\mathbf{D}$}_{t,u})+2\,\textup{{E}}[\mbox{$\mathbf{f}$}_{t,u}^{\top}\mathbb{Q}_{t}\mbox{$\mathbf{D}$}_{t,u}]\bigg{)}\end{split}$
(92)
Set the left side to zero and transpose the whole equation.
$\begin{split}\mathbf{0}=\sum_{t=1}^{T}\bigg{(}\mbox{$\mathbf{D}$}_{t,u}^{\top}\mathbb{Q}_{t}\,\textup{{E}}[\mbox{$\boldsymbol{X}$}_{t}]-\mbox{$\mathbf{D}$}_{t,u}^{\top}\mathbb{Q}_{t}(\,\textup{{E}}[\mbox{$\boldsymbol{X}$}_{t-1}]^{\top}\otimes\mbox{$\mathbf{I}$}_{m})\,\textup{{vec}}(\mbox{$\mathbf{B}$}_{t})-\mbox{$\mathbf{D}$}_{t,u}^{\top}\mathbb{Q}_{t}\mbox{$\mathbf{D}$}_{t,u}\boldsymbol{\upsilon}-\mbox{$\mathbf{D}$}_{t,u}^{\top}\mathbb{Q}_{t}\mbox{$\mathbf{f}$}_{t,u}\bigg{)}\end{split}$
(93)
Thus,
$\big{(}\sum_{t=1}^{T}\mbox{$\mathbf{D}$}_{t,u}^{\top}\mathbb{Q}_{t}\mbox{$\mathbf{D}$}_{t,u}\big{)}\boldsymbol{\upsilon}=\sum_{t=1}^{T}\mbox{$\mathbf{D}$}_{t,u}^{\top}\mathbb{Q}_{t}\big{(}\,\textup{{E}}[\mbox{$\boldsymbol{X}$}_{t}]-(\,\textup{{E}}[\mbox{$\boldsymbol{X}$}_{t-1}]^{\top}\otimes\mbox{$\mathbf{I}$}_{m})\,\textup{{vec}}(\mbox{$\mathbf{B}$}_{t})-\mbox{$\mathbf{f}$}_{t,u}\big{)}$
(94)
We solve for $\boldsymbol{\upsilon}$, and the new $\boldsymbol{\upsilon}$ for
the $j+1$ iteration of the EM algorithm is
$\begin{split}\boldsymbol{\upsilon}_{j+1}&=\bigg{(}\sum_{t=1}^{T}\mbox{$\mathbf{D}$}_{t,u}^{\top}\mathbb{Q}_{t}\mbox{$\mathbf{D}$}_{t,u}\bigg{)}^{-1}\sum_{t=1}^{T}\mbox{$\mathbf{D}$}_{t,u}^{\top}\mathbb{Q}_{t}\big{(}\widetilde{\mbox{$\mathbf{x}$}}_{t}-(\widetilde{\mbox{$\mathbf{x}$}}_{t-1}^{\top}\otimes\mbox{$\mathbf{I}$}_{m})\,\textup{{vec}}(\mbox{$\mathbf{B}$}_{t})-\mbox{$\mathbf{f}$}_{t,u}\big{)}\end{split}$
(95)
The update equation requires that
$\sum_{t=1}^{T}\mbox{$\mathbf{D}$}_{t,u}^{\top}\mathbb{Q}_{t}\mbox{$\mathbf{D}$}_{t,u}$
is invertible. It generally will be if
$\Phi_{t}\mbox{$\mathbf{Q}$}_{t}\Phi_{t}^{\top}$ is a proper variance-
covariance matrix (positive semi-definite) and $\mbox{$\mathbf{D}$}_{t,u}$ is
full rank. If $\mbox{$\mathbf{G}$}_{t}$ has all-zero rows then
$\Phi_{t}\mbox{$\mathbf{Q}$}_{t}\Phi_{t}^{\top}$ has zeros on the diagonal and
we have a partially deterministic model. In this case, $\mathbb{Q}_{t}$ will
have all-zero rows/columns and
$\mbox{$\mathbf{D}$}_{t,u}^{\top}\mathbb{Q}_{t}\mbox{$\mathbf{D}$}_{t,u}$ will
not be invertible unless the corresponding row of $\mbox{$\mathbf{D}$}_{t,u}$
is zero. This means that if one of the $\boldsymbol{x}$ rows is fully
deterministic then the corresponding row of $\mathbf{u}$ would need to be
fixed. We can get around this, however. See section 7 on the modifications to
the update equation when some of the $\boldsymbol{x}$’s are fully
deterministic.
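Here is a minimal base-R sketch of update (95), assuming the smoothed states and the time-varying $\mbox{$\mathbf{D}$}_{t,u}$, $\mbox{$\mathbf{f}$}_{t,u}$, $\mbox{$\mathbf{B}$}_{t}$ and $\mathbb{Q}_{t}=\Phi_{t}^{\top}\mbox{$\mathbf{Q}$}_{t}^{-1}\Phi_{t}$ have already been computed. The argument names are hypothetical, not objects from the MARSS package.

```r
# A sketch of equation (95). Hypothetical inputs:
#   xtT  : list of smoothed states, xtT[[1]] = E[X_0], xtT[[t+1]] = E[X_t]
#   B    : list of the B_t matrices (m x m)
#   D_tu : list of the D_{t,u} design matrices (m x p)
#   f_tu : list of the f_{t,u} vectors (length m)
#   Qbb  : list of Qbb_t = Phi_t' Q_t^{-1} Phi_t (m x m)
update_upsilon <- function(xtT, B, D_tu, f_tu, Qbb, TT, m) {
  lhs <- 0  # running sum of D_{t,u}' Qbb_t D_{t,u}
  rhs <- 0  # running sum of D_{t,u}' Qbb_t (x~_t - (x~_{t-1}' %x% I_m) vec(B_t) - f_{t,u})
  for (t in 1:TT) {
    resid <- xtT[[t + 1]] -
      kronecker(t(xtT[[t]]), diag(m)) %*% as.vector(B[[t]]) -
      f_tu[[t]]
    lhs <- lhs + t(D_tu[[t]]) %*% Qbb[[t]] %*% D_tu[[t]]
    rhs <- rhs + t(D_tu[[t]]) %*% Qbb[[t]] %*% resid
  }
  solve(lhs) %*% rhs  # upsilon for EM iteration j+1
}
```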
### 5.2 The general $\mathbf{a}$ update equation
The derivation of the update equation for $\boldsymbol{\alpha}$ with fixed and
shared values is completely analogous to the derivation for
$\boldsymbol{\upsilon}$. We take the derivative of $\Psi$ with respect to
$\boldsymbol{\alpha}$ and arrive at the analogous:
$\begin{split}\boldsymbol{\alpha}_{j+1}&=\big{(}\sum_{t=1}^{T}\mbox{$\mathbf{D}$}_{t,a}^{\top}\mathbb{R}_{t}\mbox{$\mathbf{D}$}_{t,a}\big{)}^{-1}\sum_{t=1}^{T}\mbox{$\mathbf{D}$}_{t,a}^{\top}\mathbb{R}_{t}\big{(}\widetilde{\mbox{$\mathbf{y}$}}_{t}-(\widetilde{\mbox{$\mathbf{x}$}}_{t}^{\top}\otimes\mbox{$\mathbf{I}$}_{n})\,\textup{{vec}}(\mbox{$\mathbf{Z}$}_{t})-\mbox{$\mathbf{f}$}_{t,a}\big{)}\\\
&=\big{(}\sum_{t=1}^{T}\mbox{$\mathbf{D}$}_{t,a}^{\top}\mathbb{R}_{t}\mbox{$\mathbf{D}$}_{t,a}\big{)}^{-1}\sum_{t=1}^{T}\mbox{$\mathbf{D}$}_{t,a}^{\top}\mathbb{R}_{t}\big{(}\widetilde{\mbox{$\mathbf{y}$}}_{t}-\mbox{$\mathbf{Z}$}_{t}\widetilde{\mbox{$\mathbf{x}$}}_{t}-\mbox{$\mathbf{f}$}_{t,a}\big{)}\end{split}$
(96)
$\sum_{t=1}^{T}\mbox{$\mathbf{D}$}_{t,a}^{\top}\mathbb{R}_{t}\mbox{$\mathbf{D}$}_{t,a}$
must be invertible.
### 5.3 The general $\xi$ update equation, stochastic initial state
When $\mbox{$\boldsymbol{x}$}_{0}$ is treated as stochastic with an unknown
mean and known variance, the derivation of the update equation for $\xi$ with
fixed and shared values is as follows. Take the derivative of $\Psi$ (using
equation 90) with respect to $\mathbf{p}$:
$\partial\Psi/\partial\mbox{$\mathbf{p}$}=\big{(}\widetilde{\mbox{$\mathbf{x}$}}_{0}^{\top}\mathbb{L}-\mbox{\boldmath$\xi$}^{\top}\mathbb{L}\big{)}\mbox{$\mathbf{D}$}_{\xi}$
(97)
Replace $\xi$ with
$\mbox{$\mathbf{f}$}_{\xi}+\mbox{$\mathbf{D}$}_{\xi}\mbox{$\mathbf{p}$}$, set
the left side to zero and transpose:
$\mathbf{0}=\mbox{$\mathbf{D}$}_{\xi}^{\top}\big{(}\mathbb{L}\widetilde{\mbox{$\mathbf{x}$}}_{0}-\mathbb{L}\mbox{$\mathbf{f}$}_{\xi}-\mathbb{L}\mbox{$\mathbf{D}$}_{\xi}\mbox{$\mathbf{p}$}\big{)}$
(98)
Thus,
$\mbox{$\mathbf{p}$}_{j+1}=\big{(}\mbox{$\mathbf{D}$}_{\xi}^{\top}\mathbb{L}\mbox{$\mathbf{D}$}_{\xi}\big{)}^{-1}\mbox{$\mathbf{D}$}_{\xi}^{\top}\mathbb{L}(\widetilde{\mbox{$\mathbf{x}$}}_{0}-\mbox{$\mathbf{f}$}_{\xi})$
(99)
and the new $\xi$ is then,
$\mbox{\boldmath$\xi$}_{j+1}=\mbox{$\mathbf{f}$}_{\xi}+\mbox{$\mathbf{D}$}_{\xi}\mbox{$\mathbf{p}$}_{j+1},$
(100)
When the initial state is defined as at $t=1$, replace
$\widetilde{\mbox{$\mathbf{x}$}}_{0}$ with
$\widetilde{\mbox{$\mathbf{x}$}}_{1}$ in equation 99.
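A one-line sketch of updates (99) and (100) in base R, with $\mathbb{L}$ precomputed and `x0T` denoting the smoothed $\widetilde{\mbox{$\mathbf{x}$}}_{0}$ (the names are illustrative, not package objects):

```r
update_xi <- function(x0T, f_xi, D_xi, LL) {
  # equation (99): p_{j+1} = (D' L D)^{-1} D' L (x~_0 - f)
  p_new <- solve(t(D_xi) %*% LL %*% D_xi) %*% t(D_xi) %*% LL %*% (x0T - f_xi)
  f_xi + D_xi %*% p_new   # equation (100): the new xi
}
```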
### 5.4 The general $\xi$ update equation, fixed
$\mbox{$\boldsymbol{x}$}_{0}$
For this case, $\mbox{$\boldsymbol{x}$}_{0}$ is treated as fixed, i.e. as
another parameter, and $\Lambda$ does not appear in the equation. It will be
easier to work with $\Psi$ written as follows:
$\begin{split}&\,\textup{{E}}_{\text{{\bf
XY}}}[\log\mbox{$\mathbf{L}$}(\mbox{$\boldsymbol{Y}$},\mbox{$\boldsymbol{X}$};\Theta)]=-\frac{1}{2}\,\textup{{E}}_{\text{{\bf
XY}}}\bigg{(}\sum_{1}^{T}(\mbox{$\boldsymbol{Y}$}_{t}-\mbox{$\mathbf{Z}$}_{t}\mbox{$\boldsymbol{X}$}_{t}-\mbox{$\mathbf{a}$}_{t})^{\top}\mathbb{R}_{t}(\mbox{$\boldsymbol{Y}$}_{t}-\mbox{$\mathbf{Z}$}_{t}\mbox{$\boldsymbol{X}$}_{t}-\mbox{$\mathbf{a}$}_{t})+\sum_{1}^{T}\log|\mbox{$\mathbf{R}$}_{t}|\\\
&\quad+\sum_{1}^{T}(\mbox{$\boldsymbol{X}$}_{t}-\mbox{$\mathbf{B}$}_{t}\mbox{$\boldsymbol{X}$}_{t-1}-\mbox{$\mathbf{u}$}_{t})^{\top}\mathbb{Q}_{t}(\mbox{$\boldsymbol{X}$}_{t}-\mbox{$\mathbf{B}$}_{t}\mbox{$\boldsymbol{X}$}_{t-1}-\mbox{$\mathbf{u}$}_{t})+\sum_{1}^{T}\log|\mbox{$\mathbf{Q}$}_{t}|+\log
2\pi\bigg{)}\\\
&\quad\mbox{$\boldsymbol{x}$}_{0}\equiv\mbox{$\mathbf{f}$}_{\xi}+\mbox{$\mathbf{D}$}_{\xi}\mbox{$\mathbf{p}$}\end{split}$
(101)
This is the same as equation (90) except not written in vec form and $\Lambda$
does not appear. Take the derivative of $\Psi$ using equation (101). Terms not
involving $\mathbf{p}$ will drop out:
$\begin{split}&\partial\Psi/\partial\mbox{$\mathbf{p}$}=-\frac{1}{2}\bigg{(}-\,\textup{{E}}[\partial(\mathbb{P}_{1}^{\top}\mathbb{Q}_{1}\mbox{$\mathbf{B}$}_{1}\mbox{$\mathbf{D}$}_{\xi}\mbox{$\mathbf{p}$})/\partial\mbox{$\mathbf{p}$}]-\,\textup{{E}}[\partial(\mbox{$\mathbf{p}$}^{\top}(\mbox{$\mathbf{B}$}_{1}\mbox{$\mathbf{D}$}_{\xi})^{\top}\mathbb{Q}_{1}\mathbb{P}_{1})/\partial\mbox{$\mathbf{p}$}]\\\
&\quad+\,\textup{{E}}[\partial(\mbox{$\mathbf{p}$}^{\top}(\mbox{$\mathbf{B}$}_{1}\mbox{$\mathbf{D}$}_{\xi})^{\top}\mathbb{Q}_{1}\mbox{$\mathbf{B}$}_{1}\mbox{$\mathbf{D}$}_{\xi}\mbox{$\mathbf{p}$})/\partial\mbox{$\mathbf{p}$}]\bigg{)}\end{split}$
(102)
where
$\mathbb{P}_{1}=\mbox{$\boldsymbol{X}$}_{1}-\mbox{$\mathbf{B}$}_{1}\mbox{$\mathbf{f}$}_{\xi}-\mbox{$\mathbf{u}$}_{1}$
(103)
After pulling the constants out of the expectations and taking the derivative,
we arrive at:
$\begin{split}\partial\Psi/\partial\mbox{$\mathbf{p}$}=-\frac{1}{2}\bigg{(}-2\,\textup{{E}}[\mathbb{P}_{1}]^{\top}\mathbb{Q}_{1}\mbox{$\mathbf{B}$}_{1}\mbox{$\mathbf{D}$}_{\xi}+2\mbox{$\mathbf{p}$}^{\top}(\mbox{$\mathbf{B}$}_{1}\mbox{$\mathbf{D}$}_{\xi})^{\top}\mathbb{Q}_{1}\mbox{$\mathbf{B}$}_{1}\mbox{$\mathbf{D}$}_{\xi}\bigg{)}\end{split}$
(104)
Set the left side to zero, and solve for $\mathbf{p}$.
$\mbox{$\mathbf{p}$}=(\mbox{$\mathbf{D}$}_{\xi}^{\top}\mbox{$\mathbf{B}$}_{1}^{\top}\mathbb{Q}_{1}\mbox{$\mathbf{B}$}_{1}\mbox{$\mathbf{D}$}_{\xi})^{-1}\mbox{$\mathbf{D}$}_{\xi}^{\top}\mbox{$\mathbf{B}$}_{1}^{\top}\mathbb{Q}_{1}(\widetilde{\mbox{$\mathbf{x}$}}_{1}-\mbox{$\mathbf{B}$}_{1}\mbox{$\mathbf{f}$}_{\xi}-\mbox{$\mathbf{u}$}_{1})$
(105)
This equation requires that the inverse on the right side of the $=$ exists,
and it might not if $\mbox{$\mathbf{B}$}_{1}$ or $\mathbb{Q}_{1}$ has any all-zero
rows/columns. In that case, defining
$\mbox{\boldmath$\xi$}\equiv\mbox{$\boldsymbol{x}$}_{1}$ might work (section
5.5) or the problematic rows of $\xi$ could be fixed. The new $\xi$ is then,
$\mbox{\boldmath$\xi$}_{j+1}=\mbox{$\mathbf{f}$}_{\xi}+\mbox{$\mathbf{D}$}_{\xi}\mbox{$\mathbf{p}$}_{j+1},$
(106)
### 5.5 The general $\xi$ update equation, fixed
$\mbox{$\boldsymbol{x}$}_{1}$
When $\mbox{$\boldsymbol{x}$}_{1}$ is treated as fixed, i.e. as another
parameter, and $\Lambda$ does not appear, the expected log likelihood, $\Psi$,
is written as follows:
$\begin{split}&\,\textup{{E}}_{\text{{\bf
XY}}}[\log\mbox{$\mathbf{L}$}(\mbox{$\boldsymbol{Y}$},\mbox{$\boldsymbol{X}$};\Theta)]=-\frac{1}{2}\,\textup{{E}}_{\text{{\bf
XY}}}\bigg{(}\sum_{1}^{T}(\mbox{$\boldsymbol{Y}$}_{t}-\mbox{$\mathbf{Z}$}_{t}\mbox{$\boldsymbol{X}$}_{t}-\mbox{$\mathbf{a}$}_{t})^{\top}\mathbb{R}_{t}(\mbox{$\boldsymbol{Y}$}_{t}-\mbox{$\mathbf{Z}$}_{t}\mbox{$\boldsymbol{X}$}_{t}-\mbox{$\mathbf{a}$}_{t})+\sum_{1}^{T}\log|\mbox{$\mathbf{R}$}_{t}|\\\
&\quad+\sum_{2}^{T}(\mbox{$\boldsymbol{X}$}_{t}-\mbox{$\mathbf{B}$}_{t}\mbox{$\boldsymbol{X}$}_{t-1}-\mbox{$\mathbf{u}$}_{t})^{\top}\mathbb{Q}_{t}(\mbox{$\boldsymbol{X}$}_{t}-\mbox{$\mathbf{B}$}_{t}\mbox{$\boldsymbol{X}$}_{t-1}-\mbox{$\mathbf{u}$}_{t})+\sum_{2}^{T}\log|\mbox{$\mathbf{Q}$}_{t}|+\log
2\pi\bigg{)}\\\
&\quad\mbox{$\boldsymbol{x}$}_{1}\equiv\mbox{$\mathbf{f}$}_{\xi}+\mbox{$\mathbf{D}$}_{\xi}\mbox{$\mathbf{p}$}\end{split}$
(107)
Take the derivative of $\Psi$ using equation (107):
$\begin{split}&\partial\Psi/\partial\mbox{$\mathbf{p}$}=-\frac{1}{2}\bigg{(}-\,\textup{{E}}[\partial(\mathbb{O}_{1}^{\top}\mathbb{R}_{1}\mbox{$\mathbf{Z}$}_{1}\mbox{$\mathbf{D}$}_{\xi}\mbox{$\mathbf{p}$})/\partial\mbox{$\mathbf{p}$}]-\,\textup{{E}}[\partial((\mbox{$\mathbf{Z}$}_{1}\mbox{$\mathbf{D}$}_{\xi}\mbox{$\mathbf{p}$})^{\top}\mathbb{R}_{1}\mathbb{O}_{1})/\partial\mbox{$\mathbf{p}$}]\\\
&\quad+\,\textup{{E}}[\partial((\mbox{$\mathbf{Z}$}_{1}\mbox{$\mathbf{D}$}_{\xi}\mbox{$\mathbf{p}$})^{\top}\mathbb{R}_{1}\mbox{$\mathbf{Z}$}_{1}\mbox{$\mathbf{D}$}_{\xi}\mbox{$\mathbf{p}$})/\partial\mbox{$\mathbf{p}$}]-\,\textup{{E}}[\partial(\mathbb{P}_{2}^{\top}\mathbb{Q}_{2}\mbox{$\mathbf{B}$}_{2}\mbox{$\mathbf{D}$}_{\xi}\mbox{$\mathbf{p}$})/\partial\mbox{$\mathbf{p}$}]-\,\textup{{E}}[\partial((\mbox{$\mathbf{B}$}_{2}\mbox{$\mathbf{D}$}_{\xi}\mbox{$\mathbf{p}$})^{\top}\mathbb{Q}_{2}\mathbb{P}_{2})/\partial\mbox{$\mathbf{p}$}]\\\
&\quad+\,\textup{{E}}[\partial((\mbox{$\mathbf{B}$}_{2}\mbox{$\mathbf{D}$}_{\xi}\mbox{$\mathbf{p}$})^{\top}\mathbb{Q}_{2}\mbox{$\mathbf{B}$}_{2}\mbox{$\mathbf{D}$}_{\xi}\mbox{$\mathbf{p}$})/\partial\mbox{$\mathbf{p}$}]\bigg{)}\end{split}$
(108)
where
$\begin{split}\mathbb{P}_{2}&=\mbox{$\boldsymbol{X}$}_{2}-\mbox{$\mathbf{B}$}_{2}\mbox{$\mathbf{f}$}_{\xi}-\mbox{$\mathbf{u}$}_{2}\\\
\mathbb{O}_{1}&=\mbox{$\boldsymbol{Y}$}_{1}-\mbox{$\mathbf{Z}$}_{1}\mbox{$\mathbf{f}$}_{\xi}-\mbox{$\mathbf{a}$}_{1}\\\
\end{split}$ (109)
In terms of the Kalman smoother output the new $\xi$ for EM iteration $j+1$
when $\mbox{\boldmath$\xi$}\equiv\mbox{$\boldsymbol{x}$}_{1}$ is
$\begin{split}\mbox{$\mathbf{p}$}_{j+1}&=((\mbox{$\mathbf{Z}$}_{1}\mbox{$\mathbf{D}$}_{\xi})^{\top}\mathbb{R}_{1}\mbox{$\mathbf{Z}$}_{1}\mbox{$\mathbf{D}$}_{\xi}+(\mbox{$\mathbf{B}$}_{2}\mbox{$\mathbf{D}$}_{\xi})^{\top}\mathbb{Q}_{2}\mbox{$\mathbf{B}$}_{2}\mbox{$\mathbf{D}$}_{\xi})^{-1}((\mbox{$\mathbf{Z}$}_{1}\mbox{$\mathbf{D}$}_{\xi})^{\top}\mathbb{R}_{1}\widetilde{\mathbb{O}}_{1}+(\mbox{$\mathbf{B}$}_{2}\mbox{$\mathbf{D}$}_{\xi})^{\top}\mathbb{Q}_{2}\widetilde{\mathbb{P}}_{2})\end{split}$
(110)
where
$\begin{split}\widetilde{\mathbb{P}}_{2}&=\widetilde{\mbox{$\mathbf{x}$}}_{2}-\mbox{$\mathbf{B}$}_{2}\mbox{$\mathbf{f}$}_{\xi}-\mbox{$\mathbf{u}$}_{2}\\\
\widetilde{\mathbb{O}}_{1}&=\widetilde{\mbox{$\mathbf{y}$}}_{1}-\mbox{$\mathbf{Z}$}_{1}\mbox{$\mathbf{f}$}_{\xi}-\mbox{$\mathbf{a}$}_{1}\end{split}$
(111)
The new $\xi$ is
$\mbox{\boldmath$\xi$}_{j+1}=\mbox{$\mathbf{f}$}_{\xi}+\mbox{$\mathbf{D}$}_{\xi}\mbox{$\mathbf{p}$}_{j+1},$
(112)
### 5.6 The general $\mathbf{B}$ update equation
Take the derivative of $\Psi$ with respect to $\boldsymbol{\beta}$; terms in
$\Psi$ that do not involve $\boldsymbol{\beta}$ will equal 0 and drop out.
$\begin{split}\partial\Psi/\partial\boldsymbol{\beta}&=-\frac{1}{2}\sum_{t=1}^{T}\bigg{(}-\partial(\,\textup{{E}}[\mbox{$\boldsymbol{X}$}_{t}^{\top}\mathbb{Q}_{t}\mbox{\boldmath$\Upsilon$}_{t}\mbox{$\mathbf{D}$}_{t,b}\boldsymbol{\beta}])/\partial\boldsymbol{\beta}-\partial(\,\textup{{E}}[(\mbox{\boldmath$\Upsilon$}_{t}\mbox{$\mathbf{D}$}_{t,b}\boldsymbol{\beta})^{\top}\mathbb{Q}_{t}\mbox{$\boldsymbol{X}$}_{t}])/\partial\boldsymbol{\beta}\\\
&+\partial(\,\textup{{E}}[(\mbox{\boldmath$\Upsilon$}_{t}\mbox{$\mathbf{D}$}_{t,b}\boldsymbol{\beta})^{\top}\mathbb{Q}_{t}\mbox{\boldmath$\Upsilon$}_{t}\mbox{$\mathbf{D}$}_{t,b}\boldsymbol{\beta}])/\partial\boldsymbol{\beta}+\partial(\,\textup{{E}}[\mbox{$\mathbf{u}$}_{t}^{\top}\mathbb{Q}_{t}\mbox{\boldmath$\Upsilon$}_{t}\mbox{$\mathbf{D}$}_{t,b}\boldsymbol{\beta}])/\partial\boldsymbol{\beta}+\partial((\mbox{\boldmath$\Upsilon$}_{t}\mbox{$\mathbf{D}$}_{t,b}\boldsymbol{\beta})^{\top}\mathbb{Q}_{t}\mbox{$\mathbf{u}$}_{t})/\partial\boldsymbol{\beta}\\\
&+\partial(\,\textup{{E}}[(\mbox{\boldmath$\Upsilon$}_{t}\mbox{$\mathbf{f}$}_{t,b})^{\top}\mathbb{Q}_{t}\mbox{\boldmath$\Upsilon$}_{t}\mbox{$\mathbf{D}$}_{t,b}\boldsymbol{\beta}])/\partial\boldsymbol{\beta}+\partial(\,\textup{{E}}[(\mbox{\boldmath$\Upsilon$}_{t}\mbox{$\mathbf{D}$}_{t,b}\boldsymbol{\beta})^{\top}\mathbb{Q}_{t}\mbox{\boldmath$\Upsilon$}_{t}\mbox{$\mathbf{f}$}_{t,b}])/\partial\boldsymbol{\beta}\bigg{)}\end{split}$
(113)
where
$\mbox{\boldmath$\Upsilon$}_{t}=(\mbox{$\boldsymbol{X}$}_{t-1}^{\top}\otimes\mbox{$\mathbf{I}$}_{m})$
(114)
Since $\boldsymbol{\beta}$ is to the far left or right in each term, the
derivative is simple using the derivative terms in table 3.1.
$\partial\Psi/\partial\boldsymbol{\beta}$ becomes:
$\begin{split}\partial\Psi/\partial\boldsymbol{\beta}&=-\frac{1}{2}\sum_{t=1}^{T}\bigg{(}-2\,\textup{{E}}[\mbox{$\boldsymbol{X}$}_{t}^{\top}\mathbb{Q}_{t}\mbox{\boldmath$\Upsilon$}_{t}\mbox{$\mathbf{D}$}_{t,b}]+2(\boldsymbol{\beta}^{\top}\mbox{$\mathbf{D}$}_{t,b}^{\top}\mbox{\boldmath$\Upsilon$}_{t}^{\top}\mathbb{Q}_{t}\mbox{\boldmath$\Upsilon$}_{t}\mbox{$\mathbf{D}$}_{t,b})\\\
&+2\,\textup{{E}}[\mbox{$\mathbf{u}$}_{t}^{\top}\mathbb{Q}_{t}\mbox{\boldmath$\Upsilon$}_{t}\mbox{$\mathbf{D}$}_{t,b}]+2\,\textup{{E}}[(\mbox{\boldmath$\Upsilon$}_{t}\mbox{$\mathbf{f}$}_{t,b})^{\top}\mathbb{Q}_{t}\mbox{\boldmath$\Upsilon$}_{t}\mbox{$\mathbf{D}$}_{t,b}]\bigg{)}\end{split}$
(115)
Note that $\boldsymbol{X}$ appears in $\mbox{\boldmath$\Upsilon$}_{t}$ but not
in other terms. We need to keep track of where $\boldsymbol{X}$ appears so that
we keep the expectation brackets around any terms involving $\boldsymbol{X}$.
$\begin{split}\partial\Psi/\partial\boldsymbol{\beta}=\sum_{t=1}^{T}\bigg{(}\,\textup{{E}}[\mbox{$\boldsymbol{X}$}_{t}^{\top}\mathbb{Q}_{t}\mbox{\boldmath$\Upsilon$}_{t}]\mbox{$\mathbf{D}$}_{t,b}-\mbox{$\mathbf{u}$}_{t}^{\top}\mathbb{Q}_{t}\,\textup{{E}}[\mbox{\boldmath$\Upsilon$}_{t}]\mbox{$\mathbf{D}$}_{t,b}-\boldsymbol{\beta}^{\top}\mbox{$\mathbf{D}$}_{t,b}^{\top}\,\textup{{E}}[\mbox{\boldmath$\Upsilon$}_{t}^{\top}\mathbb{Q}_{t}\mbox{\boldmath$\Upsilon$}_{t}]\mbox{$\mathbf{D}$}_{t,b}-\mbox{$\mathbf{f}$}_{t,b}^{\top}\,\textup{{E}}[\mbox{\boldmath$\Upsilon$}_{t}^{\top}\mathbb{Q}_{t}\mbox{\boldmath$\Upsilon$}_{t}]\mbox{$\mathbf{D}$}_{t,b}\bigg{)}\end{split}$
(116)
Set the left side to zero and transpose the whole equation.
$\mathbf{0}=\sum_{t=1}^{T}\bigg{(}\mbox{$\mathbf{D}$}_{t,b}^{\top}\,\textup{{E}}[\mbox{\boldmath$\Upsilon$}_{t}^{\top}\mathbb{Q}_{t}\mbox{$\boldsymbol{X}$}_{t}]-\mbox{$\mathbf{D}$}_{t,b}^{\top}\,\textup{{E}}[\mbox{\boldmath$\Upsilon$}_{t}]^{\top}\mathbb{Q}_{t}\mbox{$\mathbf{u}$}_{t}-\mbox{$\mathbf{D}$}_{t,b}^{\top}\,\textup{{E}}[\mbox{\boldmath$\Upsilon$}_{t}^{\top}\mathbb{Q}_{t}\mbox{\boldmath$\Upsilon$}_{t}]\mbox{$\mathbf{f}$}_{t,b}-\mbox{$\mathbf{D}$}_{t,b}^{\top}\,\textup{{E}}[\mbox{\boldmath$\Upsilon$}_{t}^{\top}\mathbb{Q}_{t}\mbox{\boldmath$\Upsilon$}_{t}]\mbox{$\mathbf{D}$}_{t,b}\boldsymbol{\beta}\bigg{)}$
(117)
Thus,
$\big{(}\sum_{t=1}^{T}\mbox{$\mathbf{D}$}_{t,b}^{\top}\,\textup{{E}}[\mbox{\boldmath$\Upsilon$}_{t}^{\top}\mathbb{Q}_{t}\mbox{\boldmath$\Upsilon$}_{t}]\mbox{$\mathbf{D}$}_{t,b}\big{)}\boldsymbol{\beta}=\sum_{t=1}^{T}\mbox{$\mathbf{D}$}_{t,b}^{\top}\big{(}\,\textup{{E}}[\mbox{\boldmath$\Upsilon$}_{t}^{\top}\mathbb{Q}_{t}\mbox{$\boldsymbol{X}$}_{t}]-\,\textup{{E}}[\mbox{\boldmath$\Upsilon$}_{t}]^{\top}\mathbb{Q}_{t}\mbox{$\mathbf{u}$}_{t}-\,\textup{{E}}[\mbox{\boldmath$\Upsilon$}_{t}^{\top}\mathbb{Q}_{t}\mbox{\boldmath$\Upsilon$}_{t}]\mbox{$\mathbf{f}$}_{t,b}\big{)}$
(118)
Now we need to deal with the expectations.
$\begin{split}\,\textup{{E}}[\mbox{\boldmath$\Upsilon$}_{t}^{\top}\mathbb{Q}_{t}\mbox{\boldmath$\Upsilon$}_{t}]&=\,\textup{{E}}[(\mbox{$\boldsymbol{X}$}_{t-1}^{\top}\otimes\mbox{$\mathbf{I}$}_{m})^{\top}\mathbb{Q}_{t}(\mbox{$\boldsymbol{X}$}_{t-1}^{\top}\otimes\mbox{$\mathbf{I}$}_{m})]\\\
&=\,\textup{{E}}[(\mbox{$\boldsymbol{X}$}_{t-1}\otimes\mbox{$\mathbf{I}$}_{m})\mathbb{Q}_{t}(\mbox{$\boldsymbol{X}$}_{t-1}^{\top}\otimes\mbox{$\mathbf{I}$}_{m})]\\\
&=\,\textup{{E}}[\mbox{$\boldsymbol{X}$}_{t-1}\mbox{$\boldsymbol{X}$}_{t-1}^{\top}\otimes\mathbb{Q}_{t}]\\\
&=\,\textup{{E}}[\mbox{$\boldsymbol{X}$}_{t-1}\mbox{$\boldsymbol{X}$}_{t-1}^{\top}]\otimes\mathbb{Q}_{t}\\\
&=\widetilde{\mbox{$\mathbf{P}$}}_{t-1}\otimes\mathbb{Q}_{t}\end{split}$ (119)
$\begin{split}\,\textup{{E}}[\mbox{\boldmath$\Upsilon$}_{t}^{\top}\mathbb{Q}_{t}\mbox{$\boldsymbol{X}$}_{t}]&=\,\textup{{E}}[(\mbox{$\boldsymbol{X}$}_{t-1}^{\top}\otimes\mbox{$\mathbf{I}$}_{m})^{\top}\mathbb{Q}_{t}\mbox{$\boldsymbol{X}$}_{t}]\\\
&=\,\textup{{E}}[(\mbox{$\boldsymbol{X}$}_{t-1}\otimes\mbox{$\mathbf{I}$}_{m})\mathbb{Q}_{t}\mbox{$\boldsymbol{X}$}_{t}]\\\
&=\,\textup{{E}}[(\mbox{$\boldsymbol{X}$}_{t-1}\otimes\mathbb{Q}_{t})\mbox{$\boldsymbol{X}$}_{t}]\\\
&=\,\textup{{E}}[\,\textup{{vec}}(\mathbb{Q}_{t}\mbox{$\boldsymbol{X}$}_{t}\mbox{$\boldsymbol{X}$}_{t-1}^{\top})]\\\
&=\,\textup{{vec}}(\mathbb{Q}_{t}\widetilde{\mbox{$\mathbf{P}$}}_{t,t-1})\end{split}$
(120)
$\begin{split}\,\textup{{E}}[\mbox{\boldmath$\Upsilon$}_{t}]^{\top}\mathbb{Q}_{t}\mbox{$\mathbf{u}$}_{t}&=(\,\textup{{E}}[\mbox{$\boldsymbol{X}$}_{t-1}]\otimes\mbox{$\mathbf{I}$}_{m})\mathbb{Q}_{t}\mbox{$\mathbf{u}$}_{t}\\\
&=(\widetilde{\mbox{$\mathbf{x}$}}_{t-1}\otimes\mathbb{Q}_{t})\mbox{$\mathbf{u}$}_{t}\\\
&=\,\textup{{vec}}(\mathbb{Q}_{t}\mbox{$\mathbf{u}$}_{t}\widetilde{\mbox{$\mathbf{x}$}}_{t-1}^{\top})\end{split}$
(121)
Thus,
$\begin{split}\big{(}\sum_{t=1}^{T}&\mbox{$\mathbf{D}$}_{t,b}^{\top}(\widetilde{\mbox{$\mathbf{P}$}}_{t-1}\otimes\mathbb{Q}_{t})\mbox{$\mathbf{D}$}_{t,b}\big{)}\boldsymbol{\beta}=\sum_{t=1}^{T}\mbox{$\mathbf{D}$}_{t,b}^{\top}\big{(}\,\textup{{vec}}(\mathbb{Q}_{t}\widetilde{\mbox{$\mathbf{P}$}}_{t,t-1})-(\widetilde{\mbox{$\mathbf{P}$}}_{t-1}\otimes\mathbb{Q}_{t})\mbox{$\mathbf{f}$}_{t,b}-\,\textup{{vec}}(\mathbb{Q}_{t}\mbox{$\mathbf{u}$}_{t}\widetilde{\mbox{$\mathbf{x}$}}_{t-1}^{\top})\big{)}\end{split}$
(122)
Then $\boldsymbol{\beta}$ for the $j+1$ iteration of the EM algorithm is:
$\begin{split}\boldsymbol{\beta}_{j+1}=\bigg{(}\sum_{t=1}^{T}\mbox{$\mathbf{D}$}_{t,b}^{\top}(\widetilde{\mbox{$\mathbf{P}$}}_{t-1}\otimes\mathbb{Q}_{t})\mbox{$\mathbf{D}$}_{t,b}\bigg{)}^{-1}\times\sum_{t=1}^{T}\mbox{$\mathbf{D}$}_{t,b}^{\top}\big{(}\,\textup{{vec}}(\mathbb{Q}_{t}\widetilde{\mbox{$\mathbf{P}$}}_{t,t-1})-(\widetilde{\mbox{$\mathbf{P}$}}_{t-1}\otimes\mathbb{Q}_{t})\mbox{$\mathbf{f}$}_{t,b}-\,\textup{{vec}}(\mathbb{Q}_{t}\mbox{$\mathbf{u}$}_{t}\widetilde{\mbox{$\mathbf{x}$}}_{t-1}^{\top})\big{)}\end{split}$
(123)
This requires that
$\mbox{$\mathbf{D}$}_{t,b}^{\top}(\widetilde{\mbox{$\mathbf{P}$}}_{t-1}\otimes\mathbb{Q}_{t})\mbox{$\mathbf{D}$}_{t,b}$
is invertible, and as usual we will run into trouble if
$\Phi_{t}\mbox{$\mathbf{Q}$}_{t}\Phi_{t}^{\top}$ has zeros on the diagonal.
See section 7.
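A sketch of update (123) in base R, assuming the smoother outputs $\widetilde{\mbox{$\mathbf{P}$}}_{t-1}$, $\widetilde{\mbox{$\mathbf{P}$}}_{t,t-1}$ and $\widetilde{\mbox{$\mathbf{x}$}}_{t-1}$ are available as lists (again, hypothetical argument names rather than MARSS package objects):

```r
# Ptm1[[t]]  = P~_{t-1},  Pttm1[[t]] = P~_{t,t-1},  xtm1[[t]] = x~_{t-1}
# u[[t]] = u_t,  f_tb[[t]] = f_{t,b},  D_tb[[t]] = D_{t,b},  Qbb[[t]] = Qbb_t
update_beta <- function(Ptm1, Pttm1, xtm1, u, f_tb, D_tb, Qbb, TT) {
  lhs <- 0; rhs <- 0
  for (t in 1:TT) {
    W <- kronecker(Ptm1[[t]], Qbb[[t]])                  # P~_{t-1} %x% Qbb_t (eq. 119)
    lhs <- lhs + t(D_tb[[t]]) %*% W %*% D_tb[[t]]
    rhs <- rhs + t(D_tb[[t]]) %*%
      (as.vector(Qbb[[t]] %*% Pttm1[[t]]) -               # vec(Qbb_t P~_{t,t-1})
       W %*% f_tb[[t]] -                                  # (P~_{t-1} %x% Qbb_t) f_{t,b}
       as.vector(Qbb[[t]] %*% u[[t]] %*% t(xtm1[[t]])))   # vec(Qbb_t u_t x~_{t-1}')
  }
  solve(lhs) %*% rhs   # beta for EM iteration j+1 (equation 123)
}
```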
### 5.7 The general $\mathbf{Z}$ update equation
The derivation of the update equation for $\boldsymbol{\zeta}$ with fixed and
shared values is analogous to the derivation for $\boldsymbol{\beta}$. The
update equation for $\boldsymbol{\zeta}$ is
$\begin{split}\boldsymbol{\zeta}_{j+1}=\big{(}\sum_{t=1}^{T}\mbox{$\mathbf{D}$}_{t,z}^{\top}(\widetilde{\mbox{$\mathbf{P}$}}_{t}\otimes\mathbb{R}_{t})\mbox{$\mathbf{D}$}_{t,z}\big{)}^{-1}\sum_{t=1}^{T}\mbox{$\mathbf{D}$}_{t,z}^{\top}\big{(}\,\textup{{vec}}(\mathbb{R}_{t}\widetilde{\mbox{$\mathbf{y}\mathbf{x}$}}_{t})-(\widetilde{\mbox{$\mathbf{P}$}}_{t}\otimes\mathbb{R}_{t})\mbox{$\mathbf{f}$}_{t,z}-\,\textup{{vec}}(\mathbb{R}_{t}\mbox{$\mathbf{a}$}_{t}\widetilde{\mbox{$\mathbf{x}$}}_{t}^{\top})\big{)}\end{split}$
(124)
This requires that
$\mbox{$\mathbf{D}$}_{t,z}^{\top}(\widetilde{\mbox{$\mathbf{P}$}}_{t}\otimes\mathbb{R}_{t})\mbox{$\mathbf{D}$}_{t,z}$
is invertible. If $\Xi_{t}\mbox{$\mathbf{R}$}_{t}\Xi_{t}^{\top}$ has zeros on
the diagonal, this will not be the case. See section 7.
### 5.8 The general $\mathbf{Q}$ update equation
A general analytical solution for $\mathbf{Q}$ is problematic because the
inverse of $\mbox{$\mathbf{Q}$}_{t}$ appears in the likelihood and
$\mbox{$\mathbf{Q}$}_{t}^{-1}$ cannot always be rewritten as a function of
$\,\textup{{vec}}(\mbox{$\mathbf{Q}$}_{t})$. However, in a few important, and
quite broad, special cases an analytical solution can be derived. The
most general of these special cases is a block-symmetric matrix with optional
independent fixed blocks (subsection 5.8.5). Indeed, all other cases
(diagonal, block-diagonal, unconstrained, equal variance-covariance) except
one (a replicated block-diagonal) are special cases of the blocked matrix with
optional independent fixed blocks.
Unlike the other parameters, I need to put constraints on $\mathbf{f}$ and
$\mathbf{D}$. I constrain $\mathbf{D}$ to be a design matrix. It has only 1s
and 0s, and the row sums are either 1 or 0. Thus terms like $q_{1}+q_{2}$ are
not allowed. A non-zero value in $\mathbf{f}$ is only allowed if the
corresponding row in $\mathbf{D}$ is all zero. Thus elements like
$f_{1}+q_{1}$ are not allowed in $\mathbf{Q}$. These constraints, especially
the constraint that $\mathbf{D}$ only has 0s and 1s, might be loosened, but
with the addition of $\mbox{$\mathbf{G}$}_{t}$, we still have a very wide
class of $\mathbf{Q}$ matrices.
The general update equation for $\mathbf{Q}$ with these constraints is
$\begin{split}\mbox{$\mathbf{q}$}_{j+1}&=\big{(}\sum_{t=1}^{T}(\mbox{$\mathbf{D}$}_{t,q}^{\top}\mbox{$\mathbf{D}$}_{t,q})\big{)}^{-1}\sum_{t=1}^{T}\mbox{$\mathbf{D}$}_{t,q}^{\top}\,\textup{{vec}}(\SS_{t})\\\
\text{where
}\SS_{t}&=\Phi_{t}\big{(}\widetilde{\mbox{$\mathbf{P}$}}_{t}-\widetilde{\mbox{$\mathbf{P}$}}_{t,t-1}\mbox{$\mathbf{B}$}_{t}^{\top}-\mbox{$\mathbf{B}$}_{t}\widetilde{\mbox{$\mathbf{P}$}}_{t-1,t}-\widetilde{\mbox{$\mathbf{x}$}}_{t}\mbox{$\mathbf{u}$}_{t}^{\top}-\mbox{$\mathbf{u}$}_{t}\widetilde{\mbox{$\mathbf{x}$}}_{t}^{\top}+\\\
&\quad\mbox{$\mathbf{B}$}_{t}\widetilde{\mbox{$\mathbf{P}$}}_{t-1}\mbox{$\mathbf{B}$}_{t}^{\top}+\mbox{$\mathbf{B}$}_{t}\widetilde{\mbox{$\mathbf{x}$}}_{t-1}\mbox{$\mathbf{u}$}_{t}^{\top}+\mbox{$\mathbf{u}$}_{t}\widetilde{\mbox{$\mathbf{x}$}}_{t-1}^{\top}\mbox{$\mathbf{B}$}_{t}^{\top}+\mbox{$\mathbf{u}$}_{t}\mbox{$\mathbf{u}$}_{t}^{\top}\big{)}\Phi_{t}^{\top}\\\
\,\textup{{vec}}(\mbox{$\mathbf{Q}$}_{t})&=\mbox{$\mathbf{f}$}_{t,q}+\mbox{$\mathbf{D}$}_{t,q}\mbox{$\mathbf{q}$}\\\
\text{where}\\\
\Phi_{t}=(\mbox{$\mathbf{G}$}_{t}^{\top}\mbox{$\mathbf{G}$}_{t})^{-1}\mbox{$\mathbf{G}$}_{t}^{\top}\end{split}$
(125)
The vec of $\mbox{$\mathbf{Q}$}_{t}$ is written in the form of
$\,\textup{{vec}}(\mbox{$\mathbf{Q}$}_{t})=\mbox{$\mathbf{f}$}_{t,q}+\mbox{$\mathbf{D}$}_{t,q}\boldsymbol{q}$,
where $\mbox{$\mathbf{f}$}_{t,q}$ is a $p^{2}\times 1$ column vector of the
fixed values including zero, $\mbox{$\mathbf{D}$}_{t,q}$ is the $p^{2}\times
s$ design matrix, and $\boldsymbol{q}$ is a column vector of the $s$ free
values in $\mbox{$\mathbf{Q}$}_{t}$. This requires that
$(\mbox{$\mathbf{D}$}_{t,q}^{\top}\mbox{$\mathbf{D}$}_{t,q})$ be invertible,
which must be true in a valid model; if it is not true, you have specified an
invalid variance-covariance structure, since the implied variance-covariance
matrix would not be full rank and therefore not invertible.
Below I show how the $\mathbf{Q}$ update equation arises by working through a
few of the special cases. In these derivations the $q$ subscript is left off
the $\mathbf{D}$ and $\mathbf{f}$ matrices.
#### 5.8.1 Special case: diagonal $\mathbf{Q}$ matrix (with shared or unique
parameters)
Let $\mathbf{Q}$ be a non-time varying diagonal matrix with fixed and shared
values such that it takes a form like so:
$\mbox{$\mathbf{Q}$}=\begin{bmatrix}q_{1}&0&0&0&0\\\ 0&f_{1}&0&0&0\\\
0&0&q_{2}&0&0\\\ 0&0&0&f_{2}&0\\\ 0&0&0&0&q_{2}\end{bmatrix}$
Here, $f$’s are fixed values (constants) and $q$’s are free parameter
elements. The $f$ and $q$ do not occur together; i.e. there are no terms like
$f_{1}+q_{1}$.
The vec of $\mbox{$\mathbf{Q}$}^{-1}$ can be written then as
$\,\textup{{vec}}(\mbox{$\mathbf{Q}$}^{-1})=\mbox{$\mathbf{f}$}^{*}_{q}+\mbox{$\mathbf{D}$}_{q}\boldsymbol{q^{*}}$,
where $\mbox{$\mathbf{f}$}^{*}$ is like $\mbox{$\mathbf{f}$}_{q}$ but with the
corresponding $i$-th non-zero fixed values replaced by $1/f_{i}$ and
$\boldsymbol{q^{*}}$ is a column vector of 1 over the $q_{i}$ values. For the
example above,
$\boldsymbol{q^{*}}=\begin{bmatrix}1/q_{1}\\\ 1/q_{2}\end{bmatrix}$
Take the partial derivative of $\Psi$ with respect to $\boldsymbol{q^{*}}$. We
can do this because $\mbox{$\mathbf{Q}$}^{-1}$ is diagonal and thus each
element of $\boldsymbol{q^{*}}$ is independent of the other elements;
otherwise we would not necessarily be able to vary one element of
$\boldsymbol{q^{*}}$ while holding the other elements constant.
$\begin{split}&\partial\Psi/\partial\boldsymbol{q^{*}}=-\frac{1}{2}\sum_{t=1}^{T}\partial\bigg{(}\,\textup{{E}}[\mbox{$\boldsymbol{X}$}_{t}^{\top}\Phi_{t}^{\top}\mbox{$\mathbf{Q}$}^{-1}\Phi_{t}\mbox{$\boldsymbol{X}$}_{t}]-\,\textup{{E}}[\mbox{$\boldsymbol{X}$}_{t}^{\top}\Phi_{t}^{\top}\mbox{$\mathbf{Q}$}^{-1}\Phi_{t}\mbox{$\mathbf{B}$}_{t}\mbox{$\boldsymbol{X}$}_{t-1}]\\\
&\quad-\,\textup{{E}}[(\mbox{$\mathbf{B}$}_{t}\mbox{$\boldsymbol{X}$}_{t-1})^{\top}\Phi_{t}^{\top}\mbox{$\mathbf{Q}$}^{-1}\Phi_{t}\mbox{$\boldsymbol{X}$}_{t}]-\,\textup{{E}}[\mbox{$\boldsymbol{X}$}_{t}^{\top}\Phi_{t}^{\top}\mbox{$\mathbf{Q}$}^{-1}\Phi_{t}\mbox{$\mathbf{u}$}_{t}]\\\
&\quad-\,\textup{{E}}[\mbox{$\mathbf{u}$}_{t}^{\top}\Phi_{t}^{\top}\mbox{$\mathbf{Q}$}^{-1}\Phi_{t}\mbox{$\boldsymbol{X}$}_{t}]+\,\textup{{E}}[(\mbox{$\mathbf{B}$}_{t}\mbox{$\boldsymbol{X}$}_{t-1})^{\top}\Phi_{t}^{\top}\mbox{$\mathbf{Q}$}^{-1}\Phi_{t}\mbox{$\mathbf{B}$}_{t}\mbox{$\boldsymbol{X}$}_{t-1}]\\\
&\quad+\,\textup{{E}}[(\mbox{$\mathbf{B}$}_{t}\mbox{$\boldsymbol{X}$}_{t-1})^{\top}\Phi_{t}^{\top}\mbox{$\mathbf{Q}$}^{-1}\Phi_{t}\mbox{$\mathbf{u}$}_{t}]+\,\textup{{E}}[\mbox{$\mathbf{u}$}_{t}^{\top}\Phi_{t}^{\top}\mbox{$\mathbf{Q}$}^{-1}\Phi_{t}\mbox{$\mathbf{B}$}_{t}\mbox{$\boldsymbol{X}$}_{t-1}]+\mbox{$\mathbf{u}$}_{t}^{\top}\Phi_{t}^{\top}\mbox{$\mathbf{Q}$}^{-1}\Phi_{t}\mbox{$\mathbf{u}$}_{t}\bigg{)}/\partial\boldsymbol{q^{*}}\\\
&-\partial\big{(}\frac{T}{2}\log|\mbox{$\mathbf{Q}$}|\big{)}/\partial\boldsymbol{q^{*}}\\\
\end{split}$ (126)
Using the same vec operations as in the derivations for $\mathbf{B}$ and
$\mathbf{Z}$, pull $\mbox{$\mathbf{Q}$}^{-1}$ out from the middle and replace
the expectations with the Kalman smoother output. (Another, more common, way
to do this is to use a “trace trick”,
$\,\textup{{trace}}(\mbox{$\mathbf{a}$}^{\top}\mbox{$\mathbf{A}$}\mbox{$\mathbf{b}$})=\,\textup{{trace}}(\mbox{$\mathbf{A}$}\mbox{$\mathbf{b}$}\mbox{$\mathbf{a}$}^{\top})$,
to pull $\mbox{$\mathbf{Q}$}^{-1}$ out.)
$\begin{split}&\partial\Psi/\partial\boldsymbol{q^{*}}=-\frac{1}{2}\sum_{t=1}^{T}\partial\bigg{(}\,\textup{{E}}[\mbox{$\boldsymbol{X}$}_{t}^{\top}\otimes\mbox{$\boldsymbol{X}$}_{t}^{\top}]-\,\textup{{E}}[\mbox{$\boldsymbol{X}$}_{t}^{\top}\otimes(\mbox{$\mathbf{B}$}_{t}\mbox{$\boldsymbol{X}$}_{t-1})^{\top}]-\,\textup{{E}}[(\mbox{$\mathbf{B}$}_{t}\mbox{$\boldsymbol{X}$}_{t-1})^{\top}\otimes\mbox{$\boldsymbol{X}$}_{t}^{\top}]\\\
&\quad-\,\textup{{E}}[\mbox{$\boldsymbol{X}$}_{t}^{\top}\otimes\mbox{$\mathbf{u}$}_{t}^{\top}]-\,\textup{{E}}[\mbox{$\mathbf{u}$}_{t}^{\top}\otimes\mbox{$\boldsymbol{X}$}_{t}^{\top}]+\,\textup{{E}}[(\mbox{$\mathbf{B}$}_{t}\mbox{$\boldsymbol{X}$}_{t-1})^{\top}\otimes(\mbox{$\mathbf{B}$}_{t}\mbox{$\boldsymbol{X}$}_{t-1})^{\top}]\\\
&\quad+\,\textup{{E}}[(\mbox{$\mathbf{B}$}_{t}\mbox{$\boldsymbol{X}$}_{t-1})^{\top}\otimes\mbox{$\mathbf{u}$}_{t}^{\top}]+\,\textup{{E}}[\mbox{$\mathbf{u}$}_{t}^{\top}\otimes(\mbox{$\mathbf{B}$}_{t}\mbox{$\boldsymbol{X}$}_{t-1})^{\top}]+(\mbox{$\mathbf{u}$}_{t}^{\top}\otimes\mbox{$\mathbf{u}$}_{t}^{\top})\bigg{)}(\Phi_{t}\otimes\Phi_{t})^{\top}\,\textup{{vec}}(\mbox{$\mathbf{Q}$}^{-1})/\partial\boldsymbol{q^{*}}\\\
&-\partial\bigg{(}\frac{T}{2}\log|\mbox{$\mathbf{Q}$}|\bigg{)}/\partial\boldsymbol{q^{*}}\\\
&\quad=-\frac{1}{2}\sum_{t=1}^{T}\,\textup{{vec}}(\SS_{t})^{\top}\partial\big{(}\,\textup{{vec}}(\mbox{$\mathbf{Q}$}^{-1})\big{)}/\partial\boldsymbol{q^{*}}+\partial\big{(}\frac{T}{2}\log|\mbox{$\mathbf{Q}$}^{-1}|\big{)}/\partial\boldsymbol{q^{*}}\\\
&\text{where }\\\
&\SS_{t}=\Phi_{t}\big{(}\widetilde{\mbox{$\mathbf{P}$}}_{t}-\widetilde{\mbox{$\mathbf{P}$}}_{t,t-1}\mbox{$\mathbf{B}$}_{t}^{\top}-\mbox{$\mathbf{B}$}_{t}\widetilde{\mbox{$\mathbf{P}$}}_{t-1,t}-\widetilde{\mbox{$\mathbf{x}$}}_{t}\mbox{$\mathbf{u}$}_{t}^{\top}-\mbox{$\mathbf{u}$}_{t}\widetilde{\mbox{$\mathbf{x}$}}_{t}^{\top}+\\\
&\quad\mbox{$\mathbf{B}$}_{t}\widetilde{\mbox{$\mathbf{P}$}}_{t-1}\mbox{$\mathbf{B}$}_{t}^{\top}+\mbox{$\mathbf{B}$}_{t}\widetilde{\mbox{$\mathbf{x}$}}_{t-1}\mbox{$\mathbf{u}$}_{t}^{\top}+\mbox{$\mathbf{u}$}_{t}\widetilde{\mbox{$\mathbf{x}$}}_{t-1}^{\top}\mbox{$\mathbf{B}$}_{t}^{\top}+\mbox{$\mathbf{u}$}_{t}\mbox{$\mathbf{u}$}_{t}^{\top}\big{)}\Phi_{t}^{\top}\end{split}$
(127)
This reduction used
$(\Phi_{t}\otimes\Phi_{t})(\mbox{$\boldsymbol{X}$}\otimes\mbox{$\boldsymbol{X}$})=(\Phi_{t}\mbox{$\boldsymbol{X}$}\otimes\Phi_{t}\mbox{$\boldsymbol{X}$})=\,\textup{{vec}}(\Phi_{t}\mbox{$\boldsymbol{X}$}\mbox{$\boldsymbol{X}$}^{\top}\Phi_{t}^{\top}).$
I also replaced $\log|\mbox{$\mathbf{Q}$}|$ with
$-\log|\mbox{$\mathbf{Q}$}^{-1}|$; the determinant of a diagonal matrix is the
product of its diagonal elements. Thus,
$\begin{split}&\partial\Psi/\partial\boldsymbol{q^{*}}=-\bigg{(}\frac{1}{2}\sum_{t=1}^{T}\,\textup{{vec}}(\SS_{t})^{\top}(\mbox{$\mathbf{f}$}^{*}+\mbox{$\mathbf{D}$}_{q}\boldsymbol{q^{*}})\\\
&\quad-\frac{1}{2}\sum_{t=1}^{T}(\log(f^{*}_{1})+\log(f^{*}_{2})...k\log(q^{*}_{1})+l\log(q^{*}_{2})...)\bigg{)}/\partial\boldsymbol{q^{*}}\\\
\end{split}$ (128)
where $k$ is the number of times $q_{1}$ appears on the diagonal of
$\mathbf{Q}$ and $l$ is the number of times $q_{2}$ appears, etc. Taking the
derivatives,
$\begin{split}&\partial\Psi/\partial\boldsymbol{q^{*}}=\frac{1}{2}\sum_{t=1}^{T}\mbox{$\mathbf{D}$}_{q}^{\top}\,\textup{{vec}}(\SS_{t})-\frac{1}{2}\sum_{t=1}^{T}\partial\big{(}\log(f^{*}_{1})+...+k\log(q^{*}_{1})+l\log(q^{*}_{2})...\big{)}/\partial\boldsymbol{q^{*}}\\\
&\quad=\frac{1}{2}\sum_{t=1}^{T}\mbox{$\mathbf{D}$}_{q}^{\top}\,\textup{{vec}}(\SS_{t})-\frac{1}{2}\sum_{t=1}^{T}\mbox{$\mathbf{D}$}_{q}^{\top}\mbox{$\mathbf{D}$}_{q}\boldsymbol{q}\end{split}$
(129)
$\mbox{$\mathbf{D}$}_{q}^{\top}\mbox{$\mathbf{D}$}_{q}$ is an $s\times s$
matrix with $k$, $l$, etc. along the diagonal and thus is invertible; as
usual, $s$ is the number of free elements in $\mathbf{Q}$. Set the left side
to zero (a $1\times s$ matrix of zeros) and solve for $\boldsymbol{q}$. This
gives us the update equation for $\mathbf{q}$ and $\mathbf{Q}$:
$\begin{split}\mbox{$\mathbf{q}$}_{j+1}&=\big{(}\sum_{t=1}^{T}\mbox{$\mathbf{D}$}_{q}^{\top}\mbox{$\mathbf{D}$}_{q}\big{)}^{-1}\sum_{t=1}^{T}\mbox{$\mathbf{D}$}_{q}^{\top}\,\textup{{vec}}(\SS_{t})\\\
\,\textup{{vec}}(\mbox{$\mathbf{Q}$})_{j+1}&=\mbox{$\mathbf{f}$}+\mbox{$\mathbf{D}$}_{q}\boldsymbol{q}_{j+1}\end{split}$
(130)
Since in this example, $\mbox{$\mathbf{D}$}_{q}$ is time-constant, this
reduces to
$\boldsymbol{q}_{j+1}=\frac{1}{T}(\mbox{$\mathbf{D}$}_{q}^{\top}\mbox{$\mathbf{D}$}_{q})^{-1}\mbox{$\mathbf{D}$}_{q}^{\top}\sum_{t=1}^{T}\,\textup{{vec}}(\SS_{t})$
$\SS_{t}$ is defined in equation (127).
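A sketch of the diagonal-$\mathbf{Q}$ update (130) in base R: given the $\SS_{t}$ matrices of equation (127) as a list and a time-constant design matrix $\mbox{$\mathbf{D}$}_{q}$, the update is essentially an average (illustrative code, not the package implementation):

```r
update_q_diag <- function(S_list, D_q, f_q) {
  TT <- length(S_list)
  sum_vecS <- Reduce(`+`, lapply(S_list, as.vector))              # sum_t vec(S_t)
  q_new <- solve(t(D_q) %*% D_q) %*% (t(D_q) %*% sum_vecS) / TT   # eq. (130), constant D_q
  list(q = q_new, vecQ = f_q + D_q %*% q_new)                     # vec(Q) for the next iteration
}
```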
#### 5.8.2 Special case: $\mathbf{Q}$ with one variance and one covariance
$\mbox{$\mathbf{Q}$}=\begin{bmatrix}\alpha&\beta&\beta&\beta\\\
\beta&\alpha&\beta&\beta\\\ \beta&\beta&\alpha&\beta\\\
\beta&\beta&\beta&\alpha\end{bmatrix}\quad\quad\mbox{$\mathbf{Q}$}^{-1}=\begin{bmatrix}f(\alpha,\beta)&g(\alpha,\beta)&g(\alpha,\beta)&g(\alpha,\beta)\\\
g(\alpha,\beta)&f(\alpha,\beta)&g(\alpha,\beta)&g(\alpha,\beta)\\\
g(\alpha,\beta)&g(\alpha,\beta)&f(\alpha,\beta)&g(\alpha,\beta)\\\
g(\alpha,\beta)&g(\alpha,\beta)&g(\alpha,\beta)&f(\alpha,\beta)\end{bmatrix}$
This is a matrix with a single shared variance parameter on the diagonal and a
single shared covariance on the off-diagonals. The derivation is the same as
for the diagonal case, until the step involving the differentiation of
$\log|\mbox{$\mathbf{Q}$}^{-1}|$:
$\begin{split}&\partial\Psi/\partial\boldsymbol{q^{*}}=\partial\bigg{(}-\frac{1}{2}\sum_{t=1}^{T}\big{(}\,\textup{{vec}}(\SS_{t})^{\top}\big{)}\,\textup{{vec}}(\mbox{$\mathbf{Q}$}^{-1})+\frac{T}{2}\log|\mbox{$\mathbf{Q}$}^{-1}|\bigg{)}/\partial\boldsymbol{q^{*}}\\\
\end{split}$ (131)
It does not make sense to take the partial derivative of
$\log|\mbox{$\mathbf{Q}$}^{-1}|$ with respect to
$\,\textup{{vec}}(\mbox{$\mathbf{Q}$}^{-1})$ because many elements of
$\mbox{$\mathbf{Q}$}^{-1}$ are shared so it is not possible to fix one element
while varying another. Instead, we can take the partial derivative of
$\log|\mbox{$\mathbf{Q}$}^{-1}|$ with respect to $g(\alpha,\beta)$, which is
$\sum_{\{i,j\}\in\text{set}_{g}}\partial\log|\mbox{$\mathbf{Q}$}^{-1}|/\partial\boldsymbol{q^{*}}_{i,j}$.
Set $g$ is the set of $i,j$ values where $\boldsymbol{q^{*}}_{i,j}=g(\alpha,\beta)$.
Because $g()$ and $f()$ are different functions of both $\alpha$ and $\beta$,
we can hold one constant while taking the partial derivative with respect to
the other (well, presuming there exists some combination of $\alpha$ and
$\beta$ that would allow that). But if we have fixed values on the off-
diagonal, this would not be possible. In this case (see below), we cannot hold
$g()$ constant while varying $f()$ because both are only functions of
$\alpha$:
$\mbox{$\mathbf{Q}$}=\begin{bmatrix}\alpha&f&f&f\\\ f&\alpha&f&f\\\
f&f&\alpha&f\\\
f&f&f&\alpha\end{bmatrix}\quad\quad\mbox{$\mathbf{Q}$}^{-1}=\begin{bmatrix}f(\alpha)&g(\alpha)&g(\alpha)&g(\alpha)\\\
g(\alpha)&f(\alpha)&g(\alpha)&g(\alpha)\\\
g(\alpha)&g(\alpha)&f(\alpha)&g(\alpha)\\\
g(\alpha)&g(\alpha)&g(\alpha)&f(\alpha)\end{bmatrix}$
Taking the partial derivative of $\log|\mbox{$\mathbf{Q}$}^{-1}|$ with respect
to $\boldsymbol{q^{*}}=\big{[}\begin{smallmatrix}f(\alpha,\beta)\\\
g(\alpha,\beta)\end{smallmatrix}\big{]}$, we arrive at the same equation as
for the diagonal matrix:
$\begin{split}&\partial\Psi/\partial\boldsymbol{q^{*}}=\frac{1}{2}\sum_{t=1}^{T}\mbox{$\mathbf{D}$}^{\top}\,\textup{{vec}}(\SS_{t})-\frac{1}{2}\sum_{t=1}^{T}(\mbox{$\mathbf{D}$}^{\top}\mbox{$\mathbf{D}$})\boldsymbol{q}\end{split}$
(132)
where here $\mbox{$\mathbf{D}$}^{\top}\mbox{$\mathbf{D}$}$ is a $2\times 2$
diagonal matrix with the number of times $f(\alpha,\beta)$ appears in
$\,\textup{{vec}}(\mbox{$\mathbf{Q}$}^{-1})$ in element $(1,1)$ and the number of times
$g(\alpha,\beta)$ appears in element $(2,2)$; $s=2$ here since there are only 2 free
parameters in $\mathbf{Q}$.
Setting to zero and solving for $\boldsymbol{q^{*}}$ leads to the exact same
update equation as for the diagonal $\mathbf{Q}$, namely equation (130) in
which $\mbox{$\mathbf{f}$}_{q}=0$ since there are no fixed values.
#### 5.8.3 Special case: a block-diagonal matrix with replicated blocks
Because these operations extend directly to block-diagonal matrices, all
results for individual matrix types can be extended to a block-diagonal matrix
with those types:
$\mbox{$\mathbf{Q}$}=\begin{bmatrix}\mathbb{B}_{1}&0&0\\\
0&\mathbb{B}_{2}&0\\\ 0&0&\mathbb{B}_{3}\\\ \end{bmatrix}$
where $\mathbb{B}_{i}$ is a matrix from any of the allowed matrix types, such
as unconstrained, diagonal (with fixed or shared elements), or equal variance-
covariance. Blocks can also be shared:
$\mbox{$\mathbf{Q}$}=\begin{bmatrix}\mathbb{B}_{1}&0&0\\\
0&\mathbb{B}_{2}&0\\\ 0&0&\mathbb{B}_{2}\\\ \end{bmatrix}$
but the entire block must be identical $(\mathbb{B}_{2}\equiv\mathbb{B}_{3})$;
one cannot simply share individual elements in different blocks. Either all
the elements in two (or 3, or 4…) blocks are shared or none are shared.
This is ok:
$\begin{bmatrix}c&d&d&0&0&0\\\ d&c&d&0&0&0\\\ d&d&c&0&0&0\\\ 0&0&0&c&d&d\\\
0&0&0&d&c&d\\\ 0&0&0&d&d&c\\\ \end{bmatrix}$
This is not ok:
$\begin{bmatrix}c&d&d&0&0\\\ d&c&d&0&0\\\ d&d&c&0&0\\\ 0&0&0&c&d\\\
0&0&0&d&c\end{bmatrix}\text{ nor }\begin{bmatrix}c&d&d&0&0&0\\\ d&c&d&0&0&0\\\
d&d&c&0&0&0\\\ 0&0&0&c&e&e\\\ 0&0&0&e&c&e\\\ 0&0&0&e&e&c\\\ \end{bmatrix}$
The first is bad because the blocks are not identical; they need the same
dimensions as well as the same values. The second is bad because again the
blocks are not identical; all values must be the same.
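For example, the allowed matrix above (two identical equal variance-covariance blocks) can be constructed by replicating a single block; a minimal numpy sketch with hypothetical values for $c$ and $d$:

```python
import numpy as np

c, d = 1.0, 0.2
B = d * np.ones((3, 3)) + (c - d) * np.eye(3)   # equal variance-covariance block
Q = np.kron(np.eye(2), B)                        # two identical copies of B on the diagonal
```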
#### 5.8.4 Special case: a symmetric blocked matrix
The same derivation translates immediately to blocked symmetric $\mathbf{Q}$
matrices with the following form:
$\mbox{$\mathbf{Q}$}=\begin{bmatrix}\mathbb{E}_{1}&\mathbb{C}_{1,2}&\mathbb{C}_{1,3}\\\
\mathbb{C}_{1,2}&\mathbb{E}_{2}&\mathbb{C}_{2,3}\\\
\mathbb{C}_{1,3}&\mathbb{C}_{2,3}&\mathbb{E}_{3}\\\ \end{bmatrix}$
where the $\mathbb{E}$ are, as above, matrices with one value on the diagonal
and another on the off-diagonals (no zeros!). The $\mathbb{C}$ matrices have
only one free value or are all zero. Some $\mathbb{C}$ matrices can be zero
while others are non-zero, but an individual $\mathbb{C}$ matrix cannot mix
free values and zero values; it must be one or the other. There can also be
shared $\mathbb{E}$ or $\mathbb{C}$ matrices, but the whole matrix must stay
block-symmetric. Here are the forms that $\mathbb{E}$ and $\mathbb{C}$
can take:
$\mathbb{E}_{i}=\begin{bmatrix}\alpha&\beta&\beta&\beta\\\
\beta&\alpha&\beta&\beta\\\ \beta&\beta&\alpha&\beta\\\
\beta&\beta&\beta&\alpha\end{bmatrix}\quad\quad\mathbb{C}_{i}=\begin{bmatrix}\chi&\chi&\chi&\chi\\\
\chi&\chi&\chi&\chi\\\ \chi&\chi&\chi&\chi\\\
\chi&\chi&\chi&\chi\end{bmatrix}\text{ or }\begin{bmatrix}0&0&0&0\\\
0&0&0&0\\\ 0&0&0&0\\\ 0&0&0&0\end{bmatrix}$
The following are block-symmetric:
$\begin{bmatrix}\mathbb{E}_{1}&\mathbb{C}_{1,2}&\mathbb{C}_{1,3}\\\
\mathbb{C}_{1,2}&\mathbb{E}_{2}&\mathbb{C}_{2,3}\\\
\mathbb{C}_{1,3}&\mathbb{C}_{2,3}&\mathbb{E}_{3}\\\ \end{bmatrix}\text{ and
}\begin{bmatrix}\mathbb{E}&\mathbb{C}&\mathbb{C}\\\
\mathbb{C}&\mathbb{E}&\mathbb{C}\\\ \mathbb{C}&\mathbb{C}&\mathbb{E}\\\
\end{bmatrix}$ $\text{ and
}\begin{bmatrix}\mathbb{E}_{1}&\mathbb{C}_{1}&\mathbb{C}_{1,2}\\\
\mathbb{C}_{1}&\mathbb{E}_{1}&\mathbb{C}_{1,2}\\\
\mathbb{C}_{1,2}&\mathbb{C}_{1,2}&\mathbb{E}_{2}\\\ \end{bmatrix}$
The following are NOT block-symmetric:
$\begin{bmatrix}\mathbb{E}_{1}&\mathbb{C}_{1,2}&0\\\
\mathbb{C}_{1,2}&\mathbb{E}_{2}&\mathbb{C}_{2,3}\\\
0&\mathbb{C}_{2,3}&\mathbb{E}_{3}\end{bmatrix}\text{ and
}\begin{bmatrix}\mathbb{E}_{1}&0&\mathbb{C}_{1}\\\
0&\mathbb{E}_{1}&\mathbb{C}_{2}\\\
\mathbb{C}_{1}&\mathbb{C}_{2}&\mathbb{E}_{2}\end{bmatrix}\text{ and
}\begin{bmatrix}\mathbb{E}_{1}&0&\mathbb{C}_{1,2}\\\
0&\mathbb{E}_{1}&\mathbb{C}_{1,2}\\\
\mathbb{C}_{1,2}&\mathbb{C}_{1,2}&\mathbb{E}_{2}\\\ \end{bmatrix}$ $\text{ and
}\begin{bmatrix}\mathbb{U}_{1}&\mathbb{C}_{1,2}&\mathbb{C}_{1,3}\\\
\mathbb{C}_{1,2}&\mathbb{E}_{2}&\mathbb{C}_{2,3}\\\
\mathbb{C}_{1,3}&\mathbb{C}_{2,3}&\mathbb{E}_{3}\end{bmatrix}\text{ and
}\begin{bmatrix}\mathbb{D}_{1}&\mathbb{C}_{1,2}&\mathbb{C}_{1,3}\\\
\mathbb{C}_{1,2}&\mathbb{E}_{2}&\mathbb{C}_{2,3}\\\
\mathbb{C}_{1,3}&\mathbb{C}_{2,3}&\mathbb{E}_{3}\end{bmatrix}$
In the first row, the matrices have fixed values (zeros) and free values
(covariances) on the same off-diagonal row and column. That is not allowed. If
there is a zero on a row or column, all other terms on the off-diagonal row
and column must also be zero. In the second row, the matrices are not block-
symmetric since the upper corner is an unconstrained block ($\mathbb{U}_{1}$)
in the left matrix and a diagonal block ($\mathbb{D}_{1}$) in the right matrix
instead of an equal variance-covariance matrix ($\mathbb{E}$).
#### 5.8.5 The general case: a block-diagonal matrix with general blocks
In its most general form, $\mathbf{Q}$ is allowed to have a block-diagonal
form where the blocks, here called $\mathbb{G}$, are any of the previously
allowed cases. No values may be shared across $\mathbb{G}$’s; shared values are
allowed within $\mathbb{G}$’s.
$\mbox{$\mathbf{Q}$}=\begin{bmatrix}\mathbb{G}_{1}&0&0\\\
0&\mathbb{G}_{2}&0\\\ 0&0&\mathbb{G}_{3}\\\ \end{bmatrix}$
The $\mathbb{G}$’s must be one of the special cases listed above:
unconstrained, diagonal (with fixed or shared values), equal variance-
covariance, block diagonal (with shared or unshared blocks), and block-
symmetric (with shared or unshared blocks). Fixed blocks are allowed, but then
the covariances with the free blocks must be zero:
$\mbox{$\mathbf{Q}$}=\begin{bmatrix}\mathbb{F}&0&0&0\\\
0&\mathbb{G}_{1}&0&0\\\ 0&0&\mathbb{G}_{2}&0\\\
0&0&0&\mathbb{G}_{3}\end{bmatrix}$
Fixed blocks must have only fixed values (zero is a fixed value) but the fixed
values can be different from each other. The free blocks must have only free
values (zero is not a free value).
### 5.9 The general $\mathbf{R}$ update equation
The $\mathbf{R}$ update equation for blocked symmetric matrices with optional
independent fixed blocks is completely analogous to the $\mathbf{Q}$ equation.
Thus if $\mathbf{R}$ has the form
$\mbox{$\mathbf{R}$}=\begin{bmatrix}\mathbb{F}&0&0&0\\\
0&\mathbb{G}_{1}&0&0\\\ 0&0&\mathbb{G}_{2}&0\\\
0&0&0&\mathbb{G}_{3}\end{bmatrix}$
Again the $\mathbb{G}$’s must be one of the special cases listed above:
unconstrained, diagonal (with fixed or shared values), equal variance-
covariance, block diagonal (with shared or unshared blocks), and block-
symmetric (with shared or unshared blocks). Fixed blocks are allowed, but then
the covariances with the free blocks must be zero. Elements like $f_{i}+r_{j}$
and $r_{i}+r_{j}$ are not allowed in $\mathbf{R}$. Only elements of the form
$f_{i}$ and $r_{i}$ are allowed. If an element has a fixed component, it must
be completely fixed. Each element in $\mathbf{R}$ can have only one of the
elements in $\mathbf{r}$, but multiple elements in $\mathbf{R}$ can have the
same $\mathbf{r}$ element.
The update equation is
$\begin{split}&\mbox{$\mathbf{r}$}_{j+1}=\bigg{(}\sum_{t=1}^{T}\mbox{$\mathbf{D}$}_{t,r}^{\top}\mbox{$\mathbf{D}$}_{t,r}\bigg{)}^{-1}\sum_{t=1}^{T}\mbox{$\mathbf{D}$}_{t,r}^{\top}\,\textup{{vec}}\big{(}\mbox{$\mathbf{R}$}_{t,{j+1}}\big{)}\\\
&\quad\quad\quad\quad\,\textup{{vec}}(\mbox{$\mathbf{R}$})_{t,j+1}=\mbox{$\mathbf{f}$}_{t,r}+\mbox{$\mathbf{D}$}_{t,r}\mbox{$\mathbf{r}$}_{j+1}\end{split}$
(133)
The $\mbox{$\mathbf{R}$}_{t,j+1}$ used at time step $t$ in equation (133) is
the term that appears in the summation in the unconstrained update equation
with no missing values (equation 53):
$\begin{split}\mbox{$\mathbf{R}$}_{t,j+1}=\Xi_{t}\bigg{(}\widetilde{\mbox{$\mathbf{O}$}}_{t}-\widetilde{\mbox{$\mathbf{y}\mathbf{x}$}}_{t}\mbox{$\mathbf{Z}$}_{t}^{\top}-\mbox{$\mathbf{Z}$}_{t}\widetilde{\mbox{$\mathbf{y}\mathbf{x}$}}_{t}^{\top}-\widetilde{\mbox{$\mathbf{y}$}}_{t}\mbox{$\mathbf{a}$}_{t}^{\top}-\mbox{$\mathbf{a}$}_{t}\widetilde{\mbox{$\mathbf{y}$}}_{t}^{\top}+\mbox{$\mathbf{Z}$}_{t}\widetilde{\mbox{$\mathbf{P}$}}_{t}\mbox{$\mathbf{Z}$}_{t}^{\top}+\mbox{$\mathbf{Z}$}_{t}\widetilde{\mbox{$\mathbf{x}$}}_{t}\mbox{$\mathbf{a}$}_{t}^{\top}+\mbox{$\mathbf{a}$}_{t}\widetilde{\mbox{$\mathbf{x}$}}_{t}^{\top}\mbox{$\mathbf{Z}$}_{t}^{\top}+\mbox{$\mathbf{a}$}_{t}\mbox{$\mathbf{a}$}_{t}^{\top}\bigg{)}\Xi_{t}^{\top}\end{split}$
(134)
where
$\Xi_{t}=(\mbox{$\mathbf{H}$}_{t}^{\top}\mbox{$\mathbf{H}$}_{t})^{-1}\mbox{$\mathbf{H}$}_{t}^{\top}$.
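As a small illustration, for an $\mbox{$\mathbf{H}$}_{t}$ with full column rank, $\Xi_{t}$ is simply the left pseudo-inverse of $\mbox{$\mathbf{H}$}_{t}$; a numpy sketch with a hypothetical $\mbox{$\mathbf{H}$}_{t}$:

```python
import numpy as np

H_t = np.array([[1.0, 0.0],
                [0.0, 1.0],
                [0.0, 0.0]])                    # hypothetical H_t with full column rank
Xi_t = np.linalg.solve(H_t.T @ H_t, H_t.T)      # (H_t' H_t)^{-1} H_t'
assert np.allclose(Xi_t @ H_t, np.eye(2))       # Xi_t is a left inverse of H_t
```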
## 6 Computing the expectations in the update equations
For the update equations, we need to compute the expectations of
$\mbox{$\boldsymbol{X}$}_{t}$ and $\mbox{$\boldsymbol{Y}$}_{t}$ and their
products conditioned on 1) the observed data
$\mbox{$\boldsymbol{Y}$}(1)=\mbox{$\boldsymbol{y}$}(1)$ and 2) the parameters
at the $j$-th EM iteration, $\Theta_{j}$. This section shows how to compute these
expectations. Throughout the section, I will normally leave off the
conditional $\mbox{$\boldsymbol{Y}$}(1)=\mbox{$\boldsymbol{y}$}(1),\Theta_{j}$
when specifying an expectation. Thus any $\,\textup{{E}}[]$ appearing without
its conditional is conditioned on
$\mbox{$\boldsymbol{Y}$}(1)=\mbox{$\boldsymbol{y}$}(1),\Theta_{j}$. However if
there are additional or different conditions those will be shown. Also all
expectations are over the joint distribution of $XY$ unless explicitly
specified otherwise.
Before commencing, we need some notation for the observed and unobserved
elements of the data. The $n\times 1$ vector $\mbox{$\boldsymbol{y}$}_{t}$
denotes the potential observations at time $t$. If some elements of
$\mbox{$\boldsymbol{y}$}_{t}$ are missing, those elements are equal
to NA (or some other missing-value marker):
$\mbox{$\boldsymbol{y}$}_{t}=\begin{bmatrix}y_{1}\\\ NA\\\ y_{3}\\\ y_{4}\\\
NA\\\ y_{6}\end{bmatrix}$ (135)
We denote the non-missing observations as $\mbox{$\boldsymbol{y}$}_{t}(1)$ and
the missing observations as $\mbox{$\boldsymbol{y}$}_{t}(2)$. Similar to
$\mbox{$\boldsymbol{y}$}_{t}$, $\mbox{$\boldsymbol{Y}$}_{t}$ denotes all the
$\boldsymbol{Y}$ random variables at time $t$. The
$\mbox{$\boldsymbol{Y}$}_{t}$’s with an observation are
$\mbox{$\boldsymbol{Y}$}_{t}(1)$ and those without an observation are denoted
$\mbox{$\boldsymbol{Y}$}_{t}(2)$.
Let $\mbox{\boldmath$\Omega$}_{t}^{(1)}$ be the matrix that extracts only
$\mbox{$\boldsymbol{Y}$}_{t}(1)$ from $\mbox{$\boldsymbol{Y}$}_{t}$ and
$\mbox{\boldmath$\Omega$}_{t}^{(2)}$ be the matrix that extracts only
$\mbox{$\boldsymbol{Y}$}_{t}(2)$. For the example above,
$\begin{split}&\mbox{$\boldsymbol{Y}$}_{t}(1)=\mbox{\boldmath$\Omega$}_{t}^{(1)}\mbox{$\boldsymbol{Y}$}_{t},\quad\mbox{\boldmath$\Omega$}_{t}^{(1)}=\begin{bmatrix}1&0&0&0&0&0\\\
0&0&1&0&0&0\\\ 0&0&0&1&0&0\\\ 0&0&0&0&0&1\\\ \end{bmatrix}\\\
&\mbox{$\boldsymbol{Y}$}_{t}(2)=\mbox{\boldmath$\Omega$}_{t}^{(2)}\mbox{$\boldsymbol{Y}$}_{t},\quad\mbox{\boldmath$\Omega$}_{t}^{(2)}=\begin{bmatrix}0&1&0&0&0&0\\\
0&0&0&0&1&0\end{bmatrix}\end{split}$ (136)
We will define another set of matrices that zero out the missing or non-
missing values. Let $\mbox{$\mathbf{I}$}_{t}^{(1)}$ denote a diagonal matrix
that zeros out the $\mbox{$\boldsymbol{Y}$}_{t}(2)$ in
$\mbox{$\boldsymbol{Y}$}_{t}$ and $\mbox{$\mathbf{I}$}_{t}^{(2)}$ denote a
matrix that zeros out the $\mbox{$\boldsymbol{Y}$}_{t}(1)$ in
$\mbox{$\boldsymbol{Y}$}_{t}$. For the example above,
$\begin{split}\mbox{$\mathbf{I}$}_{t}^{(1)}&=(\mbox{\boldmath$\Omega$}_{t}^{(1)})^{\top}\mbox{\boldmath$\Omega$}_{t}^{(1)}=\begin{bmatrix}1&0&0&0&0&0\\\
0&0&0&0&0&0\\\ 0&0&1&0&0&0\\\ 0&0&0&1&0&0\\\ 0&0&0&0&0&0\\\ 0&0&0&0&0&1\\\
\end{bmatrix}\quad\text{and}\\\
\mbox{$\mathbf{I}$}_{t}^{(2)}&=(\mbox{\boldmath$\Omega$}_{t}^{(2)})^{\top}\mbox{\boldmath$\Omega$}_{t}^{(2)}=\begin{bmatrix}0&0&0&0&0&0\\\
0&1&0&0&0&0\\\ 0&0&0&0&0&0\\\ 0&0&0&0&0&0\\\ 0&0&0&0&1&0\\\ 0&0&0&0&0&0\\\
\end{bmatrix}.\end{split}$ (137)
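In code, the matrices of equations (136) and (137) follow directly from the missingness pattern of $\mbox{$\boldsymbol{y}$}_{t}$; a minimal numpy sketch (hypothetical data, NA marked with NaN):

```python
import numpy as np

y_t = np.array([1.2, np.nan, 0.7, -0.3, np.nan, 2.1])   # elements 2 and 5 missing, as in eq. (135)
n = len(y_t)
obs = ~np.isnan(y_t)                                     # True where y_t is observed

Omega1 = np.eye(n)[obs]       # extracts y_t(1), equation (136)
Omega2 = np.eye(n)[~obs]      # extracts y_t(2)
I1 = Omega1.T @ Omega1        # zeros out the missing rows, equation (137)
I2 = Omega2.T @ Omega2        # zeros out the observed rows

assert np.allclose(I1 + I2, np.eye(n))
print(Omega1 @ np.nan_to_num(y_t))   # the non-missing observations y_t(1)
```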
### 6.1 Expectations involving only $\mbox{$\boldsymbol{X}$}_{t}$
The Kalman smoother provides the expectations involving only
$\mbox{$\boldsymbol{X}$}_{t}$ conditioned on all the data from time 1 to $T$.
$\displaystyle\widetilde{\mbox{$\mathbf{x}$}}_{t}=\,\textup{{E}}[\mbox{$\boldsymbol{X}$}_{t}]$
(138a)
$\displaystyle\widetilde{\mbox{$\mathbf{V}$}}_{t}=\,\textup{{var}}[\mbox{$\boldsymbol{X}$}_{t}]$
(138b)
$\displaystyle\widetilde{\mbox{$\mathbf{V}$}}_{t,t-1}=\,\textup{{cov}}[\mbox{$\boldsymbol{X}$}_{t},\mbox{$\boldsymbol{X}$}_{t-1}]$
(138c) From $\widetilde{\mbox{$\mathbf{x}$}}_{t}$,
$\widetilde{\mbox{$\mathbf{V}$}}_{t}$, and
$\widetilde{\mbox{$\mathbf{V}$}}_{t,t-1}$, we compute
$\displaystyle\widetilde{\mbox{$\mathbf{P}$}}_{t}=\,\textup{{E}}[\mbox{$\boldsymbol{X}$}_{t}\mbox{$\boldsymbol{X}$}_{t}^{\top}]=\widetilde{\mbox{$\mathbf{V}$}}_{t}+\widetilde{\mbox{$\mathbf{x}$}}_{t}\widetilde{\mbox{$\mathbf{x}$}}_{t}^{\top}$
(138d)
$\displaystyle\widetilde{\mbox{$\mathbf{P}$}}_{t,t-1}=\,\textup{{E}}[\mbox{$\boldsymbol{X}$}_{t}\mbox{$\boldsymbol{X}$}_{t-1}^{\top}]=\widetilde{\mbox{$\mathbf{V}$}}_{t,t-1}+\widetilde{\mbox{$\mathbf{x}$}}_{t}\widetilde{\mbox{$\mathbf{x}$}}_{t-1}^{\top}$
(138e)
The $\widetilde{\mbox{$\mathbf{P}$}}_{t}$ and
$\widetilde{\mbox{$\mathbf{P}$}}_{t,t-1}$ equations arise from the
computational formula for variance (equation 12). Note the smoother is
different from the Kalman filter: the filter does not provide the
expectations of $\mbox{$\boldsymbol{X}$}_{t}$ conditioned on all the data
(time 1 to $T$) but only on the data up to time $t$.
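A numpy sketch of equations (138d) and (138e), with hypothetical smoother output `x_sm` ($\widetilde{\mbox{$\mathbf{x}$}}_{t}$), `V_sm` ($\widetilde{\mbox{$\mathbf{V}$}}_{t}$) and `V_lag1` ($\widetilde{\mbox{$\mathbf{V}$}}_{t,t-1}$):

```python
import numpy as np

m, T = 2, 10
rng = np.random.default_rng(1)
x_sm = rng.normal(size=(m, T))                       # hypothetical smoothed states
V_sm = np.stack([0.1 * np.eye(m)] * T, axis=2)       # hypothetical smoothed variances
V_lag1 = np.stack([0.05 * np.eye(m)] * T, axis=2)    # hypothetical lag-1 covariances

P = np.zeros((m, m, T))
P_lag1 = np.zeros((m, m, T))
for t in range(T):
    P[:, :, t] = V_sm[:, :, t] + np.outer(x_sm[:, t], x_sm[:, t])            # (138d)
for t in range(1, T):  # the t=0 term involves the initial state and is skipped here
    P_lag1[:, :, t] = V_lag1[:, :, t] + np.outer(x_sm[:, t], x_sm[:, t - 1])  # (138e)
```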
The first part of the Kalman smoother algorithm is the Kalman filter, which
gives the expectation at time $t$ conditioned on the data up to time $t$. The
following is the filter as shown in Shumway and Stoffer (2006, sec. 6.2, p.
331), although the notation is a little different.
$\displaystyle\mbox{$\boldsymbol{x}$}_{t}^{t-1}=\mbox{$\mathbf{B}$}_{t}\mbox{$\boldsymbol{x}$}_{t-1}^{t-1}+\mbox{$\mathbf{u}$}_{t}$
(139a)
$\displaystyle\mbox{$\mathbf{V}$}_{t}^{t-1}=\mbox{$\mathbf{B}$}_{t}\mbox{$\mathbf{V}$}_{t-1}^{t-1}\mbox{$\mathbf{B}$}_{t}^{\top}+\mbox{$\mathbf{G}$}_{t}\mbox{$\mathbf{Q}$}_{t}\mbox{$\mathbf{G}$}_{t}^{\top}$
(139b)
$\displaystyle\mbox{$\boldsymbol{x}$}_{t}^{t}=\mbox{$\boldsymbol{x}$}_{t}^{t-1}+\mbox{$\mathbf{K}$}_{t}(\mbox{$\boldsymbol{y}$}_{t}-\mbox{$\mathbf{Z}$}_{t}\mbox{$\boldsymbol{x}$}_{t}^{t-1}-\mbox{$\mathbf{a}$}_{t})$
(139c)
$\displaystyle\mbox{$\mathbf{V}$}_{t}^{t}=(\mbox{$\mathbf{I}$}_{m}-\mbox{$\mathbf{K}$}_{t}\mbox{$\mathbf{Z}$}_{t})\mbox{$\mathbf{V}$}_{t}^{t-1}$
(139d)
$\displaystyle\mbox{$\mathbf{K}$}_{t}=\mbox{$\mathbf{V}$}_{t}^{t-1}\mbox{$\mathbf{Z}$}_{t}^{\top}(\mbox{$\mathbf{Z}$}_{t}\mbox{$\mathbf{V}$}_{t}^{t-1}\mbox{$\mathbf{Z}$}_{t}^{\top}+\mbox{$\mathbf{H}$}_{t}\mbox{$\mathbf{R}$}_{t}\mbox{$\mathbf{H}$}_{t}^{\top})^{-1}$
(139e)
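For reference, one forward step of equations (139a)-(139e) written out in numpy (a sketch, not the MARSS implementation; it assumes no missing data, which are handled by the modifications given later in this section, equation 144):

```python
import numpy as np

def kalman_filter_step(x_prev, V_prev, y_t, B, u, Q, Z, a, R, G, H):
    """One step of equations (139a)-(139e); x_prev, V_prev are x_{t-1}^{t-1}, V_{t-1}^{t-1}."""
    x_pred = B @ x_prev + u                               # (139a)
    V_pred = B @ V_prev @ B.T + G @ Q @ G.T               # (139b)
    S = Z @ V_pred @ Z.T + H @ R @ H.T
    K = V_pred @ Z.T @ np.linalg.inv(S)                   # (139e)
    x_filt = x_pred + K @ (y_t - Z @ x_pred - a)          # (139c)
    V_filt = (np.eye(len(x_prev)) - K @ Z) @ V_pred       # (139d)
    return x_filt, V_filt, x_pred, V_pred, K
```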
The Kalman smoother and lag-1 covariance smoother compute the expectations
conditioned on all the data, 1 to $T$:
$\displaystyle\mbox{$\boldsymbol{x}$}_{t-1}^{T}=\mbox{$\boldsymbol{x}$}_{t-1}^{t-1}+\mbox{$\mathbf{J}$}_{t-1}(\mbox{$\boldsymbol{x}$}_{t}^{T}-\mbox{$\boldsymbol{x}$}_{t}^{t-1})$
(140a)
$\displaystyle\mbox{$\mathbf{V}$}_{t-1}^{T}=\mbox{$\mathbf{V}$}_{t-1}^{t-1}+\mbox{$\mathbf{J}$}_{t-1}(\mbox{$\mathbf{V}$}_{t}^{T}-\mbox{$\mathbf{V}$}_{t}^{t-1})\mbox{$\mathbf{J}$}_{t-1}^{\top}$
(140b)
$\displaystyle\mbox{$\mathbf{J}$}_{t-1}=\mbox{$\mathbf{V}$}_{t-1}^{t-1}\mbox{$\mathbf{B}$}_{t}^{\top}(\mbox{$\mathbf{V}$}_{t}^{t-1})^{-1}$
(140c)
$\displaystyle\mbox{$\mathbf{V}$}_{T,T-1}^{T}=(\mbox{$\mathbf{I}$}-\mbox{$\mathbf{K}$}_{T}\mbox{$\mathbf{Z}$}_{T})\mbox{$\mathbf{B}$}_{T}\mbox{$\mathbf{V}$}_{T-1}^{T-1}$
(140e)
$\displaystyle\mbox{$\mathbf{V}$}_{t-1,t-2}^{T}=\mbox{$\mathbf{V}$}_{t-1}^{t-1}\mbox{$\mathbf{J}$}_{t-2}^{\top}+\mbox{$\mathbf{J}$}_{t-1}(\mbox{$\mathbf{V}$}_{t,t-1}^{T}-\mbox{$\mathbf{B}$}_{t}\mbox{$\mathbf{V}$}_{t-1}^{t-1})\mbox{$\mathbf{J}$}_{t-2}^{\top}$
(140f)
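And the corresponding backward step of equations (140a)-(140c), again as a plain numpy sketch operating on the filter output:

```python
import numpy as np

def smoother_step(x_filt_prev, V_filt_prev, x_pred, V_pred, x_sm_next, V_sm_next, B):
    """One backward step of equations (140a)-(140c).

    x_filt_prev, V_filt_prev : x_{t-1}^{t-1}, V_{t-1}^{t-1} from the filter
    x_pred, V_pred           : x_t^{t-1}, V_t^{t-1} from the filter
    x_sm_next, V_sm_next     : x_t^T, V_t^T from the previous backward step
    """
    J_prev = V_filt_prev @ B.T @ np.linalg.inv(V_pred)             # (140c)
    x_sm = x_filt_prev + J_prev @ (x_sm_next - x_pred)             # (140a)
    V_sm = V_filt_prev + J_prev @ (V_sm_next - V_pred) @ J_prev.T  # (140b)
    return x_sm, V_sm, J_prev
```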
The classic Kalman smoother is an algorithm to compute these expectations
conditioned on no missing values in $\boldsymbol{y}$. However, the algorithm
can be easily modified to give the expected values of $\boldsymbol{X}$
conditioned on the incomplete data,
$\mbox{$\boldsymbol{Y}$}(1)=\mbox{$\boldsymbol{y}$}(1)$ (Shumway and Stoffer,
2006, sec. 6.4, eqn 6.78, p. 348). In this case, the usual filter and smoother
equations are used with the following modifications to the parameters and data
used in the equations. If the $i$-th element of $\mbox{$\boldsymbol{y}$}_{t}$
is missing, zero out the $i$-th row in $\mbox{$\boldsymbol{y}$}_{t}$,
$\mathbf{a}$ and $\mathbf{Z}$. Thus if the 2nd and 5th elements of
$\mbox{$\boldsymbol{y}$}_{t}$ are missing,
$\mbox{$\boldsymbol{y}$}_{t}^{*}=\begin{bmatrix}y_{1}\\\ 0\\\ y_{3}\\\
y_{4}\\\ 0\\\ y_{6}\\\
\end{bmatrix},\quad\mbox{$\mathbf{a}$}_{t}^{*}=\begin{bmatrix}a_{1}\\\ 0\\\
a_{3}\\\ a_{4}\\\ 0\\\ a_{6}\\\
\end{bmatrix},\quad\mbox{$\mathbf{Z}$}_{t}^{*}=\begin{bmatrix}z_{1,1}&z_{1,2}&...\\\
0&0&...\\\ z_{3,1}&z_{3,2}&...\\\ z_{4,1}&z_{4,2}&...\\\ 0&0&...\\\
z_{6,1}&z_{6,2}&...\\\ \end{bmatrix}$ (141)
The $\mbox{$\mathbf{R}$}_{t}$ parameter used in the filter equations is also
modified. We need to zero out the covariances between the non-missing,
$\mbox{$\boldsymbol{y}$}_{t}(1)$, and missing,
$\mbox{$\boldsymbol{y}$}_{t}(2)$, data. For the example above, if
$\mbox{$\mathbf{R}$}_{t}=\mbox{$\mathbf{H}$}_{t}\mbox{$\mathbf{R}$}\mbox{$\mathbf{H}$}_{t}^{\top}=\begin{bmatrix}r_{1,1}&r_{1,2}&r_{1,3}&r_{1,4}&r_{1,5}&r_{1,6}\\\
r_{2,1}&r_{2,2}&r_{2,3}&r_{2,4}&r_{2,5}&r_{2,6}\\\
r_{3,1}&r_{3,2}&r_{3,3}&r_{3,4}&r_{3,5}&r_{3,6}\\\
r_{4,1}&r_{4,2}&r_{4,3}&r_{4,4}&r_{4,5}&r_{4,6}\\\
r_{5,1}&r_{5,2}&r_{5,3}&r_{5,4}&r_{5,5}&r_{5,6}\\\
r_{6,1}&r_{6,2}&r_{6,3}&r_{6,4}&r_{6,5}&r_{6,6}\\\ \end{bmatrix}$ (142)
then the $\mbox{$\mathbf{R}$}_{t}$ we use at time $t$ will have zero
covariances between the non-missing elements 1,3,4,6 and the missing elements
2,5:
$\mbox{$\mathbf{R}$}_{t}^{*}=\begin{bmatrix}r_{1,1}&0&r_{1,3}&r_{1,4}&0&r_{1,6}\\\
0&r_{2,2}&0&0&r_{2,5}&0\\\ r_{3,1}&0&r_{3,3}&r_{3,4}&0&r_{3,6}\\\
r_{4,1}&0&r_{4,3}&r_{4,4}&0&r_{4,6}\\\ 0&r_{5,2}&0&0&r_{5,5}&0\\\
r_{6,1}&0&r_{6,3}&r_{6,4}&0&r_{6,6}\\\ \end{bmatrix}$ (143)
Thus, the data and parameters used in the filter and smoother equations are
$\begin{split}\mbox{$\boldsymbol{y}$}_{t}^{*}&=\mbox{$\mathbf{I}$}_{t}^{(1)}\mbox{$\boldsymbol{y}$}_{t}\\\
\mbox{$\mathbf{a}$}_{t}^{*}&=\mbox{$\mathbf{I}$}_{t}^{(1)}\mbox{$\mathbf{a}$}_{t}\\\
\mbox{$\mathbf{Z}$}_{t}^{*}&=\mbox{$\mathbf{I}$}_{t}^{(1)}\mbox{$\mathbf{Z}$}_{t}\\\
\mbox{$\mathbf{R}$}_{t}^{*}&=\mbox{$\mathbf{I}$}_{t}^{(1)}\mbox{$\mathbf{R}$}_{t}\mbox{$\mathbf{I}$}_{t}^{(1)}+\mbox{$\mathbf{I}$}_{t}^{(2)}\mbox{$\mathbf{R}$}_{t}\mbox{$\mathbf{I}$}_{t}^{(2)}\end{split}$
(144)
$\mbox{$\mathbf{a}$}_{t}^{*}$, $\mbox{$\mathbf{Z}$}_{t}^{*}$ and
$\mbox{$\mathbf{R}$}_{t}^{*}$ are used only in the Kalman filter and smoother.
They are not used in the EM update equations. However, when coding the
algorithm, it is convenient to replace the NAs (or whatever the missing-value
placeholder is) in $\mbox{$\boldsymbol{y}$}_{t}$ with zero so that there is
not a problem with NAs appearing in the computations.
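A numpy sketch of the modifications in equation (144); `obs` is a boolean vector marking the non-missing elements of $\mbox{$\boldsymbol{y}$}_{t}$, `R_t` stands for $\mbox{$\mathbf{H}$}_{t}\mbox{$\mathbf{R}$}\mbox{$\mathbf{H}$}_{t}^{\top}$ as in equation (142), and the inputs are hypothetical:

```python
import numpy as np

def missing_value_mods(y_t, a_t, Z_t, R_t, obs):
    """Equation (144): data and parameters passed to the filter/smoother at time t."""
    n = len(y_t)
    I1 = np.diag(obs.astype(float))            # I_t^(1): zeros out missing rows
    I2 = np.eye(n) - I1                        # I_t^(2): zeros out observed rows
    y_star = I1 @ np.nan_to_num(y_t)           # replace NAs by 0, then zero missing rows
    a_star = I1 @ a_t
    Z_star = I1 @ Z_t
    R_star = I1 @ R_t @ I1 + I2 @ R_t @ I2     # drop covariances between y_t(1) and y_t(2)
    return y_star, a_star, Z_star, R_star
```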
### 6.2 Expectations involving $\mbox{$\boldsymbol{Y}$}_{t}$
First, replace the missing values in $\mbox{$\boldsymbol{y}$}_{t}$ with
zeros (the only reason is so that in your computer code, if you use NA or
NaN as the missing-value marker, NA$-$NA$=0$ and $0\times$NA$=0$ rather than NA), and then
the expectations are given by the following equations. The derivations for
these equations are given in the subsections to follow.
$\displaystyle\widetilde{\mbox{$\mathbf{y}$}}_{t}$
$\displaystyle=\,\textup{{E}}[\mbox{$\boldsymbol{Y}$}_{t}]=\mbox{$\boldsymbol{y}$}_{t}-\nabla_{t}(\mbox{$\boldsymbol{y}$}_{t}-\mbox{$\mathbf{Z}$}_{t}\widetilde{\mbox{$\mathbf{x}$}}_{t}-\mbox{$\mathbf{a}$}_{t})$
(145a) $\displaystyle\widetilde{\mbox{$\mathbf{O}$}}_{t}$
$\displaystyle=\,\textup{{E}}[\mbox{$\boldsymbol{Y}$}_{t}\mbox{$\boldsymbol{Y}$}_{t}^{\top}]=\mbox{$\mathbf{I}$}_{t}^{(2)}(\nabla_{t}\mbox{$\mathbf{H}$}_{t}\mbox{$\mathbf{R}$}_{t}\mbox{$\mathbf{H}$}_{t}^{\top}+\nabla_{t}\mbox{$\mathbf{Z}$}_{t}\widetilde{\mbox{$\mathbf{V}$}}_{t}\mbox{$\mathbf{Z}$}_{t}^{\top}\nabla_{t}^{\top})\mbox{$\mathbf{I}$}_{t}^{(2)}+\widetilde{\mbox{$\mathbf{y}$}}_{t}\widetilde{\mbox{$\mathbf{y}$}}_{t}^{\top}$
(145b) $\displaystyle\widetilde{\mbox{$\mathbf{y}\mathbf{x}$}}_{t}$
$\displaystyle=\,\textup{{E}}[\mbox{$\boldsymbol{Y}$}_{t}\mbox{$\boldsymbol{X}$}_{t}^{\top}]=\nabla_{t}\mbox{$\mathbf{Z}$}_{t}\widetilde{\mbox{$\mathbf{V}$}}_{t}+\widetilde{\mbox{$\mathbf{y}$}}_{t}\widetilde{\mbox{$\mathbf{x}$}}_{t}^{\top}$
(145c) $\displaystyle\widetilde{\mbox{$\mathbf{y}\mathbf{x}$}}_{t,t-1}$
$\displaystyle=\,\textup{{E}}[\mbox{$\boldsymbol{Y}$}_{t}\mbox{$\boldsymbol{X}$}_{t-1}^{\top}]=\nabla_{t}\mbox{$\mathbf{Z}$}_{t}\widetilde{\mbox{$\mathbf{V}$}}_{t,t-1}+\widetilde{\mbox{$\mathbf{y}$}}_{t}\widetilde{\mbox{$\mathbf{x}$}}_{t-1}^{\top}$
(145d) $\displaystyle\text{where }\nabla_{t}$
$\displaystyle=\mbox{$\mathbf{I}$}-\mbox{$\mathbf{H}$}_{t}\mbox{$\mathbf{R}$}_{t}\mbox{$\mathbf{H}$}_{t}^{\top}(\mbox{\boldmath$\Omega$}_{t}^{(1)})^{\top}(\mbox{\boldmath$\Omega$}_{t}^{(1)}\mbox{$\mathbf{H}$}_{t}\mbox{$\mathbf{R}$}_{t}\mbox{$\mathbf{H}$}_{t}^{\top}(\mbox{\boldmath$\Omega$}_{t}^{(1)})^{\top})^{-1}\mbox{\boldmath$\Omega$}_{t}^{(1)}$
(145e) $\displaystyle\text{and }\mbox{$\mathbf{I}$}_{t}^{(2)}$
$\displaystyle=(\mbox{\boldmath$\Omega$}_{t}^{(2)})^{\top}\mbox{\boldmath$\Omega$}_{t}^{(2)}$
(145f)
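A numpy sketch of equations (145a)-(145e), given the smoother output and the missingness pattern at time $t$; here `R_full` stands for $\mbox{$\mathbf{H}$}_{t}\mbox{$\mathbf{R}$}_{t}\mbox{$\mathbf{H}$}_{t}^{\top}$, and the missing entries of `y_t` are assumed to have already been replaced by zeros:

```python
import numpy as np

def y_expectations(y_t, obs, Z_t, a_t, R_full, x_sm, V_sm, V_lag1, x_sm_prev):
    """Equations (145a)-(145e); obs is True where y_t is observed."""
    n = len(y_t)
    Omega1 = np.eye(n)[obs]
    I2 = np.diag((~obs).astype(float))
    R11 = Omega1 @ R_full @ Omega1.T
    nabla = np.eye(n) - R_full @ Omega1.T @ np.linalg.inv(R11) @ Omega1   # (145e)
    y_tilde = y_t - nabla @ (y_t - Z_t @ x_sm - a_t)                      # (145a)
    O_tilde = (I2 @ (nabla @ R_full + nabla @ Z_t @ V_sm @ Z_t.T @ nabla.T) @ I2
               + np.outer(y_tilde, y_tilde))                              # (145b)
    yx_tilde = nabla @ Z_t @ V_sm + np.outer(y_tilde, x_sm)               # (145c)
    yx_lag1 = nabla @ Z_t @ V_lag1 + np.outer(y_tilde, x_sm_prev)         # (145d)
    return y_tilde, O_tilde, yx_tilde, yx_lag1
```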
If $\mbox{$\boldsymbol{y}$}_{t}$ is all missing,
$\mbox{\boldmath$\Omega$}_{t}^{(1)}$ is a $0\times n$ matrix, and we define
$(\mbox{\boldmath$\Omega$}_{t}^{(1)})^{\top}(\mbox{\boldmath$\Omega$}_{t}^{(1)}\mbox{$\mathbf{R}$}(\mbox{\boldmath$\Omega$}_{t}^{(1)})^{\top})^{-1}\mbox{\boldmath$\Omega$}_{t}^{(1)}$
to be an $n\times n$ matrix of zeros. If $\mbox{$\mathbf{R}$}_{t}$ is diagonal,
then
$\mbox{$\mathbf{R}$}_{t}(\mbox{\boldmath$\Omega$}_{t}^{(1)})^{\top}(\mbox{\boldmath$\Omega$}_{t}^{(1)}\mbox{$\mathbf{R}$}_{t}(\mbox{\boldmath$\Omega$}_{t}^{(1)})^{\top})^{-1}\mbox{\boldmath$\Omega$}_{t}^{(1)}=\mbox{$\mathbf{I}$}_{t}^{(1)}$
and $\nabla_{t}=\mbox{$\mathbf{I}$}_{t}^{(2)}$. This will mean that in
$\widetilde{\mbox{$\mathbf{y}$}}_{t}$ the $\mbox{$\boldsymbol{y}$}_{t}(2)$ are
given by
$\mbox{$\mathbf{Z}$}_{t}\widetilde{\mbox{$\mathbf{x}$}}_{t}+\mbox{$\mathbf{a}$}_{t}$,
as expected when $\mbox{$\boldsymbol{y}$}_{t}(1)$ and
$\mbox{$\boldsymbol{y}$}_{t}(2)$ are independent.
If there are zeros on the diagonal of $\mbox{$\mathbf{R}$}_{t}$ (section 7),
the definition of $\nabla_{t}$ is changed slightly from that shown in equation
145. Let $\mho_{t}^{(r)}$ be the matrix that extracts the elements $i$ of
$\mbox{$\boldsymbol{y}$}_{t}$ where $\mbox{$\boldsymbol{y}$}_{t}(i)$ is not
missing AND the $(i,i)$ element of
$\mbox{$\mathbf{H}$}_{t}\mbox{$\mathbf{R}$}_{t}\mbox{$\mathbf{H}$}_{t}^{\top}$
is not zero. Then
$\nabla_{t}=\mbox{$\mathbf{I}$}-\mbox{$\mathbf{H}$}_{t}\mbox{$\mathbf{R}$}_{t}\mbox{$\mathbf{H}$}_{t}^{\top}(\mho_{t}^{(r)})^{\top}(\mho_{t}^{(r)}\mbox{$\mathbf{H}$}_{t}\mbox{$\mathbf{R}$}_{t}\mbox{$\mathbf{H}$}_{t}^{\top}(\mho_{t}^{(r)})^{\top})^{-1}\mho_{t}^{(r)}$
(146)
### 6.3 Derivation of the expected value of $\mbox{$\boldsymbol{Y}$}_{t}$
In the MARSS equation, the observation errors are denoted
$\mbox{$\mathbf{H}$}_{t}\mbox{$\mathbf{v}$}_{t}$. $\mbox{$\mathbf{v}$}_{t}$ is
a specific realization from a random variable $\mbox{$\mathbf{V}$}_{t}$ that
is distributed multivariate normal with mean 0 and variance
$\mbox{$\mathbf{R}$}_{t}$. $\mbox{$\mathbf{V}$}_{t}$ is not to be confused
with $\widetilde{\mbox{$\mathbf{V}$}}_{t}$ in equation 138, which is
unrelated to $\mbox{$\mathbf{V}$}_{t}$. (I apologize for the confusing notation, but
$\widetilde{\mbox{$\mathbf{V}$}}_{t}$ and $\mbox{$\mathbf{v}$}_{t}$ are
somewhat standard in the MARSS literature, and it is standard to use a capital
letter to refer to a random variable; thus $\mbox{$\mathbf{V}$}_{t}$ is
the standard way to refer to the random variable associated with
$\mbox{$\mathbf{v}$}_{t}$.) If there are no
missing values, then we condition on
$\mbox{$\boldsymbol{Y}$}_{t}=\mbox{$\boldsymbol{y}$}_{t}$ and
$\begin{split}\,\textup{{E}}[\mbox{$\boldsymbol{Y}$}_{t}|\mbox{$\boldsymbol{Y}$}(1)=\mbox{$\boldsymbol{y}$}(1)]=\,\textup{{E}}[\mbox{$\boldsymbol{Y}$}_{t}|\mbox{$\boldsymbol{Y}$}_{t}=\mbox{$\boldsymbol{y}$}_{t}]=\mbox{$\boldsymbol{y}$}_{t}\\\
\end{split}$ (147)
If there are no observed values, then
$\begin{split}\,\textup{{E}}[\mbox{$\boldsymbol{Y}$}_{t}|\mbox{$\boldsymbol{Y}$}(1)=\mbox{$\boldsymbol{y}$}(1)]=\,\textup{{E}}[\mbox{$\boldsymbol{Y}$}_{t}]=\,\textup{{E}}[\mbox{$\mathbf{Z}$}_{t}\mbox{$\boldsymbol{X}$}_{t}+\mbox{$\mathbf{a}$}_{t}+\mbox{$\mathbf{V}$}_{t}]=\mbox{$\mathbf{Z}$}_{t}\widetilde{\mbox{$\mathbf{x}$}}_{t}+\mbox{$\mathbf{a}$}_{t}\end{split}$
(148)
If only some of the $\mbox{$\boldsymbol{Y}$}_{t}$ are observed, then we use
the conditional probability for a multivariate normal distribution (here shown
for a bivariate case):
$\text{If, }\begin{bmatrix}Y_{1}\\\
Y_{2}\end{bmatrix}\sim\,\textup{{MVN}}\biggl{(}\begin{bmatrix}\mu_{1}\\\
\mu_{2}\end{bmatrix},\begin{bmatrix}\Sigma_{11}&\Sigma_{12}\\\
\Sigma_{21}&\Sigma_{22}\end{bmatrix}\biggr{)}$ (149)
Then,
$\begin{split}(Y_{1}|Y_{1}=y_{1})&=y_{1},\quad\text{and}\\\
(Y_{2}|Y_{1}=y_{1})&\sim\,\textup{{MVN}}(\bar{\mu},\bar{\Sigma}),\quad\text{where}\\\
\bar{\mu}&=\mu_{2}+\Sigma_{21}\Sigma_{11}^{-1}(y_{1}-\mu_{1})\\\
\bar{\Sigma}&=\Sigma_{22}-\Sigma_{21}\Sigma_{11}^{-1}\Sigma_{12}\end{split}$
(150)
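Equation (150) is the standard conditional multivariate normal result; a tiny numerical sketch (hypothetical values) of the bivariate case:

```python
import numpy as np

mu = np.array([0.5, -1.0])          # (mu_1, mu_2)
Sigma = np.array([[1.0, 0.3],
                  [0.3, 2.0]])      # [[Sigma_11, Sigma_12], [Sigma_21, Sigma_22]]
y1 = 1.2                            # observed value of Y_1

mu_bar = mu[1] + Sigma[1, 0] / Sigma[0, 0] * (y1 - mu[0])           # conditional mean
Sigma_bar = Sigma[1, 1] - Sigma[1, 0] / Sigma[0, 0] * Sigma[0, 1]   # conditional variance
```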
From this property, we can write down the distribution of
$\mbox{$\boldsymbol{Y}$}_{t}$ conditioned on
$\mbox{$\boldsymbol{Y}$}_{t}(1)=\mbox{$\boldsymbol{y}$}_{t}(1)$ and
$\mbox{$\boldsymbol{X}$}_{t}=\mbox{$\boldsymbol{x}$}_{t}$:
$\begin{split}\begin{bmatrix}\mbox{$\boldsymbol{Y}$}_{t}(1)|\mbox{$\boldsymbol{X}$}_{t}=\mbox{$\boldsymbol{x}$}_{t}\\\
\mbox{$\boldsymbol{Y}$}_{t}(2)|\mbox{$\boldsymbol{X}$}_{t}=\mbox{$\boldsymbol{x}$}_{t}\end{bmatrix}&\sim\\\
&\,\textup{{MVN}}\biggl{(}\begin{bmatrix}\mbox{\boldmath$\Omega$}_{t}^{(1)}(\mbox{$\mathbf{Z}$}_{t}\mbox{$\boldsymbol{x}$}_{t}+\mbox{$\mathbf{a}$}_{t})\\\
\mbox{\boldmath$\Omega$}_{t}^{(2)}(\mbox{$\mathbf{Z}$}_{t}\mbox{$\boldsymbol{x}$}_{t}+\mbox{$\mathbf{a}$}_{t})\end{bmatrix},\begin{bmatrix}(\mbox{$\mathbf{H}$}_{t}\mbox{$\mathbf{R}$}_{t}\mbox{$\mathbf{H}$}_{t}^{\top})_{11}&(\mbox{$\mathbf{H}$}_{t}\mbox{$\mathbf{R}$}_{t}\mbox{$\mathbf{H}$}_{t}^{\top})_{12}\\\
(\mbox{$\mathbf{H}$}_{t}\mbox{$\mathbf{R}$}_{t}\mbox{$\mathbf{H}$}_{t}^{\top})_{21}&(\mbox{$\mathbf{H}$}_{t}\mbox{$\mathbf{R}$}_{t}\mbox{$\mathbf{H}$}_{t}^{\top})_{22}\end{bmatrix}\biggr{)}\end{split}$
(151)
Thus,
$\begin{split}(\mbox{$\boldsymbol{Y}$}_{t}(1)&|\mbox{$\boldsymbol{Y}$}_{t}(1)=\mbox{$\boldsymbol{y}$}_{t}(1),\mbox{$\boldsymbol{X}$}_{t}=\mbox{$\boldsymbol{x}$}_{t})=\mbox{\boldmath$\Omega$}_{t}^{(1)}\mbox{$\boldsymbol{y}$}_{t}\quad\text{and}\\\
(\mbox{$\boldsymbol{Y}$}_{t}(2)&|\mbox{$\boldsymbol{Y}$}_{t}(1)=\mbox{$\boldsymbol{y}$}_{t}(1),\mbox{$\boldsymbol{X}$}_{t}=\mbox{$\boldsymbol{x}$}_{t})\sim\,\textup{{MVN}}(\ddot{\mu},\ddot{\Sigma})\quad\text{where}\\\
\ddot{\mu}&=\mbox{\boldmath$\Omega$}_{t}^{(2)}(\mbox{$\mathbf{Z}$}_{t}\mbox{$\boldsymbol{x}$}_{t}+\mbox{$\mathbf{a}$}_{t})+\ddot{\mbox{$\mathbf{R}$}}_{t,21}(\ddot{\mbox{$\mathbf{R}$}}_{t,11})^{-1}\mbox{\boldmath$\Omega$}_{t}^{(1)}(\mbox{$\boldsymbol{y}$}_{t}-\mbox{$\mathbf{Z}$}_{t}\mbox{$\boldsymbol{x}$}_{t}-\mbox{$\mathbf{a}$}_{t})\\\
\ddot{\Sigma}&=\ddot{\mbox{$\mathbf{R}$}}_{t,22}-\ddot{\mbox{$\mathbf{R}$}}_{t,21}(\ddot{\mbox{$\mathbf{R}$}}_{t,11})^{-1}\ddot{\mbox{$\mathbf{R}$}}_{t,12}\\\
\ddot{\mbox{$\mathbf{R}$}}_{t}&=\mbox{$\mathbf{H}$}_{t}\mbox{$\mathbf{R}$}_{t}\mbox{$\mathbf{H}$}_{t}^{\top}\end{split}$
(152)
Note that since we are conditioning on
$\mbox{$\boldsymbol{X}$}_{t}=\mbox{$\boldsymbol{x}$}_{t}$, we can replace
$\mbox{$\boldsymbol{Y}$}(1)$ by $\mbox{$\boldsymbol{Y}$}_{t}(1)$ in the conditional:
$\,\textup{{E}}[\mbox{$\boldsymbol{Y}$}_{t}|\mbox{$\boldsymbol{Y}$}(1)=\mbox{$\boldsymbol{y}$}(1),\mbox{$\boldsymbol{X}$}_{t}=\mbox{$\boldsymbol{x}$}_{t}]=\,\textup{{E}}[\mbox{$\boldsymbol{Y}$}_{t}|\mbox{$\boldsymbol{Y}$}_{t}(1)=\mbox{$\boldsymbol{y}$}_{t}(1),\mbox{$\boldsymbol{X}$}_{t}=\mbox{$\boldsymbol{x}$}_{t}].$
From this and the distributions in equation (152), we can write down
$\widetilde{\mbox{$\mathbf{y}$}}_{t}=\,\textup{{E}}[\mbox{$\boldsymbol{Y}$}_{t}|\mbox{$\boldsymbol{Y}$}(1)=\mbox{$\boldsymbol{y}$}(1),\Theta_{j}]$:
$\begin{split}\widetilde{\mbox{$\mathbf{y}$}}_{t}&=\,\textup{{E}}_{XY}[\mbox{$\boldsymbol{Y}$}_{t}|\mbox{$\boldsymbol{Y}$}(1)=\mbox{$\boldsymbol{y}$}(1)]\\\
&=\int_{\mbox{$\boldsymbol{x}$}_{t}}\int_{\mbox{$\boldsymbol{y}$}_{t}}\mbox{$\boldsymbol{y}$}_{t}f(\mbox{$\boldsymbol{y}$}_{t}|\mbox{$\boldsymbol{y}$}_{t}(1),\mbox{$\boldsymbol{x}$}_{t})d\mbox{$\boldsymbol{y}$}_{t}f(\mbox{$\boldsymbol{x}$}_{t})d\mbox{$\boldsymbol{x}$}_{t}\\\
&=\,\textup{{E}}_{X}[\,\textup{{E}}_{Y|x}[\mbox{$\boldsymbol{Y}$}_{t}|\mbox{$\boldsymbol{Y}$}_{t}(1)=\mbox{$\boldsymbol{y}$}_{t}(1),\mbox{$\boldsymbol{X}$}_{t}=\mbox{$\boldsymbol{x}$}_{t}]]\\\
&=\,\textup{{E}}_{X}[\mbox{$\boldsymbol{y}$}_{t}-\nabla_{t}(\mbox{$\boldsymbol{y}$}_{t}-\mbox{$\mathbf{Z}$}_{t}\mbox{$\boldsymbol{X}$}_{t}-\mbox{$\mathbf{a}$}_{t})]\\\
&=\mbox{$\boldsymbol{y}$}_{t}-\nabla_{t}(\mbox{$\boldsymbol{y}$}_{t}-\mbox{$\mathbf{Z}$}_{t}\widetilde{\mbox{$\mathbf{x}$}}_{t}-\mbox{$\mathbf{a}$}_{t})\\\
\text{where
}\nabla_{t}&=\mbox{$\mathbf{I}$}-\ddot{\mbox{$\mathbf{R}$}}_{t}(\mbox{\boldmath$\Omega$}_{t}^{(1)})^{\top}(\ddot{\mbox{$\mathbf{R}$}}_{t,11})^{-1}\mbox{\boldmath$\Omega$}_{t}^{(1)}\end{split}$
(153)
$(\mbox{\boldmath$\Omega$}_{t}^{(1)})^{\top}(\ddot{\mbox{$\mathbf{R}$}}_{t,11})^{-1}\mbox{\boldmath$\Omega$}_{t}^{(1)}$
is an $n\times n$ matrix with 0s in the non-(11) positions. If the $k$-th
element of $\mbox{$\boldsymbol{y}$}_{t}$ is observed, then the $k$-th row and
column of $\nabla_{t}$ will be zero. Thus if there are no missing values at
time $t$, $\nabla_{t}=\mbox{$\mathbf{I}$}-\mbox{$\mathbf{I}$}=0$. If there are
no observed values at time $t$, $\nabla_{t}$ will reduce to $\mathbf{I}$.
### 6.4 Derivation of the expected value of
$\mbox{$\boldsymbol{Y}$}_{t}\mbox{$\boldsymbol{Y}$}_{t}^{\top}$
The following outlines a derivation. (The derivations are painfully ugly,
but appear to work. There are surely more elegant ways to do this; at least,
there must be more elegant notations.) If there are no missing
values, then we condition on
$\mbox{$\boldsymbol{Y}$}_{t}=\mbox{$\boldsymbol{y}$}_{t}$ and
$\begin{split}\,\textup{{E}}[\mbox{$\boldsymbol{Y}$}_{t}&\mbox{$\boldsymbol{Y}$}_{t}^{\top}|\mbox{$\boldsymbol{Y}$}(1)=\mbox{$\boldsymbol{y}$}(1)]=\,\textup{{E}}[\mbox{$\boldsymbol{Y}$}_{t}\mbox{$\boldsymbol{Y}$}_{t}^{\top}|\mbox{$\boldsymbol{Y}$}_{t}=\mbox{$\boldsymbol{y}$}_{t}]\\\
&=\mbox{$\boldsymbol{y}$}_{t}\mbox{$\boldsymbol{y}$}_{t}^{\top}.\end{split}$
(154)
If there are no observed values at time $t$, then
$\begin{split}\,\textup{{E}}[\mbox{$\boldsymbol{Y}$}_{t}&\mbox{$\boldsymbol{Y}$}_{t}^{\top}]\\\
&=\,\textup{{var}}[\mbox{$\mathbf{Z}$}_{t}\mbox{$\boldsymbol{X}$}_{t}+\mbox{$\mathbf{a}$}_{t}+\mbox{$\mathbf{H}$}_{t}\mbox{$\mathbf{V}$}_{t}]+\,\textup{{E}}[\mbox{$\mathbf{Z}$}_{t}\mbox{$\boldsymbol{X}$}_{t}+\mbox{$\mathbf{a}$}_{t}+\mbox{$\mathbf{H}$}_{t}\mbox{$\mathbf{V}$}_{t}]\,\textup{{E}}[\mbox{$\mathbf{Z}$}_{t}\mbox{$\boldsymbol{X}$}_{t}+\mbox{$\mathbf{a}$}_{t}+\mbox{$\mathbf{H}$}_{t}\mbox{$\mathbf{V}$}_{t}]^{\top}\\\
&=\,\textup{{var}}[\mbox{$\mathbf{V}$}_{t}]+\,\textup{{var}}[\mbox{$\mathbf{Z}$}_{t}\mbox{$\boldsymbol{X}$}_{t}]+(\,\textup{{E}}[\mbox{$\mathbf{Z}$}_{t}\mbox{$\boldsymbol{X}$}_{t}+\mbox{$\mathbf{a}$}_{t}]+\,\textup{{E}}[\mbox{$\mathbf{H}$}_{t}\mbox{$\mathbf{V}$}_{t}])(\,\textup{{E}}[\mbox{$\mathbf{Z}$}_{t}\mbox{$\boldsymbol{X}$}_{t}+\mbox{$\mathbf{a}$}_{t}]+\,\textup{{E}}[\mbox{$\mathbf{H}$}_{t}\mbox{$\mathbf{V}$}_{t}])^{\top}\\\
&=\ddot{\mbox{$\mathbf{R}$}}_{t}+\mbox{$\mathbf{Z}$}_{t}\widetilde{\mbox{$\mathbf{V}$}}_{t}\mbox{$\mathbf{Z}$}_{t}^{\top}+(\mbox{$\mathbf{Z}$}_{t}\widetilde{\mbox{$\mathbf{x}$}}_{t}+\mbox{$\mathbf{a}$}_{t})(\mbox{$\mathbf{Z}$}_{t}\widetilde{\mbox{$\mathbf{x}$}}_{t}+\mbox{$\mathbf{a}$}_{t})^{\top}\end{split}$
(155)
When only some of the $\mbox{$\boldsymbol{Y}$}_{t}$ are observed, we again use
the conditional probability of a multivariate normal (equation 149). From this
property, we know that
$\begin{split}&\,\textup{{var}}_{Y|x}[\mbox{$\boldsymbol{Y}$}_{t}(2)\mbox{$\boldsymbol{Y}$}_{t}(2)^{\top}|\mbox{$\boldsymbol{Y}$}_{t}(1)=\mbox{$\boldsymbol{y}$}_{t}(1),\mbox{$\boldsymbol{X}$}_{t}=\mbox{$\boldsymbol{x}$}_{t}]=\ddot{\mbox{$\mathbf{R}$}}_{t,22}-\ddot{\mbox{$\mathbf{R}$}}_{t,21}(\ddot{\mbox{$\mathbf{R}$}}_{t,11})^{-1}\ddot{\mbox{$\mathbf{R}$}}_{t,12},\\\
&\,\textup{{var}}_{Y|x}[\mbox{$\boldsymbol{Y}$}_{t}(1)|\mbox{$\boldsymbol{Y}$}_{t}(1)=\mbox{$\boldsymbol{y}$}_{t}(1),\mbox{$\boldsymbol{X}$}_{t}=\mbox{$\boldsymbol{x}$}_{t}]=0\\\
\text{and
}&\,\textup{{cov}}_{Y|x}[\mbox{$\boldsymbol{Y}$}_{t}(1),\mbox{$\boldsymbol{Y}$}_{t}(2)|\mbox{$\boldsymbol{Y}$}_{t}(1)=\mbox{$\boldsymbol{y}$}_{t}(1),\mbox{$\boldsymbol{X}$}_{t}=\mbox{$\boldsymbol{x}$}_{t}]=0\\\
\\\ \text{Thus
}&\,\textup{{var}}_{Y|x}[\mbox{$\boldsymbol{Y}$}_{t}|\mbox{$\boldsymbol{Y}$}_{t}(1)=\mbox{$\boldsymbol{y}$}_{t}(1),\mbox{$\boldsymbol{X}$}_{t}=\mbox{$\boldsymbol{x}$}_{t}]\\\
&=(\mbox{\boldmath$\Omega$}_{t}^{(2)})^{\top}(\ddot{\mbox{$\mathbf{R}$}}_{t,22}-\ddot{\mbox{$\mathbf{R}$}}_{t,21}(\ddot{\mbox{$\mathbf{R}$}}_{t,11})^{-1}\ddot{\mbox{$\mathbf{R}$}}_{t,12})\mbox{\boldmath$\Omega$}_{t}^{(2)}\\\
&=(\mbox{\boldmath$\Omega$}_{t}^{(2)})^{\top}(\mbox{\boldmath$\Omega$}_{t}^{(2)}\ddot{\mbox{$\mathbf{R}$}}_{t}(\mbox{\boldmath$\Omega$}_{t}^{(2)})^{\top}-\mbox{\boldmath$\Omega$}_{t}^{(2)}\ddot{\mbox{$\mathbf{R}$}}_{t}(\mbox{\boldmath$\Omega$}_{t}^{(1)})^{\top}(\ddot{\mbox{$\mathbf{R}$}}_{t,11})^{-1}\mbox{\boldmath$\Omega$}_{t}^{(1)}\ddot{\mbox{$\mathbf{R}$}}_{t}(\mbox{\boldmath$\Omega$}_{t}^{(2)})^{\top})\mbox{\boldmath$\Omega$}_{t}^{(2)}\\\
&=\mbox{$\mathbf{I}$}_{t}^{(2)}(\ddot{\mbox{$\mathbf{R}$}}_{t}-\ddot{\mbox{$\mathbf{R}$}}_{t}(\mbox{\boldmath$\Omega$}_{t}^{(1)})^{\top}(\ddot{\mbox{$\mathbf{R}$}}_{t,11})^{-1}\mbox{\boldmath$\Omega$}_{t}^{(1)}\ddot{\mbox{$\mathbf{R}$}}_{t})\mbox{$\mathbf{I}$}_{t}^{(2)}\\\
&=\mbox{$\mathbf{I}$}_{t}^{(2)}\nabla_{t}\ddot{\mbox{$\mathbf{R}$}}_{t}\mbox{$\mathbf{I}$}_{t}^{(2)}\end{split}$
(156)
The $\mbox{$\mathbf{I}$}_{t}^{(2)}$ bracketing both sides zeros out the
rows and columns corresponding to the $\mbox{$\boldsymbol{y}$}_{t}(1)$ values.
Now we can compute the
$\,\textup{{E}}_{XY}[\mbox{$\boldsymbol{Y}$}_{t}\mbox{$\boldsymbol{Y}$}_{t}^{\top}|\mbox{$\boldsymbol{Y}$}(1)=\mbox{$\boldsymbol{y}$}(1)]$.
The subscripts are added to the E to emphasize that we are breaking the
multivariate expectation into an inner and outer expectation.
$\begin{split}\widetilde{\mbox{$\mathbf{O}$}}_{t}&=\,\textup{{E}}_{XY}[\mbox{$\boldsymbol{Y}$}_{t}\mbox{$\boldsymbol{Y}$}_{t}^{\top}|\mbox{$\boldsymbol{Y}$}(1)=\mbox{$\boldsymbol{y}$}(1)]=\,\textup{{E}}_{X}[\,\textup{{E}}_{Y|x}[\mbox{$\boldsymbol{Y}$}_{t}\mbox{$\boldsymbol{Y}$}_{t}^{\top}|\mbox{$\boldsymbol{Y}$}_{t}(1)=\mbox{$\boldsymbol{y}$}_{t}(1),\mbox{$\boldsymbol{X}$}_{t}=\mbox{$\boldsymbol{x}$}_{t}]]\\\
&=\,\textup{{E}}_{X}\bigl{[}\,\textup{{var}}_{Y|x}[\mbox{$\boldsymbol{Y}$}_{t}|\mbox{$\boldsymbol{Y}$}_{t}(1)=\mbox{$\boldsymbol{y}$}_{t}(1),\mbox{$\boldsymbol{X}$}_{t}=\mbox{$\boldsymbol{x}$}_{t}]\\\
&\quad+\,\textup{{E}}_{Y|x}[\mbox{$\boldsymbol{Y}$}_{t}|\mbox{$\boldsymbol{Y}$}_{t}(1)=\mbox{$\boldsymbol{y}$}_{t}(1),\mbox{$\boldsymbol{X}$}_{t}=\mbox{$\boldsymbol{x}$}_{t}]\,\textup{{E}}_{Y|x}[\mbox{$\boldsymbol{Y}$}_{t}|\mbox{$\boldsymbol{Y}$}_{t}(1)=\mbox{$\boldsymbol{y}$}_{t}(1),\mbox{$\boldsymbol{X}$}_{t}=\mbox{$\boldsymbol{x}$}_{t}]^{\top}\bigr{]}\\\
&=\,\textup{{E}}_{X}[\mbox{$\mathbf{I}$}_{t}^{(2)}\nabla_{t}\ddot{\mbox{$\mathbf{R}$}}_{t}\mbox{$\mathbf{I}$}_{t}^{(2)}]+\,\textup{{E}}_{X}[(\mbox{$\boldsymbol{y}$}_{t}-\nabla_{t}(\mbox{$\boldsymbol{y}$}_{t}-\mbox{$\mathbf{Z}$}_{t}\mbox{$\boldsymbol{X}$}_{t}-\mbox{$\mathbf{a}$}_{t}))(\mbox{$\boldsymbol{y}$}_{t}-\nabla_{t}(\mbox{$\boldsymbol{y}$}_{t}-\mbox{$\mathbf{Z}$}_{t}\mbox{$\boldsymbol{X}$}_{t}-\mbox{$\mathbf{a}$}_{t}))^{\top}]\\\
&=\mbox{$\mathbf{I}$}_{t}^{(2)}\nabla_{t}\ddot{\mbox{$\mathbf{R}$}}_{t}\mbox{$\mathbf{I}$}_{t}^{(2)}+\,\textup{{var}}_{X}\bigl{[}\mbox{$\boldsymbol{y}$}_{t}-\nabla_{t}(\mbox{$\boldsymbol{y}$}_{t}-\mbox{$\mathbf{Z}$}_{t}\mbox{$\boldsymbol{X}$}_{t}-\mbox{$\mathbf{a}$}_{t})\bigr{]}\\\
&\quad+\,\textup{{E}}_{X}[\mbox{$\boldsymbol{y}$}_{t}-\nabla_{t}(\mbox{$\boldsymbol{y}$}_{t}-\mbox{$\mathbf{Z}$}_{t}\mbox{$\boldsymbol{X}$}_{t}-\mbox{$\mathbf{a}$}_{t})]\,\textup{{E}}_{X}[\mbox{$\boldsymbol{y}$}_{t}-\nabla_{t}(\mbox{$\boldsymbol{y}$}_{t}-\mbox{$\mathbf{Z}$}_{t}\mbox{$\boldsymbol{X}$}_{t}-\mbox{$\mathbf{a}$}_{t})]^{\top}\\\
&=\mbox{$\mathbf{I}$}_{t}^{(2)}\nabla_{t}\ddot{\mbox{$\mathbf{R}$}}_{t}\mbox{$\mathbf{I}$}_{t}^{(2)}+\mbox{$\mathbf{I}$}_{t}^{(2)}\nabla_{t}\mbox{$\mathbf{Z}$}_{t}\widetilde{\mbox{$\mathbf{V}$}}_{t}\mbox{$\mathbf{Z}$}_{t}^{\top}\nabla_{t}^{\top}\mbox{$\mathbf{I}$}_{t}^{(2)}+\widetilde{\mbox{$\mathbf{y}$}}_{t}\widetilde{\mbox{$\mathbf{y}$}}_{t}^{\top}\\\
\end{split}$ (157)
Thus,
$\widetilde{\mbox{$\mathbf{O}$}}_{t}=\mbox{$\mathbf{I}$}_{t}^{(2)}(\nabla_{t}\ddot{\mbox{$\mathbf{R}$}}_{t}+\nabla_{t}\mbox{$\mathbf{Z}$}_{t}\widetilde{\mbox{$\mathbf{V}$}}_{t}\mbox{$\mathbf{Z}$}_{t}^{\top}\nabla_{t}^{\top})\mbox{$\mathbf{I}$}_{t}^{(2)}+\widetilde{\mbox{$\mathbf{y}$}}_{t}\widetilde{\mbox{$\mathbf{y}$}}_{t}^{\top}\\\
$ (158)
### 6.5 Derivation of the expected value of
$\mbox{$\boldsymbol{Y}$}_{t}\mbox{$\boldsymbol{X}$}_{t}^{\top}$
If there are no missing values, then we condition on
$\mbox{$\boldsymbol{Y}$}_{t}=\mbox{$\boldsymbol{y}$}_{t}$ and
$\begin{split}\,\textup{{E}}[\mbox{$\boldsymbol{Y}$}_{t}\mbox{$\boldsymbol{X}$}_{t}^{\top}|\mbox{$\boldsymbol{Y}$}(1)=\mbox{$\boldsymbol{y}$}(1)]=\mbox{$\boldsymbol{y}$}_{t}\,\textup{{E}}[\mbox{$\boldsymbol{X}$}_{t}^{\top}]=\mbox{$\boldsymbol{y}$}_{t}\widetilde{\mbox{$\mathbf{x}$}}_{t}^{\top}\end{split}$
(159)
If there are no observed values at time $t$, then
$\begin{split}\,\textup{{E}}[\mbox{$\boldsymbol{Y}$}_{t}&\mbox{$\boldsymbol{X}$}_{t}^{\top}|\mbox{$\boldsymbol{Y}$}(1)=\mbox{$\boldsymbol{y}$}(1)]\\\
&=\,\textup{{E}}[(\mbox{$\mathbf{Z}$}_{t}\mbox{$\boldsymbol{X}$}_{t}+\mbox{$\mathbf{a}$}_{t}+\mbox{$\mathbf{V}$}_{t})\mbox{$\boldsymbol{X}$}_{t}^{\top}]\\\
&=\,\textup{{E}}[\mbox{$\mathbf{Z}$}_{t}\mbox{$\boldsymbol{X}$}_{t}\mbox{$\boldsymbol{X}$}_{t}^{\top}+\mbox{$\mathbf{a}$}_{t}\mbox{$\boldsymbol{X}$}_{t}^{\top}+\mbox{$\mathbf{V}$}_{t}\mbox{$\boldsymbol{X}$}_{t}^{\top}]\\\
&=\mbox{$\mathbf{Z}$}_{t}\widetilde{\mbox{$\mathbf{P}$}}_{t}+\mbox{$\mathbf{a}$}_{t}\widetilde{\mbox{$\mathbf{x}$}}_{t}^{\top}+\,\textup{{cov}}[\mbox{$\mathbf{V}$}_{t},\mbox{$\boldsymbol{X}$}_{t}]+\,\textup{{E}}[\mbox{$\mathbf{V}$}_{t}]\,\textup{{E}}[\mbox{$\boldsymbol{X}$}_{t}]^{\top}\\\
&=\mbox{$\mathbf{Z}$}_{t}\widetilde{\mbox{$\mathbf{P}$}}_{t}+\mbox{$\mathbf{a}$}_{t}\widetilde{\mbox{$\mathbf{x}$}}_{t}^{\top}\end{split}$
(160)
Note that $\mbox{$\mathbf{V}$}_{t}$ and $\mbox{$\boldsymbol{X}$}_{t}$ are
independent (equation 1). $\,\textup{{E}}[\mbox{$\mathbf{V}$}_{t}]=0$ and
$\,\textup{{cov}}[\mbox{$\mathbf{V}$}_{t},\mbox{$\boldsymbol{X}$}_{t}]=0$.
Now we can compute the
$\,\textup{{E}}_{XY}[\mbox{$\boldsymbol{Y}$}_{t}\mbox{$\boldsymbol{X}$}_{t}^{\top}|\mbox{$\boldsymbol{Y}$}(1)=\mbox{$\boldsymbol{y}$}(1)]$.
$\begin{split}\widetilde{\mbox{$\mathbf{y}\mathbf{x}$}}_{t}&=\,\textup{{E}}_{XY}[\mbox{$\boldsymbol{Y}$}_{t}\mbox{$\boldsymbol{X}$}_{t}^{\top}|\mbox{$\boldsymbol{Y}$}(1)=\mbox{$\boldsymbol{y}$}(1)]\\\
&=\,\textup{{cov}}[\mbox{$\boldsymbol{Y}$}_{t},\mbox{$\boldsymbol{X}$}_{t}|\mbox{$\boldsymbol{Y}$}_{t}(1)=\mbox{$\boldsymbol{y}$}_{t}(1)]+\,\textup{{E}}_{XY}[\mbox{$\boldsymbol{Y}$}_{t}|\mbox{$\boldsymbol{Y}$}(1)=\mbox{$\boldsymbol{y}$}(1)]\,\textup{{E}}_{XY}[\mbox{$\boldsymbol{X}$}_{t}^{\top}|\mbox{$\boldsymbol{Y}$}(1)=\mbox{$\boldsymbol{y}$}(1)]^{\top}\\\
&=\,\textup{{cov}}[\mbox{$\boldsymbol{y}$}_{t}-\nabla_{t}(\mbox{$\boldsymbol{y}$}_{t}-\mbox{$\mathbf{Z}$}_{t}\mbox{$\boldsymbol{X}$}_{t}-\mbox{$\mathbf{a}$}_{t})+\mbox{$\mathbf{V}$}^{*}_{t},\mbox{$\boldsymbol{X}$}_{t}]+\widetilde{\mbox{$\mathbf{y}$}}_{t}\widetilde{\mbox{$\mathbf{x}$}}_{t}^{\top}\\\
&=\,\textup{{cov}}[\mbox{$\boldsymbol{y}$}_{t},\mbox{$\boldsymbol{X}$}_{t}]-\,\textup{{cov}}[\nabla_{t}\mbox{$\boldsymbol{y}$}_{t},\mbox{$\boldsymbol{X}$}_{t}]+\,\textup{{cov}}[\nabla_{t}\mbox{$\mathbf{Z}$}_{t}\mbox{$\boldsymbol{X}$}_{t},\mbox{$\boldsymbol{X}$}_{t}]+\,\textup{{cov}}[\nabla_{t}\mbox{$\mathbf{a}$}_{t},\mbox{$\boldsymbol{X}$}_{t}]\\\
&\quad+\,\textup{{cov}}[\mbox{$\mathbf{V}$}^{*}_{t},\mbox{$\boldsymbol{X}$}_{t}]+\widetilde{\mbox{$\mathbf{y}$}}_{t}\widetilde{\mbox{$\mathbf{x}$}}_{t}^{\top}\\\
&=0-0+\nabla_{t}\mbox{$\mathbf{Z}$}_{t}\widetilde{\mbox{$\mathbf{V}$}}_{t}+0+0+\widetilde{\mbox{$\mathbf{y}$}}_{t}\widetilde{\mbox{$\mathbf{x}$}}_{t}^{\top}\\\
&=\nabla_{t}\mbox{$\mathbf{Z}$}_{t}\widetilde{\mbox{$\mathbf{V}$}}_{t}+\widetilde{\mbox{$\mathbf{y}$}}_{t}\widetilde{\mbox{$\mathbf{x}$}}_{t}^{\top}\end{split}$
(161)
This uses the computational formula for covariance:
$\,\textup{{E}}[\mbox{$\boldsymbol{Y}$}\mbox{$\boldsymbol{X}$}^{\top}]=\,\textup{{cov}}[\mbox{$\boldsymbol{Y}$},\mbox{$\boldsymbol{X}$}]+\,\textup{{E}}[\mbox{$\boldsymbol{Y}$}]\,\textup{{E}}[\mbox{$\boldsymbol{X}$}]^{\top}$.
$\mbox{$\mathbf{V}$}^{*}_{t}$ is a random variable with mean 0 and variance
$\ddot{\mbox{$\mathbf{R}$}}_{t,22}-\ddot{\mbox{$\mathbf{R}$}}_{t,21}(\ddot{\mbox{$\mathbf{R}$}}_{t,11})^{-1}\ddot{\mbox{$\mathbf{R}$}}_{t,12}$
from equation (152). $\mbox{$\mathbf{V}$}^{*}_{t}$ and
$\mbox{$\boldsymbol{X}$}_{t}$ are independent of each other, thus
$\,\textup{{cov}}[\mbox{$\mathbf{V}$}^{*}_{t},\mbox{$\boldsymbol{X}$}_{t}^{\top}]=0$.
### 6.6 Derivation of the expected value of
$\mbox{$\boldsymbol{Y}$}_{t}\mbox{$\boldsymbol{X}$}_{t-1}^{\top}$
The derivation of
$\,\textup{{E}}[\mbox{$\boldsymbol{Y}$}_{t}\mbox{$\boldsymbol{X}$}_{t-1}^{\top}]$
is similar to the derivation of
$\,\textup{{E}}[\mbox{$\boldsymbol{Y}$}_{t}\mbox{$\boldsymbol{X}$}_{t}^{\top}]$:
$\begin{split}\widetilde{\mbox{$\mathbf{y}\mathbf{x}$}}_{t,t-1}&=\,\textup{{E}}_{XY}[\mbox{$\boldsymbol{Y}$}_{t}\mbox{$\boldsymbol{X}$}_{t-1}^{\top}|\mbox{$\boldsymbol{Y}$}(1)=\mbox{$\boldsymbol{y}$}(1)]\\\
&=\,\textup{{cov}}[\mbox{$\boldsymbol{Y}$}_{t},\mbox{$\boldsymbol{X}$}_{t-1}|\mbox{$\boldsymbol{Y}$}_{t}(1)=\mbox{$\boldsymbol{y}$}_{t}(1)]+\,\textup{{E}}_{XY}[\mbox{$\boldsymbol{Y}$}_{t}|\mbox{$\boldsymbol{Y}$}(1)=\mbox{$\boldsymbol{y}$}(1)]\,\textup{{E}}_{XY}[\mbox{$\boldsymbol{X}$}_{t-1}^{\top}|\mbox{$\boldsymbol{Y}$}(1)=\mbox{$\boldsymbol{y}$}(1)]^{\top}\\\
&=\,\textup{{cov}}[\mbox{$\boldsymbol{y}$}_{t}-\nabla_{t}(\mbox{$\boldsymbol{y}$}_{t}-\mbox{$\mathbf{Z}$}_{t}\mbox{$\boldsymbol{X}$}_{t}-\mbox{$\mathbf{a}$}_{t})+\mbox{$\mathbf{V}$}^{*}_{t},\mbox{$\boldsymbol{X}$}_{t-1}]+\widetilde{\mbox{$\mathbf{y}$}}_{t}\widetilde{\mbox{$\mathbf{x}$}}_{t-1}^{\top}\\\
&=\,\textup{{cov}}[\mbox{$\boldsymbol{y}$}_{t},\mbox{$\boldsymbol{X}$}_{t-1}]-\,\textup{{cov}}[\nabla_{t}\mbox{$\boldsymbol{y}$}_{t},\mbox{$\boldsymbol{X}$}_{t-1}]+\,\textup{{cov}}[\nabla_{t}\mbox{$\mathbf{Z}$}_{t}\mbox{$\boldsymbol{X}$}_{t},\mbox{$\boldsymbol{X}$}_{t-1}]\\\
&\quad+\,\textup{{cov}}[\nabla_{t}\mbox{$\mathbf{a}$}_{t},\mbox{$\boldsymbol{X}$}_{t-1}]+\,\textup{{cov}}[\mbox{$\mathbf{V}$}^{*}_{t},\mbox{$\boldsymbol{X}$}_{t-1}]+\widetilde{\mbox{$\mathbf{y}$}}_{t}\widetilde{\mbox{$\mathbf{x}$}}_{t-1}^{\top}\\\
&=0-0+\nabla_{t}\mbox{$\mathbf{Z}$}_{t}\widetilde{\mbox{$\mathbf{V}$}}_{t,t-1}+0+0+\widetilde{\mbox{$\mathbf{y}$}}_{t}\widetilde{\mbox{$\mathbf{x}$}}_{t-1}^{\top}\\\
&=\nabla_{t}\mbox{$\mathbf{Z}$}_{t}\widetilde{\mbox{$\mathbf{V}$}}_{t,t-1}+\widetilde{\mbox{$\mathbf{y}$}}_{t}\widetilde{\mbox{$\mathbf{x}$}}_{t-1}^{\top}\end{split}$
(162)
## 7 Degenerate variance models
It is possible that the model has both deterministic and probabilistic elements;
mathematically this means that $\mbox{$\mathbf{G}$}_{t}$,
$\mbox{$\mathbf{H}$}_{t}$ or $\mathbf{F}$ has one or more all-zero rows, which means
that some of the observation or state processes are deterministic. Such models
often arise when a MAR-p is put into MARSS-1 form. Assuming the model is
solvable (one solution and not over-determined), we can modify the Kalman
smoother and EM algorithm to handle models with deterministic elements.
The motivation behind the degenerate variance modification is that we want to
use one set of EM update equations for all models in the MARSS
class—regardless of whether they are partially or fully degenerate. The
difficulties arise in getting the $\mathbf{u}$ and $\xi$ update equations. If
we were to fix these or make $\xi$ stochastic (a fixed mean and fixed
variance), most of the trouble in this section could be avoided. However,
fixing $\xi$ or making it stochastic amounts to putting a prior on it, and placing a
prior whose variance-covariance structure conflicts logically
with the model is often both unavoidable (since the correct variance-
covariance structure depends on the parameters you are trying to estimate) and
disastrous for one’s estimation, although the problem is often difficult to
detect, especially with long time series. Many papers have commented on this
subtle problem. So, we want to be able to estimate $\xi$ so that we do not have to
specify $\Lambda$ (because we remove it from the model). Note that in a
univariate $\boldsymbol{x}$ model (one state), $\Lambda$ is just a variance, so
we do not run into this trouble. The problems arise when $\boldsymbol{x}$ is
multivariate ($>$1 state) and we have to deal with the variance-covariance
structure of the initial states.
### 7.1 Rewriting the state and observation models for degenerate variance
systems
Let’s start with an example:
$\mbox{$\mathbf{R}$}_{t}=\begin{bmatrix}1&.2\\\ .2&1\end{bmatrix}\\\ \text{
and }\mbox{$\mathbf{H}$}_{t}=\begin{bmatrix}1&0\\\ 0&0\\\ 0&1\end{bmatrix}$
(163)
Let $\mbox{\boldmath$\Omega$}_{t,r}^{+}$ be a $p\times n$ matrix that extracts
the $p$ non-zero rows from $\mbox{$\mathbf{H}$}_{t}$. The diagonal matrix
$(\mbox{\boldmath$\Omega$}_{t,r}^{+})^{\top}\mbox{\boldmath$\Omega$}_{t,r}^{+}\equiv\mbox{$\mathbf{I}$}_{t,r}^{+}$
can zero out the $\mbox{$\mathbf{H}$}_{t}$ zero rows
in any matrix with $n$ rows.
$\begin{split}\mbox{\boldmath$\Omega$}_{t,r}^{+}&=\begin{bmatrix}1&0&0\\\
0&0&1\end{bmatrix}\quad\quad\mbox{$\mathbf{I}$}_{t,r}^{+}=(\mbox{\boldmath$\Omega$}_{t,r}^{+})^{\top}\mbox{\boldmath$\Omega$}_{t,r}^{+}=\begin{bmatrix}1&0&0\\\
0&0&0\\\ 0&0&1\end{bmatrix}\\\
\mbox{$\boldsymbol{y}$}_{t}^{+}&=\mbox{\boldmath$\Omega$}_{t,r}^{+}\mbox{$\boldsymbol{y}$}_{t}\\\
\end{split}$ (164)
Let $\mbox{\boldmath$\Omega$}_{t,r}^{(0)}$ be a $(n-p)\times n$ matrix that
extracts the $n-p$ zero rows from $\mbox{$\mathbf{H}$}_{t}$. For the example
above,
$\begin{split}\mbox{\boldmath$\Omega$}_{t,r}^{(0)}&=\begin{bmatrix}0&1&0\\\
\end{bmatrix}\quad\quad\mbox{$\mathbf{I}$}_{t,r}^{(0)}=(\mbox{\boldmath$\Omega$}_{t,r}^{(0)})^{\top}\mbox{\boldmath$\Omega$}_{t,r}^{(0)}=\begin{bmatrix}0&0&0\\\
0&1&0\\\ 0&0&0\end{bmatrix}\\\
\mbox{$\boldsymbol{y}$}_{t}^{(0)}&=\mbox{\boldmath$\Omega$}_{t,r}^{(0)}\mbox{$\boldsymbol{y}$}_{t}\\\
\end{split}$ (165)
Similarly, $\mbox{\boldmath$\Omega$}_{t,q}^{+}$ extracts the non-zero rows
from $\mbox{$\mathbf{G}$}_{t}$ and $\mbox{\boldmath$\Omega$}_{t,q}^{(0)}$
extracts the zero rows. $\mbox{$\mathbf{I}$}_{t,q}^{+}$ and
$\mbox{$\mathbf{I}$}_{t,q}^{(0)}$ are defined similarly.
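These extraction matrices can be built from the zero-row pattern of $\mbox{$\mathbf{H}$}_{t}$ (or $\mbox{$\mathbf{G}$}_{t}$) just like the missing-data $\Omega$ matrices of section 6; a numpy sketch using the $\mbox{$\mathbf{H}$}_{t}$ of equation (163):

```python
import numpy as np

H_t = np.array([[1.0, 0.0],
                [0.0, 0.0],
                [0.0, 1.0]])                   # equation (163): row 2 is all zero
nonzero = np.any(H_t != 0, axis=1)

Omega_plus = np.eye(len(nonzero))[nonzero]     # extracts the non-zero rows, eq. (164)
Omega_zero = np.eye(len(nonzero))[~nonzero]    # extracts the zero rows, eq. (165)
I_plus = Omega_plus.T @ Omega_plus
I_zero = Omega_zero.T @ Omega_zero
```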
Using these definitions, we can rewrite the state process part of the MARSS
model by separating out the deterministic parts:
$\begin{split}\mbox{$\boldsymbol{x}$}_{t}^{(0)}&=\mbox{\boldmath$\Omega$}_{t,q}^{(0)}\mbox{$\boldsymbol{x}$}_{t}=\mbox{\boldmath$\Omega$}_{t,q}^{(0)}(\mbox{$\mathbf{B}$}_{t}\mbox{$\boldsymbol{x}$}_{t-1}+\mbox{$\mathbf{u}$}_{t})\\\
\mbox{$\boldsymbol{x}$}_{t}^{+}&=\mbox{\boldmath$\Omega$}_{t,q}^{+}\mbox{$\boldsymbol{x}$}_{t}=\mbox{\boldmath$\Omega$}_{t,q}^{+}(\mbox{$\mathbf{B}$}_{t}\mbox{$\boldsymbol{x}$}_{t-1}+\mbox{$\mathbf{u}$}_{t}+\mbox{$\mathbf{G}$}_{t}\mbox{$\mathbf{w}$}_{t})\\\
\mbox{$\mathbf{w}$}_{t}^{+}&\sim\,\textup{{MVN}}(0,\mbox{$\mathbf{Q}$}_{t})\\\
\mbox{$\boldsymbol{x}$}_{0}&\sim\,\textup{{MVN}}(\mbox{\boldmath$\xi$},\mbox{\boldmath$\Lambda$})\end{split}$
(166)
Similarly, we can rewrite the observation process part of the MARSS model by
separating out the parts with no observation error:
$\begin{split}\mbox{$\boldsymbol{y}$}_{t}^{(0)}&=\mbox{\boldmath$\Omega$}_{t,r}^{(0)}\mbox{$\boldsymbol{y}$}_{t}=\mbox{\boldmath$\Omega$}_{t,r}^{(0)}(\mbox{$\mathbf{Z}$}_{t}\mbox{$\boldsymbol{x}$}_{t}+\mbox{$\mathbf{a}$}_{t})\\\
&=\mbox{\boldmath$\Omega$}_{t,r}^{(0)}(\mbox{$\mathbf{Z}$}_{t}\mbox{$\mathbf{I}$}_{t,q}^{+}\mbox{$\boldsymbol{x}$}_{t}+\mbox{$\mathbf{Z}$}_{t}\mbox{$\mathbf{I}$}_{t,q}^{(0)}\mbox{$\boldsymbol{x}$}_{t}+\mbox{$\mathbf{a}$}_{t})\\\
\mbox{$\boldsymbol{y}$}_{t}^{+}&=\mbox{\boldmath$\Omega$}_{t,r}^{+}\mbox{$\boldsymbol{y}$}_{t}=\mbox{\boldmath$\Omega$}_{t,r}^{+}(\mbox{$\mathbf{Z}$}_{t}\mbox{$\boldsymbol{x}$}_{t}+\mbox{$\mathbf{a}$}_{t}+\mbox{$\mathbf{H}$}_{t}\mbox{$\mathbf{v}$}_{t})\\\
&=\mbox{\boldmath$\Omega$}_{t,r}^{+}(\mbox{$\mathbf{Z}$}_{t}\mbox{$\mathbf{I}$}_{t,q}^{+}\mbox{$\boldsymbol{x}$}_{t}+\mbox{$\mathbf{Z}$}_{t}\mbox{$\mathbf{I}$}_{t,q}^{(0)}\mbox{$\boldsymbol{x}$}_{t}+\mbox{$\mathbf{a}$}_{t}+\mbox{$\mathbf{H}$}_{t}\mbox{$\mathbf{v}$}_{t})\\\
\mbox{$\mathbf{v}$}_{t}^{+}&\sim\,\textup{{MVN}}(0,\mbox{$\mathbf{R}$}_{t})\end{split}$
(167)
I am treating $\mbox{$\boldsymbol{x}$}_{0}$ as fully stochastic for this example, but in general
$\mathbf{F}$ might have all-zero rows.
In order for this to be solvable using an EM algorithm with the Kalman filter,
we require that no estimated $\mathbf{B}$ or $\mathbf{u}$ elements appear in
the equation for $\mbox{$\boldsymbol{y}$}_{t}^{(0)}$. Since the
$\mbox{$\boldsymbol{y}$}_{t}^{(0)}$ do not appear in the likelihood function
(since $\mbox{$\mathbf{H}$}_{t}^{(0)}=0$), $\mbox{$\boldsymbol{y}$}_{t}^{(0)}$
would not affect the estimate for the parameters appearing in the
$\mbox{$\boldsymbol{y}$}_{t}^{(0)}$ equation. This translates to the following
constraints, $(1_{1\times
m}\otimes\mbox{\boldmath$\Omega$}_{t,r}^{(0)}\mbox{$\mathbf{Z}$}_{t}\mbox{$\mathbf{I}$}_{t,q}^{(0)})\mbox{$\mathbf{D}$}_{t,b}$
is all zeros and
$\mbox{\boldmath$\Omega$}_{t,r}^{(0)}\mbox{$\mathbf{Z}$}_{t}\mbox{$\mathbf{I}$}_{t,q}^{(0)}\mbox{$\mathbf{D}$}_{u}$
is all zeros. Also notice that
$\mbox{\boldmath$\Omega$}_{t,r}^{(0)}\mbox{$\mathbf{Z}$}_{t}$ and
$\mbox{\boldmath$\Omega$}_{t,r}^{(0)}\mbox{$\mathbf{a}$}_{t}$ appear in the
$\mbox{$\boldsymbol{y}$}^{(0)}$ equation and not in the
$\mbox{$\boldsymbol{y}$}^{+}$ equation. This means that
$\mbox{\boldmath$\Omega$}_{t,r}^{(0)}\mbox{$\mathbf{Z}$}_{t}$ and
$\mbox{\boldmath$\Omega$}_{t,r}^{(0)}\mbox{$\mathbf{a}$}_{t}$ must be only
fixed terms.
In summary, the degenerate model becomes
$\begin{split}\mbox{$\boldsymbol{x}$}_{t}^{(0)}&=\mbox{$\mathbf{B}$}_{t}^{(0)}\mbox{$\boldsymbol{x}$}_{t-1}+\mbox{$\mathbf{u}$}_{t}^{(0)}\\\
\mbox{$\boldsymbol{x}$}_{t}^{+}&=\mbox{$\mathbf{B}$}_{t}^{+}\mbox{$\boldsymbol{x}$}_{t-1}+\mbox{$\mathbf{u}$}_{t}^{+}+\mbox{$\mathbf{G}$}_{t}^{+}\mbox{$\mathbf{w}$}_{t}\\\
\mbox{$\mathbf{w}$}_{t}&\sim\,\textup{{MVN}}(0,\mbox{$\mathbf{Q}$}_{t})\\\
\mbox{$\boldsymbol{x}$}_{0}&\sim\,\textup{{MVN}}(\mbox{\boldmath$\xi$},\mbox{\boldmath$\Lambda$})\\\
\mbox{$\boldsymbol{y}$}_{t}^{(0)}&=\mbox{$\mathbf{Z}$}^{(0)}\mbox{$\mathbf{I}$}_{q}^{+}\mbox{$\boldsymbol{x}$}_{t}+\mbox{$\mathbf{Z}$}^{(0)}\mbox{$\mathbf{I}$}_{q}^{(0)}\mbox{$\boldsymbol{x}$}_{t}+\mbox{$\mathbf{a}$}^{(0)}_{t}\\\
\mbox{$\boldsymbol{y}$}_{t}^{+}&=\mbox{$\mathbf{Z}$}_{t}^{+}\mbox{$\boldsymbol{x}$}_{t}+\mbox{$\mathbf{a}$}_{t}^{+}+\mbox{$\mathbf{H}$}_{t}^{+}\mbox{$\mathbf{v}$}_{t}\\\
&=\mbox{$\mathbf{Z}$}_{t}^{+}\mbox{$\mathbf{I}$}_{q}^{+}\mbox{$\boldsymbol{x}$}_{t}+\mbox{$\mathbf{Z}$}_{t}^{+}\mbox{$\mathbf{I}$}_{q}^{(0)}\mbox{$\boldsymbol{x}$}_{t}+\mbox{$\mathbf{a}$}_{t}^{+}+\mbox{$\mathbf{H}$}_{t}^{+}\mbox{$\mathbf{v}$}_{t}\\\
\mbox{$\mathbf{v}$}_{t}&\sim\,\textup{{MVN}}(0,\mbox{$\mathbf{R}$})\end{split}$
(168)
where
$\mbox{$\mathbf{B}$}_{t}^{(0)}=\mbox{\boldmath$\Omega$}_{t,q}^{(0)}\mbox{$\mathbf{B}$}_{t}$
and
$\mbox{$\mathbf{B}$}_{t}^{+}=\mbox{\boldmath$\Omega$}_{t,q}^{+}\mbox{$\mathbf{B}$}_{t}$
so that $\mbox{$\mathbf{B}$}_{t}^{(0)}$ are the rows of
$\mbox{$\mathbf{B}$}_{t}$ corresponding to the zero rows of
$\mbox{$\mathbf{G}$}_{t}$ and $\mbox{$\mathbf{B}$}_{t}^{+}$ are the rows of
$\mbox{$\mathbf{B}$}_{t}$ corresponding to non-zero rows of
$\mbox{$\mathbf{G}$}_{t}$. The other parameters are similarly defined:
$\mbox{$\mathbf{u}$}_{t}^{(0)}=\mbox{\boldmath$\Omega$}_{t,q}^{(0)}\mbox{$\mathbf{u}$}_{t}$
and
$\mbox{$\mathbf{u}$}_{t}^{+}=\mbox{\boldmath$\Omega$}_{t,q}^{+}\mbox{$\mathbf{u}$}_{t}$,
$\mbox{$\mathbf{Z}$}_{t}^{(0)}=\mbox{\boldmath$\Omega$}_{t,r}^{(0)}\mbox{$\mathbf{Z}$}_{t}$
and
$\mbox{$\mathbf{Z}$}_{t}^{+}=\mbox{\boldmath$\Omega$}_{t,r}^{+}\mbox{$\mathbf{Z}$}_{t}$,
and
$\mbox{$\mathbf{a}$}_{t}^{(0)}=\mbox{\boldmath$\Omega$}_{t,r}^{(0)}\mbox{$\mathbf{a}$}_{t}$
and
$\mbox{$\mathbf{a}$}_{t}^{+}=\mbox{\boldmath$\Omega$}_{t,r}^{+}\mbox{$\mathbf{a}$}_{t}$.
### 7.2 Identifying the fully deterministic $\boldsymbol{x}$ rows
To derive EM update equations, we need to take the derivative of the expected
log-likelihood holding everything but the parameter of interest constant. If
there are deterministic $\mbox{$\boldsymbol{x}$}_{t}$ rows, then we cannot
hold these constant and do this partial differentiation with respect to the
state parameters. We need to identify these $\mbox{$\boldsymbol{x}$}_{t}$ rows
and remove them from the likelihood function by rewriting them in terms of
only the state parameters. For this derivation, I am going to make the
simplifying assumption that the locations of the 0 rows in
$\mbox{$\mathbf{G}$}_{t}$ and $\mbox{$\mathbf{H}$}_{t}$ are time-invariant.
This is not strictly necessary, but simplifies the algebra greatly.
For the deterministic $\mbox{$\boldsymbol{x}$}_{t}$ rows, the process equation
is
$\mbox{$\boldsymbol{x}$}_{t}=\mbox{$\mathbf{B}$}_{t}\mbox{$\boldsymbol{x}$}_{t-1}+\mbox{$\mathbf{u}$}_{t}$,
with the $\mbox{$\mathbf{w}$}_{t}$ term left off. When we do the partial
differentiation step in deriving the EM update equation for $\mathbf{u}$,
$\mathbf{B}$ or $\xi$, we will need to take a partial derivative while holding
$\mbox{$\boldsymbol{x}$}_{t}$ and $\mbox{$\boldsymbol{x}$}_{t-1}$ constant. We
cannot hold the deterministic rows of $\mbox{$\boldsymbol{x}$}_{t}$ and
$\mbox{$\boldsymbol{x}$}_{t-1}$ constant while changing the corresponding rows
of $\mbox{$\mathbf{u}$}_{t}$ and $\mbox{$\mathbf{B}$}_{t}$ (or $\xi$ if $t=0$
or $t=1$). If a row of $\mbox{$\boldsymbol{x}$}_{t}$ is fully deterministic,
then that $x_{i,t}$ must change when row $i$ of $\mbox{$\mathbf{u}$}_{t}$ or
$\mbox{$\mathbf{B}$}_{t}$ is changed. Thus we cannot do the partial
differentiation step required in the EM update equation derivation.
So we need to identify the fully deterministic $\mbox{$\boldsymbol{x}$}_{t}$
and treat them differently in our likelihood so we can derive the update
equation. I will use the terms ’deterministic’, ’indirectly stochastic’ and
’directly stochastic’ when referring to the $\mbox{$\boldsymbol{x}$}_{t}$
rows. Deterministic means that that $\mbox{$\boldsymbol{x}$}_{t}$ row (denoted
$\mbox{$\boldsymbol{x}$}_{t}^{d}$) has no state error terms appearing in it
(no $w$ terms) and can be written as a function of only the state parameters.
Indirectly stochastic (denoted $\mbox{$\boldsymbol{x}$}_{t}^{is}$) means that
the corresponding row of $\mbox{$\mathbf{G}$}_{t}$ is all zero (an
$\mbox{$\boldsymbol{x}$}_{t}^{(0)}$ row), but the
$\mbox{$\boldsymbol{x}$}_{t}$ row has a state error term ($w$) which it picked
up through $\mathbf{B}$ in one of the prior
$\mbox{$\mathbf{B}$}_{t}\mbox{$\boldsymbol{x}$}_{t}$ steps. Directly
stochastic (the $\mbox{$\boldsymbol{x}$}_{t}^{+}$) means that the
corresponding row of $\mbox{$\mathbf{G}$}_{t}$ is non-zero and thus these rows pick up a state error term ($w_{t}$) at each time step. The stochastic
$\mbox{$\boldsymbol{x}$}_{t}$ are denoted $\mbox{$\boldsymbol{x}$}_{t}^{s}$
whether they are indirectly or directly stochastic.
How do you determine the $d$, or deterministic, set of
$\mbox{$\boldsymbol{x}$}_{t}$ rows? These are the rows that contain no $w$ terms at time $t$, whether from time $t$ or from any prior time step. Note that the location of the $d$ rows is time-dependent: a row may be deterministic at time $t$ but pick up a $w$ at time $t+1$ and thus be indirectly stochastic thereafter. I am
requiring that once a row becomes indirectly stochastic, it remains that way;
rows are not allowed to flip back and forth between deterministic (no $w$
terms in them) and indirectly stochastic (containing a $w$ term).
I will work through an example and then show a general algorithm to keep track
of the deterministic rows at time $t$.
Let $\mbox{$\boldsymbol{x}$}_{0}=\mbox{\boldmath$\xi$}$ (so $\mathbf{F}$ is
all zero and $\mbox{$\boldsymbol{x}$}_{0}$ is not stochastic). Define
$\mbox{$\mathbf{I}$}_{t}^{ds}$, $\mbox{$\mathbf{I}$}_{t}^{is}$, and
$\mbox{$\mathbf{I}$}_{t}^{d}$ as diagonal indicator matrices with a 1 at
$\mbox{$\mathbf{I}$}(i,i)$ if row $i$ is directly stochastic, indirectly
stochastic, or deterministic respectively.
$\mbox{$\mathbf{I}$}_{t}^{ds}+\mbox{$\mathbf{I}$}_{t}^{is}+\mbox{$\mathbf{I}$}_{t}^{d}=\mbox{$\mathbf{I}$}_{m}$.
Let our state equation be
$\mbox{$\boldsymbol{X}$}_{t}=\mbox{$\mathbf{B}$}_{t}\mbox{$\boldsymbol{X}$}_{t-1}+\mbox{$\mathbf{G}$}_{t}\mbox{$\mathbf{w}$}_{t}$.
Let
$\mbox{$\mathbf{B}$}=\begin{bmatrix}1&1&0&0\\\ 1&0&0&0\\\ 0&1&0&0\\\
0&0&0&1\end{bmatrix}$ (169)
At $t=0$
$\mbox{$\boldsymbol{X}$}_{0}=\begin{bmatrix}\pi_{1}\\\ \pi_{2}\\\ \pi_{3}\\\
\pi_{4}\end{bmatrix}$ (170)
$\mbox{$\mathbf{I}$}_{0}^{d}=\begin{bmatrix}1&0&0&0\\\ 0&1&0&0\\\ 0&0&1&0\\\ 0&0&0&1\end{bmatrix}\quad\mbox{$\mathbf{I}$}_{0}^{ds}=\mbox{$\mathbf{I}$}_{0}^{is}=\begin{bmatrix}0&0&0&0\\\ 0&0&0&0\\\ 0&0&0&0\\\ 0&0&0&0\end{bmatrix}$ (171)
At $t=1$
$\mbox{$\boldsymbol{X}$}_{1}=\begin{bmatrix}\pi_{1}+\pi_{2}+w_{1}\\\
\pi_{1}\\\ \pi_{2}\\\ \pi_{4}\end{bmatrix}$ (172)
$\mbox{$\mathbf{I}$}_{1}^{d}=\begin{bmatrix}0&0&0&0\\\ 0&1&0&0\\\ 0&0&1&0\\\ 0&0&0&1\end{bmatrix}\quad\mbox{$\mathbf{I}$}_{1}^{ds}=\begin{bmatrix}1&0&0&0\\\ 0&0&0&0\\\ 0&0&0&0\\\ 0&0&0&0\end{bmatrix}\quad\mbox{$\mathbf{I}$}_{1}^{is}=\begin{bmatrix}0&0&0&0\\\ 0&0&0&0\\\ 0&0&0&0\\\ 0&0&0&0\end{bmatrix}$ (173)
At $t=2$
$\mbox{$\boldsymbol{X}$}_{2}=\begin{bmatrix}\dots+w_{2}\\\
\pi_{1}+\pi_{2}+w_{1}\\\ \pi_{1}\\\ \pi_{4}\end{bmatrix}$ (174)
$\mbox{$\mathbf{I}$}_{2}^{d}=\begin{bmatrix}0&0&0&0\\\ 0&0&0&0\\\ 0&0&1&0\\\ 0&0&0&1\end{bmatrix}\quad\mbox{$\mathbf{I}$}_{2}^{ds}=\begin{bmatrix}1&0&0&0\\\ 0&0&0&0\\\ 0&0&0&0\\\ 0&0&0&0\end{bmatrix}\quad\mbox{$\mathbf{I}$}_{2}^{is}=\begin{bmatrix}0&0&0&0\\\ 0&1&0&0\\\ 0&0&0&0\\\ 0&0&0&0\end{bmatrix}$ (175)
By $t=3$, the system stabilizes
$\mbox{$\boldsymbol{X}$}_{3}=\begin{bmatrix}\dots+w_{1}+w_{2}+w_{3}\\\
\dots+w_{1}+w_{2}\\\ \pi_{1}+\pi_{2}+w_{1}\\\ \pi_{4}\end{bmatrix}$ (176)
$\mbox{$\mathbf{I}$}_{3}^{d}=\begin{bmatrix}0&0&0&0\\\ 0&0&0&0\\\ 0&0&0&0\\\ 0&0&0&1\end{bmatrix}\quad\mbox{$\mathbf{I}$}_{3}^{ds}=\begin{bmatrix}1&0&0&0\\\ 0&0&0&0\\\ 0&0&0&0\\\ 0&0&0&0\end{bmatrix}\quad\mbox{$\mathbf{I}$}_{3}^{is}=\begin{bmatrix}0&0&0&0\\\ 0&1&0&0\\\ 0&0&1&0\\\ 0&0&0&0\end{bmatrix}$ (177)
After time $t=3$ the location of the deterministic and indirectly stochastic
rows is stabilized and no longer changes.
In general, it can take up to $m$ time steps for the location of the
deterministic rows to stabilize. This is because $\mbox{$\mathbf{B}$}_{t}$ is
like an adjacency matrix, and I require that the location of the 0s in
$\mbox{$\mathbf{B}$}_{1}\mbox{$\mathbf{B}$}_{2}\dots\mbox{$\mathbf{B}$}_{t}$
is time invariant. If we replace all non-zero elements in
$\mbox{$\mathbf{B}$}_{t}$ with 1, then we have an adjacency matrix, let’s call
it $\mathbf{M}$. If there is a path in $\mathbf{M}$ from $x_{j,t}$ to an
$x_{s,t}$, then row $j$ will eventually be indirectly stochastic. Graph theory tells us that it takes at most $m$ steps for an $m\times m$ adjacency
matrix to show full connectivity. This means that if element $j,i$ is 0 in
$M^{m}$ then row $j$ is not connected to row $i$ by any path and thus will
remain unconnected for $M^{t>m}$; note element $i,j$ can be 0 while $j,i$ is
not.
This means that
$\mbox{$\mathbf{B}$}_{1}\mbox{$\mathbf{B}$}_{2}\dots\mbox{$\mathbf{B}$}_{t}$,
$t>m$, can be rearranged to look something like the following, where $ds$ marks directly stochastic rows, $is$ indirectly stochastic rows, and $d$ fully deterministic rows:
$\mbox{$\mathbf{B}$}_{1}\mbox{$\mathbf{B}$}_{2}\dots\mbox{$\mathbf{B}$}_{t}=\begin{bmatrix}ds&ds&ds&ds&ds\\\
ds&ds&ds&ds&ds\\\ is&is&is&is&is\\\ 0&0&0&d&d\\\ 0&0&0&d&d\\\ \end{bmatrix}$
(178)
The $ds$’s, $is$’s and $d$’s are not all equal nor are they necessarily all
non-zero; I am just showing the blocks. The $d$ rows will always be
deterministic while the $is$ rows will only be deterministic for $t<m$ time
steps; the number of time steps depends on the form of $\mathbf{B}$.
Since my $\mbox{$\mathbf{B}$}_{t}$ matrices are small, I use an inefficient strategy in my code to construct the indicator matrices $\mbox{$\mathbf{I}$}_{t}^{d}$. I define $\mathbf{M}$ as $\mbox{$\mathbf{B}$}_{t}$ with the non-zero elements replaced with 1; I
require that the location of the non-zero elements in
$\mbox{$\mathbf{B}$}_{t}$ are time-invariant so there is only one
$\mathbf{M}$. Within the product $\mbox{$\mathbf{M}$}^{t}$, those rows where
only 0s appear in the ’stochastic’ columns (non-zero $\mbox{$\mathbf{G}$}_{t}$
rows) are the fully deterministic $\mbox{$\boldsymbol{x}$}_{t+1}$ rows. Note,
$t+1$ so one time step ahead. There are much faster algorithms for finding
paths, but my $\mathbf{M}$ tend to be small. Also, unfortunately, using
$\mbox{$\mathbf{B}$}_{1}\mbox{$\mathbf{B}$}_{2}\dots\mbox{$\mathbf{B}$}_{t}$,
needed for the $\mbox{$\boldsymbol{x}$}_{t}^{d}$ function, in place of
$\mbox{$\mathbf{M}$}^{t}$ is not robust. Let’s say
$\mbox{$\mathbf{B}$}=\bigl{[}\begin{smallmatrix}-1&-1\\\
1&1\end{smallmatrix}\bigr{]}$ and
$\mbox{$\mathbf{G}$}=\bigl{[}\begin{smallmatrix}1\\\
0\end{smallmatrix}\bigr{]}$. Then $\mbox{$\mathbf{B}$}^{2}$ is a matrix of all
zeros even though the correct $\mbox{$\mathbf{I}$}_{2}^{d}$ is
$\bigl{[}\begin{smallmatrix}0&0\\\ 0&0\end{smallmatrix}\bigr{]}$ not
$\bigl{[}\begin{smallmatrix}0&0\\\ 0&1\end{smallmatrix}\bigr{]}$.
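Here is a minimal R sketch of that strategy (a hypothetical helper, not the MARSS internals): build $\mathbf{M}$ from $\mathbf{B}$, take boolean powers of $\mathbf{M}$ so the sign cancellation seen in the $\mbox{$\mathbf{B}$}^{2}$ example above cannot occur, and flag as deterministic the rows whose $\mathbf{G}$ row is zero and whose entries in the stochastic columns of $\mbox{$\mathbf{M}$}^{\tau}$ are all zero:
```r
det.rows <- function(B, G, tau) {
  M <- (B != 0) * 1                            # adjacency matrix; avoids sign cancellation
  stoch.cols <- which(!apply(G == 0, 1, all))  # columns that receive a w directly
  Mt <- diag(nrow(M))
  for (i in seq_len(tau)) Mt <- ((Mt %*% M) != 0) * 1   # boolean power M^tau
  zeroG <- apply(G == 0, 1, all)
  ## deterministic at time t0 + tau + 1: zero G row and no path to a stochastic column
  zeroG & apply(Mt[, stoch.cols, drop = FALSE] == 0, 1, all)
}

## the B and G from the example above
B <- matrix(c(-1, -1, 1, 1), 2, 2, byrow = TRUE)
G <- matrix(c(1, 0), 2, 1)
det.rows(B, G, 0)  # FALSE  TRUE: row 2 is deterministic at t = 1
det.rows(B, G, 1)  # FALSE FALSE: no rows are deterministic at t = 2 (the correct I_2^d)
```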
#### 7.2.1 Redefining the $\mbox{$\boldsymbol{x}$}_{t}^{d}$ elements in the
likelihood
By definition, all the $\mbox{$\mathbf{B}$}_{t}$ elements in the $ds$ and $is$
columns of the $d$ rows of $\mbox{$\mathbf{B}$}_{t}$ are 0 (see equation 178).
This is due to the constraints that I have imposed: the locations of 0s in $\mbox{$\mathbf{B}$}_{t}$ are time-invariant and the location of the zero rows in $\mbox{$\mathbf{G}$}_{t}$ is also time-invariant, so $\mbox{$\mathbf{I}$}_{q}^{+}$ and $\mbox{$\mathbf{I}$}_{q}^{(0)}$ are time-constant.
Consider this $\mathbf{B}$ and $\mathbf{G}$, which would arise in a MARSS
version of an AR-3 model:
$\mbox{$\mathbf{B}$}=\begin{bmatrix}b_{1}&b_{2}&b_{3}\\\ 1&0&0\\\
0&1&0\end{bmatrix}\quad\mbox{$\mathbf{G}$}=\begin{bmatrix}1\\\ 0\\\
0\end{bmatrix}$ (179)
Using $\mbox{$\boldsymbol{x}$}_{0}=\mbox{\boldmath$\xi$}$:
$\mbox{$\boldsymbol{x}$}_{0}=\begin{bmatrix}\pi_{1}\\\ \pi_{2}\\\
\pi_{3}\end{bmatrix}\quad\mbox{$\boldsymbol{x}$}_{1}=\begin{bmatrix}\dots+w_{1}\\\
\pi_{1}\\\
\pi_{2}\end{bmatrix}\quad\mbox{$\boldsymbol{x}$}_{2}=\begin{bmatrix}\dots+w_{2}\\\
\dots+w_{1}\\\
\pi_{1}\end{bmatrix}\quad\mbox{$\boldsymbol{x}$}_{3}=\begin{bmatrix}\dots+w_{3}\\\
\dots+w_{2}\\\ \dots+w_{1}\end{bmatrix}$ (180)
The $\dots$ just represent ’some values’. The key part is where a $w$ appears, since that is the stochasticity. At $t=1$, rows 2 and 3 are deterministic. At
$t=2$, row 3 is deterministic, and at $t=3$, no rows are deterministic.
We can rewrite the equation for the deterministic rows in
$\mbox{$\boldsymbol{x}$}_{t}$ as follows. Note that by definition, all the
non-$d$ columns in the $d$-rows of $\mbox{$\mathbf{B}$}_{t}$ are zero.
$\mbox{$\boldsymbol{x}$}_{t}^{d}$ is $\mbox{$\boldsymbol{x}$}_{t}$ with the non-$d$ rows zeroed out, so $\mbox{$\boldsymbol{x}$}_{t}^{d}=\mbox{$\mathbf{I}$}_{t}^{d}\mbox{$\boldsymbol{x}$}_{t}$.
$\begin{split}\mbox{$\boldsymbol{x}$}_{1}^{d}&=\mbox{$\mathbf{B}$}_{1}^{d}\mbox{$\boldsymbol{x}$}_{0}+\mbox{$\mathbf{u}$}_{1}^{d}\\\
&=\mbox{$\mathbf{B}$}_{1}^{d}\mbox{$\boldsymbol{x}$}_{0}+\mbox{$\mathbf{f}$}_{u,1}^{d}+\mbox{$\mathbf{D}$}_{u,1}^{d}\boldsymbol{\upsilon}\\\
&=\mbox{$\mathbf{I}$}_{1}^{d}(\mbox{$\mathbf{B}$}_{1}\mbox{$\boldsymbol{x}$}_{0}+\mbox{$\mathbf{f}$}_{u,1}+\mbox{$\mathbf{D}$}_{u,1}\boldsymbol{\upsilon})\\\
\mbox{$\boldsymbol{x}$}_{2}^{d}&=\mbox{$\mathbf{B}$}_{2}^{d}\mbox{$\boldsymbol{x}$}_{1}+\mbox{$\mathbf{u}$}_{2}^{d}\\\
&=\mbox{$\mathbf{B}$}_{2}^{d}(\mbox{$\mathbf{I}$}_{1}^{d}(\mbox{$\mathbf{B}$}_{1}\mbox{$\boldsymbol{x}$}_{0}+\mbox{$\mathbf{f}$}_{u,1}+\mbox{$\mathbf{D}$}_{u,1}\boldsymbol{\upsilon}))+\mbox{$\mathbf{f}$}_{u,2}^{d}+\mbox{$\mathbf{D}$}_{u,2}^{d}\boldsymbol{\upsilon}\\\
&=\mbox{$\mathbf{I}$}_{2}^{d}\mbox{$\mathbf{B}$}_{2}\mbox{$\mathbf{I}$}_{2}^{d}(\mbox{$\mathbf{I}$}_{1}^{d}(\mbox{$\mathbf{B}$}_{1}\mbox{$\boldsymbol{x}$}_{0}+\mbox{$\mathbf{f}$}_{u,1}+\mbox{$\mathbf{D}$}_{u,1}\boldsymbol{\upsilon}))+\mbox{$\mathbf{I}$}_{2}^{d}\mbox{$\mathbf{f}$}_{u,2}+\mbox{$\mathbf{I}$}_{2}^{d}\mbox{$\mathbf{D}$}_{u,2}\boldsymbol{\upsilon}\\\
&=\mbox{$\mathbf{I}$}_{2}^{d}\mbox{$\mathbf{B}$}_{2}\mbox{$\mathbf{B}$}_{1}\mbox{$\boldsymbol{x}$}_{0}+\mbox{$\mathbf{I}$}_{2}^{d}(\mbox{$\mathbf{B}$}_{2}\mbox{$\mathbf{f}$}_{1,u}+\mbox{$\mathbf{f}$}_{2,u})+\mbox{$\mathbf{I}$}_{2}^{d}(\mbox{$\mathbf{B}$}_{2}\mbox{$\mathbf{D}$}_{u,1}+\mbox{$\mathbf{D}$}_{u,2})\boldsymbol{\upsilon}\\\
&=\mbox{$\mathbf{I}$}_{2}^{d}(\mbox{$\mathbf{B}$}_{2}\mbox{$\mathbf{B}$}_{1}\mbox{$\boldsymbol{x}$}_{0}+\mbox{$\mathbf{B}$}_{2}\mbox{$\mathbf{f}$}_{1,u}+\mbox{$\mathbf{f}$}_{2,u}+(\mbox{$\mathbf{B}$}_{2}\mbox{$\mathbf{D}$}_{u,1}+\mbox{$\mathbf{D}$}_{u,2})\boldsymbol{\upsilon})\\\
\dots{\\\ }\end{split}$ (181)
The messy part is keeping track of which rows are deterministic because this
will potentially change up to time $t=m$.
We can rewrite the function for $\mbox{$\boldsymbol{x}$}_{t}^{d}$, where
$t_{0}$ is the $t$ at which the initial state is defined. It is either $t=0$
or $t=1$.
$\begin{split}\mbox{$\boldsymbol{x}$}_{t}^{d}&=\mbox{$\mathbf{I}$}_{t}^{d}(\mbox{$\mathbf{B}$}^{*}_{t}\mbox{$\boldsymbol{x}$}_{t_{0}}+\mbox{$\mathbf{f}$}^{*}_{t}+\mbox{$\mathbf{D}$}^{*}_{t}\boldsymbol{\upsilon})\\\
\text{where}&\\\ \mbox{$\mathbf{B}$}^{*}_{t_{0}}&=\mbox{$\mathbf{I}$}_{m}\\\
\mbox{$\mathbf{B}$}^{*}_{t}&=\mbox{$\mathbf{B}$}_{t}\mbox{$\mathbf{B}$}^{*}_{t-1}\\\
\\\ \mbox{$\mathbf{f}$}^{*}_{t_{0}}&=0\\\
\mbox{$\mathbf{f}$}^{*}_{t}&=\mbox{$\mathbf{B}$}_{t}\mbox{$\mathbf{f}$}^{*}_{t-1}+\mbox{$\mathbf{f}$}_{t,u}\\\
\\\ \mbox{$\mathbf{D}$}^{*}_{t_{0}}&=0\\\
\mbox{$\mathbf{D}$}^{*}_{t}&=\mbox{$\mathbf{B}$}_{t}\mbox{$\mathbf{D}$}^{*}_{t-1}+\mbox{$\mathbf{D}$}_{t,u}\\\
\\\ \mbox{$\mathbf{I}$}_{t_{0}}^{d}&=\mbox{$\mathbf{I}$}_{m}\\\
\text{diag}(\mbox{$\mathbf{I}$}_{t_{0}+\tau}^{d})&=\text{apply}(\mbox{\boldmath$\Omega$}_{q}^{(0)}\mbox{$\mathbf{M}$}^{\tau}\mbox{\boldmath$\Omega$}_{q}^{+}==0,1,\text{all})\end{split}$
(182)
The bottom line is written in R: $\mbox{$\mathbf{I}$}_{t_{0}+\tau}^{d}$ is a
diagonal matrix with a 1 at $(i,i)$ where row $i$ of $\mathbf{G}$ is all 0 and
all $ds$ and $is$ columns in row $i$ of $\mbox{$\mathbf{M}$}^{t}$ are equal to
zero.
In the expected log-likelihood, the term
$\,\textup{{E}}[\mbox{$\boldsymbol{X}$}_{t}^{d}]=\,\textup{{E}}[\mbox{$\boldsymbol{X}$}_{t}^{d}|\mbox{$\boldsymbol{Y}$}=\mbox{$\boldsymbol{y}$}]$,
meaning the expected value of $\mbox{$\boldsymbol{X}$}_{t}^{d}$ conditioned on
the data, appears. Thus in the expected log-likelihood the function will be
written:
$\begin{split}\mbox{$\boldsymbol{X}$}_{t}^{d}&=\mbox{$\mathbf{I}$}_{t}^{d}(\mbox{$\mathbf{B}$}^{*}_{t}\mbox{$\boldsymbol{X}$}_{t_{0}}+\mbox{$\mathbf{f}$}^{*}_{t}+\mbox{$\mathbf{D}$}^{*}_{t}\boldsymbol{\upsilon})\\\
\,\textup{{E}}[\mbox{$\boldsymbol{X}$}_{t}^{d}]&=\mbox{$\mathbf{I}$}_{t}^{d}(\mbox{$\mathbf{B}$}^{*}_{t}\,\textup{{E}}[\mbox{$\boldsymbol{X}$}_{t_{0}}]+\mbox{$\mathbf{f}$}^{*}_{t}+\mbox{$\mathbf{D}$}^{*}_{t}\boldsymbol{\upsilon})\end{split}$
(183)
When the $j$-th row of $\mathbf{F}$ is all zero, meaning the $j$-th row of
$\mbox{$\boldsymbol{x}$}_{0}$ is fixed to be $\xi_{j}$, then
$\,\textup{{E}}[X_{t_{0},j}]\equiv\xi_{j}$. This is the case where we treat
$x_{t_{0},j}$ as fixed and we either estimate or specify its value. If
$\mbox{$\boldsymbol{x}$}_{t_{0}}$ is wholly treated as fixed, then
$\,\textup{{E}}[\mbox{$\boldsymbol{X}$}_{t_{0}}]\equiv\mbox{\boldmath$\xi$}$
and $\Lambda$ does not appear in the model at all. In the general case, where
some $x_{t_{0},j}$ are treated as fixed and some as stochastic, we can write the $\,\textup{{E}}[\mbox{$\boldsymbol{X}$}_{t_{0}}]$ appearing in $\,\textup{{E}}[\mbox{$\boldsymbol{X}$}_{t}^{d}]$ in the expected log-likelihood as:
$\begin{split}\,\textup{{E}}[\mbox{$\boldsymbol{X}$}_{t_{0}}]=(\mbox{$\mathbf{I}$}_{m}-\mbox{$\mathbf{I}$}_{\lambda}^{(0)})\,\textup{{E}}[\mbox{$\boldsymbol{X}$}_{t_{0}}]+\mbox{$\mathbf{I}$}_{\lambda}^{(0)}\mbox{\boldmath$\xi$}\end{split}$
(184)
$\mbox{$\mathbf{I}$}_{\lambda}^{(0)}$ is a diagonal indicator matrix with 1 at
$(j,j)$ if row $j$ of $\mathbf{F}$ is all zero.
If $\mbox{$\mathbf{B}$}^{d,d}$ and $\mbox{$\mathbf{u}$}^{d}$ are time-
constant, we could use the matrix geometric series:
$\begin{split}\mbox{$\boldsymbol{x}$}_{t}^{d}=&(\mbox{$\mathbf{B}$}^{d,d})^{t}\mbox{$\boldsymbol{x}$}_{0}^{d}+\sum_{i=0}^{t-1}(\mbox{$\mathbf{B}$}^{d,d})^{i}\mbox{$\mathbf{u}$}^{d}=(\mbox{$\mathbf{B}$}^{d,d})^{t}\mbox{$\boldsymbol{x}$}_{0}^{d}+(\mbox{$\mathbf{I}$}-\mbox{$\mathbf{B}$}^{d,d})^{-1}(\mbox{$\mathbf{I}$}-(\mbox{$\mathbf{B}$}^{d,d})^{t})\mbox{$\mathbf{u}$}^{d},\quad\text{if
}\mbox{$\mathbf{B}$}^{d,d}\neq\mbox{$\mathbf{I}$}\\\
&\mbox{$\boldsymbol{x}$}_{0}^{d}+t\,\mbox{$\mathbf{u}$}^{d},\quad\text{if }\mbox{$\mathbf{B}$}^{d,d}=\mbox{$\mathbf{I}$}\end{split}$ (185)
where $\mbox{$\mathbf{B}$}^{d,d}$ is the block of $d$’s in equation 178.
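As a quick numerical check of equation (185), here is a minimal R sketch with hypothetical time-constant values (the $\mbox{$\mathbf{B}$}^{d,d}\neq\mbox{$\mathbf{I}$}$ case), comparing the direct recursion against the closed form:
```r
Bdd <- matrix(c(0.5, 0.1,
                0.0, 0.8), 2, 2, byrow = TRUE)
ud  <- c(1, 2)
x0d <- c(0, 0)
tt  <- 10

## direct recursion: x_t = Bdd x_{t-1} + ud
x <- x0d
for (i in seq_len(tt)) x <- Bdd %*% x + ud

## closed form: Bdd^t x_0 + (I - Bdd)^{-1} (I - Bdd^t) ud
mat.pow <- function(A, k) Reduce(`%*%`, replicate(k, A, simplify = FALSE), diag(nrow(A)))
Bt <- mat.pow(Bdd, tt)
x.closed <- Bt %*% x0d + solve(diag(2) - Bdd) %*% (diag(2) - Bt) %*% ud

all.equal(c(x), c(x.closed))  # TRUE
```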
#### 7.2.2 Dealing with the $\mbox{$\boldsymbol{x}$}_{t}^{is}$ elements in
the likelihood and associated parameter rows
Although $\mbox{$\mathbf{w}$}_{t}^{is}=0$, these terms are connected to the
stochastic $\boldsymbol{x}$’s in earlier time steps through $\mathbf{B}$, thus
all $\mbox{$\boldsymbol{x}$}_{t}^{is}$ are possible for a given
$\mbox{$\mathbf{u}$}_{t}$, $\mbox{$\mathbf{B}$}_{t}$ or $\xi$. However, all
$\mbox{$\boldsymbol{x}$}_{t}^{is}$ are not possible conditioned on
$\mbox{$\boldsymbol{x}$}_{t-1}$, so we are back in the position that we cannot
both change $\mbox{$\boldsymbol{x}$}_{t}$ and change
$\mbox{$\mathbf{u}$}_{t}$.
Recall that for the partial differentiation step in the EM algorithm, we need
to be able to hold the $E[\mbox{$\boldsymbol{X}$}_{t}]$ appearing in the
likelihood constant. We can deal with the deterministic
$\mbox{$\boldsymbol{x}$}_{t}$ because they are not stochastic and do not have
’expected values’. They can be removed from the likelihood by rewriting
$\mbox{$\boldsymbol{x}$}_{t}^{d}$ in terms of the model parameters. We cannot
do that for $\mbox{$\boldsymbol{x}$}_{t}^{is}$ because these $x$ are
stochastic. There is no equation for them; all $\mbox{$\boldsymbol{x}$}^{is}$
are possible but some are more likely than others. We also cannot replace
$\mbox{$\boldsymbol{x}$}_{t}^{is}$ with
$\mbox{$\mathbf{B}$}_{t}^{is}E[\mbox{$\boldsymbol{X}$}_{t-1}]+\mbox{$\mathbf{u}$}_{t}^{is}$
to force $\mbox{$\mathbf{B}$}_{t}^{is}$ and $\mbox{$\mathbf{u}$}^{is}$ to
appear in the $\boldsymbol{y}$ part of the likelihood. The reason is that
$\,\textup{{E}}[\mbox{$\boldsymbol{X}$}_{t}]$ and
$\,\textup{{E}}[\mbox{$\boldsymbol{X}$}_{t-1}]$ both appear in the likelihood
and we cannot hold both constant (as we must for the partial differentiation)
and at the same time change $\mbox{$\mathbf{B}$}_{t}^{is}$ or
$\mbox{$\mathbf{u}$}_{t}^{is}$ as we are doing when we differentiate with
respect to $\mbox{$\mathbf{B}$}_{t}^{is}$ or $\mbox{$\mathbf{u}$}_{t}^{is}$.
We cannot do that because $\mbox{$\boldsymbol{x}$}_{t}^{is}$ is constrained to
equal
$\mbox{$\mathbf{B}$}_{t}^{is}\mbox{$\boldsymbol{x}$}_{t-1}+\mbox{$\mathbf{u}$}_{t}^{is}$.
This effectively means that we cannot estimate $\mbox{$\mathbf{B}$}_{t}^{is}$
and $\mbox{$\mathbf{u}$}_{t}^{is}$ because we cannot rewrite
$\mbox{$\boldsymbol{x}$}_{t}^{is}$ in terms of only the model parameters. This
is specific to the EM algorithm because it is an iterative algorithm where the
expected $\mbox{$\boldsymbol{X}$}_{t}$ are computed with fixed parameters and
then the $\,\textup{{E}}[\mbox{$\boldsymbol{X}$}_{t}]$ are held fixed at their
expected values while the parameters are updated. In my $\mathbf{B}$ update
equation, I assume that $\mbox{$\mathbf{B}$}_{t}^{(0)}$ is fixed for all $t$.
Thus I circumvent the problem altogether for $\mathbf{B}$. For $\mathbf{u}$, I
assume that only the $\mbox{$\mathbf{u}$}^{is}$ elements are fixed.
### 7.3 Expected log-likelihood for degenerate models
The basic idea is to replace
$\mbox{$\mathbf{I}$}_{t}^{d}\,\textup{{E}}[\mbox{$\boldsymbol{X}$}_{t}]$ with a deterministic function involving only the state parameters (and $\,\textup{{E}}[\mbox{$\boldsymbol{X}$}_{t_{0}}]$ if $\mbox{$\boldsymbol{X}$}_{t_{0}}$ is stochastic). These appear in the
$\boldsymbol{y}$ part of the likelihood in
$\mbox{$\mathbf{Z}$}_{t}\mbox{$\boldsymbol{X}$}_{t}$ when the $d$ columns of
$\mbox{$\mathbf{Z}$}_{t}$ have non-zero values. They appear in the
$\boldsymbol{x}$ part of the likelihood in
$\mbox{$\mathbf{B}$}_{t}\mbox{$\boldsymbol{X}$}_{t-1}$ when the $d$ columns of
$\mbox{$\mathbf{B}$}_{t}$ have non-zero values. They do not appear in
$\mbox{$\boldsymbol{X}$}_{t}$ in the $\boldsymbol{x}$ part of the likelihood
because $\mathbb{Q}_{t}$ has all the non-$s$ columns and rows zeroed out
(non-$s$ includes both $d$ and $is$) and the element to the left of
$\mathbb{Q}_{t}$ is a row vector and to the right, it is a column vector. Thus
any $\mbox{$\boldsymbol{x}$}_{t}^{d}$ in $\mbox{$\boldsymbol{X}$}_{t}$ is zeroed out by $\mathbb{Q}_{t}$.
The first step is to pull out the
$\mbox{$\mathbf{I}$}_{t}^{d}\mbox{$\boldsymbol{X}$}_{t}$:
$\begin{split}\Psi^{+}&=\,\textup{{E}}[\log\mbox{$\mathbf{L}$}(\mbox{$\boldsymbol{Y}$}^{+},\mbox{$\boldsymbol{X}$}^{+};\Theta)]=\,\textup{{E}}[-\frac{1}{2}\sum_{1}^{T}\\\
&(\mbox{$\boldsymbol{Y}$}_{t}-\mbox{$\mathbf{Z}$}_{t}(\mbox{$\mathbf{I}$}_{m}-\mbox{$\mathbf{I}$}_{t}^{d})\mbox{$\boldsymbol{X}$}_{t}-\mbox{$\mathbf{Z}$}_{t}\mbox{$\mathbf{I}$}_{t}^{d}\mbox{$\boldsymbol{X}$}_{t}-\mbox{$\mathbf{a}$}_{t})^{\top}\mathbb{R}_{t}\\\
&(\mbox{$\boldsymbol{Y}$}_{t}-\mbox{$\mathbf{Z}$}_{t}(\mbox{$\mathbf{I}$}_{m}-\mbox{$\mathbf{I}$}_{t}^{d})\mbox{$\boldsymbol{X}$}_{t}-\mbox{$\mathbf{Z}$}_{t}\mbox{$\mathbf{I}$}_{t}^{d}\mbox{$\boldsymbol{X}$}_{t}-\mbox{$\mathbf{a}$}_{t})-\frac{1}{2}\sum_{1}^{T}\log|\mbox{$\mathbf{R}$}_{t}|\\\
&-\frac{1}{2}\sum_{t_{0}+1}^{T}(\mbox{$\boldsymbol{X}$}_{t}-\mbox{$\mathbf{B}$}_{t}((\mbox{$\mathbf{I}$}_{m}-\mbox{$\mathbf{I}$}_{t-1}^{d})\mbox{$\boldsymbol{X}$}_{t-1}+\mbox{$\mathbf{I}$}_{t-1}^{d}\mbox{$\boldsymbol{X}$}_{t-1})-\mbox{$\mathbf{u}$}_{t})^{\top}\mathbb{Q}_{t}\\\
&(\mbox{$\boldsymbol{X}$}_{t}-\mbox{$\mathbf{B}$}_{t}((\mbox{$\mathbf{I}$}_{m}-\mbox{$\mathbf{I}$}_{t-1}^{d})\mbox{$\boldsymbol{X}$}_{t-1}+\mbox{$\mathbf{I}$}_{t-1}^{d}\mbox{$\boldsymbol{X}$}_{t-1})-\mbox{$\mathbf{u}$}_{t})-\frac{1}{2}\sum_{t_{0}+1}^{T}\log|\mbox{$\mathbf{Q}$}_{t}|\\\
&-\frac{1}{2}(\mbox{$\boldsymbol{X}$}_{t_{0}}-\mbox{\boldmath$\xi$})^{\top}\mathbb{L}(\mbox{$\boldsymbol{X}$}_{t_{0}}-\mbox{\boldmath$\xi$})-\frac{1}{2}\log|\mbox{\boldmath$\Lambda$}|-\frac{n}{2}\log
2\pi\end{split}$ (186)
See section 7.2 for the definition of $\mbox{$\mathbf{I}$}_{t}^{d}$.
Next we replace $\mbox{$\mathbf{I}$}_{t}^{d}\mbox{$\boldsymbol{X}$}_{t}$ with
equation (182). $\mbox{$\boldsymbol{X}$}_{t_{0}}$ will appear in this function
instead of $\mbox{$\boldsymbol{x}$}_{t_{0}}$. I rewrite
$\mbox{$\mathbf{u}$}_{t}$ as
$\mbox{$\mathbf{f}$}_{u,t}+\mbox{$\mathbf{D}$}_{u,t}\boldsymbol{\upsilon}$.
This gives us the expected log-likelihood:
$\begin{split}\Psi^{+}&=\,\textup{{E}}[\log\mbox{$\mathbf{L}$}(\mbox{$\boldsymbol{Y}$}^{+},\mbox{$\boldsymbol{X}$}^{+};\Theta)]=\,\textup{{E}}[-\frac{1}{2}\sum_{1}^{T}\\\
&(\mbox{$\boldsymbol{Y}$}_{t}-\mbox{$\mathbf{Z}$}_{t}(\mbox{$\mathbf{I}$}_{m}-\mbox{$\mathbf{I}$}_{t}^{d})\mbox{$\boldsymbol{X}$}_{t}-\mbox{$\mathbf{Z}$}_{t}\mbox{$\mathbf{I}$}_{t}^{d}(\mbox{$\mathbf{B}$}^{*}_{t}\mbox{$\boldsymbol{X}$}_{t_{0}}+\mbox{$\mathbf{f}$}^{*}_{t}+\mbox{$\mathbf{D}$}^{*}_{t}\boldsymbol{\upsilon})-\mbox{$\mathbf{a}$}_{t})^{\top}\mathbb{R}_{t}\\\
&(\mbox{$\boldsymbol{Y}$}_{t}-\mbox{$\mathbf{Z}$}_{t}(\mbox{$\mathbf{I}$}_{m}-\mbox{$\mathbf{I}$}_{t}^{d})\mbox{$\boldsymbol{X}$}_{t}-\mbox{$\mathbf{Z}$}_{t}\mbox{$\mathbf{I}$}_{t}^{d}(\mbox{$\mathbf{B}$}^{*}_{t}\mbox{$\boldsymbol{X}$}_{t_{0}}+\mbox{$\mathbf{f}$}^{*}_{t}+\mbox{$\mathbf{D}$}^{*}_{t}\boldsymbol{\upsilon})-\mbox{$\mathbf{a}$}_{t})-\frac{1}{2}\sum_{1}^{T}\log|\mbox{$\mathbf{R}$}_{t}|\\\
&-\frac{1}{2}\sum_{t_{0}+1}^{T}(\mbox{$\boldsymbol{X}$}_{t}-\mbox{$\mathbf{B}$}_{t}((\mbox{$\mathbf{I}$}_{m}-\mbox{$\mathbf{I}$}_{t-1}^{d})\mbox{$\boldsymbol{X}$}_{t-1}+\mbox{$\mathbf{I}$}_{t-1}^{d}(\mbox{$\mathbf{B}$}^{*}_{t-1}\mbox{$\boldsymbol{X}$}_{t_{0}}+\mbox{$\mathbf{f}$}^{*}_{t-1}+\mbox{$\mathbf{D}$}^{*}_{t-1}\boldsymbol{\upsilon}))-\mbox{$\mathbf{f}$}_{u,t}-\mbox{$\mathbf{D}$}_{u,t}\boldsymbol{\upsilon})^{\top}\mathbb{Q}_{t}\\\
&(\mbox{$\boldsymbol{X}$}_{t}-\mbox{$\mathbf{B}$}_{t}((\mbox{$\mathbf{I}$}_{m}-\mbox{$\mathbf{I}$}_{t-1}^{d})\mbox{$\boldsymbol{X}$}_{t-1}+\mbox{$\mathbf{I}$}_{t-1}^{d}(\mbox{$\mathbf{B}$}^{*}_{t-1}\mbox{$\boldsymbol{X}$}_{t_{0}}+\mbox{$\mathbf{f}$}^{*}_{t-1}+\mbox{$\mathbf{D}$}^{*}_{t-1}\boldsymbol{\upsilon}))-\mbox{$\mathbf{f}$}_{u,t}-\mbox{$\mathbf{D}$}_{u,t}\boldsymbol{\upsilon})\\\
&-\frac{1}{2}\sum_{t_{0}+1}^{T}\log|\mbox{$\mathbf{Q}$}_{t}|-\frac{1}{2}(\mbox{$\boldsymbol{X}$}_{t_{0}}-\mbox{\boldmath$\xi$})^{\top}\mathbb{L}(\mbox{$\boldsymbol{X}$}_{t_{0}}-\mbox{\boldmath$\xi$})-\frac{1}{2}\log|\mbox{\boldmath$\Lambda$}|-\frac{n}{2}\log
2\pi\end{split}$ (187)
where $\mbox{$\mathbf{B}$}^{*}$, $\mbox{$\mathbf{f}$}^{*}$ and
$\mbox{$\mathbf{D}$}^{*}$ are defined in equation (182).
$\mathbb{R}_{t}=\Xi_{t}^{\top}\mbox{$\mathbf{R}$}_{t}^{-1}\Xi_{t}$ and
$\mathbb{Q}_{t}=\Phi_{t}^{\top}\mbox{$\mathbf{Q}$}_{t}^{-1}\Phi_{t}$,
$\mathbb{L}=\Pi^{\top}\mbox{\boldmath$\Lambda$}^{-1}\Pi$. When
$\mbox{$\boldsymbol{x}$}_{t_{0}}$ is treated as fixed, $\mathbb{L}=0$ and the
last line will drop out altogether; however, in general some rows of
$\mbox{$\boldsymbol{x}$}_{t_{0}}$ could be fixed and others stochastic.
We can see directly in equation (187) where $\boldsymbol{\upsilon}$ appears in
the expected log-likelihood. Where $\mathbf{p}$ appears is less obvious
because it depends on $\mathbf{F}$, which specifies which rows of
$\mbox{$\boldsymbol{x}$}_{t_{0}}$ are fixed. From equation (184),
$\,\textup{{E}}[\mbox{$\boldsymbol{X}$}_{t_{0}}]=(\mbox{$\mathbf{I}$}_{m}-\mbox{$\mathbf{I}$}_{\lambda}^{(0)})\,\textup{{E}}[\mbox{$\boldsymbol{X}$}_{t_{0}}]+\mbox{$\mathbf{I}$}_{\lambda}^{(0)}\mbox{\boldmath$\xi$}$
and
$\mbox{\boldmath$\xi$}=\mbox{$\mathbf{f}$}_{\xi}+\mbox{$\mathbf{D}$}_{\xi}\mbox{$\mathbf{p}$}$.
Thus where $\mathbf{p}$ appears in the expected log-likelihood depends on the
location of zero rows in $\mathbf{F}$ (and thus the zero rows in the indicator
matrix $\mbox{$\mathbf{I}$}_{\lambda}^{(0)}$). Recall that
$\,\textup{{E}}[\mbox{$\boldsymbol{X}$}_{t_{0}}]$ appearing in the expected
log-likelihood function is conditioned on the data so
$\,\textup{{E}}[\mbox{$\boldsymbol{X}$}_{t_{0}}]$ in $\Psi$ is not equal to
$\xi$ if $\mbox{$\boldsymbol{x}$}_{t_{0}}$ is stochastic.
The case where $\mbox{$\boldsymbol{x}$}_{t_{0}}$ is stochastic is a little odd
because conditioned on
$\mbox{$\boldsymbol{X}$}_{t_{0}}=\mbox{$\boldsymbol{x}$}_{t_{0}}$,
$\mbox{$\boldsymbol{x}$}_{t}^{d}$ is deterministic even though
$\mbox{$\boldsymbol{X}$}_{t_{0}}$ is a random variable in the model. Thus in the
model, $\mbox{$\boldsymbol{x}$}_{t}^{d}$ is a random variable through
$\mbox{$\boldsymbol{X}$}_{t_{0}}$. But when we do the partial differentiation
step for the EM algorithm, we hold $\boldsymbol{X}$ at its expected value thus
we are holding $\mbox{$\boldsymbol{X}$}_{t_{0}}$ at a specific value. We
cannot do that and change $\mathbf{u}$ at the same time because once we fix
$\mbox{$\boldsymbol{X}$}_{t_{0}}$ the $\mbox{$\boldsymbol{x}$}_{t}^{d}$ are
deterministic functions of $\mathbf{u}$.
### 7.4 Logical constraints to ensure a consistent system of equations
We need to ensure that the model remains internally consistent when
$\mathbf{R}$ or $\mathbf{Q}$ goes to zero and that we do not have an over- or
under-constrained system.
As an example of a solvable versus unsolvable model, consider the following.
$\mbox{$\mathbf{H}$}_{t}\mbox{$\mathbf{R}$}_{t}\mbox{$\mathbf{H}$}_{t}^{\top}=\begin{bmatrix}0&0\\\ 1&0\\\ 0&1\\\ 0&0\end{bmatrix}\begin{bmatrix}a&0\\\ 0&b\\\ \end{bmatrix}\begin{bmatrix}0&1&0&0\\\ 0&0&1&0\end{bmatrix}=\begin{bmatrix}0&0&0&0\\\ 0&a&0&0\\\ 0&0&b&0\\\ 0&0&0&0\\\ \end{bmatrix},$ (188)
then the following are bad versus ok $\mathbf{Z}$ matrices.
$\mbox{$\mathbf{Z}$}_{\text{bad}}=\begin{bmatrix}c&d&0\\\
z(2,1)&z(2,2)&z(2,3)\\\ z(3,1)&z(3,1)&z(3,1)\\\
c&d&0\end{bmatrix},\quad\mbox{$\mathbf{Z}$}_{\text{ok}}=\begin{bmatrix}c&0&0\\\
z(2,1)&z(2,2)&z(2,3)\\\ z(3,1)&z(3,1)&z(3,1)\\\ c&d\neq 0&0\end{bmatrix}$
(189)
Because $y_{t}(1)$ and $y_{t}(4)$ have zero observation variance, the first
$\mathbf{Z}$ reduces to this for $x_{t}(1)$ and $x_{t}(2)$:
$\begin{bmatrix}y_{t}(1)\\\
y_{t}(4)\end{bmatrix}=\begin{bmatrix}cx_{t}(1)+dx_{t}(2)\\\
cx_{t}(1)+dx_{t}(2)\end{bmatrix}$ (190)
and since $y_{t}(1)$ and $y_{t}(4)$ will not in general be equal, that is not solvable. The
second $\mathbf{Z}$ reduces to
$\begin{bmatrix}y_{t}(1)\\\ y_{t}(4)\end{bmatrix}=\begin{bmatrix}cx_{t}(1)\\\ cx_{t}(1)+dx_{t}(2)\end{bmatrix}$ (191)
and that is solvable for any $y_{t}(1)$ and $y_{t}(4)$ combination. Notice
that in the latter case, $x_{t}(1)$ and $x_{t}(2)$ are fully specified by
$y_{t}(1)$ and $y_{t}(4)$.
#### 7.4.1 Constraint 1: $\mathbf{Z}$ does not lead to an over-determined
observation process
We need to ensure that a $\mbox{$\boldsymbol{x}$}_{t}$ exists for all
$\mbox{$\boldsymbol{y}$}^{(0)}_{t}$ such that:
$\,\textup{{E}}[\mbox{$\boldsymbol{y}$}^{(0)}_{t}]=\mbox{$\mathbf{Z}$}^{(0)}\,\textup{{E}}[\mbox{$\boldsymbol{x}$}_{t}]+\mbox{$\mathbf{a}$}^{(0)}.$
If $\mbox{$\mathbf{Z}$}^{(0)}$ is invertible, such a
$\mbox{$\boldsymbol{x}$}_{t}$ certainly exists. But we do not require that
only one $\mbox{$\boldsymbol{x}$}_{t}$ exists, simply that at least one
exists. Thus the system can be under-constrained but not over-constrained. One
way to test for this is to use the singular value decomposition (SVD) of
$\mbox{$\mathbf{Z}$}^{(0)}$. If the number of singular values of
$\mbox{$\mathbf{Z}$}^{(0)}$ is less than the number of columns in
$\mathbf{Z}$, which is the number of $\boldsymbol{x}$ rows, then
$\mbox{$\mathbf{Z}$}^{(0)}$ specifies an over-constrained system
($y=Zx$ is the classic problem of solving a system of linear equations, standardly written $Ax=b$). Using the R language, you would test if the length of `svd(Z)$d` is less than `dim(Z)[2]`. If $\mbox{$\mathbf{Z}$}^{(0)}$ specifies an under-determined system, some of the singular values would be equal to 0 (within machine tolerance). It is possible
that $\mbox{$\mathbf{Z}$}^{(0)}$ could specify both an over- and under-
determined system at the same time. That is, the number of singular values
could be less than the number of columns in $\mbox{$\mathbf{Z}$}^{(0)}$ and
some of the singular values could be 0.
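Here is a minimal R sketch of this check (a hypothetical $\mbox{$\mathbf{Z}$}^{(0)}$, not MARSS code). It counts the singular values above a numerical tolerance, i.e. the rank, which is a variant of the length test above: a rank below the number of rows means some $\mbox{$\boldsymbol{y}$}^{(0)}$ have no exact solution (over-constrained), and a rank below the number of columns means solutions are not unique (under-constrained); this toy example is both at once.
```r
Z0 <- matrix(c(1, 2, 0,
               2, 4, 0,
               0, 0, 1,
               1, 2, 1), 4, 3, byrow = TRUE)
d   <- svd(Z0)$d
tol <- max(dim(Z0)) * .Machine$double.eps * max(d)
r   <- sum(d > tol)                # numerical rank

over.constrained  <- r < nrow(Z0)  # TRUE: some y^(0) cannot be matched exactly
under.constrained <- r < ncol(Z0)  # TRUE: columns 1 and 2 are collinear
```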
Doesn’t a $\mathbf{Z}$ with more rows than columns automatically specify an over-determined system? No. Consider this $\mathbf{Z}$:
$\begin{bmatrix}1&0\\\ 0&1\\\ 0&0\end{bmatrix}$ (192)
This $\mathbf{Z}$ is fine, although obviously the last row of $\boldsymbol{y}$
will not hold any information about the $\boldsymbol{x}$. But it could have
information about $\mathbf{R}$ and $\mathbf{a}$, which might be shared with
the other $\boldsymbol{y}$, so we don’t want to prevent the user from
specifying a $\mathbf{Z}$ like this.
#### 7.4.2 Constraint 2: the state processes are not over-constrained.
We also need to be concerned with the state process being over-constrained
when both $\mbox{$\mathbf{Q}$}=0$ and $\mbox{$\mathbf{R}$}=0$ because we can
have a situation where the constraint imposed by the observation process is at
odds with the constraint imposed by the state process. Here is an example:
$\begin{split}\mbox{$\boldsymbol{y}$}_{t}=\begin{bmatrix}1&0\\\
0&1\end{bmatrix}\begin{bmatrix}x_{1}\\\ x_{2}\end{bmatrix}_{t}\\\
\begin{bmatrix}x_{1}\\\ x_{2}\end{bmatrix}_{t}=\begin{bmatrix}1&0\\\
0&0\end{bmatrix}\begin{bmatrix}x_{1}\\\
x_{2}\end{bmatrix}_{t-1}+\begin{bmatrix}w_{1}\\\
0\end{bmatrix}_{t-1}\end{split}$ (193)
In this case, some of the $x$’s are deterministic, $\mbox{$\mathbf{Q}$}=0$ and
not linked through $\mathbf{B}$ to a stochastic $x$, and the corresponding $y$
are also deterministic. These cases will show up as errors in the Kalman
filter/smoother because in the Kalman gain equation (equation 139e), the term
$\mbox{$\mathbf{Z}$}_{t}\mbox{$\mathbf{V}$}_{t}^{t-1}\mbox{$\mathbf{Z}$}_{t}^{\top}$
will appear when $\mbox{$\mathbf{R}$}=0$. We need to make sure that 0 rows in $\mbox{$\mathbf{B}$}_{t}$, $\mbox{$\mathbf{Z}$}_{t}$ and $\mbox{$\mathbf{Q}$}_{t}$ do not line up in such a way that 0 rows/cols appear in $\mbox{$\mathbf{Z}$}_{t}\mbox{$\mathbf{V}$}_{t}^{t-1}\mbox{$\mathbf{Z}$}_{t}^{\top}$ at the same place as 0 rows/cols in $\mathbf{R}$. In MARSS, this is checked by
doing a pre-run of the Kalman smoother to see if it throws an error in the
Kalman gain step.
## 8 EM algorithm modifications for degenerate models
The $\mathbf{R}$, $\mathbf{Q}$, $\mathbf{Z}$, and $\mathbf{a}$ update
equations are largely unchanged. The real difficulties arise for the
$\mathbf{u}$ and $\xi$ update equations when $\mbox{$\mathbf{u}$}^{(0)}$ or
$\mbox{\boldmath$\xi$}^{(0)}$ are estimated. For $\mathbf{B}$, I do not have a
degenerate update equation, so I need to assume that
$\mbox{$\mathbf{B}$}^{(0)}$ elements are fixed (not estimated).
### 8.1 $\mathbf{R}$ and $\mathbf{Q}$ update equations
The constrained update equations for $\mathbf{Q}$ and $\mathbf{R}$ work fine
because their update equations do not involve any inverses of non-invertible
matrices. However if
$\mbox{$\mathbf{H}$}_{t}\mbox{$\mathbf{R}$}_{t}\mbox{$\mathbf{H}$}_{t}^{\top}$
is non-diagonal and there are missing values, then the $\mathbf{R}$ update
equation involves $\widetilde{\mbox{$\mathbf{y}$}}_{t}$. That will involve the
inverse of
$\mbox{$\mathbf{H}$}_{t}\mbox{$\mathbf{R}$}_{11}\mbox{$\mathbf{H}$}_{t}^{\top}$
(section 6.2), which might have zeros on the diagonal. In that case, use the
$\nabla_{t}$ modification that deals with such zeros (equation 146).
### 8.2 $\mathbf{Z}$ and $\mathbf{a}$ update equations
We need to deal with $\mathbf{Z}$ and $\mathbf{a}$ elements that appear in
rows where the diagonal of $\mbox{$\mathbf{R}$}=0$. These values will not
appear in the likelihood function unless they happen to also appear on
the rows where the diagonal of $\mathbf{R}$ is not 0 (because they are
constrained to be equal for example). However, in this case the
$\mbox{$\mathbf{Z}$}^{(0)}$ and $\mbox{$\mathbf{a}$}^{(0)}$ are logically
constrained by the equation
$\mbox{$\boldsymbol{y}$}_{t}^{(0)}=\mbox{$\mathbf{Z}$}_{t}^{(0)}\,\textup{{E}}[\mbox{$\boldsymbol{x}$}_{t}]+\mbox{$\mathbf{a}$}_{t}^{(0)}.$
Notice there is no $\mbox{$\mathbf{v}$}_{t}$ since $\mbox{$\mathbf{R}$}=0$ for these rows. The $\,\textup{{E}}[\mbox{$\boldsymbol{x}$}_{t}]$ is the ML estimate
of $\mbox{$\boldsymbol{x}$}_{t}$ computed in the Kalman smoother from the
parameter values at iteration $i$ of the EM algorithm, so there is no
information in this equation for $\mathbf{Z}$ and $\mathbf{a}$ at iteration
$i+1$. The nature of the smoother is that it will find the
$\mbox{$\boldsymbol{x}$}_{t}$ that is most consistent with the data. For
example if our $y=Zx+a$ equation looks like so
$\begin{bmatrix}0\\\ 2\\\ \end{bmatrix}=\begin{bmatrix}1\\\ 1\\\
\end{bmatrix}x,$ (194)
there is no $x$ that will solve this. However $x=1$ is the closest (lowest
squared error) and so this is the information in the data about $x$. The
Kalman filter will use this and the relative value of $\mathbf{Q}$ and
$\mathbf{R}$ to come up with the estimated $x$. In this case,
$\mbox{$\mathbf{R}$}=0$, so the information in the data will completely
determine $x$ and the smoother would return $x=1$ regardless of the process
equation.
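A quick R check of this example: the least-squares solution of equation (194) is indeed $x=1$.
```r
Z <- matrix(c(1, 1), 2, 1)
y <- c(0, 2)
qr.solve(Z, y)  # least-squares solution: x = 1
```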
The $\mathbf{a}$ and $\mathbf{Z}$ update equations require that
$\sum_{t=1}^{T}\mbox{$\mathbf{D}$}_{t,a}^{\top}\mathbb{R}_{t}\mbox{$\mathbf{D}$}_{t,a}$
and
$\sum_{t=1}^{T}\mbox{$\mathbf{D}$}_{t,z}^{\top}\mathbb{R}_{t}\mbox{$\mathbf{D}$}_{t,z}$
are invertible. If $\mbox{$\mathbf{Z}$}_{t}^{(0)}$ and
$\mbox{$\mathbf{a}$}_{t}^{(0)}$ are fixed, this will be satisfied; however, the requirement is a little less restrictive than that since it is possible that
$\mathbb{R}_{t}$ does not have zeros on the diagonal in the same places so
that the sum over $t$ could be invertible while the individual values at $t$
are not. The section on the summary of constraints has the test for this
constraint.
The update equations also involve $\widetilde{\mbox{$\mathbf{y}$}}_{t}$, and
the modified algorithm for $\widetilde{\mbox{$\mathbf{y}$}}_{t}$ when
$\mbox{$\mathbf{H}$}_{t}$ has all zero rows will be needed. Other than that,
the constrained update equations work (sections 5.2 and 5.7).
### 8.3 $\mathbf{u}$ update equation
Here I discuss the update for $\mathbf{u}$, or more specifically
$\boldsymbol{\upsilon}$ which appears in $\mathbf{u}$, when
$\mbox{$\mathbf{G}$}_{t}$ or $\mbox{$\mathbf{H}$}_{t}$ have zero rows. I
require that $\mbox{$\mathbf{u}$}^{is}_{t}$ is not estimated. All the
$\mbox{$\mathbf{u}$}^{is}_{t}$ are fixed values. The
$\mbox{$\mathbf{u}$}_{t}^{d}$ may be estimated or more specifically there may
be $\boldsymbol{\upsilon}$ in $\mbox{$\mathbf{u}$}_{t}^{d}$ that are
estimated;
$\mbox{$\mathbf{u}$}_{t}^{d}=\mbox{$\mathbf{f}$}_{u,t}^{d}+\mbox{$\mathbf{D}$}_{u,t}^{d}\boldsymbol{\upsilon}$.
The constrained $\mathbf{u}$ update equation with deterministic $\boldsymbol{x}$’s takes the following form. It is similar to the unconstrained update equation except that a part from the $\boldsymbol{y}$ part of the likelihood now appears:
$\begin{split}\boldsymbol{\upsilon}_{j+1}=\bigg{(}\sum_{t=1}^{T}(\Delta_{t,2}^{\top}\mathbb{R}_{t}\Delta_{t,2}+\Delta_{t,4}^{\top}\mathbb{Q}_{t}\Delta_{t,4})\bigg{)}^{-1}\times\bigg{(}\sum_{t=1}^{T}\big{(}\Delta_{t,2}^{\top}\mathbb{R}_{t}\Delta_{t,1}+\Delta_{t,4}^{\top}\mathbb{Q}_{t}\Delta_{t,3}\big{)}\bigg{)}\\\
\end{split}$ (195)
Conceptually, I think the approach described here is similar to the approach presented in section 4.2.5 of Harvey (1989), but it is more general
because it deals with the case where some $\mathbf{u}$ elements are shared
(linear functions of some set of shared values), possibly across deterministic
and stochastic elements. Also, I present it here within the context of the EM
algorithm, so solving for the maximum-likelihood $\mathbf{u}$ appears in the
context of maximizing $\Psi^{+}$ with respect to $\mathbf{u}$ for the update
equation at iteration $j+1$.
#### 8.3.1 $\mbox{$\mathbf{u}$}^{(0)}$ is not estimated
When $\mbox{$\mathbf{u}$}^{(0)}$ is not estimated (since it is set at some user-defined value via $\mbox{$\mathbf{D}$}_{u}$ and $\mbox{$\mathbf{f}$}_{u}$),
the part we are estimating, $\mbox{$\mathbf{u}$}^{+}$, only appears in the
$\boldsymbol{x}$ part of the likelihood. The update equation for $\mathbf{u}$
remains equation (95).
#### 8.3.2 $\mbox{$\mathbf{u}$}^{d}$ is estimated
The derivation of the update equation proceeds as usual. We need to take the
partial derivative of $\Psi^{+}$ (equation 187) holding everything constant
except $\boldsymbol{\upsilon}$, elements of which might appear in both
$\mbox{$\mathbf{u}$}_{t}^{d}$ and $\mbox{$\mathbf{u}$}_{t}^{s}$ (but not
$\mbox{$\mathbf{u}$}_{t}^{is}$ since I require that
$\mbox{$\mathbf{u}$}_{t}^{is}$ has no estimated elements).
The expected log-likelihood takes the following form, where $t_{0}$ is the
time where the initial state is defined ($t=0$ or $t=1$):
$\begin{split}\Psi^{+}=-\frac{1}{2}\sum_{1}^{T}(\Delta_{t,1}-\Delta_{t,2}\boldsymbol{\upsilon})^{\top}\mathbb{R}_{t}(\Delta_{t,1}-\Delta_{t,2}\boldsymbol{\upsilon})-\frac{1}{2}\sum_{1}^{T}\log|\mbox{$\mathbf{R}$}_{t}|\\\
-\frac{1}{2}\sum_{t_{0}+1}^{T}(\Delta_{t,3}-\Delta_{t,4}\boldsymbol{\upsilon})^{\top}\mathbb{Q}_{t}(\Delta_{t,3}-\Delta_{t,4}\boldsymbol{\upsilon})-\frac{1}{2}\sum_{t_{0}+1}^{T}\log|\mbox{$\mathbf{Q}$}_{t}|\\\
-\frac{1}{2}(\mbox{$\boldsymbol{X}$}_{t_{0}}-\mbox{\boldmath$\xi$})^{\top}\mathbb{L}(\mbox{$\boldsymbol{X}$}_{t_{0}}-\mbox{\boldmath$\xi$})-\frac{1}{2}\log|\mbox{\boldmath$\Lambda$}|-\frac{n}{2}\log
2\pi\\\ \end{split}$ (196)
$\mathbb{L}=\mbox{$\mathbf{F}$}^{\top}\mbox{\boldmath$\Lambda$}^{-1}\mbox{$\mathbf{F}$}$.
If $\mbox{$\boldsymbol{x}$}_{t_{0}}$ is treated as fixed, $\mathbf{F}$ is all
zero and the line with $\mathbb{L}$ drops out. If some but not all
$\mbox{$\boldsymbol{x}$}_{t_{0}}$ are treated as fixed, then only the
stochastic rows appear in the last line. In any case, the last line does not
contain $\boldsymbol{\upsilon}$, thus when we do the partial differentiation
with respect to $\boldsymbol{\upsilon}$, this line drops out.
The $\Delta$ terms are defined as:
$\begin{split}\Delta_{t,1}&=\widetilde{\mbox{$\mathbf{y}$}}_{t}-\mbox{$\mathbf{Z}$}_{t}(\mbox{$\mathbf{I}$}_{m}-\mbox{$\mathbf{I}$}_{t}^{d})\widetilde{\mbox{$\mathbf{x}$}}_{t}-\mbox{$\mathbf{Z}$}_{t}\mbox{$\mathbf{I}$}_{t}^{d}(\mbox{$\mathbf{B}$}^{*}_{t}\,\textup{{E}}[\mbox{$\boldsymbol{X}$}_{t_{0}}]+\mbox{$\mathbf{f}$}^{*}_{t})-\mbox{$\mathbf{a}$}_{t}\\\
\Delta_{t,2}&=\mbox{$\mathbf{Z}$}_{t}\mbox{$\mathbf{I}$}_{t}^{d}\mbox{$\mathbf{D}$}^{*}_{t}\\\
\Delta_{t_{0},3}&=0_{m\times 1}\\\
\Delta_{t,3}&=\widetilde{\mbox{$\mathbf{x}$}}_{t}-\mbox{$\mathbf{B}$}_{t}(\mbox{$\mathbf{I}$}_{m}-\mbox{$\mathbf{I}$}_{t-1}^{d})\widetilde{\mbox{$\mathbf{x}$}}_{t-1}-\mbox{$\mathbf{B}$}_{t}\mbox{$\mathbf{I}$}_{t-1}^{d}(\mbox{$\mathbf{B}$}^{*}_{t-1}\,\textup{{E}}[\mbox{$\boldsymbol{X}$}_{t_{0}}]+\mbox{$\mathbf{f}$}^{*}_{t-1})-\mbox{$\mathbf{f}$}_{t,u}\\\
\Delta_{t_{0},4}&=0_{m\times m}\mbox{$\mathbf{D}$}_{1,u}\\\
\Delta_{t,4}&=\mbox{$\mathbf{D}$}_{t,u}+\mbox{$\mathbf{B}$}_{t}\mbox{$\mathbf{I}$}_{t-1}^{d}\mbox{$\mathbf{D}$}^{*}_{t-1}\\\
\,\textup{{E}}[\mbox{$\boldsymbol{X}$}_{t_{0}}]&=((\mbox{$\mathbf{I}$}_{m}-\mbox{$\mathbf{I}$}_{\lambda}^{(0)})\widetilde{\mbox{$\boldsymbol{x}$}}_{t_{0}}+\mbox{$\mathbf{I}$}_{\lambda}^{(0)}\mbox{\boldmath$\xi$})\end{split}$
(197)
$\mbox{$\mathbf{I}$}^{d}_{t}$, $\mbox{$\mathbf{B}$}_{t}^{*}$, $\mbox{$\mathbf{f}$}_{t}^{*}$, and $\mbox{$\mathbf{D}$}_{t}^{*}$ are defined in equation (182). The values of these at $t_{0}$ are special so that the math works out. The expectation (E) has been subsumed into the $\Delta$s since
$\Delta_{2}$ and $\Delta_{4}$ do not involve $\boldsymbol{X}$ or
$\boldsymbol{Y}$, so terms like
$\mbox{$\boldsymbol{X}$}^{\top}\mbox{$\boldsymbol{X}$}$ never appear.
Take the derivative of this with respect to $\boldsymbol{\upsilon}$ and arrive
at:
$\begin{split}\boldsymbol{\upsilon}_{j+1}=\big{(}\sum_{t=1}^{T}\Delta_{t,4}^{\top}\mathbb{Q}_{t}\Delta_{t,4}+\sum_{t=1}^{T}\Delta_{t,2}^{\top}\mathbb{R}_{t}\Delta_{t,2}\big{)}^{-1}\times\bigg{(}\sum_{t=1}^{T}\Delta_{t,4}^{\top}\mathbb{Q}_{t}\Delta_{t,3}+\sum_{t=1}^{T}\Delta_{t,2}^{\top}\mathbb{R}_{t}\Delta_{t,1}\bigg{)}\end{split}$
(198)
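The structure of this update is an accumulation of crossproducts over $t$ followed by one linear solve. Here is a minimal R sketch of that structure, with random placeholder matrices standing in for the $\Delta$, $\mathbb{R}_{t}$ and $\mathbb{Q}_{t}$ terms defined in equation (197):
```r
set.seed(1)
TT <- 5; p <- 2                       # p = number of estimated upsilon elements
lhs <- matrix(0, p, p); rhs <- matrix(0, p, 1)
for (t in seq_len(TT)) {
  D1 <- matrix(rnorm(3), 3, 1);  D2 <- matrix(rnorm(3 * p), 3, p)
  D3 <- matrix(rnorm(4), 4, 1);  D4 <- matrix(rnorm(4 * p), 4, p)
  Rinv <- diag(3); Qinv <- diag(4)    # stand-ins for the R and Q parts of the likelihood
  lhs <- lhs + t(D2) %*% Rinv %*% D2 + t(D4) %*% Qinv %*% D4
  rhs <- rhs + t(D2) %*% Rinv %*% D1 + t(D4) %*% Qinv %*% D3
}
upsilon.new <- solve(lhs, rhs)        # the upsilon_{j+1} update
```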
### 8.4 $\xi$ update equation
#### 8.4.1 $\xi$ is stochastic
This means that none of the rows of $\mathbf{F}$ (in
$\mbox{$\mathbf{F}$}\lambda$) are zero, so
$\mbox{$\mathbf{I}$}_{\lambda}^{(0)}$ is all zero and the update equation
reduces to a constrained version of the classic $\xi$ update equation:
$\mbox{$\mathbf{p}$}_{j+1}=\big{(}\mbox{$\mathbf{D}$}_{\xi}^{\top}\mbox{\boldmath$\Lambda$}^{-1}\mbox{$\mathbf{D}$}_{\xi}\big{)}^{-1}\mbox{$\mathbf{D}$}_{\xi}^{\top}\mbox{\boldmath$\Lambda$}^{-1}(\,\textup{{E}}[\mbox{$\boldsymbol{X}$}_{t_{0}}]-\mbox{$\mathbf{f}$}_{\xi})$
(199)
#### 8.4.2 $\mbox{\boldmath$\xi$}^{(0)}$ is not estimated
When $\mbox{\boldmath$\xi$}^{(0)}$ is not estimated (because you fixed it as
some value), we do not need to take the partial derivative with respect to
$\mbox{\boldmath$\xi$}^{(0)}$ since we will not be estimating it. Thus the
update equation is unchanged from the constrained update equation.
#### 8.4.3 $\mbox{\boldmath$\xi$}^{(0)}$ is estimated
Using the same approach as for the $\mathbf{u}$ update equation, we take the
derivative of (187) with respect to $\mathbf{p}$ where
$\mbox{\boldmath$\xi$}=\mbox{$\mathbf{f}$}_{\xi}+\mbox{$\mathbf{D}$}_{\xi}\mbox{$\mathbf{p}$}$.
$\Psi^{+}$ will take the following form:
$\begin{split}\Psi^{+}&=\\\
&-\frac{1}{2}\sum_{t=1}^{T}(\Delta_{t,5}-\Delta_{t,6}\mbox{$\mathbf{p}$})^{\top}\mathbb{R}_{t}(\Delta_{t,5}-\Delta_{t,6}\mbox{$\mathbf{p}$})-\frac{1}{2}\sum_{1}^{T}\log|\mbox{$\mathbf{R}$}_{t}|\\\
&-\frac{1}{2}\sum_{t=1}^{T}(\Delta_{t,7}-\Delta_{t,8}\mbox{$\mathbf{p}$})^{\top}\mathbb{Q}_{t}(\Delta_{t,7}-\Delta_{t,8}\mbox{$\mathbf{p}$})-\frac{1}{2}\sum_{1}^{T}\log|\mbox{$\mathbf{Q}$}_{t}|\\\
&-\frac{1}{2}(\,\textup{{E}}[\mbox{$\boldsymbol{X}$}_{t_{0}}]-\mbox{$\mathbf{f}$}_{\xi}-\mbox{$\mathbf{D}$}_{\xi}\mbox{$\mathbf{p}$})^{\top}\mathbb{L}(\,\textup{{E}}[\mbox{$\boldsymbol{X}$}_{t_{0}}]-\mbox{$\mathbf{f}$}_{\xi}-\mbox{$\mathbf{D}$}_{\xi}\mbox{$\mathbf{p}$})-\frac{1}{2}\log|\mbox{\boldmath$\Lambda$}|\\\
&-\frac{n}{2}\log 2\pi\\\ \end{split}$ (200)
The $\Delta$’s are defined as follows using
$\,\textup{{E}}[\mbox{$\boldsymbol{X}$}_{t_{0}}]=(\mbox{$\mathbf{I}$}_{m}-\mbox{$\mathbf{I}$}_{l}^{(0)})\widetilde{\mbox{$\boldsymbol{x}$}}_{t_{0}}+\mbox{$\mathbf{I}$}_{l}^{(0)}\mbox{\boldmath$\xi$}$
where it appears in
$\mbox{$\mathbf{I}$}_{t}^{d}\,\textup{{E}}[\mbox{$\boldsymbol{X}$}_{t}]$.
$\begin{split}\Delta_{t,5}&=\widetilde{\mbox{$\mathbf{y}$}}_{t}-\mbox{$\mathbf{Z}$}_{t}(\mbox{$\mathbf{I}$}_{m}-\mbox{$\mathbf{I}$}_{t}^{d})\widetilde{\mbox{$\mathbf{x}$}}_{t}-\mbox{$\mathbf{Z}$}_{t}\mbox{$\mathbf{I}$}_{t}^{d}(\mbox{$\mathbf{B}$}^{*}_{t}((\mbox{$\mathbf{I}$}_{m}-\mbox{$\mathbf{I}$}_{\lambda}^{(0)})\widetilde{\mbox{$\boldsymbol{x}$}}_{t_{0}}+\mbox{$\mathbf{I}$}_{\lambda}^{(0)}\mbox{$\mathbf{f}$}_{\xi})+\mbox{$\mathbf{u}$}^{*}_{t})-\mbox{$\mathbf{a}$}_{t}\\\
\Delta_{t,6}&=\mbox{$\mathbf{Z}$}_{t}\mbox{$\mathbf{I}$}_{t}^{d}\mbox{$\mathbf{B}$}^{*}_{t}\mbox{$\mathbf{I}$}_{\lambda}^{(0)}\mbox{$\mathbf{D}$}_{\xi}\\\
\Delta_{t_{0},7}&=0_{m\times 1}\\\
\Delta_{t,7}&=\widetilde{\mbox{$\mathbf{x}$}}_{t}-\mbox{$\mathbf{B}$}_{t}(\mbox{$\mathbf{I}$}_{m}-\mbox{$\mathbf{I}$}_{t-1}^{d})\widetilde{\mbox{$\mathbf{x}$}}_{t-1}-\mbox{$\mathbf{B}$}_{t}\mbox{$\mathbf{I}$}_{t-1}^{d}(\mbox{$\mathbf{B}$}^{*}_{t-1}((\mbox{$\mathbf{I}$}_{m}-\mbox{$\mathbf{I}$}_{l}^{(0)})\widetilde{\mbox{$\boldsymbol{x}$}}_{t_{0}}+\mbox{$\mathbf{I}$}_{\lambda}^{(0)}\mbox{$\mathbf{f}$}_{\xi})+\mbox{$\mathbf{u}$}^{*}_{t-1})-\mbox{$\mathbf{u}$}_{t}\\\
\Delta_{t_{0},8}&=0_{m\times m}\mbox{$\mathbf{D}$}_{\xi}\\\
\Delta_{t,8}&=\mbox{$\mathbf{B}$}_{t}\mbox{$\mathbf{I}$}_{t-1}^{d}\mbox{$\mathbf{B}$}^{*}_{t-1}\mbox{$\mathbf{I}$}_{\lambda}^{(0)}\mbox{$\mathbf{D}$}_{\xi}\\\
\mbox{$\mathbf{u}$}^{*}_{t}=\mbox{$\mathbf{f}$}^{*}_{t}+\mbox{$\mathbf{D}$}^{*}_{t}\boldsymbol{\upsilon}\end{split}$
(201)
The expectation can be pulled inside the $\Delta$s since the $\Delta$s in
front of $\mathbf{p}$ do not involve $\boldsymbol{X}$ or $\boldsymbol{Y}$.
Take the derivative of this with respect to $\mathbf{p}$ and arrive at:
$\begin{split}\mbox{$\mathbf{p}$}_{j+1}&=\big{(}\sum_{t=1}^{T}\Delta_{t,8}^{\top}\mathbb{Q}_{t}\Delta_{t,8}+\sum_{t=1}^{T}\Delta_{t,6}^{\top}\mathbb{R}_{t}\Delta_{t,6}+\mbox{$\mathbf{D}$}_{\xi}^{\top}\mathbb{L}\mbox{$\mathbf{D}$}_{\xi}\big{)}^{-1}\times\\\
&\quad\bigg{(}\sum_{t=1}^{T}\Delta_{t,8}^{\top}\mathbb{Q}_{t}\Delta_{t,7}+\sum_{t=1}^{T}\Delta_{t,6}^{\top}\mathbb{R}_{t}\Delta_{t,5}+\mbox{$\mathbf{D}$}_{\xi}^{\top}\mathbb{L}(\,\textup{{E}}[\mbox{$\boldsymbol{X}$}_{t_{0}}]-\mbox{$\mathbf{f}$}_{\xi})\bigg{)}\end{split}$
(202)
#### 8.4.4 When $\mbox{$\mathbf{H}$}_{t}$ has 0 rows in addition to
$\mbox{$\mathbf{G}$}_{t}$
When $\mbox{$\mathbf{H}$}_{t}$ has all zero rows, some of the $\mathbf{p}$ or
$\boldsymbol{\upsilon}$ may be constrained by the model, but these constraints do
not appear in $\Psi^{+}$ since $\mathbb{R}_{t}$ zeros out those constraints.
For example, if $H_{t}$ is all zeros and
$\mbox{$\boldsymbol{x}$}_{1}\equiv\mbox{\boldmath$\xi$}$, then $\xi$ is
constrained to equal
$\mbox{$\mathbf{Z}$}^{-1}(\widetilde{\mbox{$\mathbf{y}$}}_{1}-\mbox{$\mathbf{a}$}_{1})$.
The model needs to be internally consistent and we need to be able to estimate
all the $\mathbf{p}$ and the $\boldsymbol{\upsilon}$. Rather than try to
estimate the correct $\mathbf{p}$ and $\boldsymbol{\upsilon}$ to ensure
internal consistency of the model with the data when some of the
$\mbox{$\mathbf{H}$}_{t}$ have 0 rows, I test by running the Kalman filter
with the degenerate variance modification (in particular the modification for
$\mathbf{F}$ with zero rows is critical) before starting the EM algorithm.
Then I test that
$\widetilde{\mbox{$\mathbf{y}$}}_{t}-\mbox{$\mathbf{Z}$}_{t}\widetilde{\mbox{$\mathbf{x}$}}_{t}-\mbox{$\mathbf{a}$}_{t}$
is all zeros. If it is not, within machine accuracy, then there is a problem.
This is reported and the algorithm stopped. In some cases, it is easy to
determine the correct $\xi$. For example, when $\mbox{$\mathbf{H}$}_{t}$ is
all zero rows, $t_{0}=1$ and there is no missing data at time $t=1$,
$\mbox{\boldmath$\xi$}=\mbox{$\mathbf{Z}$}^{*}(\mbox{$\boldsymbol{y}$}_{1}-\mbox{$\mathbf{a}$}_{1})$,
where $\mbox{$\mathbf{Z}$}^{*}$ is the pseudoinverse. One would want to use
the SVD pseudoinverse calculation in case $\mathbf{Z}$ leads to an under-
constrained system (some of the singular values of $\mathbf{Z}$ are 0).
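A minimal R sketch of that SVD pseudoinverse calculation (hypothetical `Z`, `y1`, `a1`; `pinv` is a helper defined here, not a MARSS function). Singular values below a tolerance are dropped so the calculation still works when $\mathbf{Z}$ leads to an under-constrained system:
```r
pinv <- function(A, tol = sqrt(.Machine$double.eps)) {
  s <- svd(A)
  keep <- s$d > tol * max(s$d)
  s$v[, keep, drop = FALSE] %*% diag(1 / s$d[keep], sum(keep)) %*% t(s$u[, keep, drop = FALSE])
}
Z  <- matrix(c(1, 0,
               0, 1,
               1, 1), 3, 2, byrow = TRUE)
a1 <- c(0, 0, 0)
y1 <- c(1, 2, 3)
xi <- pinv(Z) %*% (y1 - a1)  # c(1, 2)
```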
I also test that
$\big{(}\sum_{t=1}^{T}\Delta_{t,8}^{\top}\mathbb{Q}_{t}\Delta_{t,8}+\sum_{t=1}^{T}\Delta_{t,6}^{\top}\mathbb{R}_{t}\Delta_{t,6}+\mbox{$\mathbf{D}$}_{\xi}^{\top}\mathbb{L}\mbox{$\mathbf{D}$}_{\xi}\big{)}$
is invertible to ensure that all the $\mathbf{p}$ can be solved for, and I
test that
$\big{(}\sum_{t=1}^{T}\Delta_{t,4}^{\top}\mathbb{Q}_{t}\Delta_{t,4}+\sum_{t=1}^{T}\Delta_{t,2}^{\top}\mathbb{R}_{t}\Delta_{t,2}\big{)}$
is invertible so that all the $\boldsymbol{\upsilon}$ can be solved for. If errors are present, they should be apparent in iteration 1; they are reported and the EM algorithm is stopped.
### 8.5 $\mbox{$\mathbf{B}$}^{(0)}$ update equation for degenerate models
I do not have an update equation for $\mbox{$\mathbf{B}$}^{(0)}$ and for now,
I side-step this problem by requiring that any $\mbox{$\mathbf{B}$}^{(0)}$
terms are fixed.
## 9 Kalman filter and smoother modifications for degenerate models
### 9.1 Modifications due to degenerate $\mathbf{R}$ and $\mathbf{Q}$
[1/1/2012 note. These modifications mainly have to do with inverses that
appear in the Shumway and Stoffer’s presentation of the Kalman filter. Later I
want to switch to Koopman’s smoother algorithm which avoids these inverses
altogether.]
In principle, when either $\mbox{$\mathbf{G}$}_{t}\mbox{$\mathbf{Q}$}_{t}$ or
$\mbox{$\mathbf{H}$}_{t}\mbox{$\mathbf{R}$}_{t}$ has zero rows, the standard
Kalman filter/smoother equations would still work and provide the correct
state outputs and likelihood. In practice however errors will be generated
because under certain situations, one of the matrix inverses in the Kalman
filter/smoother equations will involve a matrix with a zero on the diagonal
and this will lead to the computer code throwing an error.
When $\mbox{$\mathbf{H}$}_{t}\mbox{$\mathbf{R}$}_{t}$ has zero rows, problems
arise in the Kalman update part of the Kalman filter. The Kalman gain is
$\mbox{$\mathbf{K}$}_{t}=\mbox{$\mathbf{V}$}_{t}^{t-1}(\mbox{$\mathbf{Z}$}_{t}^{*})^{\top}(\mbox{$\mathbf{Z}$}_{t}^{*}\mbox{$\mathbf{V}$}_{t}^{t-1}(\mbox{$\mathbf{Z}$}_{t}^{*})^{\top}+\mbox{$\mathbf{H}$}_{t}\mbox{$\mathbf{R}$}_{t}^{*}\mbox{$\mathbf{H}$}_{t}^{\top})^{-1}$
(203)
Here, $\mbox{$\mathbf{Z}$}_{t}^{*}$ is the missing values modified
$\mbox{$\mathbf{Z}$}_{t}$ matrix with the $i$-th rows zero-ed out if the
$i$-th element of $\mbox{$\boldsymbol{y}$}_{t}$ is missing (section 6.1,
equation 141). Thus if the $i$-th element of $\mbox{$\boldsymbol{y}$}_{t}$ is
missing and the $i$-th row of $\mbox{$\mathbf{H}$}_{t}$ is zero, the $(i,i)$
element of
$(\mbox{$\mathbf{Z}$}_{t}^{*}\mbox{$\mathbf{V}$}_{t}^{t-1}(\mbox{$\mathbf{Z}$}_{t}^{*})^{\top}+\mbox{$\mathbf{H}$}_{t}\mbox{$\mathbf{R}$}_{t}^{*}\mbox{$\mathbf{H}$}_{t}^{\top})$
will be zero also and one cannot take its inverse. In addition, if the initial
value $\mbox{$\boldsymbol{x}$}_{1}$ is treated as fixed but unknown then
$\mbox{$\mathbf{V}$}_{1}^{0}$ will be an $m\times m$ matrix of zeros. Again in
this situation
$(\mbox{$\mathbf{Z}$}_{t}^{*}\mbox{$\mathbf{V}$}_{t}^{t-1}(\mbox{$\mathbf{Z}$}_{t}^{*})^{\top}+\mbox{$\mathbf{H}$}_{t}\mbox{$\mathbf{R}$}_{t}^{*}\mbox{$\mathbf{H}$}_{t}^{\top})$
will have zeros at any $(i,i)$ elements where the $i$-th row of
$\mbox{$\mathbf{H}$}_{t}$ is also zero.
The first case, where zeros on the diagonal arise due to missing values in the
data, can be solved using the matrix which pulls out the rows and columns
corresponding to the non-missing values
($\mbox{\boldmath$\Omega$}_{t}^{(1)}$). Replace
$\big{(}\mbox{$\mathbf{Z}$}_{t}^{*}\mbox{$\mathbf{V}$}_{t}^{t-1}(\mbox{$\mathbf{Z}$}_{t}^{*})^{\top}+\mbox{$\mathbf{H}$}_{t}\mbox{$\mathbf{R}$}_{t}^{*}\mbox{$\mathbf{H}$}_{t}^{\top}\big{)}^{-1}$
in equation (203) with
$(\mbox{\boldmath$\Omega$}_{t}^{(1)})^{\top}\big{(}\mbox{\boldmath$\Omega$}_{t}^{(1)}(\mbox{$\mathbf{Z}$}_{t}^{*}\mbox{$\mathbf{V}$}_{t}^{t-1}(\mbox{$\mathbf{Z}$}_{t}^{*})^{\top}+\mbox{$\mathbf{H}$}_{t}\mbox{$\mathbf{R}$}_{t}^{*}\mbox{$\mathbf{H}$}_{t}^{\top})(\mbox{\boldmath$\Omega$}_{t}^{(1)})^{\top}\big{)}^{-1}\mbox{\boldmath$\Omega$}_{t}^{(1)}$
(204)
Wrapping in
$\mbox{\boldmath$\Omega$}_{t}^{(1)}(\mbox{\boldmath$\Omega$}_{t}^{(1)})^{\top}$
gets rid of all the zero rows/columns in
$\mbox{$\mathbf{Z}$}_{t}^{*}\mbox{$\mathbf{V}$}_{t}^{t-1}(\mbox{$\mathbf{Z}$}_{t}^{*})^{\top}+\mbox{$\mathbf{H}$}_{t}\mbox{$\mathbf{R}$}_{t}^{*}\mbox{$\mathbf{H}$}_{t}^{\top}$,
and the matrix is reassembled with the zero rows/columns reinserted by wrapping in
$(\mbox{\boldmath$\Omega$}_{t}^{(1)})^{\top}\mbox{\boldmath$\Omega$}_{t}^{(1)}$.
This works because $\mbox{$\mathbf{R}$}_{t}^{*}$ is the missing values modified $\mathbf{R}$ (section 1.3) and is block diagonal across the $i$ and non-$i$ rows/columns, and $\mbox{$\mathbf{Z}$}_{t}^{*}$ has the
$i$-columns zero-ed out. Thus removing the $i$ columns and rows before taking
the inverse has no effect on the product $\mbox{$\mathbf{Z}$}_{t}(...)^{-1}$.
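A minimal R sketch of this wrapping (hypothetical matrix; the logical vector `keep` plays the role of $\mbox{\boldmath$\Omega$}_{t}^{(1)}$): drop the zero rows/columns, invert the remaining block, and reinsert zero rows/columns in the result.
```r
A <- matrix(c(2, 0, 1,
              0, 0, 0,
              1, 0, 3), 3, 3, byrow = TRUE)  # row/col 2 is zero (missing y row with a zero H row)
keep <- !(apply(A == 0, 1, all) & apply(A == 0, 2, all))
Ainv <- matrix(0, nrow(A), ncol(A))
Ainv[keep, keep] <- solve(A[keep, keep])     # invert only the non-zero block
```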
When $\mbox{$\mathbf{V}$}_{1}^{0}=\mathbf{0}$, set
$\mbox{$\mathbf{K}$}_{1}=\mathbf{0}$ without computing the inverse (see
equation 203 where $\mbox{$\mathbf{V}$}_{1}^{0}$ appears on the left).
There is also a numerical issue to deal with. When the $i$-th row of
$\mbox{$\mathbf{H}$}_{t}$ is zero, some of the elements of
$\mbox{$\boldsymbol{x}$}_{t}$ may be completely specified (fully known) given
$\mbox{$\boldsymbol{y}$}_{t}$. Let’s call these fully known elements of
$\mbox{$\boldsymbol{x}$}_{t}$, the $k$-th elements. In this case, the $k$-th
row and column of $\mbox{$\mathbf{V}$}_{t}^{t}$ must be zero because given
$y_{t}(i)$, $x_{t}(k)$ is known (is fixed) and its variance,
$\mbox{$\mathbf{V}$}_{t}^{t}(k,k)$, is zero. Because $\mbox{$\mathbf{K}$}_{t}$
is computed using a numerical estimate of the inverse, the standard
$\mbox{$\mathbf{V}$}_{t}^{t}$ update equation (which uses
$\mbox{$\mathbf{K}$}_{t}$) will cause these elements to be close to zero but
not precisely zero, and they may even be slightly negative on the diagonal.
This will cause serious problems when the Kalman filter output is passed on to
the EM algorithm. Thus after $\mbox{$\mathbf{V}$}_{t}^{t}$ is computed using
the normal Kalman update equation, we will want to explicitly zero out the $k$
rows and columns in the filter.
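In R this explicit zeroing step is simply (hypothetical `Vtt` and index set `k`):
```r
Vtt <- matrix(c(1.0,   2e-17,  0.5,
                2e-17, -3e-17, 1e-17,
                0.5,   1e-17,  2.0), 3, 3, byrow = TRUE)
k <- 2           # x_t(2) is fully known given y_t, so its variance must be exactly zero
Vtt[k, ] <- 0
Vtt[, k] <- 0
```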
When $\mbox{$\mathbf{G}$}_{t}$ has zero rows, then we might also have similar
numerical errors in $\mathbf{J}$ in the Kalman smoother. The $\mathbf{J}$
equation is
$\begin{split}\mbox{$\mathbf{J}$}_{t}&=\mbox{$\mathbf{V}$}_{t-1}^{t-1}\mbox{$\mathbf{B}$}_{t}^{\top}(\mbox{$\mathbf{V}$}_{t}^{t-1})^{-1}\\\
&\text{where
}\mbox{$\mathbf{V}$}_{t}^{t-1}=\mbox{$\mathbf{B}$}_{t}\mbox{$\mathbf{V}$}_{t-1}^{t-1}\mbox{$\mathbf{B}$}_{t}^{\top}+\mbox{$\mathbf{G}$}_{t}\mbox{$\mathbf{Q}$}_{t}\mbox{$\mathbf{G}$}_{t}^{\top}\end{split}$
(205)
If there are zeros on the diagonals of ($\Lambda$ and/or
$\mbox{$\mathbf{B}$}_{t}$) and zero rows in $\mbox{$\mathbf{G}$}_{t}$ and
these zeros line up, then if the $\mbox{$\mathbf{B}$}_{t}^{(0)}$ and $\mbox{$\mathbf{B}$}_{t}^{(1)}$ elements in $\mbox{$\mathbf{B}$}_{t}$ are blocks (this means the following: let the rows where the diagonal elements in $\mathbf{Q}$ equal zero be denoted $i$ and the rows where there are non-zero diagonals be denoted $j$; the $\mbox{$\mathbf{B}$}_{t}^{(0)}$ elements are the $\mbox{$\mathbf{B}$}_{t}$ elements where both row and column are in $i$, and the $\mbox{$\mathbf{B}$}_{t}^{(1)}$ elements are the $\mathbf{B}$ elements where both row and column are in $j$; if the $\mbox{$\mathbf{B}$}_{t}^{(0)}$ and $\mbox{$\mathbf{B}$}_{t}^{(1)}$ elements in $\mathbf{B}$ are blocks, all the $\mbox{$\mathbf{B}$}_{t}(i,j)$ are 0 and no deterministic components interact with the stochastic components), there will be zeros on the diagonal of $\mbox{$\mathbf{V}$}_{t}^{t}$. Thus
there will be zeros on the diagonal of $\mbox{$\mathbf{V}$}_{t}^{t-1}$ and it
cannot be inverted. In this case, the corresponding elements of
$\mbox{$\mathbf{V}$}_{t}^{T}$ need to be zero since what’s happening is that
those elements are deterministic and thus have 0 variance.
We want to catch these zero variances in $\mathbf{V}_{t}^{t-1}$ so that we can take the inverse. Note that this can only happen when there are zeros on the diagonal of $\mathbf{G}_{t}\mathbf{Q}_{t}\mathbf{G}_{t}^{\top}$, since $\mathbf{B}_{t}\mathbf{V}_{t-1}^{t-1}\mathbf{B}_{t}^{\top}$ can never be negative on the diagonal: $\mathbf{V}_{t-1}^{t-1}$ is positive semi-definite, so $\mathbf{B}_{t}\mathbf{V}_{t-1}^{t-1}\mathbf{B}_{t}^{\top}$ is positive semi-definite as well. The basic idea is the same as above. We replace $(\mathbf{V}_{t}^{t-1})^{-1}$ with:
$(\mbox{\boldmath$\Omega$}_{Vt}^{+})^{\top}\big{(}\mbox{\boldmath$\Omega$}_{Vt}^{+}(\mbox{$\mathbf{V}$}_{t}^{t-1})(\mbox{\boldmath$\Omega$}_{Vt}^{+})^{\top}\big{)}^{-1}\mbox{\boldmath$\Omega$}_{Vt}^{+}$
(206)
where $\mbox{\boldmath$\Omega$}_{Vt}^{+}$ is a matrix that removes the zero-variance rows of $\mathbf{V}_{t}^{t-1}$, keeping only the rows with positive variance, analogous to $\mbox{\boldmath$\Omega$}_{t}^{(1)}$.
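The following is a minimal R sketch of this pruned inverse (equation 206) and its use in the $\mathbf{J}_{t}$ equation (205). The variable names are illustrative (`Vtt1` is $\mathbf{V}_{t}^{t-1}$, `Vtt1.prev` is $\mathbf{V}_{t-1}^{t-1}$, `Bt` is $\mathbf{B}_{t}$); this is a sketch of the idea, not the MARSS package code:

```r
# Invert V_t^{t-1} only over its positive-variance rows/columns and embed the
# result back into the full dimension (equation 206).
pruned.solve <- function(V) {
  keep <- which(diag(V) > 0)                          # rows/cols with positive variance
  Omega.plus <- diag(1, nrow(V))[keep, , drop = FALSE]
  t(Omega.plus) %*% solve(Omega.plus %*% V %*% t(Omega.plus)) %*% Omega.plus
}
Jt <- Vtt1.prev %*% t(Bt) %*% pruned.solve(Vtt1)      # equation 205 with the pruned inverse
```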
### 9.2 Modifications due to fixed initial states
When the initial state of $\boldsymbol{x}$ is fixed, it is a bit like $\mbox{\boldmath$\Lambda$}=0$, although $\Lambda$ does not actually appear in the model and $\xi$ has a different interpretation.
When the initial state of $\boldsymbol{x}$ is treated as stochastic, then if
$t_{0}=0$, $\xi$ is the expected value of $\mbox{$\boldsymbol{x}$}_{0}$
conditioned on no data. In the Kalman filter this means
$\mbox{$\boldsymbol{x}$}_{0}^{0}=\mbox{\boldmath$\xi$}$ and
$\mbox{$\mathbf{V}$}_{0}^{0}=\mbox{\boldmath$\Lambda$}$; in words, the
expected value of $\mbox{$\boldsymbol{x}$}_{0}$ conditioned on
$\mbox{$\boldsymbol{y}$}_{0}$ is $\xi$ and the variance of
$\mbox{$\boldsymbol{x}$}_{0}^{0}$ conditioned on $\mbox{$\boldsymbol{y}$}_{0}$
is $\Lambda$. When $t_{0}=1$, then $\xi$ is the expected value of
$\mbox{$\boldsymbol{x}$}_{1}$ conditioned on no data. In the Kalman filter
this means $\mbox{$\boldsymbol{x}$}_{1}^{0}=\mbox{\boldmath$\xi$}$ and
$\mbox{$\mathbf{V}$}_{1}^{0}=\mbox{\boldmath$\Lambda$}$. Thus where $\xi$ and $\Lambda$ appear in the Kalman filter equations differs depending on $t_{0}$: they enter as the $\mbox{$\boldsymbol{x}$}_{t}^{t}$ and $\mbox{$\mathbf{V}$}_{t}^{t}$ initial condition when $t_{0}=0$, versus the $\mbox{$\boldsymbol{x}$}_{t}^{t-1}$ and $\mbox{$\mathbf{V}$}_{t}^{t-1}$ initial condition when $t_{0}=1$.
When some or all of the $\mbox{$\boldsymbol{x}$}_{t_{0}}$ are fixed, denoted $\mbox{$\mathbf{I}$}_{\lambda}^{(0)}\mbox{$\boldsymbol{x}$}_{t_{0}}$, the fixed values are not random variables. While, technically speaking, the expected value of a fixed value does not exist, we can think of it as a random variable with a probability density function with all its weight on the fixed value. Thus
$\mbox{$\mathbf{I}$}_{\lambda}^{(0)}\,\textup{{E}}[\mbox{$\boldsymbol{x}$}_{t_{0}}]=\mbox{\boldmath$\xi$}$
regardless of the data. The data have no information for
$\mbox{$\mathbf{I}$}_{\lambda}^{(0)}\mbox{$\boldsymbol{x}$}_{t_{0}}$ since we
fix $\mbox{$\mathbf{I}$}_{\lambda}^{(0)}\mbox{$\boldsymbol{x}$}_{t_{0}}$ at
$\mbox{$\mathbf{I}$}_{\lambda}^{(0)}\mbox{\boldmath$\xi$}$. If $t_{0}=0$, we
initialize the Kalman filter as usual with
$\mbox{$\boldsymbol{x}$}_{0}^{0}=\mbox{\boldmath$\xi$}$ and
$\mbox{$\mathbf{V}$}_{0}^{0}=\mbox{$\mathbf{F}$}\mbox{\boldmath$\Lambda$}\mbox{$\mathbf{F}$}^{\top}$,
where the fixed $\mbox{$\boldsymbol{x}$}_{t_{0}}$ rows correspond to the zero
row/columns in
$\mbox{$\mathbf{F}$}\mbox{\boldmath$\Lambda$}\mbox{$\mathbf{F}$}^{\top}$. The
Kalman filter will return the correct expectations even when some of the
diagonals of
$\mbox{$\mathbf{H}$}\mbox{$\mathbf{R}$}\mbox{$\mathbf{H}$}^{\top}$ or
$\mbox{$\mathbf{G}$}\mbox{$\mathbf{Q}$}\mbox{$\mathbf{G}$}^{\top}$ are 0, with the constraint that there are no purely deterministic elements in the model (i.e., elements with no error terms from either $\mathbf{R}$ or $\mathbf{Q}$).
When $t_{0}=1$, $\mbox{$\mathbf{I}$}_{\lambda}^{(0)}\mbox{$\boldsymbol{x}$}_{1}^{0}=\mbox{$\mathbf{I}$}_{\lambda}^{(0)}\mbox{$\boldsymbol{x}$}_{1}^{1}=\mbox{$\mathbf{I}$}_{\lambda}^{(0)}\mbox{\boldmath$\xi$}$ regardless of the data, and $\mbox{$\mathbf{V}$}_{1}^{0}=\mbox{$\mathbf{V}$}_{1}^{1}=\mbox{$\mathbf{F}$}\mbox{\boldmath$\Lambda$}\mbox{$\mathbf{F}$}^{\top}$, where the fixed rows of $\mbox{$\boldsymbol{x}$}_{1}$ correspond to the 0 row/columns in $\mbox{$\mathbf{F}$}\mbox{\boldmath$\Lambda$}\mbox{$\mathbf{F}$}^{\top}$. We also set $\mbox{$\mathbf{I}$}_{\lambda}^{(0)}\mbox{$\mathbf{K}$}_{1}$, meaning the rows of $\mbox{$\mathbf{K}$}_{1}$ corresponding to the fixed rows of $\mbox{$\boldsymbol{x}$}_{1}$, to all zero because $\mbox{$\mathbf{K}$}_{1}$ is the information in $\mbox{$\boldsymbol{y}$}_{1}$ regarding $\mbox{$\boldsymbol{x}$}_{1}$ and there is no information in the data regarding the values of $\mbox{$\boldsymbol{x}$}_{1}$ that are fixed to equal $\mbox{$\mathbf{I}$}_{\lambda}^{(0)}\mbox{\boldmath$\xi$}$.
With $\mbox{$\mathbf{V}$}_{1}^{1}$, $\mbox{$\boldsymbol{x}$}_{1}^{1}$ and
$\mbox{$\mathbf{K}$}_{1}$ set to their correct initial values, the normal
Kalman filter equations will work fine. However it is possible for the data at
$t=1$ to be inconsistent with the model if the rows of
$\mbox{$\boldsymbol{y}$}_{1}$ corresponding to any zero row/columns in
$\mbox{$\mathbf{Z}$}_{1}\mbox{$\mathbf{F}$}\mbox{\boldmath$\Lambda$}\mbox{$\mathbf{F}$}^{\top}\mbox{$\mathbf{Z}$}_{1}^{\top}+\mbox{$\mathbf{H}$}_{1}\mbox{$\mathbf{R}$}_{1}\mbox{$\mathbf{H}$}_{1}^{\top}$
are not equal to
$\mbox{$\mathbf{Z}$}_{1}\mbox{\boldmath$\xi$}+\mbox{$\mathbf{a}$}_{1}$. Here is a trivial example: let the model be $x_{t}=x_{t-1}+w_{t}$, $y_{t}=x_{t}$, $x_{1}=1$. Then if $y_{1}$ is anything except 1, the model is impossible. Technically, the likelihood of $x_{1}$ conditioned on $Y_{1}=y_{1}$ does not exist since neither $x_{1}$ nor $y_{1}$ is a realization of a random variable (both are fixed), so when the likelihood is computed using the innovations form of the likelihood, the $t=1$ term does not appear, at least for those $\mbox{$\boldsymbol{y}$}_{1}$ corresponding to any zero row/columns in
$\mbox{$\mathbf{Z}$}_{1}\mbox{$\mathbf{F}$}\mbox{\boldmath$\Lambda$}\mbox{$\mathbf{F}$}^{\top}\mbox{$\mathbf{Z}$}_{1}^{\top}+\mbox{$\mathbf{H}$}_{1}\mbox{$\mathbf{R}$}_{1}\mbox{$\mathbf{H}$}_{1}^{\top}$.
Thus these internal inconsistencies would neither provoke an error nor cause
Inf to be returned for the likelihood. In the MARSS package, the Kalman filter
has been modified to return LL=Inf and an error.
## 10 Summary of requirements for degenerate models
The update equations for the different parameters are discussed in their own subsections; here I summarize the constraints that are scattered throughout those subsections. These requirements are coded into the function MARSSkemcheck() in the MARSS package, but some tests must be repeated in the function degen.test(), which tests whether any of the $\mathbf{R}$ or $\mathbf{Q}$ diagonals can be set to zero if it appears they are going to zero. A model that is allowed when the $\mathbf{R}$ and $\mathbf{Q}$ diagonals are non-zero might be disallowed if $\mathbf{R}$ or $\mathbf{Q}$ diagonals were set to zero; degen.test() does this check.
* •
$(\mbox{$\mathbf{I}$}_{m}\otimes\mbox{$\mathbf{I}$}_{r}^{(0)}\mbox{$\mathbf{Z}$}_{t}\mbox{$\mathbf{I}$}_{q}^{(0)})\mbox{$\mathbf{D}$}_{t,b}$ is all zeros. If there is an all-zero row in $\mbox{$\mathbf{H}$}_{t}$ and it is linked (through $\mathbf{Z}$) to an all-zero row in $\mbox{$\mathbf{G}$}_{t}$, then the corresponding $\mbox{$\mathbf{B}$}_{t}$ elements are fixed instead of estimated. Corresponding $\mathbf{B}$ rows means those rows in $\mathbf{B}$ where there is a non-zero column in $\mathbf{Z}$. We need $\mbox{$\mathbf{I}$}_{r}^{(0)}\mbox{$\mathbf{Z}$}_{t}\mbox{$\mathbf{I}$}_{q}^{(0)}\mbox{$\mathbf{B}$}_{t}$ to only specify fixed $\mbox{$\mathbf{B}$}_{t}$ elements, which means $\,\textup{{vec}}(\mbox{$\mathbf{I}$}_{r}^{(0)}\mbox{$\mathbf{Z}$}_{t}\mbox{$\mathbf{I}$}_{q}^{(0)}\mbox{$\mathbf{B}$}_{t}\mbox{$\mathbf{I}$}_{m})$ only specifies fixed values. This in turn leads to the condition above. MARSSkemcheck()
* •
$(\mbox{$\mathbf{I}$}_{1}\otimes\mbox{$\mathbf{I}$}_{r}^{(0)}\mbox{$\mathbf{Z}$}_{t}\mbox{$\mathbf{I}$}_{q}^{(0)})\mbox{$\mathbf{D}$}_{t,u}$ is all zeros; if there is an all-zero row in $\mbox{$\mathbf{H}$}_{t}$ and it is linked (through $\mbox{$\mathbf{Z}$}_{t}$) to an all-zero row in $\mbox{$\mathbf{G}$}_{t}$, then the corresponding $\mbox{$\mathbf{u}$}_{t}$ elements are fixed instead of estimated. MARSSkemcheck()
* •
$(\mbox{$\mathbf{I}$}_{m}\otimes\mbox{$\mathbf{I}$}_{r}^{(0)})\mbox{$\mathbf{D}$}_{t,z}$ is all zeros; if $y$ has no observation error, then the corresponding $\mbox{$\mathbf{Z}$}_{t}$ rows are fixed values. $(\mbox{$\mathbf{I}$}_{m}\otimes\mbox{$\mathbf{I}$}_{r}^{(0)})$ is a diagonal matrix with 1s for the rows of $\mbox{$\mathbf{D}$}_{t,z}$ that correspond to elements of $\mbox{$\mathbf{Z}$}_{t}$ on the $R=0$ rows. MARSSkemcheck()
* •
$(\mbox{$\mathbf{I}$}_{1}\otimes\mbox{$\mathbf{I}$}_{r}^{(0)})\mbox{$\mathbf{D}$}_{t,a}$
is all zeros; if $y$ has no observation error, then the corresponding
$\mbox{$\mathbf{a}$}_{t}$ rows are fixed values. MARSSkemcheck()
* •
$(\mbox{$\mathbf{I}$}_{m}\otimes\mbox{$\mathbf{I}$}_{q}^{(0)})\mbox{$\mathbf{D}$}_{t,b}$ is all zeros. This means $\mbox{$\mathbf{B}$}^{(0)}$ (the whole row) is fixed. While $\mbox{$\mathbf{B}$}^{d}$ could potentially be estimated, my derivation assumes it is not. MARSSkemcheck()
* •
$(\mbox{$\mathbf{I}$}_{1}\otimes\mbox{$\mathbf{I}$}_{q,t>m}^{is})\mbox{$\mathbf{D}$}_{t,u}$
is all zeros. This means $\mbox{$\mathbf{u}$}^{is}$ is fixed. Here $is$ is
defined as those rows that are indirectly stochastic at time $m$, where $m$ is
the dimension of $\mathbf{B}$; it can take up to $m$ steps for the $is$ rows
to be connected to the $s$ rows through $\mathbf{B}$. MARSSkemcheck()
* •
If $\mbox{$\mathbf{u}$}^{(0)}$ or $\mbox{\boldmath$\xi$}^{(0)}$ are being
estimated, then the adjacency matrices defined by $\mbox{$\mathbf{B}$}_{t}\neq
0$ are not time-varying. This means that the locations of the 0s in
$\mbox{$\mathbf{B}$}_{t}$ are not changing over time.
$\mbox{$\mathbf{B}$}_{t}$ however may be time-varying. MARSSkemcheck()
* •
$\mbox{$\mathbf{I}$}_{q}^{(0)}$ and $\mbox{$\mathbf{I}$}_{r}^{(0)}$ are time
invariant (an imposed assumption). This means that the locations of the 0 rows
in $\mbox{$\mathbf{G}$}_{t}$ and $\mbox{$\mathbf{H}$}_{t}$ (and thus in
$\mbox{$\mathbf{w}$}_{t}$ and $\mbox{$\mathbf{v}$}_{t}$) are not changing
through time. It would be easy enough to allow $\mbox{$\mathbf{I}$}_{r}^{(0)}$
to be time varying, but to make my derivation easier, I assume it is time
constant.
* •
$\mbox{$\mathbf{Z}$}_{t}^{(0)}$ in $\,\textup{{E}}[\mbox{$\boldsymbol{Y}$}_{t}^{(0)}]=\mbox{$\mathbf{Z}$}_{t}^{(0)}\,\textup{{E}}[\mbox{$\boldsymbol{X}$}_{t}]+\mbox{$\mathbf{a}$}_{t}^{(0)}$ does not imply an over-determined system of equations. Because the $\mbox{$\mathbf{v}$}_{t}$ rows are zero for the ${(0)}$ rows of $\boldsymbol{y}$, it must be possible for this equality to hold. This means that $\mbox{$\mathbf{Z}$}_{t}^{(0)}$ cannot specify an over-determined system, although an under-determined system is ok. MARSSkemcheck() checks this by examining the singular values of $\mbox{$\mathbf{Z}$}_{t}^{(0)}$ returned from the singular value decomposition (svd). The number of singular values must not be less than $m$ (the number of columns of $\mathbf{Z}$); if it is less than $m$, the equation system is over-determined. Singular values equal to 0 are ok; this means the system is under-determined given only the observation equation, but that is ok because we also have the state equation, which will determine the undetermined states, and the Kalman smoother will presumably throw an error if the state process is under-determined (if that would even make sense…).
* •
The state process cannot be over-determined via constraints imposed by the deterministic observation process ($\mbox{$\mathbf{R}$}=0$) together with the deterministic state process ($\mbox{$\mathbf{Q}$}=0$). If this is the case, the Kalman gain equation (in the Kalman filter) will throw an error. This is checked in MARSS() via a call to MARSSkf() before the fitting call; degen.test() in MARSSkem() will also test via a MARSSkf() call if some $\mathbf{R}$ or $\mathbf{Q}$ diagonals are attempted to be set to 0. If $\mathbf{B}$ or $\mathbf{Z}$ changes during the kem or optim iterations such that this constraint no longer holds, the algorithm will exit with an error message.
* •
The locations of the 0s in $\mathbf{B}$ are time-invariant. $\mathbf{B}$ can be time-varying, but not the locations of the 0s. Also, I want $\mathbf{B}$ to be such that once a row becomes indirectly stochastic it stays that way. For example, if $\mbox{$\mathbf{B}$}=\bigl{[}\begin{smallmatrix}0&1\\\ 1&0\end{smallmatrix}\bigr{]}$, then row 2 flips back and forth between being indirectly stochastic and deterministic.
The dimension of the identity matrices in the above constraints is given by
the subscript on $\mathbf{I}$ except when it is implicit.
## 11 Implementation comments
The EM algorithm is a hill-climbing algorithm, and like all hill-climbing algorithms it can get stuck on local maxima. There are a number of approaches to doing a pre-search of the initial-conditions space, but a brute-force random Monte Carlo search appears to work well (Biernacki et al., 2003). It is slow, but normally sufficient. In my experience, Monte Carlo initial-conditions searches become important as the fraction of missing data in the data set increases. Certainly an initial-conditions search should be done before reporting final estimates for an analysis. However, in our studies on the distributional properties of parameter estimates (“our” and “we” in this section mean work and papers by E. E. Holmes and E. J. Ward), we rarely found it necessary to do an initial-conditions search.
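A minimal sketch of such a random-start search, assuming the MARSS() interface with an `inits` argument (`dat` is the $n\times T$ data matrix; the `inits` element names below, `U` and `x0`, are illustrative and should be matched to the parameters and dimensions of your model); this is not the package's built-in search routine:

```r
library(MARSS)
best <- NULL
for (i in 1:25) {
  # draw random starting values for the estimated parameters
  inits <- list(U = matrix(rnorm(1), 1, 1), x0 = matrix(rnorm(1), 1, 1))
  fit <- MARSS(dat, inits = inits, silent = TRUE)
  if (is.null(best) || fit$logLik > best$logLik) best <- fit
}
best$logLik  # log-likelihood of the best of the 25 starts
```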
The EM algorithm will quickly home in on parameter estimates that are close to
the maximum, but once the values are close, the EM algorithm can slow to a
crawl. Some researchers start with an EM algorithm to get close to the
maximum-likelihood parameters and then switch to a quasi-Newton method for the
final search. In many ecological applications, parameter estimates that differ
by less than 3 decimal places are for all practical purposes the same. Thus we
have not used the quasi-Newton final search.
Shumway and Stoffer (2006; chapter 6) imply in their discussion of the EM
algorithm that both $\xi$ and $\Lambda$ can be estimated, though not
simultaneously. Harvey (1989), in contrast, discusses that there are only two
allowable cases for the initial conditions: 1) fixed but unknown and 2) an initial condition set as a prior. In case 1, $\xi$ is $\mbox{$\boldsymbol{x}$}_{0}$ (or $\mbox{$\boldsymbol{x}$}_{1}$) and is then estimated as a parameter; $\Lambda$ is held fixed at 0. In case 2, $\xi$ and $\Lambda$ specify the mean and variance of $\mbox{$\boldsymbol{X}$}_{0}$ (or $\mbox{$\boldsymbol{X}$}_{1}$), respectively. Neither is estimated; instead, they are specified as part of the model.
As mentioned in the introduction, misspecification of the prior on
$\mbox{$\boldsymbol{x}$}_{0}$ can have catastrophic and undetectable effects
on your parameter estimates. For many MARSS models, you will never see this
problem. However, if you are fitting models that imply a correlation structure
between the hidden states (i.e. the variance-covariance matrix of the
$\boldsymbol{X}$’s is not diagonal), then your prior can definitely create
problems if it does not have the same correlation structure as that implied by
your MLE model. A common default is to use a prior with a diagonal variance-
covariance matrix. This can lead to serious problems if the implied variance-
covariance of the $\boldsymbol{X}$’s is not diagonal. A diffuse prior does not get around this, since it too has a correlation structure even if it has infinite variance.
One way you can detect that you have a problem is to start the EM algorithm at
the outputs from a Newton-esque algorithm. If the EM estimates diverge and the
likelihood drops, you have a problem. Here are a few suggestions for getting
around the problem:
* •
Treat $\mbox{$\boldsymbol{x}$}_{0}$ as an estimated parameter and set $\mbox{$\mathbf{V}$}_{0}=0$. If the model is not stable going backwards in time, then treat $\mbox{$\boldsymbol{x}$}_{1}$ as the estimated parameter; this will allow the data to constrain the $\mbox{$\boldsymbol{x}$}_{1}$ estimate (since there are no data at $t=0$, $\mbox{$\boldsymbol{x}$}_{0}$ has no data to constrain it). A sketch of this setup appears after this list.
* •
Try a diffuse prior, but first read the info in the KFAS R package about
diffuse priors since MARSS uses the KFAS implementation. In particular, note that you will still be imposing information about the correlation structure when using a diffuse prior; whatever $\mbox{$\mathbf{V}$}_{0}$ you use is telling the algorithm what correlation structure to use. If there is a mismatch
between the correlation structure in the prior and the correlation structure
implied by the MLE model, you will not be escaping the prior problem. But
sometimes you will know your implied correlation structure. For example, you
may know that the $\boldsymbol{x}$’s are independent or you may be able to
solve for the stationary distribution a priori if your stationary distribution
is not a function of the parameters you are trying to estimate. Other times
you are estimating a parameter that determines the correlation structure (like
$\mathbf{B}$) and you will not know a priori what the correlation structure
is.
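Here is a minimal sketch of the first suggestion above. It assumes the MARSS() model list uses the element names `V0` and `tinitx` and that `dat` is a data matrix for a model with a single state (check the package documentation for your version; this is illustrative, not a prescription):

```r
library(MARSS)
mod <- list(V0 = matrix(0, 1, 1), tinitx = 0)  # x0 treated as an estimated parameter, V0 fixed at 0
fit.t0 <- MARSS(dat, model = mod)
mod$tinitx <- 1                                # or estimate the state at t = 1 instead
fit.t1 <- MARSS(dat, model = mod)
```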
In some cases, the update equation for one parameter needs other parameters. Technically, the Kalman filter/smoother should be run between each parameter update; however, following Ghahramani and Hinton (1996), the default MARSS algorithm skips this step (unless the user sets `control$safe=TRUE`) and each updated parameter is used for subsequent update equations. If you see warnings that the log-likelihood drops, then try setting `control$safe=TRUE`. This will increase computation time greatly.
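For example, assuming the MARSS() `control` list documented in the package (and `dat` as the data matrix), the slower but safer behaviour is requested as:

```r
# Re-run the Kalman filter/smoother between each parameter update
fit <- MARSS(dat, control = list(safe = TRUE))
```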
## 12 MARSS R package
R code for the Kalman filter, Kalman smoother, and EM algorithm is provided as
a separate R package, MARSS, available on CRAN
(http://cran.r-project.org/web/packages/MARSS). MARSS was developed by
Elizabeth Holmes, Eric Ward and Kellie Wills and provides maximum-likelihood
estimation and model-selection for both unconstrained and constrained MARSS
models. The package contains a detailed user guide which shows various
applications. In addition to model fitting via the EM algorithm, the package
provides algorithms for bootstrapping, confidence intervals, auxiliary
residuals, and model selection criteria.
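A minimal usage sketch (simulated univariate data and the package's default model; see the user guide for full model specification via the `model` argument):

```r
install.packages("MARSS")  # from CRAN
library(MARSS)
dat <- matrix(cumsum(rnorm(50)) + rnorm(50, sd = 0.5), nrow = 1)  # toy data: random walk plus noise
fit <- MARSS(dat)          # maximum-likelihood fit via the EM algorithm
summary(fit)
```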
## References
* Biernacki et al., (2003) Biernacki, C., Celeux, G., and Govaert, G. (2003). Choosing starting values for the EM algorithm for getting the highest likelihood in multivariate Gaussian mixture models. Computational Statistics and Data Analysis, 41(3-4):561–575.
* Borman, (2009) Borman, S. (2009). The expectation maximization algorithm - a short tutorial.
* Ghahramani and Hinton, (1996) Ghahramani, Z. and Hinton, G. E. (1996). Parameter estimation for linear dynamical systems. Technical Report CRG-TR-96-2, University of Toronto, Dept. of Computer Science.
* Harvey, (1989) Harvey, A. C. (1989). Forecasting, structural time series models and the Kalman filter. Cambridge University Press, Cambridge, UK.
* Henderson and Searle, (1979) Henderson, H. V. and Searle, S. R. (1979). Vec and vech operators for matrices, with some uses in Jacobians and multivariate statistics. The Canadian Journal of Statistics, 7(1):65–81.
* Johnson and Wichern, (2007) Johnson, R. A. and Wichern, D. W. (2007). Applied multivariate statistical analysis. Prentice Hall, Upper Saddle River, NJ.
* Koopman and Ooms, (2011) Koopman, S. and Ooms, M. (2011). Forecasting economic time series using unobserved components time series models, pages 129–162. Oxford University Press, Oxford.
* Koopman, (1993) Koopman, S. J. (1993). Disturbance smoother for state space models. Biometrika, 80(1):117–126.
* McLachlan and Krishnan, (2008) McLachlan, G. J. and Krishnan, T. (2008). The EM algorithm and extensions. John Wiley and Sons, Inc., Hoboken, NJ, 2nd edition.
* Roweis and Ghahramani, (1999) Roweis, S. and Ghahramani, Z. (1999). A unifying review of linear gaussian models. Neural Computation, 11:305–345.
* Shumway and Stoffer, (2006) Shumway, R. and Stoffer, D. (2006). Time series analysis and its applications. Springer-Science+Business Media, LLC, New York, New York, 2nd edition.
* Shumway and Stoffer, (1982) Shumway, R. H. and Stoffer, D. S. (1982). An approach to time series smoothing and forecasting using the EM algorithm. Journal of Time Series Analysis, 3(4):253–264.
* Wu et al., (1996) Wu, L. S.-Y., Pai, J. S., and Hosking, J. R. M. (1996). An algorithm for estimating parameters of state-space models. Statistics and Probability Letters, 28:99–106.
* Zuur et al., (2003) Zuur, A. F., Fryer, R. J., Jolliffe, I. T., Dekker, R., and Beukema, J. J. (2003). Estimating common trends in multivariate time series using dynamic factor analysis. Environmetrics, 14(7):665–685.
# The automatic additivity of $\xi-$Lie derivations on von Neumann algebras
Zhaofang Bai School of Mathematical Sciences, Xiamen University, Xiamen,
361005, P. R. China. [email protected] , Shuanping Du∗ School of
Mathematical Sciences, Xiamen University, Xiamen, 361005, P. R. China.
[email protected] and Yu Guo Department of Mathematics, Shanxi Datong
University, Datong, 037009, P. R. China [email protected]
###### Abstract.
Let ${\mathcal{M}}$ be a von Neumann algebra with no central summands of type
I1. It is shown that every nonlinear $\xi-$Lie derivation ($\xi\neq 1$) on
$\mathcal{M}$ is an additive derivation.
2000 Mathematics Subject Classification. Primary 47B47, 47B49
Key words and phrases. $\xi-$Lie derivation, Derivation, von Neumann algebra
This work was supported partially by National Natural Science Foundation of
China (11071201, 11001230), and the Fundamental Research Funds for the Central
Universities (2010121001).
This paper is in final form and no version of it will be submitted for
publication elsewhere.
∗ Corresponding author
## 1\. Introduction and main results
Let $\mathcal{A}$ be an associative ring (or an algebra over a field
$\mathbb{F}$). Then ${\mathcal{A}}$ is a Lie ring (Lie algebra) under the
product $[x,y]=xy-yx$, i.e., the commutator of $x$ and $y$. Recall that an
additive (linear) map $\delta:{\mathcal{A}}\rightarrow{\mathcal{A}}$ is called
an additive (linear) derivation if $\delta(xy)=\delta(x)y+x\delta(y)$ for all
$x,y\in{\mathcal{A}}$. Derivations are very important maps both in theory and
in applications, and have been studied intensively (see [8, 20, 21, 22] and
the references therein). More generally, an additive (linear) map $L$ from
${\mathcal{A}}$ into itself is called an additive (linear) Lie derivation if
$L([x,y])=[L(x),y]+[x,L(y)]$ for all $x,y\in{\mathcal{A}}$. The questions of
characterizing Lie derivations and revealing the relationship between Lie
derivations and derivations have received many mathematicians’ attention
recently (see [4, 9, 12, 16]). Very roughly speaking, additive (linear) Lie derivations in the context of prime rings (operator algebras) can be decomposed as $\sigma+\tau$, where $\sigma$ is an additive (linear) derivation and $\tau$ is an additive (linear) map sending commutators into zero. Similarly, associated with the Jordan product $xy+yx$, we have the concept of a Jordan derivation, which has also been studied intensively (see [5, 6, 9] and the references therein).
Note that an important relation associated with the Lie product is the
commutativity. Two elements $x,y$ in an algebra ${\mathcal{A}}$ are
commutative if $xy=yx$, that is, their Lie product is zero. More generally, if
$\xi$ is a scalar and if $xy=\xi yx$, we say that $x$ commutes with $y$ up to
a factor $\xi$. The notion of commutativity up to a factor for pairs of
operators is also important and has been studied in the context of operator
algebras and quantum groups (Refs. [7, 11]). Motivated by this, the authors introduced a binary operation $[x,y]_{\xi}=xy-\xi yx$, called the $\xi$-Lie product of $x,y$ (Ref. [17]). This product has been found to play an increasingly important role in several research topics, and its study has recently attracted many authors' attention (for example, see [17, 18]). Then it is natural to introduce
the concept of $\xi$-Lie derivation. An additive (linear) map $L$ from
${\mathcal{A}}$ into itself is called a $\xi-$Lie derivation if
$L([x,y]_{\xi})=[L(x),y]_{\xi}+[x,L(y)]_{\xi}$ for all $x,y\in{\mathcal{A}}$.
This concept unifies several well-known notions. It is clear that a $\xi-$Lie
derivation is a derivation if $\xi=0$; is a Lie derivation if $\xi=1$; is a
Jordan derivation if $\xi=-1$. In [18], Qi and Hou characterized the additive
$\xi-$Lie derivation on nest algebras.
Let $\Phi:{\mathcal{A}}\rightarrow{\mathcal{A}}$ be a map (without the
additivity or linearity assumption). We say that $\Phi$ is a nonlinear
$\xi-$Lie derivation if
$\Phi([x,y]_{\xi})=[\Phi(x),y]_{\xi}+[x,\Phi(y)]_{\xi}$ for all
$x,y\in{\mathcal{A}}$. Recently, Yu and Zhang [24] described nonlinear Lie derivations on triangular algebras. The aim of this note is to investigate nonlinear $\xi-$Lie derivations on von Neumann algebras ($\xi\neq 1$) and to reveal the relationship between such nonlinear $\xi-$Lie derivations and additive derivations. Due to the vital importance of derivations, we first investigate nonlinear derivations. To our surprise, nonlinear derivations turn out to be automatically additive. Our main results read as follows.
Theorem 1.1. Let ${\mathcal{M}}$ be a von Neumann algebra with no central
summands of type I1. If $\Phi:{\mathcal{M}}\rightarrow{\mathcal{M}}$ is a
nonlinear derivation, then $\Phi$ is an additive derivation.
The following result reveals the relationship between general nonlinear
$\xi-$Lie derivations and additive derivations.
Theorem 1.2. Let ${\mathcal{M}}$ be a von Neumann algebra with no central
summands of type I1. If $\xi$ is a scalar not equal $0,1$ and
$\Phi:{\mathcal{M}}\rightarrow{\mathcal{M}}$ is a nonlinear $\xi-$Lie
derivation, then $\Phi$ is an additive derivation and $\Phi(\xi T)=\xi\Phi(T)$
for all $T\in{\mathcal{M}}$.
It is worth mentioning that, as it turns out from Theorems 1.1 and 1.2, the additive structure and the $\xi-$Lie multiplicative structure of a von Neumann algebra with no central summands of type I1 are very closely related to each other. We remark that the question of when a multiplicative map is necessarily additive is important in quantum mechanics and mathematics, and was discussed for associative rings in the purely algebraic setting ([14]; for a recent systematic account, see [2]). In recent years, there has been a growing interest in studying the automatic additivity of maps determined by their action on the product (see [1, 2, 13, 19, 23] and the references therein). We also remark that if $\xi=1$, then a $\xi-$Lie derivation is in fact a Lie derivation, while a Lie derivation is not necessarily additive. For example, let $\sigma$ be an additive derivation of ${\mathcal{M}}$ and $\tau$ be a mapping of ${\mathcal{M}}$ into its center ${\mathcal{Z}}_{\mathcal{M}}$ which maps commutators into zero. Then $\sigma+\tau$ is a Lie derivation, and such a Lie derivation is not additive in general.
## 2\. Notations and Preliminaries
Before embarking on the proof of our main results, we need some notations and
preliminaries about von Neumann algebras. A von Neumann algebra $\mathcal{M}$
is a weakly closed, self-adjoint algebra of operators on a Hilbert space $H$
containing the identity operator I. The set
${\mathcal{Z}}_{\mathcal{M}}=\\{S\in{\mathcal{M}}\mid ST=TS\text{ for all
}T\in{\mathcal{M}}\\}$ is called the center of ${\mathcal{M}}$. For
$A\in{\mathcal{M}}$, the central carrier of $A$, denoted by $\overline{A}$, is
the intersection of all central projections $P$ such that $PA=A$. It is well
known that the central carrier of $A$ is the projection with the range
$[{\mathcal{M}}A(H)]$, the closed linear span of $\\{MA(x)\mid
M\in{\mathcal{M}},x\in H\\}$. For each self-adjoint operator
$A\in{\mathcal{M}}$, we define the central core of $A$, denoted by
$\underline{A}$, to be $\sup\\{S\in{\mathcal{Z}}_{\mathcal{M}}\mid
S=S^{*},S\leq A\\}$. Clearly, one has $A-\underline{A}\geq 0$. Further if
$S\in{\mathcal{Z}}_{\mathcal{M}}$ and $A-\underline{A}\geq S\geq 0$ then
$S=0$. If $P$ is a projection it is clear that $\underline{P}$ is the largest
central projection $\leq P$. We call a projection core-free if
$\underline{P}=0$. It is easy to see that $\underline{P}=0$ if and only if
$\overline{I-P}=I$, here $\overline{I-P}$ denotes the central carrier of
$I-P$. We use [10] as a general reference for the theory of von Neumann
algebras.
In the following, there are several fundamental properties of von Neumann
algebras from [3, 15] which will be used frequently. For convenience, we list
them in a lemma.
Lemma 2.1. Let $\mathcal{M}$ be a von Neumann algebra.
(i) ([15, Lemma 4]) If ${\mathcal{M}}$ has no summands of type I1, then each
nonzero central projection of ${\mathcal{M}}$ is the central carrier of a
core-free projection of ${\mathcal{M}}$;
(ii) ([3, Lemma 2.6]) If ${\mathcal{M}}$ has no summands of type I1, then
$\mathcal{M}$ equals the ideal of $\mathcal{M}$ generated by all commutators
in $\mathcal{M}$.
By Lemma 2.1(i), one can find a non-trivial core-free projection with central
carrier $I$, denoted by $P_{1}$. Throughout this paper, $P_{1}$ is fixed.
Write $P_{2}=I-P_{1}$. By the definition of central core and central carrier,
P_{2}$ is also core-free and $\overline{P_{2}}=I$. According to the two-sided Peirce decomposition of $\mathcal{M}$ relative to $P_{1}$, denote
${\mathcal{M}}_{ij}=P_{i}{\mathcal{M}}P_{j}$, $i,j=1,2$, then we may write
${\mathcal{M}}={\mathcal{M}}_{11}+{\mathcal{M}}_{12}+{\mathcal{M}}_{21}+{\mathcal{M}}_{22}$.
In all that follows, when we write $T_{ij}$, $S_{ij}$, $M_{ij}$, it indicates
that they are contained in ${\mathcal{M}}_{ij}$. A conclusion which is used
frequently is $TM_{ij}=0$ for every $M_{ij}\in{\mathcal{M}}_{ij}$ implies that
$TP_{i}=0$. Indeed $TP_{i}MP_{j}=0$ for all $M\in{\mathcal{M}}$ together with
$\overline{P_{j}}=I$ gives $TP_{i}=0$. Similarly, if $M_{ij}T=0$ for every
$M_{ij}\in{\mathcal{M}}_{ij}$, then $T^{*}M_{ij}^{*}=0$ and so $P_{j}T=0$. If
$Z\in{\mathcal{Z}}_{\mathcal{M}}$ and $ZP_{i}=0$, then $ZMP_{i}=0$ for all
$M\in{\mathcal{M}}$ which implies $Z=0$.
The next lemma is technical and plays an important role in the proof of Theorem 1.2.
Lemma 2.2. Let $T\in{\mathcal{M}}$, $\xi\neq 0,1$. Then
$T\in{\mathcal{M}}_{ij}+(\xi P_{i}+P_{j}){\mathcal{Z}}_{\mathcal{M}}$ ($1\leq
i\neq j\leq 2$) if and only if $[T,M_{ij}]_{\xi}=0$ for every
$M_{ij}\in{\mathcal{M}}_{ij}$.
Proof. The necessity is clear. Conversely, assume $[T,M_{ij}]_{\xi}=0$ for
every $M_{ij}\in{\mathcal{M}}_{ij}$. Write $T=\sum_{i,j=1}^{2}T_{ij}$. It
follows that $T_{ii}M_{ij}+T_{ji}M_{ij}=\xi(M_{ij}T_{jj}+M_{ij}T_{ji})$. Thus
$T_{ii}M_{ij}=\xi M_{ij}T_{jj}$
(1)
and $T_{ji}M_{ij}=0$. Noting that $\overline{P_{j}}=I$, we obtain
$T_{ji}=0.$
For every $M_{ii}\in{\mathcal{M}}_{ii}$, $M_{jj}\in{\mathcal{M}}_{jj}$,
$M_{ii}M_{ij},M_{ij}M_{jj}\in{\mathcal{M}}_{ij}$ and so $TM_{ii}M_{ij}=\xi
M_{ii}M_{ij}T$ and $TM_{ij}M_{jj}=\xi M_{ij}M_{jj}T$. From
$[T,M_{ij}]_{\xi}=0$, it follows that $TM_{ii}M_{ij}=M_{ii}TM_{ij}$, that is
$(TM_{ii}-M_{ii}T)M_{ij}=0$. Using $\overline{P_{j}}=I$ again, we have
$T_{ii}M_{ii}-M_{ii}T_{ii}=0$, i.e.,
$T_{ii}\in{\mathcal{Z}}_{P_{i}{\mathcal{M}}P_{i}}$. Thus
$T_{ii}=Z_{i}P_{i}$
for some central element $Z_{i}\in{\mathcal{Z}}_{\mathcal{M}}$. Similarly,
combining $TM_{ij}M_{jj}=\xi M_{ij}M_{jj}T$ and $[T,M_{ij}]_{\xi}=0$, we can
obtain
$T_{jj}=Z_{j}P_{j}$
for some central element $Z_{j}\in{\mathcal{Z}}_{\mathcal{M}}$. Now equation
(1) implies that $(Z_{i}-\xi Z_{j})M_{ij}=0$. From $\overline{P_{j}}=I$ and
$M_{ij}$ is arbitrary, it follows that $(Z_{i}-\xi Z_{j})P_{i}=0$. Since
$Z_{i}-\xi Z_{j}\in{\mathcal{Z}}_{\mathcal{M}}$, $M(Z_{i}-\xi
Z_{j})P_{i}=(Z_{i}-\xi Z_{j})MP_{i}=0$ for all $M\in\mathcal{M}$. By
$\overline{P_{i}}=I$, it follows that $Z_{i}=\xi Z_{j}$. So $T=T_{ij}+(\xi
P_{i}+P_{j})Z_{j}\in{\mathcal{M}}_{ij}+(\xi
P_{i}+P_{j}){\mathcal{Z}}_{\mathcal{M}}$.
## 3\. Proofs of main results
In the following, we are firstly aimed to prove Theorem 1.1.
Proof of Theorem 1.1. In what follows,
$\Phi:{\mathcal{M}}\rightarrow{\mathcal{M}}$ is a nonlinear derivation. We
will prove that $\Phi$ is additive, that is, for all $T,S\in{\mathcal{M}}$,
$\Phi(T+S)=\Phi(T)+\Phi(S)$. It is clear that $\Phi(0)=\Phi(0)0+0\Phi(0)=0$.
Note that $\Phi(P_{1}P_{2})=\Phi(P_{1})P_{2}+P_{1}\Phi(P_{2})=0$, multiplying
by $P_{2}$ from the both sides of this equation, we get
$P_{2}\Phi(P_{1})P_{2}=0$. Similarly, multiplying by $P_{1}$ from the both
sides of this equation, we have $P_{1}\Phi(P_{2})P_{1}=0$. For every
$M_{12}\in{\mathcal{M}}_{12}$,
$\Phi(M_{12})=\Phi(P_{1}M_{12})=\Phi(P_{1})M_{12}+P_{1}\Phi(M_{12})$ and so
$P_{1}\Phi(P_{1})M_{12}=0$. Hence $P_{1}\Phi(P_{1})P_{1}=0$. Similarly, from
$\Phi(M_{12})=\Phi(M_{12}P_{2})$, one can obtain $P_{2}\Phi(P_{2})P_{2}=0$.
Denote $T_{0}=P_{1}\Phi(P_{1})P_{2}-P_{2}\Phi(P_{1})P_{1}$. Define
$\Psi:{\mathcal{M}}\rightarrow{\mathcal{M}}$ by $\Psi(T)=\Phi(T)-[T,T_{0}]$
for every $T\in{\mathcal{M}}$. Then it is easy to see that $\Psi$ is also a
nonlinear derivation and $\Psi(P_{1})=\Psi(P_{2})=0$. Note that for every
$T\in{\mathcal{M}}:T\mapsto[T,T_{0}]$ is an additive derivation of
${\mathcal{M}}$. Therefore, without loss of generality, we may assume
$\Phi(P_{1})=\Phi(P_{2})=0$. Then for every $T_{ij}\in{\mathcal{M}}_{ij}$,
$\Phi(T_{ij})=P_{i}\Phi(T_{ij})P_{j}\in{\mathcal{M}}_{ij}$ ($i,j=1,2$).
Let $T$ be in $\mathcal{M}$, write $T=T_{11}+T_{12}+T_{21}+T_{22}$. In order
to prove the additivity of $\Phi$, we only need to show $\Phi$ is additive on
${\mathcal{M}}_{ij}(1\leq i,j\leq 2)$ and
$\Phi(T_{11}+T_{12}+T_{21}+T_{22})=\Phi(T_{11})+\Phi(T_{12})+\Phi(T_{21})+\Phi(T_{22})$.
We will complete the proof by checking two claims.
Claim 1. $\Phi$ is additive on ${\mathcal{M}}_{ij}(1\leq i,j\leq 2)$.
Set $T_{ij},S_{ij},M_{ij}\in{\mathcal{M}}_{ij}$. From
$(T_{11}+T_{12})M_{12}=T_{11}M_{12}$, it follows that
$\Phi(T_{11}+T_{12})M_{12}+(T_{11}+T_{12})\Phi(M_{12})=\Phi(T_{11})M_{12}+T_{11}\Phi(M_{12}).$
Note that $\Phi(M_{12})\in{\mathcal{M}}_{12}$, so
$(\Phi(T_{11}+T_{12})-\Phi(T_{11}))M_{12}=0.$ Then
$(\Phi(T_{11}+T_{12})-\Phi(T_{11}))P_{1}=0$. This implies
$(\Phi(T_{11}+T_{12})-\Phi(T_{11})-\Phi(T_{12}))P_{1}=0.$
Similarly, from $(T_{11}+T_{12})M_{21}=T_{12}M_{21}$, we have
$(\Phi(T_{11}+T_{12})-\Phi(T_{11})-\Phi(T_{12}))M_{21}=0.$ Then
$(\Phi(T_{11}+T_{12})-\Phi(T_{11})-\Phi(T_{12}))P_{2}=0.$
Thus
$\Phi(T_{11}+T_{12})=\Phi(T_{11})+\Phi(T_{12}).$
Similarly, $\Phi(T_{12}+T_{22})=\Phi(T_{12})+\Phi(T_{22}).$ Since
$T_{12}+S_{12}=(P_{1}+T_{12})(P_{2}+S_{12})$, we have that
$\displaystyle\Phi(T_{12}+S_{12})$
$\displaystyle=\Phi(P_{1}+T_{12})(P_{2}+S_{12})+(P_{1}+T_{12})\Phi(P_{2}+S_{12})$
$\displaystyle=\Phi(T_{12})+\Phi(S_{12}).$
In the same way, one can show that
$\Phi(T_{21}+S_{21})=\Phi(T_{21})+\Phi(S_{21}).$ That is, $\Phi$ is additive
on ${\mathcal{M}}_{12},{\mathcal{M}}_{21}$.
From $(T_{11}+S_{11})M_{12}=T_{11}M_{12}+S_{11}M_{12}$, it follows that
$\begin{array}[]{rl}&\Phi(T_{11}+S_{11})M_{12}+(T_{11}+S_{11})\Phi(M_{12})\\\
=&\Phi(T_{11}M_{12})+\Phi(S_{11}M_{12})\\\
=&\Phi(T_{11})M_{12}+T_{11}\Phi(M_{12})+\Phi(S_{11})M_{12}+S_{11}\Phi(M_{12}).\end{array}$
Thus $(\Phi(T_{11}+S_{11})-\Phi(T_{11})-\Phi(S_{11}))M_{12}=0$. This yields
$(\Phi(T_{11}+S_{11})-\Phi(T_{11})-\Phi(S_{11}))P_{1}=0.$
Note that
$\Phi(T_{11}+S_{11})-\Phi(T_{11})-\Phi(S_{11})\in{\mathcal{M}}_{11}$. So
$\Phi(T_{11}+S_{11})=\Phi(S_{11})+\Phi(T_{11}).$
Similarly, $\Phi(T_{22}+S_{22})=\Phi(T_{22})+\Phi(S_{22}).$ That is, $\Phi$ is
additive on ${\mathcal{M}}_{11},{\mathcal{M}}_{22}$, as desired.
Claim 2.
$\Phi(T_{11}+T_{12}+T_{21}+T_{22})=\Phi(T_{11})+\Phi(T_{12})+\Phi(T_{21})+\Phi(T_{22})$
From $(T_{11}+T_{12}+T_{21}+T_{22})M_{12}=(T_{11}+T_{21})M_{12}$, we have
$\begin{array}[]{rl}&\Phi(T_{11}+T_{12}+T_{21}+T_{22})M_{12}+(T_{11}+T_{12}+T_{21}+T_{22})\Phi(M_{12})\\\
=&\Phi(T_{11}M_{12})+\Phi(T_{21}M_{12})\\\
=&\Phi(T_{11})M_{12}+T_{11}\Phi(M_{12})+\Phi(T_{21})M_{12}+T_{21}\Phi(M_{12}).\end{array}$
Then
$\begin{array}[]{ll}&(\Phi(T_{11}+T_{12}+T_{21}+T_{22})-\Phi(T_{11})-\Phi(T_{12})-\Phi(T_{21})-\Phi(T_{22}))M_{12}\\\
=&(\Phi(T_{11}+T_{12}+T_{21}+T_{22})-\Phi(T_{11})-\Phi(T_{21}))M_{12}=0.\end{array}$
This gives
$(\Phi(T_{11}+T_{12}+T_{21}+T_{22})-\Phi(T_{11})-\Phi(T_{12})-\Phi(T_{21})-\Phi(T_{22}))P_{1}=0.$
From $(T_{11}+T_{12}+T_{21}+T_{22})M_{21}=(T_{12}+T_{22})M_{21}$, it follows
that
$(\Phi(T_{11}+T_{22}+T_{12}+T_{21})-\Phi(T_{11})-\Phi(T_{12})-\Phi(T_{21})-\Phi(T_{22}))P_{2}=0.$
So
$\Phi(T_{11}+T_{12}+T_{21}+T_{22})=\Phi(T_{11})+\Phi(T_{12})+\Phi(T_{21})+\Phi(T_{22})$.
Now, we turn to prove Theorem 1.2.
Proof of Theorem 1.2. We will finish the proof of Theorem 1.2 by checking several claims.
Claim 1. $\Phi(0)=0$ and there is $T_{0}\in{\mathcal{M}}$ such that
$\Phi(P_{i})=[P_{i},T_{0}]$ ($i=1,2$).
It is clear that
$\Phi(0)=\Phi([0,0]_{\xi})=[\Phi(0),0]_{\xi}+[0,\Phi(0)]_{\xi}=0$.
For every $M_{12}$,
$\begin{array}[]{ll}\Phi(M_{12})&=\Phi([P_{1},M_{12}]_{\xi})=[\Phi(P_{1}),M_{12}]_{\xi}+[P_{1},\Phi(M_{12})]_{\xi}\\\ &=\Phi(P_{1})M_{12}-\xi M_{12}\Phi(P_{1})+P_{1}\Phi(M_{12})-\xi\Phi(M_{12})P_{1}.\end{array}$
(2)
Multiplying by $P_{1},P_{2}$ from the left and the right in equation (2)
respectively, we have
$P_{1}\Phi(P_{1})P_{1}M_{12}=\xi M_{12}P_{2}\Phi(P_{1})P_{2}.$
That is $[P_{1}\Phi(P_{1})P_{1}+P_{2}\Phi(P_{1})P_{2},M_{12}]_{\xi}=0$. Now
Lemma 2.2 yields that $P_{1}\Phi(P_{1})P_{1}+P_{2}\Phi(P_{1})P_{2}\in(\xi
P_{1}+P_{2}){\mathcal{Z}}_{\mathcal{M}}$. For every $M_{21}$,
$\begin{array}[]{ll}\Phi(M_{21})&=\Phi([P_{2},M_{21}]_{\xi})=[\Phi(P_{2}),M_{21}]_{\xi}+[P_{2},\Phi(M_{21})]_{\xi}\\\ &=\Phi(P_{2})M_{21}-\xi M_{21}\Phi(P_{2})+P_{2}\Phi(M_{21})-\xi\Phi(M_{21})P_{2}.\end{array}$
(3)
Multiplying by $P_{2},P_{1}$ from the left and the right in equation (3)
respectively, we obtain
$P_{2}\Phi(P_{2})P_{2}M_{21}=\xi M_{21}P_{1}\Phi(P_{2})P_{1}.$
That is $[P_{2}\Phi(P_{2})P_{2}+P_{1}\Phi(P_{2})P_{1},M_{21}]_{\xi}=0$. Using
Lemma 2.2 again, we get
$P_{2}\Phi(P_{2})P_{2}+P_{1}\Phi(P_{2})P_{1}\in(P_{1}+\xi
P_{2}){\mathcal{Z}}_{\mathcal{M}}$. Assume
$P_{1}\Phi(P_{1})P_{1}+P_{2}\Phi(P_{1})P_{2}=(\xi P_{1}+P_{2})Z_{1}$ and
$P_{2}\Phi(P_{2})P_{2}+P_{1}\Phi(P_{2})P_{1}=(P_{1}+\xi P_{2})Z_{2}$,
$Z_{1},Z_{2}\in{\mathcal{Z}}_{\mathcal{M}}$. From $[P_{1},P_{2}]_{\xi}=0$, it
follows that
$\begin{array}[]{ll}&\Phi([P_{1},P_{2}]_{\xi})=[\Phi(P_{1}),P_{2}]_{\xi}+[P_{1},\Phi(P_{2})]_{\xi}\\\
=&\Phi(P_{1})P_{2}-\xi
P_{2}\Phi(P_{1})+P_{1}\Phi(P_{2})-\xi\Phi(P_{2})P_{1}\\\
=&(1-\xi)P_{1}\Phi(P_{2})P_{1}+(1-\xi)P_{2}\Phi(P_{1})P_{2}+P_{1}\Phi(P_{1})P_{2}\\\
&+P_{1}\Phi(P_{2})P_{2}-\xi P_{2}\Phi(P_{2})P_{1}-\xi P_{2}\Phi(P_{1})P_{1}\\\
=&0.\end{array}$
Then
$P_{1}\Phi(P_{2})P_{1}=P_{2}\Phi(P_{1})P_{2}=P_{1}\Phi(P_{1})P_{2}+P_{1}\Phi(P_{2})P_{2}=P_{2}\Phi(P_{1})P_{1}+P_{2}\Phi(P_{2})P_{1}=0.$
(4)
A direct computation shows that $[(\xi
P_{1}+P_{2})Z_{1},P_{2}]_{\xi}=[P_{1}\Phi(P_{1})P_{1}+P_{2}\Phi(P_{1})P_{2},P_{2}]_{\xi}=0$.
And so $(1-\xi)P_{2}Z_{1}=0$. Then $Z_{1}MP_{2}=0$ for all
$M\in{\mathcal{M}}$. Noting that ${\overline{P}_{2}}=I$, we have $Z_{1}=0$.
That is $P_{1}\Phi(P_{1})P_{1}+P_{2}\Phi(P_{1})P_{2}=0$. Similarly,
$P_{2}\Phi(P_{2})P_{2}+P_{1}\Phi(P_{2})P_{1}=0$. By (4),
$\Phi(P_{1})+\Phi(P_{2})=0$. Denote
$T_{0}=P_{1}\Phi(P_{1})P_{2}-P_{2}\Phi(P_{1})P_{1}$. Then it is easy to check
that $T_{0}$ is as desired.
Obviously, $T\mapsto[T,T_{0}]$ is an additive derivation. Without loss of
generality, we may assume that $\Phi(P_{1})=\Phi(P_{2})=0$.
If $\Phi$ is additive, then $\Phi(I)=\Phi(P_{1})+\Phi(P_{2})=0$, and $\Phi((1-\xi)T)=\Phi([I,T]_{\xi})=[I,\Phi(T)]_{\xi}=(1-\xi)\Phi(T)$ for all $T\in{\mathcal{M}}$. Since additivity gives $\Phi((1-\xi)T)=\Phi(T)+\Phi(-\xi T)$ and $\Phi(-\xi T)=-\Phi(\xi T)$, it follows that $\Phi(\xi T)=\xi\Phi(T)$ for all $T\in{\mathcal{M}}$.
Taking $T,S\in{\mathcal{M}}$ and noting that
$(1-\xi)[S,T]_{-1}=[S,T]_{\xi}+[T,S]_{\xi}$, we obtain that
$\begin{array}[]{ll}&\Phi((1-\xi)[S,T]_{-1})=\Phi([S,T]_{\xi})+\Phi([T,S]_{\xi})\\\
=&\Phi(S)T-\xi T\Phi(S)+S\Phi(T)-\xi\Phi(T)S+\Phi(T)S-\xi
S\Phi(T)+T\Phi(S)-\xi\Phi(S)T\\\
=&(1-\xi)(\Phi(S)T+S\Phi(T)+\Phi(T)S+T\Phi(S)).\end{array}$
Note that $\Phi((1-\xi)T)=(1-\xi)\Phi(T)$ for all $T\in{\mathcal{M}}$, it
follows that
$\Phi([S,T]_{-1})=[\Phi(S),T]_{-1}+[S,\Phi(T)]_{-1}$
for all $T,S\in{\mathcal{M}}$. Hence $\Phi$ is an additive Jordan derivation.
By [5], $\Phi$ is an additive derivation which is the conclusion of our
Theorem 1.2. Now we only need to show $\Phi$ is additive. For every
$T\in\mathcal{M}$, it has the form $T=T_{11}+T_{12}+T_{21}+T_{22}$. Just like
the proof of Theorem 1.1, we will show $\Phi$ is additive on
${\mathcal{M}}_{ij}(1\leq i,j\leq 2)$ and
$\Phi(T_{11}+T_{12}+T_{21}+T_{22})=\Phi(T_{11})+\Phi(T_{12})+\Phi(T_{21})+\Phi(T_{22})$.
We divide the proof into several steps.
Claim 2. $\Phi(M_{ij})\in{\mathcal{M}}_{ij}$ for every
$M_{ij}\in{\mathcal{M}}_{ij}$ ($1\leq i\neq j\leq 2$).
We only treat the case $i=1,j=2$. The other case can be treated similarly.
Noting $[P_{1},M_{12}]_{\xi}=M_{12}$, we have
$\displaystyle\Phi(M_{12})$
$\displaystyle=\Phi([P_{1},M_{12}]_{\xi})=[\Phi(P_{1}),M_{12}]_{\xi}+[P_{1},\Phi(M_{12})]_{\xi}$
$\displaystyle=[P_{1},\Phi(M_{12})]_{\xi}=P_{1}\Phi(M_{12})-\xi\Phi(M_{12})P_{1}.$
Then
$P_{2}\Phi(M_{12})P_{2}=P_{1}\Phi(M_{12})P_{1}=0.$
(5)
Furthermore, if $\xi\neq-1$, then also $P_{2}\Phi(M_{12})P_{1}=0$, i.e., $\Phi(M_{12})\in{\mathcal{M}}_{12}$.
Next we treat the case $\xi=-1$. For every $M_{11}$,
$\displaystyle\Phi(M_{11}M_{12})$
$\displaystyle=\Phi([M_{11},M_{12}]_{-1})=[\Phi(M_{11}),M_{12}]_{-1}+[M_{11},\Phi(M_{12})]_{-1}$
$\displaystyle=\Phi(M_{11})M_{12}+M_{12}\Phi(M_{11})+M_{11}\Phi(M_{12})+\Phi(M_{12})M_{11}.$
By (5), we have
$P_{2}\Phi(M_{11}M_{12})P_{1}=\Phi(M_{12})M_{11}.$
Then for every $N_{11}$,
$P_{2}\Phi(N_{11}M_{11}M_{12})P_{1}=\Phi(M_{12})N_{11}M_{11}$. On the other
hand,
$P_{2}\Phi(N_{11}M_{11}M_{12})P_{1}=\Phi(M_{11}M_{12})N_{11}=\Phi(M_{12})M_{11}N_{11}.$
Thus $\Phi(M_{12})[N_{11},M_{11}]=0$. For every $R_{11}$,
$\Phi(M_{12})R_{11}[N_{11},M_{11}]=P_{2}\Phi(R_{11}M_{12})P_{1}[N_{11},M_{11}]=0.$
By Lemma 2.1(ii), $\Phi(M_{12})P_{1}=0$ which finishes the proof.
Claim 3. $\Phi(M_{ii})\in{\mathcal{M}}_{ii}$ for every
$M_{ii}\in{\mathcal{M}}_{ii}$ $(i=1,2)$.
Proof. Without loss of generality, we only treat the case $i=1$.
$\begin{array}[]{ll}\Phi(P_{1})&=\Phi([I,\frac{1}{1-\xi}P_{1}]_{\xi})=[\Phi(I),\frac{1}{1-\xi}P_{1}]_{\xi}+[I,\Phi(\frac{1}{1-\xi}P_{1})]_{\xi}\\\
&=\frac{1}{1-\xi}\Phi([I,P_{1}]_{\xi})+[I,\Phi(\frac{1}{1-\xi}P_{1})]_{\xi}\\\
&=\frac{1}{1-\xi}\Phi((1-\xi)P_{1})+(1-\xi)\Phi(\frac{1}{1-\xi}P_{1})=0.\end{array}$
Note that $\Phi((1-\xi)P_{1})=\Phi([P_{1},P_{1}]_{\xi})=0$, so
$\Phi(\frac{1}{1-\xi}P_{1})=0$.
$\displaystyle\Phi(M_{11})$
$\displaystyle=\Phi([\frac{1}{1-\xi}P_{1},M_{11}]_{\xi})=[\frac{1}{1-\xi}P_{1},\Phi(M_{11})]_{\xi}$
$\displaystyle=\frac{1}{1-\xi}(P_{1}\Phi(M_{11})-\xi\Phi(M_{11})P_{1}).$
This implies $\Phi(M_{11})\in{\mathcal{M}}_{11}$.
Claim 4. For every $T_{ii}$, $T_{ji}$ and $T_{ij}$ $(1\leq i\neq j\leq 2)$,
$\Phi(T_{ii}+T_{ij})=\Phi(T_{ii})+\Phi(T_{ij})$,
$\Phi(T_{ii}+T_{ji})=\Phi(T_{ii})+\Phi(T_{ji})$.
Assume $i=1,j=2$. For every $M_{12}\in{\mathcal{M}}_{12}$,
$[T_{11}+T_{12},M_{12}]_{\xi}=[T_{11},M_{12}]_{\xi}$, by Claim 2,
$[\Phi(T_{11}+T_{12}),M_{12}]_{\xi}+[T_{11}+T_{12},\Phi(M_{12})]_{\xi}=[\Phi(T_{11}),M_{12}]_{\xi}+[T_{11},\Phi(M_{12})]_{\xi},$
$[\Phi(T_{11}+T_{12})-\Phi(T_{11}),M_{12}]_{\xi}=0.$
From Lemma 2.2,
$\Phi(T_{11}+T_{12})-\Phi(T_{11})=P_{1}(\Phi(T_{11}+T_{12})-\Phi(T_{11}))P_{2}+(\xi
P_{1}+P_{2})Z$
for some central element $Z\in{\mathcal{Z}}_{\mathcal{M}}$. By a direct computation,
$\displaystyle\Phi(T_{12})$
$\displaystyle=\Phi([P_{1},[T_{11}+T_{12},P_{2}]_{\xi}]_{\xi})$
$\displaystyle=[P_{1},[\Phi(T_{11}+T_{12}),P_{2}]_{\xi}]_{\xi}$
$\displaystyle=P_{1}\Phi(T_{11}+T_{12})P_{2}+{\xi}^{2}P_{2}\Phi(T_{11}+T_{12})P_{1}.$
From Claim 2 and Claim 3, we know that
$\Phi(T_{12})=P_{1}\Phi(T_{11}+T_{12})P_{2}$ and $P_{1}\Phi(T_{11})P_{2}=0$.
Thus
$\Phi(T_{11}+T_{12})-\Phi(T_{11})=\Phi(T_{12})+(\xi P_{1}+P_{2})Z.$
Note that
$\displaystyle\Phi([T_{11}+T_{12},P_{2}]_{\xi})$
$\displaystyle=[\Phi(T_{11}+T_{12}),P_{2}]_{\xi}$
$\displaystyle=[\Phi(T_{11})+\Phi(T_{12})+(\xi P_{1}+P_{2})Z,P_{2}]_{\xi}.$
On the other hand,
$\Phi([T_{11}+T_{12},P_{2}]_{\xi})=\Phi([T_{12},P_{2}]_{\xi})=[\Phi(T_{12}),P_{2}]_{\xi}$.
Combining this with Claim 3, we have $[(\xi P_{1}+P_{2})Z,P_{2}]_{\xi}=0$ and
so $ZP_{2}=0$ which implies $Z=0$. Similarly,
$\Phi(T_{11}+T_{21})=\Phi(T_{11})+\Phi(T_{21})$. The rest goes similarly.
Claim 5. $\Phi$ is additive on ${\mathcal{M}}_{12}$ and ${\mathcal{M}}_{21}$.
Let $T_{12},S_{12}\in{\mathcal{M}}_{12}$. Since
$T_{12}+S_{12}=[P_{1}+T_{12},P_{2}+S_{12}]_{\xi}$, we have that
$\begin{array}[]{rl}&\Phi(T_{12}+S_{12})=[\Phi(P_{1}+T_{12}),P_{2}+S_{12}]_{\xi}+[P_{1}+T_{12},\Phi(P_{2}+S_{12})]_{\xi}\\\
=&[\Phi(P_{1})+\Phi(T_{12}),P_{2}+S_{12}]_{\xi}+[P_{1}+T_{12},\Phi(P_{2})+\Phi(S_{12})]_{\xi}\\\
=&\Phi(T_{12})+\Phi(S_{12}).\end{array}$
Similarly, $\Phi$ is additive on ${\mathcal{M}}_{21}$.
Claim 6. For every $T_{11}\in{\mathcal{M}}_{11}$,
$T_{22}\in{\mathcal{M}}_{22}$,
$\Phi(T_{11}+T_{22})=\Phi(T_{11})+\Phi(T_{22})$.
For every $M_{12}\in{\mathcal{M}}_{12}$,
$[T_{11}+T_{22},M_{12}]_{\xi}=T_{11}M_{12}-\xi M_{12}T_{22}$. From Claim 5, it
follows that
$\begin{array}[]{rl}&[\Phi(T_{11}+T_{22}),M_{12}]_{\xi}+[T_{11}+T_{22},\Phi(M_{12})]_{\xi}=\Phi([T_{11}+T_{22},M_{12}]_{\xi})\\\
=&\Phi(T_{11}M_{12})+\Phi(-\xi
M_{12}T_{22})=\Phi([T_{11},M_{12}]_{\xi})+\Phi([T_{22},M_{12}]_{\xi})\\\
=&[\Phi(T_{11}),M_{12}]_{\xi}+[T_{11},\Phi(M_{12})]_{\xi}+[\Phi(T_{22}),M_{12}]_{\xi}+[T_{22},\Phi(M_{12})]_{\xi}.\end{array}$
Thus $[\Phi(T_{11}+T_{22})-\Phi(T_{11})-\Phi(T_{22}),M_{12}]_{\xi}=0$. By
Lemma 2.2,
$\Phi(T_{11}+T_{22})-\Phi(T_{11})-\Phi(T_{22})\in{\mathcal{M}}_{12}+(\xi
P_{1}+P_{2}){\mathcal{Z}}_{\mathcal{M}}.$
On the other hand, $[T_{11}+T_{22},\frac{P_{1}}{1-\xi}]_{\xi}=T_{11}$. From
the proof of Claim 3, one can see $\Phi(\frac{P_{1}}{1-\xi})=0$. Hence
$[\Phi(T_{11}+T_{22}),\frac{P_{1}}{1-\xi}]_{\xi}=\Phi(T_{11})$, i.e.,
$(1-\xi)\Phi(T_{11})=\Phi(T_{11}+T_{22})P_{1}-\xi P_{1}\Phi(T_{11}+T_{22})$.
Multiplying by $P_{1}$ and $P_{2}$ from the left and the right in the above
equation, we have $P_{1}\Phi(T_{11}+T_{22})P_{2}=0$. So
$\Phi(T_{11}+T_{22})-\Phi(T_{11})-\Phi(T_{22})=(\xi P_{1}+P_{2})Z$
for some central element $Z\in Z_{\mathcal{M}}$. Combining $\Phi(P_{1})=0$ and
Claim 3, we conclude
$\begin{array}[]{ll}&\Phi([T_{11},P_{1}]_{\xi})=\Phi([T_{11}+T_{22},P_{1}]_{\xi})\\\
=&[\Phi(T_{11}+T_{22}),P_{1}]_{\xi}+[T_{11}+T_{22},\Phi(P_{1})]_{\xi}\\\
=&[\Phi(T_{11})+(\xi P_{1}+P_{2})Z,P_{1}]_{\xi}.\end{array}$
Thus $[(\xi P_{1}+P_{2})Z,P_{1}]_{\xi}=0$ which implies $Z=0$. This gives
$\Phi(T_{11}+T_{22})=\Phi(T_{11})+\Phi(T_{22})$.
Claim 7. For every $T_{ii},S_{ii}\in{\mathcal{M}}_{ii}$ $(i=1,2)$,
$\Phi(T_{ii}+S_{ii})=\Phi(T_{ii})+\Phi(S_{ii})$.
Assume $i=1$. For every $M_{12}\in{\mathcal{M}}_{12}$,
$[T_{11}+S_{11},M_{12}]_{\xi}=T_{11}M_{12}+S_{11}M_{12}$. From Claim 5, it
follows that
$\begin{array}[]{rl}&[\Phi(T_{11}+S_{11}),M_{12}]_{\xi}+[T_{11}+S_{11},\Phi(M_{12})]_{\xi}=\Phi([T_{11}+S_{11},M_{12}]_{\xi})\\\
=&\Phi(T_{11}M_{12})+\Phi(S_{11}M_{12})=\Phi([T_{11},M_{12}]_{\xi})+\Phi([S_{11},M_{12}]_{\xi})\\\
=&[\Phi(T_{11}),M_{12}]_{\xi}+[T_{11},\Phi(M_{12})]_{\xi}+[\Phi(S_{11}),M_{12}]_{\xi}+[S_{11},\Phi(M_{12})]_{\xi}.\end{array}$
Thus $[\Phi(T_{11}+S_{11})-\Phi(T_{11})-\Phi(S_{11}),M_{12}]_{\xi}=0$. By
Lemma 2.2,
$\Phi(T_{11}+S_{11})-\Phi(T_{11})-\Phi(S_{11})\in{\mathcal{M}}_{12}+(\xi
P_{1}+P_{2}){\mathcal{Z}}_{\mathcal{M}}.$
On the other hand, Claim 3 tells us that
$P_{1}(\Phi(T_{11}+S_{11})-\Phi(T_{11})-\Phi(S_{11}))P_{2}=0$. So
$\Phi(T_{11}+S_{11})-\Phi(T_{11})-\Phi(S_{11})=(\xi P_{1}+P_{2})Z$
for some $Z\in{\mathcal{Z}}_{\mathcal{M}}$. This further indicates
$\displaystyle 0$
$\displaystyle=\Phi([T_{11}+S_{11},P_{2}]_{\xi})=[\Phi(T_{11}+S_{11}),P_{2}]_{\xi}$
$\displaystyle=[\Phi(T_{11})+\Phi(S_{11})+(\xi P_{1}+P_{2})Z,P_{2}]_{\xi}$
$\displaystyle=[(\xi P_{1}+P_{2})Z,P_{2}]_{\xi}.$
Then $P_{2}Z=0$, consequently, $Z=0$. That is, $\Phi$ is additive on
${\mathcal{M}}_{11}$. Similarly, $\Phi$ is additive on ${\mathcal{M}}_{22}$.
Claim 8. For every $T_{ii},T_{jj},T_{ij}$, $(1\leq i\neq j\leq 2)$
$\Phi(T_{ii}+T_{jj}+T_{ij})=\Phi(T_{ii})+\Phi(T_{jj})+\Phi(T_{ij})$ .
Assume $i=1,j=2$. For every $M_{12}\in{\mathcal{M}}_{12}$,
$[T_{11}+T_{22}+T_{12},M_{12}]_{\xi}=[T_{11}+T_{22},M_{12}]_{\xi}$. By Claim
6, it follows that
$[\Phi(T_{11}+T_{22}+T_{12}),M_{12}]_{\xi}+[T_{11}+T_{22}+T_{12},\Phi(M_{12})]_{\xi}=[\Phi(T_{11})+\Phi(T_{22}),M_{12}]_{\xi}+[T_{11}+T_{22},\Phi(M_{12})]_{\xi}.$
Thus $[\Phi(T_{11}+T_{22}+T_{12})-\Phi(T_{11})-\Phi(T_{22}),M_{12}]_{\xi}=0.$
From Lemma 2.2 and Claim 3, we obtain
$\begin{array}[]{ll}&\Phi(T_{11}+T_{22}+T_{12})-\Phi(T_{11})-\Phi(T_{22})\\\
=&P_{1}(\Phi(T_{11}+T_{22}+T_{12})-\Phi(T_{11})-\Phi(T_{22}))P_{2}+(\xi
P_{1}+P_{2})Z\\\ =&P_{1}\Phi(T_{11}+T_{22}+T_{12})P_{2}+(\xi
P_{1}+P_{2})Z\end{array}$
for some central element $Z$. A direct computation shows that
$\displaystyle\Phi(T_{12})$
$\displaystyle=\Phi(P_{1}(T_{11}+T_{22}+T_{12})P_{2})$
$\displaystyle=\Phi([P_{1},[T_{11}+T_{22}+T_{12},P_{2}]_{\xi}]_{\xi})$
$\displaystyle=[P_{1},\Phi([T_{11}+T_{22}+T_{12},P_{2}]_{\xi})]_{\xi}$
$\displaystyle=P_{1}\Phi(T_{11}+T_{22}+T_{12})P_{2}.$
Thus
$\Phi(T_{11}+T_{22}+T_{12})=\Phi(T_{11})+\Phi(T_{22})+\Phi(T_{12})+(\xi
P_{1}+P_{2})Z.$
It is easy to see
$\begin{array}[]{ll}&[\Phi(T_{11}+T_{22}+T_{12}),P_{2}]_{\xi}=\Phi([T_{11}+T_{22}+T_{12},P_{2}]_{\xi})\\\
&=\Phi([T_{12}+T_{22},P_{2}]_{\xi})=[\Phi(T_{12})+\Phi(T_{22}),P_{2}]_{\xi}\end{array}.$
Then $[(\xi P_{1}+P_{2})Z,P_{2}]_{\xi}=0$, so $ZP_{2}=0$, which implies $Z=0$. That is,
$\Phi(T_{11}+T_{22}+T_{12})=\Phi(T_{11})+\Phi(T_{22})+\Phi(T_{12})$. The rest
goes similarly.
Claim 9. For every $T_{11},T_{12},T_{21},T_{22}$,
$\Phi(T_{11}+T_{12}+T_{21}+T_{22})-\Phi(T_{11})-\Phi(T_{12})-\Phi(T_{21})-\Phi(T_{22})\in(\xi
P_{1}+P_{2}){\mathcal{Z}}_{\mathcal{M}}\cap(P_{1}+\xi
P_{2}){\mathcal{Z}}_{\mathcal{M}}.$
Consequently, if $\xi\neq-1$,
$\Phi(T_{11}+T_{12}+T_{21}+T_{22})=\Phi(T_{11})+\Phi(T_{12})+\Phi(T_{21})+\Phi(T_{22})$.
For every $M_{12}\in{\mathcal{M}}_{12}$,
$[T_{11}+T_{12}+T_{21}+T_{22},M_{12}]_{\xi}=[T_{11}+T_{21}+T_{22},M_{12}]_{\xi}$.
From Claim 8, it follows that
$\begin{array}[]{ll}&[\Phi(T_{11}+T_{12}+T_{21}+T_{22}),M_{12}]_{\xi}+[T_{11}+T_{12}+T_{21}+T_{22},\Phi(M_{12})]_{\xi}\\\
=&[\Phi(T_{11})+\Phi(T_{21})+\Phi(T_{22}),M_{12}]_{\xi}+[T_{11}+T_{21}+T_{22},\Phi(M_{12})]_{\xi}.\end{array}$
Thus
$[\Phi(T_{11}+T_{12}+T_{21}+T_{22})-\Phi(T_{11})-\Phi(T_{21})-\Phi(T_{22}),M_{12}]_{\xi}=0.$
Since $\Phi(T_{12})\in{\mathcal{M}}_{12}$, we have
$[\Phi(T_{11}+T_{12}+T_{21}+T_{22})-\Phi(T_{11})-\Phi(T_{12})-\Phi(T_{21})-\Phi(T_{22}),M_{12}]_{\xi}=0.$
Similarly, from
$[T_{11}+T_{12}+T_{21}+T_{22},M_{21}]_{\xi}=[T_{11}+T_{12}+T_{22},M_{21}]_{\xi},$
we can obtain
$[\Phi(T_{11}+T_{12}+T_{21}+T_{22})-\Phi(T_{11})-\Phi(T_{12})-\Phi(T_{21})-\Phi(T_{22}),M_{21}]_{\xi}=0.$
From Lemma 2.2, it follows that
$\Phi(T_{11}+T_{12}+T_{21}+T_{22})-\Phi(T_{11})-\Phi(T_{12})-\Phi(T_{21})-\Phi(T_{22})\in(\xi
P_{1}+P_{2}){\mathcal{Z}}_{\mathcal{M}}\cap(P_{1}+\xi
P_{2}){\mathcal{Z}}_{\mathcal{M}}.$
Note that if $\xi\neq-1$, $(\xi
P_{1}+P_{2}){\mathcal{Z}}_{\mathcal{M}}\cap(P_{1}+\xi
P_{2}){\mathcal{Z}}_{\mathcal{M}}=\\{0\\}$. Thus
$\Phi(T_{11}+T_{12}+T_{21}+T_{22})=\Phi(T_{11})+\Phi(T_{12})+\Phi(T_{21})+\Phi(T_{22}).$
Claim 10. If $\xi=-1$,
$\Phi(T_{11}+T_{12}+T_{21}+T_{22})=\Phi(T_{11})+\Phi(T_{12})+\Phi(T_{21})+\Phi(T_{22})$
holds true, too.
By Claim 9, we may assume
$\Phi(T_{12}+T_{21})=\Phi(T_{12})+\Phi(T_{21})+(-P_{1}+P_{2})Z_{1}$,
$\Phi(T_{11}+T_{12}+T_{21})=\Phi(T_{11})+\Phi(T_{12})+\Phi(T_{21})+(-P_{1}+P_{2})Z_{2}$
and
$\Phi(T_{11}+T_{12}+T_{21}+T_{22})=\Phi(T_{11})+\Phi(T_{12})+\Phi(T_{21})+\Phi(T_{22})+(-P_{1}+P_{2})Z_{3}$.
The following is devoted to showing $Z_{1}=Z_{2}=Z_{3}=0$. Since
$\Phi(T_{12}+T_{21})=\Phi([T_{12}+T_{21},P_{1}]_{-1})=[\Phi(T_{12}+T_{21}),P_{1}]_{-1}$,
substituting
$\Phi(T_{12}+T_{21})=\Phi(T_{12})+\Phi(T_{21})+(-P_{1}+P_{2})Z_{1}$ into the above equation, we have
$(-P_{1}+P_{2})Z_{1}=[(-P_{1}+P_{2})Z_{1},P_{1}]_{-1}=-2P_{1}Z_{1}$. Then
$Z_{1}P_{1}=Z_{1}P_{2}=0$ and so $Z_{1}=0$. From
$\begin{array}[]{ll}&[\Phi(T_{11})+\Phi(T_{12})+\Phi(T_{21})+(-P_{1}+P_{2})Z_{2},P_{2}]_{-1}\\\
=&[\Phi(T_{11}+T_{12}+T_{21}),P_{2}]_{-1}=\Phi([T_{11}+T_{12}+T_{21},P_{2}]_{-1})\\\
=&\Phi(T_{12}+T_{21})=\Phi(T_{12})+\Phi(T_{21})\\\
=&[\Phi(T_{11})+\Phi(T_{12})+\Phi(T_{21}),P_{2}]_{-1},\end{array}$
it follows that $[(-P_{1}+P_{2})Z_{2},P_{2}]_{-1}=0$. Thus $Z_{2}=0$. Finally,
$\begin{array}[]{ll}&[\Phi(T_{11})+\Phi(T_{12})+\Phi(T_{21})+\Phi(T_{22})+(-P_{1}+P_{2})Z_{3},P_{1}]_{-1}\\\
=&[\Phi(T_{11}+T_{12}+T_{21}+T_{22}),P_{1}]_{-1}=\Phi([T_{11}+T_{12}+T_{21}+T_{22},P_{1}]_{-1})\\\
=&\Phi([T_{11}+T_{12}+T_{21},P_{1}]_{-1})=[\Phi(T_{11})+\Phi(T_{12})+\Phi(T_{21})+\Phi(T_{22}),P_{1}]_{-1}.\end{array}$
So $[(-P_{1}+P_{2})Z_{3},P_{1}]_{-1}=0$ which implies $Z_{3}=0$. Hence
$\Phi(T_{11}+T_{12}+T_{21}+T_{22})=\Phi(T_{11})+\Phi(T_{12})+\Phi(T_{21})+\Phi(T_{22})$,
as desired.
## References
* [1] R. An, J. Hou, Additivity of Jordan multiplicative maps on Jordan operator algebras, Taiwanese J. Math., 10(2006), 45-64.
* [2] Z.F. Bai, S.P. Du and J.C. Hou, Multiplicative Lie isomorphisms between prime rings, Communications in Algebra, 36(2008), 1626-1633.
* [3] M. Brešar, Centralizing mappings on von Neumann algebras, Proc. Amer. Math. Soc., 111(1991), 501-510.
* [4] M. Brešar, Commuting traces of biadditive mappings, commutativity preserving mappings, and Lie mappings, Trans. Amer. Math. Soc., 335(1993), 525-546.
* [5] M. Brešar, Jordan derivations on semiprime rings, Proc. Amer. Math. Soc., 104(1988), 1003-1006.
* [6] M. Brešar, Jordan derivations revisited, Math. Proc. Cambridge Philos. Soc., 139(2005), 411-425.
* [7] J.A. Brooke, P. Busch, B. Pearson, Commutativity up to a factor of bounded operators in complex Hilbert space, Proc. Roy. Soc. Lond. Ser. A Math. Phys. Eng. Sci., 458(2002), 109-118.
* [8] E. Christensen, Derivations of nest algebras, Math. Ann., 229(1977), 155-161.
* [9] B.E. Johnson, Symmetric amenability and the nonexistence of Lie and Jordan derivations, Math. Proc. Cambridge Philos. Soc., 120(1996), 455-473.
* [10] R.V. Kadison and J.R. Ringrose, Fundamentals of the theory of operator algebras, Vol. I, Academic Press, New York, 1983; Vol. II, Academic Press, New York, 1986.
* [11] C. Kassel, Quantum Groups, Springer-Verlag, New York, 1995.
* [12] M. Mathieu, A.R. Villena, The structure of Lie derivations on C*-algebras, J. Funct. Anal., 202(2003), 504-525.
* [13] F. Lu, Multiplicative mappings of operator algebras, Linear Algebra Appl., 347(2002), 283-291
* [14] W.S. Martindale III, When are multiplicative mappings additive?, Proc. Amer. Math. Soc., 21(1969), 695-698.
* [15] C.R. Miers, Lie isomorphisms of operator algebras, Pacific J. of Math., 38(1971), 717-735.
* [16] C.R. Miers, Lie derivations of von Neumann algebras, Duke Math. J., 40(1973), 403-409.
* [17] X.F. Qi, J. Hou, Characterizations of $\xi$-Lie multiplicative isomorphisms, Proceedings of the 3rd International Workshop on Matrix Analysis and Applications, 2009.
* [18] X.F. Qi, J. Hou, Additive Lie ($\xi$-Lie) derivations and generalized Lie ($\xi$-Lie) derivations on nest algebras, Linear Algebra Appl., 431(2009), 843-854.
* [19] X.F. Qi, J. Hou, Characterization of Lie multiplicative isomorphisms between nest algebras, Science China Mathematics, 54(2011), 2453-2462.
* [20] S. Sakai, Derivations of $W^{*}$-algebras, Ann. Math., 83(1966), 273-279.
* [21] P. Šemrl, Additive derivations of some operator algebras, Illinois J. Math., 35(1991), 234-240.
* [22] P. Šemrl, Ring derivations on standard operator algebras, J. Funct. Anal., 112(1993), 318-324.
* [23] Y. Wang, Additivity of multiplicative maps on triangular rings, Linear Algebra Appl., 434(2011), 625-635.
* [24] W.Y. Yu, J.H. Zhang, Nonlinear Lie derivations of triangular algebras, Linear Algebra Appl., 432(2010), 2953-2960.
# Conformation-dependent electron transport through a biphenyl molecule:
Circular current and related issues
Santanu K. Maiti [email protected] Physics and Applied Mathematics
Unit, Indian Statistical Institute, 203 Barrackpore Trunk Road, Kolkata-700
108, India
###### Abstract
We investigate the conformation-dependent electron transfer in a biphenyl
molecule within a simple tight-binding framework. The overall junction current
and circular currents in two benzene rings driven by applied bias voltage are
calculated by using Green’s function formalism. Our analysis suggests that organic molecules with loop substructures could be used to design molecular spintronic devices.
###### pacs:
73.23.-b, 85.65.+h, 85.35.Ds
## I Introduction
The study of electron transport through a molecular system attached to
metallic electrodes is regarded as one of the most promising research fields
in nanoscale technology and physics nitzan1 . With the discovery of advanced molecular-scale measurement methodologies like scanning electrochemical microscopy (SECM), scanning tunneling microscopy (STM), atomic force microscopy (AFM), etc., it is now possible to measure current flow through single molecules or clusters of molecules sandwiched between two electrodes chen . The idea of a molecule-based diode, proposed by Aviram and Ratner aviram in 1974, first illustrated the possibility of using molecules as active components of a device. Since then several ab-initio as well as model
calculations have been done to investigate molecular transport theoretically
ventra1 ; ventra2 ; sumit ; tagami ; orella1 ; orella2 ; arai ; walc ; san1 ;
san2 ; san3 ; san4 . But experimental realizations took longer to become feasible. In a pioneering experiment, Reed et al. investigated current-
voltage ($I$-$V$) characteristics in a single benzene molecule coupled to
metallic electrodes via thiol groups reed2 . Various other experiments using
molecules have also been reported in literature exploring many interesting
features e.g., ballistic transport, quantized conductance, negative
differential resistance (NDR), gate controlled transistor operation, memory
effects, bistable switching, conformational switching to name a few. Although
a lot of theoretical mag ; lau ; baer1 ; baer2 ; baer3 ; gold and
experimental reed1 ; tali ; fish ; cui ; gim studies have been made so far
using different molecules, yet several problems are to be solved for further
development of molecular electronics.
Most of the works associated with electronic conduction through molecular bridge systems are mainly concerned with the overall conduction properties. Only a few works are available where attention has been paid to the current distribution within a molecule itself having single or multiple loop substructures dist1 ; dist2 ; dist3 . Recently some interesting works have
been done by Nitzan et al. and few other groups where possible quantum
interference effects have been explored on current distribution through such
molecular geometries due to the existence of multiple pathways, yielding the
possibilities of voltage driven circular currents cir1 ; cir2 ; cir3 ; cir4 ;
cir5 ; cir6 . The appearance of circular currents in loop geometries has already been reported in other contexts several years ago, most commonly as persistent currents in mesoscopic conducting rings, where the current is induced by an Aharonov-Bohm flux $\phi$ threading the ring butt2 ; levy ; gefen ; skm1 ; skm2 ; skm3 ; skm4 ; skm5 ; skm6 . The origin of the current in such isolated loop geometries is, however, quite different from the present case, where the current is driven by an applied bias voltage. It has been verified that circular currents appearing in molecular rings, driven by an applied bias voltage, produce considerable magnetic fields at the centers of these rings.
This phenomenon is quite interesting and can be exploited in different ways in the study of molecular transport. For example, in the presence of a local spin at the ring center one can regulate spin-dependent transport through the molecular wire by tuning the orientation of that local spin, and the behavior of spin inelastic currents can also be explained. In a recent work Galperin et al. rai have proposed some results in this direction. One can also utilize this circular-current-generated magnetic field in another way, to control spin-dependent transport through a molecular wire without changing the orientation of the local spin, by changing the strength of the magnetic field through some mechanism. To test this, a biphenyl molecule may be the best example, where two benzene rings are connected by a single C-C bond. It
has been examined latha that in the case of a biphenyl molecule, the electronic conductance changes significantly with the relative twist angle between the two benzene rings: the conductance is maximum for the planar conformation and decreases with increasing twist angle. This phenomenon motivates
us to describe conformation-dependent circular currents in a biphenyl molecule
coupled to two metallic electrodes. We use a simple tight-binding (TB)
framework to describe the model quantum system and evaluate all the results
through Green’s function formalism. We believe that our present analysis will
certainly provide some important information that can be used to design
molecular spintronic devices in the near future.
The structure of the paper is as follows. In section II, we describe the
molecular model and theoretical formulation for the calculations. The
essential results are presented in Section III which contains (a) transmission
probability as a function of injecting electron energy and junction current
through the molecular wire as a function of applied bias voltage for different
twist angles, and (b) conformation-dependent circular currents in two benzene
rings and associated magnetic fields at the ring centers. Finally, in section
IV, we summarize our main results and discuss their possible implications for
further study.
## II Molecular Model and Theoretical Formulation
### II.1 Tight-binding model
Figure 1 gives a schematic illustration of the molecular wire, where a
biphenyl molecule is coupled to two semi-infinite one-dimensional ($1$D)
metallic electrodes, commonly known as source and drain. Our analysis for the
present work is based on a non-interacting electron picture, and, within this framework, the TB model is well suited for analyzing electron transport through a molecular bridge system.
Figure 1: (Color online). Schematic diagram of a biphenyl molecule attached to
two electrodes, namely, source and drain. The magenta arrow describes the
relative twist among the molecular rings.
The single-particle Hamiltonian describing the molecule and the side-attached electrodes takes the form:
$H=H_{mol}+H_{ele}+H_{tun}.$ (1)
The first term $H_{mol}$ corresponds to the Hamiltonian of the biphenyl
molecule sandwiched between two electrodes. Under nearest-neighbor hopping
approximation, the TB Hamiltonian of the molecule composed of $12$ ($N=12$)
atomic sites reads,
$H_{mol}=\sum_{i}\epsilon\,c_{i}^{\dagger}c_{i}+\sum_{i}v\left[c_{i+1}^{\dagger}c_{i}+c_{i}^{\dagger}c_{i+1}\right]+\sum_{j}\epsilon\,c_{j}^{\dagger}c_{j}+\sum_{j}v\left[c_{j+1}^{\dagger}c_{j}+c_{j}^{\dagger}c_{j+1}\right]+v_{4,7}\left[c_{4}^{\dagger}c_{7}+c_{7}^{\dagger}c_{4}\right]$ (2)
where the indices $i$ and $j$ are used for the left and right molecular rings,
respectively. $\epsilon$ denotes the on-site energy of an electron at
$i$-($j$-)th site and $v$ describes the isotropic nearest-neighbor coupling
between the molecular sites. $c_{i}^{\dagger}$($c_{j}^{\dagger}$) and
$c_{i}$($c_{j}$) are the creation and annihilation operators, respectively, of
an electron at the $i$-($j$-)th site. The last term on the right hand side of Eq. 2 describes the coupling between the two molecular rings. In terms of the relative twist angle $\theta$ between these two rings, the coupling strength $v_{4,7}$ takes the form $v_{4,7}=v\cos\theta$.
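As a rough numerical sketch (not part of the original formulation), the molecular Hamiltonian of Eq. 2 can be assembled as below; the zero-based site labels, the cyclic closure of each hexagon and the default parameter values are our own assumptions.

```python
import numpy as np

def biphenyl_hamiltonian(theta, eps=0.0, v=1.0):
    """Sketch of the 12-site TB Hamiltonian H_mol of Eq. 2 (zero-based labels).

    Sites 0-5 form the left benzene ring and sites 6-11 the right one
    (sites 1-12 in the text).  Each hexagon is closed cyclically, and the
    single inter-ring bond (sites 4 and 7 in the text, here 3 and 6)
    carries the twist-dependent hopping v_{4,7} = v*cos(theta).
    """
    H = np.zeros((12, 12))
    np.fill_diagonal(H, eps)                      # on-site energies
    for ring in (list(range(0, 6)), list(range(6, 12))):
        for a, b in zip(ring, ring[1:] + ring[:1]):
            H[a, b] = H[b, a] = v                 # intra-ring nearest-neighbor hopping
    H[3, 6] = H[6, 3] = v * np.cos(theta)         # inter-ring coupling v_{4,7}
    return H
```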
The second and third terms of Eq. 1 describe the Hamiltonians for the $1$D
semi-infinite electrodes (source and drain) and molecule-to-electrode
coupling. In Wannier basis representation they are expressed as follows.
$H_{ele}=H_{S}+H_{D}=\sum_{\alpha=S,D}\left\\{\sum_{n}\epsilon_{0}d_{n}^{\dagger}d_{n}+\sum_{n}t_{0}\left[d_{n+1}^{\dagger}d_{n}+h.c.\right]\right\\},$ (3)
and,
$H_{tun}=H_{S,mol}+H_{D,mol}=\tau_{S}[c_{p}^{{\dagger}}d_{0}+h.c.]+\tau_{D}[c_{q}^{{\dagger}}d_{N+1}+h.c.].$ (4)
Here, $\epsilon_{0}$ and $t_{0}$ correspond to the site energy and nearest-
neighbor hopping integral in the electrodes. $d_{n}^{{\dagger}}$ and $d_{n}$
are the creation and annihilation operators, respectively, of an electron at
the site $n$ of the electrodes. The coupling strength between the source and
the molecule is $\tau_{S}$, while it is $\tau_{D}$ between the molecule and
the drain. The source and drain are coupled to the molecule through the $p$-th and $q$-th atomic sites, respectively, and these contact sites can be varied.
### II.2 Two-terminal transmission probability and junction current
To obtain the transmission probability of an electron from the source to the drain electrode through the molecule, we use the Green’s function formalism. Within the regime of coherent transport and in the absence of Coulomb interaction this technique is well suited.
The single particle Green’s function operator representing the entire system
for an electron with energy $E$ is defined as,
$G=\left(E-H+i\eta\right)^{-1}$ (5)
where, $\eta\rightarrow 0^{+}$. Following the matrix forms of $H$ and $G$, the
problem of finding $G$ in the full Hilbert space of $H$ can be mapped exactly
to a Green’s function $\mathcal{G}$ corresponding to an effective Hamiltonian
in the reduced Hilbert space of the molecule itself and we have datta ,
$\mathcal{G}=\left(E-H_{mol}-\Sigma_{S}-\Sigma_{D}\right)^{-1}.$ (6)
Here, $\Sigma_{S}$ and $\Sigma_{D}$ are the contact self-energies introduced
to incorporate the effect of coupling of the molecule to the source and drain,
respectively. In terms of this effective Green’s function ${\mathcal{G}}$,
two-terminal transmission probability $T$ through the molecular wire can be
written as datta ,
$T=\mbox{Tr}\left[\Gamma_{S}\mathcal{G}^{r}\Gamma_{D}\mathcal{G}^{a}\right],$ (7)
where, $\Gamma_{S}$ and $\Gamma_{D}$ are the coupling matrices, and,
${\mathcal{G}}^{r}$ and ${\mathcal{G}}^{a}$ are the retarded and advanced
Green’s functions, respectively.
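As an illustration of Eqs. 5-7, the sketch below builds the two contact self-energies from the closed-form surface Green's function of a semi-infinite 1D chain (a standard textbook result, not derived in this paper) and evaluates $T(E)$; the function names, the branch-selection trick and the default parameters are our own choices, and the Hamiltonian sketch given after Eq. 2 is reused.

```python
import numpy as np

def lead_self_energy(E, tau, eps0=0.0, t0=2.0, eta=1e-9):
    """Retarded contact self-energy of a semi-infinite 1D lead coupled by hopping tau.

    The lead surface Green's function g solves t0^2 g^2 - (E - eps0) g + 1 = 0;
    with E -> E + i*eta the retarded solution is the root with Im g < 0.
    """
    z = E - eps0 + 1j * eta
    root = np.sqrt(z * z - 4.0 * t0 * t0)
    g_minus = (z - root) / (2.0 * t0 * t0)
    g_plus = (z + root) / (2.0 * t0 * t0)
    g = g_minus if g_minus.imag < g_plus.imag else g_plus
    return tau * tau * g

def transmission(E, H, p, q, tau_S=1.0, tau_D=1.0):
    """Two-terminal transmission T(E) = Tr[Gamma_S G^r Gamma_D G^a] (Eq. 7)."""
    N = H.shape[0]
    Sigma_S = np.zeros((N, N), dtype=complex)
    Sigma_D = np.zeros((N, N), dtype=complex)
    Sigma_S[p, p] = lead_self_energy(E, tau_S)
    Sigma_D[q, q] = lead_self_energy(E, tau_D)
    Gr = np.linalg.inv(E * np.eye(N) - H - Sigma_S - Sigma_D)   # effective G of Eq. 6
    Gamma_S = 1j * (Sigma_S - Sigma_S.conj().T)
    Gamma_D = 1j * (Sigma_D - Sigma_D.conj().T)
    return np.trace(Gamma_S @ Gr @ Gamma_D @ Gr.conj().T).real
```

For instance, `transmission(E, biphenyl_hamiltonian(0.0), p=0, q=9)` corresponds, in zero-based labels, to the planar molecule with electrodes attached at sites $1$ and $10$.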
With the knowledge of the transmission probability we compute overall junction
current ($I_{T}$) as a function of bias voltage ($V$) using the standard
formalism based on quantum scattering theory.
$I_{T}(V)=\frac{2e}{h}\int\limits_{-\infty}^{\infty}T\,[f_{S}(E)-f_{D}(E)]\,dE.$
(8)
Here, $f_{S}$ and $f_{D}$ are the Fermi functions of the source and drain,
respectively. At absolute zero temperature the above equation boils down to
the following expression.
$I_{T}(V)=\frac{2e}{h}\int\limits_{E_{F}-\frac{eV}{2}}^{E_{F}+\frac{eV}{2}}T(E)\,dE,$
(9)
where, $E_{F}$ is the equilibrium Fermi energy.
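A hedged sketch of this zero-temperature integral, reusing the `transmission` function above and working in units where $e=h=1$ (the grid size and the trapezoidal rule are our choices):

```python
import numpy as np

def junction_current(V, H, p, q, E_F=0.0, n_grid=401):
    """Zero-temperature junction current of Eq. 9 (units e = h = 1), obtained by
    trapezoidal integration of T(E) over the bias window [E_F - V/2, E_F + V/2]."""
    energies = np.linspace(E_F - V / 2.0, E_F + V / 2.0, n_grid)
    T_vals = [transmission(E, H, p, q) for E in energies]
    return 2.0 * np.trapz(T_vals, energies)   # prefactor 2e/h -> 2 in these units
```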
### II.3 Circular current and associated magnetic field
In order to calculate circular current in molecular rings of the biphenyl
molecule let us first concentrate on the current distribution in a simple loop
geometry illustrated in Fig. 2.
Figure 2: (Color online). Schematic view of current distribution through a ring geometry coupled to two electrodes. The filled black circles correspond to the positions of the atomic sites.
A net current $I_{T}$ flows between two electrodes through a quantum ring, where $I_{1}$ and $I_{2}$
are the currents flowing through upper and lower arms of the ring,
respectively. We assign positive sign to the current propagating in the
counter-clockwise direction. With this current distribution, we define
circular current of the ring as cir1 ,
$I_{c}=\frac{1}{L}\left(I_{1}L_{1}+I_{2}L_{2}\right)$ (10)
where, $L_{1}$ and $L_{2}$ are the lengths of the upper and lower arms of the
ring, respectively, and $L=L_{1}+L_{2}$. Thus, in order to compute $I_{c}$,
following the above relation, we need to know the currents in different
branches of the loop geometry. We evaluate these currents by using Green’s
function formalism. At absolute zero temperature the current $I_{ij}$ flowing
from site $i$ to $j$ ($j=i\pm 1$) is given by the expression.
$I_{ij}=\int\limits_{E_{F}-\frac{eV}{2}}^{E_{F}+\frac{eV}{2}}J_{ij}(E)\,dE,$
(11)
where, $J_{ij}$ is the current density. In terms of the correlation function
$\mathcal{G}^{n}$ it can be written as dist3 ,
$J_{ij}=\frac{4e}{h}\,\mbox{Im}\left[H_{ij}\mathcal{G}_{ij}^{n}\right],$ (12)
where, $\mathcal{G}^{n}$ $=$ $\mathcal{G}^{r}\Gamma_{S}\mathcal{G}^{a}$. This
correlation function is evaluated by setting the occupation function of the
source to unity and that of the drain to zero.
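The bond currents of Eqs. 11-12 and the circular current of Eq. 10 could then be evaluated along the following lines; the assumption of equal bond lengths (so that Eq. 10 reduces to an average of the signed bond currents around a ring) and all names and defaults are ours.

```python
import numpy as np

def bond_current_matrix(V, H, p, q, E_F=0.0, n_grid=401, tau_S=1.0, tau_D=1.0):
    """I_ij of Eq. 11 with J_ij = (4e/h) Im[H_ij G^n_ij] (Eq. 12), where
    G^n = G^r Gamma_S G^a (source occupied, drain empty); units e = h = 1."""
    N = H.shape[0]
    energies = np.linspace(E_F - V / 2.0, E_F + V / 2.0, n_grid)
    J = np.zeros((N, N, n_grid))
    for k, E in enumerate(energies):
        Sigma_S = np.zeros((N, N), dtype=complex)
        Sigma_D = np.zeros((N, N), dtype=complex)
        Sigma_S[p, p] = lead_self_energy(E, tau_S)
        Sigma_D[q, q] = lead_self_energy(E, tau_D)
        Gr = np.linalg.inv(E * np.eye(N) - H - Sigma_S - Sigma_D)
        Gn = Gr @ (1j * (Sigma_S - Sigma_S.conj().T)) @ Gr.conj().T
        J[:, :, k] = 4.0 * np.imag(H * Gn)        # element-wise H_ij * G^n_ij
    return np.trapz(J, energies, axis=2)

def circular_current(I_bond, ring_sites):
    """Eq. 10 for equal bond lengths: the average of the signed bond currents
    taken around the ring, with ring_sites listed in the positive
    (counter-clockwise) order."""
    ring = list(ring_sites)
    bonds = zip(ring, ring[1:] + ring[:1])
    return float(np.mean([I_bond[a, b] for a, b in bonds]))
```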
Finally, we determine the local magnetic field at any point $\vec{r}$ inside
the ring, associated with the circular current $I_{c}$, from the Biot-Savart law
cir1 ,
$\vec{B}(\vec{r})=\sum_{(i,j)}\int\frac{\mu_{0}}{4\pi}I_{ij}\frac{d\vec{r^{\prime}}\times(\vec{r}-\vec{r^{\prime}})}{|\vec{r}-\vec{r^{\prime}}|^{3}},$
(13)
where, $\mu_{0}$ is the magnetic constant and $\vec{r^{\prime}}$ is the
position vector of an infinitesimal bond current element
$I_{ij}d\vec{r^{\prime}}$. Using the above expressions we evaluate circular
currents and associated magnetic fields in two molecular rings of the biphenyl
molecule.
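For the field at a ring center, a simplified planar estimate of Eq. 13 treats every bond as a finite straight segment at the same perpendicular distance $R$ from the center; for a regular hexagon each such segment contributes $\mu_{0}I_{ij}/4\pi R$, which is why the natural unit used later is $6\mu_{0}/4\pi R$. The sketch below encodes only this simplification (our assumption), not the full line integral.

```python
import numpy as np

def ring_center_field(I_bond, ring_sites, R=1.0, mu0=1.0):
    """Planar-hexagon estimate of Eq. 13 at the ring center: each of the six
    bonds, carrying current I_ij at perpendicular distance R from the center,
    contributes mu0*I_ij/(4*pi*R); signs follow the ordering of ring_sites
    (counter-clockwise positive, i.e. field out of the molecular plane)."""
    ring = list(ring_sites)
    bonds = zip(ring, ring[1:] + ring[:1])
    return sum(mu0 * I_bond[a, b] / (4.0 * np.pi * R) for a, b in bonds)
```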
Throughout this work, we assume that the entire voltage is dropped across the molecule-to-electrode interfaces, an assumption that is reasonable for molecules of small size. We also restrict ourselves to absolute zero temperature and choose units where $c=e=h=1$.
## III Numerical Results and Discussion
In this section we present numerical results computed for transmission
probability, overall junction current and circular currents in a biphenyl
molecule under conventional bias conditions. Throughout our analysis we set
the on-site energies in the molecule as well as in the source and drain
electrodes to zero, $\epsilon=\epsilon_{0}=0$. The nearest-neighbor coupling
strength in the electrodes ($t_{0}$) is fixed at $2$eV, while in the molecule
($v$) it is set at $1$eV. The coupling strengths of the molecule to the source
and drain electrodes, characterized by the parameters $\tau_{S}$ and
$\tau_{D}$, are also set at $1$eV. We fix the equilibrium Fermi energy $E_{F}$
at zero and measure the energy scale in units of $v$.
### III.1 Transmission probability and junction current
In Fig. 3 we show the variation of transmission probability $T$ as a function
of injecting electron energy $E$ when the source and drain are coupled to the
biphenyl molecule at the sites $1$ ($p=1$) and $10$ ($q=10$), respectively.
The red curve corresponds to $\theta=0$, while the green and blue lines are
associated with $\theta=\pi/3$ and $\pi/2$, respectively.
Figure 3: (Color online). Transmission probability as a function of energy for
the biphenyl molecule when the electrodes are connected at the molecular sites
$1$ and $10$, as shown in Fig. 1. The red, green and blue lines correspond to
$\theta=0$, $\pi/3$ and $\pi/2$, respectively.
From the spectrum it is observed that the transmission probability exhibits
resonant peaks (red and green lines) for some particular energies and at these
resonances it ($T$) almost reaches unity. All these resonant peaks are
associated with the energy eigenvalues of the molecule, and therefore, we can
say that the transmittance spectrum is a fingerprint of the electronic
structure of the molecule.
Figure 4: (Color online). Total transport current as a function of bias voltage when the electrodes are connected to the molecule following the same configuration as prescribed in Fig. 3.
The number of resonant peaks in the $T$-$E$ spectrum and their corresponding widths for a particular molecule-to-electrode configuration notably depend on the molecular twist angle, which is clearly visible from the red and green curves. Depending
on the twist angle $\theta$, the resonating energy states are available at
different energies. For each of these energy eigenstates a resonant peak
appears in the transmission spectrum with a finite width associated with the
molecular coupling strength. Now if the resonating states with different
energies are very closely spaced then the neighboring peaks can overlap with each other, which results in a broader peak. For low enough molecular coupling, the
overlap between neighboring resonant peaks is no longer possible, and
therefore, separate peaks with identical broadening will be obtained in the
transmission spectrum. Obviously, for different states having identical energy, i.e., for degenerate states, a single resonant peak is generated in the
transmission curve. The situation especially changes when $\theta=\pi/2$ i.e.,
one molecular ring becomes perpendicular with respect to the other ring.
Figure 5: (Color online). Total transport current for a specific voltage bias as a function of twist angle for the biphenyl molecule. The source and drain are coupled to the molecular sites $1$ and $10$, respectively.
In this particular case the transmission
probability completely disappears for the entire energy band spectrum. It is
shown by the blue curve in Fig. 3.
Figure 6: (Color online). Transmission probability as a function of energy for the same parameter values used in Fig. 3, when the source and drain are coupled to the molecular sites $6$ ($p=6$) and $9$ ($q=9$), respectively.
With the increment of the twist angle $\theta$ between these two molecular rings, the degree of
$\pi$-conjugation between them decreases, which results in the reduction of the
junction conductance since the electronic transfer rate through the molecule
scales as the square of the $\pi$-overlap nitz . For $\theta=\pi/2$, the
$\pi$-conjugation between the molecular rings completely disappears, and
accordingly, a vanishing transmission probability is obtained. Thus, by rotating one benzene ring relative to the other, one can regulate electronic transmission through the biphenyl molecule and eventually reach the insulating phase, which leads to the possibility of a switching action using this molecule.
The sharpness of the resonant peaks in $T$-$E$ spectrum strongly depends on
the molecular coupling strength to side-attached electrodes, and, it greatly
controls electron transfer through the bridge system. In the limit of weak coupling, i.e., $\tau_{S}(\tau_{D})\ll v$, sharp resonant peaks are observed in the transmission spectrum, whereas the widths of these peaks get broadened in the limit of strong coupling, $\tau_{S}(\tau_{D})\sim v$.
Figure 7: (Color online). Total transport current as a function of bias
voltage when the electrodes are coupled to the molecule following the same
configuration as mentioned in Fig. 6.
The broadening of transmission peaks with the enhancement in coupling strength
is quantified by the imaginary parts of the self-energy matrices $\Sigma_{S}$
and $\Sigma_{D}$ which are incorporated in the transmittance expression via
the coupling matrices $\Gamma_{S}$ and $\Gamma_{D}$. This coupling effect on
electron transport has already been explored elaborately in the literature sm1 ;
sm2 ; sm3 ; sm4 , and therefore, in the present work we do not illustrate it
further.
The fundamental features of electron transport will be more transparent from
our current-voltage characteristics. The overall junction current $I_{T}$ through the molecular wire is computed following the Landauer-Büttiker formalism (Eq. 9).
Figure 8: (Color online). Internal current distribution in the molecule for its planar conformation when the bias voltage is fixed at $4$V. The source and drain are attached to the atomic sites $1$ and $10$, respectively.
In
Fig. 4 we display the variation of junction current $I_{T}$ as a function of
applied bias voltage $V$ for the biphenyl molecule where the electrodes are
connected at the molecular sites $1$ and $10$, the same as in Fig. 1. The current varies quite continuously with the voltage bias. Depending on the molecular coupling to the source and drain electrodes, the current exhibits continuous-like or step-like behavior, since it is computed by integrating over the transmission curve. For weak molecular coupling a step-like behavior, associated with sharp resonant peaks in the transmission spectrum, will be obtained, unlike the continuous-like feature observed in the limit of strong coupling. Therefore, for a fixed voltage bias one can regulate the current amplitude by tuning the molecular coupling strength, and this phenomenon offers an interesting handle for designing molecular electronic devices.
Figure 4 reveals that the junction current decreases with increasing relative twist angle, following the $T$-$E$ characteristics. In addition to this behavior, it is also important to note that the threshold bias voltage $V_{th}$ of electron conduction depends strongly on the twist angle $\theta$, which is clearly noticed by comparing the red and green curves in Fig. 4.
In order to explore the dependence of electron conduction through the biphenyl
molecule for any arbitrary angle of twist, in Fig. 5 we present the variation of junction current as a function of relative twist angle $\theta$ for some typical bias voltages.
Figure 9: (Color online). Internal current distribution in the molecule for its planar conformation when the electrodes are coupled to the atomic sites $6$ and $9$, for the same bias voltage taken in Fig. 8. The directions of the corresponding magnetic fields at the ring centers are illustrated by the encircled cross and dot representing downward (into page) and upward (out of page) directions, respectively.
The spectrum shows that
for the planar conformation the total current amplitude is maximum; it gradually decreases with the relative twist angle and eventually drops to zero when $\theta=\pi/2$. Thus, at $\theta=\pi/2$ no electron conduction takes place through this molecular bridge system, while for other choices of $\theta$ electrons can transfer through the molecule from the source to the drain electrode, which suggests a conformation-dependent switching action using the biphenyl molecule.
A significant change in the transmission spectrum is realized when the
electrodes are coupled to the molecule in such a way that the upper and lower
arms of each molecular ring have unequal lengths. In Fig. 6 we present the
results for such a particular configuration where the source and drain are
attached to the molecular sites $6$ and $9$, respectively. The red, green and
blue curves correspond to the results for the identical parameter values
chosen in Fig. 3. From the transmission curves (red and green) we observe
that for a wide energy range across $E=0$, electron conduction does not take
place, and also the widths of some resonant peaks get reduced enormously
compared to the symmetric configuration where upper and lower arms in each of
the two molecular rings are identical (Fig. 3), even though the molecule-to-
electrode coupling strength is kept unchanged. This is solely due to the
effect of quantum interference among the electronic waves passing through
different arms of the molecular rings, and it can be more clearly analyzed
through the following arguments. For a fixed molecular coupling, the
broadening of different resonant peaks which results from the overlap of
neighboring peaks depends on the location of energy levels, as discussed
earlier. The positions of these energy levels, on the other hand, are directly
associated with the molecule itself and the real parts of the self-energy
matrices $\Sigma_{S}$ and $\Sigma_{D}$ which correspond to the shift of the
energy eigenstates of the sample sandwiched between two electrodes. Thus for a
particular molecule-to-electrode configuration we get one set of resonating
energy levels, while for the other configuration a different set of energy
levels is obtained. These generate transmission peaks with different widths
associated with the level spacing. If the molecular coupling strength is low
enough, then a minor shift of molecular energy levels takes place, and
therefore, almost identical $T$-$E$ spectrum will be observed for different
molecule-to-electrode configurations. But, for moderate coupling strength one
can regulate electron conduction through the bridge system in a tunable way by
introducing more asymmetry between the two arms. This behavior is nicely reflected in the current-voltage characteristics. The results are presented in
Fig. 7. It is clearly observed that for a particular bias voltage the current
amplitude decreases significantly compared to the symmetric configuration,
Fig. 4. The symmetry breaking between the molecular arms also tunes the
threshold voltage $V_{th}$ of electron conduction across the molecular wire
for a particular twist angle $\theta$, as found by comparing Figs. 4 and 7.
### III.2 Circular current and magnetic field
Now we focus our attention on the behavior of circular currents and associated
magnetic fields at the ring centers of the molecule. These factors are highly
sensitive to the molecule itself as well as the molecule-to-electrode
interface geometry. To address these issues, we start with the current
distribution within the molecule when it is coupled to the electrodes in such
a way that the upper and lower arms of the molecular rings have identical
lengths. The current distribution is shown in Fig. 8, where the blue and green
arrows indicate the bond currents in upper and lower arms of the rings,
respectively. The arrow sizes represent the magnitudes of bond currents and
they are computed when the bias voltage is fixed at $4$V. Here $I_{T}$ is the
net junction current, shown by the red arrow, which is distributed among
different branches at the junction point. For geometrical symmetry reasons, the magnitudes of the bond currents in the upper and lower arms of the two rings are exactly identical and, since they propagate in opposite directions, no net circulating current appears, which results in vanishing magnetic fields at the ring centers.
In order to establish circular currents in these two molecular rings, we attach the electrodes asymmetrically such that the upper and lower arms of the rings have unequal lengths. The internal current distribution for such a particular configuration is illustrated in Fig. 9, where we set the same bias voltage as taken in Fig. 8. In this situation the bond currents acquire unequal magnitudes, and accordingly, circular currents are established in the two molecular rings.
Figure 10: (Color online). Circular currents and associated magnetic fields at the ring centers of the biphenyl molecule as a function of bias voltage when the electrodes are coupled to the molecule following the configuration prescribed in Fig. 9. The red and black curves describe the results for the left and right rings, respectively. The twist angle $\theta$ is fixed at zero.
Figure 11: (Color online). Circular currents and corresponding magnetic fields at the ring centers for a specific voltage bias ($V=1.5$V) as a function of twist angle when the molecule is sandwiched between electrodes according to the configuration taken in Fig. 9. The red and black lines carry the same meaning as in Fig. 10.
The magnetic fields at
the ring centers associated with these circular currents are also shown. In the left ring, the magnetic field is directed downward, represented by an encircled cross, while in the other ring it is directed upward, illustrated by an encircled dot. We calculate the magnetic fields using the Biot-Savart law, Eq. 13, and scale them in units of $6\mu_{0}/4\pi R$, where $R$ is the perpendicular distance from the center to
any arm of the molecular ring and the factor $6$ appears due to the existence
of six bonds in each benzene ring.
Figure 10 demonstrates the magnitudes of the circular currents and the
associated magnetic fields at the ring centers as a function of voltage bias
for the planar conformation of the biphenyl molecule. From the spectrum we
notice that in one voltage regime ($\sim 0-1.3$V) no circulating current appears, while in other voltage regimes finite circular currents are available, and the sign of these currents also changes depending on the voltage region.
The associated magnetic fields also follow the same behavior, as illustrated
in the spectrum. This phenomenon can be explained as follows. The circular
current in a loop geometry is associated with energy eigenstates that can be described as current-carrying states with the current flowing in opposite directions. Now, for a finite bias voltage, whenever one of these resonant
states lies in the Fermi window, associated with the applied voltage bias and
the nature of voltage drops along the molecular wire, we will get the
corresponding circular current. When more than one resonant state comes within this Fermi window, all of them contribute to the current and provide a net signal, and in the particular case when they mutually cancel each other, the net signal becomes zero. The sign of this net circular current, or the direction of
associated magnetic field depends on which resonant states dominate the
others, which again certainly depends on the applied bias voltage. From the
above analysis we can clearly understand the vanishing nature of circulating
current in the above mentioned voltage region ($\sim 0-1.3$V), since up to
this voltage window no resonant state appears which can contribute to the
circulating current. This, on the other hand, is also justified from the red
curve in $T$-$E$ spectrum, Fig. 6, which is drawn for the planar conformation
of the molecule. It indicates that within the energy window $-0.65$ to $+0.65$, i.e., up to a bias voltage of $1.3$V, no electron transmission takes place through the molecule, which results in zero circulating current. For the
other voltage regimes finite circular currents are available depending on the
voltage region.
Finally, we focus on the variation of circular currents and corresponding
magnetic fields at the ring centers as a function of relative twist angle
$\theta$, when the voltage bias is kept constant. The results are shown in
Fig. 11, where the two differently colored curves carry the same meaning as in Fig. 10. Quite interestingly, we see that the magnitudes of the magnetic fields at the centers of the two molecular rings decrease monotonically with the relative twist angle $\theta$, and for large enough $\theta$ they eventually reduce to zero. Thus, for a particular bias voltage one can tune the strength of the magnetic field established at the ring center due to this circular current simply by twisting one benzene ring relative to the other, and hence, by placing a local spin or a magnetic ion at the ring center, which will interact with the magnetic field, spin-selective transmission can be achieved through this molecular system. This conformation-dependent spin-selective transmission will be investigated in a forthcoming paper.
## IV Conclusion
In conclusion, we have investigated in detail the conformation-dependent two-
terminal electron transport through a biphenyl molecule within a simple tight-
binding framework using Green’s function formalism. Two principal results have
been obtained and analyzed. First, the dependence of the electronic transmission probability and the overall junction current on the molecular twist has been discussed. Our results point to the possibility of a conformation-dependent switching action using this molecule. Second, we have investigated
the variation of circular currents and associated magnetic fields developed at
the ring centers as a function of the relative twist angle. Tuning this angle,
one can tailor the strength of the magnetic field at the ring centers, and we believe that the present analysis may point to the possibility of designing molecular spintronic devices using organic molecules with loop substructures.
Throughout our work we have ignored the inter- and intra-site Coulomb
interactions as well as the effect of the electrodes, which we plan to
consider in our future works. Another important assumption is the zero
temperature approximation. Though all the results presented in this
communication are worked out in the absolute zero temperature limit, the results
should remain valid even at finite temperatures ($\sim 300\,$K) since the
broadening of the energy levels of the biphenyl molecule due to its coupling
with the metal electrodes is much higher than that of the thermal broadening
datta .
## V Acknowledgment
The author is thankful to Prof. Abraham Nitzan for many stimulating
discussions.
## References
* (1) A. Nitzan and M. A. Ratner, Science 300, 1384 (2003).
* (2) F. Chen and N. J. Tao, Acc. Chem. Res. 42, 429 (2009).
* (3) A. Aviram and M. Ratner, Chem. Phys. Lett. 29, 277 (1974).
* (4) M. D. Ventra, S. T. Pentelides, and N. D. Lang, Appl. Phys. Lett. 76, 3448 (2000).
* (5) M. D. Ventra, N. D. Lang, and S. T. Pentelides, Chem. Phys. 281, 189 (2002).
* (6) D. M. Cardamone, C. A. Stafford, and S. Mazumdar, Nano Lett. 6, 2422 (2006).
* (7) K. Tagami, L. Wang, and M. Tsukada, Nano Lett. 4, 209 (2004).
* (8) P. Orellana and F. Claro, Phys. Rev. Lett. 90, 178302 (2003).
* (9) J. H. Ojeda, R. P. A. Lima, F. Domínguez-Adame, and P. A. Orellana, J. Phys.: Condens. Matter 21, 285105 (2009).
* (10) M. Araidai and M. Tsukada, Phys. Rev. B 81, 235114 (2010).
* (11) K. Walczak, Cent. Eur. J. Chem. 2, 524 (2004).
* (12) P. Dutta, S. K. Maiti, and S. N. Karmakar, Org. Electron. 11, 1120 (2010).
* (13) M. Dey, S. K. Maiti, and S. N. Karmakar, Org. Electron. 12, 1017 (2011).
* (14) S. K. Maiti, Physica B 394, 33 (2007).
* (15) S. K. Maiti, Solid State Commun. 150, 1269 (2010).
* (16) M. A. Reed, C. Zhou, C. J. Muller, T. P. Burgin, and J. M. Tour, Science 278, 252 (1997).
* (17) M. Magoga and C. Joachim, Phys. Rev. B 59, 16011 (1999).
* (18) J.-P. Launay and C. D. Coudret, in: A. Aviram and M. A. Ratner (Eds.), Molecular Electronics, New York Academy of Sciences, New York, (1998).
* (19) R. Baer and D. Neuhauser, Chem. Phys. 281, 353 (2002).
* (20) R. Baer and D. Neuhauser, J. Am. Chem. Soc. 124, 4200 (2002).
* (21) D. Walter, D. Neuhauser, and R. Baer, Chem. Phys. 299, 139 (2004).
* (22) R. H. Goldsmith, M. R. Wasielewski, and M. A. Ratner, J. Phys. Chem. B 110, 20258 (2006).
* (23) J. Chen, M. A. Reed, A. M. Rawlett, and J. M. Tour, Science 286, 1550 (1999).
* (24) T. Dadosh, Y. Gordin, R. Krahne, I. Khivrich, D. Mahalu, V. Frydman, J. Sperling, A. Yacoby, and I. Bar-Joseph, Nature 436, 677 (2005).
* (25) C. M. Fischer, M. Burghard, S. Roth, and K. V. Klitzing, Appl. Phys. Lett. 66, 3331 (1995).
* (26) X. D. Cui, A. Primak, X. Zarate, J. Tomfohr, O. F. Sankey, A. L. Moore, T. A. Moore, D. Gust, G. Harris, S. M. Lindsay, Science 294, 571 (2001).
* (27) J. K. Gimzewski and C. Joachim, Science 283, 1683 (1999).
* (28) M. Tsukada, K. Tagami, K. Hirose, and N. Kobayashi, J. Phys. Soc. Jpn. 74, 1079 (2005).
* (29) S. Nakanishi and M. Tsukada, Surf. Sci. 438, 305 (1999).
* (30) L. Wang, K. Tagami, and M. Tsukada, Jpn. J. Appl. Phys. 43, 2779 (2004).
* (31) D. Rai, O. Hod, and A. Nitzan, J. Phys. Chem. C 114, 20583 (2010).
* (32) D. Rai, O. Hod, and A. Nitzan, J. Phys. Chem. Lett. 2, 2118 (2011).
* (33) D. Rai, O. Hod, and A. Nitzan, Phys. Rev. B 85, 155440 (2012).
* (34) G. Stefanucci, E. Perfetto, S. Bellucci, and M. Cini, Phys. Rev. B 79, 073406 (2009).
* (35) K. Tagami and M. Tsukada, Curr. Appl. Phys. 3, 439 (2003).
* (36) P. Sautet and C. Joachim, Chem. Phys. Lett. 153, 511 (1988).
* (37) M. Büttiker, Y. Imry, and R. Landauer, Phys. Lett. A 96, 365 (1983).
* (38) L. P. Levy, G. Dolan, J. Dunsmuir, and H. Bouchiat, Phys. Rev. Lett. 64, 2074 (1990).
* (39) H. F. Cheung, Y. Gefen, E. K. Reidel, and W. H. Shih, Phys. Rev. B 37, 6050 (1988).
* (40) S. K. Maiti, J. Chowdhury, and S. N. Karmakar, Phys. Lett. A 332, 497 (2004).
* (41) S. K. Maiti, J. Chowdhury, and S. N. Karmakar, Solid State Commun. 135, 278 (2005).
* (42) S. K. Maiti, M. Dey, S. Sil, A. Chakrabarti, and S. N. Karmakar, Europhys. Lett. 95, 57008 (2011).
* (43) S. K. Maiti, J. Appl. Phys. 110, 064306 (2011).
* (44) S. K. Maiti, Solid State Commun. 150, 2212 (2010).
* (45) S. K. Maiti, Physica E 31, 117 (2006).
* (46) D. Rai and M. Galperin, Phys. Rev. B 86, 045420 (2012).
* (47) L. Venkataraman, J. E. Klare, C. Nuckolls, M. S. Hybertsen, and M. L. Steigerwald, Nature 442, 904 (2006).
* (48) S. Datta, Electronic transport in mesoscopic systems, Cambridge University Press, Cambridge (1997).
* (49) A. Nitzan, Annu. Rev. Phys. Chem. 52, 681 (2001).
* (50) S. K. Maiti, Org. Electron. 8, 575 (2007).
* (51) S. K. Maiti, Phys. Scr. 75, 62 (2007).
* (52) S. K. Maiti, Solid State Commun. 149, 2146 (2009).
* (53) S. K. Maiti, Solid State Commun. 149, 973 (2009).
|
arxiv-papers
| 2013-02-16T08:21:17 |
2024-09-04T02:49:41.820705
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Santanu K. Maiti",
"submitter": "Santanu Maiti K.",
"url": "https://arxiv.org/abs/1302.3944"
}
|
1302.3965
|
# Perturbation analysis of bounded homogeneous generalized inverses on Banach
spaces
Jianbing Cao
Department of mathematics, Henan Institute of Science and Technology
Xinxiang, Henan, 453003, P.R. China
Department of Mathematics, East China Normal University,
Shanghai 200241, P.R. China
Email: [email protected] Yifeng Xue
Department of Mathematics, East China Normal University,
Shanghai 200241, P.R. China Email: [email protected]; Corresponding
author
###### Abstract
Let $X,Y$ be Banach spaces and $T:X\to Y$ be a bounded linear operator. In
this paper, we initiate the study of the perturbation problems for bounded
homogeneous generalized inverse $T^{h}$ and quasi–linear projector generalized
inverse $T^{H}$ of $T$. Some applications to the representations and
perturbations of the Moore–Penrose metric generalized inverse $T^{M}$ of $T$
are also given. The obtained results in this paper extend some well–known
results for linear operator generalized inverses in this field.
2010 Mathematics Subject Classification: Primary 47A05; Secondary 46B20
Key words: homogeneous operator, stable perturbation, quasi–additivity,
generalized inverse.
## 1 Introduction
The expression and perturbation analysis of the generalized inverses (resp.
the Moore–Penrose inverses) of bounded linear operators on Banach spaces
(resp. Hilbert spaces) have been widely studied since Nashed’s book [18] was
published in 1976. Ten years ago, Chen and Xue proposed a notation so–called
the stable perturbation of a bounded operator instead of the rank–preserving
perturbation of a matrix in [8]. Using this new notation, they established the
perturbation analyses for the Moore–Penrose inverse and the least square
problem on Hilbert spaces in [6, 9, 26]. Meanwhile, Castro–González and Koliha
established the perturbation analysis for Drazin inverse by using of the
gap–function in [4, 5, 14]. Later, some of their results were generalized by
Chen and Xue in [27, 28] in terms of stable perturbation.
Throughout this paper, $X,Y$ are always Banach spaces over real field
$\mathbb{R}$ and $B(X,Y)$ is the Banach space consisting of bounded linear
operators from $X$ to $Y$. For $T\in B(X,Y)$, let $\mathcal{N}(T)$ (resp.
$\mathcal{R}(T)$) denote the null space (resp. range) of $T$. It is well–known
that if $\mathcal{N}(T)$ and $\mathcal{R}(T)$ are topologically complemented
in the spaces $X$ and $Y$, respectively, then there exists a (projector)
generalized inverse $T^{+}\in B(Y,X)$ of $T$ such that
$TT^{+}T=T,\quad T^{+}TT^{+}=T^{+},\quad T^{+}T=I_{X}-P_{\mathcal{N}(T)},\quad
TT^{+}=Q_{\mathcal{R}(T)},$
where $P_{\mathcal{N}(T)}$ and $Q_{\mathcal{R}(T)}$ are the bounded linear
projectors from $X$ and $Y$ onto $\mathcal{N}(T)$ and $\mathcal{R}(T)$,
respectively (cf. [6, 18, 25]). But, in general, not every closed subspace in
a Banach space is complemented. Thus the linear generalized inverse $T^{+}$ of
$T$ may not exist. In this case, we may seek other types of generalized
inverses for $T$. Motivated by the ideas of linear generalized inverses and
metric generalized inverses (cf. [18, 20]), by using the so–called homogeneous
(resp. quasi–linear) projector in Banach space, Wang and Li defined the
homogeneous (resp. quasi–linear) generalized inverse in [22]. Then, some
further study on these types of generalized inverses in Banach space was given
in [1, 17]. More importantly, from the results in [17, 20], we know that, in some reflexive Banach spaces $X$ and $Y$, for an operator $T\in B(X,Y)$, there may exist a bounded quasi–linear (projector) generalized inverse of $T$, which in general is neither a linear nor a metric generalized inverse of $T$. So,
from this point of view, it is important and necessary to study homogeneous
and quasi–linear (projector) generalized inverses in Banach spaces.
Since homogeneous (or quasi–linear) projectors in Banach spaces are no longer linear, the linear projector generalized inverse and the homogeneous (or quasi–linear) projector generalized inverse of a linear operator in Banach spaces are quite different. Motivated by the new perturbation results of
closed linear generalized inverses [12], in this paper, we initiate the study
of the following problems for bounded homogeneous (resp. quasi–linear
projector) generalized inverse: let $T\in B(X,Y)$ with a bounded homogeneous
(resp. quasi–linear projector) generalized inverse $T^{h}$ (resp. $T^{H}$),
what conditions on the small perturbation $\delta T$ can guarantee that the
bounded homogeneous (resp. quasi–linear projector) generalized inverse
$\bar{T}^{h}$ (resp. $\bar{T}^{H}$) of the perturbed operator
$\bar{T}=T+\delta T$ exists? Furthermore, if it exists, when does
$\bar{T}^{h}$ (resp. $\bar{T}^{H}$) have the simplest expression
$(I_{X}+T^{h}\delta T)^{-1}T^{h}$ (resp. $(I_{X}+T^{H}\delta T)^{-1}T^{H}$)?
With the concept of quasi–additivity and the notion of stable perturbation in [8], we will present some perturbation results on homogeneous
generalized inverses and quasi–linear projector generalized inverses in Banach
spaces. Explicit representation and perturbation for the Moore–Penrose metric
generalized inverse of the perturbed operator are also given.
## 2 Preliminaries
Let $T\in B(X,Y)\backslash\\{0\\}$. The reduced minimum modulus $\gamma(T)$ of
$T$ is given by
$\gamma(T)=\inf\\{\|Tx\|\,|\,x\in X,\mathrm{dist}(x,\mathcal{N}(T))=1\\},$
(2.1)
where
$\mathrm{dist}(x,\mathcal{N}(T))=\inf\\{\|x-z\|\,|\,z\in\mathcal{N}(T)\\}$. It
is well–known that $\mathcal{R}(T)$ is closed in $Y$ iff $\gamma(T)>0$ (cf.
[16, 28]). From (2.1), we can obtain the following useful inequality:
$\|Tx\|\geq\gamma(T)\,\mathrm{dist}(x,\mathcal{N}(T)),\quad\forall\,x\in X.$
Recall from [1, 23] that a subset $D$ in $X$ is said to be homogeneous if $\lambda\,x\in D$ whenever $x\in D$ and $\lambda\in\mathbb{R}$; a mapping $T\colon X\rightarrow Y$ is said to be a bounded homogeneous operator if $T$
maps every bounded set in $X$ into a bounded set in $Y$ and
$T(\lambda\,x)=\lambda\,T(x)$ for every $x\in X$ and every
$\lambda\in\mathbb{R}$.
Let $H(X,Y)$ denote the set of all bounded homogeneous operators from $X$ to
$Y$. Equipped with the usual linear operations on $H(X,Y)$ and norm on $T\in
H(X,Y)$ defined by $\|T\|=\sup\\{\|Tx\|\,|\,\|x\|=1,x\in X\\}$, we can easily
prove that $(H(X,Y),\|\cdot\|)$ is a Banach space (cf. [20, 23]).
###### Definition 2.1.
Let $M$ be a subset of $X$ and $T\colon X\rightarrow Y$ be a mapping. We say that $T$ is quasi–additive on $M$ if $T$ satisfies
$T(x+z)=T(x)+T(z),\qquad\forall\;x\in X,\;\forall\;z\in M.$
Now we give the concept of quasi–linear projector in Banach spaces.
###### Definition 2.2 (cf. [17, 20]).
Let $P\in H(X,X)$. If $P^{2}=P$, we call $P$ a homogeneous projector. In
addition, if $P$ is also quasi–additive on $\mathcal{R}(P)$, i.e., for any
$x\in X$ and any $z\in\mathcal{R}(P)$,
$P(x+z)=P(x)+P(z)=P(x)+z,$
then we call $P$ a quasi–linear projector.
Clearly, from Definition 2.2, we see that bounded linear projectors and orthogonal projectors in Hilbert spaces are all quasi–linear projectors.
Let $P\in H(X,X)$ be a quasi–linear projector. Then by [17, Lemma 2.5],
$\mathcal{R}(P)$ is a closed linear subspace of $X$ and
$\mathcal{R}(I-P)=\mathcal{N}(P)$. Thus, we can define “the quasi–linearly
complement” of a closed linear subspace as follows. Let $V$ be a closed
subspace of $X$. If there exists a bounded quasi–linear projector $P$ on $X$
such that $V=\mathcal{R}(P)$, then $V$ is said to be bounded quasi–linearly
complemented in $X$ and $\mathcal{N}(P)$ is the bounded quasi–linear
complement of $V$ in $X$. In this case, as usual, we may write
$X=V\dotplus\mathcal{N}(P)$, where $\mathcal{N}(P)$ is a homogeneous subset of
$X$ and “$\dotplus$” means that $V\cap\mathcal{N}(P)=\\{0\\}$ and
$X=V+\mathcal{N}(P)$.
###### Definition 2.3.
Let $T\in B(X,Y)$. If there is $T^{h}\in H(Y,X)$ such that
$TT^{h}T=T,\ \quad T^{h}TT^{h}=T^{h},$
then we call $T^{h}$ a bounded homogeneous generalized inverse of $T$.
Furthermore, if $T^{h}$ is also quasi–additive on $\mathcal{R}(T)$, i.e., for
any $y\in Y$ and any $z\in\mathcal{R}(T)$, we have
$T^{h}(y+z)=T^{h}(y)+T^{h}(z),$
then we call $T^{h}$ a bounded quasi–linear generalized inverse of $T$.
Obviously, the concept of bounded homogeneous (or quasi-linear) generalized
inverse is a generalization of bounded linear generalized inverse.
Definition 2.3 was first given in [1] for linear transformations and
bounded linear operators. The existence of a homogeneous generalized inverse
of $T\in B(X,Y)$ is also given in [1]. In the following, we will give a new
proof of the existence of a homogeneous generalized inverse of a bounded
linear operator.
###### Proposition 2.4.
Let $T\in B(X,Y)\backslash\\{0\\}$. Then $T$ has a homogeneous generalized
inverse $T^{h}\in H(Y,X)$ iff $\mathcal{R}(T)$ is closed and there exist a
bounded quasi–linear projector $P_{\mathcal{N}(T)}\colon X\to\mathcal{N}(T)$
and a bounded homogeneous projector $Q_{\mathcal{R}(T)}:Y\to\mathcal{R}(T)$.
###### Proof.
Suppose that there is $T^{h}\in H(Y,X)$ such that $TT^{h}T=T$ and
$T^{h}TT^{h}=T^{h}$. Put $P_{\mathcal{N}(T)}=I_{X}-T^{h}T$ and
$Q_{\mathcal{R}(T)}=TT^{h}$. Then $P_{\mathcal{N}(T)}\in H(X,X)$,
$Q_{\mathcal{R}(T)}\in H(Y,Y)$ and
$\displaystyle P_{\mathcal{N}(T)}^{2}$
$\displaystyle=(I_{X}-T^{h}T)(I_{X}-T^{h}T)=I_{X}-T^{h}T-T^{h}T(I_{X}-T^{h}T)=P_{\mathcal{N}(T)},$
$\displaystyle Q_{\mathcal{R}(T)}^{2}$
$\displaystyle=TT^{h}TT^{h}=TT^{h}=Q_{\mathcal{R}(T)}.$
From $TT^{h}T=T$ and $T^{h}TT^{h}=T^{h}$, we can get that
$\mathcal{N}(T)=\mathcal{R}(P_{\mathcal{N}(T)})$ and
$\mathcal{R}(T)=\mathcal{R}(Q_{\mathcal{R}(T)})$. Since for any $x\in X$ and
any $z\in\mathcal{N}(T)$,
$\displaystyle P_{\mathcal{N}(T)}(x+z)$
$\displaystyle=x+z-T^{h}T(x+z)=x+z-T^{h}Tx$
$\displaystyle=P_{\mathcal{N}(T)}x+z=P_{\mathcal{N}(T)}x+P_{\mathcal{N}(T)}z,$
it follows that $P_{\mathcal{N}(T)}$ is quasi–linear. Obviously, we see that
$Q_{\mathcal{R}(T)}:Y\to\mathcal{R}(T)$ is a bounded homogeneous projector.
Now for any $x\in X$,
$\mathrm{dist}(x,\mathcal{N}(T))\leq\|x-P_{\mathcal{N}(T)}x\|=\|T^{h}Tx\|\leq\|T^{h}\|\|Tx\|.$
Thus, $\gamma(T)\geq\dfrac{1}{\|T^{h}\|}>0$ and hence $\mathcal{R}(T)$ is
closed in $Y$.
Conversely, for $x\in X$, let $[x]$ stand for equivalence class of $x$ in
$X/\mathcal{N}(T)$. Define mappings
$\phi\colon\mathcal{R}(I-P_{\mathcal{N}(T)})\rightarrow X/\mathcal{N}(T)$ and
$\hat{T}\colon X/\mathcal{N}(T)\rightarrow\mathcal{R}(T)$ respectively, by
$\phi(x)=[x],\quad\forall\,x\in\mathcal{R}(I-P_{\mathcal{N}(T)})\ \text{and}\
\hat{T}([z])=Tz,\quad\forall\,z\in X.$
Clearly, $\hat{T}$ is bijective. Noting that the quotient space
$X/\mathcal{N}(T)$ with the norm $\|[x]\|=\mathrm{dist}(x,\mathcal{N}(T))$,
$\forall\,x\in X$, is a Banach space (cf. [25]) and
$\|Tx\|\geq\gamma(T)\,\mathrm{dist}(x,\mathcal{N}(T))$ with $\gamma(T)>0$,
$\forall\,x\in X$, we have $\|\hat{T}[x]\|\geq\gamma(T)\|[x]\|$,
$\forall\,x\in X$. Therefore,
$\|\hat{T}^{-1}y\|\leq\dfrac{1}{\gamma(T)}\|y\|$,
$\forall\,y\in\mathcal{R}(T)$.
Since $P_{\mathcal{N}(T)}$ is a quasi–linear projector, it follows that $\phi$
is bijective and $\phi^{-1}([x])=(I-P_{\mathcal{N}(T)})x$, $\forall\,x\in X$.
Obviously, $\phi^{-1}$ is homogeneous and for any $z\in\mathcal{N}(T)$,
$\|\phi^{-1}([x])\|=\|(I-P_{\mathcal{N}(T)})(x-z)\|\leq(1+\|P_{\mathcal{N}(T)}\|)\|x-z\|$
which implies that $\|\phi^{-1}\|\leq 1+\|P_{\mathcal{N}(T)}\|$. Put
$T_{0}=\hat{T}\circ\phi\colon\mathcal{R}(I-P_{\mathcal{N}(T)})\rightarrow\mathcal{R}(T)$.
Then
$T_{0}^{-1}=\phi^{-1}\circ\hat{T}^{-1}\colon\mathcal{R}(T)\rightarrow\mathcal{R}(I-P_{\mathcal{N}(T)})$
is homogeneous and bounded with
$\|T_{0}^{-1}\|\leq\gamma(T)^{-1}(1+\|P_{\mathcal{N}(T)}\|)$. Set
$T^{h}=(I-P_{\mathcal{N}(T)})T_{0}^{-1}Q_{\mathcal{R}(T)}$. Then $T^{h}\in
H(Y,X)$ and
$TT^{h}T=T,\ T^{h}TT^{h}=T^{h},\ TT^{h}=Q_{\mathcal{R}(T)},\
T^{h}T=I_{X}-P_{\mathcal{N}(T)}.$
This finishes the proof. ∎
Recall that a closed subspace $V$ in $X$ is Chebyshev if for any $x\in X$,
there is a unique $x_{0}\in V$ such that $\|x-x_{0}\|=\mathrm{dist}(x,V)$.
Thus, for the closed Chebyshev space $V$, we can define a mapping
$\pi_{V}\colon X\rightarrow V$ by $\pi_{V}(x)=x_{0}$. $\pi_{V}$ is called the metric projector from $X$ onto $V$. From [20], we know that $\pi_{V}$
is a quasi–linear projector with $\|\pi_{V}\|\leq 2$. Then by Proposition 2.4,
we have
###### Corollary 2.5 ([19, 20]).
Let $T\in B(X,Y)\backslash\\{0\\}$ with $\mathcal{R}(T)$ closed. Assume that
$\mathcal{N}(T)$ and $\mathcal{R}(T)$ are Chebyshev subspaces in $X$ and $Y$,
respectively. Then there is $T^{h}\in H(Y,X)$ such that
$TT^{h}T=T,\ T^{h}TT^{h}=T^{h},\ TT^{h}=\pi_{\mathcal{R}(T)},\
T^{h}T=I_{X}-\pi_{\mathcal{N}(T)}.$ (2.2)
The bounded homogeneous generalized inverse $T^{h}$ in (2.2) is called the Moore–Penrose metric generalized inverse of $T$. Such a $T^{h}$ in (2.2) is unique and is denoted by $T^{M}$ (cf. [20]).
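For orientation, it is perhaps worth recording the familiar Hilbert space special case (a standard fact, added here only as a remark): if $X$ and $Y$ are Hilbert spaces, then every closed subspace is Chebyshev, the metric projectors $\pi_{\mathcal{R}(T)}$ and $\pi_{\mathcal{N}(T)}$ are the orthogonal projectors onto $\mathcal{R}(T)$ and $\mathcal{N}(T)$, and (2.2) becomes
$TT^{M}T=T,\quad T^{M}TT^{M}=T^{M},\quad(TT^{M})^{*}=TT^{M},\quad(T^{M}T)^{*}=T^{M}T,$
so that $T^{M}$ coincides with the usual Moore–Penrose inverse $T^{+}$ of $T$.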
###### Corollary 2.6.
Let $T\in B(X,Y)\backslash\\{0\\}$ such that the bounded homogeneous
generalized inverse $T^{h}$ exists. Assume that $\mathcal{N}(T)$ and
$\mathcal{R}(T)$ are Chebyshev subspaces in $X$ and $Y$, respectively. Then
$T^{M}=(I_{X}-\pi_{\mathcal{N}(T)})T^{h}\pi_{\mathcal{R}(T)}$.
###### Proof.
Since $\mathcal{N}(T)$ and $\mathcal{R}(T)$ are Chebyshev subspaces, it
follows from Corollary 2.5 that $T$ has the unique Moore–Penrose metric
generalized inverse $T^{M}$, which satisfies
$TT^{M}T=T,\ T^{M}TT^{M}=T^{M},\ TT^{M}=\pi_{\mathcal{R}(T)},\
T^{M}T=I_{X}-\pi_{\mathcal{N}(T)}.$
Set $T^{\natural}=(I_{X}-\pi_{\mathcal{N}(T)})T^{h}\pi_{\mathcal{R}(T)}$. Then
$T^{\natural}=T^{M}TT^{h}TT^{M}=T^{M}TT^{M}=T^{M}.$ ∎
## 3 Perturbations for bounded homogeneous generalized inverse
In this section, we extend some perturbation results of linear generalized
inverses to bounded homogeneous generalized inverses. We start our
investigation with some lemmas, which prepare for the proof of our main results. The following result is well–known for bounded linear operators; we generalize it to bounded homogeneous operators in the following form.
###### Lemma 3.1.
Let $T\in H(X,Y)$ and $S\in H(Y,X)$ be such that $T$ is quasi–additive on $\mathcal{R}(S)$ and $S$ is quasi–additive on $\mathcal{R}(T)$. Then $I_{Y}+TS$ is invertible in $H(Y,Y)$ if and only if $I_{X}+ST$ is invertible in $H(X,X)$.
###### Proof.
If there is $\Phi\in H(Y,Y)$ such that
$(I_{Y}+TS)\Phi=\Phi(I_{Y}+TS)=I_{Y}$, then
$\displaystyle I_{X}$ $\displaystyle=I_{X}+ST-ST=I_{X}+ST-S((I_{Y}+TS)\Phi)T$
$\displaystyle=I_{X}+ST-((S+STS)\Phi)T\quad(S\ \text{quasi-additive on}\
\mathcal{R}(T))$ $\displaystyle=I_{X}+ST-((I_{X}+ST)S\Phi)T$
$\displaystyle=(I_{X}+ST)(I_{X}-S\Phi T)\quad(T\ \text{quasi--additive on}\
\mathcal{R}(S)).$
Similarly, we also have $I_{X}=(I_{X}-S\Phi T)(I_{X}+ST)$. Thus, $I_{X}+ST$ is
invertible on $X$ with $(I_{X}+ST)^{-1}=(I_{X}-S\Phi T)\in H(X,X)$.
The converse can be proved in the same way. ∎
###### Lemma 3.2.
Let $T\in B(X,Y)$ such that $T^{h}\in H(Y,X)$ exists and let $\delta T\in
B(X,Y)$ such that $T^{h}$ is quasi–additive on $\mathcal{R}(\delta T)$ and
$(I_{X}+T^{h}\delta T)$ is invertible in $B(X,X)$. Then $I_{Y}+\delta
TT^{h}:Y\rightarrow Y$ is invertible in $H(Y,Y)$ and
$\Phi=T^{h}(I_{Y}+\delta TT^{h})^{-1}=(I_{X}+T^{h}\delta T)^{-1}T^{h}$ (3.1)
is a bounded homogeneous operator with $\mathcal{R}(\Phi)=\mathcal{R}(T^{h})$
and $\mathcal{N}(\Phi)=\mathcal{N}(T^{h})$.
###### Proof.
By Lemma 3.1, $I_{Y}+\delta TT^{h}:Y\rightarrow Y$ is invertible in $H(Y,Y)$.
Clearly, $I_{X}+T^{h}\delta T$ is a bounded linear operator and $I_{Y}+\delta
TT^{h}\in H(Y,Y)$. From the equation
$(I_{X}+T^{h}\delta T)T^{h}=T^{h}(I_{Y}+\delta TT^{h})$
and $T^{h}\in H(Y,X)$, we get that $\Phi$ is a bounded homogeneous operator.
Finally, from (3.1), we can obtain that $\mathcal{R}(\Phi)=\mathcal{R}(T^{h})$
and $\mathcal{N}(\Phi)=\mathcal{N}(T^{h})$. ∎
Recall from [8] that for $T\in B(X,Y)$ with bounded linear generalized inverse
$T^{+}\in B(Y,X)$, we say that $\bar{T}=T+\delta T\in B(X,Y)$ is a stable
perturbation of $T$ if $\mathcal{R}(\bar{T})\cap\mathcal{N}(T^{+})=\\{0\\}$.
Now for $T\in B(X,Y)$ with $T^{h}\in H(Y,X)$, we also say that
$\bar{T}=T+\delta T\in B(X,Y)$ is a stable perturbation of $T$ if
$\mathcal{R}(\bar{T})\cap\mathcal{N}(T^{h})=\\{0\\}$.
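As an aside (not part of the paper's argument), the stable perturbation condition and the expression $(I_{X}+T^{h}\delta T)^{-1}T^{h}$ can be checked numerically in the classical finite-dimensional linear case, where $T^{h}$ may be taken to be the Moore–Penrose inverse; the matrices below are purely illustrative.

```python
import numpy as np

# Illustrative check in the linear (matrix) case: T has rank 2 and the
# perturbation dT keeps the range inside span{e1, e2}, so
# R(T + dT) ∩ N(T^+) = {0}, i.e. T + dT is a stable perturbation of T.
T = np.diag([1.0, 2.0, 0.0])
dT = 1e-3 * np.array([[0.1, 0.2, 0.0],
                      [0.3, 0.1, 0.0],
                      [0.0, 0.0, 0.0]])
Tp = np.linalg.pinv(T)                              # plays the role of T^h
Tbar = T + dT
Phi = np.linalg.inv(np.eye(3) + Tp @ dT) @ Tp       # (I_X + T^h dT)^{-1} T^h

print(np.allclose(Tbar @ Phi @ Tbar, Tbar),         # Tbar Phi Tbar = Tbar -> True
      np.allclose(Phi @ Tbar @ Phi, Phi))           # Phi Tbar Phi = Phi -> True
```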
###### Lemma 3.3.
Let $T\in B(X,Y)$ such that $T^{h}\in H(Y,X)$ exists. Suppose that $\delta
T\in B(X,Y)$ such that $T^{h}$ is quasi–additive on $\mathcal{R}(\delta T)$
and $I_{X}+T^{h}\delta T$ is invertible in $B(X,X)$. Put $\bar{T}=T+\delta T$.
If $\mathcal{R}(\bar{T})\cap\mathcal{N}(T^{h})=\\{0\\}$, then
$\mathcal{N}(\bar{T})=(I_{X}+T^{h}\delta T)^{-1}\mathcal{N}(T)\ \text{and}\
\mathcal{R}(\bar{T})=(I_{Y}+\delta TT^{h})\mathcal{R}(T).$
###### Proof.
Set $P=(I_{X}+T^{h}\delta T)^{-1}(I_{X}-T^{h}T)$. We first show that $P^{2}=P$
and $\mathcal{R}(P)=\mathcal{N}(\bar{T})$. Since $T^{h}TT^{h}=T^{h}$, we get
$(I_{X}-T^{h}T)T^{h}\delta T=0$ and then
$\displaystyle(I_{X}-T^{h}T)(I_{X}+T^{h}\delta T)=I_{X}-T^{h}T$ (3.2)
and so that
$\displaystyle I_{X}-T^{h}T=(I_{X}-T^{h}T)(I_{X}+T^{h}\delta T)^{-1}.$ (3.3)
Now, by using (3.2) and (3.3), it is easy to get $P^{2}=P$.
Since $T^{h}$ is quasi–additive on $\mathcal{R}(\delta T)$, we see
$I_{X}-T^{h}T=(I_{X}+T^{h}\delta T)-T^{h}\bar{T}$. Then for any $x\in X$, we
have
$\displaystyle Px$ $\displaystyle=(I_{X}+T^{h}\delta T)^{-1}(I_{X}-T^{h}T)x$
$\displaystyle=(I_{X}+T^{h}\delta T)^{-1}[(I_{X}+T^{h}\delta
T)-T^{h}\bar{T}]x$ $\displaystyle=x-(I_{X}+T^{h}\delta T)^{-1}T^{h}\bar{T}x.$
(3.4)
From (3.4), we get that if $x\in\mathcal{N}(\bar{T})$, then $x\in\mathcal{R}(P)$. Thus, $\mathcal{N}(\bar{T})\subset\mathcal{R}(P)$. Conversely, let $x\in\mathcal{R}(P)$, then $x=Px$. From (3.4), we get $(I_{X}+T^{h}\delta T)^{-1}T^{h}\bar{T}x=0$. Therefore, we have $\bar{T}x\in\mathcal{R}(\bar{T})\cap\mathcal{N}(T^{h})=\\{0\\}$. Thus, $x\in\mathcal{N}(\bar{T})$ and then $\mathcal{R}(P)=\mathcal{N}(\bar{T})$. From the definition of $T^{h}$, we have
$\mathcal{N}(T)=\mathcal{R}(I_{X}-T^{h}T)$. Thus,
$(I_{X}+T^{h}\delta T)^{-1}\mathcal{N}(T)=(I_{X}+T^{h}\delta
T)^{-1}\mathcal{R}(I_{X}-T^{h}T)=\mathcal{R}(P)=\mathcal{N}(\bar{T}).$
Now, we prove that $\mathcal{R}(\bar{T})=(I_{Y}+\delta TT^{h})\mathcal{R}(T)$.
From $(I_{Y}+\delta TT^{h})T=\bar{T}T^{h}T$, we get that $(I_{Y}+\delta
TT^{h})\mathcal{R}(T)\subset\mathcal{R}(\bar{T})$. On the other hand, since
$T^{h}$ is quasi–additive on $\mathcal{R}(\delta T)$ and
$\mathcal{R}(P)=\mathcal{N}(\bar{T})$, we have for any $x\in X$,
$\displaystyle 0$ $\displaystyle=\bar{T}Px=\bar{T}(I_{X}+T^{h}\delta
T)^{-1}(I_{X}-T^{h}T)x$ $\displaystyle=\bar{T}x-\bar{T}(I_{X}+T^{h}\delta
T)^{-1}(T^{h}\delta Tx+T^{h}{T}x)$
$\displaystyle=\bar{T}x-\bar{T}(I_{X}+T^{h}\delta
T)^{-1}T^{h}\bar{T}x=\bar{T}x-\bar{T}T^{h}(I_{Y}+\delta TT^{h})^{-1}\bar{T}x$
$\displaystyle=\bar{T}x-(I_{Y}+\delta TT^{h}-I_{Y}+TT^{h})(I_{Y}+\delta
TT^{h})^{-1}\bar{T}x$ $\displaystyle=(I_{Y}-TT^{h})(I_{Y}+\delta
TT^{h})^{-1}\bar{T}x.$ (3.5)
Since $\mathcal{N}(I_{Y}-TT^{h})=\mathcal{R}(T)$, it follows from (3.5) that
$(I_{Y}+\delta TT^{h})^{-1}\mathcal{R}(\bar{T})\subset\mathcal{R}(T)$, that
is, $\mathcal{R}(\bar{T})\subset(I_{Y}+\delta TT^{h})\mathcal{R}(T)$.
Consequently, $\mathcal{R}(\bar{T})=(I_{Y}+\delta TT^{h})\mathcal{R}(T)$. ∎
Now we can present the main perturbation result for bounded homogeneous
generalized inverse on Banach spaces.
###### Theorem 3.4.
Let $T\in B(X,Y)$ such that $T^{h}\in H(Y,X)$ exists. Suppose that $\delta
T\in B(X,Y)$ such that $T^{h}$ is quasi–additive on $\mathcal{R}(\delta T)$
and $I_{X}+T^{h}\delta T$ is invertible in $B(X,X)$. Put $\bar{T}=T+\delta T$.
Then the following statements are equivalent:
1. $(1)$
$\Phi=T^{h}(I_{Y}+\delta TT^{h})^{-1}$ is a bounded homogeneous generalized
inverse of $\bar{T}$;
2. $(2)$
$\mathcal{R}(\bar{T})\cap\mathcal{N}(T^{h})=\\{0\\}$;
3. $(3)$
$\mathcal{R}(\bar{T})=(I_{Y}+\delta TT^{h})\mathcal{R}(T)$;
4. $(4)$
$(I_{X}+T^{h}\delta T)\mathcal{N}(\bar{T})=\mathcal{N}(T)$;
5. $(5)$
$(I_{Y}+\delta TT^{h})^{-1}\bar{T}\mathcal{N}(T)\subset\mathcal{R}(T)$;
###### Proof.
We prove our theorem by showing that
$(3)\Rightarrow(5)\Rightarrow(4)\Rightarrow(2)\Rightarrow(3)\Rightarrow(1)\Rightarrow(3).$
$(3)\Rightarrow(5)$ Since $\bar{T}\mathcal{N}(T)\subset\mathcal{R}(\bar{T})=(I_{Y}+\delta TT^{h})\mathcal{R}(T)$ by (3), we get $(I_{Y}+\delta TT^{h})^{-1}\bar{T}\mathcal{N}(T)\subset\mathcal{R}(T)$.
$(5)\Rightarrow(4)$. Let $x\in\mathcal{N}(\bar{T})$; since $T^{h}\delta Tx=T^{h}\bar{T}x-T^{h}Tx=-T^{h}Tx$, we see $(I_{X}+T^{h}\delta T)x=x-T^{h}Tx\in\mathcal{N}(T)$. Hence $(I_{X}+T^{h}\delta T)\mathcal{N}(\bar{T})\subset\mathcal{N}(T)$. Now let $x\in\mathcal{N}(T)$; by (5), there exists some $z\in X$ such that $\bar{T}x=(I_{Y}+\delta TT^{h})Tz=\bar{T}T^{h}Tz.$ So
$x-T^{h}Tz\in\mathcal{N}(\bar{T})$ and hence
$(I_{X}+T^{h}\delta T)(x-T^{h}Tz)=(I_{X}-T^{h}T)(x-T^{h}Tz)=x.$
Consequently, $(I_{X}+T^{h}\delta T)\mathcal{N}(\bar{T})=\mathcal{N}(T)$.
$(4)\Rightarrow(2)$. Let $y\in\mathcal{R}(\bar{T})\cap\mathcal{N}(T^{h})$; then there exists an $x\in X$ such that $y=\bar{T}x$ and $T^{h}\bar{T}x=0$. We can check that
$T(I_{X}+T^{h}\delta T)x=Tx+TT^{h}\delta Tx=Tx+TT^{h}\bar{T}x-TT^{h}Tx=0.$
Thus, $(I_{X}+T^{h}\delta T)x\in\mathcal{N}(T)$. By (4), $x\in\mathcal{N}(\bar{T})$ and so $y=\bar{T}x=0$.
$(2)\Rightarrow(3)$ follows from Lemma 3.3.
$(3)\Rightarrow(1)$ Noting that by Lemma 3.2, we have
$\Phi=T^{h}(I_{Y}+\delta TT^{h})^{-1}=(I_{X}+T^{h}\delta T)^{-1}T^{h}$
is a bounded homogeneous operator with $\mathcal{R}(\Phi)=\mathcal{R}(T^{h})$
and $\mathcal{N}(\Phi)=\mathcal{N}(T^{h})$. We need to prove that
$\Phi\bar{T}\Phi=\Phi$ and $\bar{T}\Phi\bar{T}=\bar{T}$. Since $T^{h}$ is
quasi–additive on $\mathcal{R}(\delta T)$, we have
$T^{h}\bar{T}=T^{h}T+T^{h}\delta T$. Therefore,
$\displaystyle\Phi\bar{T}\Phi$ $\displaystyle=(I_{X}+T^{h}\delta T)^{-1}T^{h}\bar{T}(I_{X}+T^{h}\delta T)^{-1}T^{h}$ $\displaystyle=(I_{X}+T^{h}\delta T)^{-1}[(I_{X}+T^{h}\delta T)-(I_{X}-T^{h}T)](I_{X}+T^{h}\delta T)^{-1}T^{h}$ $\displaystyle=(I_{X}+T^{h}\delta T)^{-1}T^{h}-(I_{X}+T^{h}\delta T)^{-1}(I_{X}-T^{h}T)(I_{X}+T^{h}\delta T)^{-1}T^{h}$ $\displaystyle=(I_{X}+T^{h}\delta T)^{-1}T^{h}-(I_{X}+T^{h}\delta T)^{-1}(I_{X}-T^{h}T)T^{h}(I_{Y}+\delta TT^{h})^{-1}$ $\displaystyle=\Phi.$
$\mathcal{R}(\bar{T})=(I_{Y}+\delta TT^{h})\mathcal{R}(T)$ means that
$(I_{Y}-TT^{h})(I_{Y}+\delta TT^{h})^{-1}\bar{T}=0$. So
$\displaystyle\bar{T}\Phi\bar{T}$ $\displaystyle=(T+\delta T)T^{h}(I_{Y}+\delta TT^{h})^{-1}\bar{T}$ $\displaystyle=(I_{Y}+\delta TT^{h}+TT^{h}-I_{Y})(I_{Y}+\delta TT^{h})^{-1}\bar{T}$ $\displaystyle=\bar{T}.$
$(1)\Rightarrow(3)$ From $\bar{T}\Phi\bar{T}=\bar{T}$, we have
$(I_{Y}-TT^{h})(I_{Y}+\delta TT^{h})^{-1}\bar{T}=0$ by the proof of
$(3)\Rightarrow(1)$. Thus, $(I_{Y}+\delta
TT^{h})^{-1}\mathcal{R}(\bar{T})\subset\mathcal{R}(T)$. From $(I_{Y}+\delta
TT^{h})T=\bar{T}T^{h}T$, we get that $(I_{Y}+\delta
TT^{h})\mathcal{R}(T)\subset\mathcal{R}(\bar{T})$. So $(I_{Y}+\delta
TT^{h})\mathcal{R}(T)=\mathcal{R}(\bar{T})$. ∎
###### Corollary 3.5.
Let $T\in B(X,Y)$ such that $T^{h}\in H(Y,X)$ exists. Suppose that $\delta
T\in B(X,Y)$ such that $T^{h}$ is quasi–additive on $\mathcal{R}(\delta T)$
and $\|T^{h}\delta T\|<1$. Put $\bar{T}=T+\delta T$. If
$\mathcal{N}(T)\subset\mathcal{N}(\delta T)$ or $\mathcal{R}(\delta
T)\subset\mathcal{R}(T)$, then $\bar{T}$ has a homogeneous bounded generalized
inverse
$\bar{T}^{h}=T^{h}(I_{Y}+\delta TT^{h})^{-1}=(I_{X}+T^{h}\delta T)^{-1}T^{h}.$
###### Proof.
If $\mathcal{N}(T)\subset\mathcal{N}(\delta T)$, then $\mathcal{N}(T)\subset\mathcal{N}(\bar{T})$. So Condition (5) of Theorem 3.4 holds. If $\mathcal{R}(\delta T)\subset\mathcal{R}(T)$, then $\mathcal{R}(\bar{T})\subset\mathcal{R}(T)$. So
$\mathcal{R}(\bar{T})\cap\mathcal{N}(T^{h})\subset\mathcal{R}(T)\cap\mathcal{N}(T^{h})=\\{0\\}$
and consequently, $\bar{T}$ has the homogeneous bounded generalized inverse
$T^{h}(I_{Y}+\delta TT^{h})^{-1}=(I_{X}+T^{h}\delta T)^{-1}T^{h}$ by Theorem
3.4. ∎
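To make Theorem 3.4 and Corollary 3.5 concrete, here is a minimal finite-dimensional sanity check (a sketch only, not the general Banach-space setting): in a finite-dimensional space the Moore–Penrose pseudoinverse is a bounded linear (hence homogeneous) generalized inverse, and it plays the role of $T^{h}$. All matrix sizes, the perturbation and the random seed below are illustrative assumptions, not taken from the paper.

```python
# Sketch: verify that (I + T^h dT)^{-1} T^h is a generalized inverse of T + dT
# when R(dT) is contained in R(T) and ||T^h dT|| < 1 (Corollary 3.5).
import numpy as np

rng = np.random.default_rng(0)
T = rng.standard_normal((5, 3)) @ rng.standard_normal((3, 4))   # a rank-3, 5x4 matrix
Th = np.linalg.pinv(T)                                          # plays the role of T^h

C = rng.standard_normal((4, 4))
dT = T @ C                                                      # R(dT) subset of R(T)
dT *= 0.1 / np.linalg.norm(Th @ dT, 2)                          # ensures ||T^h dT|| < 1
Tbar = T + dT

Phi = np.linalg.solve(np.eye(4) + Th @ dT, Th)                  # (I_X + T^h dT)^{-1} T^h

print(np.allclose(Tbar @ Phi @ Tbar, Tbar))                     # Tbar Phi Tbar = Tbar
print(np.allclose(Phi @ Tbar @ Phi, Phi))                       # Phi Tbar Phi = Phi
```

Here $\mathcal{R}(\delta T)\subset\mathcal{R}(T)$ forces $\mathcal{R}(\bar{T})\cap\mathcal{N}(T^{h})=\\{0\\}$, which is exactly condition (2) of Theorem 3.4.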
###### Proposition 3.6.
Let $T\in B(X,Y)$ with $\mathcal{R}(T)$ closed. Assume that $\mathcal{N}(T)$
and $\mathcal{R}(T)$ are Chebyshev subspaces in $X$ and $Y$, respectively. Let
$\delta T\in B(X,Y)$ such that $T^{M}$ is quasi–additive on
$\mathcal{R}(\delta T)$ and $\|T^{M}\delta T\|<1$. Put $\bar{T}=T+\delta T$.
Suppose that $\mathcal{N}(\bar{T})$ and $\overline{\mathcal{R}(\bar{T})}$ are
Chebyshev subspaces in $X$ and $Y$, respectively. If
$\mathcal{R}(\bar{T})\cap\mathcal{N}(T^{M})=\\{0\\}$, then
$\mathcal{R}(\bar{T})$ is closed in $Y$ and $\bar{T}$ has the Moore–Penrose
metric generalized inverse
$\bar{T}^{M}=(I_{X}-\pi_{\mathcal{N}(\bar{T})})(I_{X}+T^{M}\delta
T)^{-1}T^{M}\pi_{\mathcal{R}(\bar{T})}$
with $\|\bar{T}^{M}\|\leq\dfrac{2\|T^{M}\|}{1-\|T^{M}\delta T\|}$.
###### Proof.
$T^{M}$ exists by Corollary 2.5. Since $T^{M}\delta T$ is $\mathbb{R}$–linear
and $\|T^{M}\delta T\|<1$, we have $I_{X}+T^{M}\delta T$ is invertible in
$B(X,X)$. By Theorem 3.4 and Proposition 2.4,
$\mathcal{R}(\bar{T})\cap\mathcal{N}(T^{M})=\\{0\\}$ implies that
$\mathcal{R}(\bar{T})$ is closed and $\bar{T}$ has a bounded homogeneous
generalized inverse $\bar{T}^{h}=(I_{X}+T^{M}\delta T)^{-1}T^{M}$. Then by
Corollary 2.6, $\bar{T}^{M}$ has the form
$\bar{T}^{M}=(I_{X}-\pi_{\mathcal{N}(\bar{T})})(I_{X}+T^{M}\delta
T)^{-1}T^{M}\pi_{\mathcal{R}(\bar{T})}.$
Note that
$\|x-\pi_{\mathcal{N}(\bar{T})}x\|=\mathrm{dist}(x,\mathcal{N}(\bar{T}))\leq\|x\|$,
$\forall\,x\in X$. So $\|I_{X}-\pi_{\mathcal{N}(\bar{T})}\|\leq 1$. Therefore,
$\|\bar{T}^{M}\|\leq\|I_{X}-\pi_{\mathcal{N}(\bar{T})}\|\|(I_{X}+T^{M}\delta
T)^{-1}T^{M}\|\|\pi_{\mathcal{R}(\bar{T})}\|\leq\frac{2\|T^{M}\|}{1-\|T^{M}\delta
T\|}.$
This completes the proof. ∎
## 4 Perturbation for quasi–linear projector generalized inverse
It is known that the range of a bounded quasi–linear projector on a Banach space is closed (see [17, Lemma 2.5]). Thus, from Definition 2.3 and the proof of Proposition 2.4, the following result is obvious.
###### Proposition 4.1.
Let $T\in B(X,Y)\backslash\\{0\\}$. Then $T$ has a bounded quasi–linear
generalized inverse $T^{h}\in H(Y,X)$ iff there exist a bounded linear
projector $P_{\mathcal{N}(T)}\colon X\to\mathcal{N}(T)$ and a bounded
quasi–linear projector $Q_{\mathcal{R}(T)}:Y\to\mathcal{R}(T)$.
Motivated by related results in the papers [1, 17, 22] and by the definition of the oblique projection generalized inverse in Banach spaces (see [18, 25]), and based on Proposition 4.1, we can give the following definition of a quasi–linear projector generalized inverse of a bounded linear operator on a Banach space.
###### Definition 4.2.
Let $T\in B(X,Y)$ and let $T^{H}\in H(Y,X)$ be a bounded homogeneous operator. If there exist a bounded linear projector $P_{\mathcal{N}(T)}$ from $X$ onto $\mathcal{N}(T)$ and a bounded quasi–linear projector $Q_{\mathcal{R}(T)}$ from $Y$ onto $\mathcal{R}(T)$ such that
$\displaystyle(1)\,TT^{H}T=T;\quad(2)\,T^{H}TT^{H}=T^{H};\quad(3)\,T^{H}T=I_{X}-P_{\mathcal{N}(T)};\quad(4)\,TT^{H}=Q_{\mathcal{R}(T)},$
then $T^{H}$ is called a quasi–linear projector generalized inverse of $T$.
For $T\in B(X,Y)$, if $T^{H}$ exists, then from Proposition 4.1 and Definition 2.3, we see that $\mathcal{R}(T)$ is closed and $T^{H}$ is quasi–additive on $\mathcal{R}(T)$; in this case, we may call $T^{H}$ a quasi–linear operator. Choose $\delta T\in B(X,Y)$ such that $T^{H}$ is also quasi–additive on $\mathcal{R}(\delta T)$; then $I_{X}+T^{H}\delta T$ is a bounded linear operator and $I_{Y}+\delta TT^{H}$ is a bounded linear operator on $\mathcal{R}(\bar{T})$.
###### Lemma 4.3.
Let $T\in B(X,Y)$ such that $T^{H}$ exists and let $\delta T\in B(X,Y)$ such
that $T^{H}$ is quasi–additive on $\mathcal{R}(\delta T)$. Put
$\bar{T}=T+\delta T$. Assume that
$X=\mathcal{N}(\bar{T})\dotplus\mathcal{R}(T^{H})$ and
$Y=\mathcal{R}(\bar{T})\dotplus\mathcal{N}(T^{H})$. Then
1. $(1)$
$I_{X}+T^{H}\delta T:X\rightarrow X$ is an invertible bounded linear operator;
2. $(2)$
$I_{Y}+\delta TT^{H}:Y\rightarrow Y$ is an invertible quasi–linear operator;
3. $(3)$
$\Upsilon=T^{H}(I_{Y}+\delta TT^{H})^{-1}=(I_{X}+T^{H}\delta T)^{-1}T^{H}$ is
a bounded homogeneous operator.
###### Proof.
Since $I_{X}+T^{H}\delta T\in B(X,X)$, we only need to show that
$\mathcal{N}(I_{X}+T^{H}\delta T)=\\{0\\}$ and $\mathcal{R}(I_{X}+T^{H}\delta
T)=X$ under the assumptions.
We first show that $\mathcal{N}(I_{X}+T^{H}\delta T)=\\{0\\}$. Let
$x\in\mathcal{N}(I_{X}+T^{H}\delta T)$, then
$(I_{X}+T^{H}\delta T)x=(I_{X}-T^{H}T)x+T^{H}\bar{T}x=0$
since $T^{H}$ is quasi–linear. Thus $(I_{X}-T^{H}T)x=0=T^{H}\bar{T}x$ and
hence $\bar{T}x\in\mathcal{R}(\bar{T})\cap\mathcal{N}(T^{H})$. Noting that
$Y=\mathcal{R}(\bar{T})\dotplus\mathcal{N}(T^{H})$, we have $\bar{T}x=0$ and
hence $x\in\mathcal{R}(T^{H})\cap\mathcal{N}(\bar{T})$. From
$X=\mathcal{N}(\bar{T})\dotplus\mathcal{R}(T^{H})$, we get that $x=0$.
Now, we prove that $\mathcal{R}(I_{X}+T^{H}\delta T)=X$. Let $x\in X$ and put
$x_{1}=(I_{X}-T^{H}T)x$, $x_{2}=T^{H}Tx$. Since
$Y=\mathcal{R}(\bar{T})\dotplus\mathcal{N}(T^{H})$, we have
$\mathcal{R}(T^{H})=T^{H}\mathcal{R}(\bar{T})$. Therefore, from
$X=\mathcal{N}(\bar{T})\dotplus\mathcal{R}(T^{H})$, we get that
$\mathcal{R}(T^{H})=T^{H}\mathcal{R}(\bar{T})=T^{H}\bar{T}\mathcal{R}(T^{H})$.
Consequently, there is $z\in Y$ such that
$T^{H}(Tx_{2}-\bar{T}x_{1})=T^{H}\bar{T}T^{H}z$. Set $y=x_{1}+T^{H}z\in X$.
Noting that $T^{H}$ is quasi–additive on both $\mathcal{R}(T)$ and $\mathcal{R}(\delta T)$, we have
$\displaystyle(I_{X}+T^{H}\delta T)y$
$\displaystyle=(I_{X}-T^{H}T+T^{H}\bar{T})(x_{1}+T^{H}z)$
$\displaystyle=x_{1}+T^{H}\bar{T}x_{1}+T^{H}\bar{T}T^{H}z$
$\displaystyle=x_{1}+T^{H}\bar{T}x_{1}+T^{H}(Tx_{2}-\bar{T}x_{1})$
$\displaystyle=x.$
Therefore, $X=\mathcal{R}(I_{X}+T^{H}\delta T)$.
Similar to Lemma 3.2, we have $\Upsilon=T^{H}(I_{Y}+\delta
TT^{H})^{-1}=(I_{X}+T^{H}\delta T)^{-1}T^{H}$ is a bounded homogeneous
operator. ∎
###### Theorem 4.4.
Let $T\in B(X,Y)$ such that $T^{H}$ exists and let $\delta T\in B(X,Y)$ such
that $T^{H}$ is quasi–additive on $\mathcal{R}(\delta T)$. Put
$\bar{T}=T+\delta T$. Then the following statements are equivalent:
1. $(1)$
$I_{X}+T^{H}\delta T$ is invertible in $B(X,X)$ and
$\mathcal{R}(\bar{T})\cap\mathcal{N}(T^{H})=\\{0\\};$
2. $(2)$
$I_{X}+T^{H}\delta T$ is invertible in $B(X,X)$ and
$\Upsilon=T^{H}(I_{Y}+\delta TT^{H})^{-1}=(I_{X}+T^{H}\delta T)^{-1}T^{H}$ is
a quasi–linear projector generalized inverse of $\bar{T};$
3. $(3)$
$X=\mathcal{N}(\bar{T})\dotplus\mathcal{R}(T^{H})$ and
$Y=\mathcal{R}(\bar{T})\dotplus\mathcal{N}(T^{H})$, i.e.,
$\mathcal{N}(\bar{T})$ is topological complemented in $X$ and
$\mathcal{R}(\bar{T})$ is quasi–linearly complemented in $Y$.
###### Proof.
$(1)\Rightarrow(2)$ By Theorem 3.4, $\Upsilon=T^{H}(I_{Y}+\delta TT^{H})^{-1}=(I_{X}+T^{H}\delta T)^{-1}T^{H}$ is a bounded homogeneous generalized inverse of $\bar{T}$. Let $y\in Y$ and $z\in\mathcal{R}(\bar{T})$. Then
$z=Tx+\delta Tx$ for some $x\in X$. Since $T^{H}$ is quasi–additive on
$\mathcal{R}(T)$ and $\mathcal{R}(\delta T)$, it follows that
$T^{H}(y+z)=T^{H}(y+Tx+\delta Tx)=T^{H}(y)+T^{H}(Tx)+T^{H}(\delta
Tx)=T^{H}y+T^{H}z,$
i.e., $T^{H}$ is quasi–additive on $\mathcal{R}(\bar{T})$ and hence $\Upsilon$
is quasi–linear. Set
$\bar{P}=(I_{X}+T^{H}\delta
T)^{-1}(I_{X}-T^{H}T),\qquad\bar{Q}=\bar{T}(I_{X}+T^{H}\delta T)^{-1}T^{H}.$
Then, by the proof of Lemma 3.3, $\bar{P}\in H(X,X)$ is a projector with
$\mathcal{R}(\bar{P})=\mathcal{N}(\bar{T})$. Noting that $(I_{X}+T^{H}\delta T)^{-1}$ and $I_{X}-T^{H}T$ are both linear, $\bar{P}$ is linear.
Furthermore,
$\displaystyle\Upsilon\bar{T}$ $\displaystyle=(I_{X}+T^{H}\delta T)^{-1}T^{H}(T+\delta T)$ $\displaystyle=(I_{X}+T^{H}\delta T)^{-1}(I_{X}+T^{H}\delta T+T^{H}T-I_{X})$ $\displaystyle=I_{X}-\bar{P}.$
Since $T^{H}$ is quasi–additive on $\mathcal{R}(\bar{T})$, it follows that
$\bar{Q}=\bar{T}(I_{X}+T^{H}\delta T)^{-1}T^{H}=\bar{T}\Upsilon$ is quasi–linear
and bounded with $\mathcal{R}(\bar{Q})\subset\mathcal{R}(\bar{T})$. Noting
that
$\displaystyle\bar{Q}$ $\displaystyle=\bar{T}T^{H}(I_{Y}+\delta
TT^{H})^{-1}=(I_{Y}+\delta TT^{H}+TT^{H}-I_{Y})(I_{Y}+\delta TT^{H})^{-1}$
$\displaystyle=I_{Y}-(I_{Y}-TT^{H})(I_{Y}+\delta TT^{H})^{-1}$
and $(I_{Y}+\delta TT^{H})^{-1}\mathcal{R}(\bar{T})=\mathcal{R}(T)$ by Lemma
3.3, we have
$\mathcal{R}(\bar{T})=\bar{Q}(\mathcal{R}(\bar{T}))\subset\mathcal{R}(\bar{Q})$.
Thus, $\mathcal{R}(\bar{Q})=\mathcal{R}(\bar{T})$. From
$\Upsilon\bar{T}=I_{X}-\bar{P}$ and
$\mathcal{R}(\bar{P})=\mathcal{N}(\bar{T})$, we see that
$\Upsilon\bar{T}\Upsilon=\Upsilon$, then we have
$\bar{Q}^{2}=\bar{T}(I_{X}+T^{H}\delta T)^{-1}T^{H}\bar{T}(I_{X}+T^{H}\delta
T)^{-1}T^{H}=\bar{T}\Upsilon\bar{T}\Upsilon=\bar{Q}.$
Therefore, by Definition 4.2, we get $\bar{T}^{H}=\Upsilon$.
$(2)\Rightarrow(3)$ From $\bar{T}^{H}=T^{H}(I_{Y}+\delta TT^{H})^{-1}=(I_{X}+T^{H}\delta T)^{-1}T^{H}$, we obtain that
$\mathcal{R}(\bar{T}^{H})=\mathcal{R}(T^{H})$ and
$\mathcal{N}(\bar{T}^{H})=\mathcal{N}(T^{H})$. From
$\bar{T}\bar{T}^{H}\bar{T}=\bar{T}$,
$\bar{T}^{H}\bar{T}\bar{T}^{H}=\bar{T}^{H}$, we get that
$\mathcal{R}(I_{X}-\bar{T}^{H}\bar{T})=\mathcal{N}(\bar{T}),\ \mathcal{R}(\bar{T}^{H}\bar{T})=\mathcal{R}(\bar{T}^{H}),\ \mathcal{R}(\bar{T}\bar{T}^{H})=\mathcal{R}(\bar{T}),\ \mathcal{R}(I_{Y}-\bar{T}\bar{T}^{H})=\mathcal{N}(\bar{T}^{H}).$
Thus $\mathcal{R}(\bar{T}^{H}\bar{T})=\mathcal{R}(T^{H})$ and $\mathcal{R}(I_{Y}-\bar{T}\bar{T}^{H})=\mathcal{N}(T^{H})$. Therefore,
$\displaystyle X$
$\displaystyle=\mathcal{R}(I_{X}-\bar{T}^{H}\bar{T})\dotplus\mathcal{R}(\bar{T}^{H}\bar{T})=\mathcal{N}(\bar{T})\dotplus\mathcal{R}(T^{H}),$
$\displaystyle Y$
$\displaystyle=\mathcal{R}(\bar{T}\bar{T}^{H})\dotplus\mathcal{R}(I_{Y}-\bar{T}\bar{T}^{H})=\mathcal{R}(\bar{T})\dotplus\mathcal{N}(T^{H}).$
$(3)\Rightarrow(1)$ By Lemma 4.3, $I_{X}+T^{H}\delta T$ is invertible in $B(X,X)$. Now from $Y=\mathcal{R}(\bar{T})\dotplus\mathcal{N}(T^{H})$, we get that $\mathcal{R}(\bar{T})\cap\mathcal{N}(T^{H})=\\{0\\}$. ∎
###### Lemma 4.5 ([2]).
Let $A\in B(X,X)$. Suppose that there exist two constants
$\lambda_{1},\lambda_{2}\in[0,1)$ such that
$\|Ax\|\leq\lambda_{1}\|x\|+\lambda_{2}\|(I+A)x\|,\quad\quad(\forall\;x\in
X).$
Then $I+A\colon X\rightarrow X$ is bijective. Moreover, for any $x\in X$,
$\displaystyle\frac{1-\lambda_{1}}{1+\lambda_{2}}\|x\|\leq\|(I+A)x\|\leq\frac{1+\lambda_{1}}{1-\lambda_{2}}\|x\|,\quad\frac{1-\lambda_{2}}{1+\lambda_{1}}\|x\|\leq\|(I+A)^{-1}x\|\leq\frac{1+\lambda_{2}}{1-\lambda_{1}}\|x\|.$
Let $T\in B(X,Y)$ such that $T^{H}$ exists. Let $\delta T\in B(X,Y)$ such that
$T^{H}$ is quasi–additive on $\mathcal{R}(\delta T)$ and satisfies
$\displaystyle\|T^{H}\delta
Tx\|\leq\lambda_{1}\|x\|+\lambda_{2}\|(I+T^{H}\delta T)x\|\quad(\forall\;x\in
X),$ (4.1)
where $\lambda_{1},\lambda_{2}\in[0,1)$.
###### Corollary 4.6.
Let $T\in B(X,Y)$ such that $T^{H}$ exists. Suppose that $\delta T\in B(X,Y)$
such that $T^{H}$ is quasi–additive on $\mathcal{R}(\delta T)$ and satisfies
(4.1). Put $\bar{T}=T+\delta T$. Then $I_{X}+T^{H}\delta T$ is invertible in
$H(X,X)$ and $\bar{T}^{H}=(I_{X}+T^{H}\delta T)^{-1}T^{H}$ is well defined
with
$\dfrac{\|\bar{T}^{H}-T^{H}\|}{\|T^{H}\|}\leq\dfrac{(2+\lambda_{1})(1+\lambda_{2})}{(1-\lambda_{1})(1-\lambda_{2})}.$
###### Proof.
By using Lemma 4.5, we get that $I_{X}+T^{H}\delta T$ is invertible in
$H(X,X)$ and
$\displaystyle\|(I_{X}+T^{H}\delta
T)^{-1}\|\leq\dfrac{1+\lambda_{2}}{1-\lambda_{1}},\qquad\|I_{X}+T^{H}\delta
T\|\leq\dfrac{1+\lambda_{1}}{1-\lambda_{2}}.$ (4.2)
From Theorem 4.4, we see $\bar{T}^{H}=T^{H}(I_{Y}+\delta
TT^{H})^{-1}=(I_{X}+T^{H}\delta T)^{-1}T^{H}$ is well–defined. Now we can
compute
$\displaystyle\dfrac{\|\bar{T}^{H}-T^{H}\|}{\|T^{H}\|}$
$\displaystyle\leq\dfrac{\|(I_{X}+T^{H}\delta
T)^{-1}T^{H}-T^{H}\|}{\|T^{H}\|}$
$\displaystyle\leq\dfrac{\|(I_{X}+T^{H}\delta T)^{-1}[I_{X}-(I_{X}+T^{H}\delta
T)]T^{H}\|}{\|T^{H}\|}$ $\displaystyle\leq\|(I_{X}+T^{H}\delta
T)^{-1}\|\|T^{H}\delta T\|.$ (4.3)
Since $\lambda_{2}\in[0,1)$, from the second inequality in (4.2) we get that $\|T^{H}\delta T\|\leq\dfrac{2+\lambda_{1}}{1-\lambda_{2}}$. Now, by using (4.3) and (4.2), we can obtain
$\dfrac{\|\bar{T}^{H}-T^{H}\|}{\|T^{H}\|}\leq\dfrac{(2+\lambda_{1})(1+\lambda_{2})}{(1-\lambda_{1})(1-\lambda_{2})}.$
This completes the proof. ∎
###### Corollary 4.7.
Let $T\in B(X,Y)$ with $\mathcal{R}(T)$ closed. Assume that $\mathcal{R}(T)$
and $\mathcal{N}(T)$ are Chebyshev subspaces in $Y$ and $X$, respectively. Let
$\delta T\in B(X,Y)$ such that $\mathcal{R}(\delta T)\subset\mathcal{R}(T)$,
$\mathcal{N}(T)\subset\mathcal{N}(\delta T)$ and $\|T^{M}\delta T\|<1$. Put
$\bar{T}=T+\delta T$. If $T^{M}$ is quasi–additive on $\mathcal{R}(T)$, then
$\bar{T}^{M}=T^{M}(I_{Y}+\delta TT^{M})^{-1}=(I_{X}+T^{M}\delta T)^{-1}T^{M}$
with
$\dfrac{\|\bar{T}^{M}-T^{M}\|}{\|T^{M}\|}\leq\dfrac{\|T^{M}\delta
T\|}{1-\|T^{M}\delta T\|}.$
###### Proof.
From $\mathcal{R}(\delta T)\subset\mathcal{R}(T)$ and
$\mathcal{N}(T)\subset\mathcal{N}(\delta T)$, we get that
$\pi_{\mathcal{R}(T)}\delta T=\delta T$ and $\delta T\pi_{\mathcal{N}(T)}=0$,
that is, $TT^{M}\delta T=\delta T=\delta TT^{M}T$. Consequently,
$\bar{T}=T+\delta T=T(I_{X}+T^{M}\delta T)=(I_{Y}+\delta TT^{M})T$ (4.4)
Since $T^{M}$ is quasi–additive on $\mathcal{R}(T)$ and $\|T^{M}\delta T\|<1$, we get that $I_{X}+T^{M}\delta T$ and $I_{Y}+\delta TT^{M}$ are invertible in $H(X,X)$ and $H(Y,Y)$, respectively. So from (4.4), we have $\mathcal{R}(\bar{T})=\mathcal{R}(T)$ and
$\mathcal{N}(\bar{T})=\mathcal{N}(T)$ and hence
$\bar{T}^{H}=T^{M}(I_{Y}+\delta TT^{M})^{-1}=(I_{X}+T^{M}\delta T)^{-1}T^{M}$
by Theorem 4.4. Finally, by Corollary 2.6,
$\displaystyle\bar{T}^{M}$
$\displaystyle=(I_{X}-\pi_{\mathcal{N}(\bar{T})})\bar{T}^{H}\pi_{\mathcal{R}(\bar{T})}=(I_{X}-\pi_{\mathcal{N}(T)})T^{M}(I_{Y}+\delta
TT^{M})^{-1}\pi_{\mathcal{R}(T)}$ $\displaystyle=(I_{X}+T^{M}\delta
T)^{-1}T^{M}\pi_{\mathcal{R}(T)}=(I_{X}+T^{M}\delta
T)^{-1}T^{M}=T^{M}(I_{Y}+\delta TT^{M})^{-1}$
and then
$\|\bar{T}^{M}-T^{M}\|\leq\|(I_{X}+T^{M}\delta T)^{-1}-I_{X}\|\|T^{M}\|\leq\frac{\|T^{M}\delta T\|\|T^{M}\|}{1-\|T^{M}\delta T\|}.$
The proof is completed. ∎
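In a finite-dimensional Hilbert space the metric projections are the orthogonal projections and $T^{M}$ is the usual Moore–Penrose pseudoinverse, so Corollary 4.7 can be checked numerically. The following sketch is only an illustration under these assumptions; the matrix sizes, the random seed and the scaling of $\delta T$ are arbitrary choices.

```python
# Sketch: finite-dimensional check of Corollary 4.7 with T^M = Moore-Penrose
# pseudoinverse; delta_T is built with R(dT) in R(T) and N(T) in N(dT).
import numpy as np

rng = np.random.default_rng(1)
T = rng.standard_normal((6, 3)) @ rng.standard_normal((3, 5))   # rank-3, 6x5
TM = np.linalg.pinv(T)

dT = T @ rng.standard_normal((5, 5)) @ (TM @ T)                 # R(dT) in R(T), N(T) in N(dT)
dT *= 0.2 / np.linalg.norm(TM @ dT, 2)                          # ||T^M dT|| = 0.2 < 1
Tbar = T + dT

TbarM = np.linalg.solve(np.eye(5) + TM @ dT, TM)                # (I_X + T^M dT)^{-1} T^M
print(np.allclose(TbarM, np.linalg.pinv(Tbar)))                 # the formula gives pinv(Tbar)

q = np.linalg.norm(TM @ dT, 2)
rel_err = np.linalg.norm(TbarM - TM, 2) / np.linalg.norm(TM, 2)
print(rel_err <= q / (1 - q))                                   # the stated relative error bound
```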
## References
* [1] X. Bai, Y. Wang, G. Liu and J. Xia, Definition and criterion of homogeneous generalized inverse, Acta Math. Sinica (Chin. Ser.), 52 (2) (2009), 353–360.
* [2] P. Casazza and O. Christensen, Perturbation of operators and applications to frame theory, J. Fourier Anal. Appl., 3 (5) (1997), 543–557.
* [3] N. Castro–González and J. Koliha, Perturbation of Drazin inverse for closed linear operators, Integral Equations and Operator Theory, 36 (2000), 92–106.
* [4] N. Castro–González, J. Koliha and V. Rakočević, Continuity and general perturbation of Drazin inverse for closed linear operators, Abstr. Appl. Anal., 7 (2002), 335–347.
* [5] N. Castro–González, J. Koliha and Y. Wei, Error bounds for perturbation of the Drazin inverse of closed operators with equal spectral idempotents, Appl. Anal., 81 (2002), 915–928.
* [6] G. Chen, M. Wei and Y. Xue, Perturbation analysis of the least square solution in Hilbert spaces, Linear Algebra Appl., 244 (1996), 69–80.
* [7] G. Chen, Y. Wei and Y. Xue, The generalized condition numbers of bounded linear operators in Banach spaces, J. Aust. Math. Soc., 76 (2004), 281–290.
* [8] G. Chen and Y. Xue, Perturbation analysis for the operator equation $Tx=b$ in Banach spaces, J. Math. Anal. Appl., 212 (1997), no. 1, 107–125.
* [9] G. Chen and Y. Xue, The expression of generalized inverse of the perturbed operators under type I perturbation in Hilbert spaces, Linear Algebra Appl., 285 (1998), 1–6.
* [10] J. Ding, New perturbation results on pseudo-inverses of linear operators in Banach spaces, Linear Algebra Appl., 362 (2003), no. 1, 229–235.
* [11] J. Ding, On the expression of generalized inverses of perturbed bounded linear operators, Missouri J. Math. Sci., 15 (2003), no. 1, 40–47.
* [12] F. Du and Y. Xue, The characterizations of the stable perturbation of a closed operator by a linear operator in Banach spaces, Linear Algebra Appl., (Accepted).
* [13] Q. Huang, On perturbations for oblique projection generalized inverses of closed linear operators in Banach spaces, Linear Algebra Appl., 434 (2011), no. 12, 2468–2474.
* [14] J. Koliha, Error bounds for a general perturbation of Drazin inverse, Appl. Math. Comput., 126 (2002), 181–185.
* [15] I. Singer, The Theory of Best Approximation and Functional Analysis, Springer-Verlag, New York, 1970.
* [16] T. Kato. Perturbation Theory for Linear Operators. Springer-Verlag, New York, 1984.
* [17] P. Liu and Y. Wang, The best generalized inverse of the linear operator in normed linear space, Linear Algebra Appl., 420 (2007) 9–19.
* [18] M. Z. Nashed (Ed.), Generalized Inverses and Applications, Academic Press, New York, 1976.
* [19] R. Ni, Moore-Penrose metric generalized inverses of linear operators in arbitrary Banach spaces. Acta Math. Sinica (Chin. Ser.), 49 (2006), no. 6, 1247–1252.
* [20] Y. Wang, Generalized Inverse of Operator in Banach Spaces and Applications, Science Press, Beijing, 2005.
* [21] H. Wang and Y. Wang, Metric generalized inverse of linear operator in Banach space, Chin. Ann. Math. B, 24 (4) (2003) 509–520.
* [22] Y. Wang and S. Li, Homogeneous generalized inverses of linear operators in Banach spaces, Acta Math. Sinica, 48 (2) (2005), 253–258.
* [23] Y. Wang and Sh. Pan, An approximation problem of the finite rank operator in Banach spaces, Sci. Chin. A, 46 (2) (2003) 245–250.
* [24] Y. Wei and J. Ding, Representations for Moore–Penrose inverses in Hilbert spaces, Appl. Math. Lett., 14 (2001) 599–604.
* [25] Y. Xue, Stable Perturbations of Operators and Related Topics, World Scientific, 2012.
* [26] Y. Xue and G. Chen, Some equivalent conditions of stable perturbation of operators in Hilbert spaces, Applied Math. Comput., 147 (2004), 765–772.
* [27] Y. Xue and G. Chen, Perturbation analysis for the Drazin inverse under stable perturbation in Banach space, Missouri J. Math. Sci., 19 (2007), 106–120.
* [28] Y. Xue, Stable perturbation in Banach algebras, J. Aust. Math. Soc., 83 (2007), 1–14.
|
arxiv-papers
| 2013-02-16T13:09:32 |
2024-09-04T02:49:41.828514
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Jianbing Cao and Yifeng Xue",
"submitter": "Yifeng Xue",
"url": "https://arxiv.org/abs/1302.3965"
}
|
1302.3982
|
# Distributed boundary tracking using alpha and Delaunay-Čech shapes
Harish Chintakunta
North Carolina State University
[email protected] Hamid Krim
North Carolina State University
[email protected]
###### Abstract
For a given point set $S$ in a plane, we develop a distributed algorithm to
compute the $\alpha-$shape of $S$. $\alpha-$shapes are well known geometric
objects which generalize the idea of a convex hull, and provide a good
definition for the shape of $S$. We assume that the distances between pairs of
points which are closer than a certain distance $r>0$ are provided, and we
show constructively that this information is sufficient to compute the alpha
shapes for a range of parameters, where the range depends on $r$.
Such distributed algorithms are very useful in domains such as sensor
networks, where each point represents a sensing node, the location of which is
not necessarily known.
We also introduce a new geometric object called the Delaunay-Čech shape, which
is geometrically more appropriate than an $\alpha-$shape for some cases, and
show that it is topologically equivalent to $\alpha-$shapes.
## 1 Introduction
Many applications call for detecting and tracking the boundary of a
dynamically changing space of interest [4][3]. We would expect any algorithm
performing the task to include the following important properties: 1) the
boundary output is geometrically close to the actual boundary, and 2) the
interior of the boundary is topologically faithful to the original space. It
is often the case that we are only given random samples from the space. We may
then reconstruct the space by first placing balls of a certain radius around
these points, and then by taking the union of these balls. A good exposition
on the relationship amongst the sampling density, the geometry of the underlying
space, and the radius of the balls may be found in [10].
In this paper, we start with the assumption that the union of the balls
described above is a good approximation to the space of interest. Note that in
some cases, this is by design. For example, in the case of systematic failures
in sensor networks [3], the failure in the nodes is caused by a spatially
propagating phenomenon, and our aim is to track its boundary. In this case, we
construct a space by taking the union of balls of radius $r_{c}/2$ around each
node, where $r_{c}$ is its radius of communication. The radius of
communication is the distance within which two nodes can communicate with each
other.
The problem may also be viewed as one of computing the boundary of a set of
points, provided with some geometric information. Given only the binary information about which pairs of nodes lie within a certain distance of each other, distributed algorithms exist to compute such a boundary [2]. These algorithms are, unfortunately, relatively slow, on account of the need for a global structure to reach a decision on the membership of a node or of an edge in the boundary complex. If, on the other hand, we are provided with all pair-wise
distances of nodes within a neighborhood, the above decision may be locally
made by constructing an associated $\alpha-$shape.
Given a set of points $S$ in a plane, the $\alpha-$shape introduced in [5]
gives a generalization of the convex hull of $S$, and an intuitive definition
for the shape of points. More importantly, an $\alpha-$shape is the boundary
of an alpha complex, which has the same topology as that of the union of
balls. This relation amongst $\alpha-$shape, alpha complex and the union of
balls is contingent on certain relations between their parameters. We discuss
this in detail in Section 2.1. Such topological guarantees cannot be provided
by the boundary computed in [2].
The Delaunay triangulation gives sufficient information to compute the alpha
complex, and hence its boundary, the $\alpha-$shape. If we only require the
$\alpha-$shape, less information would be necessary. The work in [7] shows
that a global Delaunay triangulation is not necessary, and the alpha shape can
be computed using local Delaunay triangulations. Given the edge lengths of a
geometric graph, [7] constructs the local Delaunay triangulation by first
building local coordinates and then computing the Delaunay triangulation.
Computing the local coordinates is, however, not robust and requires a high density of nodes for accuracy. When given the edge length information, we show that even a local Delaunay triangulation is not necessary.
When there is a sufficient density of nodes, computing local coordinates is
accurate (probabilistically), and distributed algorithms exist for computing
modified versions of Delaunay triangulation [1, 9]. In this case, we define a
certain _Delaunay-Čech triangulation_ , which contains an alpha complex, and
which we show to be homotopy equivalent. For boundary tracking-based
applications, the boundary of Delaunay-Čech triangulation will serve as a
better geometric approximation to the boundary, while preserving the
topological features.
Our contributions in this article are:
* •
Given the distances between pairs of nodes whenever they are closer than
$r_{c}>0$, we develop an algorithm to compute the $\alpha-$shape for a range
of parameters, where this range depends on $r_{c}$.
* •
We introduce the Delaunay-Čech triangulation, defined in Section 2.2, and show
that it is homotopy equivalent to the alpha complex.
The remainder of the paper is organized as follows. In Section 2, we provide
some background information, along with a formulation of the problem. We
describe the distributed algorithm for computing an $\alpha-$shape in Section
3. The Delaunay-Čech triangulation and the Delaunay-Čech shape are defined in
Section 2.2, while the proof of its topological equivalence to the alpha
complex is given in Section 4. We conclude in Section 5 with some remarks.
## 2 Preliminaries
### 2.1 Alpha complex and $\alpha-$shape
Consider a set of nodes $V\subset\mathbb{R}^{2}$, and a parameter $r$. Let
$V_{i}$ be the Voronoi cell associated with node $v_{i}\in V$ in the Voronoi decomposition of $V$. Define an alpha cell ($\alpha-$cell) of $v_{i}$ as $\alpha(v_{i},r)=V_{i}\cap B(v_{i},r/2)$, where $B(v_{i},r/2)$ is the closed ball of radius $r/2$ around $v_{i}$. The alpha complex, $A_{r}$ (we are assuming
$V$ is implied in this notation), is defined as the nerve complex of the alpha
cells, i.e., $(v_{0},v_{1},\ldots,v_{k})$ spans a $k-$simplex in $A_{r}$ if
$\bigcap_{i}\alpha(v_{i})\neq\varnothing$. Since the alpha cells are convex,
the nerve theorem [11, 8] implies that the alpha complex has the same homotopy
type as the union of the alpha cells, which in turn is equal to the union of
the balls $B(v_{i},r/2)$.
Given a set of nodes $V\subset\mathbb{R}^{2}$ (the alpha shape is generally defined for points in $\mathbb{R}^{k}$ for any dimension $k$), and a parameter $r>0$, the alpha shape, $\partial A_{r}$, is a 1-dimensional complex which
generalizes the convex hull of $V$. To simplify the notation, we use
$(v_{i},v_{j})$ to denote an edge in a graph, a 1-simplex in a complex or the
underlying line segment. A 1-simplex $(v_{i},v_{j})$ belongs to $\partial
A_{r}$ if and only if a circle of radius $r/2$ passing through $v_{i}$ and
$v_{j}$ does not contain any other node inside it. By “inside” a circle, we
mean the interior of the ball to which this circle is a boundary. We say that
such a circle satisfies the “_$\alpha-$condition_”. $\partial A_{r}$ also contains all the nodes $\\{v_{j}\\}$ such that a circle of radius $r/2$ passing through $v_{j}$ satisfies the $\alpha-$condition.
For a 2-dimensional simplicial complex $K$, we define the boundary of $K$ to
be the union of all the $1$-simplices (along with their faces), where each is
a face of at most one $2-$simplex, and all $0-$simplices which are not faces
of any simplex in $K$. The alpha shape $\partial A_{r}$ is the boundary of the
alpha complex $A_{r}$[6].
### 2.2 Delaunay-Čech Shape
For a set of nodes $V\subset\mathbb{R}^{2}$ and a parameter $r>0$, define the
geometric graph $G_{r}=(V,E)$ to be the set of vertices $(V)$ and edges $(E)$,
where $e=(v_{i},v_{j})$ is in $E$ if the distance between $v_{i}$ and $v_{j}$
is less than or equal to $r$. Let $\check{C}(V,r)$ denote the Čech complex
with parameter $r$ (the nerve complex of the set of balls
$\\{B(v_{i},r/2)\\}$) and let $DT(V)$ be the Delaunay triangulation of $V$. We
define the Delaunay-Čech complex $D\check{C}_{r}$ with parameter $r$ as
$D\check{C}_{r}=DT(V)\cap\check{C}(V,r)$. We will show in Section 4, that
$D\check{C}_{r}$ is homotopy equivalent to $A_{r}$. We call the boundary of
$D\check{C}_{r}$, denoted by $\partial D\check{C}_{r}$ the Delaunay-Čech
shape.
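In a centralized setting, $D\check{C}_{r}$ can be computed directly from its definition. The sketch below (using SciPy, with illustrative function and variable names) keeps a Delaunay triangle when the minimum enclosing ball of its vertices has radius at most $r/2$, which is exactly the Čech condition for a 2-simplex, and keeps a Delaunay edge when its length is at most $r$.

```python
# Sketch of a centralized computation of the Delaunay-Cech complex in the plane.
# A triangle is in the Cech complex C(V,r) iff the minimum enclosing ball of its
# vertices has radius <= r/2 (circumradius if acute, half the longest edge if not);
# an edge is in C(V,r) iff its length is <= r.
import numpy as np
from scipy.spatial import Delaunay

def delaunay_cech(points, r):
    tri = Delaunay(points)
    edges, triangles = set(), []
    for simplex in tri.simplices:
        p = points[simplex]
        a = np.linalg.norm(p[0] - p[1])
        b = np.linalg.norm(p[1] - p[2])
        c = np.linalg.norm(p[2] - p[0])
        longest = max(a, b, c)
        if longest ** 2 >= a * a + b * b + c * c - longest ** 2:
            R = longest / 2.0                     # obtuse or right: longest edge is a diameter
        else:
            area = 0.25 * np.sqrt((a + b + c) * (-a + b + c) * (a - b + c) * (a + b - c))
            R = a * b * c / (4.0 * area)          # acute: circumradius
        if R <= r / 2.0:
            triangles.append(tuple(sorted(simplex)))
        for u, v in [(0, 1), (1, 2), (0, 2)]:
            if np.linalg.norm(p[u] - p[v]) <= r:
                edges.add(tuple(sorted((int(simplex[u]), int(simplex[v])))))
    return edges, triangles
```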
## 3 Computing the alpha shape of points in $\mathbb{R}^{2}$
In order to compute $\partial A_{r}$, we take each edge in $G_{r}$ and check
if it is in $\partial A_{r}$. If an edge $e=(v_{i},v_{j})$ belongs to
$\partial A_{r}$, then the length of the line segment $(v_{i},v_{j})$ is less
than or equal to $r$. Otherwise, the $\alpha-$condition cannot be satisfied.
Edge $e$ hence also belongs to $G_{r}$, and consequently, checking for all the
edges in $G_{r}$ is sufficient to compute $\partial A_{r}$.
Given an edge $e=(v_{i},v_{j})$, there are two circles of radius $r/2$ passing
through $v_{i}$ and $v_{j}$. Let us call these circles $\mathcal{C}$ and
$\mathcal{C}^{\prime}$ (see Figure 1). The $\alpha-$ condition is satisfied if
and only if at most one of $\mathcal{C}$ and $\mathcal{C}^{\prime}$ contains
node(s) inside.
We consider all the nodes in $\mathcal{N}_{i}\cap\mathcal{N}_{j}$ (neighbors
common to both $v_{i}$ and $v_{j}$) , and perform a series of tests to verify
their location inside $\mathcal{C}$ and $\mathcal{C}^{\prime}$. It is simple
to see that considering nodes only in $\mathcal{N}_{i}\cap\mathcal{N}_{j}$ is
sufficient. The diameter of both $\mathcal{C}$ and $\mathcal{C}^{\prime}$ is
$r$. If $v_{k}$ lies in one of the circles, the distance between $v_{k}$ and
either of $v_{i}$ and $v_{j}$ is less than $r$, and hence, $v_{k}$ is a
neighbor to both.
We now derive the following:
1. 1.
A test to see if a node lies in both circles $\mathcal{C}$ and
$\mathcal{C}^{\prime}$. This immediately determines that $e$ does not belong
to $\partial A_{r}$.
2. 2.
A test to see if a node lies in exactly one of the circles $\mathcal{C}$ and
$\mathcal{C}^{\prime}$.
3. 3.
Given that there exists at least one node in one of the circles, a test to see
if a subsequent node lies in the other. This also immediately determines that $e$ does not belong to $\partial A_{r}$.
Let the angle subtended by the chord $v_{i}v_{j}$ on the bigger arc (of either
circle, see Figure 1) be $\theta$, hence making the angle subtended on the
smaller arc $\pi-\theta$. The angle which the chord subtends at the center,
$\omega$, may easily be computed using the law of cosines, and $\theta$ is
equal to $\omega/2$.
Let $v_{k}\in\mathcal{N}_{i}\cap\mathcal{N}_{j}$, and let
$\angle{v_{i}v_{k}v_{j}}=\phi_{k}$. Then, if $\phi_{k}>\pi-\theta$, $v_{k}$
lies inside both circles, and we immediately know that $e$ does not belong to
$\partial A_{r}$. If $\phi_{k}\leq\theta$, it lies inside neither circle. Let $v_{k}$ be the first node satisfying $\theta<\phi_{k}\leq\pi-\theta$. Then $v_{k}$ lies in one of the circles. Without loss of generality, we assume $v_{k}$ lies in $\mathcal{C}$.
Let $v_{l}$ be any subsequent node satisfying $\theta<\phi_{l}\leq\pi-\theta$.
If $v_{l}$ is not a neighbor of $v_{k}$, then $v_{l}$ lies in
$\mathcal{C}^{\prime}$, since any two nodes inside a circle of diameter $r$
will be neighbors. If $v_{k}$ and $v_{l}$ are neighbors, we know the length
$\|(v_{k}v_{l})\|$. Using the law of cosines, we compute the angle
$\angle{v_{k}v_{i}v_{l}}$ which we call $\beta$. If
$\beta=\angle{v_{k}v_{i}v_{j}}+\angle{v_{l}v_{i}v_{j}}$, $v_{l}$ lies in
$\mathcal{C}^{\prime}$, and if
$\beta=|\angle{v_{k}v_{i}v_{j}}-\angle{v_{l}v_{i}v_{j}}|$, $v_{l}$ lies in
$\mathcal{C}$. Figure 1 demonstrates this relationship between the angles.
computing the $\alpha-$shape
---
At each edge $e=(v_{i},v_{j})$ in $G$:
  compute $\theta$
  for each $v_{l}\in\mathcal{N}_{i}\cap\mathcal{N}_{j}$:
    compute $\phi_{l}$
    if $\phi_{l}>\pi-\theta$:
      $e\not\in\partial A_{r}$, terminate
    if $\phi_{l}\leq\theta$:
      continue to next node
    if $\theta<\phi_{l}\leq\pi-\theta$:
      if $v_{l}$ is the first node satisfying this condition:
        assign $v_{l}$ to $\mathcal{C}$ and denote it by $v_{k}$
      else:
        compute $\beta$
        if $\beta=|\angle{v_{k}v_{i}v_{j}}-\angle{v_{l}v_{i}v_{j}}|$:
          continue to next node
        else:
          $e\not\in\partial A_{r}$, terminate
  $e\in\partial A_{r}$
Table 1: Algorithm for computing the $\alpha-$shape. Note that all the
computations require only local information.
The algorithm terminates when we determine that both circles $\mathcal{C}$ and
$\mathcal{C}^{\prime}$ contain at least one node, or there are no more nodes
in $\mathcal{N}_{i}\cap\mathcal{N}_{j}$ to consider. In the former case, the
edge $e$ does not belong to $\partial A_{r}$ and in the latter, $e$ belongs to
$\partial A_{r}$. Clearly, we can use the same algorithm to compute the
$\alpha-$shape for any parameter $0<q\leq r$. The algorithm is summarized in
Table 1. Figure 2 shows $\partial A_{r}$ for a set of points in
$\mathbb{R}^{2}$ computed using the algorithm in Table 1. The shaded region in
the Figure is the union of balls of radius $r/2$ centered at each point. Note
that the $\alpha-$shape is a boundary of an object which is homotopy
equivalent to the shaded region.
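For concreteness, a minimal Python sketch of the per-edge test summarized in Table 1 is given below. The dictionary-based distance interface, the function names and the tolerance-free comparison of $\beta$ against the two candidate angle combinations are illustrative assumptions; only the pairwise distances of nodes closer than $r$ are used.

```python
# Sketch of the local alpha-condition test of Table 1 using only pairwise distances.
# `dist` is assumed to be a symmetric dict mapping pairs (a, b) of nodes closer
# than r to their distance; `common` lists the neighbours shared by v_i and v_j.
import numpy as np

def _angle(a, b, c):
    # angle opposite the side of length c, by the law of cosines
    return np.arccos(np.clip((a * a + b * b - c * c) / (2.0 * a * b), -1.0, 1.0))

def edge_in_alpha_shape(i, j, common, dist, r):
    d_ij = dist[(i, j)]
    # theta: inscribed angle on the major arc of a circle of radius r/2 over the chord d_ij
    theta = 0.5 * np.arccos(np.clip(1.0 - 2.0 * (d_ij / r) ** 2, -1.0, 1.0))
    ref = None                                    # first node found inside a circle (v_k)
    for l in common:
        phi = _angle(dist[(i, l)], dist[(j, l)], d_ij)    # angle at the common neighbour
        if phi > np.pi - theta:
            return False                          # node inside both circles
        if phi <= theta:
            continue                              # node inside neither circle
        if ref is None:
            ref = l                               # assign this node to circle C
            continue
        if (ref, l) not in dist:
            return False                          # not neighbours: it lies in the other circle
        beta = _angle(dist[(i, ref)], dist[(i, l)], dist[(ref, l)])   # angle v_k v_i v_l
        a_ref = _angle(dist[(i, ref)], d_ij, dist[(j, ref)])          # angle v_k v_i v_j
        a_l = _angle(dist[(i, l)], d_ij, dist[(j, l)])                # angle v_l v_i v_j
        if abs(beta - abs(a_ref - a_l)) > abs(beta - (a_ref + a_l)):
            return False                          # it lies in the other circle C'
    return True                                   # at most one circle contains nodes
```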
Figure 1: (a) shows the circles $\mathcal{C}$ and $\mathcal{C}^{\prime}$ which pass through $v_{i}$ and $v_{j}$; $\phi_{k}$ satisfies $\theta\leq\phi_{k}\leq\pi-\theta$ and $v_{k}$ lies in $\mathcal{C}$. (b) shows the angle relationships: when $v_{l}$ also lies in $\mathcal{C}$, then $\beta=|\angle{v_{k}v_{i}v_{j}}-\angle{v_{l}v_{i}v_{j}}|$, and if $v_{l}$ lies in $\mathcal{C}^{\prime}$, $\beta=\angle{v_{k}v_{i}v_{j}}+\angle{v_{l}v_{i}v_{j}}$.
Figure 2: $\alpha-$shape with parameter $r_{c}/2$ for a set of points in $\mathbb{R}^{2}$ computed using the algorithm in Table 1. The shaded region is the union of balls of radius $r_{c}/2$ centered at each point.
## 4 Relation between $D\check{C}_{r}$ and $A_{r}$
Consider the Delaunay-Čech complex $D\check{C}_{r}$ as defined in Section 2.2,
and the $\alpha-$complex $A_{r}$, as defined in Section 2.1. We will show that
$D\check{C}_{r}$ has the same homotopy type as $A_{r}$, by showing that there
exists a bijective pairing between the 1-simplices and 2-simplices in
$D\check{C}_{r}\setminus A_{r}$, such that the pairing describes a homotopy
collapse. Note that both $A_{r}$ and $D\check{C}_{r}$ do not contain any
simplices of dimension greater than 2. Figure 3 shows such a homotopy
collapse.
Figure 3: Homotopy collapse of an edge $e$ into an adjacent 2-simplex.
Let $F(G_{r})$ be the flag complex of $G_{r}$. Define the complex $U_{r}$ as
$U_{r}=DT(V)\cap F(G_{r})$. Since the Čech complex $\check{C}(V,r)$ is a
subcomplex of $F(G_{r})$, $D\check{C}_{r}$ is a subcomplex of $U_{r}$.
Let $\mathcal{T}_{e}$ denote the set of all 2-simplices to which $e$ is a face
in $U_{r}$, $\mathcal{T}_{e}^{r/2}\subseteq\mathcal{T}_{e}$ denote the
2-simplices in $\mathcal{T}_{e}$ with circum-radius less than or equal to
$r/2$, and $\mathcal{T}_{e}^{\pi/2}\subseteq\mathcal{T}_{e}$ denote the
2-simplices in $\mathcal{T}_{e}$ such that the angle opposite $e$ is greater
than $\pi/2$. Figure 4 shows examples of triangles with these properties.
Figure 4: Examples of the triangle sets $\mathcal{T}_{e}^{r/2}$ (panel (a): $\tau\in\mathcal{T}_{e}^{r/2}$) and $\mathcal{T}_{e}^{\pi/2}$ (panel (b): $\tau\in\mathcal{T}_{e}^{\pi/2}$, the angle opposite $e$ is greater than $\pi/2$). The circles in (a) are centered at the nodes with radius $r/2$; since the circum-radius is less than $r/2$, the circles have a common intersection with positive area.
In order to show the existence of a pairing, we first analyze the triangles surrounding an edge $e\in A_{r}$. The following lemma characterizes the
2-simplices in $A_{r}$ in terms of their circum-radius.
###### Lemma 4.1
A simplex $(v_{1},v_{2},v_{3})$ in $U_{r}$ is in $A_{r}$ iff the circum-radius of the triangle $(v_{1},v_{2},v_{3})$ (we use the notation $(v_{1},v_{2},v_{3})$ to denote both the simplex and the underlying triangle) is less than or equal to $r/2$.
* Proof
Since $U_{r}\subseteq DT(V)$, $(v_{1},v_{2},v_{3})\in
U_{r}\Rightarrow(v_{1},v_{2},v_{3})\in DT(V)$. The circumradius is less than
or equal to $r/2$, _iff_ circumcenter belongs to all $\alpha-$cells
$\alpha(v_{1})$, $\alpha(v_{2})$ and $\alpha(v_{3})$. This results in the
three $\alpha-$ cells having a non-empty intersection, hence
$(v_{1},v_{2},v_{3})\in A_{r}$. $\blacksquare$
Lemmas 4.2 and 4.3 together impose conditions on the cardinality of the sets $\mathcal{T}_{e}^{r/2}$ and $\mathcal{T}_{e}^{\pi/2}$. We utilize these conditions in Theorem 4.1 to show the existence of the pairing.
###### Lemma 4.2
Denote by $m_{e}$ the midpoint of the 1-simplex $e\in U_{r}$. Then $m_{e}$ is a witness for $e$ iff $\mathcal{T}_{e}^{\pi/2}=\varnothing$.
* Proof
$m_{e}$ is a witness for $e$ iff there does not exist any other node inside
the circle with $e$ as the diameter (illustrated in Figure 5(a)). This occurs
if and only if the angle opposite $e$ in any incident triangle is acute.
$\blacksquare$
Figure 5: (a) Construction for Lemma 4.2. (b) Construction for Lemma 4.4.
###### Lemma 4.3
Consider the following statements
* •
$S_{1}:$ $\mathcal{T}_{e}^{\pi/2}=\varnothing$
* •
$S_{2}:$ $\mathcal{T}_{e}^{r/2}\neq\varnothing$
* •
$S_{3}:$ $e\in A_{r}$
$S_{1}\vee S_{2}$ is a necessary and sufficient condition for $S_{3}$.
* Proof
Let $e=(v_{1},v_{2})$. For sufficiency: from Lemma 4.2, $S_{1}$ implies
$m_{e}$ is a witness for $v_{1}$ and $v_{2}$, and $S_{2}$ implies $\exists$ a
witness (the circumcenter of one of the triangles in $\mathcal{T}_{e}^{r/2}$).
For necessity: if $S_{3}$ is true, then there exists a witness for $e$. If
$m_{e}$ is a witness, then $S_{1}$ is true. If $m_{e}$ is not a witness, then
$e$ shares a witness with a 2-simplex which is in $A_{r}$. From Lemma 4.1,
this implies $\mathcal{T}_{e}^{r/2}\neq\varnothing$. Therefore, $S_{2}$ is
true. $\blacksquare$
The above lemma constrains which types of triangles can surround an edge in $A_{r}$. Lemma 4.4 and Theorem 4.1 further refine this
relationship, and precisely identify the triangle to be removed when an edge
is removed from $D\check{C}_{r}$.
###### Lemma 4.4
If $\mathcal{T}_{e}^{\pi/2}\neq\varnothing$, then
$|\mathcal{T}_{e}^{\pi/2}|=1$.
* Proof
Suppose $\mathcal{T}_{e}^{\pi/2}\neq\varnothing$. Let
$\tau\in\mathcal{T}_{e}^{\pi/2}$, and let the angle opposite $e$ in $\tau$ be
$\phi$ with $\phi>\pi/2$ (see Figure 5(b)). Let $\tau_{1}\neq\tau$ be incident
on $e$, with $v$ being the opposite vertex. Since $\tau\in U_{r}\subseteq DT(V)$, $v$ does not lie inside the circumcircle of $\tau$. This implies that the angle opposite $e$ in $\tau_{1}$ is less than $\pi-\phi$, which is less than $\pi/2$.
$\blacksquare$
Let $C_{k}(K)$ denote the $k-$simplices in the complex $K$.
###### Theorem 4.1
Let $\mathcal{T}_{R}=C_{2}\left(D\check{C}_{r}\right)\setminus
C_{2}\left(A_{r}\right)$ and
$\mathcal{E}_{R}=C_{1}\left(D\check{C}_{r}\right)\setminus
C_{1}\left(A_{r}\right)$. There exists a bijective pairing
$P:\mathcal{E}_{R}\rightarrow\mathcal{T}_{R}$ such that $e$ is a face of
$P(e)$.
* Proof
Let $e\in D\check{C}_{r}$ with $e\not\in A_{r}$. From Lemma 4.3, $\mathcal{T}_{e}^{r/2}=\varnothing$ and $\mathcal{T}_{e}^{\pi/2}\neq\varnothing$. Since we assume $e\not\in A_{r}$, $e$ cannot be a face of any 2-simplex in $A_{r}$; owing to the condition $\mathcal{T}_{e}^{r/2}=\varnothing$, Lemma 4.1 ensures that this is indeed the case. Also, from Lemma 4.4, $|\mathcal{T}_{e}^{\pi/2}|=1$. Let
$\tau\in\mathcal{T}_{e}^{\pi/2}$. Note that $\tau$ is unique, and $\tau\not\in
A_{r}$. Further, since $\tau$ is an obtuse triangle, $\tau\in\check{C}(V,r)$,
and this implies $\tau\in D\check{C}_{r}$. The pairing $P$ is then defined as
$P(e)=\tau$. $\blacksquare$
For any simplicial complex $K$, let $\sigma_{1}$ and $\sigma_{2}$ be simplices of dimension $1$ and $2$ such that $\sigma_{1}$ is a face of $\sigma_{2}$ and of no other simplex of $K$. Then, there exists a deformation retraction $F_{\sigma_{1}}:K\rightarrow K\setminus(\sigma_{1}\cup Int(\sigma_{2}))$, which “collapses” $\sigma_{1}$ into $\sigma_{2}$. Therefore, $K$ is homotopy equivalent to $K\setminus(\sigma_{1}\cup Int(\sigma_{2}))$.
The removal of edges $\mathcal{E}_{R}$ and triangles $\mathcal{T}_{R}$
describes a finite sequence of deformation retractions via the pairing $P$.
When we collapse all the edges into their paired triangles, the resulting
complex is $A_{r}$. Each collapse is a homotopy equivalence, and a composition
of homotopy equivalences is a homotopy equivalence. This leads us to our main
theorem:
###### Theorem 4.2
The complexes $D\check{C}_{r}$ and $A_{r}$ are homotopy equivalent.
Figure 6 illustrates the above theorem using an example. Note that $A_{r}$ and
$D\check{C}_{r}$ are homotopy equivalent to each other and both are homotopy
equivalent to $R_{c}$ (the shaded region). Further, as seen, $D\check{C}_{r}$
is a better geometric approximation to $R_{c}$ than $A_{r}$. This is simply
because $A_{r}$ is a sub-complex of $D\check{C}_{r}$.
(a) $A_{r_{c}/2}(V)$ (b) $D\check{C}_{r_{c}/2}(V)$ (c) $A_{r_{c}/2}(V)$ superimposed over $D\check{C}_{r_{c}/2}(V)$
Figure 6: Homotopy equivalence between $A_{r_{c}/2}(V)$ and $D\check{C}_{r_{c}/2}(V)$. The shaded region is $R_{c}$. Note that $D\check{C}_{r_{c}/2}(V)$ is a better geometric approximation to $R_{c}$ than $A_{r_{c}/2}(V)$.
## 5 Conclusion
The algorithm described in Section 3 takes the edge lengths as inputs and
outputs the alpha shapes. We make no further assumptions on the node density,
and we need not compute any coordinates. The decision about whether an edge belongs to the $\alpha-$shape is made by looking only at local information, i.e., by considering only the points within a certain distance, and may therefore be implemented in a distributed manner. In Section 2.2, we define the Delaunay-Čech
complex which contains the alpha complex. Its boundary, defined as a Delaunay-
Čech shape, is therefore a better geometric approximation for the union of
balls with an appropriate radius. We also show in Section 4 that, like the
$\alpha-$shape, the Delaunay-Čech shape remains topologically faithful to the
underlying space.
## References
* [1] Chen Avin. Random Geometric Graphs: An Algorithmic Perspective. PhD thesis, 2006.
* [2] H. Chintakunta and H. Krim. Divide and conquer: Localizing coverage holes in sensor networks. In Sensor Mesh and Ad Hoc Communications and Networks (SECON), 2010 7th Annual IEEE Communications Society Conference on, pages 1–8, June 2010.
* [3] H. Chintakunta and H. Krim. Detection and tracking of systematic time-evolving failures in sensor networks. In Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP), 2011 4th IEEE International Workshop on, pages 373–376, Dec. 2011.
* [4] S. Duttagupta, K. Ramamritham, and P. Kulkarni. Tracking dynamic boundaries using sensor network. Parallel and Distributed Systems, IEEE Transactions on, PP(99):1, 2011.
* [5] H. Edelsbrunner, D. Kirkpatrick, and R. Seidel. On the shape of a set of points in the plane. Information Theory, IEEE Transactions on, 29(4):551–559, 1983.
* [6] H. Edelsbrunner and E.P. Mücke. Three-dimensional alpha shapes. ACM Transactions on Graphics (TOG), 13(1):43–72, 1994.
* [7] M. Fayed and H.T. Mouftah. Localised alpha-shape computations for boundary recognition in sensor networks. Ad Hoc Networks, 7(6):1259–1269, 2009.
* [8] J. Leray. Sur la forme des espaces topologiques et sur les points fixes des représentations.(french). J. Math. Pures Appl.(9), 24:95–167, 1945.
* [9] X.Y. Li, G. Calinescu, P.J. Wan, and Y. Wang. Localized delaunay triangulation with application in ad hoc wireless networks. Parallel and Distributed Systems, IEEE Transactions on, 14(10):1035–1047, 2003.
* [10] Partha Niyogi, Stephen Smale, and Shmuel Weinberger. Finding the homology of submanifolds with high confidence from random samples. In Richard Pollack, János Pach, and Jacob E. Goodman, editors, Twentieth Anniversary Volume, pages 1–23. Springer New York, 2009.
* [11] R. Bott and L.W. Tu. Differential forms in algebraic topology. Graduate Texts in Mathematics, Springer-Verlag, 1982.
|
arxiv-papers
| 2013-02-16T17:56:15 |
2024-09-04T02:49:41.835955
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Harish Chintakunta and Hamid Krim",
"submitter": "Harish Chintakunta",
"url": "https://arxiv.org/abs/1302.3982"
}
|
1302.4031
|
# Two particle excited states entanglement entropy in a one-dimensional ring
Richard Berkovits Department of Physics, Bar-Ilan University, Ramat-Gan
52900, Israel
###### Abstract
The properties of the entanglement entropy (EE) of two particle excited states
in a one-dimensional ring are studied. For a clean system we show analytically
that as long as the momenta of the two particles are not close, the EE is
twice the value of the EE of any single particle state. For almost identical
momenta the EE is lower than this value. The introduction of disorder is
numerically shown to lead to a decrease in the median EE of a two particle
excited state, while interactions (which have no effect for the clean case)
mitigate the decrease. For a ring which is of the same size as the
localization length, interactions increase the EE of a typical two particle excited state above the clean-system EE value.
###### pacs:
73.20.Fz,03.65.Ud,71.10.Pm,73.21.Hb
## I Introduction
There has been a growing interest in the behavior of entanglement entropy (EE)
amico08 in different physical fields. In condensed matter, much of the
interest stems from the behavior of the ground state EE in the presence of
quantum phase transitions (QPTs) amico08 ; vojta06 ; lehur08 ; goldstein11 .
The EE of a finite region A of a one-dimensional system grows logarithmically
as long as the region’s size $L_{A}$ is smaller than the correlation length
$\xi$ characterizing the system, while it saturates for $L_{A}>\xi$ holzhey94
; vidal03 ; calabrese04 . This behavior may be used in order to extract $\xi$,
for example the ground state localization length of the Anderson transition
berkovits12 .
A natural question is what is the behavior of the EE for the excited states
alba09 ? Beyond the growing interest in EE coming from the quantum
information circles, the question whether EE is a useful concept in studying
the behavior of excited states, is relevant to the condensed matter community.
Low lying excited states in the vicinity of a ground state quantum critical
point (QCP) should be strongly influenced by the critical point schadev99 ,
and one expects it to show in the behavior of the EE of these states.
Moreover, the whole concept of the many-body localization transition AGKL97 ;
gornyi05 ; basko06 is centered on the behavior of the excited states. The
localization-delocalization transition occurring at a critical excitation
energy should change the properties of excitations above it which should be
manifested in the properties of the excited states. Although much effort went
into trying to understand the transition using different properties of the
excited states (such as level statistics, inverse participation ratio,
conductance, and correlations) berkovits98 ; oganesyan07 ; monthus10 ;
berkelbach10 ; pal10 ; canovi11 ; cuevas12 , all these studies were performed
for rather small systems, and many questions remain open. Recently bardarson12
, time evolution of the entanglement of an initial state was studied, and it
showed signs of many-particle delocalization. Thus, EE seems as a useful tool
to study the many-body localization transition.
Unlike the ground state EE for which universal results exist, the
understanding of EE for the excited states is still a work in progress
alcaraz08 ; masanes09 ; berganza12 . Therefore, it would be useful to consider
a system for which the EE of the excited states is simple enough to describe
analytically, although it exhibits interesting behavior such as interaction
induced delocalization of the excited states. In this paper we study the EE
for such a system, namely two particles on a ring. The study of two
interacting particle (TIP) in a disordered one dimensional system has a long
history in the context of many particle delocalization problem. All single-
electron states for any amount of disorder are localized lee85 . This
continues to be true for two-electron states, however, the localization length
becomes longer as the repulsive interaction becomes stronger shepelyansky94 ;
imry95 . This interaction induced delocalization was confirmed numerically
frahm95 ; weinmann95 ; vonoppen96 ; jacquod97 . It is important to emphasize
that there is no enhancement of the localization length in the ground state.
The delocalization becomes significant only for higher excitations.
## II Clean Ring
For a clean ring composed of $N$ sites, the tight-binding Hamiltonian is given
by:
$\displaystyle H=\displaystyle\sum_{j=1}^{N}\epsilon_{j}{\hat{a}}^{\dagger}_{j}{\hat{a}}_{j}-t\displaystyle\sum_{j=1}^{N}(e^{i\alpha}{\hat{a}}^{\dagger}_{j}{\hat{a}}_{j+1}+h.c.),$ (1)
where for the clean case $\epsilon_{j}=0$, $t=1$ is the hopping matrix element
between neighboring sites, and ${\hat{a}}_{j}^{\dagger}$ is the creation
operator of a spinless electron at site $j$ on the ring. In order to break
symmetry (we shall see why this is important further on) a magnetic flux
$\phi$ threading the ring is introduced, where $\alpha=2\pi\phi/(\phi_{0}N)$,
and $\phi_{0}=hc/e$ is the quantum flux unit. The single-electron eigenvalues
are $\varepsilon(k)=-2t\cos(p-\alpha)$ where $p=2\pi k/N$, and $k=0,\pm 1,\pm
2,\ldots\pm N/2$. The eigenvectors are given by
$\displaystyle|k\rangle=(1/\sqrt{N})\displaystyle\sum_{j=1}^{N}\exp(\imath
pj){\hat{a}}^{\dagger}_{j}|\emptyset\rangle,$ (2)
where $|\emptyset\rangle$ is the vacuum state.
The two particle eigenvalues are $\varepsilon(k_{1},k_{2})=-2t\left(\cos(p_{1}-\alpha)+\cos(p_{2}-\alpha)\right)$. The eigenvectors are
$\displaystyle|k_{1},k_{2}\rangle=(1/N)\displaystyle\sum_{j_{1}>j_{2}=1}^{N}A(k_{1},j_{1},k_{2},j_{2}){\hat{a}}^{\dagger}_{j_{1}}{\hat{a}}^{\dagger}_{j_{2}}|\emptyset\rangle,$
(3)
where
$A(k_{1},j_{1},k_{2},j_{2})=\big{(}\exp(\imath(p_{1}j_{1}+p_{2}j_{2}))-\exp(\imath(p_{1}j_{2}+p_{2}j_{1}))\big{)}.$ (4)
Once the eigenvectors of the system are available, one can (in principle) calculate the EE. The entanglement between a region A (of length $N_{A}$) in
the system and the rest of the system (denoted by B) for a given eigenstate
$|\Psi\rangle$ is measured by the EE $S_{A/B}$. This EE is related to the
region’s reduced density matrix $\rho_{A/B}$, defined in the following way:
$\displaystyle\hat{\rho}_{A/B}={\rm Tr}_{B/A}|\Psi\rangle\langle\Psi|,$ (5)
where the trace is over region’s B or A degrees of freedom. The EE is related
to the eigenvalues $\lambda_{i}$ of the reduced density matrix:
$\displaystyle S_{A/B}=-\Sigma_{i}\lambda_{i}\ln(\lambda_{i}).$ (6)
One important result of this definition is the symmetry between the EE of the
two regions $S_{A}=S_{B}$.
Following Cheong and Henley cheong04 , one can write the pure state
$|\Psi\rangle=\displaystyle\sum_{i}|i_{A}\rangle|\phi_{B,i}\rangle$, where
$|i_{A}\rangle$ is a complete orthonormal many-body basis of region A, while
$|\phi_{B,i}\rangle$ is the state in region B associated with $|i_{A}\rangle$.
Please note that $|\phi_{B,i}\rangle$ is not normalized. Using that notation,
the reduced density matrix,
$\displaystyle\hat{\rho}_{A}=\displaystyle\sum_{i,j=1}^{N}|i_{A}\rangle\langle
j_{A}|,$ (7)
or in matrix form:
$\displaystyle\rho_{A}(i,j)=\langle\phi_{B,i}|\phi_{B,j}\rangle.$ (8)
Utilizing the occupation basis in the A region, i.e.,
$|i_{A}\rangle=|n^{i}_{1},n^{i}_{2},n^{i}_{3},\ldots,n^{i}_{N_{A}}\rangle$
(where $n^{i}_{j}=0,1$), one can define an operator,
${\hat{K}}_{i}=\displaystyle\prod_{s=1}^{N_{A}}[n^{i}_{s}{\hat{a}}^{\dagger}_{s}+(1-n^{i}_{s}){\hat{a}}_{s}{\hat{a}}^{\dagger}_{s}]$,
resulting in:
$\displaystyle\rho_{A}(i,j)=\langle\Psi|{\hat{K}}_{i}^{\dagger}{\hat{K}}_{j}|\Psi\rangle.$
(9)
It is important to note that $\rho_{A}(i,j)\neq 0$ only for states which have
the same number of particles $n^{i}_{A}$ in region A, where
$n^{i}_{A}=\displaystyle\sum_{s=1}^{N}n^{i}_{s}$. Thus, in this basis, the
reduced density matrix is composed of blocks which increases in size with
$n^{i}_{A}$. Thus for $n^{i}_{A}=0$, the block size is one, for $n^{i}_{A}=1$
it is $N_{A}$, for $n^{i}_{A}=2$ it is $N_{A}(N_{A}-1)/2$, etc.
Thus, the task of calculating the EE of a region A at a given excitation
$|\Psi\rangle$ is equivalent to calculating the eigenvalues of the matrix
$\rho_{A}(i,j)$. Since the blocks are uncoupled, it is possible to diagonalize
each block with a given number of particles, $\rho_{A}^{(\ell)}(i,j)$,
independently (where $\ell$ denotes the number of particles in region A). For
a state $|\Psi\rangle$, which is the ground state of a half-filled ring at
$N\rightarrow\infty$, it is possible (in several different ways) to show that
$S_{A}=-(1/3)\ln(x)+{\rm Const}$ amico08 , where $x=N_{A}/N$. For the excited
states the task becomes more difficult, and no simple and general result for
$S_{A}$ exists alba09 . Here we will calculate the $S_{A}$ for two-particle
excitations.
For the sake of completeness, let us first consider single-particle state entanglement. In this simple case $\rho_{A}(i,j)$ is composed of two blocks:
$\rho_{A}^{(0)}(1,1)$ and $\rho_{A}^{(1)}(i,j)$ (where $i,j=1\ldots N_{A}$).
Direct evaluation of Eq. (9) for any single-particle state
$|\Psi\rangle=|k\rangle$ results in $\rho_{A}^{(0)}(1,1)=1-x$, while
$\rho_{A}^{(1)}(i,j)=(1/N)\exp(-\imath p(i-j))$. The latter is a Toeplitz
matrix, with one eigenvalue equal to $x$ and $N_{A}-1$ zero eigenvalues. As
expected, the EE for any single particle eigenstate $|k\rangle$ is equal to
$\displaystyle S_{A}=-x\ln(x)-(1-x)\ln(1-x)$ (10)
and does not depend on $|k\rangle$.
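As a numerical cross-check of Eq. (10) (a sketch of ours, assuming numpy), one can build the two blocks of $\rho_{A}$ for a single-particle eigenstate and evaluate Eq. (6) directly:

```python
import numpy as np

def single_particle_EE(N, N_A, k):
    """EE of the eigenstate |k> of a clean N-site ring for a region of N_A
    sites, obtained from the two blocks of rho_A discussed above."""
    p = 2 * np.pi * k / N
    x = N_A / N
    i = np.arange(N_A)
    block1 = np.exp(-1j * p * (i[:, None] - i[None, :])) / N   # Toeplitz block
    lam = np.concatenate(([1 - x], np.linalg.eigvalsh(block1)))
    lam = lam[lam > 1e-12]
    return float(-np.sum(lam * np.log(lam)))

N, N_A, k = 1000, 250, 17
x = N_A / N
print(single_particle_EE(N, N_A, k))          # numerically
print(-x*np.log(x) - (1-x)*np.log(1-x))       # Eq. (10)
```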
A note of caution is in order. Since the eigenvalues for $k$ and $-k$ are
degenerate, any linear combination of $|k\rangle$ and $|-k\rangle$ is also an
eigenstate of Eq. (1). These linear combinations generally have different values of
the EE, and therefore, strictly speaking, the EE for degenerate excited states
is ill defined. We circumvent this problem by introducing a degeneracy
breaking magnetic flux $\phi$ into Eq. (1). As long as the degeneracy is
broken the EE of any excited state $|k_{1},k_{2}\rangle$ is well defined and
does not depend on $\phi$.
For two-particle states $|k_{1},k_{2}\rangle$, the reduced density matrix is
composed of three blocks: $\rho_{A}^{(0)}(1,1)$, $\rho_{A}^{(1)}(i,j)$ (of
size $N_{A}$) and $\rho_{A}^{(2)}(i,j)$ (of size $N_{A}(N_{A}-1)/2$). For the zero-particle block:
$\rho_{A}^{(0)}(1,1)=\frac{1}{N^{2}}\displaystyle\sum_{j_{1}>j_{2}>N_{A}}^{N}|A(j_{1},k_{1},j_{2},k_{2})|^{2}=(1-x)^{2}-y^{2},$ (11)
where $y=\sin(\pi(k_{2}-k_{1})x)/(\pi(k_{2}-k_{1}))$. Thus the eigenvalue of
this block is $(1-x)^{2}-y^{2}$. Using symmetry, one can immediately deduce
the eigenvalues of the two-particle reduced density matrix, without actually
diagonalizing the matrix of size $N_{A}(N_{A}-1)/2$. Since $S_{A}=S_{B}$, the
contribution to the EE from $\rho_{A}^{(2)}(i,j)$ must be equal to the
contribution from $\rho_{B}^{(0)}(1,1)$. This implies that
$\rho_{A}^{(2)}(i,j)$ has only one non-zero eigenvalue. Since region B's
length is $N-N_{A}$, according to Eq. (11), the non-zero eigenvalue of
$\rho_{A}^{(2)}(i,j)$ is equal to $x^{2}-y^{2}$.
The one-particle block density matrix is given by:
$\rho_{A}^{(1)}(i,j)=\frac{1}{N^{2}}\displaystyle\sum_{j_{1}>N_{A}}^{N}A^{*}(j_{1},k_{1},i,k_{2})A(j_{1},k_{1},j,k_{2})=\frac{e^{-\imath p_{1}(i-j)}}{N}\bigg{(}(1-x)\big{(}1+e^{-\imath(p_{2}-p_{1})(i-j)}\big{)}+\frac{1}{\imath N(p_{2}-p_{1})}\big{[}e^{-\imath(p_{2}-p_{1})i}\big{(}e^{\imath(p_{2}-p_{1})N_{A}}-1\big{)}-e^{\imath(p_{2}-p_{1})j}\big{(}e^{-\imath(p_{2}-p_{1})N_{A}}-1\big{)}\big{]}\bigg{)}.$ (12)
This cumbersome form is substantially simplified when $k_{2}-k_{1}$ is large.
In that case the second term in Eq. (12) may be neglected and the density
matrix block has a Toeplitz form
$\displaystyle\rho_{A}^{(1)}(i,j)=\frac{1-x}{N}\left(e^{-\imath
p_{1}(i-j)}+e^{-\imath p_{2}(i-j)}\right),$ (13)
with $N_{A}-2$ zero eigenvalues, and two degenerate eigenvalues equal to
$x(1-x)$. The second term is negligible also when $x=N_{A}/N\sim 1/2$, and
$k_{2}-k_{1}$ is even, resulting in the same eigenvalues. We do not have the
general solution, nevertheless, it can be shown numerically that
$\rho_{A}^{(1)}(i,j)$ has no more than two non-zero eigenvalues, which depend
only on the difference $k_{2}-k_{1}$. Moreover, since the trace of the density
matrix must equal one, the sum of those two
eigenvalues must be $2x(1-x)+2y^{2}$. For the ground state (and excitations
for which $k_{2}-k_{1}=1$) the two eigenvalues are well described by
$2x(1-x)+(2-1/\pi)y^{2}$ and $y^{2}/\pi$.
Thus, the EE of two-particle states composed of two single-particle states
with significantly different wave numbers, which constitute the majority of the
two-particle states, is
$\displaystyle S_{A}(k_{2}-k_{1}\gg 1)=-2\left[(1-x)\ln(1-x)+x\ln(x)\right].$
(14)
This is twice the EE of a single-particle state (Eq. (10)). Thus, as long as
the two occupied states $|k_{1}\rangle$ and $|k_{2}\rangle$ are far enough
from each other, the two-particle EE is just the sum of the EE of each
occupied state. In the opposite limit
$S_{A}(k_{2}-k_{1}=1)\sim-\big{(}(1-x)^{2}-y^{2}\big{)}\ln\big{(}(1-x)^{2}-y^{2}\big{)}-\big{(}x^{2}-y^{2}\big{)}\ln\big{(}x^{2}-y^{2}\big{)}-\big{[}y^{2}/\pi\big{]}\ln\big{[}y^{2}/\pi\big{]}-\big{[}2x(1-x)+(2-1/\pi)y^{2}\big{]}\ln\big{[}2x(1-x)+(2-1/\pi)y^{2}\big{]}.$ (15)
The EE curves for other values of $k_{2}-k_{1}$ can be calculated numerically
by diagonalizing the $N_{A}\times N_{A}$ matrix representing $\rho_{A}^{(1)}$
(the two other eigenvalues, for $\rho_{A}^{(0)}$ and $\rho_{A}^{(2)}$, are given
by Eq. (11)). The results are depicted in Fig. 1. The values of the EE for
any value of $k_{2}-k_{1}$ fall between those two limits, but as can be
seen in Fig. 1 they quite quickly approach the $k_{2}-k_{1}\gg 1$ curve. Since
there is a large phase space for $k_{2}-k_{1}\gg 1$, a typical two-particle
excitation has the EE described in Eq. (14).
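The limiting forms (14) and (15) can be cross-checked numerically. The sketch below is our own and uses the standard correlation-matrix method for eigenstates of free fermions (an alternative to diagonalizing the blocks of $\rho_{A}$ one by one; it assumes numpy); for well separated $k_{1},k_{2}$ it reproduces Eq. (14):

```python
import numpy as np

def two_particle_EE(N, N_A, k1, k2):
    """EE of |k1,k2> from the correlation matrix C_ij = <a_i^dag a_j>
    restricted to region A: S = -sum[z ln z + (1-z) ln(1-z)] over its
    eigenvalues z (valid for any Slater-determinant state)."""
    p1, p2 = 2*np.pi*k1/N, 2*np.pi*k2/N
    d = np.arange(N_A)[:, None] - np.arange(N_A)[None, :]
    C = (np.exp(1j*p1*d) + np.exp(1j*p2*d)) / N
    z = np.linalg.eigvalsh(C)
    z = z[(z > 1e-12) & (z < 1 - 1e-12)]
    return float(-np.sum(z*np.log(z) + (1-z)*np.log(1-z)))

N, x = 1000, 0.25
print(two_particle_EE(N, int(x*N), 100, 400))        # k2 - k1 >> 1
print(-2*(x*np.log(x) + (1-x)*np.log(1-x)))          # Eq. (14)
```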
Figure 1: (Color online) The EE of a two particle state $|k_{1},k_{2}\rangle$
of a clean system as a function of region A's size $x=N_{A}/N$ for a system of
length $N=1000$. The heavy lines correspond to the analytic predictions
(dotted for $k_{2}-k_{1}=1$, Eq. (15); dashed for $k_{2}-k_{1}\gg 1$, Eq.
(14)). The thin lines pertain to the numerically calculated EE for all values
of $k_{2}-k_{1}$ between $1$ and $30$, where odd and even values are drawn in
different colors. It is clear that for $k_{2}-k_{1}$ larger
than $5$ the numerical curves fit Eq. (14) quite well. It is also clear that
all even $k_{2}-k_{1}$ reach the same EE value $S_{A}(x=1/2)=\ln(4)$ once
$N_{A}=N/2$.
Another interesting behavior that can be gleaned from Fig. 1 is that all two-
particle states of even $k_{2}-k_{1}$ reach the same EE value at $N_{A}=N/2$.
This stems from the structure of $\rho_{A}^{(1)}(i,j)$ (Eq. (12)), where the
last two terms are multiplied by
$e^{-i(p_{2}-p_{1})N_{A}}-1=e^{-i\pi(k_{2}-k_{1})}-1=0$ (since
$p_{2}-p_{1}=2\pi(k_{2}-k_{1})/N$), so the density matrix block reverts to
the Toeplitz form of Eq. (13), with the corresponding two degenerate
eigenvalues equal to $1/4$. Since at $N_{A}=N/2$ one has $x=1/2$ and $y=0$, the two
other block eigenvalues are also $1/4$, resulting in $S_{A}(k_{2}-k_{1}={\rm
even},x=1/2)=\ln(4)$. Thus the largest EE in a two-particle clean ring system
is equal to $\ln(4)$.
## III Interacting Clean Ring
Incorporating nearest-neighbor electron-electron interactions into the system
results in adding an interaction term given by
$\displaystyle H_{\rm int}$ $\displaystyle=$ $\displaystyle
U\displaystyle\sum_{j=1}^{N}{\hat{a}}^{\dagger}_{j}{\hat{a}}_{j}{\hat{a}}^{\dagger}_{j+1}{\hat{a}}_{j+1}$
(16)
to the Hamiltonian, $H$, depicted in Eq. (1). In a clean system it is well
known that far from half-filling the system behaves as a Luttinger liquid for
any value of $U$ giamarchi . For the ground state EE of a clean system at
half-filling (and $U<2$, i.e., a Luttinger liquid) the EE changes only by an
overall constant amico08 ; berkovits12 , while retaining the same logarithmic
dependence. Thus, we expect that the EE of the two-particle states in a clean
system will not be essentially affected by the presence or absence of
electron-electron interactions. Unfortunately, it is not possible to calculate
analytically the two-particle states of the interacting system. Thus, we must
rely on a numerical solution for the problem.
Exact diagonalization is used to calculate all the eigenvectors of $H+H_{\rm
int}$, represented by a matrix of dimension $N(N-1)/2$. We have chosen a $100$ site system,
resulting in a matrix of size $4950$. A reduced density matrix $\rho_{A}$, of
size $1+N_{A}+N_{A}(N_{A}-1)/2$, is then constructed and diagonalized for each
eigenstate, and the EE is calculated from its eigenvalues according to Eq.
(6). The results are shown in Fig. 2, where the EE of $31$ states around the
ground state (i.e., the ground state and the 1st - 30th excitations) and at a quarter
of the two-particle band (the 1222nd - 1252nd excitations) are shown. In both cases
the EE for the non-interacting ($U=0$) and the interacting ($U=1$) cases are
almost equal (the interacting case is larger by a minute constant, of order
$10^{-4}$, which cannot be resolved at the resolution of the figure). As
expected, around the ground state the excitations belong to the low
$k_{2}-k_{1}$ sector, while for the higher excitations most states correspond
to large values of $k_{2}-k_{1}$, i.e., they are well described by Eq. (14).
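A minimal exact-diagonalization sketch of the procedure just described is given below. It is our own illustration (assuming numpy; the function name and parameter choices are ours): the two-particle Hamiltonian is built in the pair basis $|j_{1}<j_{2}\rangle$, and the EE of a chosen eigenstate is obtained from the Schmidt decomposition between regions A and B, which is equivalent to diagonalizing the blocks of $\rho_{A}$.

```python
import numpy as np
from itertools import combinations

def ring_two_particle_EE(N, N_A, U, W=0.0, level=0, t=1.0, seed=0):
    """Two spinless fermions on an N-site ring with nearest-neighbour
    interaction U and optional on-site disorder of width W; returns the EE
    of the `level`-th eigenstate for a region of N_A sites."""
    rng = np.random.default_rng(seed)
    eps = rng.uniform(-W / 2, W / 2, size=N)
    pairs = list(combinations(range(N), 2))        # basis states |j1 < j2>
    index = {p: n for n, p in enumerate(pairs)}
    H = np.zeros((len(pairs), len(pairs)))
    for n, (j1, j2) in enumerate(pairs):
        H[n, n] = eps[j1] + eps[j2]
        if j2 - j1 in (1, N - 1):                  # neighbours on the ring
            H[n, n] += U
        for j in (j1, j2):                         # hop each particle
            other = j2 if j == j1 else j1
            for jp in ((j + 1) % N, (j - 1) % N):
                if jp == other:                    # Pauli blocking
                    continue
                m = index[tuple(sorted((jp, other)))]
                # fermionic sign: -1 when the hopping particle crosses the other one
                sign = 1.0 if (jp < other) == (j < other) else -1.0
                H[m, n] += -t * sign
    vals, vecs = np.linalg.eigh(H)
    psi = vecs[:, level]
    # Schmidt decomposition: split every pair into its A- and B-occupations
    a_idx, b_idx, M = {}, {}, {}
    for n, (j1, j2) in enumerate(pairs):
        a = tuple(j for j in (j1, j2) if j < N_A)
        b = tuple(j for j in (j1, j2) if j >= N_A)
        M[(a_idx.setdefault(a, len(a_idx)), b_idx.setdefault(b, len(b_idx)))] = psi[n]
    mat = np.zeros((len(a_idx), len(b_idx)))
    for (ia, ib), c in M.items():
        mat[ia, ib] = c
    lam = np.linalg.eigvalsh(mat @ mat.T)          # eigenvalues of rho_A
    lam = lam[lam > 1e-12]
    return float(-np.sum(lam * np.log(lam)))

print(ring_two_particle_EE(N=20, N_A=5, U=1.0))    # small clean interacting ring
```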
Figure 2: (Color online) The EE of 31 states of a clean system as a function of
region A's size $N_{A}$ for a system of length $N=100$, in the absence ($U=0$,
black line) or presence ($U=1$, dotted red line) of electron-electron
interactions. Panel (a) depicts states in the vicinity of the ground-state
(the ground state and the 1st - 30th excitations), panel (b) shows the excitations
around a quarter of the two-particle band (the 1222nd - 1252nd excitations). The
heavy dashed line corresponds to the analytic prediction for $k_{2}-k_{1}\gg
1$ (Eq. (14)). For a clean system the interaction has no influence on the EE.
## IV Interacting Disordered Ring
When disorder is added to a non-interacting system, all single particle states
become localized. For the many-particle states, the behavior is more involved.
As long as no interaction is present, the many-particle states remain
localized, both for the ground state apel82 as well as for all the excited
states. Once interaction is introduced, the ground-state as well as low lying
excitations remain localized, while above a critical energy the many-particle
excitations are predicted to delocalize AGKL97 ; gornyi05 ; basko06 . This
transition, termed the many-body or Fock space localization transition, stems
from the interactions coupling excitations with a different number of
electron-hole pairs. This type of transition is irrelevant for two-particle
systems. Nevertheless, as argued by Shepelyansky and Imry shepelyansky94 ;
imry95 , interaction between the pair of particles should enhance the two-particle
localization length compared to the single-electron localization
length, as long as the two-particle level spacing is significantly smaller
than the single electron level spacing, i.e., for higher excitations.
Can we see any signature of the enhanced two-particle localization length in
the EE behavior of the excited states? First we have to understand the
influence of disorder on the EE. In Ref. berkovits12, it has been shown that
for the ground-state the EE saturates on the length scale of $\xi$, and does
not continue to grow logarithmically as in a clean system. Thus, the EE of a
disordered system is always lower than the EE of a clean system. One would
expect this feature to hold also for excited states. We check this assumption
by calculating the EE using the excitations of the Hamiltonian given in Eq.
(1), where the disorder is represented by a random on-site energy,
$\epsilon_{j}$ taken from a uniform distribution in the range $[-W/2,W/2]$.
For $W=3$, single electron states at the middle of the band are expected to
have a localization length $\xi\sim 10$ romer97 , while close to the band edge
the single electron states are supposed to be much more strongly localized
(Lifshitz tails) lifshitz64 .
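For orientation, the single-particle localization referred to here is easy to probe numerically. The sketch below (ours, assuming numpy; the inverse participation ratio is used only as a rough proxy for $\xi$) diagonalizes the disordered single-particle ring:

```python
import numpy as np

def band_center_ipr(N=100, W=3.0, t=1.0, seed=1):
    """Anderson ring: random on-site energies in [-W/2, W/2] plus
    nearest-neighbour hopping; the inverse participation ratio
    1 / sum_j |psi_j|^4 roughly counts the sites an eigenstate occupies."""
    rng = np.random.default_rng(seed)
    H = np.diag(rng.uniform(-W / 2, W / 2, size=N))
    for j in range(N):
        H[j, (j + 1) % N] = H[(j + 1) % N, j] = -t
    vals, vecs = np.linalg.eigh(H)
    ipr = 1.0 / np.sum(np.abs(vecs) ** 4, axis=0)
    mid = np.argsort(np.abs(vals))[:10]            # states closest to the band centre
    return ipr[mid]

print(band_center_ipr())    # compare with xi ~ 10 quoted above for W = 3
```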
The EE is calculated by exact diagonalization for systems of size $N=100$ as
described in the previous section. The results are presented in Fig. 3, which
shows the median EE in the vicinity of the ground-state (1st - 30th excitations), at
$1/16$ of the band (294th - 324th excitations), at $1/8$ of the band (603rd -
633rd excitations), and at $1/4$ of the band (1222nd - 1252nd excitations). The
median EE is taken across the 31 excitations in each segment and 50 different
realizations of disorder. We calculate the EE in the absence ($U=0$) and
presence ($U=1$, $U=3$) of electron-electron interactions. Around the ground
state, the interactions do not play a significant role, and for all cases the
EE is strongly suppressed compared to typical values for a clean system. This
is expected, since as mentioned, at the bottom of the band the states are
strongly localized, and therefore the EE is low.
For the non-interacting case, the EE of states higher in the band is less suppressed,
as the localization length grows. The EE for high excitations in the interacting
case is always larger than for the corresponding non-interacting states. This
is a clear signature of the effect of interactions on the localization of the
two-particle states, which become more entangled when interactions are present,
although there is no significant difference between $U=1$ and $U=3$. It is
also clear that for higher excitations (larger localization length) the
enhancement of the EE becomes stronger.
This enhancement could be expected on physical grounds. As has been shown
shepelyansky94 ; imry95 ; frahm95 ; weinmann95 ; vonoppen96 ; jacquod97 , the
localization length associated with an interacting two-particle state is
larger than for a non-interacting state with the same disorder. Thus, one
expects that the EE will also be larger and closer to its clean system value.
Figure 3: (Color online) The median EE of an ensemble of states for different
regions of the band, collected from 20 different realizations of strong
disorder ($W=3$), as a function of region A's size $N_{A}$ for a system of
length $N=100$. Continuous lines depict the median EE in the vicinity of the
ground-state (1st - 30th excitations), dotted lines at $1/16$ of the band
(294th - 324th excitations), dashed lines at $1/8$ of the band (603rd - 633rd
excitations), and dot-dashed lines at $1/4$ of the band (1222nd - 1252nd
excitations). Black lines correspond to no electron-electron interactions,
while red lines indicate the presence of moderate electron-electron
interactions ($U=1$), and green lines correspond to stronger interactions
($U=3$). The heavy dashed line corresponds to the maximum EE for a clean system
(Eq. (14)). Error bars represent the range between the 40th and 60th percentiles.
We therefore also investigate the case of weaker disorder, for which the
localization length is of order of the system size ($W=1$, $\xi\sim 100$). As
can be seen in Fig. 4, for the non-interacting case a similar pattern to the
one observed in Fig. 3 remains, although the EE is less suppressed by the weaker
disorder. As expected, the enhancement of the EE by interactions is stronger
for the weaker disorder. Surprisingly, above $1/8$ of the band (corresponding
to an excitation energy of $t$), the EE of the disordered interacting system
is significantly larger than the limit for a clean system ($\ln(4)$).
Nevertheless, extrapolating from the results presented in Fig. 3, increasing the
system size while keeping the disorder fixed should result in a decrease of the
EE below the clean system values once $L\gg\xi$. The increase above the clean
system excitation EE may stem from the fact that as long as the two particles
are confined within a single localization length the particles can not avoid
each other and spend much time close to each other, leading to an enhancement
of the EE. When the system size is much larger than the localization length,
the two particles can reside in different regions of the sample, and
interactions will not play an important role. However, this hand-waving
picture requires further study.
Figure 4: (Color online) As in Fig. 3, but for weak disorder ($W=1$). For high
enough excitation energy, the median EE in the presence of disorder and
interactions exceeds the maximum clean value indicated by the heavy dashed
line (Eq. (14)).
At first glance, these results seem to indicate that although interactions may
enhance the EE as long as $L<\xi$, they become irrelevant for $L\gg\xi$,
showing no support for the many-particle delocalization scenario AGKL97 ;
gornyi05 ; basko06 which should occur for $L\gg\xi$. This interpretation is
wrong, since the many-particle delocalization scenario deals with a constant
density of particles, and delocalization is predicted only when there are at
least a couple of particles in the range of a single particle localization
length. Thus the observed two-particle EE enhancement when the two particles
are within a distance of $\xi$, as well as the fact that the enhancement
increases significantly when the excitation energy, fits nicely with the
scenario promoted in Ref. basko06, . Of course, coupling between states with a
different number of electron-hole generation is crucial for the delocalization
scenario, and therefore a full demonstration of the delocalization transition
has to be performed for a finite electron density system. Nevertheless, the
fact that the two-particle behavior fits nicely with the delocalization
scenario is encouraging.
## V Conclusions
The properties of the EE of two particle excited states in a one-dimensional
ring were studied. For a clean system, the EE depends only on the difference
in momentum between the two particles. If the difference is large the EE
corresponds to the EE of two independent single particle states, i.e.,
$S_{A}=-2[x\ln(x)+(1-x)\ln(1-x)]$. On the other hand, if the momenta are
close, the EE of the two particle state is reduced compared to this value.
One may extrapolate that for $m$ particles on an $N$ site ring, as long as the
density is low ($m/N\ll 1$), the upper limit of the EE is
$S_{A}=-m[x\ln(x)+(1-x)\ln(1-x)]$, which is also the typical value. This will
be valid if the difference between the momenta of all particles taking part in
a particular many-particle excited state is large. If this is not the case we
expect the EE of the excited state to be lower. Further investigation of these
cases is underway.
We have verified numerically that disorder reduces the EE. Short-range
particle interaction leads to an enhancement of the excited-state EE, which
becomes very significant once the localization length is of the order of the system
size. For high excitations the median EE of a many-particle interacting
excitation is not only above the disordered case, but exceeds the clean system
limit. This may be related to the fact that localization forces the two
particles to dynamically spend more time in the vicinity of each other,
although this argument merits further study.
###### Acknowledgements.
Financial support from the Israel Science Foundation (Grant 686/10) is
gratefully acknowledged.
## References
* (1) For recent reviews see: L. Amico, R. Fazio, A. Osterloh, and V. Vedral, Rev. Mod. Phys. 80, 517 (2008); J. Eisert, M. Cramer, and M. B. Plenio, Rev. Mod. Phys. 82, 277 (2010); and references therein.
* (2) M. Vojta, Phil. Mag. 86, 1807 (2006).
* (3) K. Le Hur, Ann. Phys. 323, 2208 (2008).
* (4) M. Goldstein, Y. Gefen and R. Berkovits, Phys. Rev. B 83, 245112 (2011).
* (5) C. Holzhey, F. Larsen, and F. Wilczek, Nucl. Phys. B 424, 443 (1994).
* (6) G. Vidal, J. I. Latorre, E. Rico, and A. Kitaev, Phys. Rev. Lett. 90, 227902 (2003); J. I. Latorre, E. Rico, and G. Vidal, Quant. Inf. Comp. 4, 048 (2004).
* (7) P. Calabrese and J. Cardy, J. Stat. Mech. P06002 (2004).
* (8) R. Berkovits, Phys. Rev. Lett. 108, 176803 (2012).
* (9) V. Alba, M. Fagotti and P. Calabrese, J. Stat. Mech. P10020 (2009).
* (10) S. Sachdev, Quantum Phase Transitions (Cambridge University Press, 1999).
* (11) B. L. Altshuler, Y. Gefen, A. Kamenev, and L. S. Levitov, Phys. Rev. Lett. 78, 2803 (1997).
* (12) I. V. Gornyi, A. D. Mirlin, and D. G. Polyakov Phys. Rev. Lett. 95, 206603 (2005).
* (13) D. M. Basko, I. L. Aleiner, and B. L. Altshuler, Ann. Phys. (N.Y.) 321, 1126 (2006).
* (14) R. Berkovits and Y. Avishai, Phys. Rev. Lett. 80, 568 (1998); R. Berkovits and B. I. Shklovskii, J. Phys. Condens. Matter 11, 779 (1999).
* (15) V. Oganesyan and D. A. Huse, Phys. Rev. B 75, 155111 (2007); V. Oganesyan, A. Pal, and D. A. Huse, ibid. 80, 115104 (2009).
* (16) C. Monthus and T. Garel, Phys. Rev. B 81, 134202 (2010).
* (17) T. C. Berkelbach and D. R. Reichman, Phys. Rev. B 81, 224429 (2010).
* (18) A. Pal and D. A. Huse, Phys. Rev. B 82, 174411 (2010).
* (19) E. Canovi, D. Rossini, R. Fazio, G. E. Santoro, and A. Silva, Phys. Rev. B 83, 094431 (2011).
* (20) E. Cuevas, M. Feigel’man, L. Ioffe and M. Mezard. Nature Communications 3, 1128 (2012).
* (21) J. H. Bardarson, F. Pollmann, and J. E. Moore, Phys. Rev. Lett. 109, 017202 (2012).
* (22) F. C. Alcaraz and M. S. Sarandy, Phys. Rev. A 78 032319 (2008).
* (23) L. Masanes, Phys. Rev. A 80 052104 (2009).
* (24) M. I. Berganza, F. C. Alcaraz, and G. Sierra, J. Stat. Mech. P01016 (2012).
* (25) For a review see: P. A. Lee and T. V. Ramakrishnan, Rev. Mod. Phys. 57, 287 (1985).
* (26) D. L. Shepelyansky, Phys. Rev. Lett. 73, 2607 (1994).
* (27) Y. Imry, Europhys. Lett. 30, 405 (1995).
* (28) K. Frahm, A. Müller-Groeling, J.- L. Pichard, and D. Weinmann, Europhys. Lett. 31, 169 (1995).
* (29) D. Weinmann, A. Müller-Groeling, J.- L. Pichard, and K. Frahm, Phys. Rev. Lett. 75, 1598 (1995).
* (30) F. von Oppen, T. Wettig, and J. Müller, Phys. Rev. Lett. 76, 491 (1996).
* (31) Ph. Jacquod, D. L. Shepelyansky, and O. P. Sushkov, Phys. Rev. Lett. 78, 923 (1997).
* (32) S.-A. Cheong and C. L. Henley, Phys. Rev. B 69, 075111 (2004).
* (33) L. Gong and P. Tong, Phys. Rev. B 76, 085121 (2007).
* (34) T. Giamarchi, Quantum Physics in One Dimension (Oxford University Press, New York, 2003).
* (35) W. Apel, J. Phys. C 15, 1973 (1982); W. Apel and T. M. Rice, Phys. Rev. B 26, 7063 (1982); T. Giamarchi and H. J. Schulz, Phys. Rev. B 37, 325 (1988).
* (36) R. A. Römer and M. Schreiber, Phys. Rev. Lett. 78, 515 (1997).
* (37) I. M. Lifshitz, Adv. Phys. 13, 483 (1964).
# Pattern Count on Multiply Restricted Permutations
Alina F. Y. Zhao
School of Mathematical Sciences and Institute of Mathematics
Nanjing Normal University, Nanjing 210023, PR China
[email protected]
###### Abstract
Previous work has studied the pattern count on singly restricted permutations.
In this work, we focus on patterns of length $3$ in multiply restricted
permutations, especially for double and triple pattern-avoiding permutations.
We derive explicit formulae or generating functions for various occurrences of
length $3$ patterns on multiply restricted permutations, as well as some
combinatorial interpretations for non-trivial pattern relationships.
Mathematics Subject Classification: 05A05, 05A15, 05A19
## 1 Introduction
Let ${\sigma}={\sigma}_{1}{\sigma}_{2}\cdots{\sigma}_{n}$ be a permutation in
the symmetric group $S_{n}$ written in one-line notation, and ${\sigma}_{i}$
is said to be a left to right maximum (resp. right to left maximum) if
${\sigma}_{i}>{\sigma}_{j}$ for all $j<i$ (resp. $j>i$). For a permutation
$q\in S_{k}$, we say that ${\sigma}$ contains $q$ as a pattern if there exist
$1\leq i_{1}<i_{2}<\cdots<i_{k}\leq n$ such that the entries
${\sigma}_{i_{1}}{\sigma}_{i_{2}}\cdots{\sigma}_{i_{k}}$ have the same
relative order as the entries of $q$, i.e., $q_{j}<q_{l}$ if and only if
${\sigma}_{i_{j}}<{\sigma}_{i_{l}}$ whenever $1\leq j,l\leq k$. We say that
${\sigma}$ avoids $q$ if ${\sigma}$ does not contain $q$ as a pattern.
For a pattern $q$, denote by $S_{n}(q)$ the set of all permutations in $S_{n}$
that avoid the pattern $q$, and for $R\subseteq S_{k}$, we denote by
$S_{n}(R)=\bigcap_{q\in R}S_{n}(q)$, i.e., the set of permutations in $S_{n}$
which avoid every pattern contained in $R$. For two permutations ${\sigma}$
and $q$, we set $f_{q}({\sigma})$ as the number of occurrences of $q$ in
${\sigma}$ as a pattern, and we further denote the number of occurrences of
$q$ in a permutation set $\Omega$ by
$f_{q}(\Omega)=\sum_{{\sigma}\in\Omega}f_{q}({\sigma})$.
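For small $n$, all the quantities just defined can be computed by brute force. The following sketch (our own, included only for illustration) counts the occurrences of a pattern and evaluates $f_{q}(S_{n}(R))$ directly:

```python
from itertools import combinations, permutations

def occurrences(q, sigma):
    """Number of occurrences of the pattern q in the permutation sigma."""
    k = len(q)
    return sum(
        1 for idx in combinations(range(len(sigma)), k)
        if all((q[a] < q[b]) == (sigma[idx[a]] < sigma[idx[b]])
               for a in range(k) for b in range(a + 1, k)))

def f(q, n, R):
    """f_q(S_n(R)): total number of q-occurrences over all R-avoiding sigma."""
    return sum(occurrences(q, s) for s in permutations(range(1, n + 1))
               if all(occurrences(r, s) == 0 for r in R))

# e.g. f_{213}(S_5(123,132)) = 17, in agreement with Table 1
print(f((2, 1, 3), 5, [(1, 2, 3), (1, 3, 2)]))
```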
Recently Cooper [5] raised the problem of determining the total number
$f_{q}(S_{n}(r))$ of all $q$-patterns in the $r$-avoiding permutations of
length $n$. Bóna [3] discovered the generating functions of the sequence
$f_{q}(S_{n}(132))$ for the monotone $q$, and Bóna [2] further studied the
generating functions for other length $3$ patterns in the set $S_{n}(132)$,
and showed both algebraically and bijectively that
$f_{231}(S_{n}(132))=f_{312}(S_{n}(132))=f_{213}(S_{n}(132)).$
Rudolph [7] proved equipopularity relations between general length $k$
patterns on $132$-avoiding permutations based on the structure of their
corresponding binary plane trees. Moreover, Homberger [6] also presented exact
formulae for each length $3$ pattern in the set $S_{n}(123)$. Therefore, the
singly restricted permutations have been well studied in previous work,
whereas the problem remains open for multiply restricted permutations, e.g.,
$S_{n}(123,132)$.
Set | Pattern count | Set | Pattern count
---|---|---|---
 | $f_{213}(n)=(n-3)2^{n-2}+1$ | | $\sum f_{231}(n)x^{n}=\sum f_{312}(n)x^{n}=\frac{x^{3}(1+2x)}{(1-x-x^{2})^{3}}$
$S_{n}(123,132)$ | $f_{231}(n)=f_{312}(n)=(n^{2}-5n+8)2^{n-3}-1$ | $S_{n}(123,132,213)$ | $\sum f_{321}(n)x^{n}=\frac{x^{3}(1+6x+12x^{2}+8x^{3})}{(1-x-x^{2})^{4}}$
| $f_{321}(n)=(n^{3}/3-2n^{2}+14n/3-5)2^{n-2}+1$ | | $f_{213}(n)=f_{312}(n)={n\choose 3}$
| $f_{123}(n)=(n-4)2^{n-1}+n+2$ | $S_{n}(123,132,231)$ | $f_{321}(n)=(n-2){n\choose 3}$
$S_{n}(132,213)$ | $f_{231}(n)=f_{312}(n)=(\frac{n^{2}}{4}-\frac{7n}{4}+4)2^{n}-n-4$ | | $f_{123}(n)=f_{312}(n)={n+1\choose 4}$
| $f_{321}(n)=(\frac{1}{12}n^{3}-\frac{3}{4}n^{2}+\frac{38}{12}n-6)2^{n}+n+6$ | $S_{n}(132,213,231)$ | $f_{321}(n)=n(n-2)(n-1)^{2}/12$
$S_{n}(132,231)$ | $f_{123}(n)=f_{213}(n)=f_{312}(n)=f_{321}(n)=\frac{2^{n}}{8}{n\choose 3}$ | | $f_{213}(n)=f_{231}(n)={n\choose 3}$
$S_{n}(132,312)$ | $f_{123}(n)=f_{213}(n)=f_{231}(n)=f_{321}(n)=\frac{2^{n}}{8}{n\choose 3}$ | $S_{n}(123,132,312)$ | $f_{321}(n)=(n-2){n\choose 3}$
| $f_{213}(n)=f_{231}(n)=f_{312}(n)={n+2\choose 5}$ | | $f_{132}(n)=f_{213}(n)={n+1\choose 4}$
$S_{n}(132,321)$ | $f_{123}(n)=\frac{7n^{5}}{120}-\frac{n^{4}}{3}+\frac{17n^{3}}{24}-\frac{2n^{2}}{3}+\frac{7}{30}$ | $S_{n}(123,231,312)$ | $f_{321}(n)=\frac{1}{12}n(n-2)(n-1)^{2}$
Table 1: The pattern count on doubly and triply restricted permutations.
In this paper, we are interested in pattern count on multiply restricted
permutations $S_{n}(R)$ for $R\subset S_{3}$, especially for double and triple
restrictions. We derive explicit formulae or generating functions for the
occurrences of each length $3$ pattern in multiply restricted permutations,
and the detailed results are summarized in Table 1. Also, we present some
combinatorial interpretations for non-trivial pattern relationships. It is
trivial to consider restricted permutations of higher multiplicity since
there are only finitely many such permutations, as shown in [8]. Therefore, this work
presents a complete study on length $3$ patterns of multiply restricted
permutations.
## 2 Doubly Restricted Permutations
For ${\sigma}\in S_{n}$, the following three operations are very useful in
pattern avoiding enumeration. The complement of ${\sigma}$ is given by
${\sigma}^{c}=(n+1-{\sigma}_{1})(n+1-{\sigma}_{2})\cdots(n+1-{\sigma}_{n})$,
its reverse is defined as
${\sigma}^{r}={\sigma}_{n}\cdots{\sigma}_{2}{\sigma}_{1}$ and the inverse
${\sigma}^{-1}$ is the group theoretic inverse permutation. For any set of
permutations $R$, let $R^{c}$ be the set obtained by complementing each
element of $R$, and the set $R^{r}$ and $R^{-1}$ are defined analogously.
###### Lemma 2.1.
Let $R\subseteq S_{k}$ be any set of permutations in $S_{k}$, and ${\sigma}\in
S_{n}$, we have
${\sigma}\in S_{n}(R)\Leftrightarrow{\sigma}^{c}\in
S_{n}(R^{c})\Leftrightarrow{\sigma}^{r}\in
S_{n}(R^{r})\Leftrightarrow{\sigma}^{-1}\in S_{n}(R^{-1}).$
From Lemma 2.1 and the known results on $S_{n}(123)$ and $S_{n}(132)$, we can
obtain each length $3$ pattern count in the singly restricted permutations
$S_{n}(r)$ for $r=213,231,312$ and $321$. In this section, we focus on the
number of length $3$ patterns in all doubly restricted permutations.
A composition of $n$ is an expression of $n$ as an ordered sum of positive
integers, and we say that $c$ has $k$ parts or $c$ is a $k$-composition if
there are exactly $k$ summands in the composition $c$. Denote by
$\mathcal{C}_{n}$ and $\mathcal{C}_{n,k}$ the set of all compositions of $n$
and the set of $k$-compositions of $n$, respectively. It is known that
$|\mathcal{C}_{n}|=2^{n-1}$ and $|\mathcal{C}_{n,k}|={n-1\choose k-1}$ for
$n\geq 1$ with $|\mathcal{C}_{0}|=1$. For more details, see [9].
We begin with some enumerative results on compositions.
###### Lemma 2.2.
For $n\geq 1$, we have
$\displaystyle a(n)$ $\displaystyle:=$ $\displaystyle\sum_{k\geq 1\atop
c_{1}+\cdots+c_{k-1}+c_{k}=n}c_{k}=2^{n}-1,$ (2.1) $\displaystyle b(n)$
$\displaystyle:=$ $\displaystyle\sum_{k\geq 1\atop
c_{1}+\cdots+c_{k-1}+c_{k}=n}c_{k}(c_{k}-1)=2^{n+1}-2n-2,$ (2.2)
where the sum takes over all compositions of $n$.
Proof. For $c_{k}=m$, we can regard $c_{1}+c_{2}+\cdots+c_{k-1}$ as a
composition of $n-m$. It is easy to see that the number of compositions of
$n-m$ is $2^{n-m-1}$ for $1\leq m\leq n-1$, and there is one empty
composition. It follows that
$\displaystyle a(n)$ $\displaystyle=n+\sum_{m=1}^{n-1}m2^{n-m-1},$
$\displaystyle b(n)$ $\displaystyle=n(n-1)+\sum_{m=1}^{n-1}m(m-1)2^{n-m-1}.$
Let $g(x)=\sum_{i=0}^{n-1}x^{i}=\frac{1-x^{n}}{1-x}$, and we have
$\displaystyle
g^{\prime}(x)=\sum_{i=1}^{n-1}ix^{i-1}=\frac{(n-1)x^{n}-nx^{n-1}+1}{(1-x)^{2}},$
$\displaystyle
g^{\prime\prime}(x)=\sum_{i=1}^{n-1}i(i-1)x^{i-2}=\frac{(3n-n^{2}-2)x^{n}+(2n^{2}-4n)x^{n-1}+(n-n^{2})x^{n-2}+2}{(1-x)^{3}}.$
By setting $x=1/2$ in $g^{\prime}(x)$ and $g^{\prime\prime}(x)$, we get
$\displaystyle g^{\prime}(1/2)$ $\displaystyle=$ $\displaystyle
2^{2}((n-1)2^{-n}-n2^{-n+1}+1),$ $\displaystyle g^{\prime\prime}(1/2)$
$\displaystyle=$ $\displaystyle
2^{3}\left[(3n-n^{2}-2)2^{-n}+(2n^{2}-4n)2^{-n+1}+(n-n^{2})2^{-n+2}+2\right].$
Observing $a(n)=2^{n-2}g^{\prime}(1/2)+n$ and
$b(n)=2^{n-3}g^{\prime\prime}(1/2)+n(n-1)$, this lemma follows as desired.
###### Lemma 2.3.
For $n\geq 1$, we have
$\displaystyle c(n)$ $\displaystyle:=$ $\displaystyle\sum_{k\geq 1\atop
c_{1}+\cdots+c_{k-1}+c_{k}=n}k=(n+1)2^{n-2},$ (2.3) $\displaystyle d(n)$
$\displaystyle:=$ $\displaystyle\sum_{k\geq 1\atop
c_{1}+\cdots+c_{k-1}+c_{k}=n}k(k-1)=(n^{2}+n-2)2^{n-3},$ (2.4)
where the sum takes over all compositions of $n$.
Proof. Since the number of compositions of $n$ with $k$ parts is ${n-1\choose
k-1}$, we have
$c(n)=\sum_{k=1}^{n}k{n-1\choose k-1}\text{ and
}d(n)=\sum_{k=1}^{n}k(k-1){n-1\choose k-1}.$
Let $h(x)=x\sum_{i=1}^{n}{n-1\choose i-1}x^{i-1}=x(1+x)^{n-1}$. Then
$\displaystyle h^{\prime}(x)$ $\displaystyle=$
$\displaystyle\sum_{i=1}^{n}i{n-1\choose i-1}x^{i-1}=(nx+1)(1+x)^{n-2},$
$\displaystyle h^{\prime\prime}(x)$ $\displaystyle=$
$\displaystyle\sum_{i=1}^{n}i(i-1){n-1\choose
i-1}x^{i-2}=\big{(}n^{2}x+n(2-x)-2\big{)}(1+x)^{n-3}.$
We complete the proof by putting $x=1$ in the above formulae.
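Both lemmas are easily checked for small $n$ by enumerating all $2^{n-1}$ compositions; the sketch below is ours and serves only as a sanity check:

```python
from itertools import product

def compositions(n):
    """All compositions of n, one for each of the 2**(n-1) cut patterns."""
    for cuts in product((0, 1), repeat=n - 1):
        parts, run = [], 1
        for cut in cuts:
            if cut:
                parts.append(run)
                run = 1
            else:
                run += 1
        parts.append(run)
        yield parts

n = 8
comps = list(compositions(n))
a = sum(c[-1] for c in comps)
b = sum(c[-1] * (c[-1] - 1) for c in comps)
c_ = sum(len(c) for c in comps)
d = sum(len(c) * (len(c) - 1) for c in comps)
print(a == 2**n - 1, b == 2**(n + 1) - 2*n - 2,                      # Lemma 2.2
      c_ == (n + 1) * 2**(n - 2), d == (n*n + n - 2) * 2**(n - 3))   # Lemma 2.3
```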
Based on Lemma 2.1, Simion and Schmidt [8] showed that the pairs of patterns
among the total ${6\choose 2}=15$ cases fall into the following $6$ classes.
###### Proposition 2.4 ([8]).
For every symmetric group $S_{n}$,
1. 1.
$|S_{n}(123,132)|=|S_{n}(123,213)|=|S_{n}(231,321)|=|S_{n}(312,321)|=2^{n-1}$;
2. 2.
$|S_{n}(132,213)|=|S_{n}(231,312)|=2^{n-1}$;
3. 3.
$|S_{n}(132,231)|=|S_{n}(213,312)|=2^{n-1}$;
4. 4.
$|S_{n}(132,312)|=|S_{n}(213,231)|=2^{n-1}$;
5. 5.
$|S_{n}(132,321)|=|S_{n}(123,231)|=|S_{n}(123,312)|=|S_{n}(213,321)|={n\choose
2}+1$;
6. 6.
$|S_{n}(123,321)|=0$ for $n\geq 5$.
Therefore, it is sufficient to consider the pattern count of the first set for
each class in the subsequent sections, and the other sets can be obtained by
taking the complement or reverse or inverse of the known results.
### 2.1 Pattern Count on $(123,132)$-Avoiding Permutations
We first present a bijection between $S_{n}(123,132)$ and $\mathcal{C}_{n}$ as
follows:
###### Lemma 2.5.
There is a bijection $\varphi_{1}$ between the sets $S_{n}(123,132)$ and
$\mathcal{C}_{n}$.
Proof. For any given ${\sigma}\in S_{n}(123,132)$, let
${\sigma}_{i_{1}},{\sigma}_{i_{2}},\ldots,{\sigma}_{i_{k}}$ be the $k$ right
to left maxima with $i_{1}<i_{2}<\cdots<i_{k}$, which yields that
$c=i_{1}+(i_{2}-i_{1})+\cdots+(i_{k-1}-i_{k-2})+(i_{k}-i_{k-1})$ is a
composition of $n$ since $i_{k}=n$. Conversely, let
$m_{i}=n-(c_{1}+\cdots+c_{i-1})$ for any given
$n=c_{1}+c_{2}+\cdots+c_{k}\in{\mathcal{C}}_{n}$, and we set
$\tau_{i}=m_{i}-1,m_{i}-2,\ldots,m_{i}-c_{i}+1,m_{i}$ for $1\leq i\leq k$.
Therefore, ${\sigma}=\tau_{1},\tau_{2},\ldots,\tau_{k}\in S_{n}(123,132)$ is
as desired. For example, we have ${\sigma}=8\,9\,7\,5\,4\,3\,6\,1\,2$ for the
composition $9=2+1+4+2$.
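A short rendering of $\varphi_{1}^{-1}$ (ours, for illustration) reproduces the example above:

```python
def phi1_inverse(parts):
    """Composition (c_1, ..., c_k) of n -> permutation in S_n(123,132),
    following the construction in the proof of Lemma 2.5."""
    n, used, sigma = sum(parts), 0, []
    for c in parts:
        m = n - used                    # m_i = n - (c_1 + ... + c_{i-1})
        sigma += list(range(m - 1, m - c, -1)) + [m]
        used += c
    return sigma

print(phi1_inverse([2, 1, 4, 2]))   # [8, 9, 7, 5, 4, 3, 6, 1, 2]
```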
For a pattern $q$, we denote by $f_{q}(n):=\sum_{\sigma\in
S_{n}(123,132)}f_{q}(\sigma)$, i.e., the number of occurrences of the pattern
$q$ in $S_{n}(123,132)$. For simplicity, we will use this notation in
subsequent sections when the set in question is unambiguous. For convenience,
we denote by $\tau_{i}>\tau_{j}$ if all the elements in the subsequence
$\tau_{i}$ are larger than all the elements in subsequence $\tau_{j}$. Based
on Lemma 2.5, we have
###### Proposition 2.6.
For $n\geq 3$,
$\displaystyle f_{213}(n)$ $\displaystyle=$ $\displaystyle\sum_{k\geq 1\atop
c_{1}+c_{2}+\cdots+c_{k}=n}\sum_{i=1}^{k}{c_{i}-1\choose 2},$ (2.5)
$\displaystyle f_{231}(n)$ $\displaystyle=$ $\displaystyle\sum_{k\geq 1\atop
c_{1}+c_{2}+\cdots+c_{k}=n}\sum_{i=1}^{k-1}\sum_{j=i+1}^{k}c_{j}(c_{i}-1).$
(2.6)
Proof. For each permutation ${\sigma}\in S_{n}(123,132)$ with
$\varphi_{1}({\sigma})=c_{1}+c_{2}+\cdots+c_{k}$, we can rewrite ${\sigma}$ as
${\sigma}=\tau_{1},\tau_{2},\ldots,\tau_{k}$ from Lemma 2.5. For $j>i$, since
$\tau_{i}>\tau_{j}$, and the elements except the last one are decreasing in
$\tau_{i}$, the pattern $213$ can only occur in every subsequence $\tau_{i}$.
Thus, we have ${c_{i}-1\choose 2}$ choices to choose two elements in
$\tau_{i}$ to play the role of $21$, and the last element of $\tau_{i}$ plays
the role of $3$. Summing up the numbers of $213$-patterns over the subsequences
$\tau_{1},\tau_{2},\ldots,\tau_{k}$ gives formula (2.5).
For the pattern $231$, we have $c_{i}-1$ choices in the subsequence $\tau_{i}$
to choose one element playing the role of $2$ and one choice (always the last
element of $\tau_{i}$) for $3$, and then, we have $c_{i+1}+\cdots+c_{k}$
choices to choose one element for the role of $1$ since all the elements after
$\tau_{i}$ are smaller than those in $\tau_{i}$. Summing up all the number of
$231$-patterns according to the position of $3$ gives the formula (2.6).
Based on the previous analysis, we now present our first main results of the
explicit formulae for pattern count in the set $S_{n}(123,132)$.
###### Theorem 2.7.
For $n\geq 3$, in the set $S_{n}(123,132)$, we have
$\displaystyle f_{213}(n)$ $\displaystyle=$ $\displaystyle(n-3)2^{n-2}+1,$
(2.7) $\displaystyle f_{231}(n)$ $\displaystyle=$ $\displaystyle
f_{312}(n)=(n^{2}-5n+8)2^{n-3}-1,$ (2.8) $\displaystyle f_{321}(n)$
$\displaystyle=$ $\displaystyle(n^{3}/3-2n^{2}+14n/3-5)2^{n-2}+1.$ (2.9)
Proof. From $S_{3}(123,132)=\\{213,231,312,321\\}$, it is obvious that
$f_{213}(3)=f_{231}(3)=1.$
To prove formula (2.7), from Prop 2.6, we observe that, for $n\geq 3$
$f_{213}(n+1)=\sum_{k\geq 1,c_{k}=1\atop
c_{1}+c_{2}+\cdots+c_{k}=n+1}\sum_{i=1}^{k}{c_{i}-1\choose 2}+\sum_{k\geq
1,c_{k}\geq 2\atop c_{1}+c_{2}+\cdots+c_{k}=n+1}\sum_{i=1}^{k}{c_{i}-1\choose
2}.$
If $c_{k}=1$, then $k\geq 2$, and we further have
$\sum_{k\geq 1,c_{k}=1\atop
c_{1}+c_{2}+\cdots+c_{k}=n+1}\sum_{i=1}^{k}{c_{i}-1\choose 2}=\sum_{k-1\geq
1\atop c_{1}+c_{2}+\cdots+c_{k-1}=n}\sum_{i=1}^{k-1}{c_{i}-1\choose
2}=f_{213}(n).$
If $c_{k}\geq 2$, then we set $c_{k}=1+r_{k}$, and from Lemma 2.2, it holds
that
$\displaystyle\sum_{k\geq 1,c_{k}\geq 2\atop
c_{1}+c_{2}+\cdots+c_{k}=n+1}\sum_{i=1}^{k}{c_{i}-1\choose 2}$
$\displaystyle=\sum_{k\geq 1\atop
c_{1}+\cdots+c_{k-1}+r_{k}=n}\left[\sum_{i=1}^{k-1}{c_{i}-1\choose
2}+{r_{k}-1\choose 2}+(r_{k}-1)\right]$ $\displaystyle=f_{213}(n)+\sum_{k\geq
1\atop c_{1}+\cdots+c_{k-1}+r_{k}=n}(r_{k}-1)$
$\displaystyle=f_{213}(n)+a(n)-\sum_{k\geq 1\atop
c_{1}+\cdots+c_{k-1}+r_{k}=n}1=f_{213}(n)+a(n)-2^{n-1}.$
Combining the above two cases, we have
$f_{213}(n+1)=2f_{213}(n)+2^{n-1}-1.$
This proves formula (2.7) by solving the recurrence with initial value
$f_{213}(3)=1$.
To prove formula (2.8), first observe that from Lemma 2.1, ${\sigma}\in
S_{n}(123,132)\Leftrightarrow{\sigma}^{-1}\in S_{n}(123,132)$, this implies
the first equality of formula (2.8) directly since $231^{-1}=312$. While for
the second equality of formula (2.8), by Prop 2.6, we have
$f_{231}(n+1)=\sum_{k\geq 1,c_{k}=1\atop
c_{1}+c_{2}+\cdots+c_{k}=n+1}\sum_{i=1}^{k-1}\sum_{j=i+1}^{k}c_{j}(c_{i}-1)+\sum_{k\geq
1,c_{k}\geq 2\atop
c_{1}+c_{2}+\cdots+c_{k}=n+1}\sum_{i=1}^{k-1}\sum_{j=i+1}^{k}c_{j}(c_{i}-1).$
If $c_{k}=1$, then $k\geq 2$, and from Lemma 2.3 we have
$\displaystyle\sum_{k\geq 1,c_{k}=1\atop
c_{1}+c_{2}+\cdots+c_{k}=n+1}\sum_{i=1}^{k-1}\sum_{j=i+1}^{k}c_{j}(c_{i}-1)$
$\displaystyle=\sum_{k-1\geq 1\atop
c_{1}+\cdots+c_{k-1}=n}\sum_{i=1}^{k-2}\sum_{j=i+1}^{k-1}c_{j}(c_{i}-1)+\sum_{k-1\geq
1\atop c_{1}+\cdots+c_{k-1}=n}\sum_{i=1}^{k-1}(c_{i}-1)$
$\displaystyle=f_{231}(n)+\sum_{k-1\geq 1\atop c_{1}+\cdots+c_{k-1}=n}n-(k-1)$
$\displaystyle=f_{231}(n)-c(n)+n2^{n-1}.$
If $c_{k}\geq 2$, then we set $c^{\prime}_{k}=c_{k}-1$ and
$c^{\prime}_{i}=c_{i}$ for $1\leq i\leq k-1$. Then we have
$\displaystyle\sum_{k\geq 1,c_{k}\geq 2\atop
c_{1}+\cdots+c_{k}=n+1}\sum_{i=1}^{k-1}\sum_{j=i+1}^{k}c_{j}(c_{i}-1)$
$\displaystyle=\sum_{k\geq 1\atop
c^{\prime}_{1}+\cdots+c^{\prime}_{k}=n}\sum_{i=1}^{k-1}\sum_{j=i+1}^{k}c^{\prime}_{j}(c^{\prime}_{i}-1)+\sum_{k\geq
1\atop
c^{\prime}_{1}+\cdots+c^{\prime}_{k}=n}\sum_{i=1}^{k-1}(c^{\prime}_{i}-1)$
$\displaystyle=f_{231}(n)+\sum_{k\geq 1\atop
c^{\prime}_{1}+\cdots+c^{\prime}_{k}=n}(n-c^{\prime}_{k}-k+1)$
$\displaystyle=f_{231}(n)-a(n)-c(n)+(n+1)2^{n-1},$
where the last equality holds from Lemma 2.2 and Lemma 2.3. Therefore, after
simplification, we have
$f_{231}(n+1)=2f_{231}(n)+(n-2)2^{n-1}+1,$
which proves the second equality of formula (2.8) by using $f_{231}(3)=1$.
Note that the total number of length $3$ patterns occurring in a permutation
${\sigma}\in S_{n}$ is ${n\choose 3}$; since $|S_{n}(123,132)|=2^{n-1}$ and the
patterns $123$ and $132$ never occur, this gives the relation
$f_{213}(n)+2f_{231}(n)+f_{321}(n)={n\choose 3}2^{n-1}.$
Thus formula (2.9) is a direct computation of the above equation. This
completes the proof.
The first few values of $f_{q}(S_{n}(123,132))$ for $q$ of length $3$ are
shown below.
$n$ | $f_{123}$ | $f_{132}$ | $f_{213}$ | $f_{231}$ | $f_{312}$ | $f_{321}$ | $n$ | $f_{123}$ | $f_{132}$ | $f_{213}$ | $f_{231}$ | $f_{312}$ | $f_{321}$
---|---|---|---|---|---|---|---|---|---|---|---|---|---
$3$ | $0$ | $0$ | $1$ | $1$ | $1$ | $1$ | $6$ | $0$ | $0$ | $49$ | $111$ | $111$ | $369$
$4$ | $0$ | $0$ | $5$ | $7$ | $7$ | $13$ | $7$ | $0$ | $0$ | $129$ | $351$ | $351$ | $1409$
$5$ | $0$ | $0$ | $17$ | $31$ | $31$ | $81$ | $8$ | $0$ | $0$ | $321$ | $1023$ | $1023$ | $4801$
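These values follow directly from formulas (2.7)-(2.9); the small script below (ours) evaluates the closed forms and checks the relation used at the end of the proof:

```python
def f213(n): return (n - 3) * 2**(n - 2) + 1
def f231(n): return (n*n - 5*n + 8) * 2**(n - 3) - 1
def f321(n): return (n**3 - 6*n**2 + 14*n - 15) // 3 * 2**(n - 2) + 1

for n in range(3, 9):
    assert f213(n) + 2*f231(n) + f321(n) == n*(n - 1)*(n - 2)//6 * 2**(n - 1)
print([f213(n) for n in range(3, 9)])   # 1, 5, 17, 49, 129, 321
print([f231(n) for n in range(3, 9)])   # 1, 7, 31, 111, 351, 1023
print([f321(n) for n in range(3, 9)])   # 1, 13, 81, 369, 1409, 4801
```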
### 2.2 Pattern Count on $(132,213)$-Avoiding Permutations
We begin with the following correspondence between $(132,213)$-avoiding
permutations and compositions of $n$.
###### Lemma 2.8.
There is a bijection $\varphi_{2}$ between the sets $S_{n}(132,213)$ and
$\mathcal{C}_{n}$.
Proof. Given ${\sigma}\in S_{n}(132,213)$, let
${\sigma}_{i_{1}},{\sigma}_{i_{2}},\ldots,{\sigma}_{i_{k}}$ be the $k$ right
to left maxima with $i_{1}<i_{2}<\cdots<i_{k}$. It follows that
$c=i_{1}+(i_{2}-i_{1})+\cdots+(i_{k-1}-i_{k-2})+(i_{k}-i_{k-1})$ is a
composition of $n$ since $i_{k}=n$. Conversely, given
$n=c_{1}+c_{2}+\cdots+c_{k}\in{\mathcal{C}}_{n}$, let
$m_{i}=n-(c_{1}+\cdots+c_{i-1})$ and
$\tau_{i}=m_{i}-c_{i}+1,m_{i}-c_{i}+2,\ldots,m_{i}-1,m_{i}$ for $1\leq i\leq
k$. We set ${\sigma}=\tau_{1},\tau_{2},\ldots,\tau_{k}$, and it is easy to
check that ${\sigma}\in S_{n}(132,213)$. For example, for the composition
$9=3+3+1+2$, we get ${\sigma}=7\,8\,9\,4\,5\,6\,3\,1\,2$.
Based on the above lemma, we have
###### Proposition 2.9.
For $n\geq 3$,
$\displaystyle f_{123}(n)$ $\displaystyle=\sum_{k\geq 1\atop
c_{1}+c_{2}+\cdots+c_{k}=n}\sum_{i=1}^{k}{c_{i}\choose 3},$ (2.10)
$\displaystyle f_{231}(n)$ $\displaystyle=\sum_{k\geq 1\atop
c_{1}+c_{2}+\cdots+c_{k}=n}\sum_{i=1}^{k-1}\sum_{j=i+1}^{k}c_{j}{c_{i}\choose
2}.$ (2.11)
Proof. For a permutation ${\sigma}\in S_{n}(132,213)$ with
$\varphi_{2}({\sigma})=c_{1}+c_{2}+\cdots+c_{k}$, we rewrite ${\sigma}$ as
${\sigma}=\tau_{1},\tau_{2},\ldots,\tau_{k}$ by Lemma 2.8, and we see that the
pattern $123$ can only occur in every subsequence $\tau_{i}$ since
$\tau_{i}>\tau_{j}$ for $j>i$ and the elements in $\tau_{i}$ are increasing.
Thus, we have ${c_{i}\choose 3}$ choices to choose three elements in
$\tau_{i}$ to play the role of $123$, and formula (2.10) follows by summing
all $123$-patterns in subsequences $\tau_{1},\tau_{2},\ldots,\tau_{k}$.
For pattern $231$, we have ${c_{i}\choose 2}$ choices in the subsequence
$\tau_{i}$ to choose two elements to play the role of $23$, after this we have
$c_{i+1}+\cdots+c_{k}$ choices to choose one element in
$\tau_{i+1},\ldots,\tau_{k}$ for the role of $1$ since $\tau_{j}<\tau_{i}$ for
all $j>i$. Summing up all the number of $231$-patterns according to the
positions of $23$ gives the formula (2.11).
###### Theorem 2.10.
For $n\geq 3$, in the set $S_{n}(132,213)$, we have
$\displaystyle f_{123}(n)$ $\displaystyle=(n-4)2^{n-1}+n+2,$ (2.12)
$\displaystyle f_{231}(n)$
$\displaystyle=f_{312}(n)=(n^{2}-7n+16)2^{n-2}-n-4,$ (2.13) $\displaystyle
f_{321}(n)$ $\displaystyle=(n^{3}/3-3n^{2}+38n/3-24)2^{n-2}+n+6.$ (2.14)
Proof. From Prop 2.9, we have
$f_{123}(n+1)=\sum_{k\geq 1,c_{k}=1\atop
c_{1}+c_{2}+\cdots+c_{k}=n+1}\sum_{i=1}^{k}{c_{i}\choose 3}+\sum_{k\geq
1,c_{k}\geq 2\atop c_{1}+c_{2}+\cdots+c_{k}=n+1}\sum_{i=1}^{k}{c_{i}\choose
3}.$
If $c_{k}=1$, then $k\geq 2$, and we have
$\displaystyle\sum_{k\geq 1,c_{k}=1\atop
c_{1}+c_{2}+\cdots+c_{k}=n+1}\sum_{i=1}^{k}{c_{i}\choose 3}=\sum_{k-1\geq
1\atop c_{1}+c_{2}+\cdots+c_{k-1}=n}\sum_{i=1}^{k-1}{c_{i}\choose
3}=f_{123}(n).$
If $c_{k}\geq 2$, then we set $c_{k}=1+r_{k}$, where $r_{k}\geq 1$, and it
follows that
$\displaystyle\sum_{k\geq 1,c_{k}\geq 2\atop
c_{1}+c_{2}+\cdots+c_{k}=n+1}\sum_{i=1}^{k}{c_{i}\choose 3}$
$\displaystyle=\sum_{k\geq 1\atop
c_{1}+c_{2}+\cdots+c_{k-1}+r_{k}=n}\left[\sum_{i=1}^{k-1}{c_{i}\choose
3}+{r_{k}\choose 3}+\frac{r_{k}(r_{k}-1)}{2}\right]$
$\displaystyle=f_{123}(n)+b(n)/2.$
Combining the above two cases, we get that
$f_{123}(n+1)=2f_{123}(n)+2^{n}-n-1,$
and formula (2.12) holds by solving the recurrence with initial value
$f_{123}(3)=1$.
From Lemma 2.1, we see that ${\sigma}\in
S_{n}(132,213)\Leftrightarrow{\sigma}^{-1}\in S_{n}(132,213)$, and it
follows that $f_{231}(n)=f_{312}(n)$ since $231^{-1}=312$.
To calculate $f_{231}(n)$, we have by using Prop 2.9 again
$f_{231}(n+1)=\sum_{k\geq 1,c_{k}=1\atop
c_{1}+c_{2}+\cdots+c_{k}=n+1}\sum_{i=1}^{k-1}\sum_{j=i+1}^{k}c_{j}{c_{i}\choose
2}+\sum_{k\geq 1,c_{k}\geq 2\atop
c_{1}+c_{2}+\cdots+c_{k}=n+1}\sum_{i=1}^{k-1}\sum_{j=i+1}^{k}c_{j}{c_{i}\choose
2}.$
If $c_{k}=1$, then $k\geq 2$, and
$\displaystyle\sum_{k\geq 1,c_{k}=1\atop
c_{1}+c_{2}+\cdots+c_{k}=n+1}\sum_{i=1}^{k-1}\sum_{j=i+1}^{k}c_{j}{c_{i}\choose
2}$ $\displaystyle=\sum_{k-1\geq 1\atop
c_{1}+c_{2}+\cdots+c_{k-1}=n}\sum_{i=1}^{k-1}{c_{i}\choose
2}\left[\sum_{j=i+1}^{k-1}c_{j}+1\right]$
$\displaystyle=f_{231}(n)+\alpha(n),$
where $\alpha(n)=\sum\limits_{k\geq 1\atop
c_{1}+c_{2}+\cdots+c_{k}=n}\sum_{i=1}^{k}{c_{i}\choose 2}$. We further have
$\displaystyle\alpha(n)$ $\displaystyle=\sum_{k\geq 1\atop
c_{1}+\cdots+c_{k}=n}\sum_{i=1}^{k}\left[{c_{i}-1\choose
2}+c_{i}-1\right]=f_{213}(S_{n}(123,132))+\sum_{k\geq 1\atop
c_{1}+\cdots+c_{k}=n}(n-k)$
$\displaystyle=f_{213}(S_{n}(123,132))-c(n)+n2^{n-1},$
in which we use the derived formula
$f_{213}(S_{n}(123,132))=\sum\limits_{k\geq 1\atop
c_{1}+\cdots+c_{k}=n}\sum_{i=1}^{k}{c_{i}-1\choose 2}$.
If $c_{k}\geq 2$, then we set $c^{\prime}_{k}=c_{k}-1$ and
$c^{\prime}_{i}=c_{i}$ for $1\leq i\leq k-1$. Then we have
$\displaystyle\sum_{k\geq 1,c_{k}\geq 2\atop
c_{1}+\cdots+c_{k}=n+1}\sum_{i=1}^{k-1}\sum_{j=i+1}^{k}c_{j}{c_{i}\choose 2}$
$\displaystyle=\sum_{k\geq 1\atop
c^{\prime}_{1}+\cdots+c^{\prime}_{k}=n}\sum_{i=1}^{k-1}\sum_{j=i+1}^{k}c^{\prime}_{j}{c^{\prime}_{i}\choose
2}+\sum_{k\geq 1\atop
c^{\prime}_{1}+\cdots+c^{\prime}_{k}=n}\sum_{i=1}^{k-1}{c^{\prime}_{i}\choose
2}$ $\displaystyle=f_{231}(n)+\beta(n),$
where $\beta(n)=\sum\limits_{k\geq 1\atop
c_{1}+\cdots+c_{k}=n}\sum_{i=1}^{k-1}{c_{i}\choose 2}$. Further, we can
rewrite $\beta(n)$ as
$\displaystyle\beta(n)$ $\displaystyle=\sum_{k\geq 1\atop
c_{1}+\cdots+c_{k}=n}\sum_{i=1}^{k}{c_{i}\choose 2}-\sum_{k\geq 1\atop
c_{1}+\cdots+c_{k}=n}\frac{c_{k}(c_{k}-1)}{2}$
$\displaystyle=\alpha(n)-b(n)/2=f_{213}(S_{n}(123,132))-c(n)-b(n)/2+n2^{n-1}.$
Substituting the known formulae for $f_{213}(S_{n}(123,132))$, $c(n)$ and
$b(n)$, we get that
$f_{231}(n+1)=2f_{231}(n)+(2n-6)2^{n-1}+n+3,$
and formula (2.13) holds by solving this recurrence with initial condition
$f_{231}(3)=1$.
Finally, formula (2.14) follows from
$f_{123}(n)+2f_{231}(n)+f_{321}(n)={n\choose 3}2^{n-1}$, and this completes
the proof.
The first few values of $f_{q}(S_{n}(132,213))$ for $q$ of length $3$ are
shown below.
$n$ | $f_{123}$ | $f_{132}$ | $f_{213}$ | $f_{231}$ | $f_{312}$ | $f_{321}$ | $n$ | $f_{123}$ | $f_{132}$ | $f_{213}$ | $f_{231}$ | $f_{312}$ | $f_{321}$
---|---|---|---|---|---|---|---|---|---|---|---|---|---
$3$ | $1$ | $0$ | $0$ | $1$ | $1$ | $1$ | $6$ | $72$ | $0$ | $0$ | $150$ | $150$ | $268$
$4$ | $6$ | $0$ | $0$ | $8$ | $8$ | $10$ | $7$ | $201$ | $0$ | $0$ | $501$ | $501$ | $1037$
$5$ | $23$ | $0$ | $0$ | $39$ | $39$ | $59$ | $8$ | $522$ | $0$ | $0$ | $1524$ | $1524$ | $3598$
### 2.3 Pattern Count on $(132,231)$-Avoiding Permutations
###### Theorem 2.11.
For $n\geq 3$, in the set $S_{n}(132,231)$, we have
$\displaystyle f_{123}(n)=f_{213}(n)=f_{312}(n)=f_{321}(n)={n\choose
3}2^{n-3}.$ (2.15)
Proof. For each ${\sigma}\in S_{n}(132,231)$, we observe that $n$ must lie at
the beginning or the end of ${\sigma}$, that $n-1$ must lie at the beginning or
the end of ${\sigma}\backslash\\{n\\}$, and so on. Here
${\sigma}\backslash\\{n\\}$ denotes the sequence obtained from ${\sigma}$ by
deleting the element $n$. Based on this observation, suppose $abc$ is a length $3$
pattern in $S_{n}(132,231)$, and set
$[n]\backslash\\{a,b,c\\}:=\\{r_{1}>r_{2}>\cdots>r_{n-4}>r_{n-3}\\}$. We can
construct a permutation in the set $S_{n}(132,231)$ which contains an $abc$
pattern as follows:
Start with the subsequence ${\sigma}^{0}:=abc$, and for $i$ from $1$ to $n-3$,
${\sigma}^{i}$ is obtained from ${\sigma}^{i-1}$ by inserting $r_{i}$ into it.
* •
if there are at least two elements in ${\sigma}^{i-1}$ smaller than $r_{i}$,
then choose the two elements $A$ and $B$ such that $A$ is the leftmost one and
$B$ is the rightmost one. We put $r_{i}$ immediately to the left of $A$ or
immediately to the right of $B$;
* •
if there is only one element $A$ in ${\sigma}^{i-1}$ such that $A<r_{i}$, then
we can put $r_{i}$ immediately to the left or to the right of $A$;
* •
if all the elements in ${\sigma}^{i-1}$ are larger than $r_{i}$, then choose
$A$ the least one, and put $r_{i}$ immediately to the left or to the right of
$A$.
Finally, we set ${\sigma}:={\sigma}^{n-3}$ and ${\sigma}\in S_{n}(132,231)$
from the above construction. Moreover, there are ${n\choose 3}$ choices to
choose $abc$, and the number of permutations having $abc$ as a pattern is
$2^{n-3}$ since each $r_{i}$ has $2$ choices. This completes the proof.
Here we give an illustration of constructing a permutation in $S_{8}(132,231)$
which contains the pattern $abc=256$. Set ${\sigma}^{0}:=256$, we may have
${\sigma}^{1}=8\,2\,5\,6$, ${\sigma}^{2}=8\,7\,2\,5\,6$,
${\sigma}^{3}=8\,7\,2\,4\,5\,6$, ${\sigma}^{4}=8\,7\,3\,2\,4\,5\,6$,
${\sigma}:={\sigma}^{5}=8\,7\,3\,2\,1\,4\,5\,6$.
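The insertion procedure is straightforward to implement. The sketch below is our own rendering of the construction (function names are ours); it generates all $2^{n-3}$ permutations containing a chosen occurrence $abc$ and recovers the permutation built in the example above:

```python
def insertions(prefix, r):
    """The two admissible positions for r described in the proof: immediately
    to the left of A (leftmost element below r) or to the right of B
    (rightmost one); if no element is below r, A = B = the least element."""
    below = [j for j, v in enumerate(prefix) if v < r]
    if below:
        left, right = below[0], below[-1]
    else:
        left = right = prefix.index(min(prefix))
    return [prefix[:p] + [r] + prefix[p:] for p in {left, right + 1}]

def build_all(a, b, c, n):
    """All sigma in S_n(132,231) produced by the construction, starting from
    the subsequence a, b, c and inserting the remaining values in decreasing order."""
    perms = [[a, b, c]]
    for r in sorted(set(range(1, n + 1)) - {a, b, c}, reverse=True):
        perms = [q for p in perms for q in insertions(p, r)]
    return perms

res = build_all(2, 5, 6, 8)
print(len({tuple(p) for p in res}), 2**(8 - 3))   # 32 distinct permutations
print([8, 7, 3, 2, 1, 4, 5, 6] in res)            # the worked example above
```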
We could also provide combinatorial proofs for the phenomenon
$f_{123}(n)=f_{213}(n)=f_{312}(n)=f_{321}(n)$. From Lemma 2.1, we have
${\sigma}\in S_{n}(132,231)\Leftrightarrow{\sigma}^{r}\in S_{n}(132,231)$, and
this follows $f_{123}(n)=f_{321}(n)$ and $f_{213}(n)=f_{312}(n)$ from
$123^{r}=321$ and $213^{r}=312$, respectively. It remains to give a bijection
for $f_{213}(n)=f_{123}(n)$, and the following construction is motivated by
Bóna [2].
A binary plane tree is a rooted unlabeled tree in which each vertex has at
most two children, and each child is a left child or a right child of its
parent. For each ${\sigma}\in S_{n}(132)$, we construct a binary plane tree
$T({\sigma})$ as follows: the root of $T({\sigma})$ corresponds to the entry
$n$ of ${\sigma}$, the left subtree of the root corresponds to the string of
entries of ${\sigma}$ on the left of $n$, and the right subtree of the root
corresponds to the string of entries of ${\sigma}$ on the right of $n$. Both
subtrees are constructed recursively by the same rule. For more details, see
[1, 2, 7].
A left descendant (resp. right descendant) of a vertex $x$ in a binary plane
tree is a vertex in the left (resp. right) subtree of $x$. The left (resp.
right) subtree of $x$ does not contain $x$ itself. Similarly, an ascendant of
a vertex $x$ in a binary plane tree is a vertex whose subtree contains $x$.
Given a tree $T$ and a vertex $v\in T$, let $T_{v}$ be the subtree of $T$ with
$v$ as the root. Let $R$ be an occurrence of the pattern $123$ in ${\sigma}\in
S_{n}(132)$, and let $R_{1},R_{2},R_{3}$ be the three vertices of
$T({\sigma})$ that correspond to $R$, going left to right. Then, $R_{1}$ is a
left descendant of $R_{2}$, and $R_{2}$ is a left descendant of $R_{3}$.
From the above correspondence, we see that for ${\sigma}\in S_{n}(132,231)$,
$T({\sigma})$ is a binary plane tree on $n$ vertices such that each vertex has
at most one child from the forbiddance of the pattern $231$. For simplicity,
denote by $\mathcal{T}_{n}$ the set of such binary plane trees on $n$
vertices. Let $Q$ be an occurrence of the pattern $213$ in ${\sigma}\in
S_{n}(132,231)$, and let $Q_{2},Q_{1},Q_{3}$ be the three vertices of
$T({\sigma})$ that correspond to $Q$, going left to right. From the
characterization of trees in $\mathcal{T}_{n}$, $Q_{2}$ is a left descendant
of $Q_{3}$, and $Q_{1}$ is a right descendant of $Q_{2}$.
Let $\mathcal{A}_{n}$ be the set of binary plane trees in $\mathcal{T}_{n}$
where three vertices forming a $213$-pattern are colored black. Let
$\mathcal{B}_{n}$ be the set of all binary plane trees in $\mathcal{T}_{n}$
where three vertices forming a 123-pattern are colored black. We will define a
map $\rho:\mathcal{A}_{n}\rightarrow\mathcal{B}_{n}$ as follows. Given a tree
$T\in\mathcal{A}_{n}$ with $Q_{2},Q_{1},Q_{3}$ being the three black vertices
as a $213$-pattern, define $\rho(T)$ to be the tree obtained by changing the
right subtree of $Q_{2}$ to be its left subtree. See Figure 1 for an
illustration.
Figure 1: The bijection $\rho$ (the right subtree of $Q_{2}$ becomes its left subtree).
In the tree $\rho(T)$, the relative positions of $Q_{2}$ and $Q_{3}$ keep the
same, and $Q_{1}$ is a left descendant of $Q_{2}$. Therefore, the three black
points $Q_{1}Q_{2}Q_{3}$ form a $123$-pattern in $\rho(T)$, and
$\rho(T)\in\mathcal{B}_{n}$. It is easy to describe the inverse map, and we omit
it here.
The first few values of $f_{q}(S_{n}(132,231))$ for $q$ of length $3$ are
shown below.
$n$ | $f_{123}$ | $f_{132}$ | $f_{213}$ | $f_{231}$ | $f_{312}$ | $f_{321}$ | $n$ | $f_{123}$ | $f_{132}$ | $f_{213}$ | $f_{231}$ | $f_{312}$ | $f_{321}$
---|---|---|---|---|---|---|---|---|---|---|---|---|---
$3$ | $1$ | $0$ | $1$ | $0$ | $1$ | $1$ | $6$ | $160$ | $0$ | $160$ | $0$ | $160$ | $160$
$4$ | $8$ | $0$ | $8$ | $0$ | $8$ | $8$ | $7$ | $560$ | $0$ | $560$ | $0$ | $560$ | $560$
$5$ | $40$ | $0$ | $40$ | $0$ | $40$ | $40$ | $8$ | $1792$ | $0$ | $1792$ | $0$ | $1792$ | $1792$
### 2.4 Pattern Count on $(132,312)$-Avoiding Permutations
We begin with a correspondence between $(132,312)$-avoiding permutations and
compositions of $n$.
###### Theorem 2.12.
There is a bijection $\varphi_{4}$ between the sets $S_{n}(132,312)$ and
$\mathcal{C}_{n}$.
Proof. For ${\sigma}\in S_{n}(132,312)$, let
${\sigma}_{i_{1}},{\sigma}_{i_{2}},\ldots,{\sigma}_{i_{k}}$ be the $k$ left to
right maxima with $i_{1}<i_{2}<\cdots<i_{k}$, and thus,
$c=(i_{2}-i_{1})+(i_{3}-i_{2})+\cdots+(i_{k}-i_{k-1})+(n+1-i_{k})$ is a
composition of $n$ since $i_{1}=1$. Conversely, let
$n=c_{k}+c_{k-1}+\cdots+c_{2}+c_{1}\in{\mathcal{C}}_{n}$. For $1\leq i\leq k$,
if $c_{i}=1$ then set $\tau_{i}=n-i+1$; otherwise, set
$m_{i}=c_{1}+\cdots+c_{i-1}-i+2$ and
$\tau_{i}=n-i+1,m_{i}+c_{i}-2,\ldots,m_{i}+1,m_{i}$. It is easy to get
${\sigma}=\tau_{k},\tau_{k-1},\ldots,\tau_{2},\tau_{1}\in S_{n}(132,312)$ as
desired. For example, if $9=3+1+2+3$, then
${\sigma}=6\,5\,4\,7\,8\,3\,9\,2\,1$.
###### Proposition 2.13.
For $n\geq 3$, we have
$\displaystyle f_{123}(n)=\sum_{k\geq 1\atop
c_{1}+c_{2}+\cdots+c_{k}=n}\sum_{i=1}^{k-2}c_{i}{k-i\choose 2}.$ (2.16)
Proof. Given a permutation ${\sigma}=\tau_{k},\ldots,\tau_{2},\tau_{1}$ in
$S_{n}(132,312)$ whose composition is given by
$n=c_{k}+c_{k-1}+\cdots+c_{2}+c_{1}$, we see that the first element in
$\tau_{i}$ is larger than all the elements in $\tau_{j}$, whereas the other
elements in $\tau_{i}$ are smaller than the elements in $\tau_{j}$ for
$i+1\leq j\leq k$. The left to right maxima form an increasing subsequence and
the other elements form a decreasing subsequence. Thus we have $c_{i}$ choices
to choose one element in $\tau_{k-i+1}$ to play the role of $1$, and then
${k-i\choose 2}$ choices to choose two left to right maxima in $\tau_{j}$ for
$j<k-i+1$ to play the role of $23$. Summing up the numbers of $123$-patterns
over the subsequences $\tau_{k},\ldots,\tau_{2},\tau_{1}$ gives formula
(2.16).
###### Theorem 2.14.
For $n\geq 3$, in the set $S_{n}(132,312)$, we have
$\displaystyle f_{123}(n)$ $\displaystyle=f_{321}(n)={n\choose 3}2^{n-3},$
(2.17) $\displaystyle f_{213}(n)$ $\displaystyle=f_{231}(n)={n\choose
3}2^{n-3}.$ (2.18)
Proof. From Lemma 2.1, we see that ${\sigma}\in
S_{n}(132,312)\Leftrightarrow{\sigma}^{c}\in S_{n}(132,312)$, which follows
$f_{123}(n)=f_{321}(n)$ and $f_{213}(n)=f_{231}(n)$ from $123^{c}=321$ and
$213^{c}=231$, respectively.
To calculate $f_{123}(n)$, we have by Prop 2.13
$f_{123}(n+1)=\sum_{k\geq 1,c_{1}=1\atop
c_{1}+c_{2}+\cdots+c_{k}=n+1}\sum_{i=1}^{k-2}c_{i}{k-i\choose 2}+\sum_{k\geq
1,c_{1}\geq 2\atop
c_{1}+c_{2}+\cdots+c_{k}=n+1}\sum_{i=1}^{k-2}c_{i}{k-i\choose 2}.$
If $c_{1}=1$, then $k\geq 2$, and
$\displaystyle\sum_{k\geq 1,c_{1}=1\atop
c_{1}+c_{2}+\cdots+c_{k}=n+1}\sum_{i=1}^{k-2}c_{i}{k-i\choose 2}$
$\displaystyle=\sum_{k-1\geq 1\atop
c_{2}+\cdots+c_{k}=n}\sum_{i=2}^{k-2}c_{i}{k-i\choose 2}+\sum_{k-1\geq 1\atop
c_{2}+\cdots+c_{k}=n}{k-1\choose 2}$ $\displaystyle=f_{123}(n)+d(n)/2.$
If $c_{1}\geq 2$, let $c^{\prime}_{1}=c_{1}-1$, $c^{\prime}_{i}=c_{i}$ for
$2\leq i\leq k$, then $c^{\prime}_{1}\geq 1$, and
$\displaystyle\sum_{k\geq 1,c_{1}\geq 2\atop
c_{1}+c_{2}+\cdots+c_{k}=n+1}\sum_{i=1}^{k-2}c_{i}{k-i\choose 2}$
$\displaystyle=\sum_{k\geq 1\atop
c^{\prime}_{1}+c^{\prime}_{2}+\cdots+c^{\prime}_{k}=n}\sum_{i=1}^{k-2}c^{\prime}_{i}{k-i\choose
2}+\sum_{k\geq 1\atop
c^{\prime}_{1}+c^{\prime}_{2}+\cdots+c^{\prime}_{k}=n}{k-1\choose 2}$
$\displaystyle=f_{123}(n)+\sum_{k\geq 1\atop
c^{\prime}_{1}+c^{\prime}_{2}+\cdots+c^{\prime}_{k}=n}\left[{k\choose
2}+1-k\right]$ $\displaystyle=f_{123}(n)+d(n)/2+2^{n-1}-c(n).$
Combining the above two cases, we get that
$f_{123}(n+1)=2f_{123}(n)+(n^{2}-n)2^{n-3},$
and the formula (2.17) is derived by solving the recurrence with initial value
$f_{123}(3)=1$. Formula (2.18) is a direct computation of the equality
$2f_{123}(n)+2f_{213}(n)={n\choose 3}2^{n-1}$.
In what follows, we also give a combinatorial interpretation
for $f_{231}(n)=f_{123}(n)$ by using binary plane trees.
For ${\sigma}\in S_{n}(132,312)$, as in the previous section, we can construct a binary plane tree $T({\sigma})$ on $n$ vertices such that no vertex that is a right descendant of some vertex has a left descendant, since the pattern $312$ is forbidden. Denote by $\mathscr{T}_{n}$ the set of such trees on $n$ vertices. Let $Q$ be an occurrence of the pattern $231$ in ${\sigma}\in S_{n}(132,312)$, and let $Q_{2},Q_{3},Q_{1}$ be the three vertices of $T({\sigma})$ that correspond to $Q$, going left to right. Then $Q_{2}$ is a left descendant of $Q_{3}$, and there exists a lowest vertex $x$, which is either an ancestor of $Q_{3}$ or $Q_{3}$ itself, such that $Q_{1}$ is a right descendant of $x$. Let $\mathscr{A}_{n}$ be the set of binary plane trees in $\mathscr{T}_{n}$ in which three vertices forming a $231$-pattern are colored black, and let $\mathscr{B}_{n}$ be the set of binary plane trees in $\mathscr{T}_{n}$ in which three vertices forming a $123$-pattern are colored black. It remains to construct a bijection $\varrho:\mathscr{A}_{n}\rightarrow\mathscr{B}_{n}$.
Given a tree $T\in\mathscr{A}_{n}$ with $Q_{2},Q_{3},Q_{1}$ being the three black vertices forming a $231$-pattern, denote by $y$ the parent of $x$, if it exists. Since $T\in\mathscr{A}_{n}$, the vertex $x$ is the left child of $y$. Let $T^{u}:=T-T_{x}$ be the tree obtained from $T$ by deleting the subtree $T_{x}$, and let $T^{d}:=T_{x}-T_{Q_{1}}$ be the tree obtained from $T_{x}$ by deleting the subtree $T_{Q_{1}}$. If $y$ exists, we define $\varrho(T)$ to be the tree obtained from $T$ by first attaching the subtree $T_{Q_{1}}$ (rooted at $Q_{1}$) as the left subtree of $y$, in place of $T_{x}$, and then adjoining $T^{d}$ as the left subtree of the vertex $Q_{1}$, keeping all three black vertices the same; otherwise, we simply adjoin the subtree $T^{d}$ as the left subtree of the vertex $Q_{1}$. An illustration is given in Figure 2.
Figure 2: The bijection $\varrho$.
In the tree $\varrho(T)$, the relative positions of $Q_{2}$ and $Q_{3}$ are unchanged and $Q_{3}$ is a left descendant of $Q_{1}$; thus the three black vertices $Q_{2},Q_{3},Q_{1}$ form a $123$-pattern in $\varrho(T)$, and $\varrho(T)\in\mathscr{B}_{n}$. The inverse map is easy to describe, and we omit the details here.
The first few values of $f_{q}(S_{n}(132,312))$ for $q$ of length $3$ are
shown below.
$n$ | $f_{123}$ | $f_{132}$ | $f_{213}$ | $f_{231}$ | $f_{312}$ | $f_{321}$ | $n$ | $f_{123}$ | $f_{132}$ | $f_{213}$ | $f_{231}$ | $f_{312}$ | $f_{321}$
---|---|---|---|---|---|---|---|---|---|---|---|---|---
$3$ | $1$ | $0$ | $1$ | $1$ | $0$ | $1$ | $6$ | $160$ | $0$ | $160$ | $160$ | $0$ | $160$
$4$ | $8$ | $0$ | $8$ | $8$ | $0$ | $8$ | $7$ | $560$ | $0$ | $560$ | $560$ | $0$ | $560$
$5$ | $40$ | $0$ | $40$ | $40$ | $0$ | $40$ | $8$ | $1792$ | $0$ | $1792$ | $1792$ | $0$ | $1792$
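The entries of this table (and of the analogous tables below) can be checked by brute-force enumeration. The following small Python sketch is our own illustration, not part of the original argument; it counts the total number of occurrences of a pattern $q$ over all permutations of $[n]$ avoiding a given set of patterns, with pattern containment tested naively.

```python
from itertools import permutations, combinations

def standardize(vals):
    # Replace a sequence of distinct numbers by its pattern, a permutation of 1..k.
    order = sorted(vals)
    return tuple(order.index(v) + 1 for v in vals)

def occurrences(sigma, q):
    # Number of subsequences of sigma that are order-isomorphic to the pattern q.
    return sum(1 for idx in combinations(range(len(sigma)), len(q))
               if standardize([sigma[i] for i in idx]) == q)

def total_occurrences(q, n, forbidden):
    # Sum of occurrences of q over all permutations of [n] avoiding every pattern in `forbidden`.
    return sum(occurrences(sigma, q)
               for sigma in permutations(range(1, n + 1))
               if all(occurrences(sigma, p) == 0 for p in forbidden))

# Example: f_123(6) on S_6(132,312); this returns 160, matching the table above.
print(total_occurrences((1, 2, 3), 6, [(1, 3, 2), (3, 1, 2)]))
```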
### 2.5 Pattern Count on $(132,321)$-Avoiding Permutations
We begin with a correspondence from Simion and Schmidt [8] as follows:
###### Lemma 2.15.
There is a bijection $\varphi_{5}$ between the set
$S_{n}(132,321)\backslash\\{\text{identity}\\}$ and the set of $2$-element
subsets of $[n]$.
Proof. For a permutation ${\sigma}\in
S_{n}(132,321)\backslash\\{\text{identity}\\}$, suppose that ${\sigma}_{k}=m$
($k<m$), and then we define $\varphi_{5}({\sigma})=\\{k,m\\}$. Conversely, given two elements $1\leq k<m\leq n$, set
$\tau_{1}=m-k+1,m-k+2,\ldots,m-1,m$, $\tau_{2}=1,2,\ldots,m-k$ and
$\tau_{3}=m+1,m+2,\ldots,n-1,n$. Then define
${\sigma}=\varphi_{5}^{-1}(k,m)=\tau_{1},\tau_{2},\tau_{3}$. For example, if
$k=4,m=6$, then ${\sigma}=3\,4\,5\,6\,1\,2\,7\,8$.
From the above lemma, we have
###### Proposition 2.16.
For $n\geq 3$,
$\displaystyle f_{213}(n)$ $\displaystyle=\sum_{1\leq k<m\leq n}k(m-k)(n-m),$
(2.19) $\displaystyle f_{312}(n)$ $\displaystyle=\sum_{1\leq k<m\leq
n}k{m-k\choose 2}.~{}~{}~{}~{}~{}~{}$ (2.20)
Proof. Given a permutation ${\sigma}=\tau_{1},\tau_{2},\tau_{3}$ with
$\varphi_{5}({\sigma})=\\{k,m\\}$ in the set $S_{n}(132,321)$, we see that the
elements in $\tau_{1}$, $\tau_{2}$ and $\tau_{3}$ are increasing, and
$\tau_{2}<\tau_{1}<\tau_{3}$. We have $k$ choices to select one element in
$\tau_{1}$ to play the role of $2$, and then have $m-k$ choices to choose one
element in $\tau_{2}$ to play the role of $1$, and $n-m$ choices to choose one
element in $\tau_{3}$ to play the role of $3$. Summing up all possible $k$ and
$m$ gives the formula (2.19).
For the pattern $312$, we have $k$ choices in the subsequence $\tau_{1}$ to choose one element to play the role of $3$, and after this we have ${m-k\choose 2}$ choices in the subsequence $\tau_{2}$ to choose two elements to play the roles of $1$ and $2$. Summing up over all $k$ and $m$ gives the formula (2.20).
Next, we exhibit some useful formulae for calculating $f_{213}(n)$ and
$f_{312}(n)$ as follows:
###### Lemma 2.17.
For $n\geq 2$,
$\displaystyle\sum_{k=1}^{n-1}k=\frac{n(n-1)}{2};\qquad\sum_{k=1}^{n-1}k^{2}=\frac{n(n-1)(2n-1)}{6};$
$\displaystyle\sum_{k=1}^{n-1}k^{3}=\frac{n^{2}(n-1)^{2}}{4};\qquad\sum_{k=1}^{n-1}k^{4}=\frac{n(n-1)(2n-1)(3n^{2}-3n-1)}{30}.$
Based on previous analysis, we are ready to give the exact formulae for the
pattern count in $S_{n}(132,321)$.
###### Theorem 2.18.
For $n\geq 3$, in the set $S_{n}(132,321)$, we have
$\displaystyle f_{213}(n)$ $\displaystyle=f_{231}(n)=f_{312}(n)={n+2\choose
5},~{}~{}~{}~{}$ (2.21) $\displaystyle f_{123}(n)$
$\displaystyle=n(7n^{4}-40n^{3}+85n^{2}-80n+28)/120.$ (2.22)
Proof. From Lemma 2.1, we see that ${\sigma}\in S_{n}(132,321)\Leftrightarrow{\sigma}^{-1}\in S_{n}(132,321)$, which implies that $f_{312}(n)=f_{231}(n)$ since $312^{-1}=231$. By using Prop 2.16, we have
$\displaystyle f_{312}(n)=\sum_{1\leq k<m\leq n}k{m-k\choose 2}=\sum_{k=1}^{n-1}k\sum_{m=k+1}^{n}{m-k\choose 2}=\sum_{k=1}^{n-1}k{n-k+1\choose 3}$
$\displaystyle=\frac{1}{6}\sum_{k=1}^{n-1}\left[(n^{3}-n)k+(1-3n^{2})k^{2}+3nk^{3}-k^{4}\right]={n+2\choose 5},$
where the last equality holds from Lemma 2.17. By using Prop 2.16 again, we
have
$\displaystyle f_{213}(n)$ $\displaystyle=\sum_{1\leq k<m\leq
n}k(m-k)(n-m)=\sum_{k=1}^{n-1}\sum_{m=k+1}^{n}k(m-k)(n-m)$
$\displaystyle=\sum_{k=1}^{n-1}\sum_{m^{\prime}=1}^{n-k}km^{\prime}(n-m^{\prime}-k)=\sum_{k=1}^{n-1}k(n-k)\sum_{m^{\prime}=1}^{n-k}m^{\prime}-\sum_{k=1}^{n-1}k\sum_{m^{\prime}=1}^{n-k}m^{\prime
2}$
$\displaystyle=\sum_{k=1}^{n-1}\left[\left(\frac{n^{3}}{6}-\frac{n}{6}\right)k+\left(\frac{1}{6}-\frac{n^{2}}{2}\right)k^{2}+\frac{n}{2}k^{3}-\frac{1}{6}k^{4}\right]={n+2\choose
5},$
where the last equality holds by simple calculation from Lemma 2.17. We
complete the proof by employing the relation
$2f_{231}(n)+f_{213}(n)+f_{123}(n)={n\choose 3}\left[{n\choose 2}+1\right]$.
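Explicitly, this relation holds because each of the ${n\choose 2}+1$ permutations in $S_{n}(132,321)$ contains ${n\choose 3}$ subsequences of length $3$, none of which forms a $132$- or $321$-pattern, while $f_{231}(n)=f_{312}(n)$. Substituting $f_{213}(n)=f_{231}(n)={n+2\choose 5}$ then gives
$f_{123}(n)={n\choose 3}\left[{n\choose 2}+1\right]-3{n+2\choose 5}=\frac{n(n-1)(n-2)(7n^{2}-19n+14)}{120}=\frac{n(7n^{4}-40n^{3}+85n^{2}-80n+28)}{120},$
which is (2.22).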
We also notice that the equality $f_{213}(n)=f_{231}(n)$ can be proved by
using Bóna’s bijection [2] directly on the set of binary plane trees on $n$
vertices such that the vertex which is a right descendant of some node has no
right descendants.
The first few values of $f_{q}(S_{n}(132,321))$ for $q$ of length $3$ are
shown below.
$n$ | $f_{123}$ | $f_{132}$ | $f_{213}$ | $f_{231}$ | $f_{312}$ | $f_{321}$ | $n$ | $f_{123}$ | $f_{132}$ | $f_{213}$ | $f_{231}$ | $f_{312}$ | $f_{321}$
---|---|---|---|---|---|---|---|---|---|---|---|---|---
$3$ | $1$ | $0$ | $1$ | $1$ | $1$ | $0$ | $6$ | $152$ | $0$ | $56$ | $56$ | $56$ | $0$
$4$ | $10$ | $0$ | $6$ | $6$ | $6$ | $0$ | $7$ | $392$ | $0$ | $126$ | $126$ | $126$ | $0$
$5$ | $47$ | $0$ | $21$ | $21$ | $21$ | $0$ | $8$ | $868$ | $0$ | $252$ | $252$ | $252$ | $0$
## 3 Triply Restricted Permutations
In this section, we study the pattern count in the simultaneous avoidance of
any three patterns of length $3$. Based on Lemma 2.1, Simion and Schmidt [8]
showed that the ${6\choose 3}=20$ triples of patterns fall into the following $6$ classes:
###### Proposition 3.1.
For every symmetric group $S_{n}$,
(1) $|S_{n}(123,132,213)|=|S_{n}(231,312,321)|=F_{n+1}$;
(2)
$|S_{n}(123,132,231)|=|S_{n}(123,213,312)|=|S_{n}(132,231,321)|=|S_{n}(213,312,321)|=n$;
(3)
$|S_{n}(132,213,231)|=|S_{n}(132,213,312)|=|S_{n}(132,231,312)|=|S_{n}(213,231,312)|=n$;
(4)
$|S_{n}(123,132,312)|=|S_{n}(123,213,231)|=|S_{n}(132,312,321)|=|S_{n}(213,231,321)|=n$;
(5) $|S_{n}(123,231,312)|=|S_{n}(132,213,321)|=n$;
(6) $|S_{n}(R)|=0$ for all $R\supset\\{123,321\\}$ if $n\geq 5$,
where $F_{n}$ is the Fibonacci number given by $F_{0}=0,F_{1}=1$ and
$F_{n}=F_{n-1}+F_{n-2}$ for $n\geq 2$.
### 3.1 Pattern Count on $(123,132,213)$-Avoiding Permutations
It is known that $F_{n+1}$ counts the number of $0$-$1$ sequences of length
$n-1$ in which there are no consecutive ones, see [4], and we call such a
sequence a Fibonacci binary word for convenience. Let $B_{n}$ denote the set
of all Fibonacci binary words of length $n$. Simion and Schmidt [8] showed
that
###### Lemma 3.2 ([8]).
There is a bijection $\psi_{1}$ between $S_{n}(123,132,213)$ and $B_{n-1}$.
Proof. Let $w=w_{1}w_{2}\cdots w_{n-1}\in B_{n-1}$; the corresponding permutation ${\sigma}$ is determined as follows. For $1\leq i\leq n-1$, let $X_{i}=[n]-\\{{\sigma}_{1},\ldots,{\sigma}_{i-1}\\}$, and then set ${\sigma}_{i}$ to be the largest element in $X_{i}$ if $w_{i}=0$, and the second largest element in $X_{i}$ if $w_{i}=1$.
Finally, ${\sigma}_{n}$ is the unique element in $X_{n}$. For example, if
$w=01001010$, then $\psi_{1}(w)=9\,7\,8\,6\,4\,5\,2\,3\,1$.
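For experimentation, the rule defining $\psi_{1}$ is easy to transcribe into code; the following short Python sketch is our own addition (words are given as lists of $0$s and $1$s) and reproduces the example above.

```python
def psi1(w):
    # w: a Fibonacci binary word of length n-1, given as a list of 0s and 1s.
    n = len(w) + 1
    remaining = list(range(n, 0, -1))   # elements of [n] not yet used, in decreasing order
    sigma = []
    for letter in w:
        # take the largest remaining element if letter == 0, the second largest if letter == 1
        sigma.append(remaining.pop(0) if letter == 0 else remaining.pop(1))
    sigma.append(remaining[0])          # the last entry is the unique remaining element
    return sigma

# psi1([0, 1, 0, 0, 1, 0, 1, 0]) returns [9, 7, 8, 6, 4, 5, 2, 3, 1], as in the example above.
```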
Given a word $w=w_{1}w_{2}\cdots w_{n}\in B_{n}$, we call $i$ ($1\leq i<n$) an
ascent of $w$ if $w_{i}<w_{i+1}$, and denote by
${\mathrm{asc}}(w)=\\{i|w_{i}<w_{i+1}\\}$ and
${\mathrm{maj}}(w)=\sum\limits_{i\in{\mathrm{asc}}(w)}i$.
###### Proposition 3.3.
The total number of occurrences of the pattern $312$ in $S_{n}(123,132,213)$
is given by
$\displaystyle f_{312}(n)=\sum_{w\in B_{n-1}}{\mathrm{maj}}(w).$ (3.2)
Proof. Suppose ${\sigma}\in S_{n}(123,132,213)$ and
$\psi_{1}({\sigma})=w_{1}w_{2}\cdots w_{n-1}$. If $k$ is an ascent of $w$,
then $w_{k}w_{k+1}=01$. We have ${\sigma}_{k}>{\sigma}_{j}$ for all $j>k$
since ${\sigma}_{k}$ is the largest element in $X_{k}$. On the other hand,
there exists a unique $l>k+1$ such that ${\sigma}_{l}>{\sigma}_{k+1}$ since
${\sigma}_{k+1}$ is the second largest element in $X_{k+1}$. From the
bijection $\psi_{1}$, we see that for each $i\in[n-1]$ there is at most one $j>i$ such that ${\sigma}_{j}>{\sigma}_{i}$, and such a $j$, when it exists, equals $i+1$; this implies that ${\sigma}_{i}>{\sigma}_{l}>{\sigma}_{k+1}$ for all $i\leq k$. Thus ${\sigma}_{i}{\sigma}_{k+1}{\sigma}_{l}$ forms a $312$-pattern for every $i\leq k$; that is, the ascent $k$ produces exactly $k$ occurrences of the pattern $312$ in which ${\sigma}_{k+1}$ plays the role of $1$. Summing over all ascents, we derive that there are in total ${\mathrm{maj}}(w)$ such patterns in ${\sigma}$.
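For example, for the word $w=01001010$ of the example above we have ${\mathrm{asc}}(w)=\\{1,4,6\\}$ and ${\mathrm{maj}}(w)=11$, and indeed $\psi_{1}(w)=9\,7\,8\,6\,4\,5\,2\,3\,1$ contains exactly $11$ occurrences of the pattern $312$: one coming from the ascent at $1$ (with ${\sigma}_{2}=7$ in the role of $1$), four from the ascent at $4$ (with ${\sigma}_{5}=4$), and six from the ascent at $6$ (with ${\sigma}_{7}=2$).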
###### Theorem 3.4.
For $n\geq 3$, in the set $S_{n}(123,132,213)$, we have
$\displaystyle\sum_{n\geq 3}f_{231}(n)x^{n}$ $\displaystyle=\sum_{n\geq
3}f_{312}(n)x^{n}=\frac{x^{3}(1+2x)}{(1-x-x^{2})^{3}},$ (3.3)
$\displaystyle\sum_{n\geq 3}f_{321}(n)x^{n}$
$\displaystyle=\frac{x^{3}(1+6x+12x^{2}+8x^{3})}{(1-x-x^{2})^{4}}.$ (3.4)
Proof. From Lemma 2.1, we have $f_{231}(n)=f_{312}(n)$ since ${\sigma}\in
S_{n}(123,132,213)\Leftrightarrow{\sigma}^{-1}\in S_{n}(123,132,213)$ and
$231^{-1}=312$. By Prop 3.3, we write
$\sum_{n\geq 3}f_{312}(n)x^{n}=\sum_{n\geq 3}x^{n}\sum_{w\in B_{n-1}}{\mathrm{maj}}(w)=x\sum_{n\geq 3}\sum_{w\in B_{n-1}}{\mathrm{maj}}(w)x^{n-1}=xu(x),$
where $u(x)=\sum\limits_{n\geq 2}\sum\limits_{w\in
B_{n}}{\mathrm{maj}}(w)x^{n}$.
Let $M_{n}(q)=\sum\limits_{w\in B_{n}}q^{{\mathrm{maj}}(w)}$ and
$M(x,q)=\sum\limits_{n\geq 2}M_{n}(q)x^{n}$. Then $u(x)=\frac{\partial
M(x,q)}{\partial q}\mid_{q=1}$. Given a word $w=w_{1}w_{2}\cdots w_{n}\in
B_{n}$, if $w_{n}=0$, then ${\mathrm{maj}}(w)={\mathrm{maj}}(w_{1}w_{2}\cdots
w_{n-1})$; otherwise, $w_{n-1}w_{n}=01$ and
${\mathrm{maj}}(w)={\mathrm{maj}}(w_{1}w_{2}\cdots w_{n-2})+(n-1)$. Hence, we
have
$\displaystyle M_{n}(q)=M_{n-1}(q)+q^{n-1}M_{n-2}(q)\text{ for }n\geq 4,$
(3.5)
with $M_{2}(q)=2+q$ and $M_{3}(q)=2+q+2q^{2}$. Multiplying the recursion by
$x^{n}$ and summing over $n\geq 4$ yields that
$M(x,q)-(2+q)x^{2}-(2+q+2q^{2})x^{3}=x\left[M(x,q)-(2+q)x^{2}\right]+qx^{2}M(xq,q).$
Therefore
$(1-x)M(x,q)=qx^{2}M(xq,q)+(2+q)x^{2}+2q^{2}x^{3}.$
Differentiating both sides with respect to $q$, we get
$\displaystyle(1-x)\frac{\partial M(x,q)}{\partial
q}=x^{2}\left[M(xq,q)+q\frac{\partial M(xq,q)}{\partial
q}\right]+x^{2}+4qx^{3}.$ (3.6)
Setting $q=1$ in equation (3.6) yields
$(1-x)u(x)=x^{2}\left[M(x,1)+\frac{\partial M(xq,q)}{\partial q}\mid_{q=1}\right]+x^{2}+4x^{3}.$
Employing the well-known generating function $\sum_{n\geq 0}F_{n}x^{n}=\frac{x}{1-x-x^{2}}$, we have
$M(x,1)=\sum_{n\geq 2}|B_{n}|x^{n}=\sum_{n\geq
2}F_{n+2}x^{n}=\frac{x^{2}(3+2x)}{1-x-x^{2}}.$
Further,
$\displaystyle\frac{\partial M(xq,q)}{\partial q}\Big|_{q=1}=\frac{\partial}{\partial q}\left(\sum_{n\geq 2}\sum_{w\in B_{n}}q^{n+{\mathrm{maj}}(w)}x^{n}\right)\Big|_{q=1}=\sum_{n\geq 2}x^{n}\sum_{w\in B_{n}}(n+{\mathrm{maj}}(w))=\sum_{n\geq 2}nF_{n+2}x^{n}+u(x).$
From the generating function of $F_{n+2}$, we obtain
$\sum_{n\geq
2}nF_{n+2}x^{n}=x\left(\frac{x^{2}(3+2x)}{1-x-x^{2}}\right)^{\prime}=\frac{x^{2}(6+3x-4x^{2}-2x^{3})}{(1-x-x^{2})^{2}},$
which yields that
$(1-x)u(x)=x^{2}\left[\frac{x^{2}(3+2x)}{1-x-x^{2}}+\frac{x^{2}(6+3x-4x^{2}-2x^{3})}{(1-x-x^{2})^{2}}+u(x)\right]+x^{2}+4x^{3}.$
Therefore,
$u(x)={x^{2}(1+2x)}/{(1-x-x^{2})^{3}},$
which gives the generating function for $f_{312}(n)$ as shown in formula
(3.3).
For formula (3.4), we first have
$\displaystyle\sum_{n\geq 3}f_{321}(n)x^{n}=\sum_{n\geq 3}{n\choose
3}F_{n+1}x^{n}-2\sum_{n\geq 3}f_{312}(n)x^{n}$ (3.7)
from $2f_{312}(n)+f_{321}(n)={n\choose 3}F_{n+1}$. Using the fact that
$\sum_{n\geq 0}F_{n}x^{n}=\frac{x}{1-x-x^{2}}$, we get
$\displaystyle\sum_{n\geq
3}F_{n+1}x^{n}=\frac{1}{x}\left(\frac{x}{1-x-x^{2}}-x-x^{2}-2x^{3}\right)=\frac{x^{3}(3+2x)}{1-x-x^{2}},$
$\displaystyle\sum_{n\geq 3}{n\choose
3}F_{n+1}x^{n}=\frac{x^{3}}{6}\left(\sum_{n\geq
3}F_{n+1}x^{n}\right)^{{}^{\prime\prime\prime}}=\frac{x^{3}(3+8x+6x^{2}+4x^{3})}{(1-x-x^{2})^{4}}.$
Formula (3.4) follows by substituting the generating functions of ${n\choose
3}F_{n+1}$ and $f_{312}(n)$ into (3.7), and we complete the proof.
The first few values of $f_{q}(S_{n}(123,132,213))$ for $q$ of length $3$ are
shown below.
$n$ | $f_{123}$ | $f_{132}$ | $f_{213}$ | $f_{231}$ | $f_{312}$ | $f_{321}$ | $n$ | $f_{123}$ | $f_{132}$ | $f_{213}$ | $f_{231}$ | $f_{312}$ | $f_{321}$
---|---|---|---|---|---|---|---|---|---|---|---|---|---
$3$ | $0$ | $0$ | $0$ | $1$ | $1$ | $1$ | $6$ | $0$ | $0$ | $0$ | $40$ | $40$ | $180$
$4$ | $0$ | $0$ | $0$ | $5$ | $5$ | $10$ | $7$ | $0$ | $0$ | $0$ | $95$ | $95$ | $545$
$5$ | $0$ | $0$ | $0$ | $15$ | $15$ | $50$ | $8$ | $0$ | $0$ | $0$ | $213$ | $213$ | $1478$
### 3.2 Pattern Count on Other Triple Avoiding Permutations
We begin with the pattern count on $(123,132,231)$-avoiding permutations.
###### Theorem 3.5.
For $n\geq 3$, in the set $S_{n}(123,132,231)$, we have
$\displaystyle f_{213}(n)$ $\displaystyle=f_{312}(n)={n\choose 3},$ (3.8)
$\displaystyle f_{321}(n)$ $\displaystyle=(n-2){n\choose 3}.$ (3.9)
Proof. We first give the following structure from Simion and Schmidt [8]
${\sigma}\in
S_{n}(123,132,231)\Leftrightarrow{\sigma}=n,n-1,\ldots,k+1,k-1,k-2,\ldots,1,k\text{
for some }k.$ (3.10)
Based on this structure, we can show $f_{213}(n)=f_{312}(n)$ by a direct
bijection. Let $q=abc$ be a $213$-pattern of a permutation ${\sigma}\in
S_{n}(123,132,231)$. From $b<c$ and the fact that ${\sigma}\in
S_{n}(123,132,231)$ has only one ascent at position $n-1$, we have
${\sigma}(n)=c$, and thus ${\sigma}=n,n-1,\ldots,c+1,c-1,c-2,\ldots,2,1,c$
from the structure (3.10). Let $q^{\prime}=cba$ (a $312$-pattern) and we set
${\sigma}^{\prime}=n,n-1,\ldots,a+1,a-1,a-2,\ldots,2,1,a$ as the desired
permutation. For example, if $n=7$ and $q=326$, then
${\sigma}=7\,5\,4\,3\,2\,1\,6$. Moreover, $q^{\prime}=623$ and
${\sigma}^{\prime}=7\,6\,5\,4\,2\,1\,3$.
To calculate $f_{312}(n)$, we suppose
${\sigma}=n,n-1,\ldots,k+1,k-1,k-2,\ldots,2,1,k$ for some $k$ from structure
(3.10). We can construct a $312$-pattern as follows: Choose one element from
the first $n-k$ elements to play the role of $3$, then choose one element from
the next $k-1$ elements to play the role of $1$, and the last element plays
the role of $2$. Thus, summing up all possible $k$, we have
$f_{312}(n)=\sum_{k=1}^{n}(n-k)(k-1)=-n^{2}+(n+1)\sum_{k=1}^{n}k-\sum_{k=1}^{n}k^{2}=\frac{n(n-1)(n-2)}{6}={n\choose
3}.$
The formula for $f_{321}(n)$ follows by using
$f_{213}(n)+f_{312}(n)+f_{321}(n)=n{n\choose 3}$.
The first few values of $f_{q}(S_{n}(123,132,231))$ for $q$ of length $3$ are
shown below.
$n$ | $f_{123}$ | $f_{132}$ | $f_{213}$ | $f_{231}$ | $f_{312}$ | $f_{321}$ | $n$ | $f_{123}$ | $f_{132}$ | $f_{213}$ | $f_{231}$ | $f_{312}$ | $f_{321}$
---|---|---|---|---|---|---|---|---|---|---|---|---|---
$3$ | $0$ | $0$ | $1$ | $0$ | $1$ | $1$ | $6$ | $0$ | $0$ | $20$ | $0$ | $20$ | $80$
$4$ | $0$ | $0$ | $4$ | $0$ | $4$ | $8$ | $7$ | $0$ | $0$ | $35$ | $0$ | $35$ | $175$
$5$ | $0$ | $0$ | $10$ | $0$ | $10$ | $30$ | $8$ | $0$ | $0$ | $56$ | $0$ | $56$ | $336$
For $(132,213,231)$-avoiding permutations, we have
###### Theorem 3.6.
For $n\geq 3$, in the set $S_{n}(132,213,231)$,
$\displaystyle f_{123}(n)$ $\displaystyle=f_{312}(n)={n+1\choose 4},$ (3.11)
$\displaystyle f_{321}(n)$ $\displaystyle=\frac{n(n-2)(n-1)^{2}}{12}.$ (3.12)
Proof. We begin with the following structure by Simion and Schmidt [8]
${\sigma}\in
S_{n}(132,213,231)\Leftrightarrow{\sigma}=n,n-1,\ldots,k+1,1,2,\ldots,k-1,k\text{~{}for
some~{}}k.$ (3.13)
Based on this structure, we first prove $f_{123}(n)=f_{312}(n)$. For each
${\sigma}=n,n-1,\ldots,k+1,1,\ldots,\underline{a},a+1,\ldots,\underline{b},b+1,\ldots,c-1,\underline{c},c+1,\ldots,k-1,k$
with $abc$ as a $123$-pattern, we set
${\sigma}^{\prime}=n,n-1,\ldots,\underline{n-k+c},\ldots,c,1,2,\ldots,\underline{a},a+1,\ldots,\underline{b},b+1,\ldots,c-1,$
and it is easy to check that $n-k+c,a,b$ is a $312$-pattern of
${\sigma}^{\prime}$. For example, if
${\sigma}=9\,8\,7\,1\,\underline{2}\,\underline{3}\,4\,\underline{5}\,6$ then
${\sigma}^{\prime}=9\,\underline{8}\,7\,6\,5\,1\,\underline{2}\,\underline{3}\,4$.
To calculate $f_{123}(n)$, we suppose
${\sigma}=n,n-1,\ldots,k+1,1,2,\ldots,k-1,k$ for some $k$ from structure
(3.13). Then, a $123$-pattern can be obtained by choosing three elements from
the last $k$ elements to play the role of $123$. Thus, summing up all possible
$k$ gives
$f_{123}(n)=\sum_{k=1}^{n}{k\choose 3}={n+1\choose 4}.$
We complete the proof by using $f_{123}(n)+f_{312}(n)+f_{321}(n)=n{n\choose
3}$.
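Concretely, since $f_{123}(n)=f_{312}(n)={n+1\choose 4}$, this last relation yields
$f_{321}(n)=n{n\choose 3}-2{n+1\choose 4}=\frac{n(n-1)(n-2)}{12}\left[2n-(n+1)\right]=\frac{n(n-2)(n-1)^{2}}{12},$
which is (3.12).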
The first few values of $f_{q}(S_{n}(132,213,231))$ for $q$ of length $3$ are
shown below.
$n$ | $f_{123}$ | $f_{132}$ | $f_{213}$ | $f_{231}$ | $f_{312}$ | $f_{321}$ | $n$ | $f_{123}$ | $f_{132}$ | $f_{213}$ | $f_{231}$ | $f_{312}$ | $f_{321}$
---|---|---|---|---|---|---|---|---|---|---|---|---|---
$3$ | $1$ | $0$ | $0$ | $0$ | $1$ | $1$ | $6$ | $35$ | $0$ | $0$ | $0$ | $35$ | $50$
$4$ | $5$ | $0$ | $0$ | $0$ | $5$ | $6$ | $7$ | $70$ | $0$ | $0$ | $0$ | $70$ | $105$
$5$ | $15$ | $0$ | $0$ | $0$ | $15$ | $20$ | $8$ | $126$ | $0$ | $0$ | $0$ | $126$ | $196$
For $(123,132,312)$-avoiding permutations, we have
###### Theorem 3.7.
For $n\geq 3$, in the set $S_{n}(123,132,312)$,
$\displaystyle f_{213}(n)$ $\displaystyle=f_{231}(n)={n\choose 3},$ (3.14)
$\displaystyle f_{321}(n)$ $\displaystyle=(n-2){n\choose 3}.$ (3.15)
###### Proof.
We begin with the following structure from Simion and Schmidt [8]:
${\sigma}\in S_{n}(123,132,312)\Leftrightarrow{\sigma}=n-1,n-2,\ldots,k+1,n,k,k-1,\ldots,1\text{ for some }k.$ (3.16)
Based on this structure, we prove $f_{213}(n)=f_{231}(n)$ by a direct
correspondence. Let
${\sigma}=n-1,\ldots,\underline{a},a+1,\ldots,\underline{b},b+1,\ldots,k+1,\underline{n},k,k-1,\ldots,2,1$
with $abn$ as a $213$-pattern. Then, we set
${\sigma}^{\prime}=n-1,\ldots,\underline{n-a+b},\ldots,n-a+k+1,\underline{n},n-a+k,n-a+k-1,\ldots,\underline{n-a},\ldots,2,1,$
where $n-a+b,n,n-a$ is a $231$-pattern of ${\sigma}^{\prime}$. For example, if
${\sigma}=8\,\underline{7}\,6\,\underline{5}\,4\,\underline{9}\,3\,2\,1$, then
${\sigma}^{\prime}=8\,\underline{7}\,6\,\underline{9}\,5\,4\,3\,\underline{2}\,1$.
To calculate $f_{213}(n)$, we suppose that
${\sigma}=n-1,n-2,\ldots,k+1,n,k,k-1,\ldots,2,1$ for some $k$. Then, a
$213$-pattern can be obtained by choosing two elements from the first $n-k-1$
elements to play the role of $21$, and let $n$ play the role of $3$. Thus,
summing up all possible $k$, we have
$f_{213}(n)=\sum_{k=0}^{n-1}{n-k-1\choose 2}={n\choose 3}.$
We complete the proof by using $f_{213}(n)+f_{231}(n)+f_{321}(n)=n{n\choose
3}$.
The first few values of $f_{q}(S_{n}(123,132,312))$ for $q$ of length $3$ are
shown below.
$n$ | $f_{123}$ | $f_{132}$ | $f_{213}$ | $f_{231}$ | $f_{312}$ | $f_{321}$ | $n$ | $f_{123}$ | $f_{132}$ | $f_{213}$ | $f_{231}$ | $f_{312}$ | $f_{321}$
---|---|---|---|---|---|---|---|---|---|---|---|---|---
$3$ | $0$ | $0$ | $1$ | $1$ | $0$ | $1$ | $6$ | $0$ | $0$ | $20$ | $20$ | $0$ | $80$
$4$ | $0$ | $0$ | $4$ | $4$ | $0$ | $8$ | $7$ | $0$ | $0$ | $35$ | $35$ | $0$ | $175$
$5$ | $0$ | $0$ | $10$ | $10$ | $0$ | $30$ | $8$ | $0$ | $0$ | $56$ | $56$ | $0$ | $336$
Finally, we study the pattern count on $(123,231,312)$-avoiding permutations.
###### Theorem 3.8.
For $n\geq 3$, in the set $S_{n}(123,231,312)$, we have
$\displaystyle f_{132}(n)$ $\displaystyle=f_{213}(n)={n+1\choose 4},$ (3.17)
$\displaystyle f_{321}(n)$ $\displaystyle=\frac{n(n-2)(n-1)^{2}}{12}.$ (3.18)
Proof. From Lemma 2.1, we see that
${\sigma}\in S_{n}(123,231,312)\Leftrightarrow{\sigma}^{r}\in
S_{n}(321,132,213)\Leftrightarrow({\sigma}^{r})^{c}\in S_{n}(123,231,312).$
We have $f_{213}(n)=f_{132}(n)$ from $(213^{r})^{c}=312^{c}=132$. The
following structure of the set $S_{n}(123,231,312)$ is given by Simion and
Schmidt [8]
${\sigma}\in S_{n}(123,231,312)\Leftrightarrow{\sigma}=k-1,k-2,\ldots,3,2,1,n,n-1,\ldots,k\text{~{}for some~{}}k.$ (3.19)
Suppose that ${\sigma}=k-1,k-2,\ldots,3,2,1,n,n-1\ldots,k$ for some $k$, then
a $213$-pattern can be obtained as follows: Choose two elements from the first
$k-1$ elements to play the role of $21$, and choose one element from the last
$n-k+1$ elements to play the role of $3$. Thus, summing up all possible $k$,
we have
$\displaystyle f_{213}(n)$ $\displaystyle=\sum_{k=1}^{n}{k-1\choose
2}(n-k+1)=\sum_{k=0}^{n-1}{k\choose 2}(n-k)={n+1\choose 4}.$
The formula for $f_{321}(n)$ is obtained by the relation
$2f_{213}(n)+f_{321}(n)=n{n\choose 3}$.
The first few values of $f_{q}(S_{n}(123,231,312))$ for $q$ of length $3$ are
shown below.
$n$ | $f_{123}$ | $f_{132}$ | $f_{213}$ | $f_{231}$ | $f_{312}$ | $f_{321}$ | $n$ | $f_{123}$ | $f_{132}$ | $f_{213}$ | $f_{231}$ | $f_{312}$ | $f_{321}$
---|---|---|---|---|---|---|---|---|---|---|---|---|---
$3$ | $0$ | $1$ | $1$ | $0$ | $0$ | $1$ | $6$ | $0$ | $35$ | $35$ | $0$ | $0$ | $50$
$4$ | $0$ | $5$ | $5$ | $0$ | $0$ | $6$ | $7$ | $0$ | $70$ | $70$ | $0$ | $0$ | $105$
$5$ | $0$ | $15$ | $15$ | $0$ | $0$ | $20$ | $8$ | $0$ | $126$ | $126$ | $0$ | $0$ | $196$
## References
* [1] M. Bóna, A Walk Through Combinatorics, 3rd edition, World Scientific, 2011.
* [2] M. Bóna, Surprising symmetries in objects counted by Catalan numbers, Electron. J. Combin. 19 (2012), P62.
* [3] M. Bóna, The absence of a pattern and the occurrences of another, Discrete Math. Theor. Comput. Sci. 12 (2010), no. 2, 89–102.
* [4] L. Comtet, Advanced Combinatorics, Reidel, Dordrecht, 1974.
* [5] J. Cooper, Combinatorial Problems I like, internet resource, available at http://www.math.sc.edu/cooper/combprob.html.
* [6] C. Homberger, Expected Patterns in Permutation Classes, Electron. J. Combin. 19(3) (2012), P43.
* [7] K. Rudolph, Pattern Popularity in $132$-avoiding Permutations, Electron. J. Combin. 20(1) (2013), P8.
* [8] R. Simion and F.W. Schmidt, Restricted permutations, European J. Combin. 6 (1985), 383–406.
* [9] R.P. Stanley, Enumerative Combinatorics, vol. 1, Cambridge University Press, 1997.
# Trace asymptotics formula for the Schrödinger operators with constant
magnetic fields.
Mouez DIMASSI and Anh Tuan DUONG
Mouez Dimassi, IMB (UMR CNRS 5251), Université de Bordeaux 1, 33405 Talence, France. [email protected]
Anh Tuan Duong, LAGA (UMR CNRS 7539), Univ. Paris 13, F-93430 Villetaneuse, France. [email protected]
###### Abstract.
In this paper, we consider the $2D$-Schrödinger operator with constant
magnetic field $H(V)=(D_{x}-By)^{2}+D_{y}^{2}+V_{h}(x,y)$, where $V$ tends to
zero at infinity and $h$ is a small positive parameter. We will be concerned
with two cases: the semi-classical limit regime $V_{h}(x,y)=V(hx,hy)$, and the
large coupling constant limit case $V_{h}(x,y)=h^{-\delta}V(x,y)$. We obtain a
complete asymptotic expansion in powers of $h^{2}$ of ${\rm
tr}(\Phi(H(V),h))$, where $\Phi(\cdot,h)\in
C^{\infty}_{0}(\mathbb{R};\mathbb{R})$. We also give a Weyl type asymptotics
formula with optimal remainder estimate of the counting function of
eigenvalues of $H(V)$.
###### Key words and phrases:
Magnetic Schrödinger operators, asymptotic trace formula, eigenvalues
distribution
###### 2010 Mathematics Subject Classification:
81Q10, 35J10, 35P20, 35C20, 47F05
## 1\. Introduction
Let $H_{0}=(D_{x}-By)^{2}+D_{y}^{2}$ be the $2D$-Schrödinger operator with
constant magnetic field $B>0$. Here $D_{\nu}=\frac{1}{i}\partial_{\nu}$. It is
well known that the operator $H_{0}$ is essentially self-adjoint on
$C_{0}^{\infty}(\mathbb{R}^{2})$ and its spectrum consists of eigenvalues of
infinite multiplicity (called Landau levels, see, e.g., [1]). We denote by
$\sigma(H_{0})$ (resp. $\sigma_{\rm ess}(H_{0})$) the spectrum (resp. the
essential spectrum) of the operator $H_{0}$. Then,
$\sigma(H_{0})=\sigma_{\rm ess}(H_{0})=\bigcup_{n=0}^{\infty}\\{(2n+1)B\\}.$
Let $V\in C^{\infty}(\mathbb{R}^{2};\mathbb{R})$ and assume that $V$ is
bounded with all its derivatives and satisfies
(1.1) $\lim\limits_{|(x,y)|\to\infty}V(x,y)=0.$
We now consider the perturbed Schrödinger operator
(1.2) $H(V)=H_{0}+\ V_{h}(x,y),$
where $V_{h}$ is a potential depending on a semi-classical parameter $h>0$,
and is of the form $V_{h}(x,y)=V(hx,hy)$ or $V_{h}(x,y)=h^{-\delta}V(x,y)$,
($\delta>0$). Using the Kato-Rellich theorem and the Weyl criterion, one sees
that $H(V)$ is essentially self-adjoint on $C_{0}^{\infty}(\mathbb{R}^{2})$
and
$\sigma_{\rm ess}(H(V))=\sigma_{\rm
ess}(H_{0})=\bigcup_{n=0}^{\infty}\\{(2n+1)B\\}.$
The spectral properties of the $2D$-Schrödinger operator with constant
magnetic field $H(V)$ have been intensively studied in the last ten years. In
the case of perturbations, the Landau levels $\Lambda_{n}=(2n+1)B$ become
accumulation points of the eigenvalues and the asymptotics of the function
counting the number of the eigenvalues lying in a neighborhood of
$\Lambda_{n}$ have been examined by many authors in different aspects. For
recent results, the reader may consult [19, 8, 20, 15, 14, 10, 2] and the
references therein.
The asymptotics with precise remainder estimate for the counting spectral
function of the operator $H(h):=H_{0}+V(hx,hy)$ have been obtained by V. Ivrii
[13]. In fact, he constructs a micro-local canonical form for $H(h)$, which
leads to the sharp remainder estimates.
However, there are only a few works treating the case of the large coupling
constant limit (i.e., $V_{h}(x,y)=h^{-\delta}V(x,y)$) (see [16, 17, 18]). In
this case, the asymptotic behavior of the counting spectral function depends
both on the sign of the perturbation and on its decay properties at infinity.
In [18], G. Raikov obtained only the main asymptotic term of the counting
spectral function as $h\searrow 0$.
The method used in [18] is of variational nature. By this method one can find
the main term in the asymptotics of the counting spectral function with a
weaker assumption on the perturbation $V$. However, it is quite difficult to
establish with these techniques an asymptotic formula involving sharp
remainder estimates.
For both the semi-classical and large coupling constant limit, we give a
complete asymptotic expansion of the trace of $\Phi(H(V),h)$ in powers of
$h^{2}$. We also establish a Weyl-type asymptotic formula with optimal
remainder estimate for the counting function of eigenvalues of $H(V)$. The
remainder estimate in Corollary 2.4 and Corollary 2.6 is ${\mathcal{O}}(1)$,
so it is better than in the standard case (without magnetic field, see e.g.,
[9]).
To prove our results, we show that the spectral study of $H(h)$ near some
energy level $z$ can be reduced to the study of an $h^{2}$-pseudo-differential
operator $E_{-+}(z)$ called the effective Hamiltonian. Our results are still
true for the case of dimension $2d$ with $d\geq 1$. For the transparency of
the presentation, we shall mainly be concerned with the two-dimensional case.
The paper is organized as follows: In the next section we state the
assumptions and the results precisely, and we give an outline of the proofs.
In Section 3 we reduce the spectral study of $H(V)$ to the one of a system of
$h^{2}$-pseudo-differential operators $E_{-+}(z)$. In Section 4, we establish
a trace formula involving the effective Hamiltonian $E_{-+}(z)$, and we prove
the results concerning the semi-classical case. Finally, Section 5 is devoted
to the proofs of the results concerning the large coupling constant limit
case.
## 2\. Formulations of main results
### 2.1. Semi-classical case
In this section we will be concerned with the semi-classical magnetic
Schrödinger operator
$H(h)=H_{0}+V(hx,hy),$
where $V$ satisfies (1.1). Since $B$ is a positive constant, after a rescaling we may assume that $B=1$.
Fix two real numbers $a$ and $b$ such that
$[a,b]\subset\mathbb{R}\setminus\sigma_{\rm ess}(H(h))$. We define
(2.1) $l_{0}:={\rm
min}\left\\{q\in{\mathbb{N}};V^{-1}([a-(2q+1),b-(2q+1)])\not=\emptyset\right\\},\,\,$
$l:={\rm
sup}\left\\{q\in{\mathbb{N}};V^{-1}([a-(2q+1),b-(2q+1)])\not=\emptyset\right\\}.$
We will give an asymptotic expansion in powers of $h^{2}$ of ${\rm
tr}(f(H(h),h))$ in the two following cases:
a) $f(x,h)=f(x)$, where $f\in C^{\infty}_{0}(\left(a,b\right);{\mathbb{R}})$.
b) $f(x,h)=f(x)\widehat{\theta}(\frac{x-\tau}{h^{2}})$, where $f,\theta\in
C^{\infty}_{0}({\mathbb{R}};\mathbb{R})$, $\tau\in\mathbb{R}$, and
$\widehat{\theta}$ is the Fourier transform of $\theta$.
As a consequence, we get a sharp remainder estimate for the counting spectral
function of ${H(h)}$ when $h\searrow 0$. Let us state the results precisely.
###### Theorem 2.1.
Assume (1.1), and let $f\in C_{0}^{\infty}((a,b);\mathbb{R})$. There exists a
sequence of real numbers $(\alpha_{j}(f))_{j\in{\mathbb{N}}}$, such that
(2.2) ${\rm tr}(f(H(h)))\sim\sum_{k=0}^{\infty}\alpha_{k}(f)h^{2(k-1)},$
where
(2.3) $\alpha_{0}(f)=\sum_{j=l_{0}}^{l}\frac{1}{2\pi}\iint
f((2j+1)+V(x,y))dxdy.$
Let $\theta\in C^{\infty}_{0}({\mathbb{R}})$, and let $\epsilon$ be a positive
constant. Set
$\breve{\theta}(\tau)={1\over 2\pi}\int
e^{it\tau}\theta(t)dt,\quad\breve{\theta}_{\epsilon}(t)={1\over\epsilon}\breve{\theta}({t\over\epsilon}).$
In the sequel we shall say that $\lambda$ is not a critical value of $V$ if
and only if $V(X)=\lambda$ for some $X\in\mathbb{R}^{2}$ implies
$\nabla_{X}V(X)\neq 0$.
###### Theorem 2.2.
Fix $\mu\in\mathbb{R}\setminus\sigma_{\rm ess}(H(h))$ which is not a critical
value of $(2j+1+V)$, for $j=l_{0},...,l$. Let $f\in
C^{\infty}_{0}(\left(\mu-\epsilon,\mu+\epsilon\right);\mathbb{R})$ and
$\theta\in C^{\infty}_{0}(\left(-{1\over C},{1\over C}\right);{\mathbb{R}})$,
with $\theta=1$ near $0$. Then there exist $\epsilon>0$, $C>0$ and a
functional sequence $c_{j}\in C^{\infty}({\mathbb{R}};\mathbb{R})$,
$j\in{\mathbb{N}}$, such that for all $M,N\in{\mathbb{N}}$, we have
(2.4) ${\rm
tr}\left(f(H(h))\breve{\theta}_{h^{2}}(t-H(h))\right)=\sum_{k=0}^{M}c_{k}(t)h^{2(k-1)}+\mathcal{O}\left(\frac{h^{2M}}{\langle
t\rangle^{N}}\right)$
uniformly in $t\in\mathbb{R}$, where
(2.5)
$c_{0}(t)=\frac{1}{2\pi}f(t)\sum_{j=l_{0}}^{l}\int_{\\{(x,y)\in\mathbb{R}^{2};\;2j+1+V(x,y)=t\\}}\frac{d{S_{t}}}{|\nabla
V(x,y)|}.$
###### Corollary 2.3.
In addition to the hypotheses of Theorem 2.1 suppose that $a$ and $b$ are not
critical values of $((2j+1)+V)$ for all $j=l_{0},...,l$. Let
${\mathcal{N}}_{h}([a,b])$ be the number of eigenvalues of $H(h)$ in the
interval $[a,b]$ counted with their multiplicities. Then we have
(2.6) ${\mathcal{N}}_{h}([a,b])=h^{-2}C_{0}+\mathcal{O}(1),\quad h\searrow 0,$
where
(2.7) $C_{0}=\frac{1}{2\pi}\sum_{j=l_{0}}^{l}{\rm
Vol}\left(V^{-1}([a-(2j+1),b-(2j+1)])\right).$
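Heuristically (this remark is ours), the constant $C_{0}$ is just $\alpha_{0}$ evaluated at the indicator function of $[a,b]$: taking $f\approx 1_{[a,b]}$ in (2.3) gives
$\alpha_{0}(1_{[a,b]})=\sum_{j=l_{0}}^{l}\frac{1}{2\pi}\iint 1_{[a,b]}\bigl((2j+1)+V(x,y)\bigr)\,dxdy=\frac{1}{2\pi}\sum_{j=l_{0}}^{l}{\rm Vol}\left(V^{-1}([a-(2j+1),b-(2j+1)])\right)=C_{0},$
and the Tauberian argument mentioned in Subsection 2.3 makes this approximation rigorous, with the ${\mathcal{O}}(1)$ remainder.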
### 2.2. Large coupling constant limit case.
We apply the above results to the Schrödinger operator with constant magnetic
field in the large coupling constant limit case. More precisely, consider
(2.8) $H_{\lambda}=(D_{x}-y)^{2}+D_{y}^{2}+\lambda V(x,y).$
Here $\lambda$ is a large constant, and the electric potential $V$ is assumed
to be strictly positive. Let $X:=(x,y)\in\mathbb{R}^{2}$. We suppose in
addition that for all $N\in\mathbb{N}$,
(2.9)
$V(X)=\sum_{j=0}^{N-1}\omega_{2j}\left(\frac{X}{\left|X\right|}\right)|X|^{-\delta-2j}+r_{2N}(X),\mbox{
for }\left|X\right|\geq 1,$
where
* •
$\omega_{0}\in C^{\infty}(\mathbb{S}^{1};(0,+\infty))$, $\omega_{2j}\in
C^{\infty}(\mathbb{S}^{1};\mathbb{R})$, $j\geq 1$. Here $\mathbb{S}^{1}$
denotes the unit circle.
* •
$\delta$ is some positive constant,
* •
$|\partial_{X}^{\beta}r_{2N}(X)|\leq
C_{\beta}(1+|X|)^{-|\beta|-\delta-2N},\;\forall\beta\in\mathbb{N}^{2}$.
Since $V$ is positive, it follows that
$\sigma(H_{\lambda})\subset[1,+\infty)$. Fix two real numbers $a$ and $b$ such
that $a>1$ and $[a,b]\subset\mathbb{R}\setminus\sigma_{\rm ess}(H_{\lambda})$.
Since $\sigma_{\rm ess}(H_{\lambda})=\cup_{j=0}^{\infty}\\{(2j+1)\\}$, there
exists $q\in{\mathbb{N}}$ such that $2q+1<a<b<2q+3$. The following results are
consequences of Theorem 2.1, Theorem 2.2 and Corollary 2.3.
###### Theorem 2.4.
Assume (2.9), and let $f\in C_{0}^{\infty}((a,b);\mathbb{R})$. There exists a
sequence of real numbers $(b_{j}(f))_{j\in{\mathbb{N}}}$, such that
(2.10) ${\rm
tr}(f(H_{\lambda}))\sim\lambda^{\frac{2}{\delta}}\sum_{k=0}^{\infty}b_{k}(f)\lambda^{-\frac{2k}{\delta}},\;\lambda\nearrow+\infty$
where
(2.11)
$b_{0}(f)=\frac{1}{2\pi\delta}\int_{0}^{2\pi}(\omega_{0}(\cos\theta,\sin\theta))^{\frac{2}{\delta}}d\theta\sum_{j=0}^{q}\int
f(u)(u-(2j+1))^{-1-\frac{2}{\delta}}du.$
###### Theorem 2.5.
Let $f\in C^{\infty}_{0}(\left(a-\epsilon,b+\epsilon\right);\mathbb{R})$ and
$\theta\in C^{\infty}_{0}(\left(-{1\over C},{1\over C}\right);{\mathbb{R}})$,
with $\theta=1$ near $0$. Then there exist $\epsilon>0$, $C>0$ and a
functional sequence $c_{j}\in C^{\infty}({\mathbb{R}};\mathbb{R})$,
$j\in{\mathbb{N}}$, such that for all $M,N\in{\mathbb{N}}$, we have
(2.12) ${\rm
tr}\left(f(H_{\lambda})\breve{\theta}_{\lambda^{-\frac{2}{\delta}}}(t-H_{\lambda})\right)=\lambda^{\frac{2}{\delta}}\sum_{k=0}^{M}c_{k}(t)\lambda^{-\frac{2k}{\delta}}+\mathcal{O}\left(\frac{\lambda^{-\frac{2M}{\delta}}}{\langle
t\rangle^{N}}\right)$
uniformly in $t\in\mathbb{R}$, where
(2.13)
$c_{0}(t)=\frac{1}{2\pi}f(t)\sum_{j=0}^{q}\int_{\\{X\in\mathbb{R}^{2};\;2j+1+W(X)=t\\}}\frac{d{S_{t}}}{|\nabla_{X}W(X)|}.$
Here $W(X)=\omega_{0}(\frac{X}{|X|})|X|^{-\delta}$.
###### Corollary 2.6.
Let ${\mathcal{N}}_{\lambda}([a,b])$ be the number of eigenvalues of
$H_{\lambda}$ in the interval $[a,b]$ counted with their multiplicities. We
have
${\mathcal{N}}_{\lambda}([a,b])=\lambda^{\frac{2}{\delta}}D_{0}+\mathcal{O}(1),\;\lambda\nearrow+\infty,$
where
$D_{0}=\frac{1}{4\pi}\sum_{j=0}^{q}\left((a-2j-1)^{-\frac{2}{\delta}}-(b-2j-1)^{-\frac{2}{\delta}}\right)\int_{0}^{2\pi}\left(\omega_{0}(\cos\theta,\sin\theta)\right)^{\frac{2}{\delta}}d\theta.$
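For orientation, we indicate how the constant $D_{0}$ can be computed from the principal part $W(X)=\omega_{0}(\frac{X}{|X|})|X|^{-\delta}$ of the rescaled potential (this short computation is not in the text): in polar coordinates, $W(r\cos\theta,r\sin\theta)\in[\alpha,\beta]$ exactly when $(\omega_{0}/\beta)^{1/\delta}\leq r\leq(\omega_{0}/\alpha)^{1/\delta}$, so that for $0<\alpha<\beta$,
${\rm Vol}\left(W^{-1}([\alpha,\beta])\right)=\frac{1}{2}\int_{0}^{2\pi}\left(\omega_{0}(\cos\theta,\sin\theta)\right)^{\frac{2}{\delta}}\left(\alpha^{-\frac{2}{\delta}}-\beta^{-\frac{2}{\delta}}\right)d\theta.$
Applying this with $[\alpha,\beta]=[a-2j-1,b-2j-1]$, summing over $j=0,\ldots,q$ and multiplying by $\frac{1}{2\pi}$, as in (2.7), gives the expression for $D_{0}$.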
### 2.3. Outline of the proofs
The purpose of this subsection is to provide a broad outline of the proofs. By
a change of variable on the phase space, the operator $H(h)$ is unitarily
equivalent to
$P(h):=P_{0}+V^{w}(h)=-\frac{\partial^{2}}{\partial y^{2}}+y^{2}+V^{w}(x+hD_{y},hy+h^{2}D_{x}),\quad X=(x,y)\in{\mathbb{R}}^{2}.$
Let $\Pi=1_{[c,d]}(P_{0})$ be the spectral projector of the harmonic
oscillator on the interval
$[c,d]=\left[a-\|V\|_{L^{\infty}(\mathbb{R}^{2})},b+\|V\|_{L^{\infty}(\mathbb{R}^{2})}\right]$.
Using the explicit expression of $\Pi$ we will reduce the spectral study of
$(P-z)$ for $z\in[a,b]+i[-1,1]$ to the study of a system of $h^{2}$-pseudo-differential operators, $E_{-+}(z)$, depending only on $x$ (see Remark 3.7 and Corollary 3.9). In particular, modulo ${\mathcal{O}}(h^{\infty})$, we are reduced to proving Theorem 2.1 and Theorem 2.2 for a system of $h^{2}$-pseudo-differential operators (see Proposition 4.1). Thus, (2.2) and (2.4) follow easily from Theorem 1.8 in [7] (see also [8]). Corollary 2.3 is a simple consequence of Theorem 2.1, Theorem 2.2 and a Tauberian argument.
To deal with the large coupling constant limit case, we note that for all
$M>0$ and $\lambda$ large enough, we have
$\left\\{(x,y,\eta,\xi)\in{\mathbb{R}}^{4};|(x,y)|<M,\,\,(\xi-y)^{2}+\eta^{2}+\lambda
V(x,y)\in[a,b]\right\\}=\emptyset.$
Thus, on the symbolic level, only the behavior of $V(x,y)$ at infinity
contributes to the asymptotic behavior of the left hand sides of (2.10) and
(2.12). Since, for $|X|$ large enough, $\lambda
V(X)=\varphi_{0}(hX)+\varphi_{2}(hX)h^{2}+\cdots+\varphi_{2j}(hX)h^{2j}+\cdots$
with $h=\lambda^{-\frac{1}{\delta}}$ and
$\varphi_{0}(X)=\omega_{0}(\frac{X}{|X|})|X|^{-\delta}$, Theorem 2.4 (resp.
Theorem 2.5) follows from Theorem 2.1 (resp. Theorem 2.2).
## 3\. The effective Hamiltonian
### 3.1. Classes of symbols
Let $M_{n}(\mathbb{C})$ be the space of complex square matrices of order $n$.
We recall the standard class of semi-classical matrix-valued symbols on
$T^{*}{\mathbb{R}}^{d}={\mathbb{R}}^{2d}$:
$S^{m}({\mathbb{R}}^{2d};M_{n}({\mathbb{C}}))=\left\\{a\in{C}^{\infty}({\mathbb{R}}^{2d}\times(0,1];M_{n}({\mathbb{C}}));\,\|\partial_{x}^{\alpha}\partial_{\xi}^{\beta}a(x,\xi;h)\|_{M_{n}({\mathbb{C}})}\leq
C_{\alpha,\beta}h^{-m}\right\\}.$
We note that the symbols are tempered as $h\searrow 0$. The more general class
$S^{m}_{\delta}({\mathbb{R}}^{2d};M_{n}({\mathbb{C}}))$, where the right hand
side in the above estimate is replaced by
$C_{\alpha,\beta}h^{-m-\delta(|\alpha|+|\beta|)}$, has nice quantization
properties as long as $0\leq\delta\leq\frac{1}{2}$ (we refer to [11, Chapter
7]).
For $h$-dependent symbol $a\in
S_{\delta}^{m}({\mathbb{R}}^{2d};M_{n}({\mathbb{C}}))$, we say that $a$ has an
asymptotic expansion in powers of $h$ in
$S_{\delta}^{m}(\mathbb{R}^{2d};M_{n}(\mathbb{C}))$ and we write
$a\sim\sum\limits_{j\geq 0}a_{j}h^{j},$
if there exists a sequence of symbols $a_{j}(x,\xi)\in
S_{\delta}^{m}(\mathbb{R}^{2d};M_{n}(\mathbb{C}))$ such that for all
$N\in{\mathbb{N}}$, we have
$a-\sum_{j=0}^{N}a_{j}h^{j}\in
S_{\delta}^{m-N-1}(\mathbb{R}^{2d};M_{n}(\mathbb{C})).$
In the special case when $m=\delta=0$ (resp. $m=\delta=0$, $n=1$), we will write $S^{0}(\mathbb{R}^{2d};M_{n}(\mathbb{C}))$ (resp. $S^{0}(\mathbb{R}^{2d})$) instead of $S^{0}_{0}(\mathbb{R}^{2d};M_{n}(\mathbb{C}))$ (resp. $S_{0}^{0}(\mathbb{R}^{2d};M_{1}({\mathbb{C}}))$).
We will use the standard Weyl quantization of symbols. More precisely, if
$a\in S_{\delta}^{m}(\mathbb{R}^{2d};M_{n}(\mathbb{C}))$, then
$a^{w}(x,hD_{x};h)$ is the operator defined by
$a^{w}(x,hD_{x};h)u(x)=(2\pi h)^{-d}\iint
e^{\frac{i(x-y,\xi)}{h}}a\left(\frac{x+y}{2},\xi;h\right)u(y)dyd\xi,\;u\in\mathcal{S}(\mathbb{R}^{d};\mathbb{C}^{n}).$
In order to prove our main results, we shall recall some well-known facts.
###### Proposition 3.1.
(Composition formula) Let $a_{i}\in
S^{m_{i}}_{\delta}({\mathbb{R}}^{2d};M_{n}({\mathbb{C}}))$, $i=1,2$,
$\delta\in[0,{1\over 2})$. Then
$b^{w}(y,hD_{y};h)=a_{1}^{w}(y,hD_{y})\circ a^{w}_{2}(y,hD_{y})$ is an
$h$-pseudo-differential operator, and
$b(y,\eta;h)\sim\sum_{j=0}^{\infty}b_{j}(y,\eta)h^{j},\,\,\hbox{\rm in
}S^{m_{1}+m_{2}}_{\delta}({\mathbb{R}}^{2d};M_{n}({\mathbb{C}})).$
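In particular, the first two terms of this expansion are given by the usual formulas of the Weyl calculus (with the matrix products taken in the indicated order):
$b_{0}(y,\eta)=a_{1}(y,\eta)a_{2}(y,\eta),\qquad b_{1}(y,\eta)=\frac{1}{2i}\left(\partial_{\eta}a_{1}\,\partial_{y}a_{2}-\partial_{y}a_{1}\,\partial_{\eta}a_{2}\right)(y,\eta).$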
###### Proposition 3.2.
(Beals characterization) Let
$A=A_{h}:{\mathcal{S}}({\mathbb{R}}^{d};\mathbb{C}^{n})\to{\mathcal{S}}^{\prime}({\mathbb{R}}^{d};\mathbb{C}^{n})$,
$0<h\leq 1$. The following two statements are equivalent:
(1) $A=a^{w}(x,hD_{x};h)$, for some $a=a(x,\xi;h)\in
S^{0}({\mathbb{R}}^{2d};M_{n}(\mathbb{C}))$.
(2) For every $N\in{\mathbb{N}}$ and for every sequence $l_{1}(x,\xi)$,…,
$l_{N}(x,\xi)$ of linear forms on ${\mathbb{R}}^{2d}$, the operator ${\rm
ad}_{l_{1}^{w}(x,hD_{x})}\circ\dots\circ{\rm ad}_{l_{N}^{w}(x,hD_{x})}A_{h}$
belongs to ${\mathcal{L}}(L^{2},L^{2})$ and is of norm ${\mathcal{O}}(h^{N})$
in that space. Here, ${\rm ad}_{A}B:=[A,B]=AB-BA$.
###### Proposition 3.3.
($L^{2}$-boundedness) Let $a=a(x,\xi;h)\in S^{0}_{\delta}({\mathbb{R}}^{2d};M_{n}(\mathbb{C}))$, $0\leq\delta\leq 1/2$. Then $a^{w}(x,hD_{x};h)$ is bounded: $L^{2}({\mathbb{R}}^{d};{\mathbb{C}}^{n})\rightarrow L^{2}({\mathbb{R}}^{d};{\mathbb{C}}^{n})$, and there is a constant $C$ independent of $h$ such that for $0<h\leq 1$,
$\|a^{w}(x,hD_{x};h)\|\leq C.$
### 3.2. Reduction to a semi-classical problem
Here, we shall make use of a strong field reduction onto the $j$th eigenfunction of the harmonic oscillator, $j=l_{0},\ldots,l$, and a well-posed Grushin problem for $H(h)$. We show that the spectral study of $H(h)$ near some energy level $z$ can be reduced to the study of an $h^{2}$-pseudo-differential operator $E_{-+}(z)$ called the effective Hamiltonian. Without
any loss of generality we may assume that $l_{0}=1$.
###### Lemma 3.4.
There exists a unitary operator $\widetilde{\mathcal{W}}:L^{2}({\mathbb{R}}^{2})\to L^{2}({\mathbb{R}}^{2})$ such that
$P(h)=\widetilde{\mathcal{W}}H(h)\widetilde{\mathcal{W}}^{*},$
where $P(h):=P_{0}+V^{w}(h)$, $P_{0}:=-\frac{\partial^{2}}{\partial
y^{2}}+y^{2}$ and ${V}^{w}(h):=V^{w}(x+hD_{y},hy+h^{2}D_{x})$.
###### Proof.
The linear symplectic mapping
$\widetilde{S}:\;\mathbb{R}^{4}\to\mathbb{R}^{4}\mbox{ given by
}(x,y,\xi,\eta)\mapsto\left(\frac{1}{h}x+\eta,y+h\xi,h\xi,\eta\right),$
maps the Weyl symbol of the operator $H(h)$ into the Weyl symbol of the
operator $P(h)$. By Theorem A.2 in [11, Chapter 7], there exists a unitary
operator $\widetilde{\mathcal{W}}:L^{2}({\mathbb{R}}^{2})\to L^{2}({\mathbb{R}}^{2})$
associated to $\widetilde{S}$ such that
$P(h)=\widetilde{\mathcal{W}}H(h)\widetilde{\mathcal{W}}^{*}$. ∎
Introduce the operator $R^{-}_{j}:L^{2}({\mathbb{R}}_{x})\rightarrow
L^{2}({\mathbb{R}}^{2}_{x,y})$ by
$(R^{-}_{j}v)(x,y)=\phi_{j}(y)v(x),$
where $\phi_{j}$ is the $j$th normalized eigenfunction of the harmonic
oscillator. Further, the operator
$R_{j}^{+}:L^{2}({\mathbb{R}}^{2}_{x,y})\rightarrow L^{2}({\mathbb{R}}_{x})$
is defined by
$(R_{j}^{+}u)(x)=\int\phi_{j}(y)u(x,y)dy.$
Notice that $R_{j}^{+}$ is the adjoint of $R_{j}^{-}$. An easy computation
shows that $R_{j}^{+}R_{j}^{-}=I_{L^{2}({\mathbb{R}}_{x})}$ and
$R_{j}^{-}R_{j}^{+}=\Pi_{j}$, where
$\Pi_{j}:\;L^{2}({\mathbb{R}}^{2})\to
L^{2}({\mathbb{R}}^{2}),\;v(x,y)\mapsto\int v(x,t)\phi_{j}(t)dt\phi_{j}(y).$
Define $\Pi=\sum\limits_{j=1}^{l}\Pi_{j}$.
###### Lemma 3.5.
Let $\Omega:=\\{z\in\mathbb{C};\;{\rm Re}z\in[a,b],\,\,|{\rm Im}z|<1\\}$. The
operator
$(I-\Pi)P(h)(I-\Pi)-z:\;(I-\Pi)L^{2}({\mathbb{R}}^{2})\to(I-\Pi)L^{2}({\mathbb{R}}^{2})$
is uniformly invertible for $z\in\Omega$.
###### Proof.
It follows from the definition of $\Pi$ that
$\sigma((I-\Pi)P_{0}(I-\Pi))=\bigcup_{k\in\mathbb{N}\setminus\\{1,...,l\\}}\\{2k+1\\}$.
Hence
$\sigma((I-\Pi)P(h)(I-\Pi))\subset\bigcup_{k\in\mathbb{N}\setminus\\{1,...,l\\}}\left[2k+1-\|V\|_{L^{\infty}(\mathbb{R}^{2})},2k+1+\|V\|_{L^{\infty}(\mathbb{R}^{2})}\right],$
which implies
$\sigma((I-\Pi)P(h)(I-\Pi))\cap[a,b]=\emptyset.$
Consequently,
$\|(I-\Pi)P(h)(I-\Pi)-z\|\geq{\rm
dist}\left([a,b],\sigma((I-\Pi)P(h)(I-\Pi))\right)>0$
uniformly for $z\in\Omega$. Thus, we obtain
$(I-\Pi)P(h)(I-\Pi)-z:\;(I-\Pi)L^{2}({\mathbb{R}}^{2})\to(I-\Pi)L^{2}({\mathbb{R}}^{2})$
is uniformly invertible for $z\in\Omega$. ∎
For $z\in\Omega$, we put
$\mathcal{P}(z)=\begin{pmatrix}P(h)-z&R_{1}^{-}&\cdots&R_{l}^{-}\\ R_{1}^{+}&0&\cdots&0\\ \vdots&\vdots&\ddots&\vdots\\ R_{l}^{+}&0&\cdots&0\end{pmatrix}\quad\mbox{and}\quad\widetilde{\mathcal{E}}(z)=\begin{pmatrix}R(z)&R_{1}^{-}&\cdots&R_{l}^{-}\\ R_{1}^{+}&A_{1}&\cdots&0\\ \vdots&\vdots&\ddots&\vdots\\ R_{l}^{+}&0&\cdots&A_{l}\end{pmatrix},$
where $A_{j}=z-(2j+1)-R_{j}^{+}V^{w}(h)R_{j}^{-},j=1,...,l$ and
$R(z)=((I-\Pi)P(h)(I-\Pi)-z)^{-1}(I-\Pi)$.
Let
$\mathcal{E}_{1}(z):=\mathcal{P}(z)\widetilde{\mathcal{E}}(z)=(a_{k,j})_{k,j=1}^{l+1}$.
In the next step we will compute explicitly $a_{k,j}$.
Using the fact that $\Pi R(z)=0$ as well as the fact that $\Pi$ commutes with
$P_{0}$, we deduce that $(P(h)-z)R(z)=(I-\Pi)+[\Pi,V^{w}(h)]R(z)$.
Consequently,
(3.1)
$a_{1,1}=(P(h)-z)R(z)+\sum_{j=1}^{l}R_{j}^{-}R_{j}^{+}=I+[\Pi,V^{w}(h)]R(z).$
Next, from the definition of $A_{1}$ and the fact that
$P_{0}R_{1}^{-}=3R_{1}^{-}$ (we recall that $l_{0}=1$), one has
$\displaystyle a_{1,2}$ $\displaystyle=(P(h)-z)R_{1}^{-}+R_{1}^{-}A_{1}$
$\displaystyle=-(z-3)R_{1}^{-}+V^{w}(h)R_{1}^{-}+R_{1}^{-}(z-3)-\Pi_{1}V^{w}(h)R_{1}^{-}$
$\displaystyle=V^{w}(h)R_{1}^{-}-\Pi_{1}V^{w}(h)R_{1}^{-}$
$\displaystyle=[V^{w}(h),\Pi_{1}]R_{1}^{-}.$
Similarly, $a_{1,j}=[V^{w}(h),\Pi_{j-1}]R_{j-1}^{-},j\geq 3$.
Since $R_{1}^{+}(1-\Pi_{1})=R_{1}^{+}-R_{1}^{+}R_{1}^{-}R_{1}^{+}=0$ and
$R_{1}^{+}\Pi_{j}=R_{1}^{+}R_{j}^{-}R_{j}^{+}=0$ for $j\not=1$, it follows
that $a_{2,1}=R_{1}^{+}R(z)=0$. Evidently,
$a_{2,2}=R_{1}^{+}R_{1}^{-}=I_{L^{2}(\mathbb{R})}$ and
$a_{2,j}=R_{1}^{+}R_{j}^{-}=0$ for $j\geq 3$. The same arguments as above show
that $a_{k,j}=\delta_{j,k}I_{L^{2}(\mathbb{R})}$ for all $k\geq 3$. Summing up, we have proved
$\mathcal{E}_{1}(z)=\mathcal{P}(z)\widetilde{\mathcal{E}}(z)=\begin{pmatrix}I+[\Pi,V^{w}(h)]R(z)&[V^{w}(h),\Pi_{1}]R_{1}^{-}&\cdots&[V^{w}(h),\Pi_{l}]R_{l}^{-}\\ 0&I_{L^{2}(\mathbb{R})}&\cdots&0\\ \vdots&\vdots&\ddots&\vdots\\ 0&0&\cdots&I_{L^{2}(\mathbb{R})}\end{pmatrix}.$
Let $f_{j}\in C_{0}^{\infty}(\mathbb{R})$, $f_{j}=1$ near $2j+1$ and
supp$f_{j}\subset[2j,2j+2]$. By the spectral theorem we have
$\Pi_{j}=f_{j}(D_{y}^{2}+y^{2})$. On the other hand, the functional calculus
of pseudo-differential operators shows that
$\Pi_{j}=f_{j}(D_{y}^{2}+y^{2})=B^{w}(y,D_{y})$ with
$B(y,\eta)={\mathcal{O}}(\langle
y\rangle^{-\infty}\langle\eta\rangle^{-\infty})$.
The composition formula of pseudo-differential operators (Proposition 3.1)
gives
(3.2)
$[V^{w}(h),\Pi_{j}]=\sum_{k=1}^{N}b_{k,j}^{w}(x,h^{2}D_{x})c_{k,j}^{w}(y,D_{y})h^{k}+\mathcal{O}(h^{N+1}),\;\forall
N\in\mathbb{N},$
where $b_{k,j},c_{k,j}\in S^{0}(\mathbb{R}^{2})$. This together with the
Calderon-Vaillancourt theorem (Proposition 3.3) yields
$[V^{w}(h),\Pi_{j}]=\mathcal{O}(h)$ in $\mathcal{L}(L^{2}(\mathbb{R}^{2}))$.
Therefore, for $h$ sufficiently small, $\mathcal{E}_{1}(z)$ is uniformly
invertible for $z\in\Omega$, and
$\mathcal{E}_{1}(z)^{-1}=\begin{pmatrix}a(z)&-a(z)[V^{w}(h),\Pi_{1}]R_{1}^{-}&\cdots&-a(z)[V^{w}(h),\Pi_{l}]R_{l}^{-}\\ 0&I_{L^{2}(\mathbb{R})}&\cdots&0\\ \vdots&\vdots&\ddots&\vdots\\ 0&0&\cdots&I_{L^{2}(\mathbb{R})}\end{pmatrix},$
where $a(z)=(I+[\Pi,V^{w}(h)]R(z))^{-1}$. Using the explicit expressions of
$\widetilde{\mathcal{E}}(z)$ and $\mathcal{E}_{1}(z)^{-1}$, we get
$\mathcal{E}(z):=\widetilde{\mathcal{E}}(z)\mathcal{E}_{1}(z)^{-1}$
$=\begin{pmatrix}R(z)a(z)&-R(z)a(z)[V^{w}(h),\Pi_{1}]R_{1}^{-}+R_{1}^{-}&\cdots&-R(z)a(z)[V^{w}(h),\Pi_{l}]R_{l}^{-}+R_{l}^{-}\\ R_{1}^{+}a(z)&A_{1}-R_{1}^{+}a(z)[V^{w}(h),\Pi_{1}]R_{1}^{-}&\cdots&-R_{1}^{+}a(z)[V^{w}(h),\Pi_{l}]R_{l}^{-}\\ \vdots&\vdots&\ddots&\vdots\\ R_{l}^{+}a(z)&-R_{l}^{+}a(z)[V^{w}(h),\Pi_{1}]R_{1}^{-}&\cdots&A_{l}-R_{l}^{+}a(z)[V^{w}(h),\Pi_{l}]R_{l}^{-}\end{pmatrix}.$
Thus, we have proved the following theorem.
###### Theorem 3.6.
Let $\Omega$ be as in Lemma 3.5. Then $\mathcal{P}(z)$ is uniformly invertible
for $z\in\Omega$ with inverse $\mathcal{E}(z)$. In addition, $\mathcal{E}(z)$
is holomorphic in $z\in\Omega$.
From now on, we write
$\mathcal{E}(z)=(B_{k,j})_{k,j=1}^{l+1}=\begin{pmatrix}E(z)&E_{+}(z)\\ E_{-}(z)&E_{-+}(z)\end{pmatrix}$, where $E_{-+}(z)=(B_{k,j})_{k,j=2}^{l+1}$, $E(z)=R(z)a(z)$, $E_{-}(z)=\begin{pmatrix}R_{1}^{+}a(z)\\ \vdots\\ R_{l}^{+}a(z)\end{pmatrix}$, and
$E_{+}(z)=\begin{pmatrix}-R(z)a(z)[V^{w}(h),\Pi_{1}]R_{1}^{-}+R_{1}^{-}&\cdots&-R(z)a(z)[V^{w}(h),\Pi_{l}]R_{l}^{-}+R_{l}^{-}\end{pmatrix}.$
###### Remark 3.7.
The following formulas are consequences of the fact that ${\mathcal{E}}(z)$ is
the inverse of ${\mathcal{P}}(z)$ as well as the fact that $R_{j}^{\pm}$ are
independent of $z$ (see [8, 12]):
(3.3) $(z-P(h))^{-1}=-E(z)+E_{+}(z)(E_{-+}(z))^{-1}E_{-}(z),\;z\in\rho(P(h)),$
(3.4) $\partial_{z}E_{-+}(z)=E_{-}(z)E_{+}(z).$
In what follows, the explicit formulae for $E(z)$ and $E_{\pm}(z)$ are not
needed. We just indicate that they are holomorphic in $z$. In the remainder of
this section, we will prove that the symbol of the operator $E_{-+}(z)$ is in
$S^{0}({\mathbb{R}}^{2};M_{l}(\mathbb{C}))$, and has a complete asymptotic
expansion in powers of $h$. Moreover, we will give explicitly the principal
term.
###### Proposition 3.8.
For $1\leq k,j\leq l$, the operators $R_{j}^{+}V^{w}(h)R_{j}^{-}$ and $R_{k}^{+}a(z)[V^{w}(h),\Pi_{j}]R_{j}^{-}$ are $h^{2}$-pseudo-differential operators with bounded symbols. Moreover, there exist $v_{j,n},b_{k,j,n}\in S^{0}(\mathbb{R}^{2})$, $n=0,1,2,\ldots$, such that
(3.5) $\displaystyle R_{j}^{+}V^{w}(h)R_{j}^{-}$
$\displaystyle=\sum_{n=0}^{N}h^{2n}v^{w}_{j,n}(x,h^{2}D_{x})+\mathcal{O}\left(h^{2(N+1)}\right),$
(3.6) $\displaystyle R_{k}^{+}a(z)[V^{w}(h),\Pi_{j}]R_{j}^{-}$
$\displaystyle=\sum_{n=1}^{N}b^{w}_{k,j,n}(x,h^{2}D_{x},z)h^{n}+\mathcal{O}\left(h^{N+1}\right),\mbox{
for }k\not=j,$ (3.7) $\displaystyle R_{j}^{+}a(z)[V^{w}(h),\Pi_{j}]R_{j}^{-}$
$\displaystyle=\sum_{n=1}^{N}b^{w}_{j,j,2n}(x,h^{2}D_{x},z)h^{2n}+\mathcal{O}\left(h^{2(N+1)}\right),\;\forall N\in\mathbb{N}.$
Here
$v_{j,0}(x,\xi)=V(x,\xi),\;\;j=1,...,l.$
###### Proof.
The proofs of (3.5), (3.6) and (3.7) are quite similar, and are based on the Beals characterization of $h^{2}$-pseudo-differential operators (see Proposition 3.2). We give only the main ideas of the proof of (3.5) and we
refer to [8, 11, 12] for more details. Let $Q$ denote the left hand side of
(3.5). Let $l^{w}(x,h^{2}D_{x})$ be as in Proposition 3.2. Using the fact that
$R_{j}^{\pm}$ commutes with $l^{w}(x,h^{2}D_{x})$ as well as the fact that
$V^{w}(h)$ is an $h^{2}$-pseudo-differential operator on $x$, we deduce from
Proposition 3.2 that $Q=q^{w}(x,h^{2}D_{x};h)$, with $q\in
S^{0}(\mathbb{R}^{2})$. On the other hand, writing
(3.8) $V^{w}(h)=V^{w}(x,h^{2}D_{x})+hD_{y}\left({\partial V\over\partial
x}\right)^{w}(x,h^{2}D_{x})+hy\left({\partial V\over\partial
y}\right)^{w}(x,h^{2}D_{x})+\cdots,$
and using Proposition 3.2, we see that $q(x,\xi;h)$ has an asymptotic
expansion in powers of $h$.
Notice that the odd powers of $h$ in (3.5) and (3.7) disappear, due to the
special properties of the eigenfunctions of the harmonic oscillator (i.e.,
$\int_{\mathbb{R}}y^{2j+1}|\phi_{j}(y)|^{2}dy=\int_{\mathbb{R}}\phi_{j}(y)\,\partial_{y}^{2j+1}\phi_{j}(y)dy=0$).
Finally, since $R^{+}_{j}R^{-}_{j}=I_{L^{2}(\mathbb{R})}$, it follows from
(3.8) that $v_{j,0}(x,\xi)=V(x,\xi)$.
∎
Let $e_{-+}(x,\xi,z,h)$ denote the symbol of $E_{-+}(z)$. The following
corollary follows from the above proposition and the definition of
$E_{-+}(z)$.
###### Corollary 3.9.
We have
$e_{-+}(x,\xi,z,h)\sim\sum_{j=0}^{\infty}e_{-+}^{j}(x,\xi,z)h^{j},\,\,\ {\rm
in}\,\,S^{0}({\mathbb{R}}^{2};M_{l}({\mathbb{C}})),$
with
$e_{-+}^{0}(x,\xi,z)=\Bigl{(}(z-(2j+1)-V(x,\xi))\delta_{i,j}\Bigr{)}_{1\leq
i,j\leq l}.$
## 4\. Proof of Theorem 2.1 and Theorem 2.2
### 4.1. Trace formulae
Let $f\in C_{0}^{\infty}((a,b);\mathbb{R})$, where
$(a,b)\subset\mathbb{R}\setminus\sigma_{\rm ess}(P(h))$, and let $\theta\in
C^{\infty}_{0}(\mathbb{R};\mathbb{R})$. Set
$\Sigma_{j}([a,b])=\left\\{(x,\xi)\in\mathbb{R}^{2};\;2j+1+V(x,\xi)\in[a,b]\right\\},j=1,...,l$
and
(4.1) $\Sigma_{[a,b]}=\bigcup_{j=1}^{l}\Sigma_{j}([a,b]).$
Let $\widetilde{f}\in C_{0}^{\infty}((a,b)+i[-1,1])$ be an almost analytic extension of $f$, i.e., $\widetilde{f}=f$ on ${\mathbb{R}}$ and $\overline{\partial}_{z}\widetilde{f}$ vanishes on ${\mathbb{R}}$ to infinite order, in the sense that $\overline{\partial}_{z}\widetilde{f}(z)={\mathcal{O}}_{N}(|{\rm Im}\;z|^{N})$ for all $N\in{\mathbb{N}}$. Then the functional calculus due to Helffer-Sjöstrand (see e.g. [11, Chapter 8]) yields
(4.2)
$f(P(h))=-{1\over\pi}\int\overline{\partial}_{z}\widetilde{f}(z)(z-P(h))^{-1}L(dz),$
(4.3)
$f(P(h))\breve{\theta}_{h^{2}}(t-P(h))=-{1\over\pi}\int\overline{\partial}_{z}\widetilde{f}(z)\breve{\theta}_{h^{2}}(t-z)(z-P(h))^{-1}L(dz).$
Here $L(dz)=dxdy$ is the Lebesgue measure on the complex plane
${\mathbb{C}}\sim{\mathbb{R}}^{2}_{x,y}$. In the last equality we have used
the fact that $\widetilde{f}(z)\breve{\theta}_{h^{2}}(t-z)$ is an almost
analytic extension of $f(x)\breve{\theta}_{h^{2}}(t-x)$, since
$z\mapsto\breve{\theta}_{h^{2}}(t-z)$ is analytic.
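We recall one elementary way to produce such an almost analytic extension (a standard construction, sketched here for the reader's convenience): for a fixed $N$, one may take
$\widetilde{f}(x+iy)=\chi(y)\sum_{k=0}^{N}f^{(k)}(x)\frac{(iy)^{k}}{k!},$
with $\chi\in C_{0}^{\infty}(\mathbb{R})$ equal to $1$ near $0$; since $\overline{\partial}_{z}=\frac{1}{2}(\partial_{x}+i\partial_{y})$, one computes
$\overline{\partial}_{z}\widetilde{f}(z)=\frac{1}{2}\chi(y)f^{(N+1)}(x)\frac{(iy)^{N}}{N!}+\frac{i}{2}\chi^{\prime}(y)\sum_{k=0}^{N}f^{(k)}(x)\frac{(iy)^{k}}{k!},$
which is ${\mathcal{O}}(|{\rm Im}\;z|^{N})$ near the real axis; an extension satisfying the estimate for every $N$ simultaneously is then obtained by a Borel-type summation.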
###### Proposition 4.1.
For $h$ small enough, we have
(4.4) ${\rm tr}(f(P(h)))={\rm
tr}\left(-\frac{1}{\pi}\int\overline{\partial}_{z}\widetilde{f}(z)(E_{-+}(z))^{-1}\partial_{z}E_{-+}(z)L(dz)\chi^{w}(x,h^{2}D_{x})\right)+\mathcal{O}(h^{\infty}),$
(4.5) ${\rm tr}\left(f(P(h))\breve{\theta}_{h^{2}}(t-P(h))\right)=$ ${\rm
tr}\left(-\frac{1}{\pi}\int\overline{\partial}_{z}\widetilde{f}(z)\breve{\theta}_{h^{2}}(t-z)(E_{-+}(z))^{-1}\partial_{z}E_{-+}(z)L(dz)\chi^{w}(x,h^{2}D_{x})\right)+\mathcal{O}(h^{\infty}),$
where $\chi\in C_{0}^{\infty}({\mathbb{R}}^{2};{\mathbb{R}})$ is equal to one
in a neighbourhood of $\Sigma_{[a,b]}$.
###### Proof.
Replacing $(z-P(h))^{-1}$ in (4.2) by the right hand side of (3.3), and using
the fact that $E(z)$ is holomorphic in $z$, we obtain
(4.6)
$f({P(h)})=-\frac{1}{\pi}\int\overline{\partial}_{z}\widetilde{f}(z)E_{+}(z)(E_{-+}(z))^{-1}E_{-}(z)L(dz).$
Let $\widetilde{V}\in S^{0}({\mathbb{R}}^{2})$ be a real-valued function
coinciding with $V$ for large $(x,y)$, and having the property that
(4.7) $|z-(2j+1)-\widetilde{V}(x,y)|>c>0,\,\,j=1,2,...,l,$
uniformly in $z\in{\rm supp}\;\widetilde{f}$, and $(x,y)\in{\mathbb{R}}^{2}$.
We recall that for $z\in{\rm supp}\;\widetilde{f}$, ${\rm
Re}z\in(a,b)\subset\mathbb{R}\setminus\sigma_{\rm
ess}(H(h))=\mathbb{R}\setminus\cup_{k=0}^{\infty}\\{(2k+1)\\}$. Then (4.7)
holds for $\widetilde{V}\in S^{0}({\mathbb{R}}^{2})$ with $\|\widetilde{V}\|$
small enough.
Set
$\widetilde{E}_{-+}(z):=E_{-+}(z)+\left(V^{w}(x,h^{2}D_{x})-\widetilde{V}^{w}(x,h^{2}D_{x})\right)I_{l}$,
and let $\widetilde{e}(x,\xi,z)$ be the principal symbol of
$\widetilde{E}_{-+}(z)$. Here $I_{l}$ denotes the unit matrix of order $l$. It
follows from (4.7) that $|{\rm det}\,\widetilde{e}(x,\xi,z)|>c^{l}$. Then for
sufficiently small $h>0$, the operator $\widetilde{E}_{-+}(z)$ is elliptic,
and $\widetilde{E}_{-+}(z)^{-1}$ is well defined and holomorphic for $z$ in
some fixed complex neighbourhood of supp$\widetilde{f}$, (see chapter 7 of
[11]). Hence, by an integration by parts, we get
$-{1\over\pi}\int\overline{\partial}_{z}\widetilde{f}(z)E_{+}(z)\widetilde{E}_{-+}(z)^{-1}E_{-}(z)L(dz)=0.$
Combining this with (4.6) and using the resolvent identity for ${\rm
Im}\;z\neq 0$
$E_{-+}(z)^{-1}=\widetilde{E}_{-+}(z)^{-1}+E_{-+}(z)^{-1}(\widetilde{E}_{-+}(z)-E_{-+}(z))\widetilde{E}_{-+}(z)^{-1},$
we obtain
(4.8) ${\rm tr}\left(f(P(h))\right)=-\frac{1}{\pi}{\rm
tr}\left(\int\overline{\partial}_{z}\widetilde{f}(z)E_{+}(z){E}_{-+}(z)^{-1}(\widetilde{E}_{-+}(z)-{E}_{-+}(z))\widetilde{E}_{-+}(z)^{-1}E_{-}(z)L(dz)\right).$
Since the symbol of $E_{-+}(z)-\widetilde{E}_{-+}(z)$ is $(\widetilde{V}-V)I_{l}$, which belongs to $C^{\infty}_{0}(\mathbb{R}^{2};M_{l}(\mathbb{C}))$, the operator $E_{-+}(z)-\widetilde{E}_{-+}(z)$ is of trace class. It is then clear that we can permute the integration and the operator ${\rm tr}$ in the right hand side of (4.8).
Using the property of cyclic invariance of the trace, and applying (3.4) we
get
${\rm
tr}\left(E_{+}(z)E_{-+}(z)^{-1}(\widetilde{E}_{-+}(z)-E_{-+}(z))\widetilde{E}_{-+}(z)^{-1}E_{-}(z)\right)=$
${\rm
tr}\left(E_{-+}(z)^{-1}(\widetilde{E}_{-+}(z)-E_{-+}(z))\widetilde{E}_{-+}(z)^{-1}\partial_{z}E_{-+}(z)\right).$
Let $\chi\in C_{0}^{\infty}(\mathbb{R}^{2})$ be equal to $1$ in a
neighbourhood of ${\rm supp\,}(\widetilde{V}-V)$. From the composition formula
for two $h^{2}-\Psi$DOs with Weyl symbols (see Proposition 3.1), we see that
all the derivatives of the symbol of the operator
$(E_{-+}(z)-\widetilde{E}_{-+}(z))(\widetilde{E}_{-+}(z))^{-1}\partial_{z}E_{-+}(z)(1-\chi^{w}(x,h^{2}D_{x}))$
are ${\mathcal{O}}(h^{2N}\langle(x,\xi)\rangle^{-N})$ for every
$N\in{\mathbb{N}}$. The trace class-norm of this expression is therefore
${\mathcal{O}}(h^{\infty})$, and consequently
(4.9) ${\rm
tr}(E_{+}(z)E_{-+}(z)^{-1}(\widetilde{E}_{-+}(z)-E_{-+}(z))\widetilde{E}_{-+}(z)^{-1}E_{-}(z))=$
${\rm
tr}(E_{-+}(z)^{-1}(\widetilde{E}_{-+}(z)-E_{-+}(z))\widetilde{E}_{-+}(z)^{-1}\partial_{z}E_{-+}(z)\chi^{w}(x,h^{2}D_{x}))+{\mathcal{O}}(h^{\infty}|{\rm
Im}z|^{-1}).$
Here we recall from (3.3) that $E_{-+}(z)^{-1}={\mathcal{O}}(|{\rm
Im}z|^{-1})$.
Inserting (4.9) into (4.8), and using the fact that
$\widetilde{E}_{-+}(z)^{-1}\partial_{z}E_{-+}(z)$ is holomorphic in $z$ we
obtain (4.4). The proof of (4.5) is similar.
∎
Trace formulas involving an effective Hamiltonian, like (4.4) and (4.5), were
studied in [7, 8]. Applying Theorem 1.8 in [7] to the left hand side of (4.4),
we obtain
(4.10) ${\rm
tr}(f(P(h)))\sim\sum_{j=0}^{\infty}\beta_{j}h^{j-2},\,\,(h\searrow 0).$
To use Theorem 1.8 in [7] we make the following definition.
###### Definition 4.2.
We say that $p(x,\xi)\in S^{0}(\mathbb{R}^{2};M_{l}(\mathbb{C}))$ is
micro-hyperbolic at $(x_{0},\xi_{0})$ in the direction $T\in\mathbb{R}^{2}$ if
there are constants $C_{0},C_{1},C_{2}>0$ such that
$(\langle
dp(x,\xi),T\rangle\omega,\omega)\geq\frac{1}{C_{0}}\|\omega\|^{2}-C_{1}\|p(x,\xi)\omega\|^{2},$
for all $(x,\xi)\in\mathbb{R}^{2}$ with
$\|(x,\xi)-(x_{0},\xi_{0})\|\leq\frac{1}{C_{2}}$ and all
$\omega\in\mathbb{C}^{l}$.
The assumption of Theorem 2.2 implies that the principal symbol
$e^{0}_{-+}(x,\xi,z)$ of $E_{-+}(z)$ is micro-hyperbolic at every point
$(x_{0},\xi_{0})\in\Sigma_{\mu}:=\\{(x,\xi)\in\mathbb{R}^{2};\,\,{\rm
det}(e^{0}_{-+}(x,\xi,\mu))=0\\}$. Thus, according to Theorem 1.8 in [7] there
exists $C>0$ large enough and $\epsilon>0$ small such that for $f\in
C_{0}^{\infty}(]\mu-\epsilon,\mu+\epsilon[;\mathbb{R})$, $\theta\in
C^{\infty}_{0}(]-\frac{1}{C},\frac{1}{C}[;\mathbb{R})$, we have:
(4.11) ${\rm
tr}\left(f(P(h))\breve{\theta}_{h^{2}}(t-P(h))\right)\sim\sum_{j=0}^{\infty}\gamma_{j}(t)h^{j-2},\,\,(h\searrow
0),$
with $\gamma_{0}(t)=c_{0}(t)$.
By observing that the $h$-pseudo-differential calculus can be extended to
$h<0$, we have
$\left|h^{2}{\rm
tr}\left(f(P(h))\breve{\theta}_{h^{2}}(t-P(h))\right)-\sum_{0\leq j\leq
N}\gamma_{j}(t)h^{j}\right|\leq
C_{N}|h|^{N+1},\,\,\,h\in]-h_{N},h_{N}[\setminus\\{0\\},$
$\left|h^{2}{\rm
tr}(f(P(h)))-\sum_{0\leq j\leq
N}\beta_{j}h^{j}\right|\leq
C_{N}|h|^{N+1},\,\,\,h\in]-h_{N},h_{N}[\setminus\\{0\\}.$
By the change of variable $(x,y)\rightarrow(x,-y)$, we see that $P(h)$ is
unitarily equivalent to $P(-h)$. From this we deduce that $h^{2}{\rm
tr}(f(P(h)))$ and $h^{2}{\rm
tr}\left(f(P(h))\breve{\theta}_{h^{2}}(t-P(h))\right)$ are unchanged when we
replace $h$ by $-h$. We recall that if $A$ and $B$ are unitarily equivalent
trace class operators then ${\rm tr}(A)={\rm tr}(B)$. Consequently,
$\gamma_{2j+1}=\beta_{2j+1}=0$. This ends the proof of Theorem 2.1 and Theorem
2.2.
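To spell out the parity argument (a sketch restating the comparison just made): both expansions above hold for every $h\in]-h_{N},h_{N}[\setminus\\{0\\}$, and the left hand sides are unchanged under $h\mapsto-h$, so subtracting the expansions at $h$ and $-h$ gives
$2\sum_{0\leq 2j+1\leq N}\beta_{2j+1}h^{2j+1}={\mathcal{O}}(|h|^{N+1}),\quad h\rightarrow 0,$
and dividing successively by $h,h^{3},\dots$ and letting $h\rightarrow 0$ forces $\beta_{2j+1}=0$ for $2j+1\leq N$; since $N$ is arbitrary, all odd coefficients vanish. The same reasoning applied to $h^{2}{\rm tr}\left(f(P(h))\breve{\theta}_{h^{2}}(t-P(h))\right)$ gives $\gamma_{2j+1}(t)=0$.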
### 4.2. Proof of Corollary 2.3.
Pick $\sigma>0$ small enough. Let $\phi_{1}\in
C_{0}^{\infty}(\left(a-\sigma,a+\sigma\right);[0,1])$, $\phi_{2}\in
C_{0}^{\infty}(\left(a+{\sigma\over 2},b-{\sigma\over 2}\right);[0,1])$,
$\phi_{3}\in C_{0}^{\infty}(\left(b-\sigma,b+\sigma\right);[0,1])$ satisfy
$\phi_{1}+\phi_{2}+\phi_{3}=1$ on $\left(a-{\sigma\over 2},b+{\sigma\over
2}\right)$. Let $\gamma_{0}(h)\leq\gamma_{1}(h)\leq\cdots\leq\gamma_{N}(h)$ be
the eigenvalues of $H(h)$ counted with their multiplicity and lying in the
interval $\left(a-\sigma,b+\sigma\right)$. We have
(4.12)
$\displaystyle\begin{split}{\mathcal{N}}_{h}(a,b)&=\sum_{a\leq\gamma_{j}(h)\leq
b}(\phi_{1}+\phi_{2}+\phi_{3})(\gamma_{j}(h))\\\
&=\sum_{a\leq\gamma_{j}(h)}\phi_{1}(\gamma_{j}(h))+\sum\phi_{2}(\gamma_{j}(h))+\sum_{\gamma_{j}(h)\leq
b}\phi_{3}(\gamma_{j}(h))\\\
&=\sum_{a\leq\gamma_{j}(h)}\phi_{1}(\gamma_{j}(h))+{\rm
tr}(\phi_{2}(H(h)))+\sum_{\gamma_{j}(h)\leq
b}\phi_{3}(\gamma_{j}(h)).\end{split}$
According to Theorem 2.1, we have
(4.13) ${\rm tr}\left(\phi_{m}(H(h))\right)=\frac{1}{2\pi
h^{2}}\sum_{j=1}^{l}\int_{{\mathbb{R}}^{2}}\phi_{m}((2j+1)+V(X))dX+{\mathcal{O}}(1),\quad
m=1,2,3.$
Set $M(\tau,h):=\sum\limits_{\gamma_{j}(h)\leq\tau}\phi_{3}(\gamma_{j}(h))$.
Evidently, in the sense of distributions, we have
(4.14)
${\mathcal{M}}(\tau):=M^{\prime}(\tau,h)=\sum_{j}\delta(\tau-\gamma_{j}(h))\phi_{3}(\gamma_{j}(h)).$
In what follows, we choose $\theta\in C^{\infty}_{0}(\left(-{1\over C},{1\over
C}\right);[0,1])$, ($C>0$ large enough) such that
$\theta(0)=1,\,\,\breve{\theta}(t)\geq
0,t\in{\mathbb{R}},\,\,\breve{\theta}(t)\geq\epsilon_{0},t\in[-\delta_{0},\delta_{0}]\quad{\rm
for\;some}\quad\delta_{0}>0,\epsilon_{0}>0.$
###### Corollary 4.3.
There is $C_{0}>0$, such that, for all
$(\lambda,h)\in{\mathbb{R}}\times\left(0,h_{0}\right)$, we have:
$|M(\lambda+\delta_{0}h^{2},h)-M(\lambda-\delta_{0}h^{2},h)|\leq C_{0}.$
###### Proof.
Since $\phi_{3}\geq 0$, it follows from the construction of $\theta$ that
${\epsilon_{0}\over
h^{2}}\sum_{\lambda-\delta_{0}h^{2}\leq\gamma_{j}(h)\leq\lambda+\delta_{0}h^{2}}\phi_{3}(\gamma_{j}(h))\leq\sum_{|{\lambda-\gamma_{j}(h)}|<\delta_{0}h^{2}}\breve{\theta}_{h^{2}}(\lambda-\gamma_{j}(h))\phi_{3}(\gamma_{j}(h))\leq$
$\sum_{j}\breve{\theta}_{h^{2}}(\lambda-\gamma_{j}(h))\phi_{3}(\gamma_{j}(h))=\breve{\theta}_{h^{2}}\star{\mathcal{M}}(\lambda)={\rm
tr}\left(\phi_{3}(H(h))\breve{\theta}_{h^{2}}(\lambda-H(h))\right).$
Now Corollary 4.3 follows from (2.4). ∎
According to Corollary 4.3, we have
(4.15) $\int\left\langle{\tau-\lambda\over
h^{2}}\right\rangle^{-2}{\mathcal{M}}(\tau)d\tau=\sum_{k\in{\mathbb{Z}}}\int_{\\{\delta_{0}k\leq{\tau-\lambda\over
h^{2}}\leq\delta_{0}(k+1)\\}}\left\langle{\tau-\lambda\over
h^{2}}\right\rangle^{-2}{\mathcal{M}}(\tau)d\tau\leq
C_{0}\left(\sum_{k\in{\mathbb{Z}}}\langle{\delta_{0}k}\rangle^{-2}\right).$
On the other hand, since $\breve{\theta}\in{\mathcal{S}}({\mathbb{R}})$ and
$\theta(0)=1$, there exists $C_{1}>0$ such that:
$\left|\int_{-\infty}^{\lambda}\breve{\theta}_{h^{2}}(\tau-y)dy-1_{\left(-\infty,\lambda\right)}(\tau)\right|=\left|\int_{{\tau-\lambda\over
h^{2}}}^{+\infty}\breve{\theta}(y)dy-1_{\left(-\infty,\lambda\right)}(\tau)\right|\leq
C_{1}\left\langle\frac{\tau-\lambda}{h^{2}}\right\rangle^{-2},$
uniformly in $\tau\in{\mathbb{R}}$ and $h\in\left(0,h_{0}\right)$.
Consequently,
(4.16)
$\left|\int_{-\infty}^{\lambda}\breve{\theta}_{h^{2}}\star{\mathcal{M}}(\tau)d\tau-\int_{-\infty}^{\lambda}{\mathcal{M}}(\tau)d\tau\right|\leq
C_{1}\int\left\langle{\tau-\lambda\over
h^{2}}\right\rangle^{-2}{\mathcal{M}}(\tau)d\tau.$
Putting together (4.14), (4.15) and (4.16), we get
(4.17)
$\int_{-\infty}^{\lambda}\breve{\theta}_{h^{2}}\star{\mathcal{M}}(\tau)d\tau=M(\lambda,h)+{\mathcal{O}}(1).$
Note that $\breve{\theta}_{h^{2}}\star{\mathcal{M}}(\tau)={\rm
tr}\left(\phi_{3}(H(h))\breve{\theta}_{h^{2}}(\tau-H(h))\right)$. As a
consequence of (2.4), (2.5) and (4.17) we obtain
(4.18) $M(\lambda,h)=h^{-2}m(\lambda)+{\mathcal{O}}(1),$
where
(4.19) $m(\lambda)=\int_{-\infty}^{\lambda}c_{0}(\tau)d\tau={1\over
2\pi}\sum_{j=1}^{l}\int_{\\{X\in{\mathbb{R}}^{2}|(2j+1)+V(X)\leq\lambda\\}}\phi_{3}((2j+1)+V(X))dX.$
Here we have used the fact that if $E$ is not a critical value of $V(X)$, then
${\partial\over\partial E}\left(\int_{\\{X\in{\mathbb{R}}^{2};\;V(X)\leq
E\\}}\phi(V(X))dX\right)=\phi(E)\int_{S_{E}}{dS_{E}\over|\nabla_{X}V|},$
where $S_{E}=V^{-1}(E)$ (see [22, Lemma V-9]).
Applying (2.2), (4.18) and (4.19) to $\phi_{1}$ and writing:
$\sum_{a\leq\gamma_{j}(h)}\phi_{1}(\gamma_{j}(h))=\sum_{j}\phi_{1}(\gamma_{j}(h))-\sum_{\gamma_{j}(h)<a}\phi_{1}(\gamma_{j}(h)),$
we get
(4.20)
$\sum_{a\leq\gamma_{j}(h)}\phi_{1}(\gamma_{j}(h))=h^{-2}m_{1}(a)+{\mathcal{O}}(1),$
with
(4.21) $m_{1}(a)={1\over
2\pi}\sum_{j=1}^{l}\int_{\\{X\in{\mathbb{R}}^{2}|(2j+1)+V(X)\geq
a\\}}\phi_{1}((2j+1)+V(X))dX.$
Now Corollary 2.3 follows from (4.13), (4.14), (4.18), (4.19), (4.20) and
(4.21).
## 5\. Proof of Theorem 2.4 and Theorem 2.6
As we have noticed in the outline of the proofs, we will construct a potential
$\varphi(X;h)=\varphi_{0}(X)+\varphi_{2}(X)h^{2}+\cdots+\varphi_{2j}(X)h^{2j}+\cdots,$
such that for all $f\in C^{\infty}_{0}((a,b);\mathbb{R})$ and $\theta\in
C^{\infty}_{0}(\mathbb{R};\mathbb{R})$, we have
(5.1) ${\rm tr}(f(H_{\lambda}))={\rm tr}(f(Q))+{\mathcal{O}}(h^{\infty}),$
(5.2) ${\rm
tr}\left(f(H_{\lambda})\breve{\theta}_{\lambda^{-\frac{2}{\delta}}}(t-H_{\lambda})\right)={\rm
tr}\left(f(Q)\breve{\theta}_{h^{2}}(t-Q)\right)+{\mathcal{O}}(h^{\infty}),$
where $Q:=H_{0}+\varphi(hX;h)$ and $h=\lambda^{-\frac{1}{\delta}}$. By
observing that Theorem 2.1, Theorem 2.2 and Corollary 2.3 remain true when we
replace $H(h)=H_{0}+V(hX)$ by $Q$, Theorem 2.4, Theorem 2.5 and Corollary 2.6
follow from (5.1) and (5.2). The remainder of this paper is devoted to the
proof of (5.1) and (5.2).
### 5.1. Construction of reference operator $Q$
Set $h=\lambda^{-\frac{1}{\delta}}$. For $M>0$, put
(5.3) $\Omega_{M}(h)=\\{X\in\mathbb{R}^{2};\;h^{-\delta}V(X)>M\\}.$
Since $\omega_{0}$ is positive and continuous on the unit circle, there exist two
positive constants $C_{1}$ and $C_{2}$ such that
$C_{1}<(\min\limits_{\mathbb{S}^{1}}\omega_{0})^{1/\delta}\leq(\max\limits_{\mathbb{S}^{1}}\omega_{0})^{1/\delta}<C_{2}$.
According to the hypothesis (2.9), there exists $h_{0}>0$ such that
$B(0,C_{1}M^{-1/\delta}h^{-1})\subset\Omega_{M}(h)\subset
B(0,C_{2}M^{-1/\delta}h^{-1}),\;\mbox{ for all }0<h\leq h_{0}.$
Here $B(0,r)$ denotes the ball of center $0$ and radius $r$.
Let $\chi\in C_{0}^{\infty}(B(0,C_{1}M^{-1/\delta});[0,1])$ satisfy
$\chi=1$ near zero. Set
* •
$\varphi(X;h):=(1-\chi(X))h^{-\delta}V(\frac{X}{h})+M\chi(X),$
* •
$W_{h}(X):=h^{-\delta}V(X)-\varphi(hX;h)=\chi(hX)(h^{-\delta}V(X)-M)$.
By the construction of $\varphi(\cdot;h)$ and $W_{h}$, we have
(5.4) $|\partial_{X}^{\alpha}\varphi(X;h)|\leq C_{\alpha},\mbox{ uniformly for
}h\in(0,h_{0}],$ (5.5) $\varphi(hX;h)>\frac{M}{2}\mbox{ for
}X\in\Omega_{\frac{M}{2}}(h),$ (5.6) ${\rm supp}W_{h}\subset
B(0,C_{1}M^{-1/\delta}h^{-1})\subset\Omega_{M}(h).$
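Indeed, the two definitions above decompose the rescaled potential exactly (a direct check, valid for every $h$):
$\varphi(hX;h)+W_{h}(X)=(1-\chi(hX))h^{-\delta}V(X)+M\chi(hX)+\chi(hX)\left(h^{-\delta}V(X)-M\right)=h^{-\delta}V(X),$
so $H_{\lambda}=Q+W_{h}$, which is the decomposition used in (5.24) below.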
On the other hand, it follows from (2.9) that for all $N\in\mathbb{N}$, there
exist $\varphi_{0},...,\varphi_{2N},K_{2N+2}(\cdot;h)\in
C^{\infty}(\mathbb{R}^{2};\mathbb{R})$, uniformly bounded with respect to
$h\in(0,h_{0}]$ together with their derivatives such that:
(5.7)
$\varphi(X;h)=\sum\limits_{j=0}^{N}\varphi_{2j}(X)h^{2j}+h^{2N+2}K_{2N+2}(X;h)$
with
$\varphi_{0}(X)=(1-\chi(X))\omega_{0}\left(\frac{X}{|X|}\right)|X|^{-\delta}+M\chi(X).$
In fact, if $X\in{\rm supp}\chi$ then
$\omega_{0}\left(\frac{X}{|X|}\right)|X|^{-\delta}>C_{1}^{\delta}|X|^{-\delta}>M$,
which implies that $\varphi_{0}(X)\geq(1-\chi(X))M+M\chi(X)=M$ for all
$X\in{\rm supp}\chi$. Consequently, we have
###### Lemma 5.1.
If $\varphi_{0}(X)<M$ then
$\varphi_{0}(X)=\omega_{0}\left(\frac{X}{|X|}\right)|X|^{-\delta}$.
Let $\psi\in C^{\infty}(\mathbb{R};[\frac{M}{3},+\infty))$ satisfying
$\psi(t)=t$ for all $t\geq\frac{M}{2}$. We define
$F_{1}(X;h):=\psi(\varphi(hX;h))\mbox{ and
}F_{2}(X;h):=\psi(h^{-\delta}V(X)).$
Let $\mathcal{U}$ be a small complex neighborhood of $[a,b]$. From now on, we
choose $M>a+b$ large enough such that
(5.8) $F_{j}(X;h)-{\rm Re}z\geq\frac{M}{4},\;j=1,2,$
uniformly for $z\in\mathcal{U}$. This choice of $M$ implies that:
* •
If $2j+1+\varphi(X;h)\in[a,b]$ then $\varphi_{0}(X)<M$ for all
$h\in(0,h_{0}]$,
* •
The function defined by $z\mapsto(z-H_{F_{j}})^{-1}$ is holomorphic from
$\mathcal{U}$ to $\mathcal{L}(L^{2}(\mathbb{R}^{2}))$, where
$H_{F_{j}}:=H_{0}+F_{j}(X;h),\;j=1,2.$
Moreover, it follows from (5.4) that
$\partial_{X}^{\alpha}F_{j}(X;h)=\mathcal{O}_{\alpha}(h^{-\delta}).$
Finally, (5.5) shows that
(5.9) $\begin{split}&{\rm dist}\left({\rm supp}W_{h},{\rm
supp}[\varphi(h\cdot;h)-F_{1}(\cdot;h)]\right)\geq\frac{a_{1}(M)}{h},\\\ &{\rm
dist}\left({\rm supp}W_{h},{\rm
supp}[h^{-\delta}V(\cdot)-F_{2}(\cdot;h)]\right)\geq\frac{a_{2}(M)}{h},\end{split}$
with $a_{1}(M),a_{2}(M)>0$ independent of $h$.
###### Lemma 5.2.
Let $\widetilde{\chi}\in C_{0}^{\infty}(\mathbb{R}^{2})$. For
$z\in\mathcal{U},$ the operators $\widetilde{\chi}(hX)(z-H_{F_{j}})^{-1}$,
$j=1,2,$ belong to the class of Hilbert-Schmidt operators. Moreover
(5.10) $\|\widetilde{\chi}(hX)(z-H_{F_{j}})^{-1}\|_{\rm
HS}=\mathcal{O}(h^{-3-\delta}).$
Here we denote by $\|.\|_{\rm HS}$ the Hilbert-Schmidt norm of operators.
###### Proof.
We prove (5.10) for $j=1$. The case $j=2$ is treated in the same way.
Using the resolvent equation, one has
(5.11) $(z-H_{F_{1}})^{-1}=\left(z-{M\over
6}-H_{0}\right)^{-1}+\left(z-\frac{M}{6}-H_{0}\right)^{-1}\left(F_{1}(X;h)-\frac{M}{6}\right)(z-H_{F_{1}})^{-1}.$
On the other hand, the operator $\left(z-\frac{M}{6}-H_{0}\right)^{-1}$ was
shown to be an integral operator with integral kernel $K_{0}(X,Y,z)$
satisfying $|K_{0}(X,Y,z)|\leq Ce^{-\frac{1}{8}|X-Y|^{2}}$ uniformly for
$z\in\mathcal{U}$ (see [4, Formula 2.17]). Let $K_{1}(X,Y,z)$ be the integral
kernel of $\widetilde{\chi}(hX)(z-\frac{M}{6}-H_{0})^{-1}$. Then
$K_{1}(X,Y,z)=\widetilde{\chi}(hX)K_{0}(X,Y,z)$.
Let $\langle X\rangle=(1+|X|^{2})^{\frac{1}{2}}$, $X\in\mathbb{R}^{2}$. Since
$\widetilde{\chi}\in C_{0}^{\infty}(\mathbb{R}^{2})$, the function
$\widetilde{\chi}(hX)h^{3}\langle X\rangle^{3}$ is uniformly bounded for
$h>0$. Combining this with the fact that $\frac{1}{\langle
X\rangle^{3}}e^{-\frac{1}{8}|X-Y|^{2}}\in L^{2}(\mathbb{R}^{4})$, we obtain
(5.12)
$\|K_{1}(X,Y,z)\|_{L^{2}(\mathbb{R}^{4})}=\left\|\widetilde{\chi}(hX)h^{3}\langle
X\rangle^{3}\frac{1}{h^{3}\langle
X\rangle^{3}}K_{0}(X,Y,z)\right\|_{L^{2}(\mathbb{R}^{4})}=\mathcal{O}(h^{-3}).$
It shows that $\widetilde{\chi}(hX)(z-\frac{M}{6}-H_{0})^{-1}$ is a Hilbert-
Schmidt operator and
(5.13)
$\left\|\widetilde{\chi}(hX)\left(z-\frac{M}{6}-H_{0}\right)^{-1}\right\|_{\rm
HS}=\left\|K_{1}(X,Y,z)\right\|_{L^{2}(\mathbb{R}^{4})}=\mathcal{O}(h^{-3}).$
Consequently, (5.11) and (5.13) imply that
$\displaystyle\|\widetilde{\chi}(hX)(z-H_{F_{1}})^{-1}\|_{\rm
HS}\leq\left\|\widetilde{\chi}(hX)\left(z-\frac{M}{6}-H_{0}\right)^{-1}\right\|_{\rm
HS}$
$\displaystyle+\left\|\widetilde{\chi}(hX)\left(z-\frac{M}{6}-H_{0}\right)^{-1}\right\|_{\rm
HS}\left\|F_{1}(X;h)-\frac{M}{6}\right\|_{L^{\infty}(\mathbb{R}^{2})}\|(z-H_{F_{1}})^{-1}\|$
$\displaystyle=\mathcal{O}(h^{-3-\delta}),$
where we have used $F_{1}(X;h)=\mathcal{O}(h^{-\delta})$. ∎
###### Lemma 5.3.
For $z\in\mathcal{U}$, the operator
$W_{h}(X)(H_{F_{1}}-z)^{-1}(\varphi(hX;h)-F_{1}(X;h))$
belongs to the class of Hilbert-Schmidt operators. Moreover,
(5.14) $\|W_{h}(X)(H_{F_{1}}-z)^{-1}(\varphi(hX;h)-F_{1}(X;h))\|_{\rm
HS}=\mathcal{O}(h^{\infty}).$
###### Proof.
Let $H^{0}_{F_{1}}:=-\Delta+F_{1}(X;h)$. We denote by $G(X,Y;z)$ (resp.
$G_{0}(X,Y;{\rm Re}z)$) the Green function of $(H_{F_{1}}-z)^{-1}$ (resp.
$(H^{0}_{F_{1}}-{\rm Re}z)^{-1}$).
From the functional calculus, one has
(5.15)
$\displaystyle\begin{split}(H_{F_{1}}-z)^{-1}&=\int_{0}^{\infty}e^{tz}e^{-tH_{F_{1}}}dt,\\\
(H^{0}_{F_{1}}-{\rm Re}z)^{-1}&=\int_{0}^{\infty}e^{t{\rm
Re}z}e^{-tH^{0}_{F_{1}}}dt.\end{split}$
For $t\geq 0$, the Kato inequality (see [5, Formula 1.8]) implies that
(5.16) $|e^{-tH_{F_{1}}}u|\leq
e^{-tH^{0}_{F_{1}}}|u|\;({\text{pointwise}}),\;u\in L^{2}(\mathbb{R}^{2}).$
Then (5.15) and (5.16) yield
(5.17) $|(H_{F_{1}}-z)^{-1}u|\leq(H^{0}_{F_{1}}-{\rm
Re}z)^{-1}|u|\;({\text{pointwise}}),\;u\in L^{2}(\mathbb{R}^{2}).$
Consequently, applying [3, Theorem 10] we have $|G(X,Y;z)|\leq G_{0}(X,Y;{\rm
Re}z)$ for a.e. $X,Y\in\mathbb{R}^{2}$. From this, one obtains
(5.18)
$|W_{h}(X)G(X,Y;z)(\varphi(hY;h)-F_{1}(Y;h))|\leq|W_{h}(X)G_{0}(X,Y;{\rm
Re}z)(\varphi(hY;h)-F_{1}(Y;h))|$
for a.e. $X,Y\in\mathbb{R}^{2}$.
On the other hand, using (5.9) M. Dimassi proved that (see [6, Proposition
3.3])
(5.19) $\|W_{h}(X)G_{0}(X,Y;{\rm
Re}z)(\varphi(hY;h)-F_{1}(Y;h))\|_{L^{4}(\mathbb{R}^{4})}=\mathcal{O}(h^{\infty}).$
Thus, (5.18) and (5.19) give
(5.20)
$\|W_{h}(X)G(X,Y;z)(\varphi(hY;h)-F_{1}(Y;h))\|_{L^{4}(\mathbb{R}^{4})}=\mathcal{O}(h^{\infty}).$
The estimate (5.20) shows that the operator
$W_{h}(X)(H_{F_{1}}-z)^{-1}(\varphi(hX;h)-F_{1}(X;h))$ is Hilbert-Schmidt and
(5.21) $\|W_{h}(X)(H_{F_{1}}-z)^{-1}(\varphi(hX;h)-F_{1}(X;h))\|_{\rm
HS}=\mathcal{O}(h^{\infty}).$
∎
By using the same arguments as in Lemma 5.3, we also obtain
###### Lemma 5.4.
For $z\in\mathcal{U}$, the operator
$W_{h}(X)(H_{F_{2}}-z)^{-1}(h^{-\delta}V(X)-F_{2}(X;h))$
belongs to the class of Hilbert-Schmidt operators and
$\|W_{h}(X)(H_{F_{2}}-z)^{-1}(h^{-\delta}V(X)-F_{2}(X;h))\|_{\rm
HS}=\mathcal{O}(h^{\infty}).$
Let $Q:=H_{0}+\varphi(hX;h)$. For $z\in\mathcal{U}$, ${\rm Im}z\not=0$, put
(5.22)
$G(z)=(z-H_{\lambda})^{-1}-(z-Q)^{-1}-(z-H_{F_{2}})^{-1}W_{h}(z-H_{F_{1}})^{-1}.$
###### Proposition 5.5.
The operator $G(z)$ is of trace class and satisfies the following estimate:
(5.23) $\|G(z)\|_{\rm tr}=\mathcal{O}(h^{\infty}|{\rm Im}z|^{-2}),$
uniformly for $z\in\mathcal{U}$ with ${\rm Im}z\not=0$.
###### Proof.
It follows from the resolvent equation that
(5.24) $(z-H_{\lambda})^{-1}-(z-Q)^{-1}=(z-H_{\lambda})^{-1}W_{h}(z-Q)^{-1}.$
On the other hand, one has
(5.25) $\displaystyle\begin{split}(z-H_{\lambda})^{-1}&=(z-H_{F_{2}})^{-1}\\\
&+(z-H_{\lambda})^{-1}(h^{-\delta}V(X)-F_{2}(X;h))(z-H_{F_{2}})^{-1}\end{split}$
and
(5.26) $\displaystyle\begin{split}(z-Q)^{-1}&=(z-H_{F_{1}})^{-1}\\\
&+(z-H_{F_{1}})^{-1}(\varphi(hX;h)-F_{1}(X;h))(z-Q)^{-1}.\end{split}$
Substituting (5.25) and (5.26) into the right hand side of (5.24), one gets
$\displaystyle
G(z)=(z-H_{F_{2}})^{-1}W_{h}(z-H_{F_{1}})^{-1}(\varphi(hX;h)-F_{1}(X;h))(z-Q)^{-1}$
$\displaystyle+(z-H_{\lambda})^{-1}(h^{-\delta}V(X)-F_{2}(X;h))(z-H_{F_{2}})^{-1}W_{h}(z-H_{F_{1}})^{-1}$
$\displaystyle+(z-H_{\lambda})^{-1}(h^{-\delta}V(X)-F_{2}(X;h))(z-H_{F_{2}})^{-1}W_{h}(z-H_{F_{1}})^{-1}\times$
$\displaystyle\times(\varphi(hX;h)-F_{1}(X;h))(z-Q)^{-1}=:A(z)+B(z)+C(z).$
Next we choose $\widetilde{\chi}\in C_{0}^{\infty}(\mathbb{R}^{2})$ such that
$\widetilde{\chi}(hX)W_{h}(X)=W_{h}(X)$. It follows from Lemma 5.2 and Lemma
5.3 that
$\displaystyle\|A(z)\|_{\rm
tr}\leq\|(z-H_{F_{2}})^{-1}\widetilde{\chi}(hX)\|_{\rm
HS}\|W_{h}(z-H_{F_{1}})^{-1}(\varphi(hX,h)-F_{1}(X;h))\|_{\rm
HS}\|(z-Q)^{-1}\|$ $\displaystyle=\mathcal{O}(h^{\infty}{|{\rm Im}z|}^{-1}).$
Here we have used the fact that $\|(z-Q)^{-1}\|=\mathcal{O}({|{\rm
Im}z|}^{-1})$. Similarly, we also obtain $\|B(z)\|_{\rm
tr}=\mathcal{O}(h^{\infty}{|{\rm Im}z|}^{-1})$ and $\|C(z)\|_{\rm
tr}=\mathcal{O}(h^{\infty}{|{\rm Im}z|}^{-2})$. Thus,
$\|G(z)\|_{\rm tr}\leq\|A(z)\|_{\rm tr}+\|B(z)\|_{\rm tr}+\|C(z)\|_{\rm
tr}=\mathcal{O}(h^{\infty}{|{\rm Im}z|}^{-2}).$
∎
### 5.2. Proof of (5.1) and Theorem 2.4
Let $f\in C_{0}^{\infty}((a,b);\mathbb{R})$ and let $\widetilde{f}\in
C_{0}^{\infty}(\mathcal{U})$ be an almost analytic extension of $f$. From the
Helffer-Sjöstrand formula and (5.22), we get
(5.27) $\displaystyle\begin{split}&f(H_{\lambda})-f(Q)\\\
&=-\frac{1}{\pi}\int\overline{\partial}_{z}\widetilde{f}(z)[(z-H_{\lambda})^{-1}-(z-Q)^{-1}]L(dz)\\\
&=-\frac{1}{\pi}\int\overline{\partial}_{z}\widetilde{f}(z)\left[(z-H_{F_{2}})^{-1}W_{h}(z-H_{F_{1}})^{-1}+G(z)\right]L(dz).\end{split}$
Notice that $(z-H_{F_{2}})^{-1}W_{h}(z-H_{F_{1}})^{-1}$ is holomorphic in
$z\in\mathcal{U}$; hence
(5.28)
$-\frac{1}{\pi}\int\overline{\partial}_{z}\widetilde{f}(z)(z-H_{F_{2}})^{-1}W_{h}(z-H_{F_{1}})^{-1}L(dz)=0.$
Thus, (5.27) and (5.28) imply that
(5.29)
$f(H_{\lambda})-f(Q)=-\frac{1}{\pi}\int\overline{\partial}_{z}\widetilde{f}(z)G(z)L(dz),$
which together with (5.23) yields (5.1).
Applying Theorem 2.1 to the operator $Q$ and using (5.1), we obtain (2.10) with
$b_{0}(f)=\sum_{j=0}^{q}\frac{1}{2\pi}\iint f(2j+1+\varphi_{0}(X))dX.$
According to Lemma 5.1, one has
$2j+1+\varphi_{0}(X)\in[a,b]\Longleftrightarrow\varphi_{0}(X)=\omega_{0}\left(\frac{X}{|X|}\right)|X|^{-\delta},j=0,...,q.$
Thus, after a change of variable in the integral we get
$b_{0}(f)=\frac{1}{2\pi\delta}\int_{0}^{2\pi}(\omega_{0}(\cos\theta,\sin\theta))^{\frac{2}{\delta}}d\theta\sum_{j=0}^{q}\int
f(u)(u-(2j+1))^{-1-\frac{2}{\delta}}du.$
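For the reader's convenience, the change of variable behind this formula is the passage to polar coordinates $X=r(\cos\theta,\sin\theta)$ followed by the substitution $u=2j+1+\omega_{0}(\theta)r^{-\delta}$ (a sketch of the computation already implicit above):
$\iint f\left(2j+1+\omega_{0}\left(\tfrac{X}{|X|}\right)|X|^{-\delta}\right)dX=\int_{0}^{2\pi}\int_{0}^{\infty}f\left(2j+1+\omega_{0}(\theta)r^{-\delta}\right)r\,dr\,d\theta={1\over\delta}\int_{0}^{2\pi}(\omega_{0}(\theta))^{\frac{2}{\delta}}d\theta\int f(u)\,(u-(2j+1))^{-1-\frac{2}{\delta}}du,$
since with $s=u-(2j+1)=\omega_{0}(\theta)r^{-\delta}$ one has $r^{2}=(\omega_{0}(\theta))^{2/\delta}s^{-2/\delta}$, hence $r\,dr=-\frac{1}{\delta}(\omega_{0}(\theta))^{2/\delta}s^{-1-2/\delta}ds$, the range $r\in(0,\infty)$ corresponding to $s\in(\infty,0)$.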
We recall that ${\rm supp}f\subset]a,b[$, with $2q+1<a<b<2q+3$. This ends the
proof of Theorem 2.4.
### 5.3. Proof of (5.2) and Theorem 2.5
The proof of (5.2) is a slight modification of (5.1). For that, let $\phi\in
C_{0}^{\infty}((-2,2);[0,1])$ such that $\phi=1$ on $[-1,1]$. Put
$\phi_{h}(z)=\phi(\frac{{\rm Im}z}{h^{2}})$, then
$\widetilde{f}(z)\phi_{h}(z)$ is also an almost analytic extension of $f$.
Applying again the Helffer-Sjöstrand formula, we get
(5.30)
$\displaystyle\begin{split}&f(H_{\lambda})\breve{\theta}_{\lambda^{-\frac{2}{\delta}}}(t-H_{\lambda})-f(Q)\breve{\theta}_{h^{2}}(t-Q)\\\
&=-\frac{1}{\pi}\int\overline{\partial}_{z}(\widetilde{f}\phi_{h})(z)\breve{\theta}_{h^{2}}(t-z)[(z-H_{\lambda})^{-1}-(z-Q))^{-1}]L(dz)\\\
&=-\frac{1}{\pi}\int\overline{\partial}_{z}(\widetilde{f}\phi_{h})(z)\breve{\theta}_{h^{2}}(t-z)\left[(z-H_{F_{2}})^{-1}W_{h}(z-H_{F_{2}})^{-1}+G(z)\right]L(dz)\\\
&=-\frac{1}{\pi}\int\overline{\partial}_{z}(\widetilde{f}\phi_{h})(z)\breve{\theta}_{h^{2}}(t-z)G(z)L(dz),\end{split}$
where in the last equality we have used the fact that
$(z-H_{F_{2}})^{-1}W_{h}(z-H_{F_{1}})^{-1}$ is holomorphic in
$z\in\mathcal{U}$.
According to the Paley-Wiener theorem (see e.g. [21, Theorem IX.11]) the
function $\breve{\theta}_{h^{2}}(t-z)$ is analytic with respect to $z$ and
satisfies the following estimate
(5.31)
$\breve{\theta}_{h^{2}}(t-z)=\mathcal{O}\left(\frac{1}{h^{2}}\exp\left(\frac{|{\rm
Im}z|}{Ch^{2}}\right)\right).$
Combining this with the fact that
$\overline{\partial}_{z}(\widetilde{f}\phi_{h})(z)={\mathcal{O}}(|{\rm
Im}z|^{\infty})\phi_{h}(z)+\mathcal{O}\left(\frac{1}{h^{2}}\right)1_{[h^{2},2h^{2}]}(|{\rm
Im}z|)$, and using Proposition 5.5 we get
$\|\overline{\partial}_{z}(\widetilde{f}\phi_{h})(z)\breve{\theta}_{h^{2}}(t-z)G(z)\|_{\rm
tr}={\mathcal{O}}(h^{\infty}).$
This together with (5.30) ends the proof of (5.2).
By observing that
$X\cdot\nabla_{X}\left(\omega_{0}\left(\frac{X}{|X|}\right)\right)=0$, we have
(5.32)
$X\cdot\nabla_{X}\left(\omega_{0}\left(\frac{X}{|X|}\right)|X|^{-\delta}\right)=-\delta\omega_{0}\left(\frac{X}{|X|}\right)|X|^{-\delta}.$
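This is simply Euler's identity for homogeneous functions (a short check added for completeness): $\omega_{0}(X/|X|)$ is homogeneous of degree $0$, so $X\cdot\nabla_{X}\left(\omega_{0}\left(\tfrac{X}{|X|}\right)\right)=0$, while
$X\cdot\nabla_{X}|X|^{-\delta}=-\delta|X|^{-\delta-1}\,\frac{X\cdot X}{|X|}=-\delta|X|^{-\delta},$
and (5.32) follows from the product rule.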
Then, since $\omega_{0}>0$, we obtain
$\nabla_{X}(\omega_{0}(\frac{X}{|X|})|X|^{-\delta})\not=0$ for
$X\in\mathbb{R}^{2}\setminus\\{0\\}$. This implies that the functions
$2j+1+\varphi_{0}(X)$, $j=0,...,q$, do not have any critical values in the
interval $[a,b]$. Consequently, Theorem 2.5 follows from (5.2) and Theorem
2.2.
## References
* [1] J. Avron, I. Herbst, and B. Simon. Schrödinger operators with magnetic fields. I. General interactions. Duke Math. J., 45(4):847–883, 1978.
* [2] V. Bruneau, P. Miranda, and G. Raikov. Discrete spectrum of quantum Hall effect Hamiltonians I. Monotone edge potentials. J. Spectr. Theory, 1(3):237–272, 2011.
* [3] J. Brüning, V. Geyler, and K. Pankrashkin. Continuity properties of integral kernels associated with Schrödinger operators on manifolds. Ann. Henri Poincaré, 8(4):781–816, 2007.
* [4] H. D. Cornean and G. Nenciu. On eigenfunction decay for two-dimensional magnetic Schrödinger operators. Comm. Math. Phys., 192(3):671–685, 1998.
* [5] H. L. Cycon, R. G. Froese, W. Kirsch, and B. Simon. Schrödinger operators with application to quantum mechanics and global geometry. Texts and Monographs in Physics. Springer-Verlag, Berlin, study edition, 1987.
* [6] M. Dimassi. Développements asymptotiques pour des perturbations fortes de l’opérateur de Schrödinger périodique. Ann. Inst. H. Poincaré Phys. Théor., 61(2):189–204, 1994.
* [7] M. Dimassi. Trace asymptotics formulas and some applications. Asymptot. Anal., 18(1-2):1–32, 1998.
* [8] M. Dimassi. Développements asymptotiques de l’opérateur de Schrödinger avec champ magnétique fort. Comm. Partial Differential Equations, 26(3-4):595–627, 2001.
* [9] M. Dimassi. Spectral shift function in the large coupling constant limit. Ann. Henri Poincaré, 7(3):513–525, 2006.
* [10] M. Dimassi and V. Petkov. Spectral shift function for operators with crossed magnetic and electric fields. Rev. Math. Phys., 22(4):355–380, 2010.
* [11] M. Dimassi and J. Sjöstrand. Spectral asymptotics in the semi-classical limit, volume 268 of London Mathematical Society Lecture Note Series. Cambridge University Press, Cambridge, 1999.
* [12] B. Helffer and J. Sjöstrand. Équation de Schrödinger avec champ magnétique et équation de Harper. In Schrödinger operators (Sønderborg, 1988), volume 345 of Lecture Notes in Phys., pages 118–197. Springer, Berlin, 1989.
* [13] V. Ivrii. Microlocal analysis and precise spectral asymptotics. Springer Monographs in Mathematics. Springer-Verlag, Berlin, 1998.
* [14] F. Klopp and G. Raikov. The fate of the Landau levels under perturbations of constant sign. Int. Math. Res. Not. IMRN, (24):4726–4734, 2009.
* [15] M. Melgaard and G. Rozenblum. Eigenvalue asymptotics for weakly perturbed Dirac and Schrödinger operators with constant magnetic fields of full rank. Comm. Partial Differential Equations, 28(3-4):697–736, 2003.
* [16] A. Mohamed and G. Raikov. On the spectral theory of the Schrödinger operator with electromagnetic potential. In Pseudo-differential calculus and mathematical physics, volume 5 of Math. Top., pages 298–390. Akademie Verlag, Berlin, 1994.
* [17] G. Raikov. Strong electric field eigenvalue asymptotics for the Schrödinger operator with electromagnetic potential. Lett. Math. Phys., 21(1):41–49, 1991.
* [18] G. Raikov. Strong-electric-field eigenvalue asymptotics for the perturbed magnetic Schrödinger operator. Comm. Math. Phys., 155(2):415–428, 1993.
* [19] G. Raikov. Eigenvalue asymptotics for the Schrödinger operator in strong constant magnetic fields. Comm. Partial Differential Equations, 23(9-10):1583–1619, 1998\.
* [20] G. Raikov and S. Warzel. Quasi-classical versus non-classical spectral asymptotics for magnetic Schrödinger operators with decreasing electric potentials. Rev. Math. Phys., 14(10):1051–1072, 2002.
* [21] M. Reed and B. Simon. Methods of modern mathematical physics. II. Fourier analysis, self-adjointness. Academic Press [Harcourt Brace Jovanovich Publishers], New York, 1975\.
* [22] D. Robert. Autour de l’approximation semi-classique, volume 68 of Progress in Mathematics. Birkhäuser Boston Inc., Boston, MA, 1987.
|
arxiv-papers
| 2013-02-17T14:49:30 |
2024-09-04T02:49:41.858756
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Mouez Dimassi and Anh Tuan Duong",
"submitter": "Duong Anh Tuan",
"url": "https://arxiv.org/abs/1302.4074"
}
|
1302.4497
|
# Calculation of the determinant in the Wheeler-De Witt equation.
Carlos A. Jiménez-Orjuela
Nelson Vanegas-Arbeláez
Instituto de Física
Universidad de Antioquia.
[email protected]
[email protected]
###### Abstract
The Riemann zeta function regularization procedure has been studied
intensively as a good method for computing the determinant of a
pseudo-differential operator. In this paper we propose a different approach to
the computation of the determinant based on the Wheeler-De Witt equation.
## 1 Introduction
In this theory, the space has the geometry $\mathbb{R}\times\Sigma$, where $\Sigma$
is a two-dimensional compact manifold, usually a 2-torus $T^{2}$. The
eigenstates and eigenvalues are [1]:
$\left|m\,n\right>\,=\,e^{2\pi i(mx+ny)}\hskip
28.45274ptl_{mn}=\frac{4\pi^{2}}{\tau_{2}}|n-m\tau|^{2}+V_{0},$ (1)
where $m,n$ are integers, $x,y$ are angular coordinates, $V_{0}$ is related to
the potential function, and $\tau=\tau_{1}+i\tau_{2}$ is the complex modulus
associated to the metric [2],
$d\bar{s}^{2}=\frac{1}{\tau_{2}}|dx+\tau dy|^{2}.$ (2)
The zeta function associated with the operator is:
$\zeta(s)=\sum_{n,m}(l_{nm})^{-s}.$ (3)
Then, the determinant of the operator is:
$\displaystyle\mbox{det}\,\,D_{0}$ $\displaystyle=$
$\displaystyle\prod_{n,m}l_{nm}=\prod_{n,m}e^{ln(l_{nm})}$ (4)
$\displaystyle=$ $\displaystyle\mbox{exp}\left[-\lim_{s\rightarrow
0}\frac{d}{ds}\sum_{nm}l_{nm}^{-s}\right]$ $\displaystyle=$ $\displaystyle
e^{-\zeta^{\prime}(0)}.$
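The step from the (formally divergent) product to $\zeta^{\prime}(0)$ is the usual zeta-regularization convention; spelling it out,
$\mbox{ln}\,\mbox{det}\,D_{0}=\sum_{n,m}\mbox{ln}(l_{nm})=-\left.\frac{d}{ds}\right|_{s=0}\sum_{n,m}(l_{nm})^{-s}=-\zeta^{\prime}(0),$
since $\frac{d}{ds}(l_{nm})^{-s}=-\mbox{ln}(l_{nm})\,(l_{nm})^{-s}$ and $(l_{nm})^{-s}\rightarrow 1$ as $s\rightarrow 0$.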
In [3] we find that the zeta function is associated to the Gamma function as:
$\zeta(s)=\sum_{n}l_{nm}^{-s}=\frac{1}{\Gamma(s)}\int_{0}^{\infty}t^{s-1}\sum_{nm}e^{-l_{nm}t}dt,$
(5)
with,
$e^{-l_{nm}t}=e^{V_{0}t}e^{-\frac{4\pi^{2}}{\tau_{2}}|n-m\tau|^{2}t},$ (6)
and,
$\displaystyle|n-m\tau|^{2}$ $\displaystyle=$
$\displaystyle\left(n-m(\tau_{1}-i\tau_{2})\right)\left(n-m(\tau_{1}+i\tau_{2})\right)$
(7) $\displaystyle=$ $\displaystyle
n^{2}-2nm\tau_{1}+m^{2}(\tau_{1}^{2}+\tau_{2}^{2}).$
This can be written in matrix form as:
$\displaystyle|n-m\tau|^{2}$ $\displaystyle=$
$\displaystyle\left(n\,\,,\,\,m\right)\cdot\left(\begin{array}[]{cc}1&-\tau_{1}\\\
-\tau_{1}&\tau_{1}^{2}+\tau_{2}^{2}\end{array}\right)\cdot\left(\begin{array}[]{c}n\\\
m\end{array}\right),$ (12) $\displaystyle\equiv$
$\displaystyle\vec{n}^{t}\,\,\Omega^{\prime}\,\,\vec{n},$ (13)
This matrix satisfies:
$\det\,\Omega^{\prime}\,=\,\tau_{2}^{2}.$ (14)
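Indeed, expanding the $2\times 2$ determinant directly confirms (14): $\det\,\Omega^{\prime}=1\cdot(\tau_{1}^{2}+\tau_{2}^{2})-(-\tau_{1})(-\tau_{1})=\tau_{2}^{2}$.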
Then we can define a new matrix as $\Omega=\Omega^{\prime}/\tau_{2}$, where
$\det\,\Omega\,=1$, and the zeta function is:
$\displaystyle\zeta(s)$ $\displaystyle=$
$\displaystyle\frac{1}{\Gamma(s)}\int_{0}^{\infty}dt\,t^{s-1}\sum_{\vec{n}}e^{-4\pi^{2}\vec{n}^{T}\Omega\vec{n}t-V_{0}t}$
(15) $\displaystyle=$
$\displaystyle\frac{1}{\Gamma(s)}\int_{0}^{\infty}dt\,t^{s-1}e^{-V_{0}t}\sum_{nm}e^{\frac{-4\pi^{2}t}{\tau_{2}}A_{nm}},$
with,
$\displaystyle A_{nm}$ $\displaystyle\equiv$ $\displaystyle
n^{2}-2nm\tau_{1}+m^{2}(\tau_{1}^{2}+\tau_{2}^{2})$ (16) $\displaystyle=$
$\displaystyle\left(n-m\tau_{1}\right)^{2}+m^{2}\tau_{2}^{2}.$
## 2 Some Useful Results
Let us list three equations we are going to use [3, 4, 5]:
$\displaystyle\Psi\equiv\lim_{s\rightarrow 0}\frac{d}{ds}\zeta(s)$
$\displaystyle=$ $\displaystyle\lim_{s\rightarrow
0}\left(\Gamma(s)\zeta(s)+\frac{1}{s}\right),$
$\displaystyle\sum_{n\in\mathbb{Z}}e^{-\pi t(n+v)^{2}}$ $\displaystyle=$
$\displaystyle\frac{1}{\sqrt{t}}\sum_{n\in\mathbb{Z}}e^{2\pi inv-\frac{\pi
n^{2}}{t}}\hskip 28.45274ptt>0,$
$\displaystyle\int_{0}^{\infty}dx\,x^{\nu-1}\,e^{-\frac{\beta}{x}-\alpha x}$
$\displaystyle=$ $\displaystyle
2\,\Big{(}\frac{\beta}{\alpha}\Big{)}^{\nu/2}\,K_{\nu}(2\sqrt{\beta\alpha}),$
(17)
For the computations ahead, with this in mind and following the same lines as
in [3], we now split the problem into four contributions:
$\zeta(s)=\hskip 8.5359pt\underbrace{\zeta_{1}(s)}_{n=m=0}\hskip
8.5359pt+\hskip 8.5359pt\underbrace{\zeta_{2}(s)}_{n=0,m\neq 0}\hskip
8.5359pt+\hskip 8.5359pt\underbrace{\zeta_{3}(s)}_{m=0,n\neq 0}+\hskip
8.5359pt\underbrace{\zeta_{4}(s)}_{m\neq 0,n\neq 0}.\\\ $
The first case is solved by setting $n=m=0$ in the integral (15); then we can
see that, under the substitution $x=V_{0}t$, the Gamma function is cancelled,
$\displaystyle\zeta_{1}(s)$ $\displaystyle=$
$\displaystyle\frac{1}{\Gamma(s)}\int_{0}^{\infty}dt\,t^{s-1}e^{-V_{0}t}$ (18)
$\displaystyle=$ $\displaystyle V_{0}^{-s},$
then,
$\lim_{s\rightarrow 0}\zeta_{1}^{\prime}(s)=-\mbox{ln}V_{0}.$ (19)
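Explicitly, since $\zeta_{1}(s)=V_{0}^{-s}=e^{-s\,\mbox{ln}V_{0}}$, one has $\zeta_{1}^{\prime}(s)=-\mbox{ln}V_{0}\,V_{0}^{-s}$, which gives (19) at $s=0$.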
The second case, with $n=0$ in (15), is:
$\zeta_{2}(s)=\frac{1}{\Gamma(s)}\int_{0}^{\infty}dt\,t^{s-1}e^{-V_{0}t}\sum_{m\neq
0}e^{-\alpha\pi m^{2}t}\hskip
28.45274pt\alpha\equiv\frac{4\pi|\tau|^{2}}{\tau_{2}}.$ (20)
If we consider $\tau_{2}>0$ and use the second identity in (17), we obtain
$\displaystyle\zeta_{2}(s)$ $\displaystyle=$
$\displaystyle\frac{1}{\Gamma(s)}\sum_{m\neq
0}\int_{0}^{\infty}dt\,\frac{t^{s-1}}{\sqrt{\alpha t}}e^{-V_{0}t}e^{-\frac{\pi
m^{2}}{\alpha t}}$ (21) $\displaystyle=$
$\displaystyle\frac{1}{\Gamma(s)\sqrt{\alpha}}\sum_{m\neq
0}\int_{0}^{\infty}dt\,t^{(s-1/2)-1}e^{-\frac{\pi m^{2}}{\alpha t}-V_{0}t}.$
And with the third identity in (17), if $V_{0}>0$, then
$\zeta_{2}(s)=\frac{2}{\Gamma(s)\sqrt{\alpha}}\sum_{m\neq 0}\left(\frac{\pi
m^{2}}{V_{0}\alpha}\right)^{(s/2-1/4)}K_{s-1/2}\left(2\sqrt{\frac{\pi
m^{2}V_{0}}{\alpha}}\right).$ (22)
The third contribution, with ($m=0$) in (15) is:
$\zeta_{3}(s)=\frac{1}{\Gamma(s)}\int_{0}^{\infty}dt\,t^{s-1}e^{-V_{0}t}\sum_{n\neq
0}e^{-\gamma\pi n^{2}t}\hskip 28.45274pt\gamma\equiv\frac{4\pi}{\tau_{2}}.$
(23)
And again we consider $\tau_{2}>0$ in order to use (17), obtaining:
$\displaystyle\zeta_{3}(s)$ $\displaystyle=$
$\displaystyle\frac{1}{\Gamma(s)\sqrt{\gamma}}\sum_{n\neq
0}\int_{0}^{\infty}dt\,t^{(s-1/2)-1}e^{-\frac{\pi n^{2}}{\gamma t}-V_{0}t}$
(24) $\displaystyle=$
$\displaystyle\frac{2}{\Gamma(s)\sqrt{\gamma}}\sum_{n\neq 0}\left(\frac{\pi
n^{2}}{V_{0}\gamma}\right)^{(s/2-1/4)}K_{s-1/2}\left(2\sqrt{\frac{\pi
n^{2}V_{0}}{\gamma}}\right).$
For the fourth contribution, with $m,n\neq 0$ in (15), again using (17) we
obtain:
$\zeta_{4}(s)=\frac{\sqrt{\tau_{2}}}{\sqrt{\pi}\Gamma(s)}\sum_{n,m\neq
0}\left(\frac{\tau_{2}n^{2}}{4V_{0}+16\pi^{2}m^{2}\tau_{2}}\right)^{s/2-1/4}K_{s-1/2}\left(\sqrt{\frac{\pi^{2}\tau_{2}n^{2}(V_{0}+4\pi^{2}m^{2}\tau_{2})}{\pi^{2}}}\right).$
(25)
## 3 Final Results and Conclusions
Putting together the contributions found in the last section, replacing the
parameters $\alpha$, $\gamma$ and taking the limit in (17), we obtain the
expression
$\displaystyle\Psi$ $\displaystyle=$
$\displaystyle\sqrt{\frac{\tau_{2}}{\pi}}\frac{1}{|\tau|}\sum_{n\neq
0}\left(\frac{4|\tau|^{2}V_{0}}{n^{2}\tau_{2}}\right)^{1/4}K_{-1/2}\left(\sqrt{\tau_{2}V_{0}}\left|\frac{n}{\tau}\right|\right)$
(26) $\displaystyle+$ $\displaystyle\sqrt{\frac{\tau_{2}}{\pi}}\sum_{n\neq
0}\left(\frac{4V_{0}}{n^{2}\tau_{2}}\right)^{1/4}K_{-1/2}\left(\sqrt{\tau_{2}V_{0}}|n|\right)$
$\displaystyle+$ $\displaystyle\sqrt{\frac{\tau_{2}}{\pi}}\sum_{n,m\neq
0}\left(\frac{4(V_{0}+4\pi^{2}m^{2}\tau_{2})}{n^{2}\tau_{2}}\right)^{1/4}K_{-1/2}\left(\sqrt{\tau_{2}V_{0}+4\pi^{2}m^{2}\tau_{2}^{2}}|n|\right)$
$\displaystyle-$ $\displaystyle\mbox{ln}V_{0}.$
The integers $n,m$ are nonzero, so we can write
$\sum_{n\neq 0}f(|n|)=2\sum_{n=1}^{\infty}f(n).\hskip 28.45274pt\sum_{n,m\neq
0}f(|n|,|m|)=4\sum_{n,m=1}^{\infty}f(n,m).$ (27)
After some work and using some Bessel function identities we obtain
$\displaystyle\Psi$ $\displaystyle=$ $\displaystyle
2|\tau_{2}|^{1/4}\sqrt{\frac{2}{\pi}}\sum_{n=1}^{\infty}\bigg{[}\frac{\sqrt{|\tau|}V_{0}^{1/4}}{\sqrt{n}}K_{1/2}\left(\frac{n\sqrt{V_{0}\tau_{2}}}{|\tau|}\right)+\frac{V_{0}^{1/4}}{\sqrt{n}}K_{1/2}\left(n\sqrt{V_{0}\tau_{2}}\right)$
(28) $\displaystyle+$ $\displaystyle
2\sum_{m=1}^{\infty}\frac{(V_{0}+4\pi^{2}m^{2}\tau_{2})^{1/4}}{\sqrt{n}}K_{1/2}\left(n\sqrt{V_{0}\tau_{2}+4\pi^{2}m^{2}\tau_{2}^{2}}\right)\bigg{]}-\mbox{ln}V_{0}.$
Then the determinant we are looking for is:
$\displaystyle\det\,D_{0}$ $\displaystyle=$ $\displaystyle
V_{0}\exp\Bigg{\\{}-2|\tau_{2}|^{1/4}\sqrt{\frac{2}{\pi}}\sum_{n=1}^{\infty}\bigg{[}\frac{\sqrt{|\tau|}V_{0}^{1/4}}{\sqrt{n}}K_{1/2}\left(\frac{n\sqrt{V_{0}\tau_{2}}}{|\tau|}\right)$
(29) $\displaystyle+$
$\displaystyle\frac{V_{0}^{1/4}}{\sqrt{n}}K_{1/2}\left(n\sqrt{V_{0}\tau_{2}}\right)$
$\displaystyle+$ $\displaystyle
2\sum_{m=1}^{\infty}\frac{(V_{0}+4\pi^{2}m^{2}\tau_{2})^{1/4}}{\sqrt{n}}K_{1/2}\left(n\sqrt{V_{0}\tau_{2}+4\pi^{2}m^{2}\tau_{2}^{2}}\right)\bigg{]}\Bigg{\\}}.$
The last expression is regular because the Bessel functions decay
asymptotically and each term carries a factor $1/\sqrt{n}$, so the important
terms are the first ones in the series. At this point we compute that limit
with the help of the identity [6]:
$K_{\nu}(z)\approx\frac{(\nu-1)!\,2^{\nu-1}}{z^{\nu}}\quad(z\rightarrow 0)\hskip
28.45274pt\nu>0.$ (30)
For the first values of $m$, and $n\ll 1/\sqrt{V_{0}\tau_{2}}$ we obtain:
$\Psi_{z\ll 1}\approx\frac{2|\tau|+6}{n}-\mbox{ln}V_{0},$ (31)
This does not mean that the series converges to this value, but rather that
the series can be written as a sum of the first terms of this kind.
In this case the determinant is,
$\mbox{det}D_{0}\approx V_{0}\sum_{\mbox{first
n's}}\mbox{exp}\left[\frac{-2|\tau|-6}{n}\right].$ (32)
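Since $K_{1/2}$ decays exponentially, the truncated sums in (29) converge quickly and the regularized determinant can be evaluated numerically. The following Python sketch is ours (not from the paper): it merely transcribes (29) with the sums cut off, for illustrative sample values of $V_{0}$ and $\tau$ chosen by us.

```python
# Hedged numerical sketch of formula (29): det D_0 = V_0 * exp(-S), with the
# series truncated at n_max, m_max terms. Parameter values below are only
# illustrative; they are not taken from the paper.
import numpy as np
from scipy.special import kv  # modified Bessel function K_nu

def det_D0(V0, tau1, tau2, n_max=200, m_max=200):
    tau_abs = np.hypot(tau1, tau2)            # |tau|
    n = np.arange(1, n_max + 1, dtype=float)
    pref = 2.0 * abs(tau2) ** 0.25 * np.sqrt(2.0 / np.pi)
    # first two single sums in (29)
    s1 = np.sum(np.sqrt(tau_abs) * V0 ** 0.25 / np.sqrt(n)
                * kv(0.5, n * np.sqrt(V0 * tau2) / tau_abs))
    s2 = np.sum(V0 ** 0.25 / np.sqrt(n) * kv(0.5, n * np.sqrt(V0 * tau2)))
    # double sum in (29)
    m = np.arange(1, m_max + 1, dtype=float)
    N, M = np.meshgrid(n, m, indexing="ij")
    s3 = 2.0 * np.sum((V0 + 4 * np.pi**2 * M**2 * tau2) ** 0.25 / np.sqrt(N)
                      * kv(0.5, N * np.sqrt(V0 * tau2
                                            + 4 * np.pi**2 * M**2 * tau2**2)))
    return V0 * np.exp(-pref * (s1 + s2 + s3))

# Example call with arbitrary sample parameters (our choice):
print(det_D0(V0=1.0, tau1=0.3, tau2=1.2))
```

A simple consistency check is that the output stabilizes as `n_max` and `m_max` grow, in agreement with the regularity discussed above.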
Our final result (29) is different from some results found in the literature,
for example in [2, 7, 8]. There may be some limit in which these results are
equivalent, or they may really differ in a deeper sense; the important point
is that its limit seems simpler and easier to handle, and its good behaviour
is more evident. This difference may have some interesting consequences for
works derived from quantization with the Wheeler-De Witt equation, in
particular in brane-world models. For instance, in [9] the Wheeler-De Witt
equation is solved with first-class Dirac constraints defined in a
five-dimensional space, and the relevant determinant, known as the
Morette-DeWitt determinant, must be computed. Another nice work is [10], which
considers gravity on the worldvolume of a D3-brane embedded in the flat
background produced by N p-branes, first classically and later at the quantum
level, where the Wheeler-De Witt equation must be solved.
## References
* [1] S. Carlip Notes on the (2+1)-Dimensional Wheeler-DeWitt Equation. arXiv:gr-qc/9309002v1, 1993.
* [2] E. Elizalde An Extension Of The Chowla-Selberg Formula Useful In Quantizing With The Wheeler-De Witt Equation. arXiv:hep-th/9402155v2, 1994.
* [3] N. Vanegas. Regularization of Automorphic Functions of Manifolds with Special Kähler Geometry. arXiv:hep-th/9906028 (preprint), 1999.
* [4] K. Chandrasekharan. Elliptic Functions. Springer-Verlag, 1985.
* [5] I.S. Gradshteyn, I.M. Ryzhik. Table of Integrals, Series and Products. Academic Press, 1980. ET II 82(23)a, LET I 146(29)
* [6] George B. Arfken, Hans J. Weber Mathematical Methods For Physicists, Sixth Edition Elsevier Academic Press, 2005.
* [7] Emilio Elizalde Ten Physical Applications of Spectral Zeta Functions. Springer, 1995.
* [8] E. Elizalde Some uses of regularization in quantum gravity and cosmology. arXiv:hep-th/0108134v1, 2001.
* [9] A.O.Barvinsky The Gospel according to DeWitt revisited: quantum effective action in braneworld models. arXiv:hep-th/0504205v1, 2005.
* [10] Pawel Gusin Wheeler-De Witt equation for brane gravity. University of Silesia, Institute of Physics, ul. Uniwersytecka arXiv:0809.0567v1, 2008.
|
arxiv-papers
| 2013-02-19T01:08:00 |
2024-09-04T02:49:41.881638
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Carlos Jimenez and Nelson Vanegas",
"submitter": "Carlos Arturo Jim\\'enez Orjuela",
"url": "https://arxiv.org/abs/1302.4497"
}
|
1302.4519
|
11institutetext: Faculty of Computer Science and Engineering
Ho Chi Minh City University of Technology
268 Ly Thuong Kiet Street, District 10, Ho Chi Minh City, Vietnam
{hungnq2, htnguyen, nam}@cse.hcmut.edu.vn
{50801500, 50801308}@stu.hcmut.edu.vn
# A Genetic Algorithm for Power-Aware Virtual Machine Allocation in Private
Cloud
Nguyen Quang-Hung 11 Pham Dac Nien 22 Nguyen Hoai Nam 22
Nguyen Huynh Tuong 11 Nam Thoai 111122
###### Abstract
Energy efficiency has become an important measurement of scheduling algorithm
for private cloud. The challenge is trade-off between minimizing of energy
consumption and satisfying Quality of Service (QoS) (e.g. performance or
resource availability on time for reservation request). We consider resource
needs in context of a private cloud system to provide resources for
applications in teaching and researching. In which users request computing
resources for laboratory classes at start times and non-interrupted duration
in some hours in prior. Many previous works are based on migrating techniques
to move online virtual machines (VMs) from low utilization hosts and turn
these hosts off to reduce energy consumption. However, the techniques for
migration of VMs could not use in our case. In this paper, a genetic algorithm
for power-aware in scheduling of resource allocation (GAPA) has been proposed
to solve the static virtual machine allocation problem (SVMAP). Due to limited
resources (i.e. memory) for executing simulation, we created a workload that
contains a sample of one-day timetable of lab hours in our university. We
evaluate the GAPA and a baseline scheduling algorithm (BFD), which sorts list
of virtual machines in start time (i.e. earliest start time first) and using
best-fit decreasing (i.e. least increased power consumption) algorithm, for
solving the same SVMAP. As a result, the GAPA algorithm obtains total energy
consumption is lower than the baseline algorithm on simulated experimentation.
## 1 Introduction
Cloud computing [7], which is popular with its pay-as-you-go utility model, is
economy driven. Saving operating costs in terms of energy consumption
(Watt-hours) of a cloud system is highly motivating for any cloud provider.
Energy-efficient resource management in large-scale datacenters is still a
challenge [1][13][9][5]. The challenge for an energy-efficient scheduling
algorithm is the trade-off between minimizing energy consumption and
satisfying resource demands on time and without preemption. Resource
requirements depend on the applications, and we are interested in a virtual
computing lab, which is a cloud system that provides resources for teaching
and research.
There are many studies on energy efficiency in datacenters. Some studies
proposed energy-efficient algorithms based on processor speed scaling
(assuming that the CPU technology supports dynamic voltage and frequency
scaling (DVFS)) [1][13]. Other studies proposed energy efficiency through
scheduling of VMs in a virtualized datacenter [9][5]. A. Beloglazov et al. [5]
present the Modified Best-Fit Decreasing (MBFD) algorithm, a best-fit
decreasing heuristic for power-aware VM allocation, together with adaptive
threshold-based migration algorithms for dynamic consolidation of VM resource
partitions. Goiri, Í. et al. [9] present score-based scheduling, a
hill-climbing algorithm that places each VM onto the physical machine with the
maximum score. However, the challenge still remains. These previous works do
not consider satisfying resource demands on time (i.e. each VM starts at a
specified start time) and without preemption; in addition, neither the MBFD
nor the score-based algorithm finds an optimal solution for the VM allocation
problem.
In this paper, we introduce our static virtual machine allocation problem
(SVMAP). To solve the SVMAP, we propose the GAPA, a genetic algorithm that
searches for an optimal VM allocation. In simulated experiments, the GAPA
discovers a better VM allocation (i.e. lower energy consumption) than the
baseline scheduling algorithm for the same SVMAP.
## 2 Problem Formulation
### 2.1 Terminology, notation
We describe the notation used in this paper as follows:
* •
$VM_{i}$: the i-th virtual machine
* •
$M_{j}$: the j-th physical machine
* •
$ts_{i}$: start time of the $VM_{i}$
* •
$pe_{i}$: number of processing elements (e.g. cores) of the $VM_{i}$
* •
$PE_{j}$: number of processing elements (e.g. cores) of the $M_{j}$
* •
$mips_{i}$: total required MIPS (Millions Instruction Per Seconds) of the
$VM_{i}$
* •
$MIPS_{j}$: total capacity MIPS (Millions Instruction Per Seconds) of the
$M_{j}$
* •
$d_{i}$: duration time of the $VM_{i}$, units in seconds
* •
$P_{j}(t)$: power consumption (Watts) of a physical machine $M_{j}$
* •
$r_{j}(t)$: set of indexes of virtual machines that is allocated on the
$M_{j}$ at time $t$
### 2.2 Power consumption model
In this section, we introduce the factors used to model the power consumption
of a single physical machine. The power consumption (Watts) of a physical
machine is the sum of the power of all components in the machine. In [8], the
peak power (Watts) of a typical server (with 2x CPU, 4x memory, 1x hard disk
drive, 2x PCI slots, 1x mainboard, 1x fan) is estimated to be spent on the
main components as follows: CPU (38%), memory (17%), hard disk drive (6%), PCI
slots (23%), mainboard (12%), fan (5%). Some papers [8][4][6][5] show that
there exists a power model relating power to resource utilization (e.g. CPU
utilization). As in [8][4][6][5], we assume that the power consumption of a
physical machine ($P(.)$) has a linear relationship with resource utilization
(e.g. CPU utilization). The total power consumption of a single physical
server ($P(.)$) is:
$P(U_{cpu})=P_{idle}+(P_{max}-P_{idle})U_{cpu}$
$U_{cpu}(t)=\sum_{c=1}^{PE_{j}}\sum_{i\in
r_{j}(t)}\dfrac{mips_{i,c}}{MIPS_{j,c}}$
In which:
* •
$U_{cpu}(t)$: CPU utilization of the physical machine at time $t$, $0\leq
U_{cpu}(t)\leq 1$
* •
$P_{idle}$: the power consumption (Watt) of the physical machine in idle, e.g.
0% CPU utilization
* •
$P_{max}$: the maximum power consumption (Watt) of the physical machine in
full load, e.g. 100% CPU utilization
* •
$mips_{i,c}$: requested MIPS of the $c$-th processing element (PE) of the
$VM_{i}$
* •
$MIPS_{j,c}$: Total MIPS of the $c$-th processing element (PE) on the physical
machine $M_{j}$
The number of MIPS that a virtual machine requests can change with its running
application. Therefore, the utilization of the machine may also change over
time. We therefore make the utilization a function of the time $t$ and rewrite
the total power consumption of a single physical server ($P(.)$) with
$U_{cpu}(t)$ as:
$P(U_{cpu}(t))=P_{idle}+(P_{max}-P_{idle})U_{cpu}(t)$
and total energy consumption of the physical machine ($E$) in period time
$[t_{0},t_{1}]$ is defined by:
$E=\int\limits_{t_{0}}^{t_{1}}P(U_{cpu}(t))dt$
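As an illustration of this power model and energy integral, the short Python sketch below (ours, not the authors' code) approximates $E$ by a Riemann sum over a piecewise-constant utilization profile; the host parameters used in the example are the idle and full-load powers of the IBM x3250 listed in Table 2 below, and all function and variable names are our own.

```python
# Hedged sketch: linear power model P(U) = P_idle + (P_max - P_idle) * U and
# energy E = integral of P(U_cpu(t)) dt, approximated by a Riemann sum over a
# piecewise-constant CPU utilization profile.

def power(u_cpu, p_idle, p_max):
    """Power (Watts) of one host at CPU utilization u_cpu in [0, 1]."""
    return p_idle + (p_max - p_idle) * u_cpu

def energy(utilization_profile, dt, p_idle, p_max):
    """Energy (Joules) for a utilization profile sampled every dt seconds."""
    return sum(power(u, p_idle, p_max) * dt for u in utilization_profile)

# Example: IBM x3250 parameters from Table 2 (41.6 W idle, 113.0 W full load),
# a 90-minute lab session at 50% utilization, sampled once per minute.
profile = [0.5] * 90
e_joules = energy(profile, 60.0, 41.6, 113.0)
print(e_joules / 3.6e6, "kWh")   # convert Joules to kWh
```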
### 2.3 Static Virtual Machine Allocation Problem (SVMAP)
Given a set of $n$ virtual machines
{$VM_{i}(pe_{i},mips_{i},ts_{i},d_{i})|i=1,...,n$} to be placed on a set of
$m$ physical parallel machines {$M_{j}(PE_{j},MIPS_{j})|j=1,...,m$}. Each
virtual machine $VM_{i}$ requires $pe_{i}$ processing elements and a total of
$mips_{i}$ MIPS, and the $VM_{i}$ will be started at time $ts_{i}$ and
finished at time $ts_{i}+d_{i}$ with neither preemption nor migration during
its duration $d_{i}$. We do not limit the resource type to CPU; the model can
be extended to other resource types such as memory, disk space, network
bandwidth, etc.
We assume that every physical machine $M_{j}$ can host any virtual machine,
and that its power consumption model ($P_{j}(t)$) is proportional to resource
utilization at time $t$, i.e. power consumption has a linear relationship with
resource utilization (e.g. CPU utilization) [8][2][5].
The scheduling objective is to minimize energy consumption while fulfilling
the requirements of all $n$ VMs.
### 2.4 The GAPA Algorithm
The GAPA, which is a kind of Genetic Algorithm (GA), solves the SVMAP. The
GAPA performs the steps in Algorithm 1.
Algorithm 1: GAPA Algorithm
---
Start: Create an initial population of $s$ chromosomes randomly (where $s$ is
the population size).
Fitness: Calculate the evaluation value of each chromosome in the given
population.
New population: Create a new population by carrying out the following steps:
Selection: Choose two parent individuals from the current population based on
their evaluation values.
Crossover: Using the crossover probability, create new children by modifying
the chromosomes of the parents.
Mutation: With the mutation probability, mutate some positions of the
chromosome.
Accepting: The new children become part of the next generation.
Replace: Go to the next generation by replacing the current generation with
the new one.
Test: If the stop condition is satisfied then the algorithm stops and returns
the individual with the highest evaluation value. Otherwise, go to the next
step.
Loop: Go back to the Fitness step.
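As a concrete, hedged illustration of Algorithm 1, the Python sketch below implements the same loop over a flat encoding of the tree structure described next, where `chromosome[i]` is the index of the host that $VM_{i}$ is placed on; the operator details, parameter names and the `fitness` argument are our own illustrative choices, not the authors' implementation.

```python
import random

# Hedged sketch of the GA loop in Algorithm 1 (not the authors' code).
# A chromosome is a flat list: chromosome[i] = index of the host hosting VM i.

def gapa(num_vms, num_hosts, fitness, pop_size=10, generations=500,
         p_crossover=0.25, p_mutation=0.01):
    # Start: create an initial population randomly.
    pop = [[random.randrange(num_hosts) for _ in range(num_vms)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: pick the two best individuals as parents.
        parents = sorted(pop, key=fitness, reverse=True)[:2]
        children = []
        while len(children) < pop_size:
            a, b = parents[0][:], parents[1][:]
            if random.random() < p_crossover:          # Crossover (one-point)
                cut = random.randrange(1, num_vms)
                a[cut:], b[cut:] = b[cut:], a[cut:]
            for child in (a, b):                       # Mutation
                for i in range(num_vms):
                    if random.random() < p_mutation:
                        child[i] = random.randrange(num_hosts)
                children.append(child)
        pop = children[:pop_size]                      # Accepting / Replace
    # Test / stop condition: here simply a fixed number of generations.
    return max(pop, key=fitness)
```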
In the GAPA, we use a tree structure to encode the chromosome of an
individual. This structure has three levels:
Level 1: Consists of a root node that does not have significant meaning.
Level 2: Consists of a collection of nodes that represent the set of physical
machines.
Level 3: Consists of a collection of nodes that represent the set of virtual
machines.
With the above representation, each instance of the tree structure represents
an allocation of a collection of virtual machines onto a collection of
physical machines. The fitness function calculates the evaluation value of
each chromosome as in Algorithm 2.
Algorithm 2: Construct fitness function
---
powerOfDatacenter := 0
For each host $\in$ collection of hosts do
utilizationMips := host.getUtilizationOfCpu()
powerOfHost := getPower (host, utilizationMips)
powerOfDatacenter := powerOfDatacenter + powerOfHost
End For
Evaluation value (chromosome) := 1.0 / powerOfDatacenter
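A hedged Python sketch of this fitness evaluation, matching the linear power model of Section 2.2, is given below; the dictionary-based host and VM descriptions and all names are our own illustrative choices, and it can be passed as the `fitness` argument of the GA-loop sketch above (e.g. `lambda c: fitness(c, vms, hosts)`).

```python
# Hedged sketch of Algorithm 2 (not the authors' code): the fitness of an
# allocation is the inverse of the total datacenter power, where each host's
# power follows P(U) = P_idle + (P_max - P_idle) * U from Section 2.2.

def host_utilization(host, placed_vms):
    """CPU utilization of one host for the VMs currently placed on it."""
    requested = sum(vm["mips"] for vm in placed_vms)
    return min(1.0, requested / host["mips"])

def fitness(chromosome, vms, hosts):
    """Evaluation value of a chromosome: 1.0 / total datacenter power."""
    power_of_datacenter = 0.0
    for j, host in enumerate(hosts):
        placed = [vm for i, vm in enumerate(vms) if chromosome[i] == j]
        u = host_utilization(host, placed)
        power_of_host = host["p_idle"] + (host["p_max"] - host["p_idle"]) * u
        power_of_datacenter += power_of_host
    return 1.0 / power_of_datacenter
```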
## 3 Experimental study
### 3.1 Scenarios
We consider resource allocation for virtual machines (VMs) in a private cloud
that belongs to a college or university. In a university, a private cloud is
built to provide computing resources for teaching and research needs. In the
cloud, the software and operating systems (e.g. Windows, Linux, etc.) required
for lab hours are installed in virtual machine images (i.e. disk images), and
the virtual machine images are stored on file servers. A user can start, stop
and access a VM to run their tasks. We consider three kinds of needs, as
follows:
* i
A student can start a VM to do his homework.
* ii
A lecturer can request, in advance, a schedule to start a group of identical
VMs for his/her students during lab hours at a specified start time. The lab
hours require that the group of VMs starts on time and runs continuously over
some time slots (e.g. 90 minutes).
* iii
A researcher can start a group of identical VMs to run his/her parallel
application.
### 3.2 Workload and simulated cluster
We use a workload from one day of our university's schedule of laboratory
hours for six classes, shown in Table 1. The workload is simulated with a
total of 211 VMs and 100 physical machines (hosts).
We consider two kinds of servers in our simulated virtualized datacenter, with
the two power consumption models in Table 2: one for the IBM server x3250 (1 x
[Xeon X3470 2933 MHz, 4 cores], 8GB) and one for the Dell Inc. PowerEdge R620
(1 x [Intel Xeon E5-2660 2.2 GHz, 16 cores], 24 GB) server with 16 cores. The
baseline scheduling algorithm (BFD), which sorts the list of virtual machines
by start time (earliest start time first) and uses best-fit decreasing (least
increase in power consumption, as in MBFD [5]), will use four IBM servers to
host 16 VMs (each VM requests a single processing element). Our GAPA can find
a better VM allocation (lower energy consumption) than this
minimum-increase-of-power-consumption (best-fit decreasing) heuristic in our
experiments. In this example, our GAPA chooses one Dell server to host these
16 VMs. As a result, our GAPA consumes less total energy than the BFD does.
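The numbers in Table 2 make this example concrete (assuming, as an illustration, that all 16 single-core VMs keep their cores fully loaded): four fully loaded IBM x3250 hosts draw about $4\times 113.0=452.0$ Watts, whereas a single fully loaded Dell R620 draws $263.0$ Watts, i.e. roughly $42\%$ less power for the same set of VMs.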
Table 1: Workload of a university’s one-day schedule Day | Subject | Class ID | Group ID | Students | Lab. Time | Duration (sec.)
---|---|---|---|---|---|---
6 | 506007 | CT10QUEE | QT01 | 5 | —456———- | 8100
6 | 501129 | CT11QUEE | QT01 | 5 | 123————- | 8100
6 | 501133 | DUTHINH6 | DT04 | 35 | 123————- | 8100
6 | 501133 | DUTHINH5 | DT01 | 45 | —456———- | 8100
6 | 501133 | DUTHINH5 | DT02 | 45 | —456———- | 8100
6 | 501133 | DUTHINH6 | DT05 | 35 | 123————- | 8100
6 | 501133 | DUTHINH6 | DT06 | 41 | 123————- | 8100
### 3.3 Experiments
Table 2: Two power models of (i) the IBM server x3250 (1 x [Xeon X3470 2933 MHz, 4 cores], 8GB) [16] and (ii) the Dell Inc. PowerEdge R620 (1 x [Intel Xeon E5-2660 2.2 GHz, 16 cores], 24 GB) [15] Utilization | 0.0 | 0.1 | 0.2 | 0.3 | 0.4 | 0.5 | 0.6 | 0.7 | 0.8 | 0.9 | 1
---|---|---|---|---|---|---|---|---|---|---|---
IBM x3250 | 41.6 | 46.7 | 52.3 | 57.9 | 65.4 | 73.0 | 80.7 | 89.5 | 99.6 | 105.0 | 113.0
Dell R620 | 56.1 | 79.3 | 89.6 | 102.0 | 121.0 | 132.0 | 149.0 | 171.0 | 195.0 | 225.0 | 263.0
Table 3: Total energy consumption (KWh) of running (i) earliest start time first with best-fit decreasing (BFD) and (ii) GAPA algorithms. All GAPA runs use a mutation probability of 0.01 and a population size of 10. N/A means not available. Algorithms | VMs | Hosts | GA’s Generations | GA’s Prob. of Crossover | Energy (KWh) | BFD/GAPA
---|---|---|---|---|---|---
BFD | 211 | 100 | N/A | N/A | 16.858 | 1
GAPA_P10_ G500_C25 | 211 | 100 | 500 | 0.25 | 13.007 | 1.296
GAPA_P10_ G500_C50 | 211 | 100 | 500 | 0.50 | 13.007 | 1.296
GAPA_P10_ G500_C75 | 211 | 100 | 500 | 0.75 | 13.007 | 1.296
GAPA_P10_ G1000_C25 | 211 | 100 | 1000 | 0.25 | 13.007 | 1.296
GAPA_P10_ G1000_C50 | 211 | 100 | 1000 | 0.50 | 13.007 | 1.296
GAPA_P10_ G1000_C75 | 211 | 100 | 1000 | 0.75 | 13.007 | 1.296
Figure 1: The total energy consumption (KWh) of the earliest start time first
with best-fit decreasing (BFD) and GAPA algorithms
We show results from the experiments in Table 3 and Figure 1. We use
CloudSim [14][6], a popular simulation toolkit for virtualized datacenters, to
simulate our virtualized datacenter and the workload. The GAPA is a VM
allocation algorithm that we developed and integrated into CloudSim version
3.0.
In the simulated experiments, the total energy consumption of the BFD
algorithm is 16.858 kWh, while the GAPA algorithms consume 13.007 kWh on
average. We conclude that the energy consumption of the BFD algorithm is
approximately 130% of that of the GAPA algorithm. The GAPA runs use a mutation
probability of 0.01, a population size of 10, numbers of generations in
{500, 1000}, and crossover probabilities in {0.25, 0.5, 0.75}.
## 4 Related works
B. Sotomayor et al. [12] proposed a lease-based model and First-Come-First-
Serve (FCFS) and backfilling algorithms to schedule best-effort, immediate and
advance-reservation jobs. The FCFS and backfilling algorithms consider only
performance metrics (e.g. waiting time, slowdown). To maximize performance,
these scheduling algorithms tend to choose lightly loaded servers (i.e. those
with the highest ranking scores) when allocating a new lease. Therefore, a
lease with a single VM can be allocated on a big, multi-core physical machine.
This can waste energy; neither FCFS nor backfilling considers energy
efficiency.
S. Albers et al. [1] reviewed some energy-efficient algorithms that minimize
flow time by adapting the processor speed to the job size. G. Laszewski et al.
[13] proposed scheduling heuristics and presented application experience for
reducing the power consumption of parallel tasks in a cluster with the Dynamic
Voltage and Frequency Scaling (DVFS) technique. We do not use the DVFS
technique to reduce energy consumption in the datacenter.
Some studies [9][3][5] proposed algorithms to solve the virtual machine
allocation problem in a private cloud to minimize energy consumption. A.
Beloglazov et al. [3][5] presented a best-fit decreasing heuristic for VM
allocation, named MBFD, and VM migration policies under adaptive thresholds.
The MBFD tends to allocate a VM to the active physical machine that incurs the
minimum increase of power consumption (i.e. the MBFD prefers a physical
machine with the minimum power increase). However, the MBFD cannot always find
an optimal allocation for all VMs. In our simulation, for example, the GAPA
can find a better VM allocation (lower energy consumption) than the
minimum-increase-of-power-consumption (best-fit decreasing) heuristic: as in
the example of Section 3.2, our GAPA chooses one Dell server to host the 16
VMs and consequently consumes less total energy than the best-fit heuristic
does.
Another study on VM allocation [9] developed a score-based allocation method
that calculates a score matrix for the allocations of $m$ VMs to $n$ physical
machines. A score is the sum of many factors such as power consumption,
hardware and software fulfillment, and resource requirements. These studies
are only suitable for service allocation, in which each VM executes a
long-running, persistent application. We consider each user job to have a
limited duration. In addition, our GAPA can find an optimal schedule for the
static VM allocation problem with the single objective of minimum energy
consumption.
In a recent work, J. Kolodziej et al. [10] present evolutionary algorithms for
energy management. None of these solutions solves the same problem as our
SVMAP.
## 5 Conclusions and Future works
In conclusion, a genetic algorithm can be applied to the static virtual
machine allocation problem (SVMAP) and helps to minimize the total energy
consumption of computing servers. In a simulation with a workload of one day
of lab hours in a university, the energy consumption of the baseline
scheduling algorithm (BFD) is approximately 130% of the energy consumption of
the GAPA algorithm. A disadvantage of the GAPA algorithm is a longer
computational time than the baseline scheduling algorithm.
In future work, we will consider methods to reduce the computational time of
the GAPA. We will also consider other constraints, e.g. job deadlines, and
study migration policies and history-based allocation algorithms.
## References
* [1] Albers, S. and Fujiwara, H.: Energy-efficient algorithms. ACM Review, Vol. 53, No. 5, (2010) pp. 86–96, doi: 10.1145/1735223.1735245
* [2] Barroso, L.A. and Hölzle, U.: The Case for Energy-Proportional Computing, Vol. 40, pp. 33-37. ACM (2007), doi: 10.1109/MC.2007.443
* [3] Beloglazov, A. and Buyya, R.: Energy Efficient Resource Management in Virtualized Cloud Data Centers, Proceedings of the 10th IEEE/ACM International Conference on Cluster, Cloud and Grid Computing, pp. 826-831. (2010) doi: 10.1109/CCGRID.2010.46
* [4] Beloglazov, A. and Buyya, R.: Adaptive Threshold-Based Approach for Energy-Efficient Consolidation of VMs in Cloud Data Centers, ACM (2010)
* [5] Beloglazov, A., Abawajy, J., Buyya, R.: Energy-aware resource allocation heuristics for efficient management of data centers for Cloud computing, FGCS, Vol. 28, No. 5, pp. 755-768, (2012). doi: 10.1016/j.future.2011.04.017
* [6] Beloglazov, A. and Buyya, R.: Optimal Online Deterministic Algorithms and Adaptive Heuristics for Energy and Performance Efficient Dynamic Consolidation of Virtual Machines in Cloud Data Centers, CONCURRENCY AND COMPUTATION: PRACTICE AND EXPERIENCE, Concurrency Computat.: Pract. Exper., pp. 1-24, (2011), doi: 10.1002/cpe
* [7] Buyya, R., Yeo, C. S., Venugopal, S., Broberg, J., Brandic, I..: Cloud computing and emerging IT platforms: Vision, hype, and reality for delivering computing as the 5th utility, FGCS, Vol. 25, No. 6 , pp. 599-616. (2009) doi: 10.1016/j.future.2008.12.001
* [8] Fan, X., Weber, W.-D., Barroso, L. A..: Power provisioning for a warehouse-sized computer, Proceedings of the 34th annual international symposium on Computer architecture, pp. 13-23. ACM (2007) doi: 10.1145/1273440.1250665
* [9] Goiri, Í., Julià, F., Nou, R., Berral, J., Guitart, J., Torres, J.: Energy-aware Scheduling in Virtualized Datacenters, in IEEE International Conference on Cluster Computing (CLUSTER 2010), (2010), pp. 58-67.
* [10] J. Kolodziej et al. (Eds.).: A Taxonomy of Evolutionary Inspired Solutions for Energy Management in Green Computing : Problems and Resolution Methods, Advances in IntelligentModelling and Simulation, SCI 422, pp. 215–233.
* [11] Sotomayor, B., Keahey, K., Foster, I..: Combining batch execution and leasing using virtual machines, Proceedings of the 17th international symposium on High performance distributed computing - HPDC ’08, pp. 87-96. ACM (2008) doi: 10.1145/1383422.1383434
* [12] Sotomayor, B.: Provisioning Computational Resources Using Virtual Machines and Leases, PhD Thesis submited to The University of Chicago, US, (2010)
* [13] Laszewski, G. V., Wang, L., Younge, A. J., He, X..: Power-aware scheduling of virtual machines in DVFS-enabled clusters, 2009 IEEE International Conference on Cluster Computing and Workshops, pp. 368–377. (2009) doi: 10.1109/CLUSTR.2009.5289182
* [14] Calheiros, R. N., Ranjan, R., Beloglazov, A., De Rose, C. A. F., Buyya, R..: CloudSim: a toolkit for modeling and simulation of cloud computing environments and evaluation of resource provisioning algorithms, Software: Practice and Experience, vol. 41, no. 1, pp. 23–50, 2011.
* [15] SPECpower ssj2008 results for Dell Inc. PowerEdge R620 (Intel Xeon E5-2660, 2.2 GHz). http://www.spec.org/power_ssj2008/results/res2012q2/power_ssj2008-20120417-00451.html. Last accessed: Nov. 29, 2012
* [16] SPECpower ssj2008 results for IBM x3250 (1 x [Xeon X3470 2933 MHz, 4 cores], 8GB). http://www.spec.org/power_ssj2008/results/res2009q4/power_ssj2008-20091104-00213.html. Last accessed: Nov. 29, 2012
|
arxiv-papers
| 2013-02-19T05:12:24 |
2024-09-04T02:49:41.887108
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Nguyen Quang-Hung, Pham Dac Nien, Nguyen Hoai Nam, Nguyen Huynh Tuong,\n Nam Thoai",
"submitter": "Quang-Hung Nguyen",
"url": "https://arxiv.org/abs/1302.4519"
}
|
1302.4618
|
# Saving phase: Injectivity and stability for phase retrieval
Afonso S. Bandeira Jameson Cahill Dustin G. Mixon Aaron A. Nelson Program
in Applied and Computational Mathematics (PACM), Princeton University,
Princeton, NJ 08544 Department of Mathematics, University of Missouri,
Columbia, MO 65211 Department of Mathematics and Statistics, Air Force
Institute of Technology, Wright-Patterson Air Force Base, OH 45433
###### Abstract
Recent advances in convex optimization have led to new strides in the phase
retrieval problem over finite-dimensional vector spaces. However, certain
fundamental questions remain: What sorts of measurement vectors uniquely
determine every signal up to a global phase factor, and how many are needed to
do so? Furthermore, which measurement ensembles yield stability? This paper
presents several results that address each of these questions. We begin by
characterizing injectivity, and we identify that the complement property is
indeed a necessary condition in the complex case. We then pose a conjecture
that $4M-4$ generic measurement vectors are both necessary and sufficient for
injectivity in $M$ dimensions, and we prove this conjecture in the special
cases where $M=2,3$. Next, we shift our attention to stability, both in the
worst and average cases. Here, we characterize worst-case stability in the
real case by introducing a numerical version of the complement property. This
new property bears some resemblance to the restricted isometry property of
compressed sensing and can be used to derive a sharp lower Lipschitz bound on
the intensity measurement mapping. Localized frames are shown to lack this
property (suggesting instability), whereas Gaussian random measurements are
shown to satisfy this property with high probability. We conclude by
presenting results that use a stochastic noise model in both the real and
complex cases, and we leverage Cramer-Rao lower bounds to identify stability
with stronger versions of the injectivity characterizations.
###### keywords:
phase retrieval , quantum mechanics , bilipschitz function , Cramer-Rao lower
bound
## 1 Introduction
Signals are often passed through linear systems, and in some applications,
only the pointwise absolute value of the output is available for analysis. For
example, in high-power coherent diffractive imaging, this loss of phase
information is inherent, as one only has access to the power spectrum of the
desired signal [9]. Phase retrieval is the problem of recovering a signal from
absolute values (squared) of linear measurements, called intensity
measurements. Note that phase retrieval is often impossible—intensity
measurements with the identity basis effectively discard the phase information
of the signal’s entries, and so this measurement process is not at all
injective; the power spectrum similarly discards the phases of Fourier
coefficients. This fact has led many researchers to invoke a priori knowledge
of the desired signal, since intensity measurements might be injective when
restricted to a smaller signal class. Unfortunately, this route has yet to
produce practical phase retrieval guarantees, and practitioners currently
resort to various ad hoc methods that often fail to work.
Thankfully, there is an alternative approach to phase retrieval, as introduced
in 2006 by Balan, Casazza and Edidin [7]: Seek injectivity, not by finding a
smaller signal class, but rather by designing a larger ensemble of intensity
measurements. In [7], Balan et al. characterized injectivity in the real case
and further leveraged algebraic geometry to show that $4M-2$ intensity
measurements suffice for injectivity over $M$-dimensional complex signals.
This realization that so few measurements can yield injectivity has since
prompted a flurry of research in search of practical phase retrieval
guarantees [2, 4, 6, 12, 13, 14, 18, 19, 37]. Notably, Candès, Strohmer and
Voroninski [14] viewed intensity measurements as Hilbert-Schmidt inner
products between rank-1 operators, and they applied certain intuition from
compressed sensing to stably reconstruct the desired $M$-dimensional signal
with semidefinite programming using only $\mathcal{O}(M\log M)$ random
measurements; similar alternatives and refinements have since been identified
[12, 13, 18, 37]. Another alternative exploits the polarization identity to
discern relative phases between certain intensity measurements; this method
uses $\mathcal{O}(M\log M)$ random measurements in concert with an expander
graph, and comes with a similar stability guarantee [2].
Despite these recent strides in phase retrieval algorithms, there remains a
fundamental lack of understanding about what it takes for intensity
measurements to be injective, let alone whether measurements yield stability
(a more numerical notion of injectivity). For example, until very recently, it
was believed that $3M-2$ intensity measurements sufficed for injectivity (see
for example [12]); this was disproved by Heinosaari, Mazzarella and Wolf [24],
who used embedding theorems from differential geometry to establish the
necessity of $(4+o(1))M$ measurements. As far as stability is concerned, the
most noteworthy achievement to date is due to Eldar and Mendelson [19], who
proved that $\mathcal{O}(M)$ Gaussian random measurements separate distant
$M$-dimensional real signals with high probability. Still, the following
problem remains wide open:
###### Problem 1.
What are the necessary and sufficient conditions for measurement vectors to
yield injective and stable intensity measurements?
The present paper addresses this problem in a number of ways. Section 2
focuses on injectivity, and it starts by providing the first known
characterization of injectivity in the complex case (Theorem 4). Next, we make
a rather surprising identification: that intensity measurements are injective
in the complex case precisely when the corresponding phase-only measurements
are injective in some sense (Theorem 5). We then use this identification to
prove the necessity of the complement property for injectivity (Theorem 7).
Later, we conjecture that $4M-4$ intensity measurements are necessary and
sufficient for injectivity in the complex case, and we prove this conjecture
in the cases where $M=2,3$ (Theorems 10 and 12). Our proof for the $M=3$ case
leverages a new test for injectivity, which we then use to verify the
injectivity of a certain quantum-mechanics-inspired measurement ensemble,
thereby suggesting a new refinement of Wright’s conjecture from [36] (see
Conjecture 13).
We devote Section 3 to stability. Here, we start by focusing on the real case,
for which we give upper and lower Lipschitz bounds of the intensity
measurement mapping in terms of singular values of submatrices of the
measurement ensemble (Lemma 16 and Theorem 18); this suggests a new matrix
condition called the strong complement property, which strengthens the
complement property of Balan et al. [7] and bears some resemblance to the
restricted isometry property of compressed sensing [11]. As we will discuss,
our result corroborates the intuition that localized frames fail to yield
stability. We then show that Gaussian random measurements satisfy the strong
complement property with high probability (Theorem 20), which nicely
complements the results of Eldar and Mendelson [19]. In particular, we find an
explicit, intuitive relation between the Lipschitz bounds and the number of
intensity measurements per dimension (see Figure 1). Finally, we present
results in both the real and complex cases using a stochastic noise model,
much like Balan did for the real case in [4]; here, we leverage Cramer-Rao
lower bounds to identify stability with stronger versions of the injectivity
characterizations (see Theorems 21 and 23).
### 1.1 Notation
Given a collection of measurement vectors $\Phi=\\{\varphi_{n}\\}_{n=1}^{N}$
in $V=\mathbb{R}^{M}$ or $\mathbb{C}^{M}$, consider the intensity measurement
process defined by
$(\mathcal{A}(x))(n):=|\langle x,\varphi_{n}\rangle|^{2}.$
Note that $\mathcal{A}(x)=\mathcal{A}(y)$ whenever $y=cx$ for some scalar $c$
of unit modulus. As such, the mapping $\mathcal{A}\colon
V\rightarrow\mathbb{R}^{N}$ is necessarily not injective. To resolve this
(technical) issue, throughout this paper, we consider sets of the form $V/S$,
where $V$ is a vector space and $S$ is a multiplicative subgroup of the field
of scalars. By this notation, we mean to identify vectors $x,y\in V$ for which
there exists a scalar $c\in S$ such that $y=cx$; we write $y\equiv x\bmod S$
to convey this identification. Most (but not all) of the time, $V/S$ is either
$\mathbb{R}^{M}/\\{\pm 1\\}$ or $\mathbb{C}^{M}/\mathbb{T}$ (here,
$\mathbb{T}$ is the complex unit circle), and we view the intensity
measurement process as a mapping $\mathcal{A}\colon
V/S\rightarrow\mathbb{R}^{N}$; it is in this way that we will consider the
measurement process to be injective or stable.
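For readers who prefer to experiment numerically, the following is a minimal sketch (ours, assuming numpy; the function name is not from the paper) of the intensity measurement process and of its invariance under a global phase factor.

```python
import numpy as np

def intensity_measurements(x, Phi):
    # (A(x))(n) = |<x, phi_n>|^2, where phi_n is the n-th column of Phi.
    return np.abs(Phi.conj().T @ x) ** 2

rng = np.random.default_rng(0)
M, N = 4, 9
Phi = rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))
x = rng.standard_normal(M) + 1j * rng.standard_normal(M)
c = np.exp(0.7j)                               # an arbitrary unimodular scalar
print(np.allclose(intensity_measurements(x, Phi),
                  intensity_measurements(c * x, Phi)))   # True: A(cx) = A(x)
```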
## 2 Injectivity
### 2.1 Injectivity and the complement property
Phase retrieval is impossible without injective intensity measurements. In
their seminal work on phase retrieval [7], Balan, Casazza and Edidin introduce
the following property to analyze injectivity:
###### Definition 2.
We say $\Phi=\\{\varphi_{n}\\}_{n=1}^{N}$ in $\mathbb{R}^{M}$
($\mathbb{C}^{M}$) satisfies the complement property (CP) if for every
$S\subseteq\\{1,\ldots,N\\}$, either $\\{\varphi_{n}\\}_{n\in S}$ or
$\\{\varphi_{n}\\}_{n\in S^{\mathrm{c}}}$ spans $\mathbb{R}^{M}$
($\mathbb{C}^{M}$).
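The complement property can be checked by brute force for small ensembles. The sketch below (ours, assuming numpy; not part of [7]) enumerates all subsets $S$ and tests whether $\{\varphi_{n}\}_{n\in S}$ or its complement spans the ambient space.

```python
import itertools
import numpy as np

def has_complement_property(Phi, tol=1e-10):
    # Phi is an M x N array whose columns are the measurement vectors.
    M, N = Phi.shape
    for r in range(N + 1):
        for S in itertools.combinations(range(N), r):
            Sc = [n for n in range(N) if n not in S]
            spans_S = bool(S) and np.linalg.matrix_rank(Phi[:, list(S)], tol=tol) == M
            spans_Sc = bool(Sc) and np.linalg.matrix_rank(Phi[:, Sc], tol=tol) == M
            if not (spans_S or spans_Sc):
                return False
    return True

# The ensemble (1,0), (0,1), (1,1) discussed later in this subsection is CP:
print(has_complement_property(np.array([[1., 0., 1.], [0., 1., 1.]])))   # True
```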
In the real case, the complement property is characteristic of injectivity, as
demonstrated in [7]. We provide the proof of this result below; it contains
several key insights which we will apply throughout this paper.
###### Theorem 3.
Consider $\Phi=\\{\varphi_{n}\\}_{n=1}^{N}\subseteq\mathbb{R}^{M}$ and the
mapping $\mathcal{A}\colon\mathbb{R}^{M}/\\{\pm 1\\}\rightarrow\mathbb{R}^{N}$
defined by $(\mathcal{A}(x))(n):=|\langle x,\varphi_{n}\rangle|^{2}$. Then
$\mathcal{A}$ is injective if and only if $\Phi$ satisfies the complement
property.
###### Proof.
We will prove both directions by obtaining the contrapositives.
($\Rightarrow$) Assume that $\Phi$ is not CP. Then there exists
$S\subseteq\\{1,\ldots,N\\}$ such that neither $\\{\varphi_{n}\\}_{n\in S}$
nor $\\{\varphi_{n}\\}_{n\in S^{\mathrm{c}}}$ spans $\mathbb{R}^{M}$. This
implies that there are nonzero vectors $u,v\in\mathbb{R}^{M}$ such that
$\langle u,\varphi_{n}\rangle=0$ for all $n\in S$ and $\langle
v,\varphi_{n}\rangle=0$ for all $n\in S^{\mathrm{c}}$. For each $n$, we then
have
$|\langle u\pm v,\varphi_{n}\rangle|^{2}=|\langle u,\varphi_{n}\rangle|^{2}\pm
2\operatorname{Re}\langle u,\varphi_{n}\rangle\overline{\langle
v,\varphi_{n}\rangle}+|\langle v,\varphi_{n}\rangle|^{2}=|\langle
u,\varphi_{n}\rangle|^{2}+|\langle v,\varphi_{n}\rangle|^{2}.$
Since $|\langle u+v,\varphi_{n}\rangle|^{2}=|\langle
u-v,\varphi_{n}\rangle|^{2}$ for every $n$, we have
$\mathcal{A}(u+v)=\mathcal{A}(u-v)$. Moreover, $u$ and $v$ are nonzero by
assumption, and so $u+v\neq\pm(u-v)$.
($\Leftarrow$) Assume that $\mathcal{A}$ is not injective. Then there exist
vectors $x,y\in\mathbb{R}^{M}$ such that $x\neq\pm y$ and
$\mathcal{A}(x)=\mathcal{A}(y)$. Taking $S:=\\{n:\langle
x,\varphi_{n}\rangle=-\langle y,\varphi_{n}\rangle\\}$, we have $\langle
x+y,\varphi_{n}\rangle=0$ for every $n\in S$. Otherwise when $n\in
S^{\mathrm{c}}$, we have $\langle x,\varphi_{n}\rangle=\langle
y,\varphi_{n}\rangle$ and so $\langle x-y,\varphi_{n}\rangle=0$. Furthermore,
both $x+y$ and $x-y$ are nontrivial since $x\neq\pm y$, and so neither
$\\{\varphi_{n}\\}_{n\in S}$ nor $\\{\varphi_{n}\\}_{n\in S^{\mathrm{c}}}$
spans $\mathbb{R}^{M}$. ∎
Note that [7] erroneously stated that the first part of the above proof also
gives that CP is necessary for injectivity in the complex case; the reader is
encouraged to spot the logical error. We wait to identify the error later in
this section so as to avoid spoilers. We will also give a correct proof of the
result in question. In the meantime, let’s characterize injectivity in the
complex case:
###### Theorem 4.
Consider $\Phi=\\{\varphi_{n}\\}_{n=1}^{N}\subseteq\mathbb{C}^{M}$ and the
mapping $\mathcal{A}\colon\mathbb{C}^{M}/\mathbb{T}\rightarrow\mathbb{R}^{N}$
defined by $(\mathcal{A}(x))(n):=|\langle x,\varphi_{n}\rangle|^{2}$. Viewing
$\\{\varphi_{n}\varphi_{n}^{*}u\\}_{n=1}^{N}$ as vectors in $\mathbb{R}^{2M}$,
denote
$S(u):=\operatorname{span}_{\mathbb{R}}\\{\varphi_{n}\varphi_{n}^{*}u\\}_{n=1}^{N}$.
Then the following are equivalent:
* (a)
$\mathcal{A}$ is injective.
* (b)
$\operatorname{dim}S(u)\geq 2M-1$ for every
$u\in\mathbb{C}^{M}\setminus\\{0\\}$.
* (c)
$S(u)=\operatorname{span}_{\mathbb{R}}\\{\mathrm{i}u\\}^{\perp}$ for every
$u\in\mathbb{C}^{M}\setminus\\{0\\}$.
Before proving this theorem, note that unlike the characterization in the real
case, it is not clear whether this characterization can be tested in finite
time; instead of being a statement about all (finitely many) partitions of
$\\{1,\ldots,N\\}$, this is a statement about all
$u\in\mathbb{C}^{M}\setminus\\{0\\}$. However, we can view this
characterization as an analog to the real case in some sense: In the real
case, the complement property is equivalent to having
$\operatorname{span}\\{\varphi_{n}\varphi_{n}^{*}u\\}_{n=1}^{N}=\mathbb{R}^{M}$
for all $u\in\mathbb{R}^{M}\setminus\\{0\\}$. As the following proof makes
precise, the fact that $\\{\varphi_{n}\varphi_{n}^{*}u\\}_{n=1}^{N}$ fails to
span all of $\mathbb{R}^{2M}$ is rooted in the fact that more information is
lost with phase in the complex case.
###### Proof of Theorem 4.
(a) $\Rightarrow$ (c): Suppose $\mathcal{A}$ is injective. We need to show
that $\\{\varphi_{n}\varphi_{n}^{*}u\\}_{n=1}^{N}$ spans the set of vectors
orthogonal to $\mathrm{i}u$. Here, orthogonality is with respect to the real
inner product, which can be expressed as $\langle
a,b\rangle_{\mathbb{R}}=\operatorname{Re}\langle a,b\rangle$. Note that
$|\langle u\pm v,\varphi_{n}\rangle|^{2}=|\langle u,\varphi_{n}\rangle|^{2}\pm
2\operatorname{Re}\langle
u,\varphi_{n}\rangle\langle\varphi_{n},v\rangle+|\langle
v,\varphi_{n}\rangle|^{2},$
and so subtraction gives
$|\langle u+v,\varphi_{n}\rangle|^{2}-|\langle
u-v,\varphi_{n}\rangle|^{2}=4\operatorname{Re}\langle
u,\varphi_{n}\rangle\langle\varphi_{n},v\rangle=4\langle\varphi_{n}\varphi_{n}^{*}u,v\rangle_{\mathbb{R}}.$
(1)
In particular, if the right-hand side of (1) is zero, then injectivity implies
that there exists some $\omega$ of unit modulus such that $u+v=\omega(u-v)$.
Since $u\neq 0$, we know $\omega\neq-1$, and so rearranging gives
$v=-\frac{1-\omega}{1+\omega}u=-\frac{(1-\omega)(1+\overline{\omega})}{|1+\omega|^{2}}u=-\frac{2\operatorname{Im}\omega}{|1+\omega|^{2}}~{}\mathrm{i}u.$
This means
$S(u)^{\perp}\subseteq\operatorname{span}_{\mathbb{R}}\\{\mathrm{i}u\\}$. To
prove $\operatorname{span}_{\mathbb{R}}\\{\mathrm{i}u\\}\subseteq
S(u)^{\perp}$, take $v=\alpha\mathrm{i}u$ for some $\alpha\in\mathbb{R}$ and
define $\omega:=\frac{1+\alpha\mathrm{i}}{1-\alpha\mathrm{i}}$, which
necessarily has unit modulus. Then
$u+v=u+\alpha\mathrm{i}u=(1+\alpha\mathrm{i})u=\frac{1+\alpha\mathrm{i}}{1-\alpha\mathrm{i}}(u-\alpha\mathrm{i}u)=\omega(u-v).$
Thus, the left-hand side of (1) is zero, meaning $v\in S(u)^{\perp}$.
(b) $\Leftrightarrow$ (c): First, (b) immediately follows from (c). For the
other direction, note that $\mathrm{i}u$ is necessarily orthogonal to every
$\varphi_{n}\varphi_{n}^{*}u$:
$\langle\varphi_{n}\varphi_{n}^{*}u,\mathrm{i}u\rangle_{\mathbb{R}}=\operatorname{Re}\langle\varphi_{n}\varphi_{n}^{*}u,\mathrm{i}u\rangle=\operatorname{Re}\langle
u,\varphi_{n}\rangle\langle\varphi_{n},\mathrm{i}u\rangle=-\operatorname{Re}\mathrm{i}|\langle
u,\varphi_{n}\rangle|^{2}=0.$
Thus, $\operatorname{span}_{\mathbb{R}}\\{\mathrm{i}u\\}\subseteq
S(u)^{\perp}$, and by (b), $\operatorname{dim}S(u)^{\perp}\leq 1$, both of
which gives (c).
(c) $\Rightarrow$ (a): This portion of the proof is inspired by Mukherjee’s
analysis in [33]. Suppose $\mathcal{A}(x)=\mathcal{A}(y)$. If $x=y$, we are
done. Otherwise, $x-y\neq 0$, and so we may apply (c) to $u=x-y$. First, note
that
$\langle\varphi_{n}\varphi_{n}^{*}(x-y),x+y\rangle_{\mathbb{R}}=\operatorname{Re}\langle\varphi_{n}\varphi_{n}^{*}(x-y),x+y\rangle=\operatorname{Re}(x+y)^{*}\varphi_{n}\varphi_{n}^{*}(x-y),$
and so expanding gives
$\langle\varphi_{n}\varphi_{n}^{*}(x-y),x+y\rangle_{\mathbb{R}}=\operatorname{Re}\Big{(}|\varphi_{n}^{*}x|^{2}-x^{*}\varphi_{n}\varphi_{n}^{*}y+y^{*}\varphi_{n}\varphi_{n}^{*}x-|\varphi_{n}^{*}y|^{2}\Big{)}=\operatorname{Re}\Big{(}-x^{*}\varphi_{n}\varphi_{n}^{*}y+\overline{x^{*}\varphi_{n}\varphi_{n}^{*}y}\Big{)}=0.$
Since $x+y\in
S(x-y)^{\perp}=\operatorname{span}_{\mathbb{R}}\\{\mathrm{i}(x-y)\\}$, there
exists $\alpha\in\mathbb{R}$ such that $x+y=\alpha\mathrm{i}(x-y)$, and so
rearranging gives $y=\frac{1-\alpha\mathrm{i}}{1+\alpha\mathrm{i}}x$, meaning
$y\equiv x\bmod\mathbb{T}$. ∎
The above theorem leaves a lot to be desired; it is still unclear what it
takes for a complex ensemble to yield injective intensity measurements. While
in pursuit of a more clear understanding, we established the following bizarre
characterization: A complex ensemble yields injective intensity measurements
precisely when it yields injective phase-only measurements (in some sense).
This is made more precise in the following theorem statement:
###### Theorem 5.
Consider $\Phi=\\{\varphi_{n}\\}_{n=1}^{N}\subseteq\mathbb{C}^{M}$ and the
mapping $\mathcal{A}\colon\mathbb{C}^{M}/\mathbb{T}\rightarrow\mathbb{R}^{N}$
defined by $(\mathcal{A}(x))(n):=|\langle x,\varphi_{n}\rangle|^{2}$. Then
$\mathcal{A}$ is injective if and only if the following statement holds: If
for every $n=1,\ldots,N$, either $\operatorname{arg}(\langle
x,\varphi_{n}\rangle^{2})=\operatorname{arg}(\langle
y,\varphi_{n}\rangle^{2})$ or one of the sides is not well-defined, then
$x=0$, $y=0$, or $y\equiv x\bmod\mathbb{R}\setminus\\{0\\}$.
###### Proof.
By Theorem 4, $\mathcal{A}$ is injective if and only if
$\forall
x\in\mathbb{C}^{M}\setminus\\{0\\},\qquad\operatorname{span}_{\mathbb{R}}\\{\varphi_{n}\varphi_{n}^{*}x\\}_{n=1}^{N}=\operatorname{span}_{\mathbb{R}}\\{\mathrm{i}x\\}^{\perp}.$
(2)
Taking orthogonal complements of both sides, note that regardless of
$x\in\mathbb{C}^{M}\setminus\\{0\\}$, we know
$\operatorname{span}_{\mathbb{R}}\\{\mathrm{i}x\\}$ is necessarily a subset of
$(\operatorname{span}_{\mathbb{R}}\\{\varphi_{n}\varphi_{n}^{*}x\\}_{n=1}^{N})^{\perp}$,
and so (2) is equivalent to
$\forall
x\in\mathbb{C}^{M}\setminus\\{0\\},\qquad\operatorname{Re}\langle\varphi_{n}\varphi_{n}^{*}x,\mathrm{i}y\rangle=0\quad\forall
n=1,\ldots,N\quad\Longrightarrow\quad y=0\text{ or }y\equiv
x\bmod\mathbb{R}\setminus\\{0\\}.$
Thus, we need to determine when $\operatorname{Im}\langle
x,\varphi_{n}\rangle\overline{\langle
y,\varphi_{n}\rangle}=\operatorname{Re}\langle\varphi_{n}\varphi_{n}^{*}x,\mathrm{i}y\rangle=0$.
We claim that this is true if and only if $\operatorname{arg}(\langle
x,\varphi_{n}\rangle^{2})=\operatorname{arg}(\langle
y,\varphi_{n}\rangle^{2})$ or one of the sides is not well-defined. To see
this, we substitute $a:=\langle x,\varphi_{n}\rangle$ and $b:=\langle
y,\varphi_{n}\rangle$. Then to complete the proof, it suffices to show that
$\operatorname{Im}a\overline{b}=0$ if and only if
$\operatorname{arg}(a^{2})=\operatorname{arg}(b^{2})$, $a=0$, or $b=0$.
($\Leftarrow$) If either $a$ or $b$ is zero, the result is immediate.
Otherwise, if
$2\operatorname{arg}(a)=\operatorname{arg}(a^{2})=\operatorname{arg}(b^{2})=2\operatorname{arg}(b)$,
then $2\pi$ divides $2(\operatorname{arg}(a)-\operatorname{arg}(b))$, and so
$\operatorname{arg}(a\overline{b})=\operatorname{arg}(a)-\operatorname{arg}(b)$
is a multiple of $\pi$. This implies that $a\overline{b}\in\mathbb{R}$, and so
$\operatorname{Im}a\overline{b}=0$.
($\Rightarrow$) Suppose $\operatorname{Im}a\overline{b}=0$. Taking the polar
decompositions $a=re^{\mathrm{i}\theta}$ and $b=se^{\mathrm{i}\phi}$, we
equivalently have that $rs\sin{(\theta-\phi)}=0$. Certainly, this can occur
whenever $r$ or $s$ is zero, i.e., $a=0$ or $b=0$. Otherwise, a difference
formula then gives $\sin{\theta}\cos{\phi}=\cos{\theta}\sin{\phi}$. From this,
we know that if $\theta$ is an odd integer multiple of $\pi/2$, then $\phi$ is as
well, and vice versa, in which case
$\operatorname{arg}(a^{2})=2\operatorname{arg}(a)=\pi=2\operatorname{arg}(b)=\operatorname{arg}(b^{2})$.
Else, we can divide both sides by $\cos{\theta}\cos{\phi}$ to obtain
$\tan{\theta}=\tan{\phi}$, from which it is evident that
$\theta\equiv\phi\bmod\pi$, and so
$\operatorname{arg}(a^{2})=2\operatorname{arg}(a)=2\operatorname{arg}(b)=\operatorname{arg}(b^{2})$.
∎
To be clear, it is unknown to the authors whether such phase-only measurements
arrive in any application (nor whether a corresponding reconstruction
algorithm is feasible), but we find it rather striking that injectivity in
this setting is equivalent to injectivity in ours. We will actually use this
result to (correctly) prove the necessity of CP for injectivity. First, we
need the following lemma, which is interesting in its own right:
###### Lemma 6.
Consider $\Phi=\\{\varphi_{n}\\}_{n=1}^{N}\subseteq\mathbb{C}^{M}$ and the
mapping $\mathcal{A}\colon\mathbb{C}^{M}/\mathbb{T}\rightarrow\mathbb{R}^{N}$
defined by $(\mathcal{A}(x))(n):=|\langle x,\varphi_{n}\rangle|^{2}$. If
$\mathcal{A}$ is injective, then the mapping
$\mathcal{B}\colon\mathbb{C}^{M}/\\{\pm 1\\}\rightarrow\mathbb{C}^{N}$ defined
by $(\mathcal{B}(x))(n):=\langle x,\varphi_{n}\rangle^{2}$ is also injective.
###### Proof.
Suppose $\mathcal{A}$ is injective. Then we have the following facts (one by
definition, and the other by Theorem 5):
* (i)
If $\forall n=1,\ldots,N$, $|\langle x,\varphi_{n}\rangle|^{2}=|\langle
y,\varphi_{n}\rangle|^{2}$, then $y\equiv x\bmod\mathbb{T}$.
* (ii)
If $\forall n=1,\ldots,N$, either $\operatorname{arg}(\langle
x,\varphi_{n}\rangle^{2})=\operatorname{arg}(\langle
y,\varphi_{n}\rangle^{2})$ or one of the sides is not well-defined, then
$x=0$, $y=0$, or $y\equiv x\bmod\mathbb{R}\setminus\\{0\\}$.
Now suppose we have $\langle x,\varphi_{n}\rangle^{2}=\langle
y,\varphi_{n}\rangle^{2}$ for all $n=1,\ldots,N$. Then their moduli and
arguments are also equal, and so (i) and (ii) both apply. Of course, $y\equiv
x\bmod\mathbb{T}$ implies $x=0$ if and only if $y=0$. Otherwise both are
nonzero, in which case there exists
$\omega\in\mathbb{T}\cap\mathbb{R}\setminus\\{0\\}=\\{\pm 1\\}$ such that
$y=\omega x$. In either case, $y\equiv x\bmod\\{\pm 1\\}$, so $\mathcal{B}$ is
injective. ∎
###### Theorem 7.
Consider $\Phi=\\{\varphi_{n}\\}_{n=1}^{N}\subseteq\mathbb{C}^{M}$ and the
mapping $\mathcal{A}\colon\mathbb{C}^{M}/\mathbb{T}\rightarrow\mathbb{R}^{N}$
defined by $(\mathcal{A}(x))(n):=|\langle x,\varphi_{n}\rangle|^{2}$. If
$\mathcal{A}$ is injective, then $\Phi$ satisfies the complement property.
Before giving the proof, let’s first divulge why the first part of the proof
of Theorem 3 does not suffice: It demonstrates that $u+v\neq\pm(u-v)$, but
fails to establish that $u+v\not\equiv u-v\bmod\mathbb{T}$; for instance, it
could very well be the case that $u+v=\mathrm{i}(u-v)$, and so injectivity
would not be violated in the complex case. Regardless, the following proof,
which leverages the injectivity of $\mathcal{B}$ modulo $\\{\pm 1\\}$,
resolves this issue.
###### Proof of Theorem 7.
Recall that if $\mathcal{A}$ is injective, then so is the mapping
$\mathcal{B}$ of Lemma 6. Therefore, it suffices to show that $\Phi$ is CP if
$\mathcal{B}$ is injective. To complete the proof, we will obtain the
contrapositive (note the similarity to the proof of Theorem 3). Suppose $\Phi$
is not CP. Then there exists $S\subseteq\\{1,\ldots,N\\}$ such that neither
$\\{\varphi_{n}\\}_{n\in S}$ nor $\\{\varphi_{n}\\}_{n\in S^{\mathrm{c}}}$
spans $\mathbb{C}^{M}$. This implies that there are nonzero vectors
$u,v\in\mathbb{C}^{M}$ such that $\langle u,\varphi_{n}\rangle=0$ for all
$n\in S$ and $\langle v,\varphi_{n}\rangle=0$ for all $n\in S^{\mathrm{c}}$.
For each $n$, we then have
$\langle u\pm v,\varphi_{n}\rangle^{2}=\langle u,\varphi_{n}\rangle^{2}\pm
2\langle u,\varphi_{n}\rangle\langle v,\varphi_{n}\rangle+\langle
v,\varphi_{n}\rangle^{2}=\langle u,\varphi_{n}\rangle^{2}+\langle
v,\varphi_{n}\rangle^{2}.$
Since $\langle u+v,\varphi_{n}\rangle^{2}=\langle u-v,\varphi_{n}\rangle^{2}$
for every $n$, we have $\mathcal{B}(u+v)=\mathcal{B}(u-v)$. Moreover, $u$ and
$v$ are nonzero by assumption, and so $u+v\neq\pm(u-v)$. ∎
Note that the complement property is necessary but not sufficient for
injectivity. To see this, consider measurement vectors $(1,0)$, $(0,1)$ and
$(1,1)$. These certainly satisfy the complement property, but
$\mathcal{A}((1,\mathrm{i}))=(1,1,2)=\mathcal{A}((1,-\mathrm{i}))$, despite
the fact that $(1,\mathrm{i})\not\equiv(1,-\mathrm{i})\bmod\mathbb{T}$; in
general, real measurement vectors fail to yield injective intensity
measurements in the complex setting since they do not distinguish complex
conjugates. Indeed, we have yet to find a “good” sufficient condition for
injectivity in the complex case. As an analogy for what we really want,
consider the notion of full spark: An ensemble
$\\{\varphi_{n}\\}_{n=1}^{N}\subseteq\mathbb{R}^{M}$ is said to be full spark
if every subcollection of $M$ vectors spans $\mathbb{R}^{M}$. It is easy to
see that full spark ensembles with $N\geq 2M-1$ necessarily satisfy the
complement property (thereby implying injectivity in the real case), and
furthermore, the notion of full spark is simple enough to admit deterministic
constructions [3, 34]. Deterministic measurement ensembles are particularly
desirable for the complex case, and so finding a good sufficient condition for
injectivity is an important problem that remains open.
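Full spark is also easy to test by brute force in small dimensions. Below is a sketch (ours, assuming numpy) that checks the definition directly; note that for $N\geq 2M-1$, any subset $S$ or its complement contains at least $M$ vectors, which a full spark ensemble guarantees will span, giving the complement property as claimed above.

```python
import itertools
import numpy as np

def is_full_spark(Phi, tol=1e-10):
    # Phi is M x N with real columns; full spark means every M columns span R^M.
    M, N = Phi.shape
    return all(np.linalg.matrix_rank(Phi[:, list(c)], tol=tol) == M
               for c in itertools.combinations(range(N), M))

# Example: 2M-1 = 3 vectors in R^2 with pairwise distinct directions are full spark.
print(is_full_spark(np.array([[1., 0., 1.], [0., 1., 1.]])))   # True
```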
### 2.2 Towards a rank-nullity theorem for phase retrieval
If you think of a matrix $\Phi$ as being built one column at a time, then the
rank-nullity theorem states that each column contributes to either the column
space or the null space. If the columns are then used as linear measurement
vectors (say we take measurements $y=\Phi^{*}x$ of a vector $x$), then the
column space of $\Phi$ gives the subspace that is actually sampled, and the
null space captures the algebraic nature of the measurements’ redundancy.
Therefore, an efficient sampling of an entire vector space would apply a
matrix $\Phi$ with a small null space and large column space (e.g., an
invertible square matrix). How do we find such a sampling with intensity
measurements? The following makes this question more precise:
###### Problem 8.
For any dimension $M$, what is the smallest number $N^{*}(M)$ of injective
intensity measurements, and how do we design such measurement vectors?
To be clear, this problem was completely solved in the real case by Balan,
Casazza and Edidin [7]. Indeed, Theorem 3 immediately implies that $2M-2$
intensity measurements are necessarily not injective, and furthermore that
$2M-1$ measurements are injective if and only if the measurement vectors are
full spark. As such, we will focus our attention to the complex case.
In the complex case, Problem 8 has some history in the quantum mechanics
literature. For example, [36] presents Wright’s conjecture that three
observables suffice to uniquely determine any pure state. In phase retrieval
parlance, the conjecture states that there exist unitary matrices $U_{1}$,
$U_{2}$ and $U_{3}$ such that $\Phi=[U_{1}~{}U_{2}~{}U_{3}]$ yields injective
intensity measurements (here, the measurement vectors are the columns of
$\Phi$). Note that Wright’s conjecture actually implies that $N^{*}(M)\leq
3M-2$; indeed, $U_{1}$ determines the norm (squared) of the signal, rendering
the last column of both $U_{2}$ and $U_{3}$ unnecessary. Finkelstein [22]
later proved that $N^{*}(M)\geq 3M-2$; combined with Wright’s conjecture, this
led many to believe that $N^{*}(M)=3M-2$ (for example, see [12]). However,
both this and Wright’s conjecture were recently disproved in [24], in which
Heinosaari, Mazzarella and Wolf invoked embedding theorems from differential
geometry to prove that
$N^{*}(M)\geq\left\\{\begin{array}[]{ll}4M-2\alpha(M-1)-3&\mbox{for all }M\\\
4M-2\alpha(M-1)-2&\mbox{if }M\mbox{ is odd and }\alpha(M-1)=2\bmod 4\\\
4M-2\alpha(M-1)-1&\mbox{if }M\mbox{ is odd and }\alpha(M-1)=3\bmod
4,\end{array}\right.$ (3)
where $\alpha(M-1)\leq\log_{2}(M)$ is the number of $1$’s in the binary
representation of $M-1$. By comparison, Balan, Casazza and Edidin [7] proved
that $N^{*}(M)\leq 4M-2$, and so we at least have the asymptotic expression
$N^{*}(M)=(4+o(1))M$.
At this point, we should clarify some intuition for $N^{*}(M)$ by explaining
the nature of these best known lower and upper bounds. First, the lower bound
(3) follows from an older result that complex projective space
$\mathbb{C}\mathbf{P}^{n}$ does not smoothly embed into
$\mathbb{R}^{4n-2\alpha(n)}$ (and other slight refinements which depend on
$n$); this is due to Mayer [31], but we highly recommend James’s survey on the
topic [26]. To prove (3) from this, suppose
$\mathcal{A}\colon\mathbb{C}^{M}/\mathbb{T}\rightarrow\mathbb{R}^{N}$ were
injective. Then $\mathcal{E}$ defined by
$\mathcal{E}(x):=\mathcal{A}(x)/\|x\|^{2}$ embeds $\mathbb{C}\mathbf{P}^{M-1}$
into $\mathbb{R}^{N}$, and as Heinosaari et al. show, the embedding is
necessarily smooth; considering $\mathcal{A}(x)$ is made up of rather simple
polynomials, the fact that $\mathcal{E}$ is smooth should not come as a
surprise. As such, the nonembedding result produces the best known lower
bound. To evaluate this bound, first note that Milgram [32] constructs an
embedding of $\mathbb{C}\mathbf{P}^{n}$ into $\mathbb{R}^{4n-\alpha(n)+1}$,
establishing the importance of the $\alpha(n)$ term, but the constructed
embedding does not correspond to an intensity measurement process. In order to
relate these embedding results to our problem, consider the real case: It is
known that for odd $n\geq 7$, real projective space $\mathbb{R}\mathbf{P}^{n}$
smoothly embeds into $\mathbb{R}^{2n-\alpha(n)+1}$ [35], which means the
analogous lower bound for the real case would necessarily be smaller than
$2(M-1)-\alpha(M-1)+1=2M-\alpha(M-1)-1<2M-1$. This indicates that the
$\alpha(M-1)$ term in (3) might be an artifact of the proof technique, rather
than of $N^{*}(M)$.
There is also some intuition to be gained from the upper bound $N^{*}(M)\leq
4M-2$, which Balan et al. proved by applying certain techniques from algebraic
geometry (some of which we will apply later in this section). In fact, their
result actually gives that $4M-2$ or more measurement vectors, if chosen
generically, will yield injective intensity measurements; here, generic is a
technical term involving the Zariski topology, but it can be thought of as
some undisclosed property which is satisfied with probability 1 by measurement
vectors drawn from continuous distributions. This leads us to think that
$N^{*}(M)$ generic measurement vectors might also yield injectivity.
The lemma that follows will help to refine our intuition for $N^{*}(M)$, and
it will also play a key role in the main theorems of this section (a similar
result appears in [24]). Before stating the result, define the real
$M^{2}$-dimensional space $\mathbb{H}^{M\times M}$ of self-adjoint $M\times M$
matrices; note that this is not a vector space over the complex numbers since
the diagonal of a self-adjoint matrix must be real. Given an ensemble of
measurement vectors $\\{\varphi_{n}\\}_{n=1}^{N}\subseteq\mathbb{C}^{M}$,
define the super analysis operator $\mathbf{A}\colon\mathbb{H}^{M\times
M}\rightarrow\mathbb{R}^{N}$ by $(\mathbf{A}H)(n)=\langle
H,\varphi_{n}\varphi_{n}^{*}\rangle_{\mathrm{HS}}$; here,
$\langle\cdot,\cdot\rangle_{\mathrm{HS}}$ denotes the Hilbert-Schmidt inner
product, which induces the Frobenius matrix norm. Note that $\mathbf{A}$ is a
linear operator, and yet
$(\mathbf{A}xx^{*})(n)=\langle
xx^{*},\varphi_{n}\varphi_{n}^{*}\rangle_{\mathrm{HS}}=\operatorname{Tr}[\varphi_{n}\varphi_{n}^{*}xx^{*}]=\operatorname{Tr}[\varphi_{n}^{*}xx^{*}\varphi_{n}]=\varphi_{n}^{*}xx^{*}\varphi_{n}=|\langle
x,\varphi_{n}\rangle|^{2}=(\mathcal{A}(x))(n).$
In words, the class of vectors identified with $x$ modulo $\mathbb{T}$ can be
“lifted” to $xx^{*}$, thereby linearizing the intensity measurement process at
the price of squaring the dimension of the vector space of interest; this
identification has been exploited by some of the most noteworthy strides in
modern phase retrieval [6, 14]. As the following lemma shows, this
identification can also be used to characterize injectivity:
###### Lemma 9.
$\mathcal{A}$ is not injective if and only if there exists a matrix of rank
$1$ or $2$ in the null space of $\mathbf{A}$.
###### Proof.
($\Rightarrow$) If $\mathcal{A}$ is not injective, then there exist
$x,y\in\mathbb{C}^{M}/\mathbb{T}$ with $x\not\equiv y\bmod\mathbb{T}$ such that
$\mathcal{A}(x)=\mathcal{A}(y)$. That is, $\mathbf{A}xx^{*}=\mathbf{A}yy^{*}$,
and so $xx^{*}-yy^{*}$ is in the null space of $\mathbf{A}$.
($\Leftarrow$) First, suppose there is a rank-$1$ matrix $H$ in the null space
of $\mathbf{A}$. Then there exists $x\in\mathbb{C}^{M}$ such that $H=xx^{*}$
and $(\mathcal{A}(x))(n)=(\mathbf{A}xx^{*})(n)=0=(\mathcal{A}(0))(n)$. But
$x\not\equiv 0\bmod\mathbb{T}$, and so $\mathcal{A}$ is not injective. Now
suppose there is a rank-$2$ matrix $H$ in the null space of $\mathbf{A}$. Then
by the spectral theorem, there are orthonormal $u_{1},u_{2}\in\mathbb{C}^{M}$
and nonzero $\lambda_{1}\geq\lambda_{2}$ such that
$H=\lambda_{1}u_{1}u_{1}^{*}+\lambda_{2}u_{2}u_{2}^{*}$. Since $H$ is in the
null space of $\mathbf{A}$, the following holds for every $n$:
$0=\langle
H,\varphi_{n}\varphi_{n}^{*}\rangle_{\mathrm{HS}}=\langle\lambda_{1}u_{1}u_{1}^{*}+\lambda_{2}u_{2}u_{2}^{*},\varphi_{n}\varphi_{n}^{*}\rangle_{\mathrm{HS}}=\lambda_{1}|\langle
u_{1},\varphi_{n}\rangle|^{2}+\lambda_{2}|\langle
u_{2},\varphi_{n}\rangle|^{2}.$ (4)
Taking $x:=|\lambda_{1}|^{1/2}u_{1}$ and $y:=|\lambda_{2}|^{1/2}u_{2}$, note
that $y\not\equiv x\bmod\mathbb{T}$ since they are nonzero and orthogonal. We
claim that $\mathcal{A}(x)=\mathcal{A}(y)$, which would complete the proof. If
$\lambda_{1}$ and $\lambda_{2}$ have the same sign, then by (4), $|\langle
x,\varphi_{n}\rangle|^{2}+|\langle y,\varphi_{n}\rangle|^{2}=0$ for every $n$,
meaning $|\langle x,\varphi_{n}\rangle|^{2}=0=|\langle
y,\varphi_{n}\rangle|^{2}$. Otherwise, $\lambda_{1}>0>\lambda_{2}$, and so
$xx^{*}-yy^{*}=\lambda_{1}u_{1}u_{1}^{*}+\lambda_{2}u_{2}u_{2}^{*}=H$ is in
the null space of $\mathbf{A}$, meaning
$\mathcal{A}(x)=\mathbf{A}xx^{*}=\mathbf{A}yy^{*}=\mathcal{A}(y)$. ∎
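As a numerical sanity check on the lifting identity $(\mathbf{A}xx^{*})(n)=(\mathcal{A}(x))(n)$, and as a building block for the tests that follow, here is a sketch (ours, assuming numpy) of the super analysis operator in coordinates with respect to an orthonormal basis of $\mathbb{H}^{M\times M}$; the basis choice and all function names are our own.

```python
import numpy as np

def selfadjoint_coords(X):
    # Coordinates of a self-adjoint M x M matrix X in an orthonormal basis of
    # H^{MxM}: the diagonal entries, then sqrt(2) * (Re, Im) of each entry
    # above the diagonal.
    M = X.shape[0]
    c = [X[i, i].real for i in range(M)]
    for i in range(M):
        for j in range(i + 1, M):
            c += [np.sqrt(2) * X[i, j].real, np.sqrt(2) * X[i, j].imag]
    return np.array(c)

def super_analysis_matrix(Phi):
    # N x M^2 real matrix whose n-th row represents phi_n phi_n^*.
    return np.array([selfadjoint_coords(np.outer(Phi[:, n], Phi[:, n].conj()))
                     for n in range(Phi.shape[1])])

# Check that A(x) equals the super analysis operator applied to the lift x x^*:
rng = np.random.default_rng(1)
M, N = 3, 7
Phi = rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))
x = rng.standard_normal(M) + 1j * rng.standard_normal(M)
lhs = np.abs(Phi.conj().T @ x) ** 2
rhs = super_analysis_matrix(Phi) @ selfadjoint_coords(np.outer(x, x.conj()))
print(np.allclose(lhs, rhs))   # True
```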
Lemma 9 indicates that we want the null space of $\mathbf{A}$ to avoid nonzero
matrices of rank $\leq 2$. Intuitively, this is easier when the “dimension” of
this set of matrices is small. To get some idea of this dimension, let’s count
real degrees of freedom. By the spectral theorem, almost every matrix in
$\mathbb{H}^{M\times M}$ of rank $\leq 2$ can be uniquely expressed as
$\lambda_{1}u_{1}u_{1}^{*}+\lambda_{2}u_{2}u_{2}^{*}$ with
$\lambda_{1}\leq\lambda_{2}$. Here, $(\lambda_{1},\lambda_{2})$ has two
degrees of freedom. Next, $u_{1}$ can be any vector in $\mathbb{C}^{M}$,
except its norm must be $1$. Also, since $u_{1}$ is only unique up to global
phase, we take its first entry to be nonnegative without loss of generality.
Given the norm and phase constraints, $u_{1}$ has a total of $2M-2$ real
degrees of freedom. Finally, $u_{2}$ has the same norm and phase constraints,
but it must also be orthogonal to $u_{1}$, that is, $\operatorname{Re}\langle
u_{2},u_{1}\rangle=\operatorname{Im}\langle u_{2},u_{1}\rangle=0$. As such,
$u_{2}$ has $2M-4$ real degrees of freedom. All together, we can expect the
set of matrices in question to have $2+(2M-2)+(2M-4)=4M-4$ real dimensions.
If the set $S$ of matrices of rank $\leq 2$ formed a subspace of
$\mathbb{H}^{M\times M}$ (it doesn’t), then we could expect the null space of
$\mathbf{A}$ to intersect that subspace nontrivially whenever
$\dim\operatorname{null}(\mathbf{A})+(4M-4)>\dim(\mathbb{H}^{M\times
M})=M^{2}$. By the rank-nullity theorem, this would indicate that injectivity
requires
$N\geq\operatorname{rank}(\mathbf{A})=M^{2}-\dim\operatorname{null}(\mathbf{A})\geq
4M-4.$ (5)
Of course, this logic is not technically valid since $S$ is not a subspace. It
is, however, a special kind of set: a real projective variety. To see this,
let’s first show that it is a real algebraic variety, specifically, the set of
members of $\mathbb{H}^{M\times M}$ for which all $3\times 3$ minors are zero.
Of course, every member of $S$ has this minor property. Next, we show that
members of $S$ are the only matrices with this property: If the rank of a
given matrix is $\geq 3$, then it has an $M\times 3$ submatrix of linearly
independent columns, and since the rank of its transpose is also $\geq 3$,
this $M\times 3$ submatrix must have $3$ linearly independent rows, thereby
implicating a full-rank $3\times 3$ submatrix. This variety is said to be
projective because it is closed under scalar multiplication. If $S$ were a
projective variety over an algebraically closed field (it’s not), then the
projective dimension theorem (Theorem 7.2 of [23]) says that $S$ intersects
$\operatorname{null}(\mathbf{A})$ nontrivially whenever the dimensions are
large enough: $\dim\operatorname{null}(\mathbf{A})+\dim
S>\dim\mathbb{H}^{M\times M}$, thereby implying that injectivity requires (5).
Unfortunately, this theorem is not valid when the field is $\mathbb{R}$; for
example, the cone defined by $x^{2}+y^{2}-z^{2}=0$ in $\mathbb{R}^{3}$ is a
projective variety of dimension $2$, but its intersection with the
$2$-dimensional $xy$-plane is trivial, despite the fact that $2+2>3$.
In the absence of a proof, we pose the natural conjecture:
###### The $4M-4$ Conjecture.
Consider $\Phi=\\{\varphi_{n}\\}_{n=1}^{N}\subseteq\mathbb{C}^{M}$ and the
mapping $\mathcal{A}\colon\mathbb{C}^{M}/\mathbb{T}\rightarrow\mathbb{R}^{N}$
defined by $(\mathcal{A}(x))(n):=|\langle x,\varphi_{n}\rangle|^{2}$. If
$M\geq 2$, then the following statements hold:
* (a)
If $N<4M-4$, then $\mathcal{A}$ is not injective.
* (b)
If $N\geq 4M-4$, then $\mathcal{A}$ is injective for generic $\Phi$.
For the sake of clarity, we now explicitly state what is meant by the word
“generic.” As indicated above, a real algebraic variety is the set of common
zeros of a finite set of polynomials with real coefficients. Taking all such
varieties in $\mathbb{R}^{n}$ to be closed sets defines the Zariski topology
on $\mathbb{R}^{n}$. Viewing $\Phi$ as a member of $\mathbb{R}^{2MN}$, then we
say a generic $\Phi$ is any member of some undisclosed nonempty Zariski-open
subset of $\mathbb{R}^{2MN}$. Since Zariski-open sets are either empty or
dense with full measure, genericity is a particularly strong property. As
such, another way to state part (b) of the $4M-4$ conjecture is “If $N\geq
4M-4$, then there exists a real algebraic variety $V\subseteq\mathbb{R}^{2MN}$
such that $\mathcal{A}$ is injective for every $\Phi\not\in V$.” Note that the
work of Balan, Casazza and Edidin [7] already proves this for $N\geq 4M-2$.
Also note that the analogous statement of (b) holds in the real case: Full
spark measurement vectors are generic, and they satisfy the complement
property whenever $N\geq 2M-1$.
At this point, it is fitting to mention that after we initially formulated
this conjecture, Bodmann presented a Vandermonde construction of $4M-4$
injective intensity measurements at a phase retrieval workshop at the Erwin
Schrödinger International Institute for Mathematical Physics. The result has
since been documented in [8], and it establishes one consequence of the $4M-4$
conjecture: $N^{*}(M)\leq 4M-4$.
As incremental progress toward solving the $4M-4$ conjecture, we offer the
following result:
###### Theorem 10.
The $4M-4$ Conjecture is true when $M=2$.
###### Proof.
(a) Since $\mathbf{A}$ is a linear map from $4$-dimensional real space to
$N$-dimensional real space, the null space of $\mathbf{A}$ is necessarily
nontrivial by the rank-nullity theorem. Furthermore, every nonzero member of
this null space has rank $1$ or $2$, and so Lemma 9 gives that $\mathcal{A}$
is not injective.
(b) Consider the following matrix formed by 16 real variables:
$\Phi(x)=\left[\begin{array}[]{cccc}x_{1}+\mathrm{i}x_{2}&x_{5}+\mathrm{i}x_{6}&x_{9}+\mathrm{i}x_{10}&x_{13}+\mathrm{i}x_{14}\\\
x_{3}+\mathrm{i}x_{4}&x_{7}+\mathrm{i}x_{8}&x_{11}+\mathrm{i}x_{12}&x_{15}+\mathrm{i}x_{16}\end{array}\right].$
(6)
If we denote the $n$th column of $\Phi(x)$ by $\varphi_{n}(x)$, then we have
that $\mathcal{A}$ is injective precisely when $x\in\mathbb{R}^{16}$ produces
a basis $\\{\varphi_{n}(x)\varphi_{n}(x)^{*}\\}_{n=1}^{4}$ for the space of
$2\times 2$ self-adjoint operators. Indeed, in this case $zz^{*}$ is uniquely
determined by $\mathbf{A}zz^{*}=\\{\langle
zz^{*},\varphi_{n}(x)\varphi_{n}(x)^{*}\rangle_{\mathrm{HS}}\\}_{n=1}^{4}=\mathcal{A}(z)$,
which in turn determines $z$ up to a global phase factor. Let $\mathbf{A}(x)$
be the $4\times 4$ matrix representation of the super analysis operator, whose
$n$th row gives the coordinates of $\varphi_{n}(x)\varphi_{n}(x)^{*}$ in terms
of some basis for $\mathbb{H}^{2\times 2}$, say
$\left\\{\left[\begin{array}[]{rr}1&0\\\
0&1\end{array}\right],\left[\begin{array}[]{rr}0&0\\\
0&1\end{array}\right],\frac{1}{\sqrt{2}}\left[\begin{array}[]{rr}0&1\\\
1&0\end{array}\right],\frac{1}{\sqrt{2}}\left[\begin{array}[]{rr}0&\mathrm{i}\\\
-\mathrm{i}&0\end{array}\right]\right\\}.$ (7)
Then
$V=\\{x:\operatorname{Re}\det\mathbf{A}(x)=\operatorname{Im}\det\mathbf{A}(x)=0\\}$
is a real algebraic variety in $\mathbb{R}^{16}$, and we see that
$\mathcal{A}$ is injective whenever $x\in V^{\mathrm{c}}$. Since
$V^{\mathrm{c}}$ is Zariski-open, it is either empty or dense with full
measure. In fact, $V^{\mathrm{c}}$ is not empty, since we may take $x$ such
that
$\Phi(x)=\left[\begin{array}[]{cccc}1&0&1&1\\\
0&1&1&\mathrm{i}\end{array}\right],$
as indicated in Theorem 4.1 of [5]. Therefore, $V^{\mathrm{c}}$ is dense with
full measure. ∎
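The nonemptiness of $V^{\mathrm{c}}$ in the proof above can also be confirmed numerically: the sketch below (ours, assuming numpy; the orthonormal basis differs from (7), which does not affect whether the determinant vanishes) computes the determinant of the $4\times 4$ super analysis matrix for the displayed ensemble and should print a value close to $-2$, in particular nonzero.

```python
import numpy as np

Phi = np.array([[1, 0, 1, 1],
                [0, 1, 1, 1j]], dtype=complex)   # the ensemble from the proof above

rows = []
for n in range(4):
    P = np.outer(Phi[:, n], Phi[:, n].conj())
    # Coordinates of phi_n phi_n^* in an orthonormal basis of H^{2x2}:
    rows.append([P[0, 0].real, P[1, 1].real,
                 np.sqrt(2) * P[0, 1].real, np.sqrt(2) * P[0, 1].imag])

print(np.linalg.det(np.array(rows)))   # approximately -2, hence nonzero
```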
We also have a proof for the $M=3$ case, but we first introduce Algorithm 1,
namely the HMW test for injectivity; we name it after Heinosaari, Mazzarella
and Wolf, who implicitly introduce this algorithm in their paper [24].
Algorithm 1 The HMW test for injectivity when $M=3$
Input: Measurement vectors
$\\{\varphi_{n}\\}_{n=1}^{N}\subseteq\mathbb{C}^{3}$
Output: Whether $\mathcal{A}$ is injective
Define $\mathbf{A}\colon\mathbb{H}^{3\times 3}\rightarrow\mathbb{R}^{N}$ such
that $\mathbf{A}H=\\{\langle
H,\varphi_{n}\varphi_{n}^{*}\rangle_{\textrm{HS}}\\}_{n=1}^{N}$ {assemble the
super analysis operator}
if $\operatorname{dim}\operatorname{null}(\mathbf{A})=0$ then
“INJECTIVE” {if $\mathbf{A}$ is injective, then $\mathcal{A}$ is injective}
else
Pick $H\in\operatorname{null}(\mathbf{A})$, $H\neq 0$
if $\operatorname{dim}\operatorname{null}(\mathbf{A})=1$ and $\det(H)\neq 0$
then
“INJECTIVE” {if $\mathbf{A}$ only maps nonsingular matrices to zero, then
$\mathcal{A}$ is injective}
else
“NOT INJECTIVE” {in the remaining case, $\mathbf{A}$ maps differences of
rank-$1$ matrices to zero}
end if
end if
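For completeness, here is a minimal numerical sketch of Algorithm 1 (ours, assuming numpy; it reuses the coordinate conventions of the super analysis sketch in Section 2.2, and all identifiers are our own). Applied to the columns of the ensemble (8) below, it should agree with the discussion following Theorem 11.

```python
import numpy as np

def selfadjoint_coords(X):
    # Same orthonormal coordinates as in the Section 2.2 sketch.
    M = X.shape[0]
    c = [X[i, i].real for i in range(M)]
    for i in range(M):
        for j in range(i + 1, M):
            c += [np.sqrt(2) * X[i, j].real, np.sqrt(2) * X[i, j].imag]
    return np.array(c)

def coords_to_selfadjoint(c, M):
    # Inverse of selfadjoint_coords.
    H = np.zeros((M, M), dtype=complex)
    idx = 0
    for i in range(M):
        H[i, i] = c[idx]; idx += 1
    for i in range(M):
        for j in range(i + 1, M):
            H[i, j] = (c[idx] + 1j * c[idx + 1]) / np.sqrt(2)
            H[j, i] = H[i, j].conjugate()
            idx += 2
    return H

def hmw_test(Phi, tol=1e-9):
    # Algorithm 1 for M = 3: A is injective iff null(bold A) is trivial, or is
    # spanned by a single nonsingular self-adjoint matrix.
    A = np.array([selfadjoint_coords(np.outer(Phi[:, n], Phi[:, n].conj()))
                  for n in range(Phi.shape[1])])        # N x 9 real matrix
    _, sing, Vt = np.linalg.svd(A, full_matrices=True)
    null_dim = 9 - int(np.sum(sing > tol * sing.max()))
    if null_dim == 0:
        return True
    if null_dim >= 2:
        return False
    H = coords_to_selfadjoint(Vt[-1], 3)                # spans the null space
    return bool(abs(np.linalg.det(H)) > tol)
```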
###### Theorem 11 (cf. Proposition 6 in [24]).
When $M=3$, the HMW test correctly determines whether $\mathcal{A}$ is
injective.
###### Proof.
First, if $\mathbf{A}$ is injective, then
$\mathcal{A}(x)=\mathbf{A}xx^{*}=\mathbf{A}yy^{*}=\mathcal{A}(y)$ if and only
if $xx^{*}=yy^{*}$, i.e., $y\equiv x\bmod\mathbb{T}$. Next, suppose
$\mathbf{A}$ has a $1$-dimensional null space. Then Lemma 9 gives that
$\mathcal{A}$ is injective if and only if the null space of $\mathbf{A}$ is
spanned by a matrix of full rank. Finally, if the dimension of the null space
is $2$ or more, then there exist linearly independent (nonzero) matrices $A$
and $B$ in this null space. If $\det{(A)}=0$, then it must have rank $1$ or
$2$, and so Lemma 9 gives that $\mathcal{A}$ is not injective. Otherwise,
consider the map
$f\colon t\mapsto\det{(A\cos{t}+B\sin{t})}\qquad\forall t\in[0,\pi].$
Since $f(0)=\det{(A)}$ and $f(\pi)=\det{(-A)}=(-1)^{3}\det{(A)}=-\det{(A)}$,
the intermediate value theorem gives that there exists $t_{0}\in[0,\pi]$ such
that $f(t_{0})=0$, i.e., the matrix $A\cos{t_{0}}+B\sin{t_{0}}$ is singular.
Moreover, this matrix is nonzero since $A$ and $B$ are linearly independent,
and so its rank is either $1$ or $2$. Lemma 9 then gives that $\mathcal{A}$ is
not injective. ∎
As an example, we may run the HMW test on the columns of the following matrix:
$\Phi=\left[\begin{array}[]{rrrrrrrr}2&~{}~{}1&~{}~{}1&~{}~{}0&~{}~{}0&~{}~{}0&~{}~{}1&~{}~{}\mathrm{i}\\\
-1&0&0&1&1&-1&-2&2\\\
0&1&-1&1&-1&2\mathrm{i}&\mathrm{i}&-1\end{array}\right].$ (8)
In this case, the null space of $\mathbf{A}$ is $1$-dimensional and spanned by
a nonsingular matrix. As such, $\mathcal{A}$ is injective. We will see that
the HMW test has a few important applications. First, we use it to prove the
$4M-4$ Conjecture in the $M=3$ case:
###### Theorem 12.
The $4M-4$ Conjecture is true when $M=3$.
###### Proof.
(a) Suppose $N<4M-4=8$. Then by the rank-nullity theorem, the super analysis
operator $\mathbf{A}\colon\mathbb{H}^{3\times 3}\rightarrow\mathbb{R}^{N}$ has
a null space of at least $2$ dimensions, and so by the HMW test, $\mathcal{A}$
is not injective.
(b) Consider a $3\times 8$ matrix of real variables $\Phi(x)$ similar to
(6). Then $\mathcal{A}$ is injective whenever
$x\in\mathbb{R}^{48}$ produces an ensemble
$\\{\varphi_{n}(x)\\}_{n=1}^{8}\subseteq\mathbb{C}^{3}$ that passes the HMW
test. To pass, the rank-nullity theorem says that the null space of the super
analysis operator had better be $1$-dimensional and spanned by a nonsingular
matrix. Let’s use an orthonormal basis for $\mathbb{H}^{3\times 3}$ similar to
(7) to find an $8\times 9$ matrix representation of the super analysis
operator $\mathbf{A}(x)$; it is easy to check that the entries of this matrix
(call it $\mathbf{A}(x)$) are polynomial functions of $x$. Consider the matrix
$B(x,y)=\left[\begin{array}[]{c}y^{\mathrm{T}}\\\
\mathbf{A}(x)\end{array}\right],$
and let $u(x)$ denote the vector of $(1,j)$th cofactors of $B(x,y)$. Then
$\langle y,u(x)\rangle=\det(B(x,y))$. This implies that $u(x)$ is in the null
space of $\mathbf{A}(x)$, since each row of $\mathbf{A}(x)$ is necessarily
orthogonal to $u(x)$.
We claim that $u(x)=0$ if and only if the dimension of the null space of
$\mathbf{A}(x)$ is $2$ or more, that is, the rows of $\mathbf{A}(x)$ are
linearly dependent. First, ($\Leftarrow$) is true since the entries of $u(x)$
are signed determinants of $8\times 8$ submatrices of $\mathbf{A}(x)$, which
are necessarily zero by the linear dependence of the rows. For
($\Rightarrow$), we have that $0=\langle y,0\rangle=\langle
y,u(x)\rangle=\det(B(x,y))$ for all $y\in\mathbb{R}^{9}$. That is, even if $y$
is nonzero and orthogonal to the rows of $\mathbf{A}(x)$, the rows of $B(x,y)$
are linearly dependent, and so the rows of $\mathbf{A}(x)$ must be linearly
dependent. This proves our intermediate claim.
We now use the claim to prove the result. The entries of $u(x)$ are
coordinates of a matrix $U(x)\in\mathbb{H}^{3\times 3}$ in the same basis as
before. Note that the entries of $U(x)$ are polynomials of $x$. Furthermore,
$\mathcal{A}$ is injective if and only if $\det U(x)\neq 0$. To see this,
observe three cases:
Case I: $U(x)=0$, i.e., $u(x)=0$, or equivalently,
$\dim\operatorname{null}(\mathbf{A}(x))\geq 2$. By the HMW test, $\mathcal{A}$
is not injective.
Case II: The null space is spanned by $U(x)\neq 0$, but $\det U(x)=0$. By the
HMW test, $\mathcal{A}$ is not injective.
Case III: The null space is spanned by $U(x)\neq 0$, and $\det U(x)\neq 0$. By
the HMW test, $\mathcal{A}$ is injective.
Defining the real algebraic variety $V=\\{x:\det
U(x)=0\\}\subseteq\mathbb{R}^{48}$, we then have that $\mathcal{A}$ is
injective precisely when $x\in V^{\mathrm{c}}$. Since $V^{\mathrm{c}}$ is
Zariski-open, it is either empty or dense with full measure, but it is
nonempty since (8) passes the HMW test. Therefore, $V^{\mathrm{c}}$ is dense
with full measure. ∎
Recall Wright’s conjecture: that there exist unitary matrices $U_{1}$, $U_{2}$
and $U_{3}$ such that $\Phi=[U_{1}~{}U_{2}~{}U_{3}]$ yields injective
intensity measurements. Also recall that Wright’s conjecture implies
$N^{*}(M)\leq 3M-2$. Again, both of these were disproved by Heinosaari et al.
[24] using deep results in differential geometry. Alternatively, Theorem 12
also disproves these in the case where $M=3$, since
$N^{*}(3)=4(3)-4=8>7=3(3)-2$.
Note that the HMW test can be used to test for injectivity in three dimensions
regardless of the number of measurement vectors. As such, it can be used to
evaluate ensembles of $3\times 3$ unitary matrices for quantum mechanics. For
example, consider the $3\times 3$ fractional discrete Fourier transform,
defined in [10] using discrete Hermite-Gaussian functions:
$F^{\alpha}=\frac{1}{6}\left[\begin{array}[]{ccc}3+\sqrt{3}&\sqrt{3}&\sqrt{3}\\\
\sqrt{3}&\frac{3-\sqrt{3}}{2}&\frac{3-\sqrt{3}}{2}\\\
\sqrt{3}&\frac{3-\sqrt{3}}{2}&\frac{3-\sqrt{3}}{2}\end{array}\right]+\frac{e^{\alpha\mathrm{i}\pi}}{6}\left[\begin{array}[]{ccc}3-\sqrt{3}&-\sqrt{3}&-\sqrt{3}\\\
-\sqrt{3}&\frac{3+\sqrt{3}}{2}&\frac{3+\sqrt{3}}{2}\\\
-\sqrt{3}&\frac{3+\sqrt{3}}{2}&\frac{3+\sqrt{3}}{2}\end{array}\right]+\frac{e^{\alpha\mathrm{i}\pi/2}}{2}\left[\begin{array}[]{rrr}0&0&0\\\
0&1&-1\\\ 0&-1&1\end{array}\right]$
It can be shown by the HMW test that $\Phi=[I~{}F^{1/2}~{}F~{}F^{3/2}]$ yields
injective intensity measurements. This leads to the following refinement of
Wright’s conjecture:
###### Conjecture 13.
Let $F$ denote the $M\times M$ discrete fractional Fourier transform defined
in [10]. Then for every $M\geq 3$, $\Phi=[I~{}F^{1/2}~{}F~{}F^{3/2}]$ yields
injective intensity measurements.
This conjecture can be viewed as the discrete analog to the work of Jaming
[27], in which ensembles of continuous fractional Fourier transforms are
evaluated for injectivity.
## 3 Stability
### 3.1 Stability in the worst case
As far as applications are concerned, the stability of reconstruction is
perhaps the most important consideration. To date, the only known stability
results come from PhaseLift [14], the polarization method [2], and a very
recent paper of Eldar and Mendelson [19]. This last paper focuses on the real
case, and analyzes how well subgaussian random measurement vectors distinguish
signals, thereby yielding some notion of stability which is independent of the
reconstruction algorithm used. In particular, given independent random
measurement vectors $\\{\varphi_{n}\\}_{n=1}^{N}\subseteq\mathbb{R}^{M}$,
Eldar and Mendelson evaluated measurement separation by finding a constant $C$
such that
$\|\mathcal{A}(x)-\mathcal{A}(y)\|_{1}\geq
C\|x-y\|_{2}\|x+y\|_{2}\qquad\forall x,y\in\mathbb{R}^{M},$ (9)
where $\mathcal{A}\colon\mathbb{R}^{M}\rightarrow\mathbb{R}^{N}$ is the
intensity measurement process defined by $(\mathcal{A}(x))(n):=|\langle
x,\varphi_{n}\rangle|^{2}$. With this, we can say that if $\mathcal{A}(x)$ and
$\mathcal{A}(y)$ are close, then $x$ must be close to either $\pm y$, and even
closer for larger $C$. By the contrapositive, distant signals will not be
confused in the measurement domain because $\mathcal{A}$ does a good job of
separating them.
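To get a feel for the constant $C$ in (9), one can sample random pairs of signals and record the smallest observed ratio; this only over-estimates $C$, since the true constant is an infimum over all pairs. The following Monte Carlo sketch (ours, assuming numpy; the parameters are arbitrary) illustrates the idea for real Gaussian measurement vectors.

```python
import numpy as np

rng = np.random.default_rng(0)
M, N, trials = 8, 48, 2000
Phi = rng.standard_normal((M, N))            # real Gaussian measurement vectors

def intensity(x):
    return (Phi.T @ x) ** 2                  # (A(x))(n) = |<x, phi_n>|^2

ratios = []
for _ in range(trials):
    x, y = rng.standard_normal(M), rng.standard_normal(M)
    num = np.linalg.norm(intensity(x) - intensity(y), ord=1)
    den = np.linalg.norm(x - y) * np.linalg.norm(x + y)
    ratios.append(num / den)

print("smallest observed ratio (an over-estimate of C):", min(ratios))
```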
One interesting feature of (9) is that increasing the lengths of the
measurement vectors $\\{\varphi_{n}\\}_{n=1}^{N}$ will in turn increase $C$,
meaning the measurements are better separated. As such, for any given
magnitude of noise, one can simply amplify the measurement process so as to
drown out the noise and ensure stability. However, such amplification could be
rather expensive, and so this motivates a different notion of stability—one
that is invariant to how the measurement ensemble is scaled. One approach is
to build on intuition from Lemma 9; that is, a super analysis operator is
intuitively more stable if its null space is distant from all rank-$2$
operators simultaneously; since this null space is invariant to how the
measurement vectors are scaled, this is one prospective (and particularly
geometric) notion of stability. In this section, we will focus on another
alternative. Note that $\operatorname{d}(x,y):=\min\\{\|x-y\|,\|x+y\|\\}$
defines a metric on $\mathbb{R}^{M}/\\{\pm 1\\}$, and consider the following:
###### Definition 14.
We say $f\colon\mathbb{R}^{M}/\\{\pm 1\\}\rightarrow\mathbb{R}^{N}$ is
$C$-stable if for every $\mathrm{SNR}>0$, there exists an estimator
$g\colon\mathbb{R}^{N}\rightarrow\mathbb{R}^{M}/\\{\pm 1\\}$ such that for
every nonzero signal $x\in\mathbb{R}^{M}/\\{\pm 1\\}$ and adversarial noise
term $z$ with $\|z\|^{2}\leq\|f(x)\|^{2}/\mathrm{SNR}$, the relative error in
reconstruction satisfies
$\frac{\operatorname{d}\big{(}g(f(x)+z),x\big{)}}{\|x\|}\leq\frac{C}{\sqrt{\mathrm{SNR}}}.$
According to this definition, $f$ is more stable when $C$ is smaller. Also,
because of the $\mathrm{SNR}$ (signal-to-noise ratio) model, $f$ is $C$-stable
if and only if every nonzero multiple of $f$ is also $C$-stable. Indeed,
taking $\tilde{f}:=cf$ for some nonzero scalar $c$, then for every adversarial
noise term $\tilde{z}$ which is admissible for $\tilde{f}(x)$ and
$\mathrm{SNR}$, we have that $z:=\tilde{z}/c$ is admissible for $f(x)$ and
$\mathrm{SNR}$; as such, $\tilde{f}$ inherits $f$’s $C$-stability by using the
estimator $\tilde{g}$ defined by $\tilde{g}(y):=g(y/c)$. Overall, this notion
of stability offers the invariance to scaling we originally desired. With
this, if we find a measurement process $f$ which is $C$-stable with minimal
$C$, at that point, we can take advantage of noise with bounded magnitude by
amplifying $f$ (and thereby effectively increasing $\mathrm{SNR}$) until the
relative error in reconstruction is tolerable.
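To fix ideas, the following is a minimal numerical sketch (in Python/NumPy; the random ensemble, sizes, and seed are ours and not part of the original analysis) of the intensity measurement map $\mathcal{A}$, the metric $\operatorname{d}$ on $\mathbb{R}^{M}/\\{\pm 1\\}$, and the admissibility condition on the noise from Definition 14.

```python
import numpy as np

def intensity_map(Phi):
    """Return the map A with (A(x))(n) = |<x, phi_n>|^2, phi_n the columns of Phi."""
    return lambda x: np.abs(Phi.T @ x) ** 2

def d(x, y):
    """Metric on R^M / {+-1}: d(x, y) = min(||x - y||, ||x + y||)."""
    return min(np.linalg.norm(x - y), np.linalg.norm(x + y))

rng = np.random.default_rng(0)
M, N = 4, 12                                   # illustrative sizes, not from the text
Phi = rng.standard_normal((M, N))              # arbitrary measurement ensemble
A = intensity_map(Phi)

x = rng.standard_normal(M)
SNR = 100.0
z = rng.standard_normal(N)
z *= np.linalg.norm(A(x)) / (np.sqrt(SNR) * np.linalg.norm(z))   # ||z||^2 = ||A(x)||^2 / SNR

# A C-stable map admits an estimator g with d(g(A(x) + z), x) / ||x|| <= C / sqrt(SNR).
print(d(x, -x), np.linalg.norm(A(x) - A(-x)))  # 0, 0: the global sign is invisible to A
```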
Now that we have a notion of stability, we provide a sufficient condition:
###### Theorem 15.
Suppose $f$ is bilipschitz, that is, there exist constants
$0<\alpha\leq\beta<\infty$ such that
$\alpha\operatorname{d}(x,y)\leq\|f(x)-f(y)\|\leq\beta\operatorname{d}(x,y)\qquad\forall
x,y\in\mathbb{R}^{M}/\\{\pm 1\\}.$
If $f(0)=0$, then $f$ is $\frac{2\beta}{\alpha}$-stable.
###### Proof.
Consider the projection function
$P\colon\mathbb{R}^{N}\rightarrow\mathbb{R}^{N}$ defined by
$P(y):=\underset{y^{\prime}\in\operatorname{range}(f)}{\arg\min}\|y^{\prime}-y\|\qquad\forall
y\in\mathbb{R}^{N}.$
In cases where the minimizer is not unique, we will pick one of them to be
$P(y)$. For $P$ to be well-defined, we claim it suffices for
$\operatorname{range}(f)$ to be closed. Indeed, this ensures that a minimizer
always exists; since $0\in\operatorname{range}(f)$, any prospective minimizer
must be no farther from $y$ than $0$ is, meaning we can equivalently minimize
over the intersection of $\operatorname{range}(f)$ and the closed ball of
radius $\|y\|$ centered at $y$; this intersection is compact, and so a
minimizer necessarily exists. In order to avoid using the axiom of choice, we
also want a systematic method of breaking ties when the minimizer is not
unique, but this can be done using lexicographic ideas provided
$\operatorname{range}(f)$ is closed.
We now show that $\operatorname{range}(f)$ is, in fact, closed. Pick a
convergent sequence
$\\{y_{n}\\}_{n=1}^{\infty}\subseteq\operatorname{range}(f)$. This sequence is
necessarily Cauchy, which means the corresponding sequence of inverse images
$\\{x_{n}\\}_{n=1}^{\infty}\subseteq\mathbb{R}^{M}/\\{\pm 1\\}$ is also Cauchy
(using the lower Lipschitz bound $\alpha>0$). Arbitrarily pick a
representative $z_{n}\in\mathbb{R}^{M}$ for each $x_{n}$. Then
$\\{z_{n}\\}_{n=1}^{\infty}$ is bounded, and thus has a subsequence that
converges to some $z\in\mathbb{R}^{M}$. Denote $x:=\\{\pm
z\\}\in\mathbb{R}^{M}/\\{\pm 1\\}$. Then
$\operatorname{d}(x_{n},x)\leq\|z_{n}-z\|$, and so
$\\{x_{n}\\}_{n=1}^{\infty}$ has a subsequence which converges to $x$. Since
$\\{x_{n}\\}_{n=1}^{\infty}$ is also Cauchy, we therefore have
$x_{n}\rightarrow x$. Then the upper Lipschitz bound $\beta<\infty$ gives that
$f(x)\in\operatorname{range}(f)$ is the limit of $\\{y_{n}\\}_{n=1}^{\infty}$.
Now that we know $P$ is well-defined, we continue. Since $\alpha>0$, we know
$f$ is injective, and so we can take $g:=f^{-1}\circ P$. In fact,
$\alpha^{-1}$ is a Lipschitz bound for $f^{-1}$, implying
$\operatorname{d}\big{(}g(f(x)+z),x\big{)}=\operatorname{d}\Big{(}f^{-1}\big{(}P(f(x)+z)\big{)},f^{-1}\big{(}f(x)\big{)}\Big{)}\leq\alpha^{-1}\|P(f(x)+z)-f(x)\|.$
(10)
Furthermore, the triangle inequality and the definition of $P$ together give
$\|P(f(x)+z)-f(x)\|\leq\|P(f(x)+z)-(f(x)+z)\|+\|z\|\leq\|f(x)-(f(x)+z)\|+\|z\|=2\|z\|.$
(11)
Combining (10) and (11) then gives
$\frac{\operatorname{d}\big{(}g(f(x)+z),x\big{)}}{\|x\|}\leq
2\alpha^{-1}\frac{\|z\|}{\|x\|}\leq\frac{2\alpha^{-1}}{\sqrt{\mathrm{SNR}}}\frac{\|f(x)\|}{\|x\|}=\frac{2\alpha^{-1}}{\sqrt{\mathrm{SNR}}}\frac{\|f(x)-f(0)\|}{\|x-0\|}\leq\frac{2\beta/\alpha}{\sqrt{\mathrm{SNR}}},$
as desired. ∎
Note that the “project-and-invert” estimator we used to demonstrate stability
is far from new. For example, if the noise were modeled as Gaussian random,
then project-and-invert is precisely the maximum likelihood estimator.
However, stochastic noise models warrant a much deeper analysis, since in this
regime, one is often concerned with the bias and variance of estimates. As
such, we will investigate these issues in the next section. Another example of
project-and-invert is the Moore-Penrose pseudoinverse of an $N\times M$ matrix
$A$ of rank $M$. Using the obvious reformulation of $C$-stable in this linear
case, it can be shown that $C$ is the condition number of $A$, meaning
$\alpha$ and $\beta$ are analogous to the smallest and largest singular
values. The extra factor of $2$ in the stability constant of Theorem 15 is an
artifact of the nonlinear setting: For the sake of illustration, suppose
$\operatorname{range}(f)$ is the unit circle and $f(x)=(-1,0)$ but
$z=(1+\varepsilon,0)$; then $P(f(x)+z)=(1,0)$, which is just shy of $2\|z\|$
away from $f(x)$. This sort of behavior is not exhibited in the linear case,
in which $\operatorname{range}(f)$ is a subspace.
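The factor-of-$2$ illustration above is easy to reproduce numerically; here is a short sketch (the circle and the two points are exactly those of the illustration, and everything else is incidental).

```python
import numpy as np

eps = 1e-3
fx = np.array([-1.0, 0.0])        # f(x), a point on the unit circle range(f)
z = np.array([1.0 + eps, 0.0])    # adversarial noise pushing just past the origin
y = fx + z                        # noisy measurement, slightly to the right of the origin

P_y = y / np.linalg.norm(y)       # projection onto the unit circle

print(np.linalg.norm(P_y - fx))   # approximately 2
print(2 * np.linalg.norm(z))      # approximately 2 + 2*eps, so the projection error is just shy of 2*||z||
```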
Having established that the bilipschitz condition suffices for stability, we now note
that $\mathcal{A}$ is not bilipschitz. In fact, more generally, $\mathcal{A}$
fails to satisfy any Hölder condition. To see this, pick some nonzero
measurement vector $\varphi_{n}$ and scalars $C>0$ and $\alpha\geq 0$. Then
$\displaystyle\frac{\|\mathcal{A}((C+1)\varphi_{n})-\mathcal{A}(\varphi_{n})\|}{\operatorname{d}((C+1)\varphi_{n},\varphi_{n})^{\alpha}}$
$\displaystyle=\frac{1}{\|C\varphi_{n}\|^{\alpha}}\bigg{(}\sum_{n^{\prime}=1}^{N}\Big{(}|\langle(C+1)\varphi_{n},\varphi_{n^{\prime}}\rangle|^{2}-|\langle\varphi_{n},\varphi_{n^{\prime}}\rangle|^{2}\Big{)}^{2}\bigg{)}^{1/2}$
$\displaystyle=\frac{(C+1)^{2}-1}{C^{\alpha}}\frac{\|\mathcal{A}(\varphi_{n})\|}{\|\varphi_{n}\|^{\alpha}}.$
Furthermore,
$\|\mathcal{A}(\varphi_{n})\|\geq|(\mathcal{A}(\varphi_{n}))(n)|=\|\varphi_{n}\|^{4}>0$,
while $\frac{(C+1)^{2}-1}{C^{\alpha}}$ diverges as $C\rightarrow\infty$,
assuming $\alpha\leq 1$; when $\alpha>1$, it also diverges as $C\rightarrow
0$, but this case is not interesting for infamous reasons [30].
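This failure of any Hölder condition is also easy to observe numerically; the following sketch (with an arbitrary Gaussian ensemble of our choosing and Hölder exponent $1$, i.e., the Lipschitz case) shows the ratio growing with $C$.

```python
import numpy as np

rng = np.random.default_rng(6)
M, N = 4, 10
Phi = rng.standard_normal((M, N))
A = lambda x: np.abs(Phi.T @ x) ** 2
phi0 = Phi[:, 0]                                 # the measurement vector used in the argument

for C in [1.0, 10.0, 100.0, 1000.0]:
    num = np.linalg.norm(A((C + 1) * phi0) - A(phi0))
    den = np.linalg.norm(C * phi0)               # = d((C+1)*phi0, phi0)^1
    print(C, num / den)                          # the ratio grows roughly linearly in C
```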
All is not lost, however. As we will see, with this notion of stability, it
happens to be more convenient to consider $\sqrt{\mathcal{A}}$, defined
entrywise by $(\sqrt{\mathcal{A}}(x))(n)=|\langle x,\varphi_{n}\rangle|$.
Considering Theorem 15, we are chiefly interested in the optimal constants
$0<\alpha\leq\beta<\infty$ for which
$\alpha\operatorname{d}(x,y)\leq\|\sqrt{\mathcal{A}}(x)-\sqrt{\mathcal{A}}(y)\|\leq\beta\operatorname{d}(x,y)\qquad\forall
x,y\in\mathbb{R}^{M}/\\{\pm 1\\}.$ (12)
In particular, Theorem 15 guarantees more stability when $\alpha$ and $\beta$
are closer together; this indicates that when suitably scaled, we want
$\sqrt{\mathcal{A}}$ to act as a near-isometry, despite being a nonlinear
function. The following lemma gives the upper Lipschitz constant:
###### Lemma 16.
The upper Lipschitz constant for $\sqrt{\mathcal{A}}$ is
$\beta=\|\Phi^{*}\|_{2}$.
###### Proof.
By the reverse triangle inequality, we have
$\big{|}|a|-|b|\big{|}\leq\min\big{\\{}|a-b|,|a+b|\big{\\}}\qquad\forall
a,b\in\mathbb{R}.$
Thus, for all $x,y\in\mathbb{R}^{M}/\\{\pm 1\\}$,
$\displaystyle\|\sqrt{\mathcal{A}}(x)-\sqrt{\mathcal{A}}(y)\|^{2}$
$\displaystyle=\sum_{n=1}^{N}\big{|}|\langle x,\varphi_{n}\rangle|-|\langle
y,\varphi_{n}\rangle|\big{|}^{2}$
$\displaystyle\leq\sum_{n=1}^{N}\bigg{(}\min\Big{\\{}|\langle
x-y,\varphi_{n}\rangle|,|\langle x+y,\varphi_{n}\rangle|\Big{\\}}\bigg{)}^{2}$
$\displaystyle\leq\min\Big{\\{}\|\Phi^{*}(x-y)\|^{2},\|\Phi^{*}(x+y)\|^{2}\Big{\\}}$
$\displaystyle\leq\|\Phi^{*}\|_{2}^{2}\big{(}\operatorname{d}(x,y)\big{)}^{2}.$
(13)
Furthermore, picking a nonzero $x\in\mathbb{R}^{M}$ such that
$\|\Phi^{*}x\|=\|\Phi^{*}\|_{2}\|x\|$ gives
$\|\sqrt{\mathcal{A}}(x)-\sqrt{\mathcal{A}}(0)\|=\|\sqrt{\mathcal{A}}(x)\|=\|\Phi^{*}x\|=\|\Phi^{*}\|_{2}\|x\|=\|\Phi^{*}\|_{2}\operatorname{d}(x,0),$
thereby achieving equality in (13). ∎
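As a sanity check of Lemma 16, the following sketch (with an arbitrary Gaussian ensemble of our choosing) verifies empirically that no sampled pair of signals violates the upper Lipschitz bound $\|\Phi^{*}\|_{2}$.

```python
import numpy as np

rng = np.random.default_rng(1)
M, N = 5, 20
Phi = rng.standard_normal((M, N))

def sqrtA(x):
    """Entrywise |<x, phi_n>|, i.e. the map sqrt(A)."""
    return np.abs(Phi.T @ x)

def d(x, y):
    return min(np.linalg.norm(x - y), np.linalg.norm(x + y))

beta = np.linalg.norm(Phi.T, 2)      # operator norm ||Phi^*||_2

# Empirical Lipschitz ratios should never exceed beta (Lemma 16).
ratios = []
for _ in range(10000):
    x, y = rng.standard_normal(M), rng.standard_normal(M)
    if d(x, y) > 1e-12:
        ratios.append(np.linalg.norm(sqrtA(x) - sqrtA(y)) / d(x, y))
print(max(ratios) <= beta + 1e-9, beta)
```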
The lower Lipschitz bound is much more difficult to determine. Our approach to
analyzing this bound is based on the following definition:
###### Definition 17.
We say an $M\times N$ matrix $\Phi$ satisfies the $\sigma$-strong complement
property ($\sigma$-SCP) if
$\max\big{\\{}\lambda_{\mathrm{min}}(\Phi_{S}\Phi_{S}^{*}),\lambda_{\mathrm{min}}(\Phi_{S^{\mathrm{c}}}\Phi_{S^{\mathrm{c}}}^{*})\big{\\}}\geq\sigma^{2}$
for every $S\subseteq\\{1,\ldots,N\\}$.
This is a numerical version of the complement property we discussed earlier.
It bears some resemblance to other matrix properties, namely combinatorial
properties regarding the conditioning of submatrices, e.g., the restricted
isometry property [11], the Kadison-Singer problem [15] and numerically
erasure-robust frames [21]. We are interested in SCP because it is very
related to the lower Lipschitz bound in (12):
###### Theorem 18.
The lower Lipschitz constant for $\sqrt{\mathcal{A}}$ satisfies
$\sigma\leq\alpha\leq\sqrt{2}\sigma,$
where $\sigma$ is the largest scalar for which $\Phi$ has the $\sigma$-strong
complement property.
###### Proof.
By analogy with the proof of Theorem 3, we start by proving the upper bound.
Pick $\varepsilon>0$ and note that $\Phi$ is not $(\sigma+\varepsilon)$-SCP.
Then there exists $S\subseteq\\{1,\ldots,N\\}$ such that both
$\lambda_{\mathrm{min}}(\Phi_{S}\Phi_{S}^{*})<(\sigma+\varepsilon)^{2}$ and
$\lambda_{\mathrm{min}}(\Phi_{S^{\mathrm{c}}}\Phi_{S^{\mathrm{c}}}^{*})<(\sigma+\varepsilon)^{2}$.
This implies that there exist unit (eigen) vectors $u,v\in\mathbb{R}^{M}$ such
that $\|\Phi_{S}^{*}u\|<(\sigma+\varepsilon)\|u\|$ and
$\|\Phi_{S^{\mathrm{c}}}^{*}v\|<(\sigma+\varepsilon)\|v\|$. Taking $x:=u+v$
and $y:=u-v$ then gives
$\displaystyle\|\sqrt{\mathcal{A}}(x)-\sqrt{\mathcal{A}}(y)\|^{2}$
$\displaystyle=\sum_{n=1}^{N}\big{|}|\langle u+v,\varphi_{n}\rangle|-|\langle
u-v,\varphi_{n}\rangle|\big{|}^{2}$ $\displaystyle=\sum_{n\in
S}\big{|}|\langle u+v,\varphi_{n}\rangle|-|\langle
u-v,\varphi_{n}\rangle|\big{|}^{2}+\sum_{n\in S^{\mathrm{c}}}\big{|}|\langle
u+v,\varphi_{n}\rangle|-|\langle u-v,\varphi_{n}\rangle|\big{|}^{2}$
$\displaystyle\leq 4\sum_{n\in S}|\langle
u,\varphi_{n}\rangle|^{2}+4\sum_{n\in S^{\mathrm{c}}}|\langle
v,\varphi_{n}\rangle|^{2},$
where the last step follows from the reverse triangle inequality. Next, we
apply our assumptions on $u$ and $v$:
$\displaystyle\|\sqrt{\mathcal{A}}(x)-\sqrt{\mathcal{A}}(y)\|^{2}$
$\displaystyle\leq
4\big{(}\|\Phi_{S}^{*}u\|^{2}+\|\Phi_{S^{\mathrm{c}}}^{*}v\|^{2}\big{)}$
$\displaystyle<4(\sigma+\varepsilon)^{2}\big{(}\|u\|^{2}+\|v\|^{2}\big{)}=8(\sigma+\varepsilon)^{2}\min\big{\\{}\|u\|^{2},\|v\|^{2}\big{\\}}=2(\sigma+\varepsilon)^{2}\big{(}\operatorname{d}(x,y)\big{)}^{2}.$
Thus, $\alpha<\sqrt{2}(\sigma+\varepsilon)$ for all $\varepsilon>0$, and so
$\alpha\leq\sqrt{2}\sigma$.
Next, to prove the lower bound, take $\varepsilon>0$ and pick
$x,y\in\mathbb{R}^{M}/\\{\pm 1\\}$ such that
$(\alpha+\varepsilon)\operatorname{d}(x,y)>\|\sqrt{\mathcal{A}}(x)-\sqrt{\mathcal{A}}(y)\|.$
We will show that $\Phi$ is not $(\alpha+\varepsilon)$-SCP. To this end, pick
$S:=\\{n:\operatorname{sign}\langle
x,\varphi_{n}\rangle=-\operatorname{sign}\langle y,\varphi_{n}\rangle\\}$ and
define $u:=x+y$ and $v:=x-y$. Then the definition of $S$ gives
$\|\Phi_{S}^{*}u\|^{2}=\sum_{n\in S}|\langle x,\varphi_{n}\rangle+\langle
y,\varphi_{n}\rangle|^{2}=\sum_{n\in S}\big{|}|\langle
x,\varphi_{n}\rangle|-|\langle y,\varphi_{n}\rangle|\big{|}^{2},$
and similarly $\|\Phi_{S^{\mathrm{c}}}^{*}v\|^{2}=\sum_{n\in
S^{\mathrm{c}}}\big{|}|\langle x,\varphi_{n}\rangle|-|\langle
y,\varphi_{n}\rangle|\big{|}^{2}$. Adding these together then gives
$\|\Phi_{S}^{*}u\|^{2}+\|\Phi_{S^{\mathrm{c}}}^{*}v\|^{2}=\sum_{n=1}^{N}\big{|}|\langle
x,\varphi_{n}\rangle|-|\langle
y,\varphi_{n}\rangle|\big{|}^{2}=\|\sqrt{\mathcal{A}}(x)-\sqrt{\mathcal{A}}(y)\|^{2}<(\alpha+\varepsilon)^{2}\big{(}\operatorname{d}(x,y)\big{)}^{2},$
implying both $\|\Phi_{S}^{*}u\|<(\alpha+\varepsilon)\|u\|$ and
$\|\Phi_{S^{\mathrm{c}}}^{*}v\|<(\alpha+\varepsilon)\|v\|$. Therefore, $\Phi$
is not $(\alpha+\varepsilon)$-SCP, i.e., $\sigma<\alpha+\varepsilon$ for all
$\varepsilon>0$, which in turn implies the desired lower bound. ∎
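For small instances, both quantities in Theorem 18 can be computed directly: $\sigma$ by brute force over all subsets $S$, and $\alpha$ (approximately) by random sampling. The following sketch does this for an arbitrary small Gaussian ensemble of our choosing; note that the sampled estimate only upper-bounds $\alpha$, while Theorem 18 guarantees $\sigma\leq\alpha\leq\sqrt{2}\sigma$.

```python
import numpy as np
from itertools import chain, combinations

def scp_constant(Phi):
    """Largest sigma for which Phi is sigma-SCP, by brute force:
    sigma^2 = min over S of max(lmin(Phi_S Phi_S^T), lmin(Phi_Sc Phi_Sc^T))."""
    M, N = Phi.shape
    def lmin(cols):
        return np.linalg.eigvalsh(Phi[:, cols] @ Phi[:, cols].T)[0] if cols else 0.0
    worst = np.inf
    for S in chain.from_iterable(combinations(range(N), k) for k in range(N // 2 + 1)):
        Sc = [n for n in range(N) if n not in S]
        worst = min(worst, max(lmin(list(S)), lmin(Sc)))
    return np.sqrt(max(worst, 0.0))

def d(x, y):
    return min(np.linalg.norm(x - y), np.linalg.norm(x + y))

rng = np.random.default_rng(2)
M, N = 3, 8                      # kept tiny: the search visits about 2^(N-1) partitions
Phi = rng.standard_normal((M, N))
sigma = scp_constant(Phi)

# Random sampling only upper-bounds the true lower Lipschitz constant alpha.
alpha_est = np.inf
for _ in range(20000):
    x, y = rng.standard_normal(M), rng.standard_normal(M)
    if d(x, y) > 1e-9:
        alpha_est = min(alpha_est, np.linalg.norm(np.abs(Phi.T @ x) - np.abs(Phi.T @ y)) / d(x, y))

print(sigma, np.sqrt(2) * sigma, alpha_est)   # Theorem 18: sigma <= alpha <= sqrt(2)*sigma
```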
Note that all of this analysis specifically treats the real case; indeed, the
metric we use would not be appropriate in the complex case. However, just like
the complement property is necessary for injectivity in the complex case
(Theorem 7), we suspect that the strong complement property is necessary for
stability in the complex case, but we have no proof of this.
As an example of how to apply Theorem 18, pick $M$ and $N$ to both be even and
let $F=\\{f_{n}\\}_{n\in\mathbb{Z}_{N}}$ be the $\frac{M}{2}\times N$ matrix
obtained by collecting the first $\frac{M}{2}$ rows of the $N\times N$ discrete
Fourier transform matrix with entries of unit modulus. Next, take
$\Phi=\\{\varphi_{n}\\}_{n\in\mathbb{Z}_{N}}$ to be the $M\times N$ matrix
obtained by stacking the real and imaginary parts of $F$ and normalizing the
resulting columns (i.e., multiplying by $\sqrt{2/M}$). Then $\Phi$ happens to
be a self-localized finite frame due to the rapid decay in coherence between
columns. To be explicit, first note that
$\displaystyle|\langle\varphi_{n},\varphi_{n^{\prime}}\rangle|^{2}$
$\displaystyle=\tfrac{4}{M^{2}}|\langle\operatorname{Re}f_{n},\operatorname{Re}f_{n^{\prime}}\rangle+\langle\operatorname{Im}f_{n},\operatorname{Im}f_{n^{\prime}}\rangle|^{2}$
$\displaystyle\leq\tfrac{4}{M^{2}}\Big{|}\Big{(}\langle\operatorname{Re}f_{n},\operatorname{Re}f_{n^{\prime}}\rangle+\langle\operatorname{Im}f_{n},\operatorname{Im}f_{n^{\prime}}\rangle\Big{)}+i\Big{(}\langle\operatorname{Im}f_{n},\operatorname{Re}f_{n^{\prime}}\rangle-\langle\operatorname{Re}f_{n},\operatorname{Im}f_{n^{\prime}}\rangle\Big{)}\Big{|}^{2}$
$\displaystyle=\tfrac{4}{M^{2}}|\langle f_{n},f_{n^{\prime}}\rangle|^{2},$
and furthermore, when $n\neq n^{\prime}$, the geometric sum formula gives
$|\langle f_{n},f_{n^{\prime}}\rangle|^{2}=\bigg{|}\sum_{m=0}^{M/2-1}e^{2\pi
im(n-n^{\prime})/N}\bigg{|}^{2}=\frac{\sin^{2}(M\pi(n-n^{\prime})/(2N))}{\sin^{2}(\pi(n-n^{\prime})/N)}\leq\frac{1}{\sin^{2}(\pi(n-n^{\prime})/N)}.$
Taking $u:=\varphi_{0}$, $v:=\varphi_{N/2}$ and $S:=\\{n:\frac{N}{4}\leq
n<\frac{3N}{4}\\}$, we then have
$\frac{\|\Phi_{S}^{*}u\|^{2}}{\|u\|^{2}}=\|\Phi_{S}^{*}u\|^{2}=\sum_{n\in
S}|\langle\varphi_{0},\varphi_{n}\rangle|^{2}\leq\frac{4}{M^{2}}\sum_{n\in
S}\frac{1}{\sin^{2}(\pi
n/N)}\leq\frac{4}{M^{2}}\cdot\frac{N/2}{\sin^{2}(\pi/4)}=\frac{4N}{M^{2}},$
and similarly for $\frac{\|\Phi_{S^{\mathrm{c}}}^{*}v\|^{2}}{\|v\|^{2}}$. As
such, if $N=o(M^{2})$, then $\Phi$ is $\sigma$-SCP only if $\sigma$ vanishes,
meaning phase retrieval with $\Phi$ necessarily lacks the stability guarantee
of Theorem 18. As a rule of thumb, self-localized frames fail to provide
stable phase retrieval for this very reason; just as we cannot stably
distinguish between $\varphi_{0}+\varphi_{N/2}$ and
$\varphi_{0}-\varphi_{N/2}$ in this case, in general, signals consisting of
“distant” components bring similar instability. This intuition was first
pointed out to us by Irene Waldspurger—we simply made it more rigorous with
the notion of SCP. This means that stable phase retrieval from localized
measurements must either use prior information about the signal (e.g.,
connected support) or additional measurements; indeed, this dichotomy has
already made its mark on the Fourier-based phase retrieval literature [20,
25].
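The frame in this example is easy to build explicitly; the following sketch constructs $\Phi$ for illustrative (even) values of $M$ and $N$ of our choosing and checks the bound $4N/M^{2}$ derived above.

```python
import numpy as np

def partial_dft_frame(M, N):
    """M x N frame from the example: stack the real and imaginary parts of the
    first M/2 rows of the N x N DFT (unit-modulus entries), then normalize columns."""
    m = np.arange(M // 2)[:, None]
    n = np.arange(N)[None, :]
    F = np.exp(2j * np.pi * m * n / N)            # (M/2) x N, unit-modulus entries
    Phi = np.vstack([F.real, F.imag])             # M x N
    return Phi * np.sqrt(2.0 / M)                 # unit-norm columns

M, N = 16, 64                                      # illustrative sizes (both even), not from the text
Phi = partial_dft_frame(M, N)
print(np.allclose(np.linalg.norm(Phi, axis=0), 1.0))   # columns are unit norm

u, v = Phi[:, 0], Phi[:, N // 2]
S = np.arange(N // 4, 3 * N // 4)                  # S = {n : N/4 <= n < 3N/4}
Sc = np.setdiff1d(np.arange(N), S)

# Both quantities are at most 4N/M^2, as derived above.
print(np.linalg.norm(Phi[:, S].T @ u) ** 2, np.linalg.norm(Phi[:, Sc].T @ v) ** 2, 4 * N / M**2)
```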
We can also apply the strong complement property to show that certain (random)
ensembles produce stable measurements. We will use the following lemma, which
is established in the course of proving Lemma 4.1 in [16]:
###### Lemma 19.
Given $n\geq m\geq 2$, draw a real $m\times n$ matrix $G$ of independent
standard normal entries. Then
$\operatorname{Pr}\bigg{(}\lambda_{\mathrm{min}}(GG^{*})\leq\frac{n}{t^{2}}\bigg{)}\leq\frac{1}{\Gamma(n-m+2)}\bigg{(}\frac{n}{t}\bigg{)}^{n-m+1}\qquad\forall
t>0.$
###### Theorem 20.
Draw an $M\times N$ matrix $\Phi$ with independent standard normal entries,
and denote $R=\frac{N}{M}$. If $R>2$, then for every $\varepsilon>0$,
$\Phi$ has the $\sigma$-strong complement property with
$\sigma=\frac{1}{\sqrt{2}e^{1+\varepsilon/(R-2)}}\cdot\frac{N-2M+2}{2^{R/(R-2)}\sqrt{N}},$
with probability $\geq 1-e^{-\varepsilon M}$.
###### Proof.
Fix $M$ and $N$, and consider the function
$f\colon(M-2,\infty)\rightarrow(0,\infty)$ defined by
$f(x):=\frac{1}{\Gamma(x-M+2)}(\sigma\sqrt{x})^{x-M+1}.$
To simplify our analysis, we will assume that $N$ is even, but the proof can
be amended to account for the odd case. Applying Lemma 19, we have for every
subset $S\subseteq\\{1,\ldots,N\\}$ of size $K$ that
$\operatorname{Pr}(\lambda_{\mathrm{min}}(\Phi_{S}\Phi_{S}^{*})<\sigma^{2})\leq
f(K)$, provided $K\geq M$, and similarly
$\operatorname{Pr}(\lambda_{\mathrm{min}}(\Phi_{S^{\mathrm{c}}}\Phi_{S^{\mathrm{c}}}^{*})<\sigma^{2})\leq
f(N-K)$, provided $N-K\geq M$. We will use this to bound the probability that
$\Phi$ is not $\sigma$-SCP. Since
$\lambda_{\mathrm{min}}(\Phi_{S^{\mathrm{c}}}\Phi_{S^{\mathrm{c}}}^{*})=0$
whenever $|S|\geq N-M+1$ and
$\lambda_{\mathrm{min}}(\Phi_{S}\Phi_{S}^{*})\leq\lambda_{\mathrm{min}}(\Phi_{T}\Phi_{T}^{*})$
whenever $S\subseteq T$, then a union bound gives
$\displaystyle\operatorname{Pr}\Big{(}\Phi\mbox{ is not $\sigma$-SCP}\Big{)}$
$\displaystyle=\operatorname{Pr}\Big{(}\exists
S\subseteq\\{1,\ldots,N\\}\mbox{ s.t.
}\lambda_{\mathrm{min}}(\Phi_{S}\Phi_{S}^{*})<\sigma^{2}\mbox{ and
}\lambda_{\mathrm{min}}(\Phi_{S^{\mathrm{c}}}\Phi_{S^{\mathrm{c}}}^{*})<\sigma^{2}\Big{)}$
$\displaystyle\leq\operatorname{Pr}\Big{(}\exists
S\subseteq\\{1,\ldots,N\\},|S|=N-M+1,\mbox{ s.t.
}\lambda_{\mathrm{min}}(\Phi_{S}\Phi_{S}^{*})<\sigma^{2}\Big{)}$
$\displaystyle\qquad+\operatorname{Pr}\Big{(}\exists
S\subseteq\\{1,\ldots,N\\},M\leq|S|\leq N-M,\mbox{ s.t.
}\lambda_{\mathrm{min}}(\Phi_{S}\Phi_{S}^{*})<\sigma^{2}\mbox{ and
}\lambda_{\mathrm{min}}(\Phi_{S^{\mathrm{c}}}\Phi_{S^{\mathrm{c}}}^{*})<\sigma^{2}\Big{)}$
$\displaystyle\leq\binom{N}{N-M+1}f(N-M+1)+\frac{1}{2}\sum_{K=M}^{N-M}\binom{N}{K}f(K)f(N-K),$
(14)
where the last inequality follows in part from the fact that
$\lambda_{\mathrm{min}}(\Phi_{S}\Phi_{S}^{*})$ and
$\lambda_{\mathrm{min}}(\Phi_{S^{\mathrm{c}}}\Phi_{S^{\mathrm{c}}}^{*})$ are
independent random variables, and the factor $\frac{1}{2}$ is an artifact of
double counting partitions. We will further bound each term in (14) to get a
simpler expression. First, $\binom{2k}{k}\geq 2^{k}$ for all $k$ and so
$\displaystyle f(N-M+1)$
$\displaystyle\leq\frac{1}{\Gamma(N-2M+3)}(\sigma\sqrt{N})^{N-2M+2}$
$\displaystyle\leq\frac{1}{\Gamma(N-2M+3)}(\sigma\sqrt{N})^{N-2M+2}\cdot\frac{1}{2^{\frac{N}{2}-M+1}}\binom{N-2M+2}{\frac{N}{2}-M+1}=f(\tfrac{N}{2})^{2}.$
Next, we will find that $g(x):=f(x)f(N-x)$ is maximized at $x=\frac{N}{2}$. To
do this, we first find the critical points of $g$. Since
$0=g^{\prime}(x)=f^{\prime}(x)f(N-x)-f(x)f^{\prime}(N-x)$, we have
$\frac{d}{dy}\log
f(y)\bigg{|}_{y=x}=\frac{f^{\prime}(x)}{f(x)}=\frac{f^{\prime}(N-x)}{f(N-x)}=\frac{d}{dy}\log
f(y)\bigg{|}_{y=N-x}.$ (15)
To analyze this further, we take another derivative:
$\frac{d^{2}}{dy^{2}}\log
f(y)=\frac{1}{2y}+\frac{M-1}{2y^{2}}-\frac{d^{2}}{dy^{2}}\log\Gamma(y-M+2).$
(16)
It is straightforward to see that
$\frac{1}{2y}+\frac{M-1}{2y^{2}}\leq\frac{1}{y-M+2}=\int_{y-M+2}^{\infty}\frac{dt}{t^{2}}<\sum_{k=0}^{\infty}\frac{1}{(y-M+2+k)^{2}}=\frac{d^{2}}{dy^{2}}\log\Gamma(y-M+2),$
where the last step uses a series expression for the trigamma function
$\psi_{1}(z):=\frac{d^{2}}{dz^{2}}\log\Gamma(z)$; see Section 6.4 of [1].
Applying this to (16) then gives that $\frac{d^{2}}{dy^{2}}\log f(y)<0$, which
in turn implies that $\frac{d}{dy}\log f(y)$ is strictly decreasing in $y$.
Thus, (15) requires $x=N-x$, and so $x=\frac{N}{2}$ is the only critical point
of $g$. Furthermore, to see that this is a maximizer, notice that
$g^{\prime\prime}(\tfrac{N}{2})=2f(\tfrac{N}{2})^{2}\cdot\frac{f^{\prime\prime}(\tfrac{N}{2})f(\tfrac{N}{2})-f^{\prime}(\tfrac{N}{2})^{2}}{f(\tfrac{N}{2})^{2}}=2f(\tfrac{N}{2})^{2}\cdot\frac{d}{dy}\frac{f^{\prime}(y)}{f(y)}\bigg{|}_{y=\frac{N}{2}}=2f(\tfrac{N}{2})^{2}\cdot\frac{d^{2}}{dy^{2}}\log
f(y)\bigg{|}_{y=\frac{N}{2}}<0.$
To summarize, we have that $f(N-M+1)$ and $f(K)f(N-K)$ are both at most
$f(\tfrac{N}{2})^{2}$. This leads to the following bound on (14):
$\operatorname{Pr}\Big{(}\Phi\mbox{ is not
$\sigma$-SCP}\Big{)}\leq\frac{1}{2}\sum_{K=0}^{N}\binom{N}{K}f(\tfrac{N}{2})^{2}=2^{N-1}f(\tfrac{N}{2})^{2}=\frac{2^{N-1}}{\Gamma(\frac{N}{2}-M+2)^{2}}\Big{(}\sigma\sqrt{\tfrac{N}{2}}\Big{)}^{N-2M+2}.$
Finally, applying the fact that $\Gamma(k+1)\geq e(\frac{k}{e})^{k}$ gives
$\displaystyle\operatorname{Pr}\Big{(}\Phi\mbox{ is not $\sigma$-SCP}\Big{)}$
$\displaystyle\leq\frac{2^{N-1}}{e^{2}}\bigg{(}\sigma
e\sqrt{2}\cdot\frac{\sqrt{N}}{N-2M+2}\bigg{)}^{N-2M+2}$
$\displaystyle=\frac{2^{RM}}{2e^{2}}\Big{(}e^{-\varepsilon/(R-2)}2^{-R/(R-2)}\Big{)}^{(R-2)M+2}\leq
2^{RM}(e^{\varepsilon}2^{R})^{-M}=e^{-\varepsilon M},$
as claimed. ∎
Considering $\|\Phi^{*}\|_{2}\leq(1+\varepsilon)(\sqrt{N}+\sqrt{M})$ with
probability $\geq 1-2e^{-\varepsilon^{2}(\sqrt{N}+\sqrt{M})^{2}/2}$ (see
Theorem II.13 of [17]), we can leverage Theorem 20 to determine the stability
of a Gaussian measurement ensemble. Specifically, we have by Theorem 15 along
with Lemma 16 and Theorem 18 that such measurements are $C$-stable with
$C=\frac{2\beta}{\alpha}\leq\frac{2\|\Phi^{*}\|_{2}}{\sigma}\sim\underbrace{2(\sqrt{N}+\sqrt{M})\cdot\sqrt{2}e\cdot\frac{2^{R/(R-2)}\sqrt{N}}{N-2M+2}}_{a(R,M)}\leq\underbrace{2\sqrt{2}e\bigg{(}\frac{R+\sqrt{R}}{R-2}\bigg{)}2^{R/(R-2)}}_{b(R)}$
(17)
Figure 1 illustrates these bounds along with different realizations of
$2\|\Phi^{*}\|_{2}/\sigma$. This suggests that the redundancy of the
measurement process is the main factor that determines stability of a random
measurement ensemble (and that bounded redundancies suffice for stability).
Furthermore, the project-and-invert estimator will yield particularly stable
signal reconstruction, although it is not obvious how to efficiently implement
this estimator; this is one advantage of the reconstruction algorithms
in [2, 14].
Figure 1: The graph on the left depicts $\log_{10}b(R)$ as a function of $R$,
which is defined in (17). Modulo $\varepsilon$ terms, this serves as an upper
bound on $\log_{10}(2\|\Phi^{*}\|_{2}/\sigma)$ with high probability as
$M\rightarrow\infty$, where $\Phi$ is an $M\times RM$ matrix of independent
standard Gaussian entries. Based on Theorem 15 (along with Lemma 16 and
Theorem 18), this provides a stability guarantee for the corresponding
measurement process, namely $\sqrt{\mathcal{A}}$. Since $\log_{10}b(R)$
exhibits an asymptote at $R=2$, this gives no stability guarantee for
measurement ensembles of redundancy $2$. The next three graphs consider the
special cases where $M=2,4,6$, respectively. In each case, the dashed curve
depicts the slightly stronger upper bound of $\log_{10}a(R,M)$, defined in
(17). Also depicted, for each $R\in\\{2,2.5,3,3.5,4\\}$, are $30$ realizations
of $\log_{10}(2\|\Phi^{*}\|_{2}/\sigma)$; we provide a piecewise linear graph
connecting the sample averages for clarity. Notice that as $M$ increases,
$\log_{10}a(R,M)$ approaches $\log_{10}b(R)$; this is easily seen by their
definitions in (17). More interestingly, the random realizations also appear
to be approaching $\log_{10}b(R)$; this is most notable with the realizations
corresponding to $R=2$. To be clear, we use $\sigma$ as a proxy for $\alpha$
(see Theorem 18) because $\alpha$ is particularly difficult to obtain; as
such, we do not plot realizations of $\log_{10}(2\beta/\alpha)$.
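The deterministic bounds $a(R,M)$ and $b(R)$ in (17) are simple to evaluate; the following sketch tabulates the values plotted in Figure 1 (we omit $R=2$, where both bounds blow up).

```python
import numpy as np

def a(R, M):
    """a(R, M) from (17): the asymptotic upper bound on 2*||Phi^*||_2 / sigma."""
    N = R * M
    return 2 * (np.sqrt(N) + np.sqrt(M)) * np.sqrt(2) * np.e \
        * 2**(R / (R - 2)) * np.sqrt(N) / (N - 2 * M + 2)

def b(R):
    """b(R) from (17): the M-independent bound."""
    return 2 * np.sqrt(2) * np.e * ((R + np.sqrt(R)) / (R - 2)) * 2**(R / (R - 2))

# log10 of a(R, M) for M = 2, 4, 6 and of b(R); a(R, M) increases toward b(R) as M grows.
for R in [2.5, 3.0, 3.5, 4.0]:
    print(R, [round(np.log10(a(R, M)), 2) for M in (2, 4, 6)], round(np.log10(b(R)), 2))
```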
### 3.2 Stability in the average case
Suppose a random variable $Y$ is drawn according to some unknown member of a
parameterized family of probability density functions
$\\{f(\cdot;\theta)\\}_{\theta\in\Omega}$. The Fisher information $J(\theta)$
quantifies how much information about the unknown parameter $\theta$ is given
by the random variable on average. This is particularly useful in statistical
signal processing, where a signal measurement is corrupted by random noise,
and the original signal is viewed as a parameter of the random measurement’s
unknown probability density function; as such, the Fisher information
quantifies how useful the noisy measurement is for signal estimation.
In this section, we will apply the theory of Fisher information to evaluate
the stability of $\mathcal{A}$. To do this, we consider a stochastic noise
model, that is, given some signal $x$, we take measurements of the form
$Y=\mathcal{A}(x)+Z$, where the entries of $Z$ are independent Gaussian random
variables with mean $0$ and variance $\sigma^{2}$. We want to use $Y$ to
estimate $x$ up to a global phase factor; to simplify the analysis, we will
estimate a particular $\theta(x)\equiv x$, specifically (and arbitrarily) $x$
divided by the phase of its last nonzero entry. As such, $Y$ is a random
vector with probability density function
$f(y;\theta)=\frac{1}{(2\pi\sigma^{2})^{N/2}}e^{-\|y-\mathcal{A}(\theta)\|^{2}/2\sigma^{2}}\qquad\forall
y\in\mathbb{R}^{N}.$
With this, we can calculate the Fisher information matrix, defined entrywise
by
$\big{(}J(\theta)\big{)}_{ij}:=\mathbb{E}\bigg{[}\bigg{(}\frac{\partial}{\partial\theta_{i}}\log
f(Y;\theta)\bigg{)}\bigg{(}\frac{\partial}{\partial\theta_{j}}\log
f(Y;\theta)\bigg{)}\bigg{|}\theta\bigg{]}.$ (18)
In particular, we have
$\frac{\partial}{\partial\theta_{i}}\log
f(y;\theta)=\frac{\partial}{\partial\theta_{i}}\bigg{(}-\frac{1}{2\sigma^{2}}\sum_{n=1}^{N}\Big{(}y_{n}-\big{(}\mathcal{A}(\theta)\big{)}_{n}\Big{)}^{2}\bigg{)}=\frac{1}{\sigma^{2}}\sum_{n=1}^{N}\Big{(}y_{n}-\big{(}\mathcal{A}(\theta)\big{)}_{n}\Big{)}\frac{\partial}{\partial\theta_{i}}\big{(}\mathcal{A}(\theta)\big{)}_{n},$
and so applying (18) along with the independence of the entries of $Z$ gives
$\big{(}J(\theta)\big{)}_{ij}=\frac{1}{\sigma^{4}}\sum_{n=1}^{N}\sum_{n^{\prime}=1}^{N}\frac{\partial}{\partial\theta_{i}}\big{(}\mathcal{A}(\theta)\big{)}_{n}\frac{\partial}{\partial\theta_{j}}\big{(}\mathcal{A}(\theta)\big{)}_{n^{\prime}}\mathbb{E}[Z_{n}Z_{n^{\prime}}]=\frac{1}{\sigma^{2}}\sum_{n=1}^{N}\frac{\partial}{\partial\theta_{i}}\big{(}\mathcal{A}(\theta)\big{)}_{n}\frac{\partial}{\partial\theta_{j}}\big{(}\mathcal{A}(\theta)\big{)}_{n}.$
It remains to take partial derivatives of $\mathcal{A}(\theta)$, but this
calculation depends on whether $\theta$ is real or complex.
In the real case, we have
$\frac{\partial}{\partial\theta_{i}}\big{(}\mathcal{A}(\theta)\big{)}_{n}=\frac{\partial}{\partial\theta_{i}}\bigg{(}\sum_{m=1}^{M}\theta_{m}\varphi_{n}(m)\bigg{)}^{2}=2\bigg{(}\sum_{m=1}^{M}\theta_{m}\varphi_{n}(m)\bigg{)}\varphi_{n}(i).$
Thus, if we take $\Psi(\theta)$ to be the $M\times N$ matrix whose $n$th
column is $\langle\theta,\varphi_{n}\rangle\varphi_{n}$, then the Fisher
information matrix can be expressed as
$J(\theta)=\frac{4}{\sigma^{2}}\Psi(\theta)\Psi(\theta)^{*}$. Interestingly,
Theorem 3 implies that $J(\theta)$ is necessarily positive definite when
$\mathcal{A}$ is injective. To see this, suppose there exists
$\theta\in\Omega$ such that $J(\theta)$ has a nontrivial null space. Then
$\\{\langle\theta,\varphi_{n}\rangle\varphi_{n}\\}_{n=1}^{N}$ does not span
$\mathbb{R}^{M}$, and so $S=\\{n:\langle\theta,\varphi_{n}\rangle=0\\}$ breaks
the complement property. As the following result shows, when $\mathcal{A}$ is
injective, the conditioning of $J(\theta)$ lends some insight into stability:
###### Theorem 21.
For $x\in\mathbb{R}^{M}$, let $Y=\mathcal{A}(x)+Z$ denote noisy intensity
measurements with $Z$ having independent $\mathcal{N}(0,\sigma^{2})$ entries.
Furthermore, define the parameter $\theta$ to be $x$ divided by the sign of
its last nonzero entry; let $\Omega\subseteq\mathbb{R}^{M}$ denote all such
$\theta$. Then for any unbiased estimator $\hat{\theta}(Y)$ of $\theta$ in
$\Omega$ with a finite $M\times M$ covariance matrix $C(\hat{\theta})$, we
have $C(\hat{\theta})-J(\theta)^{-1}$ is positive semidefinite whenever
$\theta\in\operatorname{int}(\Omega)$.
This result was first given by Balan (see Theorem 4.1 in [4]). Note that the
requirement that $\theta$ be in the interior of $\Omega$ can be weakened to
$\theta\neq 0$ by recognizing that our choice for $\theta$ (dividing by the
sign of the last nonzero entry) was arbitrary. To interpret this theorem, note
that
$\displaystyle\operatorname{Tr}[C(\hat{\theta})]$
$\displaystyle=\operatorname{Tr}[\mathbb{E}[(\hat{\theta}(Y)-\theta)(\hat{\theta}(Y)-\theta)^{\mathrm{T}}]]$
$\displaystyle=\mathbb{E}[\operatorname{Tr}[(\hat{\theta}(Y)-\theta)(\hat{\theta}(Y)-\theta)^{\mathrm{T}}]]=\mathbb{E}[\operatorname{Tr}[(\hat{\theta}(Y)-\theta)^{\mathrm{T}}(\hat{\theta}(Y)-\theta)]]=\mathbb{E}\|\hat{\theta}(Y)-\theta\|^{2},$
and so Theorem 21 and the linearity of the trace together give
$\mathbb{E}\|\hat{\theta}(Y)-\theta\|^{2}=\operatorname{Tr}[C(\hat{\theta})]\geq\operatorname{Tr}[J(\theta)^{-1}]$.
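In the real case the matrix $J(\theta)$ is simple to form explicitly; the following sketch (arbitrary Gaussian ensemble, noise level, and signal of our choosing) builds $\Psi(\theta)$ column by column and evaluates the Cramér-Rao floor $\operatorname{Tr}[J(\theta)^{-1}]$.

```python
import numpy as np

def fisher_real(Phi, theta, sigma):
    """Fisher information J(theta) = (4/sigma^2) * Psi(theta) Psi(theta)^T, where the
    n-th column of Psi(theta) is <theta, phi_n> * phi_n (real case)."""
    Psi = Phi * (Phi.T @ theta)        # scales column n of Phi by <theta, phi_n>
    return (4.0 / sigma**2) * (Psi @ Psi.T)

rng = np.random.default_rng(4)
M, N, sigma = 4, 12, 0.1
Phi = rng.standard_normal((M, N))
x = rng.standard_normal(M)
theta = x * np.sign(x[-1])             # x divided by the sign of its last (a.s. nonzero) entry

J = fisher_real(Phi, theta, sigma)
# Any unbiased estimator has mean squared error at least trace(J^{-1}).
print(np.linalg.eigvalsh(J)[0] > 0, np.trace(np.linalg.inv(J)))
```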
In the previous section, Definition 14 provided a notion of worst-case
stability based on the existence of an estimator with small error. By analogy,
Theorem 21 demonstrates a converse of sorts: that no unbiased estimator will
have mean squared error smaller than $\operatorname{Tr}[J(\theta)^{-1}]$. As
such, a stable measurement ensemble might minimize
$\sup_{\theta\in\Omega}\operatorname{Tr}[J(\theta)^{-1}]$, although this is a
particularly cumbersome objective function to work with. More interestingly,
Theorem 21 provides another numerical strengthening of the complement property
(analogous to the strong complement property of the previous section).
Unfortunately, we cannot make a more rigorous comparison between the worst-
and average-case analyses of stability; indeed, our worst-case analysis
exploited the fact that $\sqrt{\mathcal{A}}$ is bilipschitz (which
$\mathcal{A}$ is not), and as we shall see, the average-case analysis depends
on $\mathcal{A}$ being differentiable (which $\sqrt{\mathcal{A}}$ is not).
To calculate the information matrix in the complex case, we first express our
parameter vector in real coordinates:
$\theta=(\theta_{1}+\mathrm{i}\theta_{M+1},\theta_{2}+\mathrm{i}\theta_{M+2},\ldots,\theta_{M}+\mathrm{i}\theta_{2M})$,
that is, we view $\theta$ as a $2M$-dimensional real vector by concatenating
its real and imaginary parts. Next, for any arbitrary function
$g\colon\mathbb{R}^{2M}\rightarrow\mathbb{C}$, the product rule gives
$\frac{\partial}{\partial\theta_{i}}|g(\theta)|^{2}=\frac{\partial}{\partial\theta_{i}}g(\theta)\overline{g(\theta)}=\bigg{(}\frac{\partial}{\partial\theta_{i}}g(\theta)\bigg{)}\overline{g(\theta)}+g(\theta)\overline{\bigg{(}\frac{\partial}{\partial\theta_{i}}g(\theta)\bigg{)}}=2\operatorname{Re}g(\theta)\frac{\partial}{\partial\theta_{i}}\overline{g(\theta)}.$
(19)
Since we care about partial derivatives of $\mathcal{A}(\theta)$, we take
$g(\theta)=\langle\theta,\varphi_{n}\rangle=\sum_{m=1}^{M}(\theta_{m}+\mathrm{i}\theta_{M+m})\overline{\varphi_{n}(m)}$,
and so
$\frac{\partial}{\partial\theta_{i}}\overline{g(\theta)}=\left\\{\begin{array}[]{cl}\varphi_{n}(i)&\mbox{if
}i\leq M\\\ -\mathrm{i}\varphi_{n}(i-M)&\mbox{if }i>M.\end{array}\right.$ (20)
Combining (19) and (20) then gives the following expression for the Fisher
information matrix: Take $\Psi(\theta)$ to be the $2M\times N$ matrix whose
$n$th column is formed by stacking the real and imaginary parts of
$\langle\theta,\varphi_{n}\rangle\varphi_{n}$; then
$J(\theta)=\frac{4}{\sigma^{2}}\Psi(\theta)\Psi(\theta)^{*}$.
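The complex-case construction is analogous; the following sketch (again with an arbitrary ensemble of our choosing) forms the $2M\times N$ matrix $\Psi(\theta)$, confirms numerically that $J(\theta)$ is rank deficient (the direction $\mathrm{i}\theta$ lies in its null space), and checks that the trimmed matrix $\widetilde{J}(\theta)$ considered in the following lemma is positive definite.

```python
import numpy as np

def fisher_complex(Phi, theta, sigma):
    """Complex case: the n-th column of Psi(theta) stacks the real and imaginary
    parts of <theta, phi_n> * phi_n, and J(theta) = (4/sigma^2) Psi Psi^T."""
    coeffs = np.conj(Phi).T @ theta                 # <theta, phi_n> for each n
    cols = Phi * coeffs                             # column n is <theta, phi_n> * phi_n
    Psi = np.vstack([cols.real, cols.imag])         # 2M x N real matrix
    return (4.0 / sigma**2) * (Psi @ Psi.T)

rng = np.random.default_rng(5)
M, N, sigma = 3, 10, 0.1
Phi = rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))
x = rng.standard_normal(M) + 1j * rng.standard_normal(M)
theta = x / (x[-1] / abs(x[-1]))                    # divide out the phase of the last entry

J = fisher_complex(Phi, theta, sigma)
i_theta = np.concatenate([-theta.imag, theta.real]) # real coordinates of i*theta
Jt = J[:-1, :-1]                                    # drop the last row and column

print(np.linalg.norm(J @ i_theta))                  # ~0: i*theta lies in the null space of J
print(np.linalg.eigvalsh(Jt)[0] > 0)                # the trimmed matrix is positive definite
```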
###### Lemma 22.
Take $\widetilde{J}(\theta)$ to be the $(2M-1)\times(2M-1)$ matrix that comes
from removing the last row and column of $J(\theta)$. If $\mathcal{A}$ is
injective, then $\widetilde{J}(\theta)$ is positive definite for every
$\theta\in\operatorname{int}(\Omega)$.
###### Proof.
First, we note that
$J(\theta)=\frac{4}{\sigma^{2}}\Psi(\theta)\Psi(\theta)^{*}$ is necessarily
positive semidefinite, and so
$\inf_{\|x\|=1}x^{\mathrm{T}}\tilde{J}(\theta)x=\inf_{\|x\|=1}[x;0]^{\mathrm{T}}J(\theta)[x;0]\geq\inf_{\|y\|=1}y^{\mathrm{T}}J(\theta)y\geq
0.$
As such, it suffices to show that $\tilde{J}(\theta)$ is invertible.
To this end, take any vector $x$ in the null space of $\widetilde{J}(\theta)$.
Then defining $y:=[x;0]\in\mathbb{R}^{2M}$, we have that $J(\theta)y$ is zero
in all but (possibly) the $2M$th entry. As such, $0=\langle
y,J(\theta)y\rangle=\|\frac{2}{\sigma}\Psi(\theta)^{*}y\|^{2}$, meaning $y$ is
orthogonal to the columns of $\Psi(\theta)$. Since $\mathcal{A}$ is injective,
Theorem 4 then gives that $y=\alpha\mathrm{i}\theta$ for some
$\alpha\in\mathbb{R}$. But since $\theta\in\operatorname{int}(\Omega)$, we
have $\theta_{M}>0$, and so the $2M$th entry of $\mathrm{i}\theta$ is
necessarily nonzero. This means $\alpha=0$, and so $y$ (and thus $x$) is
trivial. ∎
###### Theorem 23.
For $x\in\mathbb{C}^{M}$, let $Y=\mathcal{A}(x)+Z$ denote noisy intensity
measurements with $Z$ having independent $\mathcal{N}(0,\sigma^{2})$ entries.
Furthermore, define the parameter $\theta$ to be $x$ divided by the phase of
its last nonzero entry, and view $\theta$ as a vector in $\mathbb{R}^{2M}$ by
concatenating its real and imaginary parts; let
$\Omega\subseteq\mathbb{R}^{2M}$ denote all such $\theta$. Then for any
unbiased estimator $\hat{\theta}(Y)$ of $\theta$ in $\Omega$ with a finite
$2M\times 2M$ covariance matrix $C(\hat{\theta})$, the last row and column of
$C(\hat{\theta})$ are both zero, and the remaining $(2M-1)\times(2M-1)$
submatrix $\widetilde{C}(\hat{\theta})$ has the property that
$\widetilde{C}(\hat{\theta})-\widetilde{J}(\theta)^{-1}$ is positive
semidefinite whenever $\theta\in\operatorname{int}(\Omega)$.
###### Proof.
We start by following the usual proof of the vector parameter Cramér-Rao lower
bound (see for example Appendix 3B of [28]). Note that for any
$i,j\in\\{1,\ldots,2M\\}$,
$\displaystyle\int_{\mathbb{R}^{N}}\big{(}(\hat{\theta}(y))_{j}-\theta_{j}\big{)}\frac{\partial\log
f(y;\theta)}{\partial\theta_{i}}f(y;\theta)dy$
$\displaystyle=\int_{\mathbb{R}^{N}}(\hat{\theta}(y))_{j}\frac{\partial
f(y;\theta)}{\partial\theta_{i}}dy-\theta_{j}\int_{\mathbb{R}^{N}}\frac{\partial
f(y;\theta)}{\partial\theta_{i}}dy$
$\displaystyle=\frac{\partial}{\partial\theta_{i}}\int_{\mathbb{R}^{N}}(\hat{\theta}(y))_{j}f(y;\theta)dy-\theta_{j}\frac{\partial}{\partial\theta_{i}}\int_{\mathbb{R}^{N}}f(y;\theta)dy,$
where the second equality is by differentiation under the integral sign (see
Lemma 24 for details; here, we use the fact that $\hat{\theta}$ has a finite
covariance matrix so that $\hat{\theta}_{j}$ has a finite second moment).
Next, we use the facts that $\hat{\theta}$ is unbiased and $f(\cdot;\theta)$
is a probability density function (regardless of $\theta$) to get
$\int_{\mathbb{R}^{N}}\big{(}(\hat{\theta}(y))_{j}-\theta_{j}\big{)}\frac{\partial\log
f(y;\theta)}{\partial\theta_{i}}f(y;\theta)dy=\frac{\partial\theta_{j}}{\partial\theta_{i}}=\left\\{\begin{array}[]{ll}1&\mbox{if
}i=j\\\ 0&\mbox{if }i\neq j\end{array}\right..$
Thus, letting $\nabla_{\theta}\log f(y;\theta)$ denote the column vector whose
$i$th entry is $\frac{\partial\log f(y;\theta)}{\partial\theta_{i}}$, we have
$I=\int_{\mathbb{R}^{N}}\big{(}\hat{\theta}(y)-\theta\big{)}\big{(}\nabla_{\theta}\log
f(y;\theta)\big{)}^{\mathrm{T}}f(y;\theta)dy.$
Equivalently, we have that for all column vectors $a,b\in\mathbb{R}^{2M}$,
$a^{\mathrm{T}}b=\int_{\mathbb{R}^{N}}a^{\mathrm{T}}\big{(}\hat{\theta}(y)-\theta\big{)}\big{(}\nabla_{\theta}\log
f(y;\theta)\big{)}^{\mathrm{T}}b~{}f(y;\theta)dy.$
Next, we apply the Cauchy-Schwarz inequality in $f$-weighted $L^{2}$ space to
get
$\displaystyle\big{(}a^{\mathrm{T}}b\big{)}^{2}$
$\displaystyle=\bigg{(}\int_{\mathbb{R}^{N}}a^{\mathrm{T}}\big{(}\hat{\theta}(y)-\theta\big{)}\big{(}\nabla_{\theta}\log
f(y;\theta)\big{)}^{\mathrm{T}}b~{}f(y;\theta)dy\bigg{)}^{2}$
$\displaystyle\leq\bigg{(}\int_{\mathbb{R}^{N}}a^{\mathrm{T}}\big{(}\hat{\theta}(y)-\theta\big{)}\big{(}\hat{\theta}(y)-\theta\big{)}^{\mathrm{T}}a~{}f(y;\theta)dy\bigg{)}\bigg{(}\int_{\mathbb{R}^{N}}b^{\mathrm{T}}\big{(}\nabla_{\theta}\log
f(y;\theta)\big{)}\big{(}\nabla_{\theta}\log
f(y;\theta)\big{)}^{\mathrm{T}}b~{}f(y;\theta)dy\bigg{)}$
$\displaystyle=\big{(}a^{\mathrm{T}}C(\hat{\theta})a\big{)}\big{(}b^{\mathrm{T}}J(\theta)b\big{)},$
where the last step follows from pulling vectors out of integrals. At this
point, we take $b:=[\widetilde{J}(\theta)^{-1}\tilde{a};0]$, where $\tilde{a}$
is the first $2M-1$ entries of $a$. Then
$\big{(}\tilde{a}^{\mathrm{T}}\widetilde{J}(\theta)^{-1}\tilde{a}\big{)}^{2}=\big{(}a^{\mathrm{T}}b\big{)}^{2}\leq\big{(}a^{\mathrm{T}}C(\hat{\theta})a\big{)}\big{(}b^{\mathrm{T}}J(\theta)b\big{)}=\big{(}a^{\mathrm{T}}C(\hat{\theta})a\big{)}\big{(}\tilde{a}^{\mathrm{T}}\widetilde{J}(\theta)^{-1}\tilde{a}\big{)}.$
(21)
At this point, we note that the last (complex) entry of any
$\theta\in\Omega$ is necessarily real and nonnegative, so as a $2M$-dimensional real
vector, $\theta$ necessarily vanishes in the last entry, and furthermore every unbiased
estimator $\hat{\theta}$ taking values in $\Omega$ will also vanish in the last entry. It
follows that the last row and column of $C(\hat{\theta})$ are both zero.
Furthermore, since $\widetilde{J}(\theta)^{-1}$ is positive definite by Lemma 22,
division in (21) gives
$\big{(}\tilde{a}^{\mathrm{T}}\widetilde{J}(\theta)^{-1}\tilde{a}\big{)}\leq\big{(}a^{\mathrm{T}}C(\hat{\theta})a\big{)}=\big{(}\tilde{a}^{\mathrm{T}}\widetilde{C}(\hat{\theta})\tilde{a}\big{)},$
from which the result follows. ∎
## Appendix
Here, we verify that we can differentiate under the integral sign in the proof
of Theorem 23.
###### Lemma 24.
Consider the probability density function defined by
$f(y;\theta)=\frac{1}{(2\pi\sigma^{2})^{N/2}}e^{-\|y-\mathcal{A}(\theta)\|^{2}/2\sigma^{2}}\qquad\forall
y\in\mathbb{R}^{N}.$
Then for every function $g\colon\mathbb{R}^{N}\rightarrow\mathbb{R}$ with
finite second moment
$\int_{\mathbb{R}^{N}}g(y)^{2}f(y;\theta)dy<\infty\qquad\forall\theta\in\Omega,$
we can differentiate under the integral sign:
$\frac{\partial}{\partial\theta_{i}}\int_{\mathbb{R}^{N}}g(y)f(y;\theta)dy=\int_{\mathbb{R}^{N}}g(y)\frac{\partial}{\partial\theta_{i}}f(y;\theta)dy.$
###### Proof.
First, we adapt the proof of Lemma 5.14 in [29] to show that it suffices to
find a function $b(y;\theta)$ with finite second moment such that, for some
$\varepsilon>0$,
$\bigg{|}\frac{f(y;\theta+z\delta_{i})-f(y;\theta)}{zf(y;\theta)}\bigg{|}\leq
b(y;\theta)\qquad\forall
y\in\mathbb{R}^{N},\theta\in\Omega,|z|<\varepsilon,z\neq 0$ (22)
where $\delta_{i}$ denotes the $i$th identity basis element in
$\mathbb{R}^{2M}$. Indeed, by applying the Cauchy-Schwarz inequality over
$f$-weighted $L^{2}$ space, we have
$\int_{\mathbb{R}^{N}}|g(y)|b(y;\theta)f(y;\theta)dy\leq\bigg{(}\int_{\mathbb{R}^{N}}g(y)^{2}f(y;\theta)dy\bigg{)}^{1/2}\bigg{(}\int_{\mathbb{R}^{N}}b(y;\theta)^{2}f(y;\theta)dy\bigg{)}^{1/2}<\infty$
and so the dominated convergence theorem gives
$\displaystyle\int_{\mathbb{R}^{N}}g(y)\frac{\partial}{\partial\theta_{i}}f(y;\theta)dy$
$\displaystyle=\int_{\mathbb{R}^{N}}\lim_{z\rightarrow
0}\bigg{(}g(y)\frac{f(y;\theta+z\delta_{i})-f(y;\theta)}{zf(y;\theta)}\bigg{)}f(y;\theta)dy$
$\displaystyle=\lim_{z\rightarrow
0}\int_{\mathbb{R}^{N}}\bigg{(}g(y)\frac{f(y;\theta+z\delta_{i})-f(y;\theta)}{zf(y;\theta)}\bigg{)}f(y;\theta)dy$
$\displaystyle=\lim_{z\rightarrow
0}\frac{1}{z}\bigg{(}\int_{\mathbb{R}^{N}}g(y)f(y;\theta+z\delta_{i})dy-\int_{\mathbb{R}^{N}}g(y)f(y;\theta)dy\bigg{)}$
$\displaystyle=\frac{\partial}{\partial\theta_{i}}\int_{\mathbb{R}^{N}}g(y)f(y;\theta)dy.$
In pursuit of such a function $b(y;\theta)$, we first use the triangle and
Cauchy-Schwarz inequalities to get
$\displaystyle\bigg{|}\frac{f(y;\theta+z\delta_{i})-f(y;\theta)}{zf(y;\theta)}\bigg{|}$
$\displaystyle=\frac{1}{|z|}\Big{|}e^{-\frac{1}{2\sigma^{2}}\big{(}\|y-\mathcal{A}(\theta+z\delta_{i})\|^{2}-\|y-\mathcal{A}(\theta)\|^{2}\big{)}}-1\Big{|}$
$\displaystyle=\frac{1}{|z|}\Big{|}e^{-\frac{1}{2\sigma^{2}}\big{(}\|\mathcal{A}(\theta+z\delta_{i})\|^{2}-\|\mathcal{A}(\theta)\|^{2}-2\langle
y,\mathcal{A}(\theta+z\delta_{i})-\mathcal{A}(\theta)\rangle\big{)}}-e^{\frac{1}{\sigma^{2}}\langle
y,\mathcal{A}(\theta+z\delta_{i})-\mathcal{A}(\theta)\rangle}+e^{\frac{1}{\sigma^{2}}\langle
y,\mathcal{A}(\theta+z\delta_{i})-\mathcal{A}(\theta)\rangle}-1\Big{|}$
$\displaystyle\leq\frac{1}{|z|}\bigg{(}e^{\frac{1}{\sigma^{2}}\langle
y,\mathcal{A}(\theta+z\delta_{i})-\mathcal{A}(\theta)\rangle}\Big{|}e^{-\frac{1}{2\sigma^{2}}\big{(}\|\mathcal{A}(\theta+z\delta_{i})\|^{2}-\|\mathcal{A}(\theta)\|^{2}\big{)}}-1\Big{|}+\Big{|}e^{\frac{1}{\sigma^{2}}\langle
y,\mathcal{A}(\theta+z\delta_{i})-\mathcal{A}(\theta)\rangle}-1\Big{|}\bigg{)}$
$\displaystyle\leq\frac{1}{|z|}\bigg{(}e^{\frac{1}{\sigma^{2}}\|y\|\|\mathcal{A}(\theta+z\delta_{i})-\mathcal{A}(\theta)\|}\Big{|}e^{-\frac{1}{2\sigma^{2}}\big{(}\|\mathcal{A}(\theta+z\delta_{i})\|^{2}-\|\mathcal{A}(\theta)\|^{2}\big{)}}-1\Big{|}+\Big{|}e^{\frac{1}{\sigma^{2}}\|y\|\|\mathcal{A}(\theta+z\delta_{i})-\mathcal{A}(\theta)\|}-1\Big{|}\bigg{)},$
(23)
Denote
$c(z;\theta):=\frac{1}{\sigma^{2}}\|\mathcal{A}(\theta+z\delta_{i})-\mathcal{A}(\theta)\|$.
Since $(e^{st}-1)/t\leq se^{st}$ whenever $s,t\geq 0$, we then have
$\frac{|e^{c(z;\theta)\|y\|}-1|}{|z|}=\frac{c(z;\theta)}{|z|}\cdot\frac{e^{c(z;\theta)\|y\|}-1}{c(z;\theta)}\leq\frac{c(z;\theta)}{|z|}\|y\|e^{c(z;\theta)\|y\|}.$
Also by l’Hospital’s rule, there exist continuous functions $C_{1}$ and
$C_{2}$ on the real line such that
$C_{1}(z;\theta)=\frac{c(z;\theta)}{|z|},\qquad
C_{2}(z;\theta)=\frac{1}{|z|}\Big{|}e^{-\frac{1}{2\sigma^{2}}\big{(}\|\mathcal{A}(\theta+z\delta_{i})\|^{2}-\|\mathcal{A}(\theta)\|^{2}\big{)}}-1\Big{|},\qquad\forall
z\neq 0.$
Thus, continuing (23) gives
$\bigg{|}\frac{f(y;\theta+z\delta_{i})-f(y;\theta)}{zf(y;\theta)}\bigg{|}\leq\Big{(}C_{1}(z;\theta)\|y\|+C_{2}(z;\theta)\Big{)}e^{c(z;\theta)\|y\|}.$
Now for a fixed $\varepsilon$, take
$C_{j}(\theta):=\sup_{|z|<\varepsilon}C_{j}(z;\theta)$ and
$c(\theta):=\sup_{|z|<\varepsilon}c(z;\theta)$, and define
$b(y;\theta):=\Big{(}C_{1}(\theta)\|y\|+C_{2}(\theta)\Big{)}e^{c(\theta)\|y\|}.$
Since $C_{j}(\theta)$ and $c(\theta)$ are suprema of continuous functions over
a bounded set, these are necessarily finite for all $\theta\in\Omega$. As
such, our choice for $b$ satisfies (22). It remains to verify that $b$ has a
finite second moment. To this end, let $B(R(\theta))$ denote the ball of
radius $R(\theta)$ centered at the origin (we will specify $R(\theta)$ later).
Then
$\displaystyle\int_{\mathbb{R}^{N}}b(y;\theta)^{2}f(y;\theta)dy$
$\displaystyle=\int_{B(R(\theta))}b(y;\theta)^{2}f(y;\theta)dy+\int_{\mathbb{R}^{N}\setminus
B(R(\theta))}b(y;\theta)^{2}f(y;\theta)dy$
$\displaystyle\leq\Big{(}C_{1}(\theta)R(\theta)+C_{2}(\theta)\Big{)}^{2}e^{2c(\theta)R(\theta)}$
$\displaystyle\qquad+\frac{1}{(2\pi\sigma^{2})^{N/2}}\int_{\mathbb{R}^{N}\setminus
B(R(\theta))}\Big{(}C_{1}(\theta)\|y\|+C_{2}(\theta)\Big{)}^{2}e^{2c(\theta)\|y\|-\frac{1}{2\sigma^{2}}\|y-\mathcal{A}(\theta)\|^{2}}dy.$
(24)
From here, we note that whenever $\|y\|\geq
2\|\mathcal{A}(\theta)\|+8\sigma^{2}c(\theta)$, we have
$\displaystyle\|y-\mathcal{A}(\theta)\|^{2}$
$\displaystyle\geq\|y\|^{2}-2\|y\|\|\mathcal{A}(\theta)\|+\|\mathcal{A}(\theta)\|^{2}$
$\displaystyle\geq\Big{(}2\|\mathcal{A}(\theta)\|+8\sigma^{2}c(\theta)\Big{)}\|y\|-2\|y\|\|\mathcal{A}(\theta)\|+\|\mathcal{A}(\theta)\|^{2}$
$\displaystyle\geq 8\sigma^{2}c(\theta)\|y\|.$
Rearranging then gives
$2c(\theta)\|y\|\leq\frac{1}{4\sigma^{2}}\|y-\mathcal{A}(\theta)\|^{2}$. Also
let $h(\theta)$ denote the larger root of the polynomial
$p(x;\theta):=2C_{1}(\theta)^{2}\Big{(}x^{2}-2\|\mathcal{A}(\theta)\|x+\|\mathcal{A}(\theta)\|^{2}\Big{)}-\Big{(}C_{1}(\theta)x+C_{2}(\theta)\Big{)}^{2},$
and take $h(\theta):=0$ when the roots of $p(x;\theta)$ are not real. (Here,
we are assuming that $C_{1}>0$, but the proof that (24) is finite when
$C_{1}=0$ quickly follows from the $C_{1}>0$ case.) Then
$(C_{1}(\theta)\|y\|+C_{2}(\theta))^{2}\leq
2C_{1}(\theta)^{2}\|y-\mathcal{A}(\theta)\|^{2}$ whenever $\|y\|\geq
h(\theta)$, since by the Cauchy-Schwarz inequality,
$2C_{1}(\theta)^{2}\|y-\mathcal{A}(\theta)\|^{2}-\Big{(}C_{1}(\theta)\|y\|+C_{2}(\theta)\Big{)}^{2}\geq
p(\|y\|;\theta)\geq 0,$
where the last step follows from the fact that $p(x;\theta)$ is concave up.
Now we continue by taking
$R(\theta):=\max\\{2\|\mathcal{A}(\theta)\|+8\sigma^{2}c(\theta),h(\theta)\\}$:
$\displaystyle\int_{\mathbb{R}^{N}\setminus
B(R(\theta))}\Big{(}C_{1}(\theta)\|y\|+C_{2}(\theta)\Big{)}^{2}e^{2c(\theta)\|y\|-\frac{1}{2\sigma^{2}}\|y-\mathcal{A}(\theta)\|^{2}}dy$
$\displaystyle\qquad\leq\int_{\mathbb{R}^{N}\setminus
B(R(\theta))}2C_{1}(\theta)^{2}\|y-\mathcal{A}(\theta)\|^{2}e^{-\frac{1}{4\sigma^{2}}\|y-\mathcal{A}(\theta)\|^{2}}dy$
$\displaystyle\qquad\leq\big{(}2\pi(\sqrt{2}\sigma)^{2}\big{)}^{N/2}\cdot
2C_{1}(\theta)^{2}\int_{\mathbb{R}^{N}}\|x\|^{2}\frac{1}{(2\pi(\sqrt{2}\sigma)^{2})^{N/2}}e^{-\|x\|^{2}/2(\sqrt{2}\sigma)^{2}}dx,$
where the last step comes from integrating over all of $\mathbb{R}^{N}$ and
changing variables $y-\mathcal{A}(\theta)\mapsto x$. This last integral
calculates the expected squared length of a vector in $\mathbb{R}^{N}$ with
independent $\mathcal{N}(0,2\sigma^{2})$ entries, which is $2N\sigma^{2}$.
Thus, substituting into (24) gives that $b$ has a finite second moment. ∎
## Acknowledgments
The authors thank Irene Waldspurger and Profs. Bernhard G. Bodmann, Matthew
Fickus, Thomas Strohmer and Yang Wang for insightful discussions, and the
Erwin Schrödinger International Institute for Mathematical Physics for hosting
a workshop on phase retrieval that helped solidify some of the ideas in this
paper. A. S. Bandeira was supported by NSF DMS-0914892, and J. Cahill was
supported by NSF 1008183, NSF ATD 1042701, and AFOSR DGE51: FA9550-11-1-0245.
The views expressed in this article are those of the authors and do not
reflect the official policy or position of the United States Air Force,
Department of Defense, or the U.S. Government.
## References
* [1] M. Abramowitz, I. A. Stegun, Handbook of Mathematical Functions, Dover Publications, New York, 1964.
* [2] B. Alexeev, A. S. Bandeira, M. Fickus, D. G. Mixon, Phase retrieval with polarization, Available online: arXiv:1210.7752
* [3] B. Alexeev, J. Cahill, D. G. Mixon, Full spark frames, J. Fourier Anal. Appl. 18 (2012) 1167–1194.
* [4] R. Balan, Reconstruction of signals from magnitudes of redundant representations, Available online: arXiv:1207.1134
* [5] R. Balan, B. G. Bodmann, P. G. Casazza, D. Edidin, Fast algorithms for signal reconstruction without phase, Proc. SPIE 6701, Wavelets XII (2007) 67011L.
* [6] R. Balan, B. G. Bodmann, P. G. Casazza, D. Edidin, Painless reconstruction from magnitudes of frame coefficients, J. Fourier Anal. Appl. 15 (2009) 488–501.
* [7] R. Balan, P. Casazza, D. Edidin, On signal reconstruction without phase, Appl. Comp. Harmon. Anal. 20 (2006) 345–356.
* [8] B. G. Bodmann, N. Hammen, Stable phase retrieval with low-redundancy frames, Available online: arXiv:1302.5487
* [9] O. Bunk, A. Diaz, F. Pfeiffer, C. David, B. Schmitt, D. K. Satapathy, J. F. van der Veen, Diffractive imaging for periodic samples: retrieving one-dimensional concentration profiles across microfluidic channels, Acta Cryst. A63 (2007) 306–314.
* [10] Ç. Candan, M. A. Kutay, H. M. Ozaktas, The discrete fractional Fourier transform, IEEE Trans. Signal. Process. 48 (2000) 1329–1337.
* [11] E. J. Candès, The restricted isometry property and its implications for compressed sensing, C. R. Acad. Sci. Paris, Ser. I 346 (2008) 589–592.
* [12] E. J. Candès, Y. Eldar, T. Strohmer, V. Voroninski, Phase retrieval via matrix completion, Available online: arXiv:1109.0573
* [13] E. J. Candès, X. Li, Solving quadratic equations via PhaseLift when there are about as many equations as unknowns, Available online: arXiv:1208.6247
* [14] E. J. Candès, T. Strohmer, V. Voroninski, PhaseLift: Exact and stable signal recovery from magnitude measurements via convex programming, Available online: arXiv:1109.4499
* [15] P. G. Casazza, M. Fickus, J. C. Tremain, E. Weber, The Kadison-Singer problem in mathematics and engineering: A detailed account, Contemporary Math., 414 Operator theory, operator algebras and applications, D. Han, P.E.T. Jorgensen and D.R. Larson Eds. (2006) 297–356.
* [16] Z. Chen, J. J. Dongarra, Condition numbers of Gaussian random matrices, SIAM J. Matrix Anal. Appl. 27 (2005) 603–620.
* [17] K. R. Davidson, S. J. Szarek, Local operator theory, random matrices and Banach spaces, In: Handbook in Banach Spaces Vol I, ed. W. B. Johnson, J. Lindenstrauss, Elsevier (2001) 317–366.
* [18] L. Demanet, P. Hand, Stable optimizationless recovery from phaseless linear measurements, Available online: arXiv:1208.1803
* [19] Y. C. Eldar, S. Mendelson, Phase retrieval: Stability and recovery guarantees, Available online: arXiv:1211.0872
* [20] A. Fannjiang, Absolute uniqueness in phase retrieval with random illumination, Inverse Probl. 28 (2012) 075008.
* [21] M. Fickus, D. G. Mixon, Numerically erasure-robust frames, Linear Algebra Appl. 437 (2012) 1394–1407.
* [22] J. Finkelstein, Pure-state informationally complete and “really” complete measurements, Phys. Rev. A, 70 (2004) 052107.
* [23] R. Hartshorne, Algebraic Geometry, Graduate Texts in Mathematics, Springer, New York, 1977.
* [24] T. Heinosaari, L. Mazzarella, M. M. Wolf, Quantum tomography under prior information, Available online: arXiv:1109.5478
* [25] K. Jaganathan, S. Oymak, B. Hassibi, Recovery of sparse 1-D signals from the magnitudes of their Fourier transform, Available online: arXiv:1206.1405
* [26] I. M. James, Euclidean models of projective spaces, Bull. London Math. Soc. 3 (1971) 257–276.
* [27] P. Jaming, Uniqueness results for the phase retrieval problem of fractional Fourier transforms of variable order, Available online: arXiv:1009.3418
* [28] S. M. Kay, Fundamentals of Statistical Signal Processing: Estimation Theory, Prentice Hall, 1993.
* [29] E. L. Lehmann, G. Casella, Theory of Point Estimation, 2nd ed., Springer, 1998.
* [30] R. J. Lipton, Definitions, Definitions, Do We Need Them? Godel’s Lost Letter and P=NP, Available online: http://rjlipton.wordpress.com/2010/01/23/definitions-definitions-do-we-need-them/
* [31] K. H. Mayer, Elliptische Differentialoperatoren und Ganzzahligkeitssätze für charakteristische Zahlen, Topology 4 (1965) 295–313.
* [32] R. J. Milgram, Immersing projective spaces, Ann. Math. 85 (1967) 473–482.
* [33] A. Mukherjee, Embedding complex projective spaces in Euclidean space, Bull. London Math. Soc. 13 (1981) 323–324.
* [34] M. Püschel, J. Kovačević, Real, tight frames with maximal robustness to erasures, Proc. Data Compression Conf. (2005) 63–72.
* [35] B. Steer, On the embedding of projective spaces in euclidean space, Proc. London Math. Soc. 21 (1970) 489–501.
* [36] A. Vogt, Position and momentum distributions do not determine the quantum mechanical state, In: A. R. Marlow, ed., Mathematical Foundations of Quantum Theory, Academic Press, New York, 1978.
* [37] I. Waldspurger, A. d’Aspremont, S. Mallat, Phase recovery, MaxCut and complex semidefinite programming, Available online: arXiv:1206.0102
|
arxiv-papers
| 2013-02-19T14:31:24 |
2024-09-04T02:49:41.900246
|
{
"license": "Public Domain",
"authors": "Afonso S. Bandeira, Jameson Cahill, Dustin G. Mixon, Aaron A. Nelson",
"submitter": "Dustin Mixon",
"url": "https://arxiv.org/abs/1302.4618"
}
|
1302.4693
|
Virtual in situs: Sequencing mRNA from cryo-sliced Drosophila embryos to
determine genome-wide spatial patterns of gene expression
Peter A. Combs1,∗, Michael B. Eisen2,3
1 Graduate Program in Biophysics, University of California, Berkeley,
California, United States of America
2 Department of Molecular and Cell Biology, University of California,
Berkeley, California, United States of America
3 Howard Hughes Medical Institute, University of California, Berkeley,
California, United States of America
$\ast$ E-mail: [email protected]
## Abstract
Complex spatial and temporal patterns of gene expression underlie embryo
differentiation, but methods do not yet exist for the efficient genome-wide
determination of spatial expression patterns during development. In situ
imaging of transcripts and proteins is the gold-standard, but it is difficult
and time consuming to apply to an entire genome, even when highly automated.
Sequencing, in contrast, is fast and genome-wide, but is generally applied to
homogenized tissues, thereby discarding spatial information. It is likely that
these methods will ultimately converge, and we will be able to sequence RNAs
in situ, simultaneously determining their identity and location. As a step
along this path, we developed methods to cryosection individual blastoderm
stage Drosophila melanogaster embryos along the anterior-posterior axis and
sequence the mRNA isolated from each 25$\mu\mbox{m}$ slice. The spatial
patterns of gene expression we infer closely match patterns previously
determined by in situ hybridization and microscopy. We applied this method to
generate a genome-wide timecourse of spatial gene expression from shortly
after fertilization through gastrulation. We identify numerous genes with
spatial patterns that have not yet been described in the several ongoing
systematic in situ based projects. This simple experiment demonstrates the
potential for combining careful anatomical dissection with high-throughput
sequencing to obtain spatially resolved gene expression on a genome-wide
scale.
## Introduction
Analyzing gene expression in multicellular organisms has long involved a
tradeoff between the spatial precision of imaging and the efficiency and
comprehensiveness of genomic methods. RNA in situ hybridization (ISH) and
antibody staining of fixed samples, or fluorescent imaging of live samples,
provides high resolution spatial information for small numbers of genes [1, 2,
3]. But even with automated sample preparation, imaging, and analysis, in situ
based methods are difficult to apply to an entire genome's worth of transcripts
or proteins. High throughput genomic methods, such as DNA microarray
hybridization or RNA sequencing, are fast and relatively inexpensive, but the
amount of input material they require has generally limited their application
to homogenized samples, often from multiple individuals. Methods involving the
tagging, sorting, and analysis of RNA from cells in specific spatial domains
have shown promise [4], but remain non-trivial to apply systematically,
especially across genotypes and species.
Recent advances in DNA sequencing suggest an alternative approach. With
increasingly sensitive sequencers and improved protocols for sample
preparation, it is now possible to analyze small samples without
amplification. Several years ago we developed methods to analyze the RNA from
individual Drosophila embryos [5]. As we often recovered more RNA from each
embryo than was required to obtain accurate measures of gene expression, we
wondered whether we could obtain good data from pieces of individual embryos,
and whether we could obtain reliable spatial expression information from such
data. To test this possibility, we chose to focus on anterior-posterior (A-P)
patterning in the early D. melanogaster embryo, as the system is
extremely well-characterized and the geometry of the early embryo also lends
itself to biologically meaningful physical dissection by simple sectioning
along the elongated A-P axis.
## Results
To test whether we could consistently recover and sequence RNA from sectioned
D. melanogaster embryos, we collected embryos from our laboratory stock of the
line CantonS (CaS), aged them for approximately 2.5 hours so that the bulk of
the embryos were in the cellular blastoderm stage, and fixed them in methanol.
We examined the embryos under a light microscope and selected single embryos
that were roughly halfway through cellularization (mitotic cell cycle 14;
developmental stage 5). We embedded each embryo in a cryoprotecting gel,
flash-froze it in liquid nitrogen, and took transverse sections along the
anterior-posterior axis. For this initial trial we used 60$\mu\mbox{m}$
sections, meaning that we cut each approximately 350$\mu\mbox{m}$ embryo into
six pieces. We placed each piece into a separate tube, isolated RNA using
Trizol, and prepared sequencing libraries using the Illumina Tru-Seq kit.
In early trials we had difficulty routinely obtaining good quality RNA-seq
libraries from every section. We surmised that we were losing material from
some slices during library preparation as a result of the small amount
(approximately 15 ng) of total RNA per slice. To overcome this limitation, after
the initial RNA extraction we added RNA from a single embryo of a distantly
related Drosophila species to each tube to serve as a carrier. As we only used
distantly related and fully sequenced species as carriers, we could readily
separate reads derived from the D. melanogaster slice and the carrier species
computationally after sequencing. With the additional approximately 100ng of
total RNA in each sample, library preparation became far more robust.
We sliced and sequenced three CaS embryos using an Illumina HiSeq 2000,
obtaining approximately 40 million 50 bp paired-end reads for each
slice+carrier sample. We aligned these reads to the D. melanogaster and
carrier genomes using TopHat[6, 7], and identified between 1.7 and 31.4
percent of reads as having come unambiguously from D. melanogaster (see Table
1). We then used Cufflinks[8] to infer expression levels for all annotated
mRNAs.
The data for each slice within an embryo were generally highly correlated
(Figure S1), reflecting the large number of highly expressed genes with
spatially uniform expression patterns. The data for equivalent slices of
embryos 2 and 3 were also highly correlated, while the slices for embryo 1
were systematically less well matched to their counterparts in embryos 2 and 3
(Figure S2), suggesting that it may have been sampled at a slightly different
developmental stage.
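These within- and between-embryo comparisons amount to computing pairwise correlations of log-transformed expression values between slices. As a rough illustration only, the Python sketch below builds such a correlation matrix for one embryo; the array shapes, pseudocount, and the use of Pearson correlation on log10(FPKM) are our assumptions here, not a description of the original analysis code.

```python
import numpy as np
from itertools import combinations

def pairwise_slice_correlations(fpkm, pseudocount=1.0):
    """Pearson correlations between all slice pairs of one embryo.

    fpkm: 2D array, shape (n_genes, n_slices), FPKM per gene per slice.
    Values are log-transformed (with a pseudocount) before correlating,
    mirroring the log-log comparisons described in the text.
    """
    log_fpkm = np.log10(fpkm + pseudocount)
    n_slices = log_fpkm.shape[1]
    corr = np.eye(n_slices)
    for i, j in combinations(range(n_slices), 2):
        r = np.corrcoef(log_fpkm[:, i], log_fpkm[:, j])[0, 1]
        corr[i, j] = corr[j, i] = r
    return corr

# Toy usage with random data in place of real slice FPKM tables.
rng = np.random.default_rng(0)
toy_fpkm = rng.lognormal(mean=2.0, sigma=1.0, size=(500, 6))
print(pairwise_slice_correlations(toy_fpkm).round(2))
```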
To examine how well our data recapitulated known spatial profiles, we manually
examined a panel of genes with known anterior-posterior patterns of gene
expression. Figure 1A shows RNA in-situ hybridization patterns from the
Berkeley Drosophila Genome Project (BDGP) [2] alongside the expression data
for that gene from our sliced embryos, demonstrating a close qualitative
agreement between the visualized expression patterns and our sliced RNA-seq
data.
In order to more quantitatively compare our data to existing patterns, we
constructed a reference set of spatial expression patterns along the A-P axis
using three-dimensional “virtual embryos” from the Berkeley Drosophila
Transcription Network Project, which contain expression patterns for 95 genes
at single-nucleus resolution [1]. We transformed the relative expression
levels from these images into absolute values (FPKM) using genome-wide
expression data from intact single embryos [5]. We compared the observed
expression for these 95 genes from an average of each of our slices to all
possible 60$\mu\mbox{m}$ slices of these virtual embryos (Figure 1B). High
scores for most slices fell into narrow windows, with the best matches for
each slice falling sequentially along the embryo with a spacing of about
60$\mu\mbox{m}$, the same thickness as the slices.
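The matching of observed slices to the virtual embryos can be sketched as a sliding-window comparison: average the atlas expression of the reference genes over every possible 60$\mu\mbox{m}$ window along the A-P axis and score each observed slice against each window. The Python sketch below uses a simple correlation score rather than the Bayesian procedure described in the Figure 1 legend, and all variable names and shapes are assumptions for illustration.

```python
import numpy as np

def best_window(slice_expr, atlas_expr, atlas_pos, window=60.0, step=1.0):
    """Score an observed slice against all possible windows of an atlas.

    slice_expr: (n_genes,) expression of the atlas genes in one slice.
    atlas_expr: (n_genes, n_nuclei) expression per nucleus in the atlas.
    atlas_pos:  (n_nuclei,) A-P position of each nucleus in microns.
    Returns (scores, starts): correlation of the slice with each window.
    """
    starts = np.arange(atlas_pos.min(), atlas_pos.max() - window, step)
    scores = []
    for s in starts:
        in_win = (atlas_pos >= s) & (atlas_pos < s + window)
        window_mean = atlas_expr[:, in_win].mean(axis=1)
        scores.append(np.corrcoef(slice_expr, window_mean)[0, 1])
    return np.array(scores), starts
```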
We next used the program Cuffdiff [9] to identify 85 genes with statistically
significant differences in expression between slices (this is a very
conservative estimate). We compared these genes to those examined by the BDGP,
the most comprehensive annotation of spatial localization in D. melanogaster
development that we are aware of [2]. Of our differentially expressed genes,
21 had no imaging data available, and 33 were annotated as present in a subset
of the embryo (the annotation term meant to capture patterned genes); the
remaining 31 genes showed either clear patterns that were not annotated with
the most general keyword, or no clear staining (Figure S3). There were 194
genes tagged by the BDGP as patterned that were not picked up as having
statistically significant patterns in our data. However, most of these had
primarily dorsal-ventral patterns, faint patterns, later staging in the images
used for annotation, or had good qualitative agreement with our data but fell
above the cutoff for statistical significance (Figure S4).
As a more sensitive approach to finding patterned genes, we applied $k$-means
clustering to our data. We first filtered on expression level (at least one
slice in one embryo with FPKM > 10) and agreement between replicates (average
Pearson correlation between embryos of > 0.5), then clustered based on
normalized expression ($k=20$, centroid linkage) [10]. We identified several
broad classes of expression, including localization to each of the poles, and
several classes of expression that correspond to five different gap gene-like
bands along the A-P axis (Figure 2 and Figure S5). Of the 745 genes, only 349 had images
in the BDGP set [2]. Where present at similar stages, these data agree with the
RNA-seq patterns, although staining is often undetectable and well-matched
stages are often missing from the databases (Figure S6).
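In outline, the filtering and clustering step can be sketched as follows. This is a hedged illustration only: it uses SciPy's k-means rather than the open source clustering software of [10], a simple per-gene maximum normalization, and assumed array layouts; the thresholds match those quoted in the text (FPKM > 10 in at least one slice, mean between-embryo correlation > 0.5, $k=20$).

```python
import numpy as np
from scipy.cluster.vq import kmeans2

def filter_and_cluster(embryo_fpkms, k=20, min_fpkm=10.0, min_corr=0.5):
    """Filter genes and cluster their normalized A-P profiles.

    embryo_fpkms: list of (n_genes, n_slices) arrays, one per embryo,
                  rows aligned to the same genes.
    Keeps genes with FPKM >= min_fpkm in at least one slice of one embryo
    and average between-embryo Pearson correlation >= min_corr, then
    normalizes each gene to its per-embryo maximum and runs k-means.
    """
    stacked = np.hstack(embryo_fpkms)                 # genes x all slices
    expressed = (stacked >= min_fpkm).any(axis=1)

    n_emb = len(embryo_fpkms)
    corr_sum = np.zeros(stacked.shape[0])
    n_pairs = 0
    for i in range(n_emb):
        for j in range(i + 1, n_emb):
            a, b = embryo_fpkms[i], embryo_fpkms[j]
            for g in range(a.shape[0]):
                # nan_to_num guards genes with zero variance in an embryo
                corr_sum[g] += np.nan_to_num(np.corrcoef(a[g], b[g])[0, 1])
            n_pairs += 1
    reproducible = (corr_sum / max(n_pairs, 1)) >= min_corr

    keep = expressed & reproducible
    normed = np.hstack([e / np.maximum(e.max(axis=1, keepdims=True), 1e-9)
                        for e in embryo_fpkms])[keep]
    _, labels = kmeans2(normed, k, minit="++")
    return keep, labels
```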
To extend our dataset, we collected individual embryos from seven different
time points based on morphology—stage 2, stage 4, and 5 time points within
stage 5—and sliced them into 25$\mu\mbox{m}$ sections, yielding between 10 and
15 contiguous, usable slices per embryo. For these embryos we used total RNA
from the yeasts Saccharomyces cerevisiae and Torulaspora delbrueckii as
carrier, which are so far diverged as to have fewer than 0.003% of reads
ambiguously mapping.
These finer slices are better able to distinguish broad gap-gene domains, with
several slices of relatively low expression between the multiple domains of
hb, kni, and gt, whereas the coarser slices capture at most one or two such
low-expression slices. Excitingly, we can also distinguish the repression between stripes of
pair-rule genes like eve as well (Figure 3). Given the non-orthogonal
orientation of the anterior-most and posterior-most eve stripes relative to
the AP axis, we do not expect to see all 7 pair-rule stripes, but at least
three can be unambiguously observed.
Putting the 60$\mu\mbox{m}$ and 25$\mu\mbox{m}$ slice datasets together, we find a large number of
genes with reproducible patterns in the 60$\mu\mbox{m}$ slices whose formation over time
can be clearly seen in the timed 25$\mu\mbox{m}$ slices, including many with no
previously described early patterns (Figure S7).
## Discussion
The experiments reported demonstrate that slicing and sequencing animal
embryos is a practical and effective method to systematically characterize
spatial patterns of expression. While we are by no means the first to dissect
samples and characterize their RNAs—Ding and Lipshitz pioneered this kind of
analysis twenty years ago [11]—to our knowledge we are the first to
successfully apply such a technique to report genome-wide spatial patterns in
a single developing animal embryo.
Given the degree to which the D. melanogaster embryo has been studied, and the
presence of at least two large in situ based studies whose goals were to
systematically identify and characterize genes with patterned expression in
the embryo, we were surprised by the large number of genes we find to be clearly
patterned that had not been previously described as such. We note in
particular a large number of genes with expression restricted to the poles,
most with no known role in either anterior patterning or pole cell formation
or activity. This emphasizes the potential for sequencing-based methods to
replace in situ based studies in the systematic analysis of patterned gene
expression, as they are not only simpler, cheaper, and easier to apply to
different species and genetic backgrounds, but appear to be more sensitive.
The data we present here are far from perfect: the relatively small number of
reads per slice means that the slice-by-slice data are somewhat noisy. However,
the consistency between replicates and the agreement between the 25$\mu\mbox{m}$ and 60$\mu\mbox{m}$
data demonstrate that the experiment clearly worked, and additional sequencing
depth and better methods for working with small samples should greatly reduce
the noise as we move forward.
Obviously, to truly replace in situ based methods, sequencing based methods
will need to achieve greater resolution than presented here. One can envision
three basic approaches to achieving the ultimate goal of determining the
location of every RNA in a spatially complex tissue. Sequencing RNAs in place
in intact tissues would obviously be the ideal method, and we are aware of
several groups working towards this goal. In the interim, however, methods to
isolate and characterize smaller and smaller subsets of cells are our only
alternative. One possibility is to combine spatially restricted reporter gene
expression and cell sorting to purify and characterize the RNA composition of
differentiated tissue—cf. [4]. While elegant, this approach cannot be rapidly
applied to different genetic backgrounds, requires separate tags for every
region/tissue to be analyzed, and will likely not work on single individuals.
Sectioning based methods offer several advantages, principally that they can
be applied to almost any sample from any genetic background or species, and
allow for the biological precision of investigating single individuals. The
60$\mu\mbox{m}$ and 25$\mu\mbox{m}$ slices we used here represent reasonable
tradeoffs between sequencing depth and spatial resolution given the current
limits of sample preparation and sequencing methods, but with methods having
been described to sequence the RNAs from “single” cells, it should be possible
to obtain far better linear spatial resolution in the near future.
Finally, as sequencing costs continue to plummet, it should be possible to
sequence greater numbers of increasingly small samples. According to our
estimates, a single embryo contains enough RNA to sequence over 700 samples to
a depth of 20 million reads. While this number of samples would necessitate
more advanced sectioning and sample preparation techniques, the ultimate goal
of knowing the localization of every single transcript is rapidly becoming
feasible.
## Materials and Methods
### Fly Line, Imaging, and Slicing
We raised flies on standard media at $25^{\circ}$C in uncrowded conditions, and
collected eggs from many 3- to 10-day-old females from our Canton-S lab stocks.
We washed and dechorionated the embryos, then fixed them according to a
standard methanol cracking protocol. Briefly, we initially placed embryos in
20ml glass vials containing 10ml of heptane and 10ml of PEM (100mM PIPES, 2mM
EGTA, 1mM MgSO4) and mixed gently. We then removed the aqueous phase, added
10ml of methanol, shook vigorously for 15-30 seconds, and collected the
devitellinized embryos, which we washed several times in methanol to remove
residual heptane. We then placed the fixed embryos on a slide in halocarbon
oil, and imaged them on a Nikon 80i with a DS-5M camera. After selecting embryos at
the appropriate stage according to depth of membrane invagination and other
morphological features, we washed embryos with methanol saturated with
bromophenol blue dye (Fisher, Fair Lawn NJ), aligned them in standard cryotomy
cups (Polysciences Inc, Warrington, PA), covered them with OCT tissue freezing
medium (Triangle Biomedical, Durham, NC), and flash froze them in liquid
nitrogen.
We sliced frozen embryos on a Microm HM 550 (Thermo Scientific, Kalamazoo, MI)
at a thickness of 60$\mu\mbox{m}$ or 25$\mu\mbox{m}$. We adjusted the
horizontal position of the blade after every slice to eliminate the
possibility of carry-over from previous slices, and used a new blade for every
embryo. We placed each slice in an individual RNase-free, non-stick tube (Life
Technologies, Grand Island, NY).
### RNA Extraction, Library Preparation, and Sequencing
We performed RNA extraction in TRIzol (Life Technologies, Grand Island, NY)
according to manufacturer instructions, except with a higher concentration of
glycogen as carrier (20 ng) and a higher relative volume of TRIzol to the
expected material (1 mL, as in [5]). For the 60$\mu\mbox{m}$ slices, we pooled total RNA
from each slice with total RNA from single D. persimilis, D. willistoni, or D.
mojavensis embryos, then made libraries according to a modified TruSeq mRNA
protocol from Illumina. We prepared all reactions with half-volume sizes to
increase relative sample concentration, and after AmpureXP cleanup steps, we
took care to pipette off all of the resuspended sample, leaving less than 0.5
$\mu L$, rather than the 1-3 $\mu L$ in the protocol. Furthermore, we only
performed 13 cycles of PCR amplification rather than the 15 in the protocol,
to minimize PCR duplication bias.
Libraries were quantified using the Kapa Library Quantification kit for the
Illumina Genome Analyzer platform (Kapa Biosystems) on a Roche LC480 RT-PCR
machine according to the manufacturer’s instructions, then pooled to equalize
index concentration. Pooled libraries were then submitted to the Vincent
Coates Genome Sequencing Laboratory for 50bp paired-end sequencing according
to standard protocols for the Illumina HiSeq 2000. Bases were called using
HiSeq Control Software v1.8 and Real Time Analysis v2.8.
### Mapping and Quantification
Reads were mapped using TopHat v2.0.6 to a combination of the FlyBase
reference genomes (version FB2012_05) for D. melanogaster and the appropriate
carrier species genomes with a maximum of 6 read mismatches [12, 13]. Reads
were then assigned to either the D. melanogaster or carrier genomes if there
were at least 4 positions per read to prefer one species over the other. We
used only the reads that mapped to D. melanogaster to generate transcript
abundances in Cufflinks.
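As a rough illustration of the species-assignment rule described above, the Python sketch below assigns a read based on the difference in mismatch counts between its best alignments to the two genomes, using the "at least 4 positions per read" margin from the Methods. The function name, its inputs, and the assumption that per-read mismatch counts (e.g., from alignment tags) are already available are ours; this is not the authors' pipeline code.

```python
def assign_read(mel_mismatches, carrier_mismatches, margin=4):
    """Assign a read to D. melanogaster, the carrier species, or neither.

    mel_mismatches / carrier_mismatches: mismatches of the best alignment
    to each genome, or None if the read did not align there.
    A read is assigned to a genome only if it aligns at least `margin`
    positions better there than to the other genome; everything else is
    counted as ambiguous.
    """
    if mel_mismatches is None and carrier_mismatches is None:
        return "unmapped"
    if carrier_mismatches is None:
        return "melanogaster"
    if mel_mismatches is None:
        return "carrier"
    if carrier_mismatches - mel_mismatches >= margin:
        return "melanogaster"
    if mel_mismatches - carrier_mismatches >= margin:
        return "carrier"
    return "ambiguous"

# Example: 1 mismatch to D. mel, 7 to the carrier -> assigned to D. mel.
print(assign_read(1, 7))
```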
### Data and Software
We have deposited all reads in the NCBI GEO under accession number
GSE43506, which is available immediately. The processed data, including a
search feature for the 25$\mu\mbox{m}$ dataset, are available at the journal
website and at eisenlab.org/sliceseq. All custom analysis software is
available at github.com/petercombs/Eisenlab-Code, and is primarily written in
Python [14, 15, 16, 17, 18]. Commit b0b115a was used to perform all analysis
in this paper.
## Acknowledgments
We thank all who have contributed feedback through the open review of the
manuscript on MBE’s blog and the arXiv.
## References
* 1. Fowlkes CC, Hendriks CLL, Keränen SVE, Weber GH, Rübel O, et al. (2008) A quantitative spatiotemporal atlas of gene expression in the Drosophila blastoderm. Cell 133: 364–374.
* 2. Tomancak P, Berman BP, Beaton A, Weiszmann R, Kwan E, et al. (2007) Global analysis of patterns of gene expression during Drosophila embryogenesis. Genome Biology 8: R145.
* 3. Lécuyer E, Yoshida H, Parthasarathy N, Alm C, Babak T, et al. (2007) Global analysis of mRNA localization reveals a prominent role in organizing cellular architecture and function. Cell 131: 174–187.
* 4. Steiner FA, Talbert PB, Kasinathan S, Deal RB, Henikoff S (2012) Cell-type-specific nuclei purification from whole animals for genome-wide expression and chromatin profiling. Genome Research 22: 766–777.
* 5. Lott SE, Villalta JE, Schroth GP, Luo S, Tonkin LA, et al. (2011) Noncanonical compensation of zygotic X transcription in early Drosophila melanogaster development revealed through single-embryo RNA-seq. PLoS Biology 9: e1000590.
* 6. Langmead B, Salzberg SL (2012) Fast gapped-read alignment with Bowtie 2. Nature Methods 9: 357–359.
* 7. Kim D, Pertea G, Trapnell C, Pimentel H, Kelley R, et al. (2013) TopHat2: accurate alignment of transcriptomes in the presence of insertions, deletions and gene fusions. Genome Biology 14: R36.
* 8. Roberts A, Goff L, Pertea G, Kim D, Kelley DR, et al. (2012) Differential gene and transcript expression analysis of RNA-seq experiments with TopHat and Cufflinks. Nature Protocols 7: 562–578.
* 9. Trapnell C, Hendrickson DG, Sauvageau M, Goff L, Rinn JL, et al. (2013) Differential analysis of gene regulation at transcript resolution with RNA-seq. Nature Biotechnology 31: 46–53.
* 10. de Hoon MJL, Imoto S, Nolan J, Miyano S (2004) Open source clustering software. Bioinformatics (Oxford, England) 20: 1453–1454.
* 11. Ding D, Lipshitz HD (1993) A molecular screen for polar-localised maternal RNAs in the early embryo of Drosophila. Zygote (Cambridge, England) 1: 257–271.
* 12. McQuilton P, St Pierre SE, Thurmond J, the FlyBase Consortium (2011) FlyBase 101 - the basics of navigating FlyBase. Nucleic Acids Research 40: D706–D714.
* 13. Trapnell C, Pachter L, Salzberg SL (2009) TopHat: discovering splice junctions with RNA-Seq. Bioinformatics (Oxford, England) 25: 1105–1111.
* 14. Van Rossum G, Drake FL (2003) Python language reference manual.
* 15. Cock PJA, Antao T, Chang JT, Chapman BA, Cox CJ, et al. (2009) Biopython: freely available Python tools for computational molecular biology and bioinformatics. Bioinformatics (Oxford, England) 25: 1422–1423.
* 16. Hunter JD (2007) Matplotlib: A 2D graphics environment. Computing In Science & Engineering 9: 90–95.
* 17. Jones E, Oliphant T, Peterson P, others (2001) SciPy: Open source scientific tools for Python.
* 18. Perez F, Granger BE (2007) IPython: a system for interactive scientific computing. Computing In Science & Engineering .
## Figure Legends
Figure 1: Expression in the slices closely matches previous expression data.
(A) 33 genes with previously known A-P patterns are shown with virtual in situ
images for each of the three 60$\mu\mbox{m}$ sliced CaS embryos. The virtual
in situ images are each scaled to the slice with the highest expression level
for each embryo individually. (B) Expression data closely matches with
previous quantitative data at the same stages. We used a Bayesian procedure to
estimate the location of each 60$\mu\mbox{m}$ slice with reference to an
ISH-based atlas, with absolute expression levels set using whole embryo RNA-seq
data. The line graphs represent the distribution of position estimates for
each slice, and the colored bars are one sixth the embryo width and placed at
the position of greatest probability. Figure 2: Heat maps of gene expression
clusters. Of the $k=20$ clusters, 13 with non-uniform patterns are shown. The
expression levels for each gene were normalized for clustering and display so
that the maximum expression of each gene in each embryo is dark blue. The plot
above each cluster is the mean normalized expression level in that cluster.
Figure 3: Expression of key patterning genes across early development.
Expression levels in the 25$\mu\mbox{m}$ timeseries are normalized to the highest
expression level at any time point. For slices with poor quality data
(timepoint 4, slice 10; timepoint 6, slice 6; timepoint 7, slice 7; and
timepoint 7, slice 8) data imputed from neighboring slices is shown.
Expression levels for the 60$\mu\mbox{m}$ slice samples are normalized to the highest
level in each embryo.
## Supplemental Figure Legends
Prepublication Supplemental Figures available by request.
Figure S1. Correlation of slices within embryos. Log-log plots of FPKM values
between slices within each of the three 60$\mu\mbox{m}$ sliced embryos.
Figure S2. Correlation of slices between embryos. Log-log plots of FPKM values
of corresponding slices between each of the three 60$\mu\mbox{m}$ sliced
embryos.
Figure S3. Genes called as patterned by Cuffdiff lacking subset tag in BDGP
database. Images are from BDGP; graphs are average of three CaS embryos. Many
of these are known patterned genes, highlighting the incompleteness of
available annotations.
Figure S4. Genes with subset tag in BDGP not called as patterned by Cuffdiff.
Figure S5. Figure 2 with gene names.
Figure S6. Images from BDGP for genes in clusters shown in Figure 2.
Figure S7. Data from 25$\mu\mbox{m}$ timecourse and 60$\mu\mbox{m}$ embryos
for a large number of genes with manually curated patterns.
## Tables
Table 1: Sequencing statistics for sliced single-stage wild-type mRNA-Seq
samples
Replicate | Slice | Carrier Species | Barcode Index | Total Reads | Uniquely mapped D. mel reads (%) | Ambiguous Reads (%)
---|---|---|---|---|---|---
1 | 1 | D. per | 1 | 69,339,972 | 2,284,228 (3.2%) | 1,634,055 (2.3%)
1 | 2 | D. per | 2 | 73,632,862 | 3,706,630 (5.0%) | 1,603,444 (2.1%)
1 | 3 | D. per | 3 | 82,076,328 | 6,002,034 (7.3%) | 1,774,485 (2.1%)
1 | 4 | D. per | 4 | 73,437,708 | 6,401,565 (8.7%) | 1,592,665 (2.1%)
1 | 5 | D. per | 5 | 75,922,812 | 4,951,178 (6.5%) | 1,559,097 (2.0%)
1 | 6 | D. per | 6 | 78,623,784 | 1,355,079 (1.7%) | 1,574,067 (2.0%)
2 | 1 | D. wil | 7 | 59,813,036 | 4,066,295 (6.7%) | 878,476 (1.4%)
2 | 2 | D. wil | 8 | 90,961,338 | 15,212,716 (16.7%) | 1,301,095 (1.4%)
2 | 3 | D. wil | 9 | 73,201,902 | 14,855,374 (20.2%) | 911,768 (1.2%)
2 | 4 | D. wil | 10 | 75,754,772 | 23,858,301 (31.4%) | 1,136,031 (1.4%)
2 | 5 | D. wil | 11 | 84,497,566 | 10,026,713 (11.8%) | 1,080,910 (1.2%)
2 | 6 | D. wil | 12 | 66,316,952 | 13,122,508 (19.7%) | 898,776 (1.3%)
3 | 1 | D. moj | 13 | 75,847,986 | 12,496,248 (16.4%) | 3,615,452 (4.7%)
3 | 2 | D. moj | 14 | 72,497,660 | 4,005,714 (5.5%) | 803,381 (1.1%)
3 | 3 | D. moj | 15 | 77,532,368 | 11,138,154 (14.3%) | 772,446 (0.9%)
3 | 4 | D. moj | 16 | 83,400,882 | 8,227,562 (9.8%) | 861,839 (1.0%)
3 | 5 | D. moj | 18 | 83,608,454 | 2,630,069 (3.1%) | 795,169 (0.9%)
3 | 6 | D. moj | 19 | 85,823,784 | 2,239,493 (2.6%) | 829,382 (0.9%)
Counts are for read ends. Discordant read ends are always classed as
ambiguous, but failure of one end to map does not disqualify the other.
|
arxiv-papers
| 2013-02-19T17:44:59 |
2024-09-04T02:49:41.912581
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Peter A. Combs, Michael B. Eisen",
"submitter": "Peter Combs",
"url": "https://arxiv.org/abs/1302.4693"
}
|
1302.4779
|
# Failure Data Analysis of HPC Systems
This work was supported in part by Contract 74837-001-0349 from the Regents of
the University of California (Los Alamos National Laboratory) to William Marsh
Rice University and by the National Science Foundation under grant ACI 02-19597.
Charng-Da Lu
Buffalo, NY 14214
###### Abstract
Continuous availability of HPC systems built from commodity components has
become a primary concern as system size grows to thousands of processors. In
this paper, we present the analysis of 8-24 months of real failure data
collected from three HPC systems at the National Center for Supercomputing
Applications (NCSA). The results show that the availability is 98.7-99.8% and
most outages are due to software halts. On the other hand, downtime is
mostly attributable to hardware halts or scheduled maintenance. We also used
failure clustering analysis to identify several correlated failures.
## 1 . Introduction
Continuous availability of high performance computing (HPC) systems built from
commodity components has become a primary concern as system size grows to
thousands of processors. To design more reliable systems, a solid
understanding of the failure behavior of current systems is needed. Therefore, we
believe failure data analysis of HPC systems can serve three purposes. First,
it highlights dependability bottlenecks and serves as a guideline for
designing more reliable systems. Second, real data can be used to drive
numerical evaluation of performability models and simulations, which are an
essential part of reliability engineering. Third, it can be applied to predict
node availability, which is useful for resource characterization and
scheduling [1].
In this paper, we studied 8-24 months of real failure data collected from
three HPC systems111All NCSA HPC systems described in this paper have been
decommissioned in 2003 and 2004. at the National Center for Supercomputing
Applications (NCSA). The remainder of this paper is organized as follows. In
§2 we describe the system characteristics and failure data collection. We
present preliminary analysis of failure data in §3, followed by failure
distribution and correlation analysis in §4 and §5. We summarize related work
in §6 and conclude our study in §7.
## 2 . The Systems and Measurements
The three HPC systems we studied are quite different architecturally. The
first is an array of SGI Origin 2000 (O2K) machines. SGI Origin 2000 is a cc-
NUMA distributed shared memory supercomputer. An O2K can have up to 512 CPUs
and 1 TB of memory, all under control of one single-system-image IRIX
operating system. The configuration at NCSA is an array of twelve O2K’s (total
1520 CPUs) connected by proprietary, high-speed HIPPI switches. Table 1 lists
its detailed specification. The machines A, B, E, F, and N are equipped with
250 MHz MIPS R10000 processors, and the rest with 195 MHz MIPS R10000
processors. M4 accepts interactive access, while the other machines only
service batch jobs. Peak performance of NCSA O2K is 328 gigaflops.
The second and the third systems are Beowulf-style PC clusters. “Platinum”
cluster has 520 two-way SMP 1 GHz Pentium-III nodes (1040 CPUs), 512 of which
are compute nodes (2 GB memory), and the rest are storage nodes and
interactive access nodes (1.5 GB memory). “Titan” cluster consists of 162 two-
way SMP 800 MHz Itanium-1 nodes (324 CPUs), 160 of which are compute nodes
(1.5 GB memory) and 2 are for interactive access. Both clusters use Myrinet
2000 and Gigabit Ethernet as system interconnects. Myrinet is faster and is used for
node communications, whereas Gigabit Ethernet is slower and serves I/O
traffic. Both clusters have one teraflop of peak performance.
All three HPC systems use batch job control software to manage workload. O2K
runs the LSF (Load Sharing Facility) queueing system. Each job on O2K has
resource limits of 50 hours of run-time and 256 CPUs. Platinum and Titan
employ Portable Batch System with the Maui Scheduler, and the job limits are
352 and 128 nodes for 24 hours, respectively.
According to a user survey [8], the NCSA HPC systems are devoted to multiple
disciplinary sciences research: physics (20%), engineering (16%), chemistry
(14%), biology (13%), astronomy (13%), and material science (12%). Seventy
percent of users write programs in Fortran (F90 and F77) or mix of Fortran and
C/C++. Sixty-five percent of users use MPI or OpenMP as the parallel programming
model. In terms of job sizes, 22% of users typically allocate 9-16 CPUs. About
equally many users (14-15%) allocate 2-4, 5-8, 17-32, or 33-64 CPUs.
The failure log was collected in the form of monthly or quarterly reliability
reports. At the end of a month or a quarter, a report for each node/machine is
created. A report records outage date (but no outage time), type, and
duration. There are five outage types defined by NCSA system administrator:
Software Halt (SW), Hardware Halt (HW), Scheduled Maintenance (M), Network
Outages, and Air Conditioning or Power Halts (PWR). The cause of an outage is
determined as follows: a program that runs at machine boot time prompts the
administrator to enter the reason for the outage. If nothing is entered after
two minutes, the program defaults to recording a Software Halt.
The data collection period was two years (April 2000 to March 2002) for O2K
and eight months (January 2003 to August 2003) for Platinum and Titan. In this
set of failure log, there is no occurrence of Network Outage, so we exclude it
from the rest of analysis.
| All | A | B | E | F | H1 | H | J | M | M2 | M4 | N | S
---|---|---|---|---|---|---|---|---|---|---|---|---|---
CPUs | 1520 | 128 | 256 | 128 | 128 | 128 | 128 | 128 | 128 | 64 | 48 | 128 | 128
Mem (GB) | 618 | 64 | 128 | 64 | 76 | 64 | 32 | 32 | 32 | 16 | 14 | 64 | 32
Outages | 687 | 87 | 182 | 40 | 81 | 42 | 25 | 32 | 24 | 41 | 59 | 37 | 37
SW (%) | 59 | 74 | 68 | 53 | 63 | 57 | 60 | 59 | 42 | 44 | 39 | 49 | 46
HW(%) | 13 | 8 | 19 | 13 | 9 | 21 | 8 | 3 | 17 | 10 | 5 | 5 | 32
M(%) | 21 | 11 | 12 | 28 | 20 | 12 | 24 | 19 | 29 | 32 | 49 | 38 | 14
PWR(%) | 7 | 7 | 2 | 8 | 9 | 10 | 8 | 19 | 13 | 15 | 7 | 8 | 8
Downtime (day) | 9.5 | 8.7 | 19.2 | 15.3 | 5.9 | 13.8 | 6.2 | 3.6 | 4.8 | 7.5 | 4.5 | 5.5 | 5.0
SW (%) | 27 | 32 | 49 | 11 | 36 | 9 | 5 | 25 | 15 | 27 | 12 | 6 | 16
HW(%) | 28 | 35 | 29 | 1 | 10 | 67 | 49 | $<1$ | 22 | 12 | 4 | 28 | 22
M(%) | 41 | 28 | 20 | 85 | 44 | 22 | 45 | 66 | 58 | 52 | 75 | 62 | 58
PWR(%) | 4 | 4 | 2 | 3 | 9 | 2 | 2 | 9 | 4 | 9 | 10 | 4 | 4
Avail(%) | 98.7 | 98.8 | 97.4 | 97.9 | 99.2 | 98.1 | 99.2 | 99.5 | 99.3 | 99.0 | 99.4 | 99.2 | 99.3
S Avail(%) | 99.2 | 99.1 | 97.9 | 99.7 | 99.6 | 98.5 | 99.5 | 99.8 | 99.7 | 99.5 | 99.8 | 99.7 | 99.7
MTBF (day) | 1.0 | 8.1 | 4.0 | 15.9 | 8.6 | 14.7 | 29.5 | 22.5 | 30.3 | 17.4 | 12.3 | 18.6 | 18.5
StdDev | 2.1 | 14.5 | 5.8 | 28.5 | 14.7 | 20.9 | 36.9 | 33.4 | 48.3 | 31.7 | 18.7 | 34.3 | 22.9
Median | 0.9 | 1.7 | 2.1 | 1.7 | 2.5 | 3.5 | 25.0 | 1.0 | 4.0 | 1.6 | 5.7 | 1.0 | 11.0
MTTR (hr) | 3.5 | 2.4 | 2.5 | 9.2 | 1.7 | 7.9 | 6.0 | 2.7 | 4.8 | 4.4 | 1.9 | 3.6 | 3.2
StdDev | 13.1 | 7.8 | 5.2 | 29.9 | 5.1 | 33.2 | 15.6 | 7.5 | 9.3 | 7.7 | 8.1 | 8.8 | 7.2
Median | 0.5 | 0.4 | 0.9 | 0.43 | 0.4 | 0.5 | 0.5 | 0.3 | 0.5 | 0.7 | 0.4 | 0.4 | 0.5
MTTR SW | 1.5 | 1.1 | 1.8 | 1.9 | 1.0 | 1.3 | 0.5 | 1.1 | 1.7 | 2.7 | 0.6 | 0.4 | 1.1
MTTR HW | 6.3 | 10.5 | 3.9 | 0.6 | 2.0 | 24.7 | 36.3 | 0.4 | 6.4 | 5.4 | 1.3 | 18.6 | 2.2
MTTR M | 8.0 | 5.8 | 4.3 | 28.6 | 3.9 | 14.6 | 11.2 | 9.6 | 9.6 | 7.1 | 2.8 | 5.9 | 13.8
MTTR PWR | 2.1 | 1.5 | 3.4 | 3.8 | 1.0 | 1.5 | 1.3 | 1.2 | 1.7 | 2.8 | 2.6 | 1.7 | 1.7
Table 1: O2K Failure Data Summary | | Platinum | Titan
---|---|---
Outages | 7279 | 947
Outage/Node | 14.00 | 5.85
SW (%) | 84 | 60
HW (%) | $<0.1$ | 5
M (%) | 16 | 1
PWR (%) | 0 | 34
Downtime/Node (hr) | 12.16 | 12.55
SW (%) | 69 | 18
HW (%) | 10 | 18
M (%) | 21 | $<1$
PWR (%) | 0 | 64
Avail (%) | 99.79 | 99.78
S Avail (%) | 99.83 | 99.79
| | Platinum | Titan
---|---|---
System MTBF (hr) | 0.79 | 5.99
StdDev | 5.77 | 40.09
Node MTBF (day) | 14.16 | 26.89
StdDev | 15.87 | 25.75
Median | 9 | 29
Node MTTR (hr) | 0.87 | 2.15
StdDev | 4.27 | 4.65
Median | 0.15 | 0.28
MTTR SW | 0.70 | 0.63
MTTR HW | 100.67 | 7.60
MTTR M | 1.15 | 0.55
MTTR PWR | $-$ | 4.08
Table 2: Platinum and Titan Failure Data Summary
## 3 . Preliminary Results
Before describing the failure data, we would like to clarify some terminology.
Time to Failure (TTF) is the interval between the end of last failure and the
beginning of next failure. Time between Failures (TBF) is the interval between
the beginnings of two consecutive failures. Time to Repair (TTR) is synonymous
with Downtime. Figure 1 illustrates the differences. Because the failure log
does not include the start and end times of outages, we can only calculate
TBFs in terms of days.
Figure 1: TBF, TTF, and TTR
Table 1 and 2 and Figure 2 summarize the failure data for the three HPC
systems. There are two kinds of availability measures. The usual availability
is computed as
$1-\frac{\sum(\mbox{\\# Down CPU}\times\mbox{Downtime})}{\mbox{\\# Total
CPU}\times\mbox{Total time}}$
The scheduled availability (S Avail) removes the Scheduled Maintenance
downtime from consideration and only counts scheduled uptime as total time, so
it is computed as
$1-\frac{\sum(\mbox{\\# Down CPU}\times\mbox{Unsched. Downtime})}{\mbox{\\#
Total CPU}\times\mbox{Sched. time}}$
Note that in O2K’s case, the twelve machines have different numbers of CPUs, so
“# Down CPU” is the number of CPUs on the failed machine. In Platinum and
Titan’s case, the “# Down CPU” is 2.
For the O2K system as a whole, the TBF reported in Table 1 is the system-wide
TBF (an outage on any machine counts as a failure), and the downtime is the
weighted average of individual machine downtimes:
$\frac{\sum(\mbox{\\# Down CPU}\times\mbox{Downtime})}{\mbox{\\# Total CPU}}$
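As a concrete illustration of these definitions, the short Python sketch below computes both availability measures from a list of outage records; the record format (CPUs affected, downtime in hours, scheduled flag) and the toy numbers are assumptions, not the authors' actual accounting code.

```python
def availabilities(outages, total_cpus, total_hours, sched_hours):
    """Compute overall and scheduled availability as defined in the text.

    outages: iterable of (down_cpus, downtime_hours, is_scheduled) tuples.
    total_hours is the full observation period; sched_hours excludes
    scheduled-maintenance time from the denominator.
    """
    lost = sum(c * d for c, d, _ in outages)
    unsched_lost = sum(c * d for c, d, s in outages if not s)
    avail = 1 - lost / (total_cpus * total_hours)
    sched_avail = 1 - unsched_lost / (total_cpus * sched_hours)
    return avail, sched_avail

# Toy example: a 128-CPU machine observed for 30 days (720 hours)
# with one 4-hour scheduled maintenance and one 2-hour software halt.
print(availabilities([(128, 4, True), (128, 2, False)], 128, 720, 716))
```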
Figure 2: The rows from top to bottom depict weekly Availability, Outages,
Downtime, and Failure Clustering (see §5), respectively. The X axis in all
plots is week. The Y axis in Downtime row is CPU-hours and in Failure
Clustering row, the number of machines/nodes involved.
From the data it is obvious that software halts account for most outages
(59-83%), but the average downtime (i.e. MTTR) is only 0.6-1.5 hours. On the
other hand, although the fraction of hardware outages is meager (1-13%),
average hardware downtime is the greatest among all unscheduled outage types
(6.3-100.7 hours). This is reasonable because hardware problems usually
require ordering and replacing parts and performing tests, while many
software problems can be fixed by a reboot.
We contacted the NCSA staff about the hardware failure causes of PC clusters.
We were told that there were two or three cases where power supplies needed to
be replaced; otherwise, the main cause of hardware outages is the Myrinet,
including network cards, cables, and switch cards. A network card resides at a
host PC and is connected by cables to the Myrinet switch enclosure. A Myrinet
switch enclosure stacks many Myrinet M3-SPINE switch cards. The usual symptom
that prompts a network card or switch card replacement is an excessive number of
CRC check errors. Sometimes the self-testing in a switch card may fail and
lead to replacement. Cable replacements also occurred because “ping” query
packets could not get through.
The availability is lower for O2K because when one of its machines is down, as
much as one-sixth of the overall system capacity could disappear (e.g. machine
B, which has 256 CPUs.) This is unlike PC clusters in which each node usually
contains no more than 8 CPUs, so the availability could degrade more
gracefully, assuming the outage is not catastrophic such as a power failure or
network partitioning. Although monolithic single-system-image machines benefit
from ease of administration, a unified view of process space, and extremely
fast interprocess communication, it seems large systems composed of finer-
grained management units are more favorable in terms of availability.
For O2K, the machine-wise TBFs and TTRs are skewed toward small values. Eleven
of twelve machines have MTBF greater than 8 days, but the medians of TBF are
mostly smaller than 4 days. For TTR, nine machines’ MTTR are greater than 2.5
hours, yet the medians are 0.3-0.9 hours. The same phenomenon also occurs on
Platinum and Titan’s node TTR. These observations prompted us to examine closely the
distributions of TBF and TTR; we document our findings in the next
section.
## 4 . Failure Distribution
In analytical modeling, the distributions of TBF and TTR are key components
for obtaining precise results [12] because distributions of the same mean and
variance can still yield very different outcomes. In this section, we
investigate the distributions of TBF and TTR with the assumption that failures
and repairs are all independent.
We first choose a set of distributions as our parametric probability models
and seek the parameters that best fit the data to these models. An open-source
statistical package called WAFO [2] is used to find parameters. Then we apply
chi-square test as goodness-of-fit test to pick the best-fit distribution.
Our selection of probability models includes the exponential and gamma distributions
and a family of heavy-tail distributions (Weibull, Truncated Weibull, Log-
normal, Inverse normal, and Pareto [4]). Heavy-tail means the complementary
cumulative distribution function $1-F(x)$ decays more slowly than
exponentially. Heavy-tail distributions are chosen because many failure data
studies (e.g. [10, 3]) have shown that they are actually more prevalent than
exponential distribution, which is commonly assumed in probability models to
make analysis tractable.
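A rough sketch of this fit-and-test procedure is shown below using SciPy in place of WAFO; the candidate list mirrors the one in the text (the truncated Weibull is omitted), the data are synthetic, and the binning choices for the chi-square test are our own assumptions.

```python
import numpy as np
from scipy import stats

def chi_square_gof(data, dist, params, n_fitted, n_bins=10):
    """Approximate chi-square goodness-of-fit of dist(*params) to data."""
    edges = np.quantile(data, np.linspace(0, 1, n_bins + 1))
    observed, _ = np.histogram(data, bins=edges)
    expected = len(data) * np.diff(dist.cdf(edges, *params))
    expected *= observed.sum() / expected.sum()     # match totals for the test
    return stats.chisquare(observed, expected, ddof=n_fitted)

candidates = {
    "exponential": stats.expon,
    "gamma": stats.gamma,
    "weibull": stats.weibull_min,
    "lognormal": stats.lognorm,
    "inverse normal": stats.invgauss,
    "pareto": stats.pareto,
}

tbf_days = 5 * np.random.default_rng(1).weibull(0.5, size=300)   # toy TBF data
for name, dist in candidates.items():
    params = dist.fit(tbf_days, floc=0)             # location fixed at zero
    stat, p = chi_square_gof(tbf_days, dist, params, n_fitted=len(params) - 1)
    print(f"{name:15s} chi2={stat:8.1f} p={p:.3f}")
```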
Figure 3: Distributions of node TBF and TTR. Dashed line is the fitting
distribution. The X axis in TBF plots is day and in TTR plots, hour.
For each system, we pool the TBF and TTR data of all machines/nodes and
present their distributions and fitting functions in Figure 3. For O2K, the
TTR is fit by Inverse normal $f(x)=1.87(2\pi
x^{3})^{-0.5}\exp(-12.76(x-0.37)^{2}/x)$ and TBF by Weibull
$F(x)=1-\exp(-5.61x^{0.5})$. For Platinum, the TTR is fit by Truncated Weibull
$F(x)=1-\exp(-6.79(x+0.14)^{0.15}+5.07)$ and TBF by Exponential
$F(x)=1-e^{-0.07x}$. For Titan, the TTR is fit by Gamma
$f(x)=0.27x^{-0.51}e^{-0.23x}$ and TBF by Exponential $F(x)=1-e^{-0.037x}$.
The distributions of Titan’s failure data have staircase-like shapes not found
in the other two systems’. For example, there are two sudden jumps at 1.4 hours
and 6.8 hours in Titan’s TTR distribution. These jumps mean that a large number
of nodes were down for about the same period of time, which implies a
possibility of correlated failure. To understand this anomaly, we perform a
failure correlation analysis, as described in the next section.
## 5 . Failure Correlation
In the last section we assumed the failures are independent and derived the
failure distributions. Failure independence is a common assumption in
reliability engineering to simplify analysis and system design. However, many
statistical tests and log analyses showed that real-world distributed
computing environments do exhibit correlated failures.
In this section, we investigate how outages of different machines relate to
each other using a clustering approach [13]. Roughly speaking, this approach groups
failures which are close either in space or in time. It should be emphasized
that the correlation resulting from clustering is purely statistical and does
not imply that the failures really have a cause-and-effect (causal) relationship.
Since our collection of failure log lacks error details, we can only rely on
statistics to find correlation.
To avoid confusion with the word “cluster” in “PC clusters,” we will refer to a
failure cluster as a “batch.” We define a batch to be a time period
$[T_{1},T_{2}]$ in which every day there is at least one outage (regardless of
type), and no outages occur on day $T_{1}-1$ or $T_{2}+1$. Put another way, we
coalesce into a batch the failures of different machines/nodes that occur in
consecutive days. The bottom row of Figure 2 illustrates the results. The
width and height of a rectangle indicate the duration and the machine/node
count of that batch, respectively.
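The batching rule can be implemented directly. The Python sketch below groups outages occurring on consecutive days into batches and reports each batch's span, distinct machine count, and outage count; the input record format is an assumption for illustration.

```python
from collections import defaultdict

def failure_batches(outages):
    """Group outages on consecutive days into batches.

    outages: iterable of (day, machine) pairs, where `day` is an integer
    day index. A batch is a maximal run of days [T1, T2] such that every
    day in the run has at least one outage and days T1-1 and T2+1 have none.
    Returns a list of (T1, T2, n_machines, n_outages) tuples.
    """
    by_day = defaultdict(list)
    for day, machine in outages:
        by_day[day].append(machine)

    batches = []
    for day in sorted(by_day):
        if batches and day == batches[-1][1] + 1:      # extends current batch
            t1, _, machines, count = batches[-1]
            batches[-1] = (t1, day, machines | set(by_day[day]),
                           count + len(by_day[day]))
        else:                                          # starts a new batch
            batches.append((day, day, set(by_day[day]), len(by_day[day])))
    return [(t1, t2, len(m), n) for t1, t2, m, n in batches]

# Toy example: outages on days 3-5 on three machines form one batch.
print(failure_batches([(3, "A"), (4, "B"), (4, "A"), (5, "E"), (9, "N")]))
```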
Using this method, we found there are 79 batches for O2K, accounting for 55
percent of all outages. Eighty-five percent of batches last for no more than
three days, and 89 percent of batches involve no more than four machines.
There are four batches that involve all twelve machines. In week 31, the
failure was caused by a power or air conditioning problem and was followed by a
two-day maintenance. In week 35, there was a system-wide maintenance on the
first day, but some machines experienced hardware halts and all were again
taken offline for maintenance on the second day, and all machines had short
software problems on the last day. In week 78, a system maintenance occurred
and lasted 37-91 hours. The last catastrophic outage occurred on week 97 due
to power problems. Note that the massive outages in weeks 31, 35, and 78 are
also reflected as spikes in Availability and Downtime plots.
The failure clustering plot also reveals some possible failure correlation in
Platinum and Titan systems. Statistically speaking, the chance of a batch
having a great deal of outages in a short time (e.g. the razor-thin rectangles
in the bottom row of Figure 2) is close to zero. Thus, a reasonable
explanation for such an occurrence is failure correlation. To justify this
claim, we take Platinum system as an example. There is a batch in week 4 which
contains 501 nodes in one day. If we assume failures are independent and TBF
has exponential distribution, then the number of failures in a given duration
follows Poisson distribution. So the chance of at least 501 outages in one day
is $\sum_{n=501}^{\infty}(e^{-30}30^{n}/n!)=6.3\times 10^{-14}$ where $30$ is
the average number of outages per day of the Platinum system. The log shows
that this particular outage was a Software Halt with 5-15 minutes of
downtime.
Titan system’s failure correlation is even more conspicuous. The three peaks
represent massive outages at week 10, due to a 9 minute software halt followed
by 6.8 hours of hardware halt, at week 14, due to 1.4 hours of power failure,
and at week 21, due to 6.8 hours of power failure. The 1.4 and 6.8 hours of
downtime explains the two sudden rises in Titan’s TTR distribution in Figure 3
as most nodes experienced them. The three staircases in Titan’s TBF
distribution reflect the intervals among the three massive outages, which are
64, 29, and 48 days. As in O2K’s case, the three outages of Titan are also
mirrored in Availability, Outages, and Downtime plots.
## 6 . Related Work
Field failure data analysis of very large HPC systems is usually for internal
circulation and is almost never published in detail. Nevertheless, there are
several talks and reports that shed light on the administration experience of
some of the world’s most powerful supercomputers.
Koch [5] reported the situation of ASCI White. A whole-system reboot of ASCI
White takes 4 hours and preventive maintenance is performed weekly, with
separate periods for software and hardware. Machine problems occurred in every
aspect of the system. Transient CPU faults generated invalid floating-point
numbers, and it took great effort to spot these corrupted nodes because they
passed standard diagnostic tests and only failed in real programs. Bad optical
interconnects led to non-repeatable link errors which corrupted the
computation because these errors could sneak through network host firmware
without being detected. The storage system was not 100% dependable either. The
parallel file-system sometimes failed to return I/O error to the user program
when the program was dumping restart files. In addition, the archival
subsystem’s buggy firmware corrupted restart files and made the user program
fail to launch.
Seager [11] showed that the reliability of the ASCI White improved over time
as MTBF increased steadily from as short as 5 hours in January 2001 to 40
hours in February 2003. Aside from uncategorized failures, the storage system
(both local disks and IBM Serial Disk System) is the main source of hardware
problems. Next to disks are CPU and third-party hardware troubles. For
software, communication libraries and operating systems contributed the most
interruptions.
Morrison [9] reported operations of the ASCI Q during June 2002 through February
2003. The MTBI (mean time between interruption) is 6.5 hours, and on the
average there were 114 unplanned outages per month. Putting storage subsystem
aside, hardware problems account for 73.6% of node outages, with CPU and
memory modules being responsible for over 96% of all hardware faults (CPU is
62.5% and memory is 33.6%.) Network adaptors or system boards seldom fail.
Levine [7] described the failure statistics of Pittsburgh Supercomputing
Center’s supercomputer Lemieux: MTBI during April 2002 to February 2003 is 9.7
hours, shorter than predicted 12 hours. The availability is 98.33% during mid-
November 2002 to early February 2003.
The National Energy Research Scientific Computing Center (NERSC) houses
several supercomputers and their operations are summarized in NERSC’s annual
self-evaluation reports [6]. During August 2002 to July 2003, their largest
supercomputer Seaborg reached 98.74% scheduled availability, 14 days MTBI, and
3.3 hours MTTR. Storage and file servers had similar availability. Two-thirds
of Seaborg’s outages and over 85% of storage system’s outages are due to
software.
## 7 . Conclusions
In this paper we reported the failure data analysis of three NCSA HPC systems,
one of which is an array of distributed shared memory mainframes and the rest
are PC clusters. The results show that the availability is 98.7-99.8%. Most
outages are due to software halts, but the downtime per outage is highest for
hardware halts and scheduled maintenance. We also examined the distributions
of time-between-failures and time-to-repair and found that some of them follow
heavy-tail distributions rather than the exponential. Finally, we applied failure
clustering analysis and identified several correlated failures. Because
failure data analysis of HPC system is scarce, we believe this paper provides
very valuable information for researchers and practitioners working on
reliability modeling and engineering.
## Acknowledgements
Special thanks go to Nancy Rudins and Brian Kucic of NCSA for providing the
failure data.
## References
* [1] J. Brevik, D. Nurmi, and R. Wolski. Quantifying machine availability in networked and desktop Grid systems. Technical Report CS-2003-37, University of California, San Diego, 2003.
* [2] P. Brodtkorb et al. WAFO - a Matlab toolbox for analysis of random waves and loads. In Proceedings of the 10th International Offshore and Polar Engineering conference, 2000. http://www.maths.lth.se/matstat/wafo/.
* [3] T. Heath, R. Martin, and T. Nguyen. The shape of failure. In Workshop on Evaluating and Architecting System Dependability (EASY), 2001.
* [4] N. Johnson, S. Kotz, and N. Balakrishnan. Continuous Univariate Distributions. Wiley-Interscience, second edition, 1995.
* [5] K. Koch. How does ASCI actually complete multi-month 1000-processor milestone simulations? (talk). In Conference on High Speed Computing, 2002. http://www.ccs.lanl.gov/salishan02/program.htm.
* [6] W. Kramer. How are we doing? A self-assessment of the quality of services and systems at NERSC (2001). National Energy Research Scientific Computing Center, 2002.
* [7] M. Levine. NSF’s terascale computing system (talk). In SOS 7th Workshop on Distributed Supercomputing, 2003. http://www.cs.sandia.gov/SOS7/.
* [8] B. Loftis. 2001 Alliance user survey. NCSA Scientific Computing Division, 2001.
* [9] J. Morrison. The ASCI Q System at Los Alamos (talk). In SOS 7th Workshop on Distributed Supercomputing, 2003.
* [10] D. Nurmi, J. Brevik, and R. Wolski. Modeling machine availability in enterprise and wide-area distributed computing environments. Technical Report CS-2003-28, University of California, San Diego, 2003.
* [11] M. Seager. Operational machines: ASCI White (talk). In SOS 7th Workshop on Distributed Supercomputing, 2003.
* [12] K. Trivedi. Probability $\&$ Statistics with Reliability, Queuing, and Computer Science Applications. Prentice-Hall, 1982.
* [13] J. Xu, Z. Kalbarczyk, and R. Iyer. Networked Windows NT system field failure data analysis. In IEEE Pacific Rim International Symposium on Dependable Computing, 1999.
|
arxiv-papers
| 2013-02-20T00:21:44 |
2024-09-04T02:49:41.919554
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Charng-Da Lu",
"submitter": "Charng-Da Lu",
"url": "https://arxiv.org/abs/1302.4779"
}
|
1302.4805
|
# Energy-Efficient Optimization for Physical Layer Security in Multi-Antenna
Downlink Networks with QoS Guarantee
Xiaoming Chen and Lei Lei
This work was supported by grants from the NUAA Research Funding (No. NN2012004), the open research fund of the National Mobile Communications Research Laboratory, Southeast University (No. 2012D16), the Natural Science Foundation Program of China (No. 61100195), and the Doctoral Fund of the Ministry of Education of China (No. 20123218120022). Xiaoming Chen (e-mail: [email protected]) and Lei Lei are with the College of Electronic and Information Engineering, Nanjing University of Aeronautics and Astronautics, and also with the National Mobile Communications Research Laboratory, Southeast University, China.
In this letter, we consider a multi-antenna downlink network where a secure
user (SU) coexists with a passive eavesdropper. There are two design
requirements for such a network. First, the information should be transferred
in a secret and efficient manner. Second, the quality of service (QoS), i.e.
delay sensitivity, should be take into consideration to satisfy the demands of
real-time wireless services. In order to fulfill the two requirements, we
combine the physical layer security technique based on switched beam
beamforming with an energy-efficient power allocation. The problem is
formulated as the maximization of the secrecy energy efficiency subject to
delay and power constraints. By solving the optimization problem, we derive an
energy-efficient power allocation scheme. Numerical results validate the
effectiveness of the proposed scheme.
###### Index Terms:
Physical layer security, energy-efficient power allocation, switched beam
beamforming, QoS guarantee.
## I Introduction
Without doubt, information security is a critical issue of wireless
communications due to the open nature of wireless channel. Traditionally,
information security is realized by using cryptography technology. In fact,
information theory has proven that if the eavesdropper channel is degraded,
secure communication can be guaranteed by only using physical layer
technology, namely physical layer security [1] [2], even if the eavesdropper
has strong computation capabilities.
The essence of physical layer security is to maximize the secrecy rate. If
there are multiple antennas at the information source, transmit beamforming
can be developed to improve the legitimate channel capacity and to degrade the
eavesdropper channel capacity, so the achievable secrecy rate is increased [3]
[4]. In [5], the problem of optimal beamforming was addressed by maximizing
the secrecy rate. A potential drawback of the above approach is that the
source must have full channel state information (CSI) to design the transmit
beam. To relax this assumption, a joint power allocation and beamforming
scheme was proposed based on the full CSI of the legitimate channel and the
partial CSI of the eavesdropper channel [6]. Yet, the CSI is difficult to
obtain for the source, especially the CSI of the eavesdropper channel. It is
proved that if there is no CSI of the eavesdropper channel, the beamforming
along the direction of the legitimate channel is optimal [7]. Thus, the
authors in [7] proposed to convey the quantized CSI of the legitimate channel
for beamforming, and derived a performance upper bound as a function of the
feedback amount. Since the source has no knowledge of the eavesdropper channel
that varies randomly, it is impossible to provide a steady secrecy rate. In
this case, the secrecy outage capacity is adopted as a useful and intuitive
metric to evaluate security, which is defined as the maximum rate under the
constraint that the outage probability that the real transmission rate is
greater than secrecy capacity is equal to a given value [8]. In multi-antenna
systems, the switched beam beamforming is a popular limited feedback
beamforming scheme because of its low complexity, small overhead and good
performance [9]. In this study, we propose to adopt the switched beam
beamforming to increase the capacity of the legitimate channel, and success in
deriving the closed-formed expression of the outage probability in terms of
the secrecy outage capacity.
Recently, the increasing interest in various advanced wireless services has
resulted in an urgent demand for communications with quality of service
(QoS) guarantees, such as delay sensitivity for video transmission [10].
Many previous analogous works focus on the maximization of secrecy outage
capacity with the QoS requirement, in which it is optimal to use the maximum
available power. Lately, energy-efficient wireless communication [11], namely
green communication, has received considerable attention due to energy shortage
and the greenhouse effect. In [12], energy-efficient resource allocation for
secure OFDMA downlink network was studied. Therefore, we also expect to
maximize the secrecy energy efficiency, namely the number of transmission bits
per Joule, while meeting delay and power constraints. By solving this problem,
we derive an energy-efficient adaptive power allocation scheme according to
the channel condition and performance requirement.
The rest of this letter is organized as follows. We first give an overview of
the secure multi-antenna network in Section II, and then derive an energy-
efficient power allocation scheme by maximizing the secrecy energy efficiency
while satisfying the delay and power constraints in Section III. In Section
IV, we present some numerical results to validate the effectiveness of the
proposed scheme. Finally, we conclude the whole paper in Section V.
## II System Model
Figure 1: An overview of the considered system model.
We consider a multi-antenna downlink network, where a base station (BS) with
$N_{t}$ antennas communicates with a single antenna secure user (SU), while a
single antenna eavesdropper also receives the signal from the BS and tries to
detect it. We use $\alpha_{s}\textbf{h}$ to denote the $N_{t}$ dimensional
legitimate channel vector from the BS to the SU, where $\alpha_{s}$ is the
channel large-scale fading component, including path loss and shadow fading,
and h represents the channel small-scale fading component, which is a
circularly symmetric complex Gaussian (CSCG) random vector with zero mean and
unit variance. Similarly, we use $\alpha_{e}\textbf{g}$ to denote the $N_{t}$
dimensional eavesdropper channel vector from the BS to the eavesdropper, where
$\alpha_{e}$ and g are the large-scale and small-scale channel fading
components, respectively. The network is operated in time slots. We assume
that $\alpha_{s}$ and $\alpha_{e}$ remain constant during a relatively long
time period due to their slow fading, while h and g remain constant in a time
slot and independently fade slot by slot. At the beginning of each time slot,
the SU selects an optimal column vector from a predetermined $N_{t}\times
N_{t}$ unitary matrix
$\mathcal{W}=\\{\textbf{w}_{1},\textbf{w}_{2},\cdots,\textbf{w}_{N_{t}}\\}$,
where $\textbf{w}_{i}$ is the $i$-th column vector, according to the following
criterion:
$i^{\star}=\arg\max\limits_{1\leq i\leq
N_{t}}|\textbf{h}^{H}\textbf{w}_{i}|^{2}$ (1)
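As an illustration of criterion (1), the following Python sketch selects the codebook column with the largest beamforming gain for a given channel realization; the DFT codebook and the toy Rayleigh channel are assumptions used only for the example, not part of the system model above.

```python
import numpy as np

def select_beam(h, W):
    """Return the index and beam w_i maximizing |h^H w_i|^2 (criterion (1)).

    h: (N_t,) complex channel vector of the legitimate (SU) channel.
    W: (N_t, N_t) unitary codebook; columns are candidate beams.
    """
    gains = np.abs(h.conj() @ W) ** 2          # |h^H w_i|^2 for each column
    i_star = int(np.argmax(gains))
    return i_star, W[:, i_star]

# Toy usage: a 4-antenna BS with a DFT codebook and a Rayleigh-fading channel.
rng = np.random.default_rng(0)
N_t = 4
W = np.fft.fft(np.eye(N_t)) / np.sqrt(N_t)     # unitary DFT codebook (example)
h = (rng.standard_normal(N_t) + 1j * rng.standard_normal(N_t)) / np.sqrt(2)
i_star, w = select_beam(h, W)
print(i_star, np.abs(h.conj() @ w) ** 2)
```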
Then, the SU conveys the index $i^{\star}$ to the BS via the feedback link,
and the BS performs beamforming to the predetermined signal by using
$\textbf{w}_{i^{\star}}$, namely switched beam beamforming. Thus, the receive
signals at the SU and the eavesdropper are given by
$y_{s}=\sqrt{P}\alpha_{s}\textbf{h}^{H}\textbf{w}_{i^{\star}}x+n_{s}$ (2)
and
$y_{e}=\sqrt{P}\alpha_{e}\textbf{g}^{H}\textbf{w}_{i^{\star}}x+n_{e}$ (3)
respectively, where $x$ is the Gaussian distributed transmit signal with unit
variance, $P$ is the transmit power, $n_{s}$ and $n_{e}$ are the additive
Gaussian white noises with unit variance at the SU and the eavesdropper,
respectively. Hence, the capacities of the legitimate and eavesdropper
channels can be expressed as
$C_{s}=W\log_{2}(1+\gamma_{s})$ (4)
and
$C_{e}=W\log_{2}(1+\gamma_{e})$ (5)
where $W$ is the spectrum bandwidth,
$\gamma_{s}=P\alpha_{s}^{2}|\textbf{h}^{H}\textbf{w}_{i^{\star}}|^{2}$ and
$\gamma_{e}=P\alpha_{e}^{2}|\textbf{g}^{H}\textbf{w}_{i^{\star}}|^{2}$ are the
signal-to-noise ratio (SNR) at the SU and the eavesdropper, respectively.
Therefore, from the perspective of information theory, the secrecy capacity is
given by $C_{sec}=[C_{s}-C_{e}]^{+}$, where $[x]^{+}=\max(x,0)$. Since there
is no knowledge of the eavesdropper channel at the BS, it is impossible to
provide a steady secrecy capacity. In this letter, we take the secrecy outage
capacity $R_{sec}$ as the performance metric, which is defined as the maximum
rate under the condition that the outage probability that the transmission
rate surpasses the secrecy capacity is equal to a given value $\varepsilon$,
namely
$P_{r}\bigg{(}R_{sec}>C_{s}-C_{e}\bigg{)}=\varepsilon$ (6)
Substituting (4) and (5) into (6), the outage probability can be transformed
as
$\varepsilon=P_{r}\left(\gamma_{s}<2^{R_{sec}/W}(1+\gamma_{e})-1\right)=\int_{0}^{\infty}\int_{0}^{2^{R_{sec}/W}(1+y)-1}f_{\gamma_{s}}(x)f_{\gamma_{e}}(y)\,dx\,dy=\int_{0}^{\infty}F_{\gamma_{s}}\left(2^{R_{sec}/W}(1+y)-1\right)f_{\gamma_{e}}(y)\,dy$ (7)
where $f_{\gamma_{e}}(y)$ is the probability density function (pdf) of
$\gamma_{e}$, $f_{\gamma_{s}}(x)$ and $F_{\gamma_{s}}(x)$ are the pdf and
cumulative distribution function (cdf) of $\gamma_{s}$, respectively. Since
$\textbf{w}_{i^{\star}}$ is independent of g,
$|\textbf{g}^{H}\textbf{w}_{i^{\star}}|^{2}$ is exponentially distributed.
Thus, we have
$f_{\gamma_{e}}(y)=\frac{1}{P\alpha_{e}^{2}}\exp\left(-\frac{y}{P\alpha_{e}^{2}}\right)$
(8)
Similarly, $|\textbf{h}^{H}\textbf{w}_{i^{\star}}|^{2}$ can be considered as
the maximum one of $N_{t}$ independent exponentially distributed random
variables caused by beam selection, so we have
$F_{\gamma_{s}}(x)=\left(1-\exp\left(-\frac{x}{P\alpha_{s}^{2}}\right)\right)^{N_{t}}$
(9)
Substituting (8) and (9) into (7), it is obtained that
$\varepsilon=1+\sum\limits_{n=1}^{N_{t}}{N_{t}\choose n}(-1)^{n}\left(\frac{1}{1+n2^{R_{sec}/W}\alpha_{e}^{2}/\alpha_{s}^{2}}\right)\exp\left(-\frac{n(2^{R_{sec}/W}-1)}{P\alpha_{s}^{2}}\right)=G(R_{sec},P)$ (10)
Intuitively, $G(R_{sec},P)$ is a monotonically increasing function of
$R_{sec}$ and a monotonically decreasing function of $P$. Thus, given transmit
power $P$ and the requirement of outage probability $\varepsilon$, the secrecy
outage capacity can be derived by computing $R_{sec}=G^{-1}(\varepsilon,P)$,
where $G^{-1}(\varepsilon,P)$ is the inverse function of $G(R_{sec},P)$.
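To make this step concrete, the following minimal Python sketch evaluates $G(R_{sec},P)$ from (10) and inverts it numerically in $R_{sec}$ by root finding. The helper names and the antenna number, bandwidth and path-loss values are illustrative assumptions, not the simulation settings of this letter.

```python
from math import comb, exp
from scipy.optimize import brentq

def G(R_sec, P, Nt=4, W=1e4, alpha_s2=1.0, alpha_e2=0.1):
    """Secrecy outage probability from (10); parameter values are illustrative."""
    rho = alpha_e2 / alpha_s2
    g = 2.0 ** (R_sec / W)
    eps = 1.0
    for n in range(1, Nt + 1):
        eps += comb(Nt, n) * (-1) ** n \
               / (1.0 + n * g * rho) * exp(-n * (g - 1.0) / (P * alpha_s2))
    return eps

def secrecy_outage_capacity(eps_target, P, R_max=1e6, **kw):
    """Numerically invert G in its first argument, i.e. R_sec = G^{-1}(eps, P)."""
    if G(0.0, P, **kw) >= eps_target:
        return 0.0  # even R_sec = 0 violates the outage target
    return brentq(lambda R: G(R, P, **kw) - eps_target, 0.0, R_max)

print(secrecy_outage_capacity(0.05, P=5.0))  # secrecy outage capacity in bit/s
```

Since $G$ increases monotonically from $G(0,P)$ toward 1 as $R_{sec}$ grows, a simple bracketing root finder suffices for the inversion.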
Moreover, from (10), we can obtain the probability of a positive secrecy capacity as
$P_{r}(C_{sec}>0)=1-G(0,P)=1-\frac{\alpha_{s}^{2}}{\alpha_{e}^{2}}B\left(\frac{\alpha_{s}^{2}}{\alpha_{e}^{2}},N_{t}+1\right)$ (11)
where $B(x,y)$ is the Beta function. Interestingly, the probability $P_{r}(C_{sec}>0)$ is independent of $P$. As $N_{t}$ increases, the probability increases accordingly, because more array gain is obtained from the switched beam beamforming. Furthermore, (11) reveals that a short access distance for the SU helps enhance the information secrecy.
Since most wireless services are delay sensitive, we take the delay constraint into consideration. It is assumed that data for the SU arrives in packets of fixed length $N_{b}$ bits with average arrival rate $\lambda$ (packets per slot), and that the service type imposes an average delay requirement of $D$ (slots). Following [13],
meet the following condition:
$R_{sec}\geq\frac{2D\lambda+2+\sqrt{(2D\lambda+2)^{2}-8D\lambda}}{4D}\frac{N_{b}}{T}=C_{\min}$
(12)
where $T$ is the length of a time slot.
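As a small illustration of (12), the hypothetical helper below computes $C_{\min}$ from the delay requirement; the example input values are assumptions chosen only to show the units involved.

```python
from math import sqrt

def c_min(D, lam, N_b, T):
    """Minimum secrecy rate C_min implied by the delay constraint (12).
    D: average delay requirement in slots, lam: arrival rate in packets/slot,
    N_b: packet length in bits, T: slot duration in seconds (all assumed inputs)."""
    a = 2 * D * lam + 2
    return (a + sqrt(a * a - 8 * D * lam)) / (4 * D) * (N_b / T)

print(c_min(D=10, lam=0.5, N_b=80, T=0.1))  # bits per second, illustrative numbers
```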
## III Energy-Efficient Power Allocation
Considering the limitation of energy resource and the requirement of green
communication, energy efficiency becomes an important performance metric in
wireless communications. In this section, we attempt to derive a power
allocation scheme to maximize the secrecy energy efficiency while satisfying
the delay and power constraints, which is equivalent to the following
optimization problem:
$\displaystyle J_{1}:\quad\max_{P}\ \frac{R_{sec}}{P_{0}+P}$ (13)
s.t. $\displaystyle G(R_{sec},P)\leq\varepsilon,\quad R_{sec}\geq C_{\min},\quad P\leq P_{\max}$ (16)
where $P$ is the power consumed in the power amplifier and $P_{0}$ is the power consumed regardless of information transmission, such as the circuit power. The objective (13) is the so-called energy efficiency, defined as the number of transmitted bits per Joule. Among the constraints in (16), the first fulfills the secrecy requirement based on physical layer security; the second is the delay constraint, where $C_{\min}$ is determined by both the data arrival rate $\lambda$ and the delay requirement $D$ as shown in (12); and the third limits the transmit power to at most $P_{\max}$. Since $G(R_{sec},P)$ is monotonically increasing in $R_{sec}$ and decreasing in $P$, the condition $G(R_{sec},P)=\varepsilon$ is optimal in the sense of maximizing the secrecy energy efficiency. Thus, the secrecy constraint can be eliminated and $R_{sec}$ replaced by $G^{-1}(\varepsilon,P)$. Notice that $J_{1}$ may have no feasible solution when the secrecy and delay constraints are too stringent; in that case, a solution can only be obtained by relaxing the constraint on the outage probability, the average delay, or the transmit power.
The objective function (13) is a ratio of two functions of the optimization variable $P$, so $J_{1}$ is a fractional programming problem, which is in general nonconvex. Following [12], by exploiting the properties of fractional programming, maximizing the objective is equivalent to maximizing $G^{-1}(\varepsilon,P)-q^{\star}(P_{0}+P)$, where $q^{\star}$ is the secrecy energy efficiency achieved at the optimal power $P^{\star}$ of $J_{1}$, namely $q^{\star}=G^{-1}(\varepsilon,P^{\star})/(P_{0}+P^{\star})$. Thus, $J_{1}$ is transformed into
$\displaystyle J_{2}:\quad\max_{P}\ G^{-1}(\varepsilon,P)-q^{\star}(P_{0}+P)$ (17)
s.t. $\displaystyle G^{-1}(\varepsilon,P)\geq C_{\min},\quad P\leq P_{\max}$ (19)
$J_{2}$, as a convex optimization problem, can be solved by the Lagrange
multiplier method. By some arrangement, its Lagrange dual function can be
written as
$\mathcal{L}(\mu,\nu,P)=G^{-1}(\varepsilon,P)-q^{\star}(P_{0}+P)+\mu\left(G^{-1}(\varepsilon,P)-C_{\min}\right)+\nu\left(P_{\max}-P\right)$ (20)
where $\mu\geq 0$ and $\nu\geq 0$ are the Lagrange multipliers corresponding to the delay constraint and the power constraint in (19), respectively. Therefore, the dual problem of
$J_{2}$ is given by
$\displaystyle\min\limits_{\mu,\nu}\max\limits_{P}\mathcal{L}(\mu,\nu,P)$ (21)
For the given $\mu$ and $\nu$, the optimal power $P^{\star}$ can be derived by
solving the following KKT condition
$\displaystyle\frac{\partial\mathcal{L}(\mu,\nu,P)}{\partial
P}=(1+\mu)\frac{\partial G^{-1}(\varepsilon,P)}{\partial P}-q^{\star}-\nu=0$
(22)
Note that if $P^{\star}$ is negative, we should let $P^{\star}=0$. Moreover,
$\mu$ and $\nu$ can be updated by the gradient method, which are given by
$\mu(n+1)=[\mu(n)-\triangle_{\mu}(G^{-1}(\varepsilon,P)-C_{\min})]^{+}$ (23)
and
$\nu(n+1)=[\nu(n)-\triangle_{\nu}(P_{\max}-P)]^{+}$ (24)
where $n$ is the iteration index, and $\triangle_{\mu}$ and $\triangle_{\nu}$
are the positive iteration steps. Inspired by the Dinkelbach method [14], we
propose an iterative algorithm as follows
Algorithm 1: Energy-Efficient Power Allocation
1. Initialization: Given $N_{t}$, $T$, $W$, $\alpha_{s}$, $\alpha_{e}$, $C_{\min}$, $P_{0}$, $P_{\max}$, $\triangle_{\mu}$, $\triangle_{\nu}$ and $\varepsilon$, let $\mu=0$, $\nu=0$, $P=0$ and $q^{\star}=G^{-1}(\varepsilon,P)/(P_{0}+P)$. $\epsilon$ is a sufficiently small positive real number.
2. Update $\mu$ and $\nu$ according to (23) and (24), respectively.
3. Compute the optimal $P^{\star}$ by solving equation (22) using numerical tools such as _Mathematica_ or _Matlab_.
4. If $G^{-1}(\varepsilon,P^{\star})-q^{\star}(P_{0}+P^{\star})>\epsilon$, set $q^{\star}=G^{-1}(\varepsilon,P^{\star})/(P_{0}+P^{\star})$ and go to step 2; otherwise, $P^{\star}$ is the optimal transmit power.
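For illustration, the Python sketch below mirrors the Dinkelbach-style outer loop of Algorithm 1 under simplifying assumptions: it reuses the hypothetical `secrecy_outage_capacity` helper from the earlier sketch as $G^{-1}(\varepsilon,P)$, replaces the Lagrangian/KKT inner step (22)-(24) with a direct bounded scalar search, and handles infeasibility with a crude full-power fallback. It is a sketch of the idea, not the implementation used for the results reported here.

```python
from scipy.optimize import minimize_scalar

def dinkelbach_power_allocation(eps, C_min, P0, P_max, tol=1e-4, max_iter=50, **kw):
    """Sketch of the outer Dinkelbach iteration: alternate between maximizing
    R_sec(P) - q*(P0 + P) over 0 < P <= P_max and updating q*."""
    P = P_max                                   # feasible starting point (assumption)
    q = secrecy_outage_capacity(eps, P, **kw) / (P0 + P)
    for _ in range(max_iter):
        # inner problem solved by a bounded scalar search instead of the KKT condition (22)
        res = minimize_scalar(
            lambda p: -(secrecy_outage_capacity(eps, p, **kw) - q * (P0 + p)),
            bounds=(1e-6, P_max), method="bounded")
        P = res.x
        R = secrecy_outage_capacity(eps, P, **kw)
        if R < C_min:                           # delay constraint violated
            P = P_max                           # crude fallback: transmit at full power
            R = secrecy_outage_capacity(eps, P, **kw)
        if R - q * (P0 + P) < tol:              # Dinkelbach convergence test
            break
        q = R / (P0 + P)
    return P, q                                 # power and secrecy energy efficiency
```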
## IV Numerical Results
To examine the effectiveness of the proposed energy-efficient power allocation scheme, we present several numerical results for the following scenario: we set $N_{t}=4$, $W=10$ kHz, $C_{\min}=0.8$ Kb/s, $P_{0}=0.5$ Watt and $P_{\max}=10$ Watt. $\alpha_{s}^{2}$ is normalized to 1, and we use $\rho=\alpha_{e}^{2}/\alpha_{s}^{2}$ to denote the relative large-scale fading of the eavesdropper channel. The proposed energy-efficient power allocation scheme converges within 20 iterations in all simulation scenarios.
Fig. 2 compares the secrecy energy efficiency of the proposed adaptive power allocation scheme and a fixed power allocation scheme with $\varepsilon=0.05$. Since transmitting at $P_{\max}$ is optimal in the sense of maximizing the secrecy outage capacity, the fixed scheme always uses $P=P_{\max}$. As seen in Fig. 2, the proposed scheme outperforms the fixed one, especially when $\rho$ is small; for example, when $\rho=0.10$, the gain is about $2$ Kb/J. Therefore, the proposed scheme is well suited to future green and secure communications. When $\rho=0.25$, the secrecy energy efficiency drops to zero, because no nonzero secrecy outage capacity exists under these constraint conditions. To support larger $\rho$, the constraints must be relaxed or more antennas deployed at the BS to obtain additional array gain.
Figure 2: Performance comparison of the traditional and the proposed power
allocation schemes.
Fig. 3 investigates the effect of the outage probability requirement on the secrecy energy efficiency of the proposed scheme. For a given $\rho$, as $\varepsilon$ decreases, the secrecy energy efficiency decreases accordingly, because more power is spent on reducing the outage probability. On the other hand, for a given outage probability requirement, increasing $\rho$ decreases the secrecy energy efficiency, since the eavesdropper channel becomes stronger.
Figure 3: Performance comparison of the proposed power allocation scheme with
different requirements of secrecy outage probability.
## V Conclusion
A major contribution of this paper is the introduction of an energy-efficient
power allocation scheme into a multi-antenna downlink network employing
physical layer security with delay guarantee. Considering the importance of
the CSI in multi-antenna networks, the switched beam beamforming is adopted to
realize the adaptive transmission. Numerical results confirm the effectiveness
of the proposed scheme. In future works, we will further study the cases with
multi-antenna eavesdropper, imperfect CSI, robust beamforming, etc.
## References
* [1] A. D. Wyner, “The wire-tap channel,” _Bell Syst. Tech. J._ , vol. 54, pp. 1355-1387, Oct. 1975.
* [2] P. K. Gopala, L. Lai, and H. El. Gamal, “On the secrecy capacity of fading channels,” _IEEE Trans. Inf. Theory_ , vol. 54, no. 10, pp. 4687-4698, Oct. 2008.
* [3] T. Liu, and S. Shamai, “A note on the secrecy capacity of the multiple-antenna wiretap channel,” _IEEE Trans. Inf. Theory_ , vol. 55, no. 6, pp. 2547-2553, Jun. 2009.
* [4] A. Khisti, and G. W. Wornell, “Secure transmission with multiple antenna-part II: the MIMOME wiretap channel,” _IEEE Trans. Inf. Theory_ , vol. 56, no. 11, pp. 5515-5532, Nov. 2010.
* [5] A. Khisti, and G. W. Wornell, “Secure transmission with multiple antennas I: the MISOME wiretap channel,” _IEEE Trans. Inf. Theory_ , vol. 56, no. 7, pp. 3088-3104, Jul. 2010.
* [6] T. V. Nguyen, and H. Shin, “Power allocation and achievable secrecy rates in MISOME wiretap channels,” _IEEE Commun. Lett._ , vol. 15, no. 11, pp. 1196-1198, Nov. 2011.
* [7] S. Bashar, Z. Ding, and G. Y. Li, “On secrecy of codebook-based transmission beamforming under receiver limited feedback,” _IEEE Trans. Wireless Commun._ , vol. 10, no. 4, pp. 1212-1223, Apr. 2011.
* [8] M. Bloch, J. Barros, M. Rodrigues, and S. Mclaughlin, “Wireless information-theoretic security,” _IEEE Trans. Inf. Theory_ , vol. 54, no. 6, pp. 2515-2534, Jun. 2008.
* [9] X. Chen, Z. Zhang, S. Chen, and C. Wang, “Adaptive mode selection for multiuser MIMO downlink employing rateless codes with QoS provisioning,” _IEEE Trans. Wireless Commun._ , vol. 11, no. 2, pp. 790-799, Feb. 2012.
* [10] X. Chen, and C. Yuen, “Efficient resource allocation in rateless coded MU-MIMO cognitive radio network with QoS provisioning and limited feedback,” _IEEE Trans. Veh. Tech._ , vol. 62, no. 1, pp. 395-399, Jan. 2013.
* [11] F. Chu, K. Chen, and G. Fettweis, “Green resource allocation to minimize receiving energy in OFDMA cellular systems,” _IEEE Commun. Lett._ , vol. 16, no. 3, pp. 372-374, Jan. 2012.
* [12] D. W. K. Ng, E. S. Lo, and R. Schober, “Energy-efficient resource allocation for secure OFDMA systems,” _IEEE Trans. Veh. Tech._ , vol. 61, no. 6, pp. 2572-2585, Jul. 2012.
* [13] D. S. W. Hui, V. K. N. Lau, and W. H. Lam, “Cross-layer design for OFDMA wireless systems with heterogeneous delay requirement,” _IEEE Trans. Wireless Commun._ , vol. 6, no. 8, pp. 2872-2880, Aug. 2007.
* [14] W. Dinkelbach, “On nonlinear fractional programming,” _Manage. Sci._ , vol. 13, no. 7, pp. 492-498, Mar. 1967. [Online]. Available: http://www.jstor.org/stable/2627691
|
arxiv-papers
| 2013-02-20T04:23:06 |
2024-09-04T02:49:41.924893
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Xiaoming Chen and Lei Lei",
"submitter": "Xiaoming Chen",
"url": "https://arxiv.org/abs/1302.4805"
}
|
1302.4916
|
# Stacking from Tags:
Clustering Bookmarks around a Theme
Arkaitz Zubiaga
Queens College and Graduate Center, City University of New York, New York, NY, USA
[email protected]
Alberto Pérez García-Plaza, Víctor Fresno, Raquel Martínez
NLP & IR Group, ETSI Informática, UNED, Madrid, Spain
{alpgarcia, vfresno, raquel}@lsi.uned.es
###### Abstract
Very recently, the social bookmarking service Delicious began allowing users to stack web pages in addition to tagging them. Stacking enables users to group web pages around specific themes with the aim of recommending them to others. However, users still stack only a small subset of what they tag, and thus many web pages remain unstacked. This paper presents early research towards automatically clustering web pages from tags to find stacks and extend recommendations.
## 1 Introduction
Social tagging has become a powerful means to organize resources facilitating
later access [1]. On these sites, users can label web pages with tags,
facilitating future access to those web pages both to the author and to other
interested users [2, 7]. Very recently, Delicious.com introduced a new
dimension for organizing web pages: stacking. With stacks, users can group the
web pages that might be of interest for a specific community, e.g.,
Valentine’s Special, or UNIX and Programming Jokes. Stacks can be very useful
for those who are looking for help or information on a specific matter. With
stacks, users are providing a 2-dimensional organization of web pages that is
complemented with tags, as shown by the example in Table 1. However, tagging activity still clearly exceeds stacking activity, and many web pages are tagged but not stacked. Moreover, none of the web pages tagged before the feature was introduced are associated with stacks. Thus, a way to infer the missing stacks from all those tags would help recommend more groups of web pages to communities, or suggest web pages to add to any user's stack. Unlike previous research on clustering and classifying tagged resources, which evaluated against the Open Directory Project [4, 5] or manually built categorizations [6], stacks provide a ground truth tailored to this very task. This paper describes early research in this direction, presenting preliminary work on automatically clustering web pages from tags to find stacks as users would create them. Preliminary experiments suggest that tags can support high-performance clustering.
Table 1: Example of a user's tags and stacks. The user tagged 7 URLs (URL1, ..., URL7, each with its own tag set tags1, ..., tags7); 5 of the URLs are grouped into 2 stacks, while the remaining 2 are left unstacked.
## 2 Dataset
We collected the tagging activity for 3,635 Delicious users in October and
November 2011. This subset includes all the users who created at least one stack in this timeframe. During this period, those users tagged 182,510 web pages, creating 5,214 stacks. Of those web pages, 45,196 (24.8%) were stacked while 137,314 (75.2%) were left out of stacks. In addition, a large set of users not included in our dataset tag web pages without creating stacks. Looking further at tagging activity inside and outside stacks, we observe that, on average, 30.1% of the tags contained in stacks are also used outside them. This suggests that there is no stack-specific vocabulary; users share vocabulary between stacked and unstacked web pages. Moreover, only 22.5% of the stacks have a tag common to all of their underlying web pages, so most users do not use an exclusive tag that refers to the stack. This motivates our study of automatically clustering web pages from tags with the aim of finding stacks that approximate those created by users.
## 3 Experiments
We used Cluto rbr [3] (k-way repeated bisections, globally optimized) to find clusters from tags. Cluto rbr suits the present task since, in practice, it always generates the same clustering solution for a given input. Its main parameter is the number of clusters to generate, known as K. As a preliminary approach to understanding how the number of clusters affects the solution, we used values of K ranging from 2 to 10 and set the remaining parameters to their default values. With these settings, we clustered all the web pages saved by each user and compared the results to the stack(s) created by that user. For each stack, we computed precision, recall and F1 values, and report the macro-averaged values over all stacks.
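As a rough illustration of this evaluation loop, the Python sketch below clusters one user's pages from their tags and scores each stack against its best-matching cluster. It substitutes scikit-learn's KMeans for Cluto rbr, and the best-match scoring rule is our assumption about the setup rather than a reproduction of it.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.cluster import KMeans

def evaluate_user(pages_tags, stacks, K):
    """pages_tags: one space-separated tag string per tagged URL.
    stacks: list of sets of page indices (the user's stacks).
    Returns macro-averaged (precision, recall, F1) over the user's stacks."""
    X = CountVectorizer().fit_transform(pages_tags)           # tag bag-of-words
    labels = KMeans(n_clusters=K, n_init=10, random_state=0).fit_predict(X)
    clusters = [set(int(i) for i in np.where(labels == c)[0]) for c in range(K)]
    scores = []
    for stack in stacks:
        best = (0.0, 0.0, 0.0)
        for cluster in clusters:
            tp = len(stack & cluster)
            if tp == 0:
                continue
            p, r = tp / len(cluster), tp / len(stack)
            f1 = 2 * p * r / (p + r)
            if f1 > best[2]:
                best = (p, r, f1)                              # best-matching cluster
        scores.append(best)
    return tuple(float(np.mean([s[i] for s in scores])) for i in range(3))

# toy example: 5 pages, 2 stacks, K = 2
tags = ["python code dev", "code programming", "recipe food", "food cooking", "travel tips"]
print(evaluate_user(tags, [{0, 1}, {2, 3}], K=2))
```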
Figure 1: Macroaveraged F1, P and R for K-dependent Cluto runs, ranging from 2
to 10 for K.
Figure 1 shows precision, recall, and F1 values for K-dependent Cluto runs.
Precision and recall values considerably vary depending on the selected K
value. Creating a few big clusters improves recall, while creating many small
clusters improves precision. However, F1 values remain very similar while K
changes. Regardless of the value selected for K, the clustering achieves F1 above 0.6. Hence, the choice of K mainly determines whether the results favor precision or recall, depending on the preference.
Figure 2: Comparison of F1 values for K-dependent Cluto runs as compared to
benchmark approaches.
Figure 2 complements the above results by showing the F1 values for our
K-dependent runs as compared to 3 benchmark methods: (1) a baseline approach
that randomly creates the clusters, i.e., randomly generating K clusters of
equal size, (2) an intermediate approach that randomly selects the value of K
for each user, i.e., the average of multiple runs using random K values, and
(3) the ideal upper-bound performance by choosing the optimal K value for each
user stack. These results show that using tags to find stacks clearly outperforms a random approach, doubling its performance in many cases, which encourages using tags for this task. Moreover, even though this paper does not explore how to find a suitable K for each stack, the upper bound based on optimal K values shows that tags can reach very high performance: an appropriate selection of K could yield clusters approaching an F1 of 0.8. This upper bound also clearly outperforms the random selection of K, encouraging further research into choosing a suitable K for each user.
## 4 Conclusions
This work describes early research for a work-in-progress on a novel feature
of social bookmarking systems: stacking. To the best of our knowledge, this is
the first research work that deals with stacks. We have shown that using tags to find stacks resembling those created by users achieves high performance, above 0.6 in terms of F1. Moreover, choosing the right parameters for each stack to be created can substantially improve performance, reaching nearly 0.8. As preliminary work, these results encourage further study of how to select parameters that improve performance. Future work includes studying users' behavioral patterns, such as tagging vocabulary, to find the right parameters for each user. The promising results of using tags to discover stacks also suggest further research on finding groups of related tags for both individual users and communities.
## References
* [1] S. Golder and B. Huberman. Usage patterns of collaborative tagging systems. Journal of information science, 32(2):198–208, 2006.
* [2] P. Heymann, G. Koutrika, and H. Garcia-Molina. Can social bookmarking improve web search? In Proceedings of WSDM 2008, pages 195–206. ACM, 2008.
* [3] G. Karypis. CLUTO - a clustering toolkit. Technical Report #02-017, Nov. 2003.
* [4] D. Ramage, P. Heymann, C. Manning, and H. Garcia-Molina. Clustering the tagged web. In WSDM, pages 54–63, 2009.
* [5] A. Zubiaga, V. Fresno, R. Martínez, and A. P. García-Plaza. Harnessing folksonomies to produce a social classification of resources. IEEE TKDE, 99(PrePrints), 2012.
* [6] A. Zubiaga, A. P. García-Plaza, V. Fresno, and R. Martínez. Content-based clustering for tag cloud visualization. In ASONAM, pages 316–319, 2009.
* [7] A. Zubiaga, R. Martínez, and V. Fresno. Getting the most out of social annotations for web page classification. In DocEng, pages 74–83, 2009.
|
arxiv-papers
| 2013-02-20T14:38:38 |
2024-09-04T02:49:41.933221
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Arkaitz Zubiaga and Alberto P\\'erez Garc\\'ia-Plaza and V\\'ictor Fresno\n and Raquel Mart\\'inez",
"submitter": "Arkaitz Zubiaga",
"url": "https://arxiv.org/abs/1302.4916"
}
|
1302.5033
|
# A Weyl Creation Algebra Approach to the Riemann Hypothesis
George T. Diderrich 111Cardinal Stritch University, 6801 N.Yates Rd,
Milwaukee, WI 53217 222Carroll University, 100 N.East av.,Waukesha, WI 53186
[email protected], [email protected], [email protected]
http://sites.google.com/site/gtdiderrich/
AMS Primary 11M55, Secondary 81R15 Keywords: Riemann Hypothesis, zeros,
quantum field theory
(2/25/13)
###### Abstract
We sketch a Weyl creation operator approach to the Riemann Hypothesis; i.e.,
arithmetic on the Weyl algebras with ergodic theory to transport operators. We
prove that finite Hasse-Dirichlet alternating zeta functions or eta functions
can be induced from a product of “creation operators”. The latter idea is the
result of considering complex quantum mechanics with complex time-space
related concepts. We then give an overview of these ideas, with variations, which may provide a pathway toward settling RH; e.g., an Euler factor analysis to study
the quantum behavior of the integers as one pushes up the critical line near a
macro zeta function zero.
###### Contents
1. 1 Slowing Down the Integral Expression for $\zeta(s)$
1. 1.1 Introduction of the Paradigm: Linear Forms to Creation Operators
2. 1.2 Some Approximate Dirichlet-Hasse Eta Functions Related to RH
3. 1.3 Riemann’s Functional Equation for the Zeta Function.
2. 2 Proof of Theorem 1
3. 3 Proof of Theorem 2
4. 4 Quantum Complex Oscillators: The Origin of the Paradigm
1. 4.1 Preliminary Quantum Oscillator Calculations
2. 4.2 Complex Quantum Oscillator Theory; i.e., Weyl Algebra Theory
3. 4.3 Analytical Proof of the Quantization of Complex Oscillators
5. 5 QFT and NCG Particle/Wave Analogies
1. 5.1 A Quantum Eta Function Approach to RH
6. 6 Overview of Technical Approaches to Settle RH
7. 7 Approach to RH via Euler Factors
1. 7.1 Introduction of Weyl Algebra Creation Operators: One for Each Non-Negative Integer
1. 7.1.1 Review of Complex Quantum Oscillator Theory
2. 7.2 Replace the non-negative integers with creation operators
8. 8 Calculation of $a^{s}$
9. 9 Fusing (Folding) the Clifford Domains Up the Critical Strip: Squeeze, Stretch and Tile then Blend.
10. 10 Calculation of the products $a_{(p^{e_{1}}_{1})}^{s}...a_{(p^{e_{n}}_{n})}^{s}$
1. 10.1 Equilibrium Calculation
11. 11 Concluding Remarks
12. 12 Acknowledgments
## 1 Slowing Down the Integral Expression for $\zeta(s)$
We assume some familiarity with the classical version of Riemann zeta function
theory found in Riemann’s original paper [63] including elementary complex
variable theory and the book by H.M. Edwards [26] and [9], [22], [60]. Conrey
gives an overview of some of the previous results in the field [18] along with
the more recent overview by Sarnak [64].
First recall the integral expression of the Riemann zeta function
$\int^{+\infty}_{0}{\frac{x^{s-1}}{e^{x}-1}}dx=\Pi(s-1)\zeta(s)\qquad(s>1).$
Note: We use the notation that $s>1$ where $s=\sigma+it$ means that the real
part of s is greater than one i.e. $\Re s>1$.
Instead of the usual “kernel” in the definition of the Riemann zeta function above, namely
$e^{z}-1$
we consider first a “harmonic” polynomial of conjugates
$H^{*}_{n}(z)=(z^{2}+1)(z^{2}+2^{2})...(z^{2}+n^{2})$
from considerations of quantum mechanics [23], [7], [75], [28], [29], quantum
oscillator theory and quantum field theory [24], [42], [25], [50], [81], [27].
This idea is the outgrowth of previous unpublished research on complex
variable quantum oscillators [20] (c.f. Section 4) which led to the following
paradigm.
### 1.1 Introduction of the Paradigm: Linear Forms to Creation Operators
Paradigm One can convert a polynomial in $z$ over the complex numbers
$\mathbf{C}$ into annihilation and creation operators as follows :
$z+ik\stackrel{{\scriptstyle}}{{\rightarrow}}\frac{p+ikz}{\sqrt{2}}=a_{(k)}$
where p corresponds to the momentum operator; k corresponds to an angular
frequency; and z corresponds to the position operator using essentially
Dirac’s definitions with all variables complex variables and with complex
parameters.
Actually, Dirac uses the older notation $\eta_{k}$ for particle creation
operators instead of the $a_{k}$ or $a^{*}$ notation. It suffices to consider
creation operators because the destruction operators are just the anti-
particle reversed time induction into the macro space-time thread from it’s
neighboring space-time thread.
Thus the harmonic polynomial above corresponds to a composite family of
quantum oscillators with frequencies equal to the whole numbers. This will
give all of the pure imaginary poles of the Riemann zeta kernel up to a factor
of $2\pi$ and up to $n$ while slowing down the growth properties of $e^{z}-1$.
Furthermore, one can now perform arithmetic c.f. Section 7 using Weyl algebras
because the index $k$ in $a_{(k)}$ possesses arithmetic and analytic
properties. Thus essentially we have found a finite quantum field theory to
analyze zeta functions.
Hence, the present line of research is to use standard methods along with our
new ideas and see what we can learn to track the alignment of the non-trivial
critical zeros of the zeta function.
### 1.2 Some Approximate Dirichlet-Hasse Eta Functions Related to RH
We consider integrals of the form
$L=L_{n}(s)=\int^{+\infty}_{0}{\frac{x^{s-1}}{H^{*}_{n}(x)}}dx\qquad(0<s<2n)$
and prove that $H^{*}_{n}(z)$ induces the formation of a Hasse type of eta
function with the properties:
###### Theorem 1
The trivial zeros of
$\zeta^{H^{*}}_{n}(s)=\sum^{n}_{k=1}(-1)^{(k-1)}\binom{2n}{n+k}k^{-s}$
are given by the negative even integers
$\zeta^{H^{*}}_{n}(-2m)=0\qquad(1\leq m\leq n-1)$
where
$H^{*}_{n}(z)=(z+i)(z-i)(z+2i)(z-2i)...(z+ni)(z-ni).$
Furthermore, $\zeta^{H^{*}}_{n}(s)$ is an entire function and $L_{n}(s)$ is
given by
$L_{n}(s)=\frac{\pi}{sin(s\frac{\pi}{2})}\zeta^{H^{*}}_{n}(-s)\frac{1}{(2n)!}\qquad(0<s<2n)$
which extends meromorphically throughout the s-plane with possible poles at
the even integers except for the pole-free points at
$s=2m\qquad(1\leq m\leq n-1).$
The special case of this theorem with $s=1$ and $n=1$ is the well known result
$\int^{+\infty}_{0}{\frac{1}{x^{2}+1}}dx=\frac{\pi}{2}.$
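As a quick sanity check on the trivial zeros asserted in Theorem 1, the short Python snippet below evaluates $\zeta^{H^{*}}_{n}(-2m)=\sum_{k=1}^{n}(-1)^{k-1}\binom{2n}{n+k}k^{2m}$ exactly in integer arithmetic for small $n$; it is only a numerical illustration, not part of the proof.

```python
from math import comb

def zeta_Hstar_neg_even(n, m):
    """Exact value of zeta^{H*}_n(-2m) = sum_{k=1}^{n} (-1)^(k-1) C(2n, n+k) k^(2m)."""
    return sum((-1) ** (k - 1) * comb(2 * n, n + k) * k ** (2 * m) for k in range(1, n + 1))

for n in range(2, 9):
    assert all(zeta_Hstar_neg_even(n, m) == 0 for m in range(1, n))
print("zeta^{H*}_n(-2m) = 0 for 1 <= m <= n-1, checked up to n = 8")
```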
We call $\zeta^{H^{*}}_{n}(s)$ an approximate alternating zeta function or an
approximate eta function corresponding to the polynomial $H^{*}$. This is of
interest because the Dirichlet eta function or alternating zeta function
$\eta(s)=\sum^{\infty}_{k=1}\frac{(-1)^{k-1}}{k^{s}}=(1-2^{1-s})\zeta(s)\qquad(s>0)$
can be expressed in a slightly different notation using a result of Hasse [78]
as follows
$\displaystyle\eta(s)=\sum^{\infty}_{n=0}\frac{1}{2^{n+1}}\zeta^{H}_{n}(s)\qquad(s>0)$
$\displaystyle\eta(s)=\sum^{\infty}_{n=0}\frac{1}{2^{n+1}}\eta_{n}(s)\qquad
where$
$\displaystyle\zeta^{H}_{n}(s)=\eta_{n}(s)=\sum^{n}_{k=0}(-1)^{k}\binom{n}{k}(k+1)^{-s}\qquad
and$ $\displaystyle\ H_{n}(z)=(z+1)(z+2)...(z+n+1).$
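For the reader who wants to see Hasse's expansion in action, the minimal Python sketch below truncates the outer sum at $N$ terms and compares the result with the known value $\eta(2)=\pi^{2}/12$; the truncation level $N=60$ is an arbitrary choice for illustration.

```python
from math import comb, pi

def eta_n(n, s):
    """Hasse's local eta function eta_n(s) = sum_k (-1)^k C(n,k) (k+1)^(-s)."""
    return sum((-1) ** k * comb(n, k) * (k + 1) ** (-s) for k in range(n + 1))

def eta_hasse(s, N=60):
    """Truncated Hasse series for the alternating zeta (eta) function."""
    return sum(eta_n(n, s) / 2 ** (n + 1) for n in range(N))

print(eta_hasse(2.0), pi ** 2 / 12)   # the two values agree to many digits
```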
In fact we now find a direct connection to Hasse’s eta function induced from
$H_{n}(z)$ with the properties:
###### Theorem 2
The trivial zeros of
$\zeta^{H}_{n}(s)=\eta_{n}(s)=\sum^{n}_{k=0}(-1)^{(k)}\binom{n}{k}(k+1)^{-s}$
are given by negative integers and zero in the range
$\zeta^{H}_{n}(-m)=\eta_{n}(-m)=0\qquad(0\leq m\leq n-1)$
where
$H_{n}(z)=(z+1)(z+2)...(z+n+1).$
Furthermore, $\zeta^{H}_{n}(s)$ is an entire function and $L_{n}(s)$ is given
by
$L=L_{n}(s)=\int^{+\infty}_{0}{\frac{x^{s-1}}{H_{n}(x)}}dx\qquad(0<s<n+1)$
with
$L_{n}(s)=\frac{\pi}{sin(s\pi)}\zeta^{H}_{n}(1-s)\frac{1}{(n)!}\qquad(0<s<n+1)$
which extends meromorphically throughout the s-plane with possible poles at
the integers except for the pole-free points at
$s=m\qquad(0\leq m\leq n-1).$
Note that the special case of this theorem with $n=0$ is the well known result
$L_{0}(s)=\int^{+\infty}_{0}{\frac{x^{s-1}}{x+1}}dx=\frac{\pi}{sin(s\pi)}\qquad(0<s<1).$
This shows that there is some virtue in exploring the local approximate $\eta$
functions to reveal clues about the global behavior of the zeta function in
the critical strip. In fact, we speculate that for $n=0,1,2,\ldots$ the critical line of each approximate eta function $\eta_{n}(s)$ is the line $\frac{n+1}{2}+it$, in a weaker topology, because the region of convergence of $L_{n}(s)$ is $0<s<n+1$. Moreover, using Sections 5 and 6 as a blueprint, we outline an approach to RH based on the above results and ideas. We plan on studying what we call “proto-zeros” to give information about the non-trivial zeros of the zeta function.
A proto-zero of $\eta_{k}(s)$, for $s=\sigma+it$ with $\sigma$ fixed, is a point at which the function is smallest in absolute value in a neighborhood of $it$. For example, the proto-zeros of $\eta_{1}(s)=1-\frac{1}{2^{s}}$ for $s=\sigma+it$ with $0<s<1$ occur whenever $t=\frac{2\pi n}{\log 2}$ with $n$ an integer. We will also be looking for local functional equations.
### 1.3 Riemann’s Functional Equation for the Zeta Function.
It is interesting, along this line of thinking, to record Riemann's functional equation in the following form [26], [12]
$\frac{\zeta(s)}{\zeta(1-s)}=(2\pi)^{s-1}2sin(\frac{\pi s}{2})\Pi(-s)$
from which it follows fairly readily that the zeta function $\zeta(s)$ has
trivial zeros at the negative even integers. Also the above left hand side
suggests the symmetry
$s\stackrel{{\scriptstyle}}{{\rightarrow}}1-s$
which maps the critical region
$0<s<1$
into itself and keeps invariant the critical line
$s=\frac{1}{2}+it$
where presumably all of the non-trivial zeros lie. The latter is a statement
of the Riemann Hypothesis. Note that the right hand side of the functional
equation contains product factors of the type
$1+\frac{s}{\alpha}.$
This suggests that more might be done with this observation in attacking RH; c.f. Section 7 on the Euler factor approach. Also, our results in Theorem 1 and Theorem 2 seem to echo some functional-equation connection to Riemann's functional equation, but we have not been able to find a direct relationship so far.
## 2 Proof of Theorem 1
We follow almost in parallel the same techniques that may be found in H.M.
Edwards [26] for elementary Riemann zeta function theory with the major
difference that an explicit functional equation is apparently not manifest but
implied from our analysis. In our case, we develop the proof for our theorem
as follows. First we choose the branch cut corresponding to the non-negative
reals for $log(z)$, which is our notation for $ln(z)$, instead of the usual
non-positive reals. Hence the phases for Sheet(0) correspond to the phase band
$B_{0}=[0,2\pi)$
in the counterclockwise positive sense. Consider the “branch cut” contour
integral
$L_{\gamma}=L_{\gamma
n}(s)=\oint_{\gamma(0)}{\frac{z^{s-1}}{H^{*}_{n}(z)}}dz\qquad(s<2n).$
This clearly defines a holomorphic or analytic function using standard methods
i.e., uniform convergence on compact domains because $2n-(s-1)>1$. It is
understood that the contour $\gamma(0)$ is the path that starts at $+\infty$
above the non-negative axis and then circles around zero $0$ in a positive
sense back underneath the non-negative axis excluding all other
discontinuities of the argument of the integral back to $+\infty$ . Define the
following path $\gamma$ (c.f. Figure 1)
$\gamma=\gamma(0)+\gamma(\infty)+\gamma(R).$
$\gamma(0)$$\gamma(\infty)$ Figure 1: The path $\gamma$.
The path $\gamma(\infty)$ is the path moving downward in a negative sense
from $+\infty$ around the entire z-plane at $\infty$ back to the beginning
just above the $+\infty$ point where the path $\gamma(0)$ starts and ends. The
path $\gamma(R)$ refers to all of the little positive sense circles around
each of the poles
$z_{k}=ik\qquad(k=1,2...n,-1,-2,..-n)$
where it is understood that the entry and exit lines to each little circle
from $\infty$ and back out to $\infty$ cancels. Hence the path $\gamma$
encompasses a single valued holomorphic function defined on Sheet(0) for
$log(z)$ with phase band $B_{0}=[0,2\pi)$. So the path integral of this must
equal 0 by Cauchy’s Theorem:
$\oint_{\gamma}=0=\oint_{\gamma(0)}+\oint_{\gamma(\infty)}+\oint_{\gamma(R)}$
Since $s<2n$ we have
$\oint_{\gamma(\infty)}=0$
hence
$\oint_{\gamma(0)}=-\oint_{\gamma(R)}$
We use the following definitions
$z^{s-1}=\exp((s-1)(log|z|+i\phi(z)))$
and the evaluation of $i^{s-1}$ is given by
$E=e^{\frac{(s-1)\pi}{2}}.$
Using our previous definitions we can evaluate
$\oint_{\gamma(0)}=L_{\gamma}=L_{\gamma n}(s)=-L+E^{4}L\qquad(0<s<2n)$
since the “phase charge” on the outbound lower path is $2\pi$. Also note
that
$\oint_{\gamma(R)}=2\pi i\sum r_{k}=2\pi iR\qquad(1\leq|k|\leq n)$
gives the sum of the residues at the poles $z_{k}=ik$. Our next task is to
compute the residues $r_{k}$ for $k>0$. Recall that
$\lim_{z\to ik}(z-ik)\frac{z^{s-1}}{H_{n}(z)}=r_{k}$
where
$H_{n}(z)=(z+i)(z-i)(z+2i)(z-2i)...(z+in)(z-in).$
To organize the calculations, write
$\displaystyle A(z)=(z-i1)(z-2i)...(z-(k-1)i)$ $\displaystyle
B(z)=(z-i(k+1))(z-(k+2)i)...(z-ni)$ $\displaystyle C(z)=(z+i)(z+2i)...(z+ni).$
Therefore,
$\displaystyle A(ik)=((k-1)i)((k-2)i)...(i)=(k-1)!(i)^{k-1}$ $\displaystyle
B(ik)=(-i)(-2i)...(-(n-k)i)=(n-k)!(-i)^{n-k}$ $\displaystyle
C(ik)=((k+1)i)((k+2)i)...((n+k)i)=\frac{(n+k)!}{k!}(i)^{n}.$
Hence,
$r_{k}=(ik)^{s-1}\frac{k!}{((k-1)!)(n-k)!(n+k)!(i)^{k-1}(-i)^{n-k}(i)^{n}}.$
Simplifying where recall $E=e^{(s-1)(\frac{1\pi}{2})}$,
$r_{k}=E\frac{k^{s}}{(n-k)!(n+k)!(i)^{2n-1}(-1)^{n-k}}.$
Finally,
$(i)^{2n-1}(-1)^{n-k}=(-1)^{n}(i)^{-1}(-1)^{n-k}=(-1)^{2n-k}(-i)=-i(-1)^{k}.$
thus we obtain,
$r_{k}=+iE\frac{k^{s}(-1)^{k}}{(n-k)!(n+k)!}\qquad(k>0).$
A similar calculation for $k<0$ gives a ”‘conjugate”’
$r_{k}=-iE^{3}\frac{k^{s}(-1)^{k}}{(n-k)!(n+k)!}\qquad(k<0)$
because on Sheet(0) with phase band $B=[0,2\pi)$ we have
$(-i)^{s-1}=e^{(s-1)(\frac{3\pi}{2})}=E^{3}.$
Thus the sum of the residues are
$\displaystyle
R=\sum{r_{k}}=i(E-E^{3})\sum_{k=1}^{n}\frac{k^{s}(-1)^{k}}{(n+k)!(n-k)!}$
$\displaystyle R=-i(E-E^{3})\frac{1}{(2n)!}\zeta^{H^{*}}_{n}(-s)\qquad where$
$\displaystyle\zeta^{H^{*}}_{n}(s)=\sum^{n}_{k=1}{(-1)^{k-1}}\binom{2n}{n+k}k^{-s}.$
Furthermore note that $E-E^{3}=E(1-E^{2})$ and that
$1-E^{2}=0$
if and only if for the isolated points $s=2m+1$ where m is an integer because
$\frac{\pi}{2}(s-1)=\pi mod(\pi)$.
Using the previously developed calculations starting from the result that the
branch cut contour integral equals the negative of the residues integral
$\oint_{\gamma(0)}=-\oint_{\gamma(R)}$
we find
$-(1-E^{4})L=-2\pi iR$
$(1+E^{2})(1-E^{2})L=2\pi(i)(-i)E(1-E^{2})\zeta^{H^{*}}_{n}(-s)\frac{1}{(2n)!}.$
We can cancel $1-E^{2}$ on both sides because this factor has isolated zeros, occurring only at the odd integers $1\leq s=1+2m\leq 2n-1$; by continuity, the resulting identity then holds throughout the specified domain, so we obtain
$(1+E^{2})L=2\pi E\zeta^{H^{*}}_{n}(-s)\frac{1}{(2n)!}$
thus
$cos((s-1)\frac{\pi}{2})L=sin(s\frac{\pi}{2})L=\pi\zeta^{H^{*}}_{n}(-s)\frac{1}{(2n)!}.$
We also obtained the same result above by working with a path $\gamma$ just
above the real axis. Now by our working assumptions of $0<s<2n$ we have that
$L=L_{n}(s)=\int^{+\infty}_{0}{\frac{x^{s-1}}{H^{*}_{n}(x)}}dx$
is convergent, and that $\zeta^{H^{*}}_{n}(s)$ is an entire function because
it is a finite sum of exponentials. Now, taking even integers $s=2m$ and using the above equation, zero-pole cancellation essentially forces
$\zeta^{H^{*}}_{n}(-2m)=0\qquad(1\leq m\leq n-1).$
Also we have,
$L_{n}(s)=\int^{+\infty}_{0}{\frac{x^{s-1}}{H^{*}_{n}(x)}}dx=\frac{\pi}{sin(s\frac{\pi}{2})}\zeta^{H^{*}}_{n}(-s)\frac{1}{(2n)!}\qquad(0<s<2n).$
which completes the proof of the theorem.
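As a purely numerical cross-check of the final identity (and not a substitute for the argument above), the following Python sketch compares the left-hand integral with the right-hand closed form for one non-integer value of $s$; the choice $n=3$, $s=1.7$ is arbitrary.

```python
import numpy as np
from math import comb, factorial, pi, sin
from scipy.integrate import quad

def zeta_Hstar(n, s):
    """zeta^{H*}_n(s) = sum_{k=1}^{n} (-1)^(k-1) C(2n, n+k) k^(-s)."""
    return sum((-1) ** (k - 1) * comb(2 * n, n + k) * k ** (-s) for k in range(1, n + 1))

def lhs(n, s):
    """L_n(s) = integral_0^inf x^(s-1) / H*_n(x) dx, computed by quadrature."""
    Hstar = lambda x: np.prod([x ** 2 + k ** 2 for k in range(1, n + 1)])
    val, _ = quad(lambda x: x ** (s - 1) / Hstar(x), 0, np.inf)
    return val

def rhs(n, s):
    return pi / sin(s * pi / 2) * zeta_Hstar(n, -s) / factorial(2 * n)

n, s = 3, 1.7
print(lhs(n, s), rhs(n, s))   # the two numbers agree to quadrature accuracy
```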
## 3 Proof of Theorem 2
The proof of Theorem 2 is very similar to the previous Theorem 1; hence, we
present only the highlights.
To begin, the integrand in the corresponding integral is now given by,
$\frac{z^{s-1}}{H_{n}(z)}$
where
$H_{n}(z)=(z+1)(z+2)...(z+n+1)$
is now a product of creation operators $a_{k}$ for $k=1,...n+1$ using our QFT
paradigm. Let
$E=e^{\frac{i\pi}{2}(s-1)}$
then, proceeding as before, the residues are given by
$r_{-k}=\frac{E^{2}k^{s-1}}{(-(k-1))(-(k-2))..(-1)(1)(2)..(n+1-k)}\qquad(1\leq
k\leq n+1)$
then changing notation where
$k^{\prime}=k-1$
then gives
$r_{-(k^{\prime}+1)}=\frac{E^{2}(k^{\prime}+1)^{s-1}(-1)^{k^{\prime}}}{(k^{\prime})!(n-k^{\prime})!}\qquad(k^{\prime}=0,1,..,n).$
Therefore the sum of the residues is
$R=\frac{E^{2}}{n!}\sum^{n}_{k^{\prime}=0}(-1)^{k^{\prime}}\binom{n}{k^{\prime}}(k^{\prime}+1)^{s-1}$
hence
$R=\frac{E^{2}}{n!}\eta_{n}(1-s)$
where
$\eta_{n}(s)=\zeta^{H}_{n}(s)=\sum^{n}_{k=0}(-1)^{k}\binom{n}{k}(k+1)^{-s}$
is Hasse’s local eta function in the alternating zeta function mentioned in
the introduction.
Continuing with the calculations, using the fact that the sum of the little
residue integrals plus the branch cut integral must add to zero, gives the
relationship,
$-L+E^{4}L=-2\pi iR$
where
$L=\int^{+\infty}_{0}{\frac{x^{s-1}}{H_{n}(x)}}dx\qquad(0<s<n+1).$
Thus we find,
$L=2\pi i\frac{E^{2}}{1-E^{4}}\frac{\eta_{n}(1-s)}{n!}$
which upon simplification gives
$\displaystyle L=\frac{\pi}{-sin((s-1)\pi)}\frac{\eta_{n}(1-s)}{n!}\qquad or$
$\displaystyle L=\frac{\pi}{sin(s\pi)}\frac{\eta_{n}(1-s)}{n!}.$
From the above, it is clear that $\sin(\pi s)$ vanishes exactly at the integers $s=0,\pm 1,\pm 2,\ldots$, while the interval of convergence of $L$ is $0<s<n+1$. Thus $0>-s>-n-1$ and hence $1>1-s>-n$; writing $-m=1-s$ gives $1>-m>-n$, i.e. $0\leq m\leq n-1$ for integer $m$. Therefore zero-pole cancellation forces $\eta_{n}(-m)=0$, giving the zeros in the stated range.
The remaining details are easily filled in and we have proved the theorem.
Next, we make some preparatory observations on complex quantum oscillator
theory, which gives the motivation behind our investigation into RH using QFT ideas. A portion of these results is from my unpublished research [20], with inspiration from Lubkin [47].
## 4 Quantum Complex Oscillators: The Origin of the Paradigm
This section requires a little bit of background in elementary quantum
mechanics and the standard quantum harmonic oscillator theory [23], [7]. We
will extend the ordinary quantum oscillator Hamiltonian
$H=\frac{1}{2m}(p^{2}+m^{2}\omega^{2}x^{2})$ to a complex oscillator with
complex frequency $\omega$; with complex mass $m$; with complex $\hbar$; and
complex spatial position $z$; complex momentum $p$; and complex time $s$.
We prove that a complex oscillator can be quantized. This will be demonstrated
by a symbolic and algebraic method and also by an analytic method. One should
note that Becker [3], Youssef [79], [80], and others have studied some of these ideas before the author became aware of their publications.
Also, we have since learned that D-module theory [37],[32] essentially
captures the algebraic aspects of these constructions with the introduction of
Weyl algebra machinery over the complex numbers $\mathbf{C}=k$.
A Weyl algebra is an associative unital algebra given by the free algebra
$k[b_{(1)},...b_{(n)},a_{(1)},...a_{(n)}]$ modulo two sided commutator
conditions of the form $[b,a]-1$ . Specifically,
$A(k)_{n}=k[b_{(1)},...,b_{(n)},a_{(1)},...,a_{(n)}]$
satisfying commutator conditions with all operators commuting except for
$[b_{(k)},a_{(k)}]=b_{(k)}a_{(k)}-a_{(k)}b_{(k)}=1$
where
$b_{(k)}=a_{(-k)}$
for all integers $k=1,2,..,n$.
The unit operator $1$ commutes with all the $a^{\prime}s$ and $b^{\prime}s$.
Also later in our constructions, we allow the indices $k$ to be any complex
number. Also note that there is a scaling of the a’s and b’s operators to
obtain the $1$ operator.
The difference in our algebraic approach is to emphasize the introduction of
creation operators through the paradigm (c.f. Section 1)
$z+ik\stackrel{{\scriptstyle}}{{\rightarrow}}\frac{p+ikz}{\sqrt{2}}$
and to create ”particles” by hitting the unit operator with powers of a fixed
creation operator. This has a similar flavor to Nakajima’s creation operator
methods in Hilbert scheme theory [55]. The anti-particles correspond to
negative integer particles. Also we can emulate usual quantum mechanics with
kets and bras inside the algebra to compute matrix elements. Hence we view a
Weyl algebra as a finite QFT quantum field theory.
The following provides the original rendition of our thinking which has served
the foundation for our current research activities which we believe has
intrinsic value.
### 4.1 Preliminary Quantum Oscillator Calculations
We start by considering
$H=\frac{1}{2}(p^{2}+\omega^{2}z^{2}).$
which amounts to assuming $m=1$ and $\hbar=1$. This can be accomplished by
using a series of scaling transformations.
We have to be careful in specifying the phase of $\omega$ because we need a
rule to distinguish between creation and destruction operators and hence the
given arrow of time. We will see this later.
Notice that H above is symmetric with the interchange of $\omega$ and
$-\omega$. For the moment, we ignore this ambiguity and proceed with the
following definitions.
Introducing Schrodinger’s prescription,
$p=-i\hbar\partial_{z}=-i\partial_{z}.$
leads to the bosonic commutator quantum condition
$[p,z]=pz-zp=-i$
because operating the commutator on the left with an arbitrary spatial wave
$\psi(z)$ gives
$\displaystyle(pz-zp)\psi=p(z\psi)-z(p\psi)$
$\displaystyle[p,z]\psi=-i\psi+z(p\psi)-z(p\psi)=-i\psi.$
Now classically, at the macro level, factorize $H$ above
$H=\frac{1}{\sqrt{2}}(p+i\omega z)\frac{1}{\sqrt{2}}(p-i\omega z)$
which suggests the following definitions
$\displaystyle a=a_{(\omega)}=\frac{1}{\sqrt{2|\omega|}}(p+i\omega z)$
$\displaystyle b=a_{(-\omega)}=\frac{1}{\sqrt{2|\omega|}}(p-i\omega z)\qquad
with$ $\displaystyle\omega=|\omega|u_{\omega}=|\omega|u\qquad where$
$\displaystyle u=e^{i\phi}\mbox{ is a phase factor for the frequency.}$
with possible phase ambiguity in the factor $\sqrt{\omega}$.
This gives the following commutator
$[b,a]=ba-ab=u$
because
$\displaystyle ab=\frac{H}{|\omega|}+\frac{i\omega}{2|\omega|}(zp-pz)$
$\displaystyle ab=\frac{H}{|\omega|}+u\frac{ii}{2}$ $\displaystyle
ab=\frac{H}{|\omega|}-u\frac{1}{2}.$
Note: The usual notation for the commutator is $[a,a^{*}]=1$ however, we want
to avoid using $C^{*}$ concepts in our deliberations.
Hence we see that $ab$ drops the intrinsic energy by one half of a quantum
under the tacit assumption of the given orientation of ”‘time”’. Similarly, we
obtain
$ba=\frac{H}{|\omega|}+u\frac{1}{2}$
and that this gives an increase of intrinsic energy by one half quantum.
Subtracting the two ”‘number”’ operators gives the quantum condition for
particles
$[b,a]=ba-ab=u$
and adding the two equations together gives H as a symmetrical sum of number
operators,
$H=\frac{|\omega|}{2}(ba+ab).$
Sometimes we write this in the following form to show the decomposition of the
oscillator into a magnitude part times a phase part:
$H_{(\omega)}=|\omega|H_{(u)}=|\omega|uH_{(1)}=\omega H_{(1)}$
where the subscript defines the type of frequency in an obvious notation. Next
we bring in ”‘time”. The Hamiltonian equations of motion for a dynamical
variable $v$ using Poisson Brackets are
$\dot{v}=[v,H]_{PB}.$
Following Dirac [24] this becomes, using commutators in a quantum mechanical
description, Heisenberg’s equation of motion
$i\hbar\dot{v}=[v,H]=vH-Hv.$
Next we examine the law of motion for $a$ assuming a complex frequency
$\omega$ to see the impact on ”‘time”’ and the emergence of anti-particle
concepts. Also we carry out enough of the commutator calculations for the
reader to see what the impact of the phase factor “u” is on the results.
Writing, assuming again $\hbar=1$,
$i\dot{a}=[a,H]=\frac{a|\omega|}{2}(ab+ba)-\frac{|\omega|}{2}(ab+ba)a$
calculating and extensively using the commutator relationship $ba=ab+u$ to
move all of the “b’s” to the right gives
$\displaystyle[a,H]=\frac{|\omega|}{2}([a,ab]+[a,ba])$
$\displaystyle[a,ab]=a^{2}b-a(ba)$ $\displaystyle[a,ab]=a^{2}b-a(ab+u)$
$\displaystyle[a,ab]=-au\qquad similarly$ $\displaystyle[a,ba]=-au\qquad
hence$ $\displaystyle[a,H]=-au|\omega|.$
Thus
$i\dot{a}=-\omega a\qquad therefore$
$a_{s}=e^{i\omega s}a_{0}$
and we have the evolution equation for the operator “a” with respect to the
time “s” which is of course the expected result from standard quantum theory.
Actually, we were expecting an additional $\frac{\omega}{2}$ coming from the
vacuum energy. We will be looking into this later. A similar result for the
operator ”‘b”’ is
$b_{s}=e^{-i\omega s}b_{0}.$
Clearly, with respect to the current set of macro observers, the above results
show that there are two possible “rest” frames for the operators “a” and “b”, and consequently for “H”. Thus, if “time” follows the tracks given by the
condition
$u_{\omega}u_{s}=\pm 1$
where we use the notation $s=|s|u_{s}$ with $u_{s}=e^{i\phi_{s}}$ as the
corresponding phase factor for “time”, then we will have a “rest” condition. Hence we have defined a direction and orientation of “time”.
If this condition does not hold, the macro observers would report exponential
growth or decay and thus a possible unstable physical interpretation unless
one is content with short time periods or unless the observers attempt to find
a ”‘rest”’ frame to perform calculations.
Along these same lines and more formally, we prove that complex oscillators
can be quantized and that there is an inherent negative energy, anti-particle,
time reversal and even the hint of spin 1/2 characteristics in quantum complex
oscillators [10], [30].
### 4.2 Complex Quantum Oscillator Theory; i.e., Weyl Algebra Theory
As noted above, we have since learned that this section is essentially Weyl
algebra theory. However, we want to keep the ideas and notations on record
because it guides our thinking on the subject giving an almost totally
symbolic/algebraic approach to quantum complex oscillator theory and possibly
quantum field theory.
Our presentation will appear to be almost the same as standard quantum
mechanics and quantum field theory with the difference that we will be trying
to stay as symbolic as possible and will try to be more careful in the
notations, and of course, we want to directly include complex numbers into our
calculations so that complex energy, complex time etc., make sense.
This is pretty much in the spirit of physicists’ techniques. However, to
perform actual calculations we will introduce a method to obtain quantitative
results. Heisenberg and Dirac basically pioneered these techniques followed by
Feynman and others. We need to work with non-unitary and non-hermitian
operators to make progress in our work.
To begin, we record a number of commutator calculations for the development of
a symbolic and algebraic oscillator theory.
###### Definition 1
Assume the following commutator condition
$[b,a]=ba-ab=u$
and define
$H=\frac{1}{2}(ab+ba)$
where $u$ is a phase times the unit operator.
###### Definition 2
Define the wave-particle algebra $\Psi$ over the complex numbers with
generators “a” and “b”, denoted by
$\Psi[a,b]$
with the commutator condition given above. This algebra contains the unit
operator i.e., is a unital algebra.
For the readers reference, this corresponds to the Weyl algebra $A_{1}$ over
the complex numbers.
###### Definition 3
Define the left module
$\ominus=\Psi b$
as the “vacuum”. We will be calculating modulo the vacuum and will denote this by $\equiv$. Other vacuums, of course, can be constructed, such as those finite combinations of a's and b's or 1 that may be omitted in the module.
###### Definition 4
Define the vacuum “ket” in the algebra $\Psi$ as follows
$|0>^{\circ}=1$
then clearly we have the vacuum annihilation condition
$b|0>^{\circ}\equiv 0\qquad(\mbox{modulo }\ominus).$
###### Definition 5
Define the following “a”-particle excited states lifted from the vacuum
$a^{k}|0>^{\circ}=|k>^{\circ}=a^{k}$
where in this context we will be favoring the production of “a” particles over “b” particles.
###### Definition 6
Define the conjugate of a “ket”, which turns “kets” into “bras”, where “c” is an arbitrary complex number:
$\overline{c|k>^{\circ}}=\overline{c}<k|^{\circ}=\overline{c}b^{k}.$
###### Definition 7
Define the complex “unitary” operator to go from time $0$ to time $s$ by
$U(s|0)=e^{-iHs}$
so that on stationary kets with possible complex energy E modulo the vacuum $\ominus=\Psi b$ at the time $s=0$ with
$H|\psi,0>^{\circ}\equiv E|\psi,0>^{\circ}$
we have
$U(s|0)|\psi,0>^{\circ}\equiv e^{-iEs}|\psi,0>^{\circ}\equiv|\psi,s>^{\circ}.$
Next in order to perform concrete computations we need to make contact with
Schrodinger’s wave mechanics and a semblance of Dirac delta function theory
and distribution theory.
Hence, when necessary identify
$a=u^{\frac{1}{2}}\frac{1}{\sqrt{2}}(p+iz)\stackrel{{\scriptstyle}}{{\rightarrow}}u^{\frac{1}{2}}\frac{1}{\sqrt{2}}(-i\partial_{z}+iz)$
with a similar identification for the ”‘b”’.
Note: Later we will see that we can set the phase $u=1$. Clearly, the scaling of the commutator is ambiguous in that the phase factor u can slide into the factors in different ways in order to normalize it to $[b^{\prime},a^{\prime}]=1$.
We could have the balanced situation
$\displaystyle a^{\prime}=u^{\frac{1}{2}}a$ $\displaystyle
b^{\prime}=u^{\frac{1}{2}}b$ $\displaystyle\qquad or$ $\displaystyle
a^{\prime}=a$ $\displaystyle b^{\prime}=\frac{b}{u}$
and so forth.
Our objective is to define the amplitude $<z|k>^{\circ}$ for the variable z.
However, this does not make sense, because z is not a label defined in the
wave particle algebra $\Psi[a,b]$ and the previous definitions. We get around
this by using the following definition.
###### Definition 8
Let $z_{0}$ be a given fixed complex number.
Define the following matrix element
$<\widehat{z_{0}}|0>^{\circ}\equiv e^{-\frac{1}{2}z_{0}^{2}}$
extend this definition for arbitrary elements $\psi$ in the algebra
$\Psi[a,b]$
$\widehat{<z_{0}}|\psi|0>^{\circ}\equiv\psi\widehat{<z_{0}}|0>^{\circ}$
using
$\widehat{<z_{0}}|a^{k}|0>^{\circ}\equiv(\frac{u^{\frac{1}{2}}}{\sqrt{2}}(-i\partial_{z}+iz))^{k}\widehat{<z_{0}}|0>^{\circ}.$
Then the following can be proved quite easily using standard quantum
techniques.
###### Lemma 1
Using the definitions above:
(a) Commutator relations for ”‘a”’ and ”‘b”’,
$\displaystyle[b,a^{n}]=ba^{n}-a^{n}b=una^{n-1}$
$\displaystyle[a,b^{n}]=ab^{n}-b^{n}a=-unb^{n-1}.$
(b) Commutator for H
$[H,a^{n}]=una^{n}.$
(c) Norm of particle states modulo the vacuum $\ominus=\Psi b$
$<n|^{\circ}n>^{\circ}\equiv b^{n}a^{n}\equiv u^{n}n!$
(d) Eigen-energy of particle states modulo the vacuum $\ominus=\Psi b$
$H|k>^{\circ}\equiv u(k+\frac{1}{2})k>^{\circ}$
Proof of the Lemma (highlights):
Here we give some of the details for (c). We can simplify the calculations in
the following way: absorb the u into, say, a and then define the new “coordinates” for the operators
$\displaystyle a^{\prime}=\frac{a}{u}$ $\displaystyle b^{\prime}=b\mbox{
thus}$ $\displaystyle[b^{\prime},a^{\prime}]=1.$
which in C* notation is $[a,a^{*}]=1$, hence all of these results should be
quite familiar.
Thus we can assume $[b,a]=1$ and start the proof of (c).
By induction, assume the following for $n$, where the case $n=1$ is obviously
true modulo the vacuum $\ominus=\Psi b$,
$b^{n}a^{n}\equiv n!$
then using the results from (a) we have
$\displaystyle b^{n+1}a^{n+1}=(b^{n+1}a)a^{n}=(ab^{n+1}+(n+1)b^{n})a^{n}\qquad
thus$ $\displaystyle b^{n+1}a^{n+1}\equiv(0+(n+1)b^{n})a^{n}\quad hence$
$\displaystyle b^{n+1}a^{n+1}\equiv(n+1)n!=(n+1)!.$
Note that the term
$b^{n+1}a^{n}\equiv b(b^{n}a^{n})\equiv bn!\equiv 0$
and we are done. This concludes the proof of the lemma.
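To make part (c) tangible, here is a small, self-contained Python sketch of normal ordering in the Weyl algebra (with the phase absorbed so that $[b,a]=1$); it stores elements as dictionaries of normal-ordered monomials $a^{i}b^{j}$ and checks that $b^{n}a^{n}\equiv n!$ modulo the vacuum $\ominus=\Psi b$. It is an illustration of the lemma, not part of its proof.

```python
from math import comb, factorial
from collections import defaultdict

def mul(x, y):
    """Multiply elements of the Weyl algebra A_1 with [b, a] = ba - ab = 1,
    each stored as {(i, j): coeff} for normal-ordered monomials a^i b^j."""
    out = defaultdict(int)
    for (i, j), cx in x.items():
        for (k, l), cy in y.items():
            # reorder b^j a^k = sum_m C(j,m) C(k,m) m! a^(k-m) b^(j-m)
            for m in range(min(j, k) + 1):
                out[(i + k - m, j + l - m)] += cx * cy * comb(j, m) * comb(k, m) * factorial(m)
    return dict(out)

def power(x, n):
    r = {(0, 0): 1}          # the unit operator
    for _ in range(n):
        r = mul(r, x)
    return r

a, b = {(1, 0): 1}, {(0, 1): 1}
for n in range(1, 7):
    word = mul(power(b, n), power(a, n))
    # modulo the vacuum Psi*b: every monomial still containing a b drops out
    surviving = sum(c for (i, j), c in word.items() if j == 0)
    assert surviving == factorial(n)
print("b^n a^n = n! modulo the vacuum, checked for n <= 6")
```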
Using these results we can prove the following result:
###### Theorem 3
(a) Complex quantum oscillator calculations can be conducted without introducing a “vacuum” per se or a metric structure using bra and ket vectors in an analytical sense. These concepts are introduced algebraically.
(b) All calculations can be carried out symbolically and algebraically, and thus convergence issues and renormalization issues can be handled more rigorously. “States” are just extended products and sums of a's and b's over the complex numbers.
(c) A particular orientation of “time” and vacuum state will select the type of “real” particle to be observed.
Next we proceed to an analytical method to quantize complex oscillators. This
amounts to a Wick rotation method which pays closer attention to the Riemann
surface aspects of the analytic continuation.
### 4.3 Analytical Proof of the Quantization of Complex Oscillators
###### Theorem 4
(a) The complex oscillator
$H=H_{(u)}=\frac{1}{2}(p^{2}+u^{2}z^{2})$
can be put into a ”‘rest”’ frame
$H^{\prime}=H^{\prime}_{(1)}=\frac{1}{2}(p^{\prime 2}+z^{\prime 2})$
through scaling transformations if and only if one chooses a phase w such that
$w^{4}=u^{2}$
and applying the following transformations
$z^{\prime}=wz$ $p^{\prime}=w^{-1}p$ $H^{\prime}=w^{-2}H$ $s^{\prime}=w^{2}s$
which maintains the quantum condition,
$[p,z]=[p^{\prime},z^{\prime}]=-i.$
Only the case with $w^{2}=-u$ leads to negative energy and time reversal
$H^{\prime}=-u^{-1}H$
and
$s^{\prime}=-us$
and the reversal of $a$ and $b$ to $a=wb^{\prime}$ and $b=wa^{\prime}$ where
$a=\frac{1}{\sqrt{2}}(p+iuz)$
and
$b^{\prime}=\frac{1}{\sqrt{2}}(p^{\prime}-iz^{\prime}).$
(b) In particular for
$u=1$
there are two and only two possible rest frames with
$w=\pm 1$
and
$w=\pm i.$
Only the case with $w=\pm i$ leads to negative energy and time reversal
$H^{\prime}=-H$
and
$s^{\prime}=-s$
and the reversal of $a$ and $b$ to $a=wb^{\prime}$ and $b=wa^{\prime}$. Note:
Quantum oscillators are inherently ambiguous with respect to a “rest” frame. This is not surprising since we are extracting square roots using complex numbers, and thus there will always be a “+” or “-” solution; i.e., we will always have to introduce Riemann surface methods or explicit algebraic constructions to keep track of the ambiguities.
(c) Complex oscillators can be quantized using analytic continuation methods
and staying in compact regions of time and space or essentially compact
regions of time and space.
Proof: (a): Using scaling transformations, the most general oscillator
Hamiltonian with all parameters complex and all variables complex
$H=\frac{1}{2m}(p^{2}+m^{2}\omega^{2}z^{2})$
and quantum condition
$[p,z]=-i\hbar$
can be reduced to the form
$H=\frac{1}{2}(p^{2}+u^{2}z^{2})$
where ”‘u”’ is a phase and
$[p,z]=-i.$
Choose a phase ”‘w”’ such that
$w^{4}=u^{2}$
then scale as follows
$z^{\prime}=wz$ $p^{\prime}=w^{-1}p$
then this transforms H and maintains the commutator condition
$[p,z]=-i=[p^{\prime},z^{\prime}]$
as follows
$\displaystyle H=\frac{1}{2}(p^{2}+w^{2}w^{2}z^{2})$ $\displaystyle
H=\frac{1}{2}(p^{2}\frac{w^{2}}{w^{2}}+w^{2}w^{2}z^{2})$ $\displaystyle
H=\frac{1}{2}(p^{\prime 2}w^{2}+w^{2}z^{\prime 2})$ $\displaystyle
H=w^{2}H^{\prime}$ $\displaystyle H^{\prime}=w^{-2}H.$
To prove the swapping of the ”‘a”’ and ”‘b”’ for $w^{2}=-u$ consider
$a=\frac{1}{\sqrt{2}}(p+iuz)$
which leads to
$a=\frac{1}{\sqrt{2}}(p+i(-w^{2})z)$
thus
$a=\frac{1}{\sqrt{2}}(p\frac{w}{w}-iw(wz))$
so
$a=wb^{\prime}.$
Note that this portion of the demonstration works in the opposite direction, which essentially completes the “iff” proof. Now time $s$ transforms as follows, using the Heisenberg evolution equations, where $v$ is an arbitrary dynamical variable from the algebra $\Psi[a,b]$ of the a and b operators over the complex numbers
$\displaystyle i\frac{d}{ds}v=[v,H]$ $\displaystyle
i\frac{d}{ds}v=[v,w^{2}H^{\prime}]$ $\displaystyle
i\frac{d}{dw^{2}s}v=[v,H^{\prime}]$ $\displaystyle s^{\prime}=w^{2}s$
and this finishes part (a). Clearly (b) is a special case of (a).
Part(c) is straightforward.
## 5 QFT and NCG Particle/Wave Analogies
Here we incorporate some of the motivation behind our investigation into RH
using QFT ideas. We provide a “particle-wave” interpretation of Hasse’s result
where function fields correspond to ”‘waves”’ with exponentials of ”‘s”’ while
the ”‘particles”’ are created once they ”‘hit”’ the vacuum from the
corresponding operator. Wave particle duality is manifest because of the close
association of the exponentials in $s$ times creation operators in the
corresponding field expressions.
The idea of our creation operator paradigm came about in previous work on
quantum ”‘anti-oscillators”’ as we previously mentioned. We now know that
Connes’ NCG (Non-Commutative Geometry) [14], [39], [15], [16] and others is
the way to proceed.
### 5.1 A Quantum Eta Function Approach to RH
In fact, we carry out some of the steps in Section 7. Hence as the writing of
the paper proceeded, we have been gradually blending these ideas into our
thinking and have incorporated some of the NCG language into our work i.e.,
cyclic homology concepts [46].
We now proceed to develop the analogies. First we extend macro time-space to
complex time-space
$s^{\mu}=(s,z).$
Next we examine the following long loop integral along the branch cut
considered in Theorem 2
$\int_{\gamma(0)}\frac{z^{s-1}}{H_{n}(z)}dz\qquad(0<s<n+1)$
which we call $H_{n}$, instead of a “harmonic polynomial,” a “cyclic
polynomial”
$H_{n}(z)=(z+1)(z+2)\cdots(z+n+1).$
It should be clear from the discussion in Sections 1-3, that this integral and
associated integral have a direct bearing on RH.
Next we apply our original concept of quantizing complex variable function
theory using creation operators, i.e., QFT, to “soften up” the analysis so
that we can study more delicate “phase” locking [62] issues in the alignment
of the non-trivial zeros of $\zeta(s)$. The “soften up” idea is one of
the central notions of NCG.
Thus using Weyl creation operator correspondence that we have mentioned
several times previously
$z+ik\stackrel{{\scriptstyle}}{{\rightarrow}}\frac{1}{\sqrt{2}}(p+ikz)=a_{(k)}$
we can perform symbolic particle quantum field theory.
The reader can find an excellent introduction to these concepts in Dirac [24]
and many other books e.g. [42]. We merely mention that there are bosonic
commutator relationships and fermion commutator relationships such that if the
operators commute, they are of the same type or they belong to different
states.
And if there are commutators involving creation and destruction operators in
the same state, we obtain a non-zero result with Dirac deltas times $\hbar$.
So what we are doing is “second quantizing” complex variable theory. We have
also considered setting up a Feynman path integral formulation as well [29].
However, we made more progress with the particle methods introduced here.
The next idea is to rewrite the above integral in “new” coordinates using
these operators to obtain something that looks like the following
$\int\frac{a_{0}^{s}}{a_{0}a_{1}a_{2}...a_{n}a_{n+1}}da_{0}.$
Hence, we have a highly interacting finite particle QFT type of analysis to
consider. And we have the task to put all of this on a rigorous foundation.
Our paper started some of this; however, NCG seems to have already created
most of the tools for moving forward in this direction. So our strategy is to
combine NCG methods with our own methods to achieve the desired results.
Now Theorem 2 shows there should be a relationship of the above integral to a
finite approximate Hasse eta function i.e., local eta
$\zeta^{H}_{n}(s)=\eta_{n}(s)=\sum^{n}_{k=0}(-1)^{k}\binom{n}{k}(k+1)^{-s}$
which we will now interpret using concepts as foundation from Theorem 3 and
Theorem 4. Recall that Theorem 3 gives us the right to pursue
symbolic/algebraic methods in our computations. However, we only sketch the
details in this paper.
The above operator integral is replaced by the more “gentle” quantum
integral
$\eta_{n}^{q}(s)=\int_{\gamma}a_{0}^{s}b_{0}b_{1}...b_{n}b_{n+1}da_{0}.$
We then introduce some cyclic homology concepts from Loday [46] and call the
integrand the target complex
$K_{n}=a_{0}^{s}b_{0}b_{1}...b_{n}b_{n+1}$
which is to be analyzed using insertion and deletion operators with respect to
a well defined algebra of appropriately defined a’s and b’s. We then define
the sampling complex as follows
$\gamma_{n}=[a_{0},a_{1},...,a_{n},a_{n+1}]$
with the following preliminary definition of the differential $da_{0}$ at a
sample point $a_{k}$ as
$da_{0}=a_{k}.$
Then we proceed to define integration of these operators, with respect to an
appropriately defined vacuum, as follows
$\gamma_{n}K_{n}=(a_{0}da_{0})K_{n}+(a_{1}da_{0})K_{n}+..+(a_{n}da_{0})K_{n}+(a_{n+1}da_{0})K_{n}.$
The time-space thread or ”‘micro-quantum arrow of time”’ is chosen to be as
tight as possible energetically speaking, allowing only quantum fluctuations
not to exceed, for either the a’s or b’s, the absolute energy of
$k+1+\frac{1}{2}$
at a given time step $k=-n-1,-n,\ldots,-1,0,1,2,\ldots,n,n+1$. Positive energy is
equivalent to the number of particles in our discussion here [76].
The a’s and b’s are decomposed further into the form
$a_{k}=\frac{a^{k+1}}{\sqrt{(k+1)!}}$
and
$b_{k}=\frac{b^{k+1}}{\sqrt{(k+1)!}}$
with the usual commutator, as before,
$[b,a]=1.$
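To see why the $\sqrt{(k+1)!}$ normalization is natural, note that in the usual
Fock representation (used here only as our own illustration, not as part of the
construction) the vectors $a^{k+1}|0\rangle/\sqrt{(k+1)!}$ are exactly the unit
number states:

```python
# A small numpy illustration (ours): with the standard truncated creation matrix a,
# a|n> = sqrt(n+1)|n+1>, the vectors a**(k+1)|0> / sqrt((k+1)!) all have norm 1.
import numpy as np
from math import factorial, sqrt

N = 12
a = np.diag(np.sqrt(np.arange(1, N)), -1)   # creation operator on a truncated Fock space
vac = np.zeros(N); vac[0] = 1.0             # the vacuum |0>

for k in range(6):
    v = np.linalg.matrix_power(a, k + 1) @ vac / sqrt(factorial(k + 1))
    print(k, np.linalg.norm(v))             # each printed norm equals 1.0
```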
The given macro “arrow of time” is the standard one given by the complex
$[b_{n+1},b_{n},...,b_{0},a_{0},a_{1},..,a_{n},a_{n+1}]$
which is to be used in carrying out micro-quantum calculations and what we
call the “time-space thread” with respect to a well defined vacuum very
similar to the above complex but expressed in module form. Other micro-quantum
time-space threads are possible.
For example the vacuum,
$\ominus_{n}=\Psi b_{n+1}+\Psi b_{n}+...+\Psi
b_{0}+a_{0}\Psi+...+a_{n}\Psi+a_{n+1}\Psi$
has exactly $2^{n+1}$ typical “events” stored here in the form
$[(a_{0}\ \text{or}\ b_{0}),\ldots,(a_{n}\ \text{or}\ b_{n})]$
in the forward micro time steps $[0,1,..,n]$. We do not take
$(a_{n+1}\ \text{or}\ b_{n+1})$ as one of our typical events because it is at
$\infty$, c.f. Figure 2. The time-space thread runs vertically upward.
Figure 2: The time-space thread for $\ominus_{1}$, with nodes labeled $a_{0},a_{1},a_{2}$ and $b_{0},b_{1},b_{2}$ and an “Event Boundary”.
The b’s will interact with the a’s in the upper thread. Notice that we will
“induce” the b’s to sit on a given forward time step and that we do not
allow two different b’s or two different a’s to be there at that same “time”
for very long. Then we process all of these events and watch how they drop
from vacuum to vacuum, recording useful byproduct information, i.e., these are
the symbolic “photons”, until we reach the scalars or a useful stopping point
for further analysis. We plan on performing some sort of “quantum particle
fluid” analysis.
In the above algebra, we also use the peculiar quantum projective concept that
“zero” and “$\infty$” are equivalent, in some sense, as follows
$a_{n+1}\equiv b_{n+1}\equiv\infty\equiv 0\equiv a_{0}\equiv b_{0}.$
This holds because of the design of our vacuum.
Next, with these definitions, we plan on studying the following matrix
element, which, we believe, is analogous to cyclic cohomology constructs,
$<0,a_{0}^{s}|\eta_{n}^{q}(s)>.$
Note that a variety of additional correlations can also be studied as well.
The above will then be related to Hasse’s local eta of the form
$\eta_{n}(-s)=1-\binom{n}{1}2^{s}+\binom{n}{2}3^{s}-\cdots$
Thus we interpret the local etas as relative complex amplitudes and the global
eta as the superposition of the complex amplitudes of the local etas weighted
by the number of quantum states in the corresponding finite QFT.
The next step is to study micro-equilibrium conditions near
$s=\frac{1}{2}+it$, with $t$ real, and prove that the micro-equilibrium is
achieved for well defined t’s with only very small decay $id$ in $t$ as follows:
$t=t_{r}+id$
where $t_{r}$ is real and $d$ is real, i.e., these are the “proto-zeros” we
alluded to before.
In general the proto-zeros, we expect can be found, for a fixed $\sigma$, for
variable ”‘t”’ in $s=\sigma+it$ using simultaneous Diophantine approximation
methods [36] over the rationals by looking at the ratios of the logarithms of
the primes, which will be transcendentals [70], [21], [73] in the
corresponding interval for a given local eta. There will be an infinite number
of ”‘good”’ ”‘solutions”. Hence there will be an infinite number of proto-
zeros at any given local eta. The corresponding proto-zeros for large $n$ for
a given $s=\sigma+it$ should be good approximations to a global zero as well
[8].
In carrying out these computations we found it convenient to work with a local
Planck’s constant [48] defined by
$\hbar_{p}=\frac{\log(p)}{2\pi},$
which will measure the “resolving” power of a given local eta’s ability to
detect the largest prime in its corresponding lattice. The smallest
oscillatory time interval a local eta can resolve is given by
$\frac{1}{\hbar_{p}}.$
Hence a proto-zero for a given local eta, at a given time-space point, is when
Hasses’ local eta is as small as possible relative to a small domain
containing the point.
The meaning that we attach to a typical proto-zero is that the local quantum
prime counting sieving operator is delivering its assessment in the form of a
complex amplitude near the space time point $s=\sigma+it$ that it is
“satisfied”, within its resolving power, in counting the number of primes
less than or equal to $t$. A non-zero complex amplitude, with “decay”,
carries important information useful to the “collective” in determining a
global zero. So “iso-phase” [22] curves may be a more general concept to
consider. This now leads us to discussing some general technical strategies
for settling RH.
## 6 Overview of Technical Approaches to Settle RH
Method 1: Classical with Operator/QFT and NCG Inspired Insights (Local to
Global and Global to Local)
The idea is to prove that the “cloud” or “collective” of local proto-zeros
helps to determine the global zeros of $\eta(s)$ and conversely that the
global zero is the “center of gravity” of the local zeros.
So if the cloud of local zeros moves then the corresponding global zero must
move maintaining its center of gravity. We intend to prove that any given
global zero is determined by only a finite number of proto-zeros. We think
this can be handled by a suitable fixed point theorem.
We expect that the latter set of proto-zeros will not be exactly aligned on
the critical line and we expect some small ”‘decay”’ in each one will be
necessary to achieve this representation. We expect this because a given local
proto-zero by itself is ”‘too loud”’ and needs to be ”‘detuned”’ for the other
local etas to have their say.
We also intend to prove that the local etas are in micro-equilibrium on or
very near the critical line. So, if a global zero would not be on the critical
line, it would move the cloud into a disequilibrium state contradicting our
previous ”‘result”’. Note that Conrey reports that 99 per cent of the global
zeros are very close to the critical line [18]. In fact, Conrey proved that at
least 40 per cent of the global zeros are on the critical line [17].
Clearly with these steps rigorously established, we will have settled RH.
Possible exceptions would be handled by iterative bootstrapping methods
familiar to functional equation theorists [1].
We will use the operator NCG and QFT insights above coupled with the classical
methods working directly with the local eta functions. Hence, we expect to
obtain rigorous results independently of NCG or QFT.
Note: Our Theorems 1 and 2 gave some important clues into the structure of the
local etas, but the reader is undoubtedly aware that Hasse’s result:
$\eta(s)=\frac{\eta_{0}(s)}{2}+\frac{\eta_{1}(s)}{2^{2}}+\frac{\eta_{2}(s)}{2^{3}}+....$
can be proved using elementary methods with the Binomial Transform and that
this amounts to a renormalization of $\eta(s).$
Therefore, we plan on staying, in this approach, as elementary as possible in
the proofs.
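This decomposition is also easy to confirm numerically. The following sketch
(our own illustration, with an arbitrary truncation parameter) computes the
local etas from their defining sums and recombines them, matching classical
values of $\eta(s)=(1-2^{1-s})\zeta(s)$.

```python
# Numerical check (ours) of Hasse's recombination of the local etas:
# eta_n(s) = sum_{k=0}^{n} (-1)**k * C(n, k) * (k+1)**(-s),
# eta(s)  ~= sum_{n=0}^{N} eta_n(s) / 2**(n+1).
from math import comb, pi

def eta_local(n, s):
    return sum((-1) ** k * comb(n, k) * (k + 1) ** (-s) for k in range(n + 1))

def eta_hasse(s, N=100):
    return sum(eta_local(n, s) / 2 ** (n + 1) for n in range(N + 1))

print(eta_hasse(2.0), pi ** 2 / 12)                 # both ~ 0.822467
print(abs(eta_hasse(complex(0.5, 14.134725))))      # small: near the first nontrivial zero
```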
On the other hand, in a complementary fashion, using similar concepts:
Method 1’: NCG Methods (Local to Global and Global to Local) We would try to
obtain rigorous results using the NCG/QFT concepts that we outlined in this
paper while implementing Method 1 above in parallel. We would try to utilize
the most advanced methods that we can understand using Connes’ and his
collaborators’ techniques or others’ methods e.g. Conrey’s and his
collaborator’s methods.
Method(s) 2: NCG Methods Applied to Previous Techniques For example, we would
try to obtain local theta functions relationships using the ideas of this
paper and re-examine Hardy’s method etc., including the author’s slow
continued fractions method in the fundamental equation of information theory
using ergodic theory [19].
The following section now outlines such an approach with ergodic theory and
Euler factors.
## 7 Approach to RH via Euler Factors
We sketch a Euler Factor approach to RH using Weyl algebra creation operators
to analyze the quantum behavior of the integers as one pushes up the critical
line near a macro zeta function zero. This research stems from the eta
functions[20] methods described in Sections 1-3 with Ornstein’s [57],[58]
gadget constructions from ergodic theory [74] plus the author’s [19] previous
research concerning the fundamental equation of information theory[2]. Maxim
and his collaborators [49] results provided impetus to move forward in this
direction.
We intend to show that there is a quantum $\chi_{q}$ symmetry, where
notationally, q is a label denoting ”‘quantum”, such that
$b_{(1)}^{s}a_{(1)}\zeta_{q}(s)\equiv\zeta_{q}(s)$
which causes the zero to snap onto $\sigma=\frac{1}{2}$ because
$\bar{\zeta}_{q}(s)\zeta_{q}(s)\equiv 1+2s(s-1)+2s(s-1)\equiv\eta(s),$
and by algebraic coherency [66], by ergodic flow construction, and by the
presence of three renormalized quantum fields “0”, “1”, “-1” with
appropriate phases [5].
We expect deeper results can be obtained by this process coupled and
contrasted with known methods e.g. Nakajima creation operator techniques for
Douady spaces [11] and Hilbert schemes [49] and the recent results of Ngo
[56].
### 7.1 Introduction of Weyl Algebra Creation Operators: One for Each Non-
Negative Integer
First recall the classical Euler’s result
$\zeta(s)=1+\frac{1}{2^{s}}+\frac{1}{3^{s}}+\frac{1}{4^{s}}+\cdots=\prod_{p=primes}\frac{1}{1-\frac{1}{p^{s}}}\qquad(\sigma>1).$
where $s=\sigma+it$ [26]. Next we need a quick review of Weyl algebras.
As noted in Sections 1-3, we introduce the corresponding Weyl algebras of
commutators [38]
$\displaystyle[b_{(k)},a_{(k)}]=1$
$\displaystyle[b_{(k)},a_{(k^{\prime})}]=0\qquad(k\neq k^{\prime})$
$\displaystyle[b_{(k)},b_{(k^{\prime})}]=0$
$\displaystyle[a_{(k)},a_{(k^{\prime})}]=0$
where the Weyl algebra is built up algebraically from a free algebra on the
symbols corresponding to the a’s and b’s and adjoining the two sided ideal
generated by the above relations.
For example, using our previous notations [19], the particle-wave algebra on a
and b is given by free algebra over the complex numbers modulo the two sided
ideal defined by the commutator condition
$\Psi=\frac{\mathbb{C}[a,b]}{([b,a]-1)}=A_{1}(\mathbf{C}).$
We have waves in particle-wave algebra $\Psi$ because of the
$``up^{\prime\prime}$ and $``down^{\prime\prime}$ characteristics of the
$a^{\prime}s$ and $b^{\prime}s$ with the complex exponentials giving the macro
time-space information, c.f. the next section.
Note: We rename these operators Weyl creation operators because of their close
association with Quantum Harmonic Oscillator theory.
#### 7.1.1 Review of Complex Quantum Oscillator Theory
It is useful to repeat some of the previous analysis on complex quantum
oscillator theory in this context.
The creation/destruction operators are defined as follows, using our paradigm
of replacing a linear polynomial over the complex numbers into a creation
operator as follows
$z+ik\stackrel{{\scriptstyle}}{{\rightarrow}}\frac{1}{\sqrt{2}}(p+ikz)=a_{(k)}=a$
and with
$z\stackrel{{\scriptstyle}}{{\rightarrow}}a_{(0)}=1$
where on the left side $z$ is a complex variable; k corresponds to complex
number i.e. a scalar with phase u and $k=|k|u$; and on the right side $p$ now
corresponds to a momentum operator in the Weyl algebra and is not to be
confused with primes $p$; and $z$ now corresponds to a position operator in
the Weyl algebra. Also we have
$a_{(-k)}=b.$
Using the standard commutator condition CCR (Canonical Commutator Relations)
[61]
$[p,z]=pz-zp=-i$
and the corresponding quantum harmonic oscillator mathematics [7] leads to the
condition with these definitions that the symmetrized product corresponds to
an energy of $k=|k|u$ where $u=e^{i\theta}$ because
$\displaystyle H_{(k)}=H=\frac{1}{2}(ab+ba)$ $\displaystyle
ab=H-\frac{1}{2}k\mbox{ the energy goes down by k/2}$ $\displaystyle
ba=H+\frac{1}{2}k\mbox{ the energy goes up by k/2}$ $\displaystyle
H=\frac{1}{2}(ab+ba)$ $\displaystyle[b,a]=ba-ab=k$
as one can verify by multiplying the corresponding products in our definitions
above. This is usually expressed in physics as
$[a,a^{*}]=1$
where $a=b$ and $a^{*}=a$ in our notation.
This shows that we will be scaling and normalizing all of our creation
operators definitions to satisfy
$[b,a]=1.$
For example
$\displaystyle
a\stackrel{{\scriptstyle}}{{\rightarrow}}u^{{-\frac{1}{2}}}|k|^{-\frac{1}{2}}a$
$\displaystyle
b\stackrel{{\scriptstyle}}{{\rightarrow}}u^{{-\frac{1}{2}}}|k|^{-\frac{1}{2}}b.$
If we want to find out the detailed energy, momentum, and position of a given
state, we will need to take that into account.
Remark : Thus an important intuitive concept here is that we create intrinsic
particles and that we attach energy and other particles in our information-
theoretic constructions to it by which we transport information locally and
globally as needed. Further the b’s correspond to gentle multiplicative
inverses i.e. $ba\equiv 1$ .
So the reader should be aware that all of our creation operators have been
scaled and normalized and that the ambiguity in the square roots have been
accounted for.
Note: What we have done is to write all the operators with the individual
momentum and position joined together and call these operators creation
operators.
### 7.2 Replace the non-negative integers with creation operators
Now in the classical Euler factor expression given above, replace with Weyl
creation operators, one for each non-negative integer $m$, to obtain a quantum
zeta and quantum Euler factors in the critical strip
$\zeta_{q}(s)=a_{(0)}+a^{s}_{(1)}+a^{s}_{(2)}+a^{s}_{(3)}+a^{s}_{(4)}..\equiv\prod_{p=primes}E_{(p)}^{q}(s)$
where
$\displaystyle a_{(0)}=1$ $\displaystyle
E_{(p)}^{q}(s)=a_{(0)}+a^{s}_{(1)}+a^{s}_{(p)}+a^{2s}_{(p)}+a^{3s}_{(p)}.....$
This holds almost formally in an abstract ring of power series sense. It gives
the arithmetic blending instructions needed later in our constructions.
Mathematical convergence and other issues will be addressed later. The
analysis is quite delicate and subtle. The notation $\equiv$ will be explained
later. It is an algebraic geometry condition analogous to the calculation of
the cotangent vectors in algebraic geometry i.e. $M/M^{2}$ [73],[51].
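For orientation, the purely classical identity that $\zeta_{q}(s)$ and the
factors $E^{q}_{(p)}(s)$ are modeled on can be checked numerically; the cutoffs
in the following sketch are our own illustrative choices.

```python
# Classical sanity check (ours): for sigma > 1 the truncated Euler product over
# primes below P approaches zeta(s), here at s = 2 where zeta(2) = pi**2 / 6.
from sympy import primerange
from math import pi

s, P = 2, 10_000
dirichlet_sum = sum(n ** (-s) for n in range(1, 200_000))
euler_product = 1.0
for p in primerange(2, P):
    euler_product *= 1.0 / (1.0 - p ** (-s))

print(dirichlet_sum, euler_product, pi ** 2 / 6)   # all close to 1.644934...
```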
Hence, we are automatically in a Noncommutative Geometry[45] setting with the
corresponding complications in the analysis. However, we now are in a very
advantageous situation because we have created particles and waves from which
we can watch and mold very delicate arithmetic, topological and algebraic
operations all together. The waves allow us to extend beyond classical
boundaries and to have the ability to be in many places at the same time.
So the basic idea is to quantum mechanically transport the real line together
with the integers onto the critical line using quantum field theory concepts.
In order for us to accomplish this, we will need to calculate a complex power
of a creation operator $a^{s}$ without using logarithms. The next section
briefly outlines the methodology for this calculation.
## 8 Calculation of $a^{s}$
Page 1 of Janos Kollar’s book [42] on resolving singularities gives the clue
how to proceed. The binomial expansion leads to
$a^{s}=(1+(a-1))^{s}=\sum^{\infty}_{k=0}(a-1)^{k}\binom{s}{k}$
where
$\binom{s}{k}=\frac{s}{k}\cdot\frac{s-1}{1}\cdot\frac{s-2}{2}\cdots\frac{s-(k-1)}{k-1}.$
Also observe that
$\frac{s-m}{m}=(-1)(1-\frac{s}{m})$
hence
$\binom{s}{k}=(-1)^{k-1}(\frac{s}{k})(1-\frac{s}{1}).....(1-\frac{s}{k-1}).$
Thus we have injected, through inclusion, the algebra based on the $a$ and $b$
into a power series in $a^{\prime}=a-1$ and $b^{\prime}=b-1$. Observe that
$[b-1,a-1]=[b^{\prime},a^{\prime}]=1$
which we think of as translation type or transitional creation operators [6].
This operator defines a direction of ”‘time”’ used by the observers. It is not
difficult to demonstrate that the energy of these operators, using symmetrized
products modulo the vacuum $\ominus_{0}=\Psi b+a\Psi$ is one more than the
energy of the original a and b confirming our intuition. One can easily see
this also because $-1=ii$ providing that $z$ is near the vacuum $1$. Thus we
have renormalized $a^{s}$.
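At the level of ordinary complex numbers (the scalar shadow of the operator
statement, offered only as our own sanity check) the same binomial expansion is
easy to verify: for $|x-1|<1$ the partial sums of $\sum_{k}\binom{s}{k}(x-1)^{k}$,
with the coefficients computed from the product formula above, converge to $x^{s}$.

```python
# Scalar-shadow check (ours) of the expansion a**s = sum_k binom(s, k) (a-1)**k:
# for a complex number x with |x - 1| < 1 the partial sums reproduce x**s.
def partial_sum(s, x, K=200):
    total, coeff = 0j, 1 + 0j                 # coeff holds binom(s, k), starting at k = 0
    for k in range(K + 1):
        total += coeff * (x - 1) ** k
        coeff *= (s - k) / (k + 1)            # binom(s, k+1) = binom(s, k) * (s - k)/(k + 1)
    return total

for s in [0.5, 0.5 + 3.0j]:
    print(partial_sum(s, 1.4), 1.4 ** s)      # the two columns agree
```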
###### Definition 9
Next define the following “quantum coherence” $|\cdot|_{q}$ functional as follows:
$|\sum_{k=0}^{\infty}a^{\prime
k}\binom{s}{k}|_{q}=|\sum_{k=0}^{\infty}\binom{s}{k}|$
where we have used the discrete topology functional on the operators and the
factors involving the scalars are analyzed using the classical norm defined on
the complex numbers.
This idea came from Serre’s treatise on coherent algebraic sheaf theory [66].
One can systematically extend this notion to more general arithmetic,
topological and algebraic situations. Next introduce the function
$\pi(s)=\sum_{k=0}^{\infty}\binom{s}{k}$
not to be confused with the $\pi(x)$ prime counting function. The convergence
behavior of this function will determine how to extend $a^{s}$ locally using a
theorem of the following type.
###### Theorem 5
The complex power of $a^{s}$ converges in the above sense if
$\displaystyle(\sigma-1)^{2}+t^{2}<1$
$\displaystyle\frac{1}{4}\leq\sigma\leq\frac{3}{4}$ $\displaystyle 0\leq
t\leq\frac{1}{2}$
We use the notation $\textit{F}_{a}$ to denote the above domain and the
companion domain $\textit{F}_{b}$. We call these domains Clifford Domains.
A proof (sketch) utilizes the fact that the complex norm, for positive
integers $k$,
$|1-\frac{s}{m}|^{1/2}<1\qquad{m=1,2,..,k-1}.$
and that any accumulating phases cannot produce a divergent harmonic series;
otherwise we can perform a rotation back onto the real line with a rotated
Clifford domain with its own critical strip which overlaps the original domain
producing a contradiction since it holds in the real case. Only finitely many
corrections are needed to handle possible divergences. To start the proof,
begin with finite 2-adic representations of $s=\sigma+it$ for example of the
form
$s=\frac{1}{2}-\frac{1}{2^{3}}+i(\frac{1}{2}-\frac{1}{2^{5}})$
to prove that there will be only a finite number of alternating type harmonic
series (etas) with given “locked” phases, i.e., a finite number of convergent
etas. By using a large enough power of 2, we can make all of the phases to lie
in a two dimensional lattice over the complex integers. We are exploiting the
fact that the result holds for the perfect phase cases e.g. $s=i\sigma$ and
with the angles
$\theta=\frac{2\pi}{2^{2n}}$
finish the proof using the local polynomial topology on the overlapped
domains.
To see a connection with Weyl algebra and Hilbert scheme methods consider, for
a given $s$ at a given time step $k$, the operators
$\displaystyle a_{(m)}=1-\frac{s}{m}\qquad(m=1,2,..,k-1)$ $\displaystyle
b_{(k)}=\frac{s}{k}\qquad\text{thus}$
$\displaystyle\binom{s}{k}=(-1)^{k-1}b_{(k)}a_{(1)}\cdots a_{(k-1)}\qquad\text{with}$
$\displaystyle a_{(m)}a_{(n)}=a_{(n)}+a_{(m)}+(-1)a_{(mn)}\qquad\text{furthermore}$
$\displaystyle a_{(p)}a_{(p)}=a^{2}_{(p)}\equiv(-1)a_{(p^{2})}$
Further study of this situation helps to understand the Weyl creation operator
method that we have been developing.
To calculate $a^{i}$ use
$\displaystyle a^{i/2}=b^{1/4}a^{1/4+i/2}$ $\displaystyle
a^{i}=a^{i/2}a^{i/2}.$
These calculations will help to decide when there are suitable stopping points
in the intermediate steps in the next constructions below.
## 9 Fusing (Folding) the Clifford Domains Up the Critical Strip: Squeeze,
Stretch and Tile then Blend.
First observe that the domains are one half of a quantum in the real direction
and one half of a quantum in the imaginary direction. Hence, they have the same
information content, i.e., an area of one quarter. So the plan is to maintain a
constant area of one quarter in all of our constructions. This is a basic
concept in ergodic theory [67] and information theory [31].
Now the idea is to construct the quantum symmetry $\chi_{q}$ which is
analogous to Riemann’s reflection functional equation symmetry $s\to 1-s$
defined as follows
$\displaystyle\chi_{q}\textit{F}_{a}=b^{s}a\textit{F}_{a}$
$\displaystyle\chi_{q}\textit{F}_{b}=a^{s}b\textit{F}_{b}$
This is the fold operation. Clearly this operation is fusing the two domains
together.
* •
Step 1 Fuse and fold the domains together.
* •
Step 2 Stop when the quantum symmetry stabilizes.
* •
Step 3 Squeeze the domains by 1/2 in the $\sigma$-direction toward the
critical line. For example, the mapping, interpreted into a and b language,
$\sigma\to\frac{3}{2}\sigma-\frac{1}{4}$ squeezes the left half domain by 1/2.
The right domain is carried out in a similar fashion using the reflection
symmetry.
* •
Step 4 Stretch the domains in the t-direction by similar transformations.
Note: The above preserves the information content of a given phase cell.
* •
Step 5 Tile to fill in the rest to obtain the expanded domains.
* •
Step 6 Repeat as necessary to achieve the desired degree of accuracy.
* •
Step 6.1 These operations effectively extend the operators $a^{s}$ and $b^{s}$
up the critical line and effectively transport the quantum symmetry $\chi_{q}$
along with it.
* •
Step 6.2 Furthermore, we require a s-fractional permutational scaling such
that
$\displaystyle
b^{Ns}\stackrel{{\scriptstyle}}{{\rightarrow}}b^{Ns}(N+1!)^{-s/2}=b_{(N)}^{s}\qquad{and}$
$\displaystyle b^{s}a^{1}a_{(m)}^{s}=m^{-s}a_{(m)}^{s}$
* •
Step 7 With this accomplished we study the behavior of $\zeta_{q}(s)$ near a
macro zero to show that
$b_{(1)}^{s}a_{(1)}\zeta_{q}(s)\equiv\zeta_{q}(s).$
These operations are carried out for all the operators, i.e., for each integer
and the unit translation operator $a^{\prime}=a_{(1)}-1$, which gives the
inherent quantum direction of “time”, with additional blending carried out
to properly express arithmetic information between the primes and the
integers.
We believe the above operations correspond to some sort of fractional
cohomological extension theory similar to the theory of Hilbert schemes [49].
Clearly, we are using ergodic theory concepts here to transfer information to
the particles.
## 10 Calculation of the products
$a_{(p^{e_{1}}_{1})}^{s}...a_{(p^{e_{n}}_{n})}^{s}$
Having made the above preparations, we expect to prove that the deviation for
an arithmetic product near a macro zero is
$d_{m}=a_{(p^{e_{1}}_{1})}^{s}...a_{(p^{e_{n}}_{n})}^{s}-a^{s}_{(m)}\equiv 0$
where
$m=p_{1}^{e_{1}}..p_{n}^{e_{n}}$
and where the above set of primes correspond to the set of participating
primes accounted for by the standard Riemann zeta theory less than or equal to
$t$ for $s=\sigma+it$ which we think corresponds to s-fractional strata from
Hilbert scheme theory. The equivalence $\equiv$ corresponds to the observer
modules of the form $M=\Psi b+a\Psi$.
### 10.1 Equilibrium Calculation
Next we expect to find the “trace” of $\zeta_{q}(s)$ as follows
$\bar{\zeta}_{q}(s)\zeta_{q}(s)\equiv 1+2s(s-1)+2s(s-1)\equiv\eta(s)$
where the notation $\bar{\zeta}_{q}$ means writing the expression in reverse
order, changing a’s into b’s and taking ordinary multiplicative inverses of
scalars, since, in the “renormalized”, “thickened” or “dressed” form,
$\zeta_{q}(s)$ decomposes into three quantum fields of appropriate phases
$\zeta_{q}(s)\equiv\tilde{a}^{s}_{(0)}+\tilde{a}^{s}_{(1)}+\tilde{b}^{s}_{(-1)}.$
So whatever symmetry was found in the former quantum field
$\tilde{a}^{s}_{(1)}$ must occur in the latter quantum field
$\tilde{b}^{s}_{(-1)}$ with reverse symmetry to maintain zero angular
momentum. The pole field $\tilde{a}^{s}_{(0)}$ has been smoothed. Any back
reactions will have been coherently stopped in our constructions.
This, if true, explains why the zeros are on the critical line.
To see the possible truth of this, observe the following calculation with the
cubic and higher terms ignored and then analyzed by the modules of the form
$M=\ominus_{0}=\Psi b+a\Psi$.
$\displaystyle b^{s}a^{s}=(1+s(b-1))(1+s(a-1))$ $\displaystyle
b^{s}a^{s}=1+s(b-1)+s(a-1)+s^{2}(b-1)(a-1)$ $\displaystyle b^{s}a^{s}\equiv
1-2s+s^{2}(ba-a-b+1)$ $\displaystyle b^{s}a^{s}\equiv 1-2s+s^{2}(ba+1)$
Hence we arrive at
$\displaystyle b^{s}a^{s}\equiv 1-2s+s^{2}(ba+1)$ $\displaystyle
b^{s}a^{s}\equiv 1-2s+s^{2}(ab+1+1)$ $\displaystyle b^{s}a^{s}\equiv
1-2s+2s^{2}$ $\displaystyle b^{s}a^{s}\equiv 1+2s(s-1)$
Also the observer modules $M=\ominus_{0}=\Psi b+a\Psi$ have to be adjusted to
account for the order of processing of these steps, i.e., “chirality” [4],
since we have to get rid of the additional “1’s” that are produced in two of
the steps that are being forced to be synchronized near the macro zero.
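This reduction can also be cross-checked in the familiar Fock representation,
where the vacuum expectation value $\langle 0|\cdot|0\rangle$ plays the role of
working modulo $M=\Psi b+a\Psi$ (terms beginning with $a$ or ending with $b$ are
killed). The sketch below is our own numerical illustration, not the algebraic
construction of the paper.

```python
# Numerical cross-check (ours): with a = creation, b = annihilation, [b, a] = 1,
# the vacuum matrix element of (1 + s(b - 1))(1 + s(a - 1)) equals 1 + 2 s (s - 1),
# matching the reduction of b**s a**s carried out above.
import numpy as np

N = 8
a = np.diag(np.sqrt(np.arange(1, N)), -1)   # creation:      a|n> = sqrt(n+1)|n+1>
b = a.T                                      # annihilation:  b a - a b = I (up to truncation)
I = np.eye(N)

for s in [0.3, 0.5 + 2.0j, 1.7 - 0.4j]:
    X = (I + s * (b - I)) @ (I + s * (a - I))
    print(s, X[0, 0], 1 + 2 * s * (s - 1))   # the last two entries agree
```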
## 11 Concluding Remarks
We believe that we have provided a second approach to RH, using a combination
of classical thinking with QFT[65] insights, with specific targeted potential
theorems that should explain why RH is true. The first approach was outlined
in Section 5 using the etas. Actually both approaches are linked together.
To further aid the reader’s intuition, it is helpful to think of the operators
$a_{(k)}$ as infinitesimals with the quantum ability to be in two places at
the same time i.e., between neighboring integers additively and
multiplicatively. It is helpful to think of the operators being decomposable
into the form
$a_{(k)}\approx a_{(1)}^{k}$
and that the translation operator
$a^{\prime}=a_{(1)}-1$
helps to extend the operators into a ”‘fractional”’ state in the critical
strip and beyond and provides the micro direction of time. So this is similar
to adele ring theory concepts in number theory [77],[52] except that the
integers can ”talk” to each other beyond quadratic reciprocity.
Further, it is useful to think of $a^{s}_{(k)}$ as a “quantum field”
associated with the integer $k$ at a given complex time $s$; and that the
algebra being constructed is the quantum field being analyzed in a given time-
space neighborhood, i.e., we are renormalizing or thickening the operators. It
helps to think that we are working along the sum of filtered products of a
given fixed time-space topological [59] space corresponding to a product of
two complex projective manifolds modulo a divisible group times a two
dimensional complex lattice. It helps to think that certain fluctuations from
a “vacuum” are being looked at very carefully by our observers, i.e., lower
order operators have precedence over quantum fluctuations. Also the observers
must change progressively as the constructions proceed.
The concept of “state-measurement” occurs only when the corresponding
operator “touches” the corresponding particles; one does not touch a
particle until it is advantageous to do so. Further, symmetrized products of
creation operators are the Hamiltonians that generate the fields.
Further sources of information include: Kahler Geometry [54],[33]; Elliptic
Functions [35],[41]; Algebraic Topology [69]; Functional Analysis [68];
Information Theory [31],[34]; Dynamical Systems [53],[44]; Cohomology [71];
L-functions and Langlands Program [13]; Cryptanalysis [43].
Note: Our citations here do not necessarily imply competency by the author.
This will be established through our proofs as the research is conducted.
## 12 Acknowledgments
Overall, I would like to acknowledge Eli Lubkin for the many conversations
dealing with physics over the years which have inspired some of my research
activities. And I would like to acknowledge some very helpful recent
conversations with L. Maxim and J. Cogdell dealing with the Euler factor
approach to RH.
## References
* [1] Aczel,J., _Lectures on Functional Equations and Their Applications in Mathematics, Science and Engineering_ , No. 19 Academic Press, New York, 1966.
* [2] Aczel,J. and Daroczy,Z., _On Measures of Information and Their Characterizations_ , Academic Press, 1975.
* [3] Becker,C.M., Brody,D.C., Jones,H.F., _Complex Extension of Quantum Mechanics_ ,arXiv:quant-ph/0208076v2, 2002, Phys. Rev. Lett. 89, 270401 (2002) [4 pages] .
* [4] Beilinson,A., Drinfeld,V.,_Chiral Algebras_ ,_http://www.math.uchicago.edu/ mitya/langlands.html_.
* [5] Binetruy,P., _Supersymmetry_ , Oxford Graduate Texts, 2007.
* [6] Boffi,G. and Buchsbaum,D., _Threading Homology Through Algebra:Selected Patterns_ , Clarendon Press, 2006.
* [7] Bohm,D., _Quantum Theory_ , Dover Publication, 1979.
* [8] Borwein,P., _An efficient algorithm for the Riemann zeta function_ , Constructive, experimental, and nonlinear analysis (Limoges, 1999), CMS Conf. Proc., vol. 27, Amer. Math. Soc., Providence, RI, 2000, pp. 29-34.
* [9] Borwein,P., Choi,S., Rooney B. et al., eds. (2008), _The Riemann Hypothesis: A Resource for the Afficionado and Virtuoso Alike_ , CMS Books in Mathematics, New York: Springer, 2008.
* [10] Cartan,E., _The Theory of Spinors_ , Dover Publications, New York, 1966.
* [11] Cataldo,M. and Migliorini,L.,_The Douady space of a complex surface_ , arXiv:math/9811159v1, 1998.
* [12] Cassels,J.W.S and Frohlich,A._,Algebraic Number Theory_ , Thompson Book Co., 1967.
* [13] Cogdell,J.,_L-functions and Functoriality_ , CIMPA lectures, Weihai, August, 2010(preprint available on http://www.math.osu.edu/ cogdell.1/).
* [14] Connes,A, _Noncommutative Geometry_ , Academic Press, Inc., San Diego,CA, 1994, also available at www.alainconnes.org.
* [15] Connes,A., Consani,C., Marcolli, M., _The Weil proof and the geometry of the adeles class space_ , “Algebra, Arithmetic and Geometry - Manin Festschrift”, Progress in Mathematics, Birkh user (2010); arXiv: 0703.392.
* [16] Connes A., Consani C., Marcolli M., _Noncommutative geometry and motives: the thermodynamics of endomotives_ , Advances in Mathematics, Vol.214 (2007) N.2, 761(831).
* [17] Conrey,J.B., _More than two-fifths of the zeros of the Riemann zeta-function are on the critical line_ , J.Reine Angew. Math., 399 (1989), 1-26.
* [18] Conrey,B., _The Riemann Hypothesis_ , Notices Amer. Math. Soc. 50 (2003), 341-353.
* [19] Diderrich,G.T., _Continued fractions and the fundamental equation of information theory_ , Aequationes Mathematicae 19(1979), 93-103.
* [20] Diderrich,G.T., Google site:_http://sites.google.com/site/gtdiderrich/_.
* [21] Diderrich,G.T.,_Local boundedness and the Shannon entropy_ , Information and Control, 1978, Vol. 36, No. 3, 292-308.
* [22] Derbyshire,J., _Prime Obsession_ , Joseph Henry Press, Washington, D,C, 2003 .
* [23] Dirac,P.A.M, _The Principles of Quantum Mechanics_ , 4th Edition, 1982, Oxford at the Clarendon Press, QC174.D5.
* [24] Dirac,P.A.M., _Directions in Physics_ , Wiley-Interscience Publication, New York, 1979.
* [25] Dyson,F., _Advanced Quantum Mechanics_ , World Scientific, 2007.
* [26] Edwards,H.M., _Riemann’s Zeta Function_ , Dover Edition 2001, First Published 1974 by Academic Press.
* [27] Ezawa,Z.F., _Quantum Hall Effects_ , 2nd Edition, World Scientific, 2008.
* [28] Feynman,R.P., _The Feynman Lectures on Physics_ , Vol.3, Addison-Wesley Publishing Co., 1966.
* [29] Feynman,R.P. and Hibbs,A.R., _Quantum Mechanics and Path Integrals_ , McGraw-Hill, New York, 1965.
* [30] Feynman,R.P., _The reason for antiparticles_ , in Elementary Particles and the Laws of Physics”, Cambridge Univ. Press (1987).
* [31] Gallager,R., _Information Theory and Reliable Communication_ , John Wiley and Sons, 1968.
* [32] Ginzburg,V.,_Lectures on D modules_ , (1998),_http://mechmath.org/books/1441_.
* [33] Goldberg,S, _Curvature and Homology_ , Dover Edition Revised, 1998.
* [34] Goldman,S., _Information_ , Dover Publications, 1968.
* [35] Hancock,H., _Lectures on the Theory of Elliptic Functions_ , Dover Edition 2004, First Published in 1958.
* [36] Hardy,G.H. and Wright,E.M., _An Introduction to the Theory of Numbers_ , Oxford at the Clarendon Press, 4th Edition,1960.
* [37] Hartillo,M. and Ucha,J., _A Computational Introduction to Weyl Algebra and D-Modules_ , http://cocoa.dima.unige.it/conference/cocoaviii/ucha.pdf.
* [38] Hartwig,J., _Generalized Weyl Algebras and Elliptic Quantum Groups_ , http://www.math.chalmers.se/Math/Research/Preprints/Doctoral/2008/2.pdf, Ph.D. Thesis, University of Gothenburg, Sweden, 2008.
* [39] Jones,V. and Moscovi,H., _Review of Noncommutative Geometry by Alain Connes_ , Bulletin of the American Mathematical Society,v.33, no.4, October 1996, pp.459-466.
* [40] Kaku,M., _Quantum Field Theory_ , Oxford University Press, 1993.
* [41] Koblitz,N., _Introduction to Elliptic Curves and Modular Forms_ , Springer-Verlag, 1984.
* [42] Kollar,J., _Lectures on the Resolutions of Singularities_ , Princeton University Press, 2007.
* [43] Koppensteiner,C., _Mathematical Foundations of Elliptic Curve Cryptography_ , Diploma Thesis, 2009.
* [44] Lapidus,M. and Frankenhuijsen,M., _Fractal Geometry, Complex Dimensions and Zeta Functions_ , Springer, 2006.
* [45] Laudal,O., _Noncommutative algebraic geometry_ , Source: Rev. Mat. Iberoamericana Volume 19, Number 2 (2003), 509-580.
* [46] Loday,J.L., _Cyclic Homology_ , Springer, Berlin, 1992.
* [47] Lubkin,Eli, _Private Communication_ , 1990-Present.
* [48] Lubkin,E., _Schrodinger’s Cat_ , International Journal of Theoretical Physics, Volume 18(1979), Issue 8, pp.519-600 .
* [49] Maxim,L., Cappell S., Ohmoto T., Schuermann J., Yokura S. , _Hilbert schemes of points via symmetric products_ , arXiv:1204.0473, submitted: 2 Apr 2012.
* [50] McMahon,D., _Quantum Field Theory Demystified_ , McGraw-Hill, 2008.
* [51] Milne,J., _Algebraic Geometry_ , http://www.jmilne.org/math/CourseNotes/ag.html, 2012.
* [52] Milne,J., _Algebraic Number Theory_ ,http://www.jmilne.org/math/CourseNotes/ant.html, 2012.
* [53] Milnor,J., _Dynamics in One Complex Variable_ , Annals of Mathematics, 3rd Edition, 2006.
* [54] Moroianu,A., _Lectures on Kahler Geometry_ , London Mathematical Society 69 (2007).
* [55] Nakajima,H., _Heisenberg algebra and Hilbert schemes of points on projective surfaces_ , Ann. Math. 145 (1997), 379-388.
* [56] Ngo,B.C., _Le lemme fondamental pour les algebres de Lie_ , Publications Mathematique de L’HES,2010, Volume 111, Number 1, pages 1-169.
* [57] Ornstein,D., _Bernoulli shifts with the same entropy are isomorphic_ , Advances in Math (4), 337-352, 1970.
* [58] Ornstein,D., _Ergodic Theory, Randomness, and Dynamical Systems_ , Yale University Press, 1974.
* [59] Page,W., _Topological Uniform Structures_ , Dover Publications, 1988.
* [60] Patterson, S. J.,_An introduction to the theory of the Riemann zeta-function_ , Cambridge Studies in Advanced Mathematics, 14, Cambridge University Press, 1988.
* [61] Petz,D. _The algebra of the canonical commutator relation_ , in http://www.math.bme.hu/ petz/CCR.pdf, Alfred Renyi Institue, Hungary, 2011.
* [62] Planat,M., Henry,E., _The arithmetic of 1/f noise in a phase locked loop_ , Frequency and Control Symposium and PDA Exhibition, IEEE International(2002), pp.739-743.
* [63] Riemann,B., _On the Number of Prime Numbers Less Than a Given Quantity_ , Montatsberiche der Berliner Akademie, November, 1859 Translated by David R. Wilkins, Preliminary Version December, 1998.
* [64] Sarnak,P., _Problems of the Millennium: The Riemann Hypothesis_ , Clay Mathematics Institute(2004).
* [65] Schweber,S., _QED and the Men Who Made It: Dyson, Feynman, Schwinger, and Tomonaga_ , Princeton University Press, 1994.
* [66] Serre,J.P., _Coherent Algebraic Sheaves_ ,http://math.berkeley.edu/ achinger/fac/fac.pdf, Translated by Piotr Achinger and Lukasz Krupa .
* [67] Shields,P., _The Theory of Bernoulli Shifts_ , The University of Chicago Press(1973).
* [68] Shilov,G., _Elementary Functional Analysis_ , Dover(1974).
* [69] Spanier,E., _Algebraic Topology_ , McGraw-Hill(1966).
* [70] Stolarsky,K.B., _Algebraic Numbers and Diophantine Approximation_ , Marcel Dekker, Inc., New York, 1974.
* [71] Stum,B.L., _Rigid Cohomology_ , Cambridge University Press, 2007.
* [72] Vakil,R., _Math 216:Foundations of Algebraic Geometry_ , http://math.stanford.edu/ vakil/216blog/, 2012.
* [73] Vinogradov,I.M., _Elements of Number Theory_ , Dover Publications, 1954.
* [74] Walters,P., _Ergodic Theory-Introductory Lectures_ , Springer-Verlag, 1975.
* [75] Weyl,H., _The Theory of Groups and Quantum Mechanics_ , Dover Publications, 1950.
* [76] Weyl,H., _Space-Time-Matter_ , Dover Publications, 1952.
* [77] Weiss,E., _Algebraic Number Theory_ , Dover, 1998 .
* [78] Wolfram,S., _http://mathworld.wolfram.com/RiemannZetaFunction.html_.
* [79] Youssef,S., _Quantum Mechanics as Complex Probability Theory_ , arXiv:hep-th/9307019v2,1994.
* [80] Youssef,S., _Exotic Probability Theories and Quantum Mechanics: References_ , http://physics.bu.edu/ youssef/quantum/.
* [81] Zee,A., _Quantum Field Theory in a Nutshell_ , Princeton University Press, Princeton and Oxford, 2nd Edition, 2010.
|
arxiv-papers
| 2013-01-07T15:22:47 |
2024-09-04T02:49:41.940900
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "George T. Diderrich",
"submitter": "George Diderrich Dr.",
"url": "https://arxiv.org/abs/1302.5033"
}
|
1302.5038
|
# On the relation between connectivity, independence and generalized
caterpillars
M. Pedramfar [email protected] M. Shokrian [email protected]
M. Tefagh [email protected] Department of Mathematical Sciences, Sharif
University of Technology, P.O. Box 11155-9415,
Tehran, I.R. IRAN
###### Abstract
A spanning generalized caterpillar is a spanning tree in which all vertices of
degree more than two are on a path. In this note, we find a relation between
the existence of a spanning generalized caterpillar and the independence and
connectivity numbers of a graph. We also point out an error in a “theorem”
in the paper “Spanning spiders and light-splitting switches” by L. Gargano et
al. in Discrete Math. (2004), and find a relation between another theorem
mentioned there and the existence of spanning generalized caterpillars.
###### keywords:
Spanning spider; Caterpillar; Spanning generalized caterpillar; Independence
Here, we consider only finite connected graphs without loops or multiple
edges. For standard graph-theoretic terminologies not explained in this note,
we refer to [1].
We denote the $\it{degree}$ of a vertex $x$ in a graph $G$ by $d_{G}(x)$, the
$\it{independence\ number}$ by $\alpha(G)$ and its $\it{connectivity\ number}$
by $\kappa(G)$. In a tree $T$, we call a vertex $v\in V(T)$ a $\it{branch\
vertex}$ when $d_{T}(v)>2$.
As defined in [2], a tree is called a $\it{generalized\ caterpillar}$ if all
vertices of degree more than two are on a path, or equivalently, a caterpillar
in which the hairs, the edges incident to the spine, are replaced by paths.
A motivation to define such a tree comes from the definition of $\it{spider}$,
i.e. a star in which hairs are replaced by paths. Obviously a generalized
caterpillar is also a generalization of spider.
In earlier results, it is shown that:
###### Theorem A.
_([3])_ The bandwidth problem is NP-complete for generalized caterpillars of
hair length at most $3$.
But we are interested in the conditions under which a graph has a spanning
generalized caterpillar (or for simplicity an SGC). In order to find some
sufficient conditions, we concentrated on the special case of spiders. In this
way we looked at the paper [4], where different aspects of the existence of
spanning spiders in a graph are discussed:
###### Theorem B.
_(Proposition 1 of[4])_ It is NP-complete to decide whether a given graph $G$
admits a spanning spider.
So finding necessary and sufficient conditions for the existence of SGC in a
graph does not seem to be an easy task! One of the results which was
particularly interesting for us is the following:
###### Theorem C.
_(Theorem 8 of[4])_ Let $G$ be a connected graph. Then $s(G)\leq
2\lceil\frac{\alpha(G)}{\kappa(G)}\rceil-2$, where $s(G)$ is the minimum
number of branch vertices in a spanning tree in $G$.
But we found that the following wrong theorem was used to prove Theorem C:
###### Theorem D.
_([4])_ Vertices of any graph G can be covered by at most
$\lceil\frac{\alpha(G)}{\kappa(G)}\rceil$ vertex disjoint paths.
We checked the reference [5] to which the authors have referred, and we could
not find that used result in there. Indeed, we have a counterexample to
“Theorem D”:
###### Counterexample to “Theorem D”.
Let $G=K_{m,2m}$. We have $\alpha(G)=2\kappa(G)=2m$ but we can not cover the
vertices of $G$ by at most two vertex disjoint paths $P$ and $Q$.
###### Proof.
If $P=\emptyset$, it means that $G$ has a Hamiltonian path, which is obviously
wrong. So $P$ and $Q$ are not empty. Since $G$ is connected there is an edge
$e$ which is incident with a vertex in $P$ and also a vertex in $Q$. Adding
this edge leads to a spanning tree $T$ which has all vertices of degree at
most $2$, except for at most two vertices of degree $3$. $G$ is bipartite, so
the edges of $T$ have exactly one vertex in the part with $m$ vertices. But we
have:
$\displaystyle|E(T)|\leq 2+2+\cdots+2+3+3=2(m-2)+6=2m+2.$
On the other hand, since $T$ is a spanning tree we know that
$|E(T)|=|V(G)|-1=3m-1$, which is greater than $2m+2$ for $m>3$. So Theorem D is
wrong. ∎
In fact, we have:
###### Theorem E.
_([6])_ Vertices of any graph G can be covered by at most
$\lceil\frac{\alpha(G)}{\kappa(G)}\rceil$ cycles.
But what is the relation of Theorem C to the existence of SGC? At first
glance, the relation between connectivity, independence and the existence of
certain kinds of spanning trees might not be obvious. But there exist many
theorems discussing this relation [7].
The following theorem answers the above question:
###### Theorem 1.
If $s(G)\leq\kappa(G)$, then $G$ has a spanning generalized caterpillar.
###### Proof.
We use the following well-known lemma to prove it.
###### Lemma.
_([1])_ In any graph $G$, any set with $\kappa(G)$ vertices can be covered by
a cycle.
The proof of the lemma given above can be found in [1], page 170. Now since
there exists a spanning tree $T$ with at most $\kappa(G)$ branch vertices, we
can add a cycle $C$ to the edges of $T$ so that in the new spanning subgraph
$T^{\prime}$ all vertices of degree more than $2$ are on that cycle. Now
delete one arbitrary edge from $C$, like $e$, and from each cycle in
$T^{\prime}$ delete one arbitrary edge which is not in $C-e$ (as $C-e$ is a
path we can do this). At last, we obtain a spanning tree in which all vertices
of degree more than $2$ are on a path : $C-e$. So we have an SGC. ∎
Now if we consider graphs for which
$\alpha(G)\leq\frac{(\kappa(G))^{2}+\kappa(G)}{2}$, then by Theorem C we have:
$\displaystyle s(G)\leq
2\lceil\frac{\frac{(\kappa(G))^{2}+\kappa(G)}{2}}{\kappa(G)}\rceil-2=\left\\{\begin{array}[]{ll}\kappa(G),&\mbox{
if }2|\kappa(G)\\\ \kappa(G)-1,&\mbox{ if }2|\kappa(G)+1.\end{array}\right.$
So as a corollary of Theorem 1, we can deduce that for all graphs with
$\alpha(G)\leq\frac{(\kappa(G))^{2}+\kappa(G)}{2}$ we have an SGC.
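The case analysis above amounts to the elementary identity
$2\lceil\frac{\kappa^{2}+\kappa}{2\kappa}\rceil-2=\kappa$ for even $\kappa$ and
$\kappa-1$ for odd $\kappa$, which can be confirmed mechanically (a trivial
check of ours):

```python
# A tiny check (ours) of the ceiling computation used in the corollary above.
from math import ceil

for k in range(1, 100):
    lhs = 2 * ceil((k * k + k) / 2 / k) - 2
    assert lhs == (k if k % 2 == 0 else k - 1)
print("identity verified for kappa = 1..99")
```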
Now we try to construct graphs which do not have SGC:
###### Theorem 2.
There exists a graph G having no spanning generalized caterpillar as in the
following construction:
1. 1.
Consider an independent set of vertices $S=\\{v_{1},\ldots,v_{m}\\}$.
2. 2.
Consider $m+2$ graphs $G_{i}=K_{2m+1,m}\ ,\ 1\leq i\leq m+2$. From each
$G_{i}$ select $m$ arbitrary vertices from the part with $2m+1$ vertices like
$\\{t_{i,1},\ldots,t_{i,m}\\}$ and for each $1\leq i\leq m+2$ add the edge
$t_{i,j}v_{r}$ for all $1\leq r,j\leq m$.
Figure 1: Cases $m=1$ and $m=2$
###### Proof.
We have shown the cases $m=1$ and $m=2$ in Figure 1. We show by contradiction
that G does not have an SGC. Suppose that we have an SGC like $T$. Consider
$P$, the set that contains all branch vertices. Let $A_{i}$ be the part with
$m$ vertices in $G_{i}$ and $B_{i}$ be the other part which has $2m+1$
vertices.
###### Claim 1.
For each $i$, $1\leq i\leq m+2$, we have $G_{i}\cap P\neq\emptyset$.
$\it{Proof\ of\ the\ claim\ 1.}$ If for some $i$ we have $G_{i}\cap
P=\emptyset$, by considering the definition of GC, it means that all vertices
of $G_{i}$ are covered by, let us say, $n$ vertex disjoint paths. And also,
each of these paths has at least one of its ends, in the set of vertices
$\\{t_{i,1},\ldots,t_{i,m}\\}$, because these are the only vertices which are
connected to the vertices outside of $G_{i}$. Hence we get $n\leq m$.
If we denote the paths by $\\{P_{1},\ldots,P_{n}\\}$, then obviously as the
graph $G_{i}$ is bipartite we have $|P_{u}\cap B|\leq|P_{u}\cap A|+1$, for
each $u$, $1\leq u\leq n$. So we obtain:
$\displaystyle 2m+1=|B|=\sum_{u=1}^{n}|P_{u}\cap
B|\leq\sum_{u=1}^{n}(|P_{u}\cap A|+1)=|A|+n\implies m+1\leq n,$
which is a contradiction with $n\leq m$. Therefore for each $i$, $1\leq i\leq
m+2$ we have $\ G_{i}\cap P\neq\emptyset$.
$\square$
Define $Q$ to be the shortest path in $T$ that contains all branch vertices.
Now we claim the following and the contradiction is immediate:
###### Claim 2.
We have more than $2m$ edges which are in $Q$ and also are incident to a
vertex in $S$.
$\it{Proof\ of\ the\ claim\ 2.}$ In the last claim we proved that for each
$i$, $1\leq i\leq m+2$ we have $G_{i}\cap Q\neq\emptyset$ . Obviously
$G_{i}\cap Q$ is a set of vertex disjoint paths. Take one of them like $L$.
Now we look at the ends of $L$, namely $x$ and $y$. There exist three cases:
* (i)
$x$ or $y$ (and not both) is at the end of $Q$.
* (ii)
both of $x,y$ are vertices of degree $2$ in $Q$.
* (iii)
$x,y$ are the ends of $Q$.
In the case (i), if $y$ is at the end of $Q$ then, since $x$ has degree $2$ in
$Q$, one of its two edges is in $E(G_{i})$ and the other is an edge between
$G_{i}$ and $S$. On the other hand, as $Q$ has two ends, this case can happen
for at most two $G_{i}$’s.
In the case (ii), both of the vertices are in the same situation as $x$ in the
previous case. So there are two edges between $G_{i}$ and $S$ which are in $Q$.
As for the case (iii), it can not happen, because $Q$ is connected and has
vertices in at least two different $G_{i}$’s. So there must exist an edge in
$Q$ but not in $L$ and incident to one of the vertices of $L$ so that $L$ can
be connected to other parts of $Q$ in other $G_{i}$’s.
Hence, by considering the fact that (i) happens at most two times, there exist
at least
$\displaystyle 2+2+\cdots+2+1+1=2(m+2-2)+2=2m+2>2m$
edges between the $G_{i}$’s and $S$ which are in $Q$.
$\square$
But $|S|=m$ and each vertex in $Q$ has degree at most $2$. As
$\frac{2m+2}{|S|}>2$, there is a vertex $v\in S\cap Q$ which has degree at
least $3$ in $Q$. Thus $G$ does not have any SGC. ∎
It is easy to see that $\alpha(G)=(2m+1)(m+2)$ and $\kappa(G)=m$. Hence, we
have proved that if $\alpha(G)\geq(2\kappa(G)+1)(\kappa(G)+2)$, then we may
not have SGC. We don’t know whether Theorem C is true or false, but if we can
construct ‘better’ graphs which have no SGC, i.e. a graph for which the ratio
$\frac{\alpha(G)}{\kappa(G)}$ is less than $\frac{\kappa(G)}{2}$, then we may
be able to find a counterexample for $s(G)\leq
2\lceil\frac{\alpha(G)}{\kappa(G)}\rceil-2$.
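For small $m$ the construction of Theorem 2 and the values
$\alpha(G)=(2m+1)(m+2)$, $\kappa(G)=m$ can also be verified by brute force. The
following networkx sketch (ours, with ad hoc vertex labels) does this for $m=1$;
for larger $m$ the exact independence-number computation quickly becomes
expensive.

```python
# Brute-force check (ours) of the Theorem 2 construction for m = 1:
# alpha(G) = (2m+1)(m+2) and kappa(G) = m.
import networkx as nx
from itertools import product

def theorem2_graph(m):
    G = nx.Graph()
    S = [("s", r) for r in range(m)]               # the independent set S
    G.add_nodes_from(S)
    for i in range(m + 2):                          # m + 2 copies of K_{2m+1, m}
        B = [("B", i, x) for x in range(2 * m + 1)]
        A = [("A", i, x) for x in range(m)]
        G.add_edges_from(product(B, A))
        G.add_edges_from(product(B[:m], S))         # edges t_{i,j} v_r
    return G

m = 1
G = theorem2_graph(m)
alpha = max(len(c) for c in nx.find_cliques(nx.complement(G)))   # independence number
print(alpha, (2 * m + 1) * (m + 2))    # both 9
print(nx.node_connectivity(G), m)      # both 1
```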
At last, we provide a bound for $\alpha(G)$ in terms of $\kappa(G)$ which
guarantees the existence of SGC:
###### Theorem 3.
If $\alpha(G)\leq 2\kappa(G)+1$, then $G$ has a spanning generalized
caterpillar.
###### Proof.
Define $V_{3}^{T}=\\{v|v\in T,d_{T}(v)=3\\}$ and let a $k-$tree be a tree with
$\Delta(T)\leq k$. We use the following result which appears in [8]:
###### Theorem F.
_([8])_ If $\alpha(G)\leq\kappa(G)+n+1$, where $0\leq n\leq\kappa(G)$, then we
have a spanning $3-$tree $T$ with $|V_{3}^{T}|\leq n$.
By using Theorem F for the case $n=\kappa(G)$ and Theorem 1 we have an SGC.
Since before adding the cycle $C$ mentioned in Theorem 1, the degree of each
vertex was at most $3$, we get an SGC with maximum degree at most $5$. ∎
## Acknowledgements
The authors greatly appreciate Prof. E.S. Mahmoodian for his corrections and
many valuable comments. We also would like to acknowledge Prof. S. Akbari for
introducing this problem to us.
## References
* [1] D. B. West, Introduction to graph theory, Prentice-Hall, Inc, United States of American, 2001.
* [2] A. M. S. Shrestha, S. Tayu, S. Ueno, Bandwidth of convex bipartite graphs and related graphs, in: Proceedings of the 17th Annual International Conference on Computing and Combinatorics, COCOON’11, Springer-Verlag, Berlin, Heidelberg, 2011, pp. 307–318.
URL http://dl.acm.org/citation.cfm?id=2033094.2033122
* [3] B. Monien, The bandwidth minimization problem for caterpillars with hair length 3 is np-complete, SIAM J. Algebraic Discrete Methods 7 (4) (1986) 505–512. doi:10.1137/0607057.
URL http://dx.doi.org/10.1137/0607057
* [4] L. Gargano, M. Hammar, P. Hell, L. Stacho, U. Vaccaro, Spanning spiders and light-splitting switches, Discrete Math. 285 (1-3) (2004) 83–95. doi:10.1016/j.disc.2004.04.005.
URL http://dx.doi.org/10.1016/j.disc.2004.04.005
* [5] V. Chvátal, P. Erdős, A note on Hamiltonian circuits, Discrete Math. 2 (1972) 111–113.
* [6] M. Kouider, Cycles in graphs with prescribed stability number and connectivity, J. Combin. Theory Ser. B 60 (2) (1994) 315–318. doi:10.1006/jctb.1994.1023.
URL http://dx.doi.org/10.1006/jctb.1994.1023
* [7] K. Ozeki, T. Yamashita, Spanning trees: a survey, Graphs Combin. 27 (1) (2011) 1–26. doi:10.1007/s00373-010-0973-2.
URL http://dx.doi.org/10.1007/s00373-010-0973-2
* [8] M. Tsugaki, A note on a spanning 3-tree, Combinatorica 29 (1) (2009) 127–129. doi:10.1007/s00493-009-2349-x.
URL http://dx.doi.org/10.1007/s00493-009-2349-x
|
arxiv-papers
| 2013-02-20T17:16:35 |
2024-09-04T02:49:41.950861
|
{
"license": "Public Domain",
"authors": "M. Pedramfar, M. Shokrian, M. Tefagh",
"submitter": "Mojtaba Shokrian",
"url": "https://arxiv.org/abs/1302.5038"
}
|
1302.5053
|
# Parabolic Littlewood-Paley inequality for $\phi(-\Delta)$-type operators and
applications to Stochastic integro-differential equations
Ildoo Kim Department of Mathematics, Korea University, 1 Anam-dong, Sungbuk-
gu, Seoul, 136-701, Republic of Korea [email protected] , Kyeong-Hun Kim
Department of Mathematics, Korea University, 1 Anam-dong, Sungbuk-gu, Seoul,
136-701, Republic of Korea [email protected] and Panki Kim Department
of Mathematical Sciences and Research Institute of Mathematics, Seoul National
University, Building 27, 1 Gwanak-ro, Gwanak-gu Seoul 151-747, Republic of
Korea. [email protected]
###### Abstract.
In this paper we prove a parabolic version of the Littlewood-Paley inequality
(1.3) for the operators of the type $\phi(-\Delta)$, where $\phi$ is a
Bernstein function. As an application, we construct an $L_{p}$-theory for the
stochastic integro-differential equations of the type
$du=(-\phi(-\Delta)u+f)\,dt+g\,dW_{t}$.
###### Key words and phrases:
Parabolic Littlewood-Paley inequality, Stochastic partial differential
equations, Integro-differential operators, Lévy processes, Estimates of
transition functions
###### 2010 Mathematics Subject Classification:
42B25, 26D10, 60H15, 60G51, 60J35
The research of the second author was supported by Basic Science Research
Program through the National Research Foundation of Korea(NRF) funded by the
Ministry of Education, Science and Technology (20120005158)
The research of the third author was supported by Basic Science Research
Program through the National Research Foundation of Korea(NRF) grant funded by
the Korea government(MEST)(2011-000251)
## 1\. Introduction
The operators we are considering in this article are certain functions of the
Laplacian. To be more precise, recall that a function
$\phi:(0,\infty)\to(0,\infty)$ such that $\phi(0+)=0$ is called a Bernstein
function if it is of the form
$\phi(\lambda)=b\lambda+\int_{(0,\infty)}(1-e^{-\lambda
t})\,\mu(dt)\,,\quad\lambda>0\,,$
where $b\geq 0$ and $\mu$ is a measure on $(0,\infty)$ satisfying
$\int_{(0,\infty)}(1\wedge t)\,\mu(dt)<\infty$, called the Lévy measure. By
Bochner’s functional calculus, one can define the operator
$\phi(\Delta):=-\phi(-\Delta)$ on $C_{b}^{2}({\mathbb{R}}^{d})$, which turns
out to be an integro-differential operator
$b\Delta f(x)+\int_{{\mathbb{R}}^{d}}\left(f(x+y)-f(x)-\nabla f(x)\cdot
y{\mathbf{1}}_{\\{|y|\leq 1\\}}\right)\,J(y)\,dy\,,$ (1.1)
where $J(x)=j(|x|)$ with $j:(0,\infty)\to(0,\infty)$ given by
$j(r)=\int_{0}^{\infty}(4\pi t)^{-d/2}e^{-r^{2}/(4t)}\,\mu(dt)\,.$
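As a concrete illustration (a numerical sketch of ours, not needed for the
arguments below), one can check the Bernstein representation directly for the
stable case $\phi(\lambda)=\lambda^{1/2}$, whose Lévy measure is
$\mu(dt)=\frac{1/2}{\Gamma(1/2)}t^{-3/2}\,dt$:

```python
# Numerical check (ours) of the Bernstein representation for phi(lam) = lam**0.5:
# integrating (1 - exp(-lam*t)) against the Levy density (g/Gamma(1-g)) t**(-1-g),
# g = 1/2, recovers lam**g (the drift b is 0 here).
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

g = 0.5
levy_density = lambda t: g / gamma(1 - g) * t ** (-1 - g)

for lam in [0.5, 1.0, 4.0]:
    val, _ = quad(lambda t: (1 - np.exp(-lam * t)) * levy_density(t), 0, np.inf)
    print(lam, val, lam ** g)          # val matches lam**0.5
```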
It is also known that the operator $\phi(\Delta)$ is the infinitesimal
generator of the $d$-dimensional subordinate Brownian motion. Let
$S=(S_{t})_{t\geq 0}$ be a subordinator (i.e. an increasing Lévy process
satisfying $S_{0}=0$) with Laplace exponent $\phi$, and let $W=(W_{t})_{t\geq
0}$ be a Brownian motion in ${\mathbb{R}}^{d}$, $d\geq 1$, independent of $S$
with
${\mathbb{E}}_{x}\left[e^{i\xi\cdot(W_{t}-W_{0})}\right]=e^{-t{|\xi|^{2}}},\ \xi\in{\mathbb{R}}^{d},\ t>0$.
Then $X_{t}:=W_{S_{t}}$, called the subordinate Brownian motion, is a
rotationally invariant Lévy process in ${\mathbb{R}}^{d}$ with characteristic
exponent $\phi(|\xi|^{2})$, and for any $f\in C^{2}_{b}(\mathbb{R}^{d})$
$\phi(\Delta)f(x)=\lim_{t\to 0}\frac{1}{t}[\mathbb{E}_{x}f(X_{t})-f(x)].$
(1.2)
For instance, by taking $\phi(\lambda)=\lambda^{\alpha/2}$ with
$\alpha\in(0,2)$, we get the fractional Laplacian
$\Delta^{\alpha/2}:=-(-\Delta)^{\alpha/2}$ which is the infinitesimal
generator of a rotationally symmetric $\alpha$-stable process in
${\mathbb{R}}^{d}$.
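For orientation we record the standard computation behind this example (the constants are not used elsewhere): the Lévy measure producing $\phi(\lambda)=\lambda^{\alpha/2}$ is $\mu(dt)=\frac{\alpha/2}{\Gamma(1-\alpha/2)}t^{-1-\alpha/2}\,dt$, since
$\int_{0}^{\infty}(1-e^{-\lambda t})\,t^{-1-\alpha/2}\,dt=\frac{\Gamma(1-\alpha/2)}{\alpha/2}\,\lambda^{\alpha/2},\qquad\lambda>0,$
and substituting this measure into the formula for $j$ gives $j(r)=c(d,\alpha)\,r^{-d-\alpha}$ for an explicit constant $c(d,\alpha)>0$. Hence (1.1) reduces to the familiar principal value representation of $\Delta^{\alpha/2}$ with kernel $c(d,\alpha)|y|^{-d-\alpha}$.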
In this article we prove a parabolic Littlewood-Paley inequality for
$\phi(\Delta)$:
###### Theorem 1.1.
Let $\phi$ be a Bernstein function, $T_{t}$ be the semigroup corresponding to
$\phi(\Delta)$ and $H$ be a Hilbert space. Suppose that $\phi$ satisfies
(H1)$:$ $\exists$ constants $0<\delta_{1}\leq\delta_{2}<1$ and $a_{1},a_{2}>0$
such that
$a_{1}\lambda^{\delta_{1}}\phi(t)\leq\phi(\lambda t)\leq
a_{2}\lambda^{\delta_{2}}\phi(t),\quad\lambda\geq 1,t\geq 1\,;$
(H2)$:$ $\exists$ constants $0<\delta_{3}\leq 1$ and $a_{3}>0$ such that
$\phi(\lambda t)\leq a_{3}\lambda^{\delta_{3}}\phi(t),\quad\lambda\leq 1,t\leq
1\,.$
Then for any $p\in[2,\infty),\,T\in(0,\infty)$ and $f\in
C_{0}^{\infty}(\mathbb{R}^{d+1},H)$,
$\displaystyle\int_{\mathbb{R}^{d}}\int_{0}^{T}[\int_{0}^{t}|\phi(\Delta)^{1/2}T_{t-s}f(s,\cdot)(x)|_{H}^{2}ds]^{p/2}dtdx\leq
N\int_{\mathbb{R}^{d}}\int_{0}^{T}|f(t,x)|^{p}_{H}~{}dtdx,$ (1.3)
where the constant $N$ depends only on $d,p,T,a_{i}$ and $\delta_{i}$
$(i=1,2,3)$.
(H1) is a condition on the asymptotic behavior of $\phi$ at infinity and it
governs the behavior of the corresponding subordinate Brownian motion $X$ for
small time and small space. (H2) is a condition about the asymptotic behavior
of $\phi$ at zero and it governs the behavior of the corresponding subordinate
Brownian motion $X$ for large time and large space. Note that it follows from
the second inequality in (H1) that $\phi$ has no drift, i.e., $b=0$ in (1.1).
It also follows from (H2) that $\phi(0+)=0$.
Using the tables at the end of [20], one can construct many explicit
examples of Bernstein functions satisfying (H1)–(H2). Here are a few of them:
* (1)
$\phi(\lambda)=\lambda^{\alpha}+\lambda^{\beta}$, $0<\alpha<\beta<1$;
* (2)
$\phi(\lambda)=(\lambda+\lambda^{\alpha})^{\beta}$, $\alpha,\beta\in(0,1)$;
* (3)
$\phi(\lambda)=\lambda^{\alpha}(\log(1+\lambda))^{\beta}$, $\alpha\in(0,1)$,
$\beta\in(0,1-\alpha)$;
* (4)
$\phi(\lambda)=\lambda^{\alpha}(\log(1+\lambda))^{-\beta}$, $\alpha\in(0,1)$,
$\beta\in(0,\alpha)$;
* (5)
$\phi(\lambda)=(\log(\cosh(\sqrt{\lambda})))^{\alpha}$, $\alpha\in(0,1)$;
* (6)
$\phi(\lambda)=(\log(\sinh(\sqrt{\lambda}))-\log\sqrt{\lambda})^{\alpha}$,
$\alpha\in(0,1)$.
For example, the subordinate Brownian motion corresponding to the example (1),
$\phi(\lambda)=\lambda^{\alpha}+\lambda^{\beta}$, is the sum of two independent
rotationally symmetric $2\alpha$- and $2\beta$-stable processes. Relabeling the
stable indices, such a process has characteristic exponent
$\Phi(\theta)=|\theta|^{\alpha}+|\theta|^{\beta}\,,\
\theta\in{\mathbb{R}}^{d}\,,\quad 0<\beta<\alpha<2\,,$
and infinitesimal generator
$-(-\Delta)^{\beta/2}-(-\Delta)^{\alpha/2}$.
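Conditions (H1) and (H2) are also immediate for this example; we record the verification for the reader's convenience, in the notation of (1) above. If $\phi(\lambda)=\lambda^{\alpha}+\lambda^{\beta}$ with $0<\alpha<\beta<1$, then for every $t>0$
$\lambda^{\alpha}\phi(t)\leq\phi(\lambda t)=\lambda^{\alpha}t^{\alpha}+\lambda^{\beta}t^{\beta}\leq\lambda^{\beta}\phi(t),\qquad\lambda\geq 1,$
and $\phi(\lambda t)\leq\lambda^{\alpha}\phi(t)$ for $\lambda\leq 1$, so (H1) holds with $\delta_{1}=\alpha$, $\delta_{2}=\beta$, $a_{1}=a_{2}=1$ and (H2) holds with $\delta_{3}=\alpha$, $a_{3}=1$.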
We remark here that relativistic stable processes satisfy (H1)–(H2) with
$\delta_{3}=1$: suppose that $\alpha\in(0,2)$, $m>0$ and define
$\phi_{m}(\lambda)=(\lambda+m^{2/\alpha})^{\alpha/2}-m.$
The subordinate Brownian motion corresponding to $\phi_{m}$ is a relativistic
$\alpha$-stable process on $\mathbb{R}^{d}$ with mass $m$ whose characteristic
function is given by
$\exp(-t((|\xi|^{2}+m^{2/\alpha})^{\alpha/2}-m)),\qquad\xi\in\mathbb{R}^{d}.$
The infinitesimal generator is $m-(m^{2/\alpha}-\Delta)^{\alpha/2}$.
Note that when $m=1$, this infinitesimal generator reduces to
$1-(1-\Delta)^{\alpha/2}$. Thus the $1$-resolvent kernel of the relativistic
$\alpha$-stable process with mass $1$ on $\mathbb{R}^{d}$ is just the Bessel
potential kernel. When $\alpha=1$, the infinitesimal generator reduces to the
so-called free relativistic Hamiltonian $m-\sqrt{-\Delta+m^{2}}$. The operator
$m-\sqrt{-\Delta+m^{2}}$ is very important in mathematical physics due to its
application to relativistic quantum mechanics. We emphasize that the present
article covers this case.
The parabolic Littlewood-Paley inequality (1.3) was first proved by Krylov
([14, 15]) for the case $\phi(\Delta)=\Delta$ with $N=N(p)$ depending only on
$p$. In this case, if $f$ depends only on $x$ and $H=\mathbb{R}$ then (1.3)
leads to the classical (elliptic) Littlewood-Paley inequality (cf. [23]):
$\int_{\mathbb{R}^{d}}\left(\int^{\infty}_{0}|\nabla
T_{t}f|^{2}dt\right)^{p/2}dx\leq N(p)\|f\|^{p}_{p},\quad\quad\forall\,\,f\in
L_{p}(\mathbb{R}^{d}).$
Recently, (1.3) was proved for the fractional Laplacian $\Delta^{\alpha/2}$,
$\alpha\in(0,2)$, in [2, 8]. Also, in [17] a similar result was proved for the
case $J=J(t,y)=m(t,y)|y|^{-d-\alpha}$ in (1.1), where $\alpha\in(0,2)$ and
$m(t,y)$ is a bounded smooth function satisfying $m(t,y)=m(t,y|y|^{-1})$ (i.e.
homogeneous of degree zero in $y$) and $m(t,y)>c>0$ on a set $\Gamma\subset S^{d-1}$
of positive Lebesgue measure. We note that even the case
$\phi(\lambda)=\lambda^{\alpha}+\lambda^{\beta}$ ($\alpha\neq\beta$) is not
covered in [17].
Our motivation for studying (1.3) is that it is the key estimate for the
$L_{p}$-theory of the corresponding stochastic partial differential equations.
For example, Krylov’s result ([14, 15]) for $\Delta$ is related to the
$L_{p}$-theory of the second-order stochastic partial differential equations.
Below we briefly explain the reason for this. See [9, 16] or Section 6 of this
article for more details. Consider the stochastic integro-differential
equation
$du=(\phi(\Delta)u+h)\,dt+\sum_{k=1}^{\infty}f^{k}dw^{k}_{t},\quad u(0,x)=0.$
(1.4)
Here $f=(f^{1},f^{2},\cdots)$ is an $\ell_{2}$-valued random function of
$(t,x)$, and $w^{k}_{t}$ are independent one-dimensional Wiener processes
defined on a probability space $(\Omega,P)$. Considering $u-w$, where
$w(t):=\int^{t}_{0}T_{t-s}h(s)ds$, we may assume $h=0$ (see Section 6). It
turns out that if $f=(f^{1},f^{2},\cdots)$ satisfies a certain measurability
condition, the solution of this problem is given by
$u(t,x)=\sum_{k=1}^{\infty}\int^{t}_{0}T_{t-s}f^{k}(s,\cdot)(x)dw^{k}_{s}.$
By the Burkholder-Davis-Gundy inequality,
$\displaystyle\mathbb{E}\int^{T}_{0}\|\phi(\Delta)^{1/2}u(t,\cdot)\|^{p}_{L_{p}}dt$
(1.5) $\displaystyle\leq$ $\displaystyle
N(p)\,\mathbb{E}\int^{T}_{0}\int_{\mathbb{R}^{d}}\left[\int^{t}_{0}|\phi(\Delta)^{1/2}T_{t-s}f(s,\cdot)(x)|^{2}_{\ell_{2}}ds\right]^{p/2}dxdt.$
Actually if $f$ is not random, then $u$ becomes a Gaussian process and the
reverse inequality of (1.5) also holds. Thus to prove $\phi(\Delta)^{1/2}u\in
L_{p}$ and to get a legitimate start of the $L_{p}$-theory of equation (1.4),
one has to estimate the right-hand side of (1.5) (or the left-hand side of
(1.3)). We will also see that (1.3) yields the uniqueness and existence of
equation (1.4) in certain Banach spaces.
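For completeness, the pointwise step behind (1.5) is the following routine application of the Burkholder-Davis-Gundy inequality (assuming $f$ is such that the stochastic integral above is well defined): for each fixed $(t,x)$,
$\mathbb{E}\,|\phi(\Delta)^{1/2}u(t,x)|^{p}\leq N(p)\,\mathbb{E}\left[\int_{0}^{t}\sum_{k=1}^{\infty}|\phi(\Delta)^{1/2}T_{t-s}f^{k}(s,\cdot)(x)|^{2}ds\right]^{p/2}=N(p)\,\mathbb{E}\left[\int_{0}^{t}|\phi(\Delta)^{1/2}T_{t-s}f(s,\cdot)(x)|^{2}_{\ell_{2}}ds\right]^{p/2};$
integrating over $(t,x)\in(0,T)\times\mathbb{R}^{d}$ and using Fubini's theorem gives (1.5).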
The key of our approach is estimating the sharp function $(v)^{\sharp}(t,x)$
of
$v(t,x):=[\int_{0}^{t}|\phi(\Delta)^{1/2}T_{t-s}f(s,\cdot)(x)|_{H}^{2}ds]^{1/2}$:
$(v)^{\sharp}(t,x):=\sup_{\mathcal{Q}\ni(t,x)}-\int_{\mathcal{Q}}|v-v_{\mathcal{Q}}|dtdx,$
(1.6)
where $v_{\mathcal{Q}}:=-\int_{\mathcal{Q}}v\;dxdt$ is the average of $v$ over
$\mathcal{Q}$ and the supremum is taken over all cubes $\mathcal{Q}$ containing $(t,x)$ of the type
$\mathcal{Q}_{c}(r,y):=(r-\phi(c^{-2})^{-1},r+\phi(c^{-2})^{-1})\times
B_{c}(y)$. We control $(v)^{\sharp}(t,x)$ in terms of the maximal functions of
$|f|_{H}$, and then apply Fefferman-Stein and Hardy-Littlewood theorems to
prove (1.3). The operators considered in [14, 8, 17] have simple scaling
properties, and so to estimate the mean oscillation
$-\int_{\mathcal{Q}}|v-v_{\mathcal{Q}}|dtdx$ in (1.6), it was enough to
consider the only case $\mathcal{Q}=\mathcal{Q}_{1}(0,0)$, that is the case
$c=1$ and $(r,y)=(0,0)$. However, in our case, due to the lack of the scaling
property, it is needed to consider the mean oscillation
$-\int_{\mathcal{Q}}|v-v_{\mathcal{Q}}|dtdx$ on every $Q_{c}(r,y)$ containing
$(t,x)$. This causes serious difficulties as can be seen in the proofs of
Lemmas 5.2–5.5. Our estimation of $-\int_{\mathcal{Q}}|v-v_{\mathcal{Q}}|dtdx$
relies on the upper bounds of $\phi(\Delta)^{n/2}D^{\beta}p(t,x)$, which are
obtained in this article. Here $\beta$ is an arbitrary multi-index,
$n=0,1,2,\cdots$ and $p(t,x)$ is the density of the semigroup $T_{t}$
corresponding to $\phi(\Delta)$.
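Schematically, once the mean oscillation estimates give $(v)^{\sharp}(t,x)\leq N\big(\mathbb{M}|f|^{2}_{H}(t,x)\big)^{1/2}$ for a suitable composition $\mathbb{M}$ of maximal functions (this is essentially the content of Lemmas 5.2-5.5), the Fefferman-Stein and Hardy-Littlewood theorems combine, for $p>2$, as
$\|v\|^{p}_{L_{p}}\leq N\|(v)^{\sharp}\|^{p}_{L_{p}}\leq N\big\|\mathbb{M}|f|^{2}_{H}\big\|^{p/2}_{L_{p/2}}\leq N\big\||f|^{2}_{H}\big\|^{p/2}_{L_{p/2}}=N\big\||f|_{H}\big\|^{p}_{L_{p}},$
which is (1.3); the case $p=2$ follows directly from Plancherel's theorem (see Lemma 5.1).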
The article is organized as follows. In Section 2 we give upper bounds of the
density $p(t,x)$. Section 3 contains various properties of Bernstein functions
and subordinate Brownian motions. In Section 4 we establish upper bounds of
the fractional derivatives of $p(t,x)$ in terms of $\phi$. Using these
estimates we give the proof of Theorem 1.1 in Section 5. In Section 6 we
apply Theorem 1.1 and construct an $L_{p}$-theory for equation (1.4).
We finish the introduction with some notation. As usual $\mathbb{R}^{d}$
stands for the Euclidean space of points $x=(x^{1},...,x^{d})$,
$B_{r}(x):=\\{y\in\mathbb{R}^{d}:|x-y|<r\\}$ and $B_{r}:=B_{r}(0)$. For
$i=1,...,d$, multi-indices $\beta=(\beta_{1},...,\beta_{d})$,
$\beta_{i}\in\\{0,1,2,...\\}$, and functions $u(x)$ we set
$u_{x^{i}}=\frac{\partial u}{\partial x^{i}}=D_{i}u,\quad
D^{\beta}u=D_{1}^{\beta_{1}}\cdot...\cdot
D^{\beta_{d}}_{d}u,\quad|\beta|=\beta_{1}+...+\beta_{d}.$
We write $u\in C^{\infty}_{0}(X,Y)$ if $u$ is a $Y$-valued infinitely
differentiable function defined on $X$ with compact support. By
$C^{2}_{b}(\mathbb{R}^{d})$ we denote the space of twice continuously
differentiable functions on $\mathbb{R}^{d}$ with bounded derivatives up to
order $2$. We use “$:=$” to denote a definition, which is read as “is defined
to be”. We denote $a\wedge b:=\min\\{a,b\\}$, $a\vee b:=\max\\{a,b\\}$. If we
write $N=N(a,\ldots,z)$, this means that the constant $N$ depends only on
$a,\ldots,z$. The constant $N$ may change from location to location, even
within a line. By $\mathcal{F}$ and $\mathcal{F}^{-1}$ we denote the Fourier
transform and the inverse Fourier transform, respectively. That is, for a
suitable function $f$,
$\mathcal{F}(f)(\xi):=\int_{\mathbb{R}^{d}}e^{-ix\cdot\xi}f(x)dx$ and
$\mathcal{F}^{-1}(f)(x):=\frac{1}{(2\pi)^{d}}\int_{\mathbb{R}^{d}}e^{i\xi\cdot
x}f(\xi)d\xi$. Finally, for a Borel set $A\subset\mathbb{R}^{d}$, we use $|A|$
to denote its Lebesgue measure.
## 2\. Upper bounds of $p(t,x)$
In this section we give upper bounds of the density $p(t,x)$ of the semigroup
$T_{t}$ corresponding to $\phi(\Delta)$. We give the result in a slightly
more general setting. We will assume that $Y$ is a rotationally symmetric Lévy
process with Lévy exponent $\Psi_{Y}(\xi)$. Because of rotational symmetry,
the function $\Psi_{Y}$ is positive and depends on $|\xi|$ only. Accordingly,
by a slight abuse of notation we write $\Psi_{Y}(\xi)=\Psi_{Y}(|\xi|)$ and get
${\mathbb{E}}_{x}\left[e^{i\xi\cdot(Y_{t}-Y_{0})}\right]=e^{-t\Psi_{Y}(|\xi|)},\quad\quad\mbox{
for every }x\in{\mathbb{R}}^{d}\mbox{ and }\xi\in{\mathbb{R}}^{d}.$ (2.1)
We assume that the transition probability $\mathbb{P}(Y_{t}\in dy)$ is
absolutely continuous with respect to Lebesgue measure in ${\mathbb{R}}^{d}$.
Thus there is a function $p_{Y}(t,r)$, $t>0,r\geq 0$ such that
$\mathbb{P}(Y_{t}\in dy)=p_{Y}(t,|y|)dy.$
Note that $r\to\Psi_{Y}(r)$ and $r\to p_{Y}(t,r)$ may not be monotone in
general. We first consider the following mild condition on $\Psi_{Y}$.
(A1): There exists a positive function $h$ on $[0,\infty)$ such that for every
$t,\lambda>0$
${\Psi_{Y}(\lambda t)}/{\Psi_{Y}(t)}\leq
h(\lambda)\quad\text{and}\quad\int_{0}^{\infty}e^{-r^{2}/2}r^{d-1}h(r)dr<\infty.$
Note that by Lemma 3.1 below, (A1) always holds with
$h(\lambda)=1\vee\lambda^{2}$ for every subordinate Brownian motion. Moreover,
by [7, Lemma 3 and Proposition 11], (A1) always holds with
$h(\lambda)=24(1+\lambda^{2})$ for any rotationally symmetric unimodal Lévy
process (i.e., $r\to p_{Y}(t,r)$ is decreasing for all $t>0$).
Recall that
$e^{-|z|^{2}}=(4\pi)^{-d/2}\int_{{\mathbb{R}}^{d}}e^{i\xi\cdot
z}e^{-|\xi|^{2}/4}d\xi.$
Using this and (2.1) we have for $\lambda>0$
$\displaystyle{\mathbb{E}}_{0}[e^{-\lambda|Y_{t}|^{2}}]=(4\pi)^{-d/2}\int_{{\mathbb{R}}^{d}}{\mathbb{E}}_{0}[e^{i\sqrt{\lambda}\xi\cdot
Y_{t}}]e^{-|\xi|^{2}/4}d\xi$
$\displaystyle=(4\pi)^{-d/2}\int_{{\mathbb{R}}^{d}}e^{-t\Psi_{Y}(\sqrt{\lambda}|\xi|)}e^{-|\xi|^{2}/4}d\xi.$
(2.2)
Thus
$\displaystyle{\mathbb{E}}_{0}[e^{-\lambda|Y_{t}|^{2}}-e^{-2\lambda|Y_{t}|^{2}}]=(4\pi)^{-d/2}\int_{{\mathbb{R}}^{d}}(e^{-t\Psi_{Y}(\sqrt{\lambda}|\xi|)}-e^{-t\Psi_{Y}(\sqrt{2\lambda}|\xi|)})e^{-|\xi|^{2}/4}d\xi.$
(2.3)
For $t,\lambda>0$, let
$g_{t}(\lambda):=\int_{0}^{\infty}(e^{-t\Psi_{Y}(\sqrt{\lambda}r)}-e^{-t\Psi_{Y}(\sqrt{2\lambda}r)})e^{-r^{2}/4}r^{d-1}dr,$
which is positive by (2.3).
###### Lemma 2.1.
Suppose that (A1) holds. Then there exists a constant $N=N(h,d)$ such that for
every $t,v>0$
$g_{t}(v^{-1})\leq Nt\Psi_{Y}(v^{-1/2}).$
###### Proof.
By (A1) we have
$\displaystyle\frac{1}{t\Psi_{Y}(v^{-1/2})}\leq\frac{\Psi_{Y}(\sqrt{2}v^{-1/2}r)+\Psi_{Y}(v^{-1/2}r)}{\Psi_{Y}(v^{-1/2})}\frac{1}{t|\Psi_{Y}(\sqrt{2}v^{-1/2}r)-\Psi_{Y}(v^{-1/2}r)|}$
$\displaystyle\leq$
$\displaystyle(h(\sqrt{2}r)+h(r))\frac{1}{t|\Psi_{Y}(\sqrt{2}v^{-1/2}r)-\Psi_{Y}(v^{-1/2}r)|}.$
Thus using the inequality $|e^{-a}-e^{-b}|\leq|a-b|$, $a,b>0$
$\displaystyle\frac{g_{t}(v^{-1})}{t\Psi_{Y}(v^{-1/2})}$ $\displaystyle\leq$
$\displaystyle\int_{0}^{\infty}\frac{|e^{-t\Psi_{Y}(v^{-1/2}r)}-e^{-t\Psi_{Y}(\sqrt{2}v^{-1/2}r)}|}{t|\Psi_{Y}(\sqrt{2}v^{-1/2}r)-\Psi_{Y}(v^{-1/2}r)|}e^{-r^{2}/4}r^{d-1}(h(\sqrt{2}r)+h(r))dr$
$\displaystyle\leq$
$\displaystyle\int_{0}^{\infty}e^{-r^{2}/4}r^{d-1}(h(\sqrt{2}r)+h(r))dr<\infty.$
Therefore the lemma is proved. $\Box$
Recall that $\mathbb{P}_{0}(Y_{t}\in dy)=p_{Y}(t,|y|)dy$. We now consider the
following mild condition on $p_{Y}(t,r)$.
(A2): For every $T\in(0,\infty]$, there exists a constant $c=c(T)>0$ such that
for every $t\in(0,T)$
$\displaystyle p_{Y}(t,r)\leq cp_{Y}(t,s)\qquad\forall r\geq s\geq 0.$ (2.4)
Obviously (2.4) always holds for all $t>0$ (with $c=1$) for any rotationally symmetric unimodal
Lévy process.
###### Theorem 2.2.
Suppose that $Y$ is a rotationally symmetric Lévy process with Lévy exponent
$\Psi_{Y}(\xi)$ satisfying (A1). Assume that $\mathbb{P}(Y_{t}\in
dy)=p_{Y}(t,|y|)dy$ and (A2) holds. Then for every $T>0$, there exists a
constant $N=N(T,c,d,h)>0$ such that
$p_{Y}(t,r)\,\leq\,N\,t\,r^{-d}\Psi_{Y}(r^{-1}),\quad(t,r)\in(0,T]\times[0,\infty).$
###### Proof.
Fix $t\in(0,T]$. For $r\geq 0$ define $f_{t}(r)=r^{d/2}p_{Y}(t,r^{1/2}).$ By
(A2), for $r\geq 0$,
$\displaystyle\mathbb{P}_{0}(\sqrt{r/2}<|Y_{t}|<\sqrt{r})=\int_{\sqrt{r/2}<|y|<\sqrt{r}}p_{Y}(t,|y|)dy$
$\displaystyle\geq|B(0,1)|(1-2^{-d/2})c^{-1}r^{d/2}p_{Y}(t,r^{1/2})=|B(0,1)|(1-2^{-d/2})c^{-1}f_{t}(r).$
(2.5)
Denoting by $\mathcal{L}f_{t}(\lambda)$ the Laplace transform of $f_{t}$, we have
$\displaystyle\mathcal{L}f_{t}(\lambda)\leq
N\int_{0}^{\infty}\mathbb{P}_{0}(\sqrt{r/2}<|Y_{t}|<\sqrt{r})e^{-\lambda
r}dr=N{\mathbb{E}}_{0}\int_{|Y_{t}|^{2}}^{2|Y_{t}|^{2}}e^{-\lambda r}dr$
$\displaystyle=N\lambda^{-1}{\mathbb{E}}_{0}[e^{-\lambda|Y_{t}|^{2}}-e^{-2\lambda|Y_{t}|^{2}}]=N\lambda^{-1}g_{t}(\lambda),\quad\lambda>0$
(2.6)
from (2.3).
Using (A2) again, we get that, for any $v>0$
$\displaystyle\mathcal{L}f_{t}(v^{-1})=$
$\displaystyle\int^{\infty}_{0}e^{-av^{-1}}f_{t}(a)\,da=v\int^{\infty}_{0}e^{-s}f_{t}\left(sv\right)ds$
$\displaystyle\geq$ $\displaystyle
v\int^{1}_{1/2}e^{-s}f_{t}\left(sv\right)ds=v\int^{1}_{1/2}e^{-s}s^{d/2}v^{d/2}p_{Y}\left(t,s^{1/2}v^{1/2}\right)ds$
$\displaystyle\geq$ $\displaystyle
c^{-1}v2^{-d/2}v^{d/2}p_{Y}\left(t,v^{1/2}\right)\int^{1}_{1/2}e^{-s}ds=c^{-1}2^{-d/2}vf_{t}\left(v\right)\left(\int^{1}_{1/2}e^{-s}ds\right).$
Thus
$\displaystyle f_{t}\left(v\right)\leq
c2^{d/2}\frac{v^{-1}\mathcal{L}f_{t}(v^{-1})}{e^{-1/2}-e^{-1}}.$ (2.7)
Now combining (2.6) and (2.7) with Lemma 2.1 we conclude
$p_{Y}(t,r)=r^{-d}f_{t}(r^{2})\leq Nr^{-d-2}\mathcal{L}f_{t}(r^{-2})\leq
Nr^{-d}g_{t}(r^{-2})\leq Ntr^{-d}\Psi_{Y}(r^{-1}).$
$\Box$
## 3\. Bernstein functions and subordinate Brownian motion
Let $S=(S_{t}:t\geq 0)$ be a subordinator, that is, an increasing Lévy process
taking values in $[0,\infty)$ with $S_{0}=0$. A subordinator $S$ is completely
characterized by its Laplace exponent $\phi$ via
${\mathbb{E}}[\exp(-\lambda S_{t})]=\exp(-t\phi(\lambda))\,,\quad\lambda>0.$
The Laplace exponent $\phi$ can be written in the form (cf. [1, p. 72])
$\displaystyle\phi(\lambda)=b\lambda+\int_{0}^{\infty}(1-e^{-\lambda
t})\,\mu(dt)\,.$ (3.1)
Here $b\geq 0$, and $\mu$ is a $\sigma$-finite measure on $(0,\infty)$
satisfying
$\int_{0}^{\infty}(t\wedge 1)\,\mu(dt)<\infty\,.$
We call the constant $b$ the drift and $\mu$ the Lévy measure of the
subordinator $S$.
A smooth function $g:(0,\infty)\to[0,\infty)$ is called a Bernstein function
if
$(-1)^{n}D^{n}g\leq 0,\quad\forall n\in\mathbb{N}.$
It is well known that a nonnegative function $\phi$ on $(0,\infty)$ is the
Laplace exponent of a subordinator if and only if it is a Bernstein function
with $\phi(0+)=0$ (see, for instance, Chapter 3 of [20]). By the concavity,
for any Bernstein function $\phi$,
$\phi(\lambda t)\leq\lambda\phi(t)\qquad\text{ for all }\lambda\geq 1,t>0\,,$
(3.2)
implying
$\frac{\phi(v)}{v}\leq\frac{\phi(u)}{u}\,,\quad 0<u\leq v\,.$ (3.3)
Clearly (3.2) implies the following.
###### Lemma 3.1.
Let $\phi$ be a Bernstein function. Then for all $\lambda,t>0$,
$1\wedge\lambda\leq{\phi(\lambda t)}/{\phi(t)}\leq 1\vee\lambda$.
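Indeed, for $\lambda\geq 1$ the upper bound is (3.2) and the lower bound follows from the monotonicity of $\phi$, while for $\lambda\leq 1$ one applies (3.2) with $\lambda^{-1}\geq 1$ to get $\phi(t)=\phi(\lambda^{-1}\cdot\lambda t)\leq\lambda^{-1}\phi(\lambda t)$, and the upper bound again follows from the monotonicity of $\phi$.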
The following will be used in Section 6 to control the deterministic part of
equation (1.4).
###### Lemma 3.2.
For each nonnegative integer $n$, there is a constant $N(n)$ such that for
every Bernstein function $\phi$ with drift $b=0$,
$\displaystyle\frac{\lambda^{n}|D^{n}\phi(\lambda)|}{\phi(\lambda)}\leq
N(n),\quad\forall\lambda>0.$ (3.4)
###### Proof.
The statement is trivial if $n=0$. So let $n\geq 1$. Due to (3.1),
$|D^{n}\phi(\lambda)|=\int_{0}^{\infty}t^{n}e^{-\lambda t}\mu(dt).$
Use $t^{n}e^{-t}\leq N(1-e^{-t})$ to conclude
$\displaystyle\lambda^{n}|D^{n}\phi(\lambda)|\leq\int_{0}^{\infty}(\lambda
t)^{n}e^{-\lambda t}\mu(dt)\leq N\int_{0}^{\infty}(1-e^{-\lambda t})\mu(dt).$
This obviously leads to (3.4). $\Box$
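As a purely illustrative check of (3.4), for $\phi(\lambda)=\lambda^{\alpha}$ with $\alpha\in(0,1)$ one has $\lambda^{n}|D^{n}\phi(\lambda)|=|\alpha(\alpha-1)\cdots(\alpha-n+1)|\,\lambda^{\alpha}$, so the ratio in (3.4) equals $|\alpha(\alpha-1)\cdots(\alpha-n+1)|\leq n!$ for every $\lambda>0$.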
Throughout this article, we assume that $\phi$ is a Bernstein function with
drift $b=0$ and $\phi(1)=1$. Thus
$\displaystyle\phi(\lambda)=\int_{0}^{\infty}(1-e^{-\lambda t})\,\mu(dt).$
Let $d\geq 1$ and $W:=(W_{t}:t\geq 0)$ be a $d$-dimensional Brownian motion
with $W_{0}=0$. Then
${\mathbb{E}}\left[e^{i\xi\cdot
W_{t}}\right]=e^{-t|\xi|^{2}},\qquad\forall\xi\in{\mathbb{R}}^{d},\,\,t>0$
and $W$ has the transition density
$q(t,x,y)=q_{d}(t,x,y)=(4\pi t)^{-d/2}e^{-\frac{|x-y|^{2}}{4t}}\,,\quad
x,y\in{\mathbb{R}}^{d},\ t>0\,.$
Let $X=(X_{t}:t\geq 0)$ denote the subordinate Brownian motion defined by
$X_{t}:=W_{S_{t}}$. Then $X_{t}$ has the characteristic exponent
$\Psi(\xi)=\phi(|\xi|^{2})$ and has the transition density
$p(t,x)=p_{d}(t,x):=\int_{\mathbb{R}^{d}}e^{i\xi\cdot
x}e^{-t\phi(|\xi|^{2})}d\xi.$ (3.5)
For $t\geq 0$, let $\eta_{t}$ be the distribution of $S_{t}$. That is, for any
Borel set $A\subset[0,\infty)$, $\eta_{t}(A)=\mathbb{P}(S_{t}\in A)$. Then we
have
$p(t,x)=p_{d}(t,x)=\int_{(0,\infty)}(4\pi
s)^{-d/2}\exp\left(-\frac{|x|^{2}}{4s}\right)\,\eta_{t}(ds)$ (3.6)
(see [11, Section 13.3.1]). Thus $p(t,x)$ is smooth in $x$.
The Lévy measure $\Pi$ of $X$ is given by (see e.g. [19, pp. 197–198])
$\Pi(A)=\int_{A}\int_{0}^{\infty}p(t,x)\,\mu(dt)\,dx=\int_{A}J(x)\,dx\,,\quad
A\subset{\mathbb{R}}^{d}\,,$
where
$J(x):=\int_{0}^{\infty}p(t,x)\,\mu(dt)$ (3.7)
is the Lévy density of $X$. Define the function $j:(0,\infty)\to(0,\infty)$ as
$j(r)=j_{d}(r):=\int_{0}^{\infty}(4\pi)^{-d/2}t^{-d/2}\exp\left(-\frac{r^{2}}{4t}\right)\,\mu(dt)\,,\quad
r>0.$ (3.8)
Then $J(x)=j(|x|)$ and
$\Psi(\xi)=\int_{{\mathbb{R}}^{d}}(1-\cos(\xi\cdot y))j(|y|)dy.$ (3.9)
Note that the function $r\mapsto j(r)$ is strictly positive, continuous and
decreasing on $(0,\infty)$.
The next lemma is an extension of [13, Lemma 3.1].
###### Lemma 3.3.
There exists a constant $N>0$ depending only on $d$ such that
$j(r)\leq N\,r^{-d}\phi(r^{-2})\,,\qquad\forall r>0.$
###### Proof.
By Lemma 3.1, (A1) holds with $h(\lambda)=1\vee\lambda^{2}$, and (A2) holds
with $c=1$ since $r\to p(t,r)$ is decreasing. Thus by Theorem 2.2, we have
$p(t,r)\leq Ntr^{-d}\phi(r^{-2})\quad\forall t,r>0$ (3.10)
where $N>0$ depends only on $d$. The lemma now follows from (3.10) and (1.2).
Indeed, by (1.2) and Section 4.1 in [21], for $f\in
C^{2}_{0}({\mathbb{R}}^{d}\setminus\\{0\\})$ (the set of $C^{2}$-functions on
$\mathbb{R}^{d}\setminus\\{0\\}$ with compact support), we have
$\lim_{t\to
0}\frac{1}{t}\int_{{\mathbb{R}}^{d}}p(t,|y|)f(y)dy=\phi(\Delta)f(0)=\int_{{\mathbb{R}}^{d}}j(|y|)f(y)dy.$
(3.11)
We fix $r>0$ and choose $f\in C^{2}_{0}(\mathbb{R}^{d}\setminus\\{0\\})$
such that $f=1$ on $B(0,r)\setminus B(0,r/2)$ and $f=0$ on $B(0,2r)^{c}\cup
B(0,r/4)$. Note that since $s\to j(s)$ is decreasing, we have
$\displaystyle
r^{d}j(r)\leq\frac{d2^{d}}{2^{d}-1}\int_{r/2}^{r}j(s)s^{d-1}ds\leq
N\int_{B(0,r)\setminus B(0,r/2)}j(|y|)dy$ $\displaystyle\leq
N\int_{{\mathbb{R}}^{d}}j(|y|)f(y)dy$
where $N>0$ depends only on $d$. Thus by (3.10) and (3.11), we see that
$\displaystyle r^{d}j(r)\leq N\lim_{t\to
0}\frac{1}{t}\int_{{\mathbb{R}}^{d}}p(t,|y|)f(y)dy\leq
N\int_{{\mathbb{R}}^{d}}|y|^{-d}\phi(|y|^{-2})f(y)dy$ $\displaystyle\leq
N\int_{B(0,2r)\setminus B(0,r/4)}|y|^{-d}\phi(|y|^{-2})dy\leq
N\int_{B(0,2r)\setminus B(0,r/4)}|y|^{-d}dy\phi(4r^{-2})$ $\displaystyle\leq
N\int^{2r}_{r/4}r^{-1}dr\phi(4r^{-2})\leq N\phi(4r^{-2})$
where $N>0$ depends only on $d$. Now the lemma follows immediately by (3.2).
$\Box$
For $a>0$, we define $\phi^{a}(\lambda)=\phi(\lambda a^{-2})/\phi(a^{-2})$.
Then $\phi^{a}$ is again a Bernstein function satisfying $\phi^{a}(1)=1$. We
will use $\mu^{a}(dt)$ to denote the Lévy measure of $\phi^{a}$ and
$S^{a}=(S^{a}_{t})_{t\geq 0}$ to denote a subordinator with Laplace exponent
$\phi^{a}$.
Assume that $S^{a}=(S^{a}_{t})_{t\geq 0}$ is independent of the Brownian motion
$W$. Let $X^{a}=(X^{a}_{t})_{t\geq 0}$ be defined by
$X^{a}_{t}:=W_{S^{a}_{t}}$. Then $X^{a}$ is a rotationally invariant Lévy
process with characteristic exponent
$\Psi^{a}(\xi)=\phi^{a}(|\xi|^{2})=\frac{\phi(a^{-2}|\xi|^{2})}{\phi(a^{-2})}=\frac{\Psi(a^{-1}\xi)}{\phi(a^{-2})}\,,\quad\xi\in{\mathbb{R}}^{d}\,.$
(3.12)
This shows that $\\{X_{t}^{a}-X^{a}_{0}\\}_{t\geq 0}$ is identical in law to
the process $\\{a^{-1}(X_{t/\phi(a^{-2})}-X_{0})\\}_{t\geq 0}$. $X^{1}$ is
simply the process $X$.
Since, by (3.9) and (3.12),
$\Psi^{a}(\xi)=\frac{1}{\phi(a^{-2})}\int_{{\mathbb{R}}^{d}}(1-\cos(a^{-1}\xi\cdot
y))j(|y|)dy=\frac{a^{d}}{\phi(a^{-2})}\int_{{\mathbb{R}}^{d}}(1-\cos(\xi\cdot
z))j(a|z|)dz,$ (3.13)
the Lévy measure of $X^{a}$ has the density $J^{a}(x)=j^{a}(|x|)$, where
$j^{a}$ is given by
$\displaystyle j^{a}(r):=a^{d}\phi(a^{-2})^{-1}j(ar)\,.$ (3.14)
We use $p^{a}(t,x,y)=p^{a}(t,x-y)$ to denote the transition density of
$X^{a}$. Recall that the process $\\{a^{-1}(X_{t/\phi(a^{-2})}-X_{0}):t\geq
0\\}$ has the same law as $\\{X^{a}_{t}-X^{a}_{0}:t\geq 0\\}$. In terms of
transition densities, this can be written as
$p^{a}(t,x,y)=a^{d}p(\frac{t}{\phi(a^{-2})},ax,ay),\qquad(t,x,y)\in(0,\infty)\times{\mathbb{R}}^{d}\times{\mathbb{R}}^{d}.$
Thus
$p(t,x)=p^{1}(t,x)=a^{-d}p^{a}(t\phi(a^{-2}),a^{-1}x),\qquad(t,x)\in(0,\infty)\times{\mathbb{R}}^{d}.$
(3.15)
Denote
$a_{t}:=\frac{1}{\sqrt{\phi^{-1}(t^{-1})}}.$
From (3.15), we see that
$p(t,x)=(a_{t})^{-d}p^{a_{t}}(1,(a_{t})^{-1}x),\qquad(t,x)\in(0,\infty)\times{\mathbb{R}}^{d}.$
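For example, when $\phi(\lambda)=\lambda^{\alpha/2}$ one has $\phi^{-1}(t^{-1})=t^{-2/\alpha}$, $a_{t}=t^{1/\alpha}$ and $\phi^{a}=\phi$ for every $a>0$, so the last identity reads $p(t,x)=t^{-d/\alpha}p(1,t^{-1/\alpha}x)$, the exact self-similarity of the symmetric stable transition density. For a general Bernstein function $\phi$ the rescaled exponent $\phi^{a_{t}}$ genuinely depends on $t$, which is the lack of scaling referred to in the Introduction.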
Let $\beta>0$. For appropriate functions $f=f(x)$, define
$T_{t}f(x):=(p(t,\cdot)\ast
f(\cdot))(x)=\int_{\mathbb{R}^{d}}p(t,x-y)f(y)dy,\quad t>0,$
$\phi(\Delta)^{\beta}f(x):=-\phi(-\Delta)^{\beta}f(x):=\mathcal{F}^{-1}(\phi(|\xi|^{2})^{\beta}\mathcal{F}(f))(x).$
In particular, if $\beta=1$ and $f\in C_{b}^{2}({\mathbb{R}}^{d})$ then we
have
$\displaystyle\phi(\Delta)f(x)=-\phi(-\Delta)f(x)=\int_{{\mathbb{R}}^{d}}\left(f(x+y)-f(x)-\nabla
f(x)\cdot y{\mathbf{1}}_{\\{|y|\leq 1\\}}\right)\,j(|y|)\,dy$
$\displaystyle=\lim_{\varepsilon\downarrow
0}\int_{\\{y\in\mathbb{R}^{d}:\,|y|>\varepsilon\\}}(f(x+y)-f(x))j(|y|)\,dy$
(3.16)
(see Section 4.1 in [21]).
Recall that
$\phi^{a_{t}}(\lambda):=\phi(\lambda(a_{t})^{-2})/\phi((a_{t})^{-2})$. Since
$t\phi(a_{t}^{-2})=1$, by (3.5)
$\displaystyle\phi(\Delta)^{1/2}p(t,\cdot)(x)$ $\displaystyle=$
$\displaystyle\int_{\mathbb{R}^{d}}\phi(|\xi|^{2})^{1/2}e^{ix\xi}e^{-t\phi(|\xi|^{2})}d\xi$
(3.17) $\displaystyle=$ $\displaystyle
t^{-1/2}\int_{\mathbb{R}^{d}}(\phi(|\xi|^{2})/\phi(a_{t}^{-2}))^{1/2}e^{ix\xi}e^{-\phi(|\xi|^{2})/\phi(a_{t}^{-2})}d\xi$
$\displaystyle=$ $\displaystyle
t^{-1/2}(a_{t})^{-d}\int_{\mathbb{R}^{d}}\phi^{a_{t}}(|\xi|^{2})^{1/2}e^{i(a_{t})^{-1}x\xi}e^{-\phi^{a_{t}}(|\xi|^{2})}d\xi$
$\displaystyle=$
$\displaystyle(a_{t})^{-d}t^{-1/2}\phi^{a_{t}}(\Delta)^{1/2}p^{a_{t}}(1,\cdot)((a_{t})^{-1}x).$
By [20, Corollary 3.7 (iii)], $\phi^{a}(\lambda)^{1/2}$ is also a Bernstein
function. Thus $\phi^{a}(\lambda)^{1/2}=\int_{0}^{\infty}(1-e^{-\lambda
t})\,\widehat{\mu}^{a}(dt)$ where $\widehat{\mu}^{a}$ is the Lévy measure of
$\phi^{a}(\lambda)^{1/2}$. Let
$\widehat{j}^{a}(r):=\int^{\infty}_{0}(4\pi
t)^{-d/2}e^{-r^{2}/(4t)}\widehat{\mu}^{a}(dt),\qquad r,a>0$
and $\widehat{j}(r):=\widehat{j}^{1}(r)$. Then, by (3.14)
$\displaystyle\widehat{j}^{a}(r)=a^{d}\phi(a^{-2})^{-1/2}\widehat{j}(ar)\,,\qquad
r,a>0.$ (3.18)
As in (3.16), for every $f\in C_{b}^{2}({\mathbb{R}}^{d})$,
$\displaystyle\phi^{a}(\Delta)^{1/2}f(x)$ $\displaystyle:=$
$\displaystyle-\phi^{a}(-\Delta)^{1/2}f(x)$ (3.19) $\displaystyle=$
$\displaystyle\int_{{\mathbb{R}}^{d}}\left(f(x+y)-f(x)-\nabla f(x)\cdot
y{\mathbf{1}}_{\\{|y|\leq 1\\}}\right)\,\widehat{j}^{a}(|y|)\,dy$
$\displaystyle=$ $\displaystyle\lim_{\varepsilon\downarrow
0}\int_{\\{y\in\mathbb{R}^{d}:\,|y|>\varepsilon\\}}(f(x+y)-f(x))\widehat{j}^{a}(|y|)\,dy$
Clearly, by Lemma 3.3 we have the following. We emphasize that the constant
depends on neither $\phi$ nor $a$.
###### Lemma 3.4.
There exists a constant $N>1$ depending only on $d$ such that for any $a>0$,
$\widehat{j}^{a}(r)\leq Nr^{-d}(\phi^{a}(r^{-2}))^{1/2}\,,\qquad\forall r>0.$
Recall conditions (H1) and (H2):
(H1): There exist constants $0<\delta_{1}\leq\delta_{2}<1$ and $a_{1},a_{2}>0$
such that
$a_{1}\lambda^{\delta_{1}}\phi(t)\leq\phi(\lambda t)\leq
a_{2}\lambda^{\delta_{2}}\phi(t)\quad\lambda\geq 1,t\geq 1\,.$
(H2): There exist constants $0<\delta_{3}\leq 1$ and $a_{3}>0$ such that
$\phi(\lambda t)\leq a_{3}\lambda^{\delta_{3}}\phi(t),\quad\lambda\leq 1,t\leq
1\,.$
By taking $t=1$ in (H1) and (H2) and using Lemma 3.1, we get that if (H1)
holds then
$a_{1}\lambda^{\delta_{1}}\leq\phi(\lambda)\leq
a_{2}\lambda^{\delta_{2}}\,,\quad\lambda\geq 1\,,$ (3.20)
and, if (H2) holds then
$\lambda\leq\phi(\lambda)\leq a_{3}\lambda^{\delta_{3}}\,,\quad\lambda\leq
1\,.$ (3.21)
Also, if (H1) holds we have
$a_{1}\lambda^{\delta_{1}}\phi^{a}(t)\leq\phi^{a}(\lambda t)\leq
a_{2}\lambda^{\delta_{2}}\phi^{a}(t)\,,\quad\lambda\geq 1,t\geq a^{2}\,.$
(3.22)
Thus, by taking $t=1$ in (3.22), if (H1) holds and $a\leq 1$ we get
$a_{1}\lambda^{\delta_{1}}\leq\phi^{a}(\lambda)\leq
a_{2}\lambda^{\delta_{2}}\,,\quad\lambda\geq 1.$
Thus, if (H1) holds
$a_{1}(T^{-2}\wedge
1)\lambda^{\delta_{1}}\leq\phi^{a}(\lambda)\leq\frac{a_{2}}{\phi(T^{-2})\wedge
1}\lambda^{\delta_{2}}\,,\quad a\in(0,T],\,\lambda\geq 1.$ (3.23)
In fact, if $T>1$ and $1\leq a\leq T$ then for $\lambda\geq 1$
$\phi^{a}(\lambda)=\frac{\phi(\lambda
a^{-2})}{\phi(a^{-2})}\leq\frac{\phi(\lambda)}{\phi(a^{-2})}\leq\frac{a_{2}\lambda^{\delta_{2}}}{\phi(T^{-2})}$
and using Lemma 3.1
$\phi^{a}(\lambda)=\frac{\phi(\lambda
a^{-2})}{\phi(a^{-2})}\geq\frac{\phi(\lambda
a^{-2})}{\phi(1)}=\frac{\phi(\lambda a^{-2})}{\phi(\lambda)}\phi(\lambda)\geq
a^{-2}\phi(\lambda)\geq T^{-2}\phi(\lambda)\geq
T^{-2}a_{1}\lambda^{\delta_{1}}.$
Recall that $p(t,x)$ is the transition density of $X_{t}$.
###### Corollary 3.5.
Suppose (H1) holds. Then for each $T>0$ there exists a constant
$N=N(T,d,\phi)>0$ such that for $(t,x)\in(0,T]\times{\mathbb{R}}^{d}$,
$p(t,x)\leq N\left(\left(\phi^{-1}(t^{-1})\right)^{d/2}\wedge
t\frac{\phi(|x|^{-2})}{|x|^{d}}\right).$ (3.24)
###### Proof.
The corollary follows from Theorem 2.2 and the first display on page
1073 of [3]. Also one can see from (3.5) and (3.15) that
$\displaystyle
p(t,x)=(a_{t})^{-d}p^{a_{t}}(1,a_{t}^{-1}x)\leq(a_{t})^{-d}\int_{\mathbb{R}^{d}}e^{-\phi^{a_{t}}(|\xi|^{2})}d\xi$
$\displaystyle\leq$
$\displaystyle(a_{t})^{-d}\int_{|\xi|<1}d\xi+(a_{t})^{-d}\int_{|\xi|\geq
1}e^{-\phi^{a_{t}}(|\xi|^{2})}d\xi$ $\displaystyle\leq$ $\displaystyle
N(a_{t})^{-d}\left(1+\int_{|\xi|\geq 1}e^{-a_{1}(a_{T}^{-2}\wedge
1)|\xi|^{2\delta_{1}}}d\xi\right).$
where (3.23) is used in the last inequality. $\Box$
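For instance, for $\phi(\lambda)=\lambda^{\alpha/2}$ with $\alpha\in(0,2)$, estimate (3.24) reads
$p(t,x)\leq N\left(t^{-d/\alpha}\wedge\frac{t}{|x|^{d+\alpha}}\right),$
the familiar upper bound for the transition density of the rotationally symmetric $\alpha$-stable process.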
###### Remark 3.6.
If there exist constants $0<\delta_{3}\leq\delta_{4}<1$ and $a_{3},a_{4}>0$
such that
$a_{4}\lambda^{\delta_{4}}\phi(t)\leq\phi(\lambda t)\leq
a_{3}\lambda^{\delta_{3}}\phi(t),\quad\lambda\leq 1,t\leq 1\,,$ (3.25)
then, as in [13] the subordinate Brownian motion $X$ satisfies conditions
(1.4), (1.13) and (1.14) from [5]. Thus, in fact, by [5], if (H1) and (3.25)
hold we have the sharp two-sided estimates for all $t>0$
$N^{-1}\left(\left(\phi^{-1}(t^{-1})\right)^{d/2}\wedge
t\frac{\phi(|x|^{-2})}{|x|^{d}}\right)\leq p(t,x)\leq
N\left(\left(\phi^{-1}(t^{-1})\right)^{d/2}\wedge
t\frac{\phi(|x|^{-2})}{|x|^{d}}\right),$ (3.26)
where $N=N(\phi)>1$.
On the other hand, when $\delta_{3}=\delta_{4}=1$ in (3.25), (3.26) does not
hold and (3.24) is not sharp. For example, see [4, (2.2), (2.4) and Theorem
4.1].
For the rest of this article we assume that (H1) and (H2) hold. Now we further
discuss the scaling. If $0<r<1<R$, using (3.3), (3.20) and (3.21), we have
$\frac{\phi(R)}{\phi(r)}\leq\frac{R}{r}\quad\text{ and
}\quad\frac{\phi(R)}{\phi(r)}\geq\frac{a_{1}}{a_{3}}\frac{R^{\delta_{1}}}{r^{\delta_{3}}}\geq\frac{a_{1}}{a_{3}}\left(\frac{R}{r}\right)^{\delta_{1}\wedge\delta_{3}}.$
Combining these with (H1) and (H2) we get
$\frac{a_{1}}{a_{3}}\left(\frac{R}{r}\right)^{\delta_{1}\wedge\delta_{3}}\leq\frac{\phi(R)}{\phi(r)}\leq\frac{R}{r},\quad
0<r<R<\infty\,.$ (3.27)
Now applying this to $\phi^{a}$, we get
$\frac{a_{1}}{a_{3}}\left(\frac{R}{r}\right)^{\delta_{1}\wedge\delta_{3}}\leq\frac{\phi^{a}(R)}{\phi^{a}(r)}\leq\frac{R}{r},\quad
a>0,\ 0<r<R<\infty\,.$ (3.28)
The next lemma and its corollary will be used several times in this article.
###### Lemma 3.7.
Assume (H1) and (H2). Then there exists a constant
$N=N(a_{1},a_{3},\delta_{1},\delta_{3})$ such that for all $\lambda>0$
$\int_{\lambda^{-1}}^{\infty}r^{-1}\phi(r^{-2})\,dr\leq N\phi(\lambda^{2}).$
###### Proof.
Changing variable $r\to\lambda^{-1}r$, from (3.27) we have
$\displaystyle\int_{\lambda^{-1}}^{\infty}r^{-1}\phi(r^{-2})\,dr~{}=~{}\int_{1}^{\infty}r^{-1}\phi(\lambda^{2}r^{-2})\,dr$
$\displaystyle=$
$\displaystyle\int_{1}^{\infty}r^{-1}\phi(\lambda^{2}r^{-2})\frac{\phi(\lambda^{2})}{\phi(\lambda^{2})}\,dr$
$\displaystyle\leq$
$\displaystyle\frac{a_{3}}{a_{1}}\int_{1}^{\infty}r^{-1-2(\delta_{1}\wedge\delta_{3})}\,dr\,\phi(\lambda^{2}).$
$\Box$
###### Corollary 3.8.
Assume (H1) and (H2). Then there exists a constant
$N=N(a_{1},a_{3},\delta_{1},\delta_{3})$ such that for all $\lambda>0$
$\int_{\lambda^{-1}}^{\infty}r^{-1}(\phi(r^{-2}))^{1/2}\,dr\leq
N(\phi(\lambda^{2}))^{1/2}.$
###### Proof.
Since $(\phi(\lambda))^{1/2}$ also satisfies conditions (H1) and (H2) with
different $\delta_{1},\delta_{3}>0$, we get this corollary directly from the
previous lemma. $\Box$
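For orientation, when $\phi(\lambda)=\lambda^{\alpha}$ the integral in Lemma 3.7 can be computed exactly:
$\int_{\lambda^{-1}}^{\infty}r^{-1-2\alpha}\,dr=\frac{1}{2\alpha}\lambda^{2\alpha}=\frac{1}{2\alpha}\phi(\lambda^{2}),$
in agreement with the stated bound.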
## 4\. Upper bounds of $|\phi(\Delta)^{n/2}D^{\beta}p(t,x)|$
In this section we give upper bounds of $|\phi(\Delta)^{n/2}D^{\beta}p(t,x)|$
for any $n=0,1,2,\cdots$ and multi-index $\beta$.
Recall $a_{t}:=(\phi^{-1}(t^{-1}))^{-1/2}$, so that $t\phi(a_{t}^{-2})=1$. Thus,
from (3.15) and (3.24), we have for every $t\in(0,T]$,
$\displaystyle p^{a_{t}}(1,x)$ $\displaystyle=$
$\displaystyle(a_{t})^{d}p(t,a_{t}x)$ $\displaystyle\leq$ $\displaystyle
N(T,\phi,d)\left(1\wedge
t\frac{\phi(|a_{t}x|^{-2})}{|x|^{d}}\right)=N\left(1\wedge\frac{\phi^{a_{t}}(|x|^{-2})}{|x|^{d}}\right).$ (4.1)
###### Lemma 4.1.
For any constant $T>0$ there exists a constant $N=N(T,d,\phi)$ so that for
every $t\in(0,T]$
$\displaystyle|\nabla p^{a_{t}}(1,x)|\leq
N|x|\left(1\wedge\frac{\phi^{a_{t}}(|x|^{-2})}{|x|^{d+2}}\right),$
$\displaystyle\sum_{i,j}|\partial_{x_{i},x_{j}}p^{a_{t}}(1,x)|$
$\displaystyle\leq
N|x|^{2}\left(1\wedge\frac{\phi^{a_{t}}(|x|^{-2})}{|x|^{d+4}}\right)+N\left(1\wedge\frac{\phi^{a_{t}}(|x|^{-2})}{|x|^{d+2}}\right),$
and
$\displaystyle\sum_{|\beta|\leq n}|D^{\beta}p^{a_{t}}(1,x)|$
$\displaystyle\leq N\sum_{n-2m\geq
0,m\in\mathbb{N}\cup\\{0\\}}|x|^{n-2m}\left(1\wedge\frac{\phi^{a_{t}}(|x|^{-2})}{|x|^{d+2(n-m)}}\right).$
###### Proof.
To distinguish the dimension, we denote
$p^{a_{t}}_{d}(1,x):=\int_{(0,\infty)}(4\pi
s)^{-d/2}\exp\left(-\frac{|x|^{2}}{4s}\right)\,\eta^{a_{t}}_{1}(ds).$
By (3.6),
$\displaystyle\partial_{x_{i}}p^{a_{t}}_{d}(1,x)$
$\displaystyle=\int_{(0,\infty)}(4\pi
s)^{-d/2}\partial_{x_{i}}\exp\left(-\frac{|x|^{2}}{4s}\right)\,\eta^{a_{t}}_{1}(ds)$
$\displaystyle=-\frac{x_{i}}{2}\int_{(0,\infty)}s^{-1}(4\pi
s)^{-d/2}\exp\left(-\frac{|x|^{2}}{4s}\right)\,\eta^{a_{t}}_{1}(ds)$
$\displaystyle=-2\pi x_{i}p^{a_{t}}_{d+2}(1,(x,0,0)),$
$\displaystyle\partial_{x_{i},x_{i}}p^{a_{t}}_{d}(1,x)$ $\displaystyle=4\pi
x_{i}^{2}p^{a_{t}}_{d+4}(1,(x,0,0,0,0))-2\pi p^{a_{t}}_{d+2}(1,(x,0,0)),$
and, for $i\not=j$,
$\displaystyle\partial_{x_{i},x_{j}}p^{a_{t}}_{d}(1,x)$ $\displaystyle=4\pi
x_{i}x_{j}p^{a_{t}}_{d+4}(1,(x,0,0,0,0)).$
Thus
$\displaystyle|\nabla p^{a_{t}}_{d}(1,x)|\leq
2\pi|x|p^{a_{t}}_{d+2}(1,(x,0,0))$
and
$\displaystyle\sum_{i,j}|\partial_{x_{i},x_{j}}p^{a_{t}}_{d}(1,x)|$
$\displaystyle\leq 4d^{2}\pi|x|^{2}p^{a_{t}}_{d+4}(1,(x,0,0,0,0))+2d\pi
p^{a_{t}}_{d+2}(1,(x,0,0)).$
Similarly,
$\displaystyle\sum_{i,j,k}|\partial_{x_{i},x_{j},x_{k}}p^{a_{t}}_{d}(1,x)|$
$\displaystyle\leq
N(d)[|x|^{3}p^{a_{t}}_{d+6}(1,(x,0,0,0,0,0,0))+|x|p^{a_{t}}_{d+4}(1,(x,0,0,0,0))].$
Repeating the product rule of differentiation and applying (4.1), we prove the
lemma. $\Box$
###### Lemma 4.2.
For any $(t,x)\in(0,T]\times\mathbb{R}^{d}$ and multi-index $\beta$,
$\displaystyle|\phi(\Delta)^{1/2}D^{\beta}p(t,\cdot)(x)|\leq\,N(d,T,|\beta|,\phi)\
\left(t^{-1/2}\left(\phi^{-1}(t^{-1})\right)^{(d+|\beta|)/2}\wedge\frac{\phi(|x|^{-2})^{1/2}}{|x|^{d+|\beta|}}\right).$
###### Proof.
First we prove the lemma when $\beta=0$. Recall
$a_{t}:=\frac{1}{\sqrt{\phi^{-1}(t^{-1})}}\leq\frac{1}{\sqrt{\phi^{-1}(T^{-1})}}.$
Note that by (3.5)
$\displaystyle|\phi^{a_{t}}(\Delta)^{1/2}p^{a_{t}}(1,\cdot)(x)|$
$\displaystyle=$
$\displaystyle|\int_{\mathbb{R}^{d}}\phi^{a_{t}}(|\xi|^{2})^{1/2}e^{ix\xi}e^{-\phi^{a_{t}}(|\xi|^{2})}d\xi|$
$\displaystyle\leq$
$\displaystyle\int_{\mathbb{R}^{d}}\phi^{a_{t}}(|\xi|^{2})^{1/2}e^{-\phi^{a_{t}}(|\xi|^{2})}d\xi$
$\displaystyle\leq$
$\displaystyle\int_{|\xi|<1}\phi^{a_{t}}(|\xi|^{2})^{1/2}d\xi+\left(\sup_{b>0}b^{1/2}e^{-b/2}\right)\int_{|\xi|\geq
1}e^{-2^{-1}\phi^{a_{t}}(|\xi|^{2})}d\xi$ $\displaystyle\leq$ $\displaystyle
N\phi^{a_{t}}(1)^{1/2}+N\int_{|\xi|\geq
1}e^{-2^{-1}\phi^{a_{t}}(|\xi|^{2})}d\xi\leq N+N\int_{|\xi|\geq
1}e^{-2^{-1}\phi^{a_{t}}(|\xi|^{2})}d\xi.$
Thus it is uniformly bounded by (3.28). By (3.19),
$\displaystyle|\phi^{a_{t}}(\Delta)^{1/2}p^{a_{t}}(1,\cdot)(x)|$
$\displaystyle=|\lim_{\varepsilon\downarrow
0}\int_{\\{y\in\mathbb{R}^{d}:\,|y|>\varepsilon\\}}(p^{a_{t}}(1,x+y)-p^{a_{t}}(1,x))\widehat{j}^{a_{t}}(|y|)\,dy|$
$\displaystyle\leq|p^{a_{t}}(1,x)|\int_{\\{y\in\mathbb{R}^{d}:\,|y|>|x|/2\\}}\widehat{j}^{a_{t}}(|y|)\,dy+\int_{\\{y\in\mathbb{R}^{d}:\,|y|>|x|/2\\}}|p^{a_{t}}(1,x+y)|\widehat{j}^{a_{t}}(|y|)\,dy$
$\displaystyle\quad+|\lim_{\varepsilon\downarrow
0}\int_{\\{y\in\mathbb{R}^{d}:\,|x|/2>|y|>\varepsilon\\}}\int_{0}^{1}|\nabla
p^{a_{t}}(1,x+sy)|ds|y|\widehat{j}^{a_{t}}(|y|)\,dy$
$\displaystyle:=p^{a_{t}}(1,x)\times I+II+III.$
Since from (3.18)
$\displaystyle\int_{s}^{\infty}\widehat{j}^{a}(r)r^{d-1}dr=a^{d}\phi(a^{-2})^{-1/2}\int_{s}^{\infty}\widehat{j}(ar)r^{d-1}dr=\phi(a^{-2})^{-1/2}\int_{as}^{\infty}\widehat{j}(t)t^{d-1}dt,$
we get
$\displaystyle\int_{|x|}^{\infty}\widehat{j}^{a_{t}}(r)r^{d-1}dr$
$\displaystyle=$
$\displaystyle(a_{t})^{d}\phi((a_{t})^{-2})^{-1/2}\int_{|x|}^{\infty}\widehat{j}(a_{t}r)r^{d-1}dr$
$\displaystyle=$
$\displaystyle\phi((a_{t})^{-2})^{-1/2}\int_{|x|a_{t}}^{\infty}\widehat{j}(s)s^{d-1}ds$
$\displaystyle=$
$\displaystyle\phi^{a_{t}}(|x|^{-2})^{1/2}\phi((a_{t})^{-2}|x|^{-2})^{-1/2}\int_{|x|a_{t}}^{\infty}\widehat{j}(s)s^{d-1}ds.$
In addition to this, applying Lemma 3.4 and Corollary 3.8, we see
$\phi((a_{t})^{-2}|x|^{-2})^{-1/2}\int_{|x|a_{t}}^{\infty}\widehat{j}(s)s^{d-1}ds\leq
N.$
Combining these with
$p^{a_{t}}(1,x)\leq
N(T)\left(1\wedge\frac{\phi^{a_{t}}(|x|^{-2})}{|x|^{d}}\right),$
we get
$\displaystyle p^{a_{t}}(1,x)\times I$ $\displaystyle\leq
N(T)\phi^{a_{t}}(|x|^{-2})^{1/2}\left(1\wedge\frac{\phi^{a_{t}}(|x|^{-2})}{|x|^{d}}\right)\leq
N(T)\frac{\phi^{a_{t}}(|x|^{-2})^{3/2}}{|x|^{d}}.$
Using the fact that $r\to\widehat{j}^{a_{t}}(r)$ is decreasing,
$\displaystyle II$
$\displaystyle\leq\widehat{j}^{a_{t}}(|x|/2)\int_{\\{y\in\mathbb{R}^{d}:\,|y|>|x|/2\\}}p^{a_{t}}(1,x+y)\,dy$
$\displaystyle\leq\widehat{j}^{a_{t}}(|x|/2)\int_{\mathbb{R}^{d}}p^{a_{t}}(1,x+y)\,dy\leq\widehat{j}^{a_{t}}(|x|/2).$
Finally, from Lemma 4.1 we get
$\displaystyle III=|\lim_{\varepsilon\downarrow
0}\int_{\\{y\in\mathbb{R}^{d}:\,|x|/2>|y|>\varepsilon\\}}\int_{0}^{1}|\nabla
p^{a_{t}}(1,x+sy)|ds|y|\widehat{j}^{a_{t}}(|y|)\,dy$ $\displaystyle\leq
N\int_{|y|<|x|/2}\int_{0}^{1}\frac{\phi^{a_{t}}(|x+sy|^{-2})}{|x+sy|^{d+1}}ds|y|\widehat{j}^{a_{t}}(|y|)dy$
$\displaystyle\leq
N\int_{|y|<|x|/2}\int_{0}^{1}\frac{\phi^{a_{t}}((|x|-s|y|)^{-2})}{(|x|-s|y|)^{d+1}}ds|y|\widehat{j}^{a_{t}}(|y|)dy$
$\displaystyle\leq
N\int_{|y|<|x|/2}\int_{0}^{1}\frac{\phi^{a_{t}}(4|x|^{-2})}{|x|^{d+1}}ds|y|\widehat{j}^{a_{t}}(|y|)dy$
$\displaystyle\leq
N\frac{\phi^{a_{t}}(4|x|^{-2})}{|x|^{d+1}}\int_{|y|<|x|/2}|y|\widehat{j}^{a_{t}}(|y|)dy.$
By Lemma 3.4,
$\displaystyle\int_{|y|<|x|/2}|y|\widehat{j}^{a_{t}}(|y|)dy\leq
N\int_{|y|<|x|/2}|y|^{-d+1}(\phi^{a_{t}}(|y|^{-2}))^{1/2}dy$
$\displaystyle\leq N\int_{0}^{|x|}(\phi^{a_{t}}(r^{-2}))^{1/2}dr.$
Since $|\phi^{a_{t}}(\Delta)^{1/2}p^{a_{t}}(1,\cdot)(x)|$ is bounded in $x$,
we may assume that $|x|\geq 1$. So from the monotonicity of
$\phi^{a_{t}}(r^{-2})$ and (3.23), we get
$\displaystyle\int_{0}^{|x|}(\phi^{a_{t}}(r^{-2}))^{1/2}dr$ $\displaystyle=$
$\displaystyle\int_{0}^{1}(\phi^{a_{t}}(r^{-2}))^{1/2}dr+\int_{1}^{|x|}(\phi^{a_{t}}(r^{-2}))^{1/2}dr$
$\displaystyle\leq$ $\displaystyle
N\left(\int_{0}^{1}r^{-\delta_{2}}dr+\int_{1}^{|x|}(\phi^{a_{t}}(r^{-2}))^{1/2}dr\right)$
$\displaystyle\leq$ $\displaystyle
N\left(|x|^{1-\delta_{2}}+|x|\phi^{a_{t}}(1)^{1/2}\right).$
Thus
$|\phi^{a_{t}}(\Delta)^{1/2}p^{a_{t}}(1,\cdot)(x)|\leq
N(1\wedge\frac{\phi^{a_{t}}(|x|^{-2})^{1/2}}{|x|^{d}}).$
Now applying (3.17) and using the fact that $t\phi(a_{t}^{-2})=1$ and
$\phi^{a}(\lambda)=\phi(\lambda a^{-2})/\phi(a^{-2})$, we get
$\displaystyle|\phi(\Delta)^{1/2}p(t,\cdot)(x)|=(a_{t})^{-d}t^{-1/2}|\phi^{a_{t}}(\Delta)^{1/2}p^{a_{t}}(1,\cdot)((a_{t})^{-1}x)|$
$\displaystyle\leq\,N\,t^{-1/2}\,((a_{t})^{-d}\wedge\frac{\phi^{a_{t}}(|(a_{t})^{-1}x|^{-2})^{1/2}}{|x|^{d}})$
$\displaystyle=\,N\,t^{-1/2}\,((a_{t})^{-d}\wedge\frac{\phi(|x|^{-2})^{1/2}}{\phi((a_{t})^{-2})^{1/2}|x|^{d}})$
$\displaystyle=\,N\,\,(t^{-1/2}(a_{t})^{-d}\wedge\frac{\phi(|x|^{-2})^{1/2}}{(t\phi((a_{t})^{-2}))^{1/2}|x|^{d}})\,=\,N\,\,(t^{-1/2}(a_{t})^{-d}\wedge\frac{\phi(|x|^{-2})^{1/2}}{|x|^{d}}).$
The case $|\beta|=1$ is proved similarly. First, one can check that
$|\phi^{a_{t}}(\Delta)^{1/2}D^{\beta}p^{a_{t}}(1,\cdot)(x)|$ is uniformly
bounded. Note
$\displaystyle|\phi^{a_{t}}(\Delta)^{1/2}D^{\beta}p^{a_{t}}(1,\cdot)(x)|$
(4.2) $\displaystyle=$ $\displaystyle|\lim_{\varepsilon\downarrow
0}\int_{\\{y\in\mathbb{R}^{d}:\,|y|>\varepsilon\\}}(D^{\beta}p^{a_{t}}(1,x+y)-D^{\beta}p^{a_{t}}(1,x))\widehat{j}^{a_{t}}(|y|)\,dy|$
$\displaystyle\leq$
$\displaystyle|D^{\beta}p^{a_{t}}(1,x)|\int_{\\{y\in\mathbb{R}^{d}:\,|y|>|x|/2\\}}\widehat{j}^{a_{t}}(|y|)\,dy$
$\displaystyle+|\int_{\\{y\in\mathbb{R}^{d}:\,|y|>|x|/2\\}}D^{\beta}p^{a_{t}}(1,x+y)\widehat{j}^{a_{t}}(|y|)\,dy|$
$\displaystyle+|\lim_{\varepsilon\downarrow
0}\int_{\\{y\in\mathbb{R}^{d}:\,|x|/2>|y|>\varepsilon\\}}\int_{0}^{1}|\nabla
D^{\beta}p^{a_{t}}(1,x+sy)|ds|y|\widehat{j}^{a_{t}}(|y|)\,dy$
$\displaystyle:=$ $\displaystyle|D^{\beta}p^{a_{t}}(1,x)|\times I+II+III.$
Since $I$ and $III$ can be estimated as in the case $|\beta|=0$, we
only need to estimate $II$. We use integration by parts and
get
$\displaystyle II$ $\displaystyle\leq$
$\displaystyle\int_{|y|=|x|/2}|\widehat{j}^{a_{t}}(|x|/2)p^{a_{t}}(1,x+y)|dS$
$\displaystyle+\int_{\\{y\in\mathbb{R}^{d}:\,|y|>|x|/2\\}}\Big|\frac{d}{dr}\widehat{j}^{a_{t}}(|y|)\Big|p^{a_{t}}(1,x+y)\,dy.$
We use the notation $\widehat{j}^{a_{t}}_{d}(r)$ in place of
$\widehat{j}^{a_{t}}(r)$ to make the dimension explicit. That is,
$\widehat{j}^{a_{t}}_{d}(r):=\int^{\infty}_{0}(4\pi
t)^{-d/2}e^{-r^{2}/(4t)}\widehat{\mu}^{a_{t}}(dt),\qquad r>0.$
By its definition, we can easily see that
$\frac{d}{dr}\widehat{j}^{a_{t}}_{d}(r)=-2\pi r\widehat{j}^{a_{t}}_{d+2}(r)$.
So from Lemmas 3.4 and 4.1 we get
$\displaystyle II$ $\displaystyle\leq
N[\frac{\phi^{a_{t}}(|x|^{-2})^{3/2}}{|x|^{d+1}}+\int_{|y|>|x|/2}|y|\widehat{j}^{a_{t}}_{d+2}(|y|)|p^{a_{t}}(1,x+y)|\,dy]$
$\displaystyle\leq
N[\frac{\phi^{a_{t}}(|x|^{-2})^{3/2}}{|x|^{d+1}}+\int_{|y|>|x|/2}\frac{\phi^{a_{t}}(|y|^{-2})^{1/2}}{|y|^{d+1}}|p^{a_{t}}(1,x+y)|\,dy]$
$\displaystyle\leq
N[\frac{\phi^{a_{t}}(|x|^{-2})^{3/2}}{|x|^{d+1}}+\frac{\phi^{a_{t}}(|x|^{-2})^{1/2}}{|x|^{d+1}}].$
Therefore we get
$|\phi^{a_{t}}(\Delta)^{1/2}p_{x^{i}}^{a_{t}}(1,\cdot)(x)|\leq
N(1\wedge\frac{\phi^{a_{t}}(|x|^{-2})^{1/2}}{|x|^{d+1}}).$
By the fact that $t\phi(a_{t}^{-2})=1$ and $\phi^{a}(\lambda)=\phi(\lambda
a^{-2})/\phi(a^{-2})$,
$\displaystyle|\phi(\Delta)^{1/2}p_{x^{i}}(t,\cdot)(x)|$ (4.3)
$\displaystyle=$
$\displaystyle\left|\int_{\mathbb{R}^{d}}(-i\xi^{i})\phi(|\xi|^{2})^{1/2}e^{ix\xi}e^{-t\phi(|\xi|^{2})}d\xi\right|$
$\displaystyle=$ $\displaystyle
t^{-1/2}\left|\int_{\mathbb{R}^{d}}(-i\xi^{i})(\phi(|\xi|^{2})/\phi(a_{t}^{-2}))^{1/2}e^{ix\xi}e^{-\phi(|\xi|^{2})/\phi(a_{t}^{-2})}d\xi\right|$
$\displaystyle=$ $\displaystyle
t^{-1/2}(a_{t})^{-d-1}\left|\int_{\mathbb{R}^{d}}(-i\xi^{i})\phi^{a_{t}}(|\xi|^{2})^{1/2}e^{i(a_{t})^{-1}x\xi}e^{-\phi^{a_{t}}(|\xi|^{2})}d\xi\right|$
$\displaystyle=$
$\displaystyle(a_{t})^{-d-1}t^{-1/2}\left|\phi^{a_{t}}(\Delta)^{1/2}p_{x^{i}}^{a_{t}}(1,\cdot)((a_{t})^{-1}x)\right|$
$\displaystyle\leq$
$\displaystyle\,N\,t^{-1/2}(a_{t})^{-d-1}\,\left(1\wedge\frac{\phi^{a_{t}}(|a_{t}^{-1}x|^{-2})^{1/2}}{|a_{t}^{-1}x|^{d+1}}\right)$
$\displaystyle=$
$\displaystyle\,N\,t^{-1/2}(a_{t})^{-d-1}\,\left(1\wedge\frac{\phi(|x|^{-2})^{1/2}}{\phi((a_{t})^{-2})^{1/2}|a_{t}^{-1}x|^{d+1}}\right)$
$\displaystyle=$ $\displaystyle\,N\,t^{-1/2}(a_{t})^{-d-1}\,\left(1\wedge
t^{1/2}a_{t}^{d+1}\frac{\phi(|x|^{-2})^{1/2}}{(t\phi((a_{t})^{-2}))^{1/2}|x|^{d+1}}\right)$
$\displaystyle=$
$\displaystyle\,N\,\,\left(t^{-1/2}(a_{t})^{-d-1}\wedge\frac{\phi(|x|^{-2})^{1/2}}{|x|^{d+1}}\right).$
Finally, we consider the case $|\beta|\geq 2$. Introduce $I,II$ and $III$ as
in (4.2). $I$ and $III$ can be estimated as in the case $|\beta|=0$. Also,
$II$ can be estimated by integrating by parts $|\beta|$ times. For
instance, if $|\beta|=2$,
$\displaystyle II$ $\displaystyle\leq$
$\displaystyle\int_{|y|=|x|/2}|\widehat{j}^{a_{t}}(|x|/2)p_{x^{i}}^{a_{t}}(1,x+y)|dS+\int_{|y|=|x|/2}|\frac{d}{dr}\widehat{j}^{a_{t}}(|x|/2)p^{a_{t}}(1,x+y)|dS$
$\displaystyle+\int_{\\{y\in\mathbb{R}^{d}:\,|y|>|x|/2\\}}\Big|\frac{d^{2}}{dr^{2}}\widehat{j}^{a_{t}}(|y|)\Big|p^{a_{t}}(1,x+y)\,dy,$
where $dS$ is the surface measure on $\\{y\in\mathbb{R}^{d}:|y|=|x|/2\\}$. By
its definition, we easily see that
$\frac{d}{dr}\widehat{j}^{a_{t}}_{d}(r)=-2\pi
r\widehat{j}^{a_{t}}_{d+2}(r),\quad\frac{d^{2}}{dr^{2}}\widehat{j}^{a_{t}}_{d}(r)=-2\pi\widehat{j}^{a_{t}}_{d+2}(r)+(2\pi)^{2}r^{2}\widehat{j}^{a_{t}}_{d+4}(r).$
So from Lemma 3.4 and Lemma 4.1 we get
$\displaystyle II$ $\displaystyle\leq
N\left(\frac{\phi^{a_{t}}(|x|^{-2})^{3/2}}{|x|^{d+2}}+\int_{|y|>|x|/2}|y|^{2}\widehat{j}_{d+4}^{a_{t}}(|y|)|p^{a_{t}}(1,x+y)|\,dy\right)$
$\displaystyle\leq
N\left(\frac{\phi^{a_{t}}(|x|^{-2})^{3/2}}{|x|^{d+2}}+\int_{|y|>|x|/2}|y|^{-(d+2)}\phi^{a_{t}}(|y|^{-2})^{1/2}|p^{a_{t}}(1,x+y)|\,dy\right)$
$\displaystyle\leq
N\left(\frac{\phi^{a_{t}}(|x|^{-2})^{3/2}}{|x|^{d+2}}+\frac{\phi^{a_{t}}(|x|^{-2})^{1/2}}{|x|^{d+2}}\right).$
Therefore we have
$|\phi^{a_{t}}(\Delta)^{1/2}p_{x^{i}x^{j}}^{a_{t}}(1,\cdot)(x)|\leq
N\left(1\wedge\frac{\phi^{a_{t}}(|x|^{-2})^{1/2}}{|x|^{d+2}}\right)$
and we are done by the scaling as in (4.3). $\Box$
We generalize Lemma 4.2 as follows.
###### Lemma 4.3.
For any $n\in\mathbb{N}$ and multi-index $\beta$, there exists a constant
$N=N(d,\phi,T,|\beta|,n)>0$ such that
$\displaystyle|\phi(\Delta)^{n/2}D^{\beta}p(t,\cdot)(x)|\leq
N\left(t^{-n/2}(\phi^{-1}(t^{-1}))^{(d+|\beta|)/2}\wedge
t^{-(n-1)/2}\frac{\phi(|x|^{-2})^{1/2}}{|x|^{d+|\beta|}}\right).$ (4.4)
###### Proof.
We use induction on $n$. Due to the previous lemma, the statement is true if
$n=1$. Assume that the statement is true for $n-1$. We write
$\displaystyle|\phi^{a_{t}}(\Delta)^{n/2}D^{\beta}p^{a_{t}}(1,\cdot)(x)|$
$\displaystyle=$ $\displaystyle|\lim_{\varepsilon\downarrow
0}\int_{\\{y\in\mathbb{R}^{d}:\,|y|>\varepsilon\\}}(\phi^{a_{t}}(\Delta)^{(n-1)/2}D^{\beta}p^{a_{t}}(1,x+y)-\phi^{a_{t}}(\Delta)^{(n-1)/2}D^{\beta}p^{a_{t}}(1,x))\widehat{j}^{a_{t}}(|y|)\,dy|$
$\displaystyle\leq$
$\displaystyle|\phi^{a_{t}}(\Delta)^{(n-1)/2}D^{\beta}p^{a_{t}}(1,x)|\int_{\\{y\in\mathbb{R}^{d}:\,|y|>|x|/2\\}}\widehat{j}^{a_{t}}(|y|)\,dy$
$\displaystyle+\int_{\\{y\in\mathbb{R}^{d}:\,|y|>|x|/2\\}}|\phi^{a_{t}}(\Delta)^{(n-1)/2}D^{\beta}p^{a_{t}}(1,x+y)|\widehat{j}^{a_{t}}(|y|)\,dy$
$\displaystyle+|\lim_{\varepsilon\downarrow
0}\int_{\\{y\in\mathbb{R}^{d}:\,|x|/2>|y|>\varepsilon\\}}\int_{0}^{1}|\phi^{a_{t}}(\Delta)^{(n-1)/2}\nabla
D^{\beta}p^{a_{t}}(1,x+sy)|ds|y|\widehat{j}^{a_{t}}(|y|)\,dy$
$\displaystyle:=$ $\displaystyle|\phi^{a_{t}}(\Delta)^{(n-1)/2}D^{\beta}p^{a_{t}}(1,x)|\times I+II+III$
and follow the proof of Lemma 4.2 with the result (4.4) for $n-1$. Below, we
provide details only for $II$. Let $|\beta|=0$. Then since $r\to\widehat{j}^{a_{t}}(r)$
is decreasing,
$\displaystyle II$
$\displaystyle\leq\widehat{j}^{a_{t}}(|x|/2)\int_{\\{y\in\mathbb{R}^{d}:\,|y|>|x|/2\\}}|\phi^{a_{t}}(\Delta)^{(n-1)/2}p^{a_{t}}(1,x+y)|\,dy$
$\displaystyle\leq\widehat{j}^{a_{t}}(|x|/2)\int_{\mathbb{R}^{d}}|\phi^{a_{t}}(\Delta)^{(n-1)/2}p^{a_{t}}(1,x+y)|\,dy\leq
N\widehat{j}^{a_{t}}(|x|/2)\leq
N\frac{\phi^{a_{t}}(|x|^{-2})^{1/2}}{|x|^{d}}.$
If $|\beta|>0$ then as in Lemma 4.2 we use integration by parts
$|\beta|$ times and get
$II\leq N\frac{\phi^{a_{t}}(|x|^{-2})^{1/2}}{|x|^{d+|\beta|}}.$
Therefore, since $|\phi^{a_{t}}(\Delta)^{n/2}D^{\beta}p^{a_{t}}(1,\cdot)(x)|$
is uniformly bounded, we have
$|\phi^{a_{t}}(\Delta)^{n/2}D^{\beta}p^{a_{t}}(1,\cdot)(x)|\leq
N\left(\frac{\phi^{a_{t}}(|x|^{-2})^{1/2}}{|x|^{d+|\beta|}}\wedge 1\right)$
and the lemma is proved by the scaling as in (4.3). $\Box$
## 5\. Proof of Theorem 1.1
Let $f\in C^{\infty}_{0}(\mathbb{R}^{d+1},H)$. For each $a\in\mathbb{R}$
denote
$u_{a}(t,x):=\mathcal{G}_{a}(t,x):=[\int_{a}^{t}|\phi(\Delta)^{1/2}T_{t-s}f(s,\cdot)(x)|_{H}^{2}ds]^{1/2},$
$u(t,x):=u_{0}(t,x)$ and $\mathcal{G}(t,x):=\mathcal{G}_{0}(t,x)$.
Here is a version of Theorem 1.1 for $p=2$.
###### Lemma 5.1.
For any $\infty\geq\beta\geq\alpha\geq-\infty$ and $\beta\geq a$,
$\displaystyle\|u_{a}\|^{2}_{L_{2}([\alpha,\beta]\times\mathbb{R}^{d})}\leq
N\||f|_{H}\|^{2}_{L_{2}([a,\beta]\times\mathbb{R}^{d})},$ (5.1)
where $N=N(d)$.
###### Proof.
By the continuity of $f$, the range of $f$ belongs to a separable subspace of
$H$. Thus by using a countable orthonormal basis of this subspace and the
Fourier transform one easily finds
$\displaystyle\|u_{a}\|^{2}_{L_{2}([\alpha,\beta]\times\mathbb{R}^{d})}$
$\displaystyle=$
$\displaystyle(2\pi)^{d}\int_{\mathbb{R}^{d}}\int_{\alpha}^{\beta}\int_{a}^{t}|\mathcal{F}\\{\phi(\Delta)^{1/2}p(t-s,\cdot)\\}(\xi)|^{2}\,|\mathcal{F}(f)(s,\xi)|^{2}_{H}dsdtd\xi$
$\displaystyle\leq$
$\displaystyle(2\pi)^{d}\int_{\mathbb{R}^{d}}\int_{a}^{\beta}\int_{\alpha}^{\beta}I_{0\leq
t-s}\phi(|\xi|^{2})e^{-2(t-s)(\phi(|\xi|^{2}))}dt|\mathcal{F}(f)(s,\xi)|^{2}_{H}dsd\xi.$
Changing $t-s\to t$, we find that the last term above is equal to
$\displaystyle(2\pi)^{d}\int_{\mathbb{R}^{d}}\int_{a}^{\beta}\int_{\alpha-s}^{\beta-s}I_{0\leq
t}\phi(|\xi|^{2})e^{-2t(\phi(|\xi|^{2}))}dt|\mathcal{F}(f)(s,\xi)|^{2}_{H}dsd\xi$
$\displaystyle\leq$
$\displaystyle(2\pi)^{d}\int_{\mathbb{R}^{d}}\int_{a}^{\beta}\int_{0}^{\infty}\phi(|\xi|^{2})e^{-2t(\phi(|\xi|^{2}))}dt|\mathcal{F}(f)(s,\xi)|^{2}_{H}dsd\xi.$
Since $\int_{0}^{\infty}\phi(|\xi|^{2})e^{-2t(\phi(|\xi|^{2}))}dt=1/2$, we
have
$\displaystyle\|u_{a}\|^{2}_{L_{2}([\alpha,\beta]\times\mathbb{R}^{d})}\leq
N\int_{a}^{\beta}\int_{\mathbb{R}^{d}}|\hat{f}(s,\xi)|^{2}_{H}~{}d\xi ds.$
The last expression is equal to the right-hand side of (5.1), and therefore
the lemma is proved. $\Box$
For $c>0$ and $(r,z)\in\mathbb{R}^{d+1}$, we denote
$B_{c}(z)=\\{y\in\mathbb{R}^{d}:|z-y|<c\\},\quad\hat{B}_{c}(z)=\prod_{i=1}^{d}(z^{i}-c/2,z^{i}+c/2),$
$I_{c}(r)=({r-\phi(c^{-2})^{-1}},\,r+\phi(c^{-2})^{-1}),\quad
Q_{c}(r,z)=I_{c}(r)\times\hat{B}_{c}(z).$
Also we denote
$Q_{c}(r)=Q_{c}(r,0),\quad\hat{B}_{c}=\hat{B}_{c}(0),\quad B_{c}=B_{c}(0).$
For a measurable function $h$ on $\mathbb{R}^{d}$, define the maximal
functions
$\mathbb{M}_{x}h(x):=\sup_{r>0}\frac{1}{|B_{r}(x)|}\int_{B_{r}(x)}|h(y)|dy,$
$\mathcal{M}_{x}h(x):=\sup_{\hat{B}_{r}(z)\ni
x}\frac{1}{|\hat{B}_{r}(z)|}\int_{\hat{B}_{r}(z)}|h(y)|dy.$
Similarly, for a measurable function $h=h(t)$ on $\mathbb{R}$,
$\mathbb{M}_{t}h(t):=\sup_{r>0}\frac{1}{2r}\int_{-r}^{r}|h(t+s)|\,ds.$
Also for a function $h=h(t,x)$, we set
$\mathbb{M}_{x}h(t,x):=\mathbb{M}_{x}(h(t,\cdot))(x),\quad\mathcal{M}_{x}h(t,x):=\mathcal{M}_{x}(h(t,\cdot))(x),$
$\mathbb{M}_{t}\mathbb{M}_{x}h(t,x)=\mathbb{M}_{t}(\mathbb{M}_{x}h(\cdot,x))(t),\quad\mathcal{M}_{x}\mathbb{M}_{t}\mathbb{M}_{x}h(t,x)=\mathcal{M}_{x}(\mathbb{M}_{t}\mathbb{M}_{x}h(t,\cdot))(x).$
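For the reader's convenience we recall, only schematically (what is actually used below are the versions adapted to the parabolic cubes $Q_{c}(r,z)$ introduced above), the two classical inequalities behind the argument: the Hardy-Littlewood maximal inequality $\|\mathbb{M}_{x}h\|_{L_{q}(\mathbb{R}^{d})}\leq N(d,q)\|h\|_{L_{q}(\mathbb{R}^{d})}$, $q\in(1,\infty)$, and the Fefferman-Stein inequality $\|h\|_{L_{q}}\leq N\|h^{\sharp}\|_{L_{q}}$, $q\in(1,\infty)$, for suitable $h$, where $h^{\sharp}$ denotes the sharp function of $h$.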
Since we have estimates of $p(t,x)$ only for $t\leq T$, we introduce the
following functions. Denote $\hat{p}(t,x)=p(t,x)$ if $t\in[0,T]$, and
$\hat{p}(t,x)=0$ otherwise. Define
$\hat{T}_{t}f(x)=\int_{\mathbb{R}^{d}}\hat{p}(t,y)f(x-y)dy$, and for every
$a\in\mathbb{R}$
$\hat{u}_{a}(t,x):=\hat{\mathcal{G}}_{a}(t,x):=\begin{cases}\left[\int_{a}^{t}|\phi(\Delta)^{1/2}\hat{T}_{t-s}f(s,\cdot)(x)|_{H}^{2}ds\right]^{1/2}&:\,t\geq
a\\\
\left[\int_{a}^{2a-t}|\phi(\Delta)^{1/2}\hat{T}_{2a-t-s}f(s,\cdot)(x)|_{H}^{2}ds\right]^{1/2}&:\,t<a.\end{cases}$
We use $\hat{u}(t,x)$ and $\hat{\mathcal{G}}(t,x)$ in place of
$\hat{u}_{0}(t,x)$ and $\hat{\mathcal{G}}_{0}(t,x)$ respectively. Obviously,
Lemmas 4.2 and 4.3 hold with $\hat{p}(t,x)$ instead of $p(t,x)$ (for all $t$).
Moreover,
$\hat{u}_{a}(t,x)=\hat{u}_{a}(2a-t,x)\quad\forall\,t\in\mathbb{R},\quad\quad\hat{u}_{a}(t,x)\leq
u_{a}(t,x)\quad\text{if}\,\,\,t\geq a,$ (5.2)
$\hat{u}_{a}(t,x)=u_{a}(t,x)\quad\text{if}\,\,\,t\in[a,T+a].$ (5.3)
###### Lemma 5.2.
Let $c>0$ and assume that the support of $f$ belongs to $\mathbb{R}\times B_{3dc}$. Then for
any $(t,x)\in Q_{c}(r)$
$\displaystyle\int_{Q_{c}(r)}|\hat{u}_{a}(s,y)|^{2}~{}dsdy\leq
N[|r-a|+\phi(c^{-2})^{-1}]c^{d}\mathbb{M}_{t}\mathbb{M}_{x}|f|_{H}^{2}(t,x),$
where $N$ depends only on $d$.
###### Proof.
Fix $(t,x)\in Q_{c}(r)$. Using (5.2) and Lemma 5.1, we get
$\displaystyle\int_{Q_{c}(r)}|\hat{u}_{a}(s,y)|^{2}~{}dsdy$
$\displaystyle\leq$ $\displaystyle\int_{(r-\phi(c^{-2})^{-1})\wedge
a}^{(r+\phi(c^{-2})^{-1})\vee
a}\int_{\mathbb{R}^{d}}|\hat{u}_{a}(s,y)|^{2}dyds$ $\displaystyle=$
$\displaystyle\int_{\mathbb{R}^{d}}\left[\int_{(r-\phi(c^{-2})^{-1})\wedge
a}^{a}|\hat{u}_{a}(2a-s,y)|^{2}ds+\int_{a}^{(r+\phi(c^{-2})^{-1})\vee
a}|\hat{u}_{a}(s,y)|^{2}ds\right]dy$ $\displaystyle=$
$\displaystyle\int_{\mathbb{R}^{d}}\left[\int_{a}^{2a-[(r-\phi(c^{-2})^{-1})\wedge
a]}|\hat{u}_{a}(s,y)|^{2}ds+\int_{a}^{(r+\phi(c^{-2})^{-1})\vee
a}|\hat{u}_{a}(s,y)|^{2}ds\right]dy$ $\displaystyle\leq$ $\displaystyle
N\int_{B_{3dc}}\left[\int_{a}^{2a-[(r-\phi(c^{-2})^{-1})\wedge
a]}|f(s,y)|_{H}^{2}ds+\int_{a}^{(r+\phi(c^{-2})^{-1})\vee
a}|f(s,y)|_{H}^{2}ds\right]dy.$
Since $|x-y|\leq|x|+|y|\leq 4dc$ for any $(t,x)\in Q_{c}(r)$ and $y\in
B_{3dc}$, the last term above is less than or equal to a constant times
$\int_{|x-y|\leq 4dc}\left[\int_{a}^{2a-[(r-\phi(c^{-2})^{-1})\wedge
a]}|f(s,y)|_{H}^{2}ds+\int_{a}^{(r+\phi(c^{-2})^{-1})\vee
a}|f(s,y)|_{H}^{2}ds\right]dy$ $\displaystyle\leq$ $\displaystyle
Nc^{d}\left[\int_{a}^{2a-[(r-\phi(c^{-2})^{-1})\wedge
a]}\mathbb{M}_{x}|f(s,x)|_{H}^{2}ds+\int_{a}^{(r+\phi(c^{-2})^{-1})\vee
a}\mathbb{M}_{x}|f(s,x)|_{H}^{2}ds\right]$ $\displaystyle\leq$ $\displaystyle
N[|r-a|+\phi(c^{-2})^{-1}]c^{d}\mathbb{M}_{t}\mathbb{M}_{x}|f|_{H}^{2}(t,x).$
In order to explain the last inequality above, we denote
$\int_{a}^{2a-[(r-\phi(c^{-2})^{-1})\wedge
a]}\mathbb{M}_{x}|f(s,x)|_{H}^{2}ds+\int_{a}^{(r+\phi(c^{-2})^{-1})\vee
a}\mathbb{M}_{x}|f(s,x)|_{H}^{2}ds:=I+J.$
First we estimate $I$. $I=0$ if $r-\phi(c^{-2})^{-1}\geq a$. So assume
$r-\phi(c^{-2})^{-1}<a$.
If $a\leq t\leq 2a-(r-\phi(c^{-2})^{-1})$, then we can easily get
$I\leq[|r-a|+\phi(c^{-2})^{-1}]\mathbb{M}_{t}\mathbb{M}_{x}|f|_{H}^{2}(t,x).$
If $t>2a-(r-\phi(c^{-2})^{-1})$ and $t\geq a$, then
$\displaystyle I\leq\int_{a}^{t+(t-a)}\mathbb{M}_{x}|f(s,x)|_{H}^{2}ds$
$\displaystyle\leq$ $\displaystyle
2(t-a)\mathbb{M}_{t}\mathbb{M}_{x}|f|_{H}^{2}(t,x)$ $\displaystyle\leq$
$\displaystyle
2(r+\phi(c^{-2})^{-1}-a)\mathbb{M}_{t}\mathbb{M}_{x}|f|_{H}^{2}(t,x).$
Finally, if $t<a$, then
$\displaystyle I$ $\displaystyle\leq$
$\displaystyle\int_{t}^{2a-(r-\phi(c^{-2})^{-1})}\mathbb{M}_{x}|f(s,x)|_{H}^{2}ds\leq\int_{t+t-[2a-(r-\phi(c^{-2})^{-1})]}^{2a-(r-\phi(c^{-2})^{-1})}\mathbb{M}_{x}|f(s,x)|_{H}^{2}ds$
$\displaystyle\leq$ $\displaystyle
2([2a-(r-\phi(c^{-2})^{-1})]-t)\mathbb{M}_{t}\mathbb{M}_{x}|f|_{H}^{2}(t,x)$
$\displaystyle\leq$ $\displaystyle
4[a-(r-\phi(c^{-2})^{-1})]\mathbb{M}_{t}\mathbb{M}_{x}|f|_{H}^{2}(t,x).$
The estimation of $J$ is similar. Therefore, the lemma is proved. $\Box$
We will use the following version of integration by parts: if
$0\leq\varepsilon\leq R\leq\infty$, and $F$ and $G$ are smooth enough then
(see [14])
$\displaystyle\int_{R\geq|z|\geq\varepsilon}F(z)G(|z|)~{}dz=-\int_{\varepsilon}^{R}G^{\prime}(\rho)(\int_{|z|\leq\rho}F(z)dz)d\rho$
$\displaystyle+G(R)\int_{|z|\leq
R}F(z)dz-G(\varepsilon)\int_{|z|\leq\varepsilon}F(z)dz.$ (5.4)
We generalize Lemma 5.2 as follows.
###### Lemma 5.3.
For any $(t,x)\in Q_{c}(r)$
$\displaystyle\int_{Q_{c}(r)}|\hat{u}_{a}(s,y)|^{2}~{}dsdy\leq
N[|r-a|+\phi(c^{-2})^{-1}]c^{d}\mathbb{M}_{t}\mathbb{M}_{x}|f|_{H}^{2}(t,x),$
where $N=N(d,T,\phi)$.
###### Proof.
Take $\zeta\in C_{0}^{\infty}(\mathbb{R}^{d})$ such that $\zeta=1$ in
$B_{2dc}$, $\zeta=0$ outside of $B_{3dc}$, and $0\leq\zeta\leq 1$. Set
$\mathcal{A}=\zeta f$ and $\mathcal{B}=(1-\zeta)f$. By Minkowski’s inequality,
$\hat{\mathcal{G}}_{a}f\leq\hat{\mathcal{G}}_{a}\mathcal{A}+\hat{\mathcal{G}}_{a}\mathcal{B}$.
Since $\hat{\mathcal{G}}_{a}\mathcal{A}$ can be estimated by Lemma 5.2, we may assume
that $f(t,x)=0$ for $x\in B_{2dc}$. Let $e_{1}:=(1,0,\ldots,0)$ and
$s\geq\mu\geq a$. Then since $\phi(\Delta)^{1/2}\hat{p}(t,x)$ is rotationally
invariant with respect to $x$, we have
$\displaystyle|\phi(\Delta)^{1/2}\hat{T}_{s-\mu}f(\mu,\cdot)(y)|_{H}$ (5.5)
$\displaystyle=$
$\displaystyle|\int_{\mathbb{R}^{d}}\phi(\Delta)^{1/2}\hat{p}(s-\mu,|z|e_{1})f(\mu,y-z)~{}dz|_{H}$
$\displaystyle=$
$\displaystyle|\int_{0}^{\infty}\phi(\Delta)^{1/2}\hat{p}_{x^{1}}(s-\mu,\rho
e_{1})\int_{|z|\leq\rho}f(\mu,y-z)~{}dz\,d\rho|_{H}.$
For the second equality above, (5.4) is used with
$G(|z|)=\phi(\Delta)^{1/2}\hat{p}(s-\mu,|z|e_{1})$ and $F(z)=f(\mu,y-z)$.
Observe that if $y\in\hat{B}_{c}$ then for $x\in(-c/2,c/2)^{d}$
$\displaystyle|x-y|\leq 2dc,\quad B_{\rho}(y)\subset B_{2dc+\rho}(x).$
Moreover, if $|z|\leq c$, then $|y-z|\leq 2dc$ and $f(\mu,y-z)=0$. Thus by
Corollary 3.8 and Lemma 4.2,
$\displaystyle|\phi(\Delta)^{1/2}\hat{T}_{s-\mu}f(\mu,\cdot)(y)|_{H}$
$\displaystyle\leq$ $\displaystyle
N\int_{c}^{\infty}\frac{\phi(\rho^{-2})^{1/2}}{\rho^{d+1}}\int_{|z|\leq\rho}|f|_{H}(\mu,y-z)~{}dz\,d\rho$
$\displaystyle=$ $\displaystyle
N\int_{c}^{\infty}\frac{\phi(\rho^{-2})^{1/2}}{\rho^{d+1}}\int_{B_{\rho}(y)}|f|_{H}(\mu,z)~{}dz\,d\rho$
$\displaystyle\leq$ $\displaystyle
N\int_{c}^{\infty}\frac{\phi(\rho^{-2})^{1/2}}{\rho^{d+1}}\int_{B_{2dc+\rho}(x)}|f|_{H}(\mu,z)~{}dz\,d\rho$
$\displaystyle\leq$ $\displaystyle
N\int_{c}^{\infty}\frac{\phi(\rho^{-2})^{1/2}}{\rho^{d+1}}\int_{B_{2d\rho+\rho}(x)}|f|_{H}(\mu,z)~{}dz\,d\rho$
$\displaystyle\leq$ $\displaystyle
N\mathbb{M}_{x}|f|_{H}(\mu,x)\int_{c}^{\infty}\frac{\phi(\rho^{-2})^{1/2}}{\rho}~{}d\rho$
$\displaystyle\leq$ $\displaystyle
N\phi(c^{-2})^{1/2}\mathbb{M}_{x}|f|_{H}(\mu,x).$
By Jensen’s inequality
$(\mathbb{M}_{x}|f|_{H})^{2}\leq\mathbb{M}_{x}|f|_{H}^{2}$, and therefore, we
get for any $s\geq a$ and $y\in\hat{B}_{c}$
$\displaystyle|\hat{u}_{a}(s,y)|^{2}$ $\displaystyle\leq$ $\displaystyle
N\phi(c^{-2})\int_{a}^{s}\mathbb{M}_{x}|f|_{H}^{2}(\mu,x)d\mu.$
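The Jensen step used here is the elementary observation (recorded for completeness; we only assume that $\mathbb{M}_{x}$ is a supremum of averages in the $x$-variable): for any set $B$ over which one averages,
$\left(\frac{1}{|B|}\int_{B}|f|_{H}(\mu,z)dz\right)^{2}\leq\frac{1}{|B|}\int_{B}|f|^{2}_{H}(\mu,z)dz,$
so that taking suprema gives $(\mathbb{M}_{x}|f|_{H})^{2}\leq\mathbb{M}_{x}|f|_{H}^{2}$ pointwise.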
So if $r+\phi(c^{-2})^{-1}\geq s\geq a$, then we have
$\displaystyle|\hat{u}_{a}(s,y)|^{2}$ $\displaystyle\leq$ $\displaystyle
N\phi(c^{-2})[r+\phi(c^{-2})^{-1}-(a\wedge(r-\phi(c^{-2})^{-1}))]\mathbb{M}_{t}\mathbb{M}_{x}|f|_{H}^{2}(t,x)$
$\displaystyle\leq$ $\displaystyle
N\phi(c^{-2})[|r-a|+\phi(c^{-2})^{-1}]\mathbb{M}_{t}\mathbb{M}_{x}|f|_{H}^{2}(t,x).$
If $r-\phi(c^{-2})^{-1}\leq s<a$, then we get
$\displaystyle|\hat{u}_{a}(s,y)|^{2}=|\hat{u}_{a}(2a-s,y)|^{2}$
$\displaystyle\leq$ $\displaystyle
N\phi(c^{-2})\int_{a}^{2a-s}\mathbb{M}_{x}|f|_{H}^{2}(\mu,x)d\mu$
$\displaystyle\leq$ $\displaystyle
N\phi(c^{-2})[|r-a|+\phi(c^{-2})^{-1}]\mathbb{M}_{t}\mathbb{M}_{x}|f|_{H}^{2}(t,x).$
Therefore, we get for any $(t,x)\in Q_{c}(r)$
$\displaystyle\int_{Q_{c}(r)}|\hat{u}_{a}(s,y)|^{2}~{}dsdy\leq
N[|r-a|+\phi(c^{-2})^{-1}]c^{d}\cdot\mathbb{M}_{t}\mathbb{M}_{x}|f|_{H}^{2}(t,x).$
The lemma is proved. $\Box$
###### Lemma 5.4.
Assume $2\phi(c^{-2})^{-1}<r$. Then for any $(t,x)\in Q_{c}(r)$,
$\displaystyle\int_{Q_{c}(r)}\int_{r-\phi(c^{-2})^{-1}}^{r+\phi(c^{-2})^{-1}}|\hat{u}(s_{1},y)-\hat{u}(s_{2},y)|^{2}~{}ds_{1}ds_{2}dy$
$\displaystyle\leq
N\phi(c^{-2})^{-2}c^{d}\left[\mathbb{M}_{t}\mathbb{M}_{x}|f|_{H}^{2}(t,x)+\mathcal{M}_{x}\mathbb{M}_{t}\mathbb{M}_{x}|f|_{H}^{2}(t,x)+\mathcal{M}_{x}\mathbb{M}_{t}\mathbb{M}_{x}|f|^{2}_{H}(t-T,x)\right],$
(5.6)
where $N=N(d,T,\phi)$.
###### Proof.
Due to the symmetry, to estimate the left-hand side of (5.6), we only consider the
case $s_{1}>s_{2}$. Since $r-\phi(c^{-2})^{-1}>\phi(c^{-2})^{-1}>0$, we have
$s_{2}>0$ for any $(s_{2},y)\in Q_{c}(r)$. Observe that by Minkowski’s
inequality
$\displaystyle|\hat{u}(s_{1},y)-\hat{u}(s_{2},y)|^{2}$ $\displaystyle=$
$\displaystyle|(\int_{0}^{s_{1}}|\phi(\Delta)^{1/2}\hat{T}_{s_{1}-\kappa}f(\kappa,\cdot)(y)|_{H}^{2}d\kappa)^{1/2}-(\int_{0}^{s_{2}}|\phi(\Delta)^{1/2}\hat{T}_{s_{2}-\kappa}f(\kappa,\cdot)(y)|_{H}^{2}d\kappa)^{1/2}|^{2}$
$\displaystyle\leq$
$\displaystyle\int_{s_{2}}^{s_{1}}|\phi(\Delta)^{1/2}\hat{T}_{s_{1}-\kappa}f(\kappa,\cdot)(y)|_{H}^{2}d\kappa$
$\displaystyle+\int_{0}^{s_{2}}|\phi(\Delta)^{1/2}\hat{T}_{s_{1}-\kappa}f(\kappa,\cdot)(y)-\phi(\Delta)^{1/2}\hat{T}_{s_{2}-\kappa}f(\kappa,\cdot)(y)|_{H}^{2}d\kappa$
$\displaystyle:=$ $\displaystyle I^{0}(s_{1},s_{2},y)+J^{0}(s_{1},s_{2},y).$
One can easily estimate $I^{0}$ using Lemma 5.3 with $a=s_{2}$ and $|r-a|\leq
2\phi(c^{-2})^{-1}$, and thus we only need to show that
$\int_{Q_{c}(r)}\int_{r-\phi(c^{-2})^{-1}}^{r+\phi(c^{-2})^{-1}}J^{0}(s_{1},s_{2},y)\,ds_{1}ds_{2}dy$
is less than or equal to the right-hand side of (5.6). We divide $J^{0}$ into
two parts:
$\displaystyle
I:=\int_{0}^{r-2\phi(c^{-2})^{-1}}|\phi(\Delta)^{1/2}\hat{T}_{s_{1}-\kappa}f(\kappa,\cdot)(y)-\phi(\Delta)^{1/2}\hat{T}_{s_{2}-\kappa}f(\kappa,\cdot)(y)|_{H}^{2}d\kappa$
$\displaystyle
J:=\int_{r-2\phi(c^{-2})^{-1}}^{s_{2}}|\phi(\Delta)^{1/2}\hat{T}_{s_{1}-\kappa}f(\kappa,\cdot)(y)-\phi(\Delta)^{1/2}\hat{T}_{s_{2}-\kappa}f(\kappa,\cdot)(y)|_{H}^{2}d\kappa.$
Note that since $s_{1}\geq s_{2}\geq r-\phi(c^{-2})^{-1}$, we have $\eta
s_{1}+(1-\eta)s_{2}-\kappa\geq\phi(c^{-2})^{-1}$ for any $\eta\in[0,1]$ and
$\kappa\in[0,r-2\phi(c^{-2})^{-1}]$.
If $s_{1}-\kappa>T$ and $s_{2}-\kappa\leq T$ then
$|\phi(\Delta)^{1/2}\hat{T}_{s_{1}-\kappa}f(\kappa,\cdot)(y)-\phi(\Delta)^{1/2}\hat{T}_{s_{2}-\kappa}f(\kappa,\cdot)(y)|_{H}^{2}=|\phi(\Delta)^{1/2}\hat{T}_{s_{2}-\kappa}f(\kappa,\cdot)(y)|_{H}^{2}.$
Otherwise, using $\frac{\partial}{\partial
t}\phi(\Delta)^{1/2}T_{t}f(x)=\phi(\Delta)^{3/2}T_{t}f(x)$, we get
$\displaystyle|\phi(\Delta)^{1/2}\hat{T}_{s_{1}-\kappa}f(\kappa,\cdot)(y)-\phi(\Delta)^{1/2}\hat{T}_{s_{2}-\kappa}f(\kappa,\cdot)(y)|_{H}^{2}$
$\displaystyle\leq$
$\displaystyle(s_{1}-s_{2})^{2}\int_{0}^{1}|\phi(\Delta)^{3/2}\hat{T}_{\eta
s_{1}+(1-\eta)s_{2}-\kappa}f(\kappa,\cdot)(y)|_{H}^{2}d\eta.$
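To expand this step (our sketch; we assume the differentiation formula above can be applied to $\hat{T}$ for the times considered): by the fundamental theorem of calculus,
$\phi(\Delta)^{1/2}\hat{T}_{s_{1}-\kappa}f(\kappa,\cdot)(y)-\phi(\Delta)^{1/2}\hat{T}_{s_{2}-\kappa}f(\kappa,\cdot)(y)=(s_{1}-s_{2})\int_{0}^{1}\phi(\Delta)^{3/2}\hat{T}_{\eta s_{1}+(1-\eta)s_{2}-\kappa}f(\kappa,\cdot)(y)d\eta,$
and Minkowski's and Jensen's inequalities then bound the square of its $H$-norm by $(s_{1}-s_{2})^{2}\int_{0}^{1}|\phi(\Delta)^{3/2}\hat{T}_{\eta s_{1}+(1-\eta)s_{2}-\kappa}f(\kappa,\cdot)(y)|_{H}^{2}d\eta$, which is the bound displayed above.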
Therefore,
$\displaystyle I$ $\displaystyle\leq$
$\displaystyle\int_{s_{2}-T}^{s_{1}-T}|\phi(\Delta)^{1/2}\hat{T}_{s_{2}-\kappa}f(\kappa,\cdot)(y)|_{H}^{2}d\kappa$
$\displaystyle+$
$\displaystyle(s_{1}-s_{2})^{2}\int_{0}^{r-2\phi(c^{-2})^{-1}}\int_{0}^{1}|\phi(\Delta)^{3/2}\hat{T}_{\eta
s_{1}+(1-\eta)s_{2}-\kappa}f(\kappa,\cdot)(y)|_{H}^{2}d\eta d\kappa.$
Denote $\bar{s}=\bar{s}(\eta)=\eta s_{1}+(1-\eta)s_{2}$. As in (5.5),
$\displaystyle|\phi(\Delta)^{3/2}\hat{T}_{\bar{s}-\kappa}f(\kappa,\cdot)(y)|_{H}$
$\displaystyle=$
$\displaystyle|\int_{0}^{\infty}\phi(\Delta)^{3/2}\hat{p}_{x^{1}}(\bar{s}-\kappa,\rho
e_{1})\int_{|z|\leq\rho}f(\kappa,y-z)dzd\rho|_{H}$ $\displaystyle\leq$
$\displaystyle\mathbb{M}_{x}|f|_{H}(\kappa,y)\int_{0}^{\infty}|\phi(\Delta)^{3/2}\hat{p}_{x^{1}}(\bar{s}-\kappa,\rho
e_{1})|\rho^{d}\,d\rho.$
Note that $\bar{s}-\kappa\geq\phi(c^{-2})^{-1}$ and so
$\phi^{-1}((\bar{s}-\kappa)^{-1})\leq c^{-2}$. By Lemma 4.3 and Corollary 3.8,
$\displaystyle\int_{0}^{\infty}|\phi(\Delta)^{3/2}\hat{p}_{x^{1}}(\bar{s}-\kappa,\rho
e_{1})|\rho^{d}\,d\rho$ $\displaystyle=$
$\displaystyle\int^{c}_{0}|\phi(\Delta)^{3/2}\hat{p}_{x^{1}}(\bar{s}-\kappa,\rho
e_{1})|\rho^{d}\,d\rho+\int^{\infty}_{c}|\phi(\Delta)^{3/2}\hat{p}_{x^{1}}(\bar{s}-\kappa,\rho
e_{1})|\rho^{d}\,d\rho$ $\displaystyle\leq$ $\displaystyle
N\int_{0}^{c}(\bar{s}-\kappa)^{-3/2}\phi^{-1}((\bar{s}-\kappa)^{-1})^{(d+1)/2}\rho^{d}d\rho+N(\bar{s}-\kappa)^{-1}\int^{\infty}_{c}\frac{\phi(\rho^{-2})^{1/2}}{\rho^{d+1}}\rho^{d}\,d\rho$
$\displaystyle\leq$ $\displaystyle
N(\bar{s}-\kappa)^{-1}\left[\phi(c^{-2})^{1/2}c^{-(d+1)}\int_{0}^{c}\rho^{d}d\rho+\phi(c^{-2})^{1/2}\right]\leq
N(\bar{s}-\kappa)^{-1}\phi(c^{-2})^{1/2}.$
Similarly, one can check
$|\phi(\Delta)^{1/2}\hat{T}_{s_{2}-\kappa}f(\kappa,\cdot)(y)|_{H}\leq
N\phi(c^{-2})^{1/2}\mathbb{M}_{x}|f|_{H}(\kappa,y).$
Therefore, remembering $|s_{1}-s_{2}|\leq 2\phi(c^{-2})^{-1}$, we get
$\displaystyle I$ $\displaystyle\leq$ $\displaystyle
N\phi(c^{-2})^{-1}\int_{0}^{r-2\phi(c^{-2})^{-1}}(r-\phi(c^{-2})^{-1}-\kappa)^{-2}\mathbb{M}_{x}|f|^{2}_{H}(\kappa,y)d\kappa$
$\displaystyle+N\phi(c^{-2})\int_{s_{2}-T}^{s_{1}-T}\mathbb{M}_{x}|f|^{2}_{H}(\kappa,y)d\kappa.$
Note that $\mathbb{M}_{x}|f|^{2}_{H}(\kappa,y)$ in the first integral above can be replaced by
$I_{0<\kappa<r-2\phi(c^{-2})^{-1}}\,\mathbb{M}_{x}|f|^{2}_{H}(\kappa,y)$. Thus, by integration by parts,
$\displaystyle\phi(c^{-2})^{-1}\int_{0}^{r-2\phi(c^{-2})^{-1}}(r-\phi(c^{-2})^{-1}-\kappa)^{-2}\mathbb{M}_{x}|f|^{2}_{H}(\kappa,y)d\kappa$
$\displaystyle\leq$ $\displaystyle
N\phi(c^{-2})^{-1}\int_{-\infty}^{r-2\phi(c^{-2})^{-1}}(r-\phi(c^{-2})^{-1}-\kappa)^{-3}\int_{\kappa}^{r+\phi(c^{-2})^{-1}}\mathbb{M}_{x}|f|^{2}_{H}(\nu,y)d\nu
d\kappa$ $\displaystyle\leq$ $\displaystyle
N\phi(c^{-2})^{-1}\mathbb{M}_{t}\mathbb{M}_{x}|f|^{2}_{H}(t,y)\int_{-\infty}^{r-2\phi(c^{-2})^{-1}}\frac{r+\phi(c^{-2})^{-1}-\kappa}{({r-\phi(c^{-2})^{-1}}-\kappa)^{3}}d\kappa$
$\displaystyle=$ $\displaystyle
N\phi(c^{-2})^{-1}\mathbb{M}_{t}\mathbb{M}_{x}|f|_{H}^{2}(t,y)\int_{\phi(c^{-2})^{-1}}^{\infty}\frac{\kappa+2\phi(c^{-2})^{-1}}{\kappa^{3}}d\kappa$
$\displaystyle\leq$ $\displaystyle
N\mathbb{M}_{t}\mathbb{M}_{x}|f|_{H}^{2}(t,y).$
Also,
$\displaystyle\phi(c^{-2})\int_{s_{2}-T}^{s_{1}-T}\mathbb{M}_{x}|f|^{2}_{H}(\kappa,y)d\kappa$
$\displaystyle\leq$
$\displaystyle\phi(c^{-2})\int_{r-\phi(c^{-2})^{-1}-T}^{r+\phi(c^{-2})^{-1}-T}\mathbb{M}_{x}|f|^{2}_{H}(\kappa,y)d\kappa$
$\displaystyle\leq$ $\displaystyle
2\mathbb{M}_{t}\mathbb{M}_{x}|f|^{2}_{H}(t-T,y).$
Therefore,
$I\leq
N[\mathbb{M}_{t}\mathbb{M}_{x}|f|^{2}_{H}(t,y)+\mathbb{M}_{t}\mathbb{M}_{x}|f|^{2}_{H}(t-T,y)],$
where $N$ depends only on $d,T,a_{i},\delta_{i}$ $(i=1,2,3)$, and this
certainly implies
$\displaystyle\int_{\hat{B}_{c}}\int_{r-\phi(c^{-2})^{-1}}^{r+\phi(c^{-2})^{-1}}\int_{r-\phi(c^{-2})^{-1}}^{r+\phi(c^{-2})^{-1}}I~{}ds_{1}ds_{2}dy$
$\displaystyle\leq$ $\displaystyle
N\phi(c^{-2})^{-2}c^{d}[\mathcal{M}_{x}\mathbb{M}_{t}\mathbb{M}_{x}|f|_{H}^{2}(t,x)+\mathcal{M}_{x}\mathbb{M}_{t}\mathbb{M}_{x}|f|^{2}_{H}(t-T,x)].$
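In the last step we also used that, for $(t,x)\in Q_{c}(r)$, every $y\in\hat{B}_{c}$ satisfies $|x-y|\leq 2dc$ (as in the proof of Lemma 5.3); assuming, as the notation suggests, that $\mathcal{M}_{x}$ is a maximal function built from averages in the $x$-variable, this gives
$\int_{\hat{B}_{c}}\mathbb{M}_{t}\mathbb{M}_{x}|f|^{2}_{H}(t,y)dy\leq Nc^{d}\mathcal{M}_{x}\mathbb{M}_{t}\mathbb{M}_{x}|f|^{2}_{H}(t,x),$
and similarly with $t-T$ in place of $t$; integrating over $s_{1},s_{2}\in(r-\phi(c^{-2})^{-1},r+\phi(c^{-2})^{-1})$ then produces the factor $\phi(c^{-2})^{-2}$.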
It only remains to estimate $J$. Since $s_{1}\geq s_{2}$,
$\displaystyle\int_{r-2\phi(c^{-2})^{-1}}^{s_{2}}|\phi(\Delta)^{1/2}\hat{T}_{s_{1}-\kappa}f(\kappa,\cdot)(y)-\phi(\Delta)^{1/2}\hat{T}_{s_{2}-\kappa}f(\kappa,\cdot)(y)|_{H}^{2}d\kappa$
$\displaystyle\leq$ $\displaystyle
2\int_{r-2\phi(c^{-2})^{-1}}^{s_{1}}|\phi(\Delta)^{1/2}\hat{T}_{s_{1}-\kappa}f(\kappa,\cdot)(y)|_{H}^{2}d\kappa$
$\displaystyle+2\int_{r-2\phi(c^{-2})^{-1}}^{s_{2}}|\phi(\Delta)^{1/2}\hat{T}_{s_{2}-\kappa}f(\kappa,\cdot)(y)|_{H}^{2}d\kappa.$
Therefore, we are done by Lemma 5.3 with $a=r-2\phi(c^{-2})^{-1}$. $\Box$
###### Lemma 5.5.
Assume $2\phi(c^{-2})^{-1}<r$. Then for any $(t,x)\in Q_{c}(r)$,
$\displaystyle\int_{r-\phi(c^{-2})^{-1}}^{r+\phi(c^{-2})^{-1}}\int_{\hat{B}_{c}}\int_{\hat{B}_{c}}|\hat{u}(s,y_{1})-\hat{u}(s,y_{2})|^{2}~{}dy_{1}dy_{2}ds$
$\displaystyle\leq$ $\displaystyle
N\phi(c^{-2})^{-1}c^{2d}[\mathbb{M}_{t}\mathbb{M}_{x}|f|_{H}^{2}(t,x)+\mathcal{M}_{x}\mathbb{M}_{t}\mathbb{M}_{x}|f|_{H}^{2}(t,x)],$
where $N=N(d,T,\phi)$.
###### Proof.
By Minkowski’s inequality,
$\displaystyle|\hat{u}(s,y_{1})-\hat{u}(s,y_{2})|^{2}$ $\displaystyle=$
$\displaystyle|(\int_{0}^{s}|\phi(\Delta)^{1/2}\hat{T}_{s-\kappa}f(\kappa,y_{1})|_{H}^{2}d\kappa)^{1/2}-(\int_{0}^{s}|\phi(\Delta)^{1/2}\hat{T}_{s-\kappa}f(\kappa,y_{2})|_{H}^{2}d\kappa)^{1/2}|^{2}$
$\displaystyle\leq$
$\displaystyle\int_{0}^{s}|\phi(\Delta)^{1/2}\hat{T}_{s-\kappa}f(\kappa,y_{1})-\phi(\Delta)^{1/2}\hat{T}_{s-\kappa}f(\kappa,y_{2})|_{H}^{2}d\kappa$
$\displaystyle\leq$
$\displaystyle\int_{0}^{r-2\phi(c^{-2})^{-1}}|\phi(\Delta)^{1/2}\hat{T}_{s-\kappa}f(\kappa,y_{1})-\phi(\Delta)^{1/2}\hat{T}_{s-\kappa}f(\kappa,y_{2})|_{H}^{2}d\kappa$
$\displaystyle+\int_{r-2\phi(c^{-2})^{-1}}^{s}|\phi(\Delta)^{1/2}\hat{T}_{s-\kappa}f(\kappa,y_{1})-\phi(\Delta)^{1/2}\hat{T}_{s-\kappa}f(\kappa,y_{2})|_{H}^{2}d\kappa$
$\displaystyle:=$ $\displaystyle I(s,y_{1},y_{2})+J(s,y_{1},y_{2}).$
By Lemma 5.3 with $a=r-2\phi(c^{-2})^{-1}$,
$\int_{r-\phi(c^{-2})^{-1}}^{r+\phi(c^{-2})^{-1}}\int_{\hat{B}_{c}}\int_{\hat{B}_{c}}J(s,y_{1},y_{2})~{}dy_{1}dy_{2}ds\leq
N\phi(c^{-2})^{-1}c^{2d}\,\mathbb{M}_{t}\mathbb{M}_{x}|f|_{H}^{2}(t,x).$
Therefore, we only need to estimate $I$. Let
$r-\phi(c^{-2})^{-1}<s<r+\phi(c^{-2})^{-1}$. Observe that for
$(s,y_{1}),(s,y_{2})\in Q_{c}(r)$
$I\leq
Nc^{2d}\int_{0}^{1}\int_{0}^{r-2\phi(c^{-2})^{-1}}|\nabla\phi(\Delta)^{1/2}\hat{T}_{s-\kappa}f(\kappa,\cdot)(\eta
y_{1}+(1-\eta)y_{2})|_{H}^{2}d\kappa d\eta.$ (5.8)
Recall that $\hat{p}(s-\kappa,y)=0$ if $s-\kappa>T$. Therefore if
$T+r-2\phi(c^{-2})^{-1}<s$, then
$c^{2d}\int_{0}^{1}\int_{0}^{r-2\phi(c^{-2})^{-1}}|\nabla\phi(\Delta)^{1/2}\hat{T}_{s-\kappa}f(\kappa,\cdot)(\eta
y_{1}+(1-\eta)y_{2})|_{H}^{2}d\kappa d\eta=0.$
So we assume $T+r-2\phi(c^{-2})^{-1}\geq s$, which certainly implies
$\displaystyle T\geq s-(r-2\phi(c^{-2})^{-1})\geq\phi(c^{-2})^{-1},\quad
c\leq\phi^{-1}(\frac{1}{T})^{-1/2}.$
Moreover from (3.20) and (3.21) we see that
$\displaystyle
c\leq\phi^{-1}(\frac{1}{T})^{-1/2}\leq\left((\frac{1}{a_{2}T})^{-1/(2\delta_{2})}\vee(\frac{1}{a_{3}T})^{-1/(2\delta_{3})}\right).$
(5.9)
Recall $a_{t}:=\left(\phi^{-1}(t^{-1})\right)^{-1/2}$ and
$t\phi(a_{t}^{-2})=1$. For simplicity, denote
$\bar{y}=\bar{y}(\eta)=\eta y_{1}+(1-\eta)y_{2}.$
As before, using (5.4), we get
$\displaystyle|\nabla\phi(\Delta)^{1/2}\hat{T}_{s-\kappa}f(\kappa,\cdot)(\bar{y})|_{H}$
$\displaystyle=$
$\displaystyle|\int_{0}^{\infty}\nabla\phi(\Delta)^{1/2}\hat{p}_{x^{1}}(s-\kappa,\rho
e_{1})\int_{|z|\leq\rho}f(\kappa,\bar{y}-z)dzd\rho|_{H}$ $\displaystyle\leq$
$\displaystyle|\int_{0}^{\infty}\nabla\phi(\Delta)^{1/2}\hat{p}_{x^{1}}(s-\kappa,\rho
e_{1})\int_{|\bar{y}-z|\leq\rho}f(\kappa,z)dzd\rho|_{H}$ $\displaystyle\leq$
$\displaystyle
N\mathbb{M}_{x}|f|_{H}(\kappa,\bar{y})\sum_{i=1}^{d}\int_{0}^{\infty}|\phi(\Delta)^{1/2}\hat{p}_{x^{i}x^{1}}(s-\kappa,\rho
e_{1})|\rho^{d}d\rho$ $\displaystyle\leq$ $\displaystyle
N\mathbb{M}_{x}|f|_{H}(\kappa,\bar{y})\sum_{i=1}^{d}[\int_{0}^{a_{s-\kappa}}|\phi(\Delta)^{1/2}\hat{p}_{x^{i}x^{1}}(s-\kappa,\rho
e_{1})|\rho^{d}d\rho$
$\displaystyle+\int_{a_{s-\kappa}}^{\infty}|\phi(\Delta)^{1/2}\hat{p}_{x^{i}x^{1}}(s-\kappa,\rho
e_{1})|\rho^{d}d\rho]$ $\displaystyle:=$ $\displaystyle I_{1}+I_{2}.$
By Lemma 4.2,
$\displaystyle I_{1}$ $\displaystyle\leq$ $\displaystyle
N\mathbb{M}_{x}|f|_{H}(\kappa,\bar{y})(s-\kappa)^{-1/2}\phi^{-1}((s-\kappa)^{-1})^{(d+2)/2}\int_{0}^{a_{s-\kappa}}\rho^{d}d\rho$
(5.10) $\displaystyle\leq$ $\displaystyle
N\mathbb{M}_{x}|f|_{H}(\kappa,\bar{y})(s-\kappa)^{-1/2}\phi^{-1}((s-\kappa)^{-1})^{1/2},$
and by Lemma 4.2 and Corollary 3.8,
$\displaystyle I_{2}$ $\displaystyle\leq$ $\displaystyle
N\mathbb{M}_{x}|f|_{H}(\kappa,\bar{y})\int_{a_{s-\kappa}}^{\infty}\frac{\phi(\rho^{-2})^{1/2}}{\rho^{2}}~{}d\rho$
(5.11) $\displaystyle\leq$ $\displaystyle
N\mathbb{M}_{x}|f|_{H}(\kappa,\bar{y})a_{s-\kappa}^{-1}\int_{a_{s-\kappa}}^{\infty}\frac{\phi(\rho^{-2})^{1/2}}{\rho}~{}d\rho$
$\displaystyle\leq$ $\displaystyle
N\mathbb{M}_{x}|f|_{H}(\kappa,\bar{y})a_{s-\kappa}^{-1}\phi(a_{s-\kappa}^{-2})^{1/2}$
$\displaystyle=$ $\displaystyle
N\mathbb{M}_{x}|f|_{H}(\kappa,\bar{y})(s-\kappa)^{-1/2}\phi^{-1}((s-\kappa)^{-1})^{1/2}.$
Therefore, using (5.10) and (5.11), and coming back to (5.8), we get
$\displaystyle
c^{2d}\int_{0}^{r-2\phi(c^{-2})^{-1}}\mathbb{M}_{x}|f|^{2}_{H}(\kappa,\bar{y})(s-\kappa)^{-1}\phi^{-1}((s-\kappa)^{-1})d\kappa$
(5.12) $\displaystyle\leq$ $\displaystyle
Nc^{2d}\int_{-\infty}^{r-2\phi(c^{-2})^{-1}}\mathbb{M}_{x}|f|^{2}_{H}(\kappa,\bar{y})(r-\phi(c^{-2})^{-1}-\kappa)^{-1}$
$\displaystyle\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\times\,\phi^{-1}((r-\phi(c^{-2})^{-1}-\kappa)^{-1})\,\,d\kappa$
$\displaystyle\leq$ $\displaystyle
Nc^{2d}\int_{\phi(c^{-2})^{-1}}^{\infty}\mathbb{M}_{x}|f|^{2}_{H}(r-\phi(c^{-2})^{-1}-\kappa,\bar{y})\kappa^{-1}\phi^{-1}(\kappa^{-1})d\kappa$
$\displaystyle\leq$ $\displaystyle
Nc^{2d}\int_{1}^{\infty}\mathbb{M}_{x}|f|^{2}_{H}(r-\phi(c^{-2})^{-1}-\phi(c^{-2})^{-1}\kappa,\bar{y})\kappa^{-1}\phi^{-1}(\phi(c^{-2})\kappa^{-1})d\kappa$
$\displaystyle\leq$ $\displaystyle
Nc^{2(d-1)}\int_{1}^{\infty}\mathbb{M}_{x}|f|^{2}_{H}(r-\phi(c^{-2})^{-1}-\phi(c^{-2})^{-1}\kappa,\bar{y})\kappa^{-2}d\kappa.$
For the last inequality above, we used Lemma 3.1. Indeed, for $\kappa\geq 1$
and $t>0$
$\phi(t)\kappa^{-1}=\phi(\kappa
t\kappa^{-1})\kappa^{-1}\leq\phi(t\kappa^{-1}),\quad\phi^{-1}(\phi(t)\kappa^{-1})\leq
t\kappa^{-1}.$
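To spell out how this yields the last line of (5.12): taking $t=c^{-2}$ in the display above,
$c^{2d}\kappa^{-1}\phi^{-1}(\phi(c^{-2})\kappa^{-1})\leq c^{2d}\kappa^{-1}\cdot c^{-2}\kappa^{-1}=c^{2(d-1)}\kappa^{-2}.$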
Note that $\mathbb{M}_{x}|f|^{2}_{H}(r-\phi(c^{-2})^{-1}-\phi(c^{-2})^{-1}\kappa,\bar{y})$
in (5.12) can be replaced by $I_{\kappa>1}$ times itself. Therefore, by
integration by parts,
$\displaystyle
c^{2(d-1)}\int_{1}^{\infty}\int_{0}^{\kappa}\mathbb{M}_{x}|f|^{2}_{H}(r-\phi(c^{-2})^{-1}-\phi(c^{-2})^{-1}\nu,\bar{y})d\nu~{}\kappa^{-3}d\kappa$
$\displaystyle\leq$ $\displaystyle
Nc^{2(d-1)}\phi(c^{-2})\int_{1}^{\infty}\int_{r-\phi(c^{-2})^{-1}-\phi(c^{-2})^{-1}\kappa}^{r+\phi(c^{-2})^{-1}}\mathbb{M}_{x}|f|^{2}_{H}(\nu,\bar{y})d\nu~{}\kappa^{-3}d\kappa$
$\displaystyle\leq$ $\displaystyle
Nc^{2(d-1)}\mathbb{M}_{t}\mathbb{M}_{x}|f|^{2}_{H}(t,\bar{y})\phi(c^{-2})\int_{1}^{\infty}(2\phi(c^{-2})^{-1}+\phi(c^{-2})^{-1}\kappa)\kappa^{-3}d\kappa$
$\displaystyle\leq$ $\displaystyle
Nc^{2(d-1)}\mathbb{M}_{t}\mathbb{M}_{x}|f|^{2}_{H}(t,\bar{y})\leq
N\mathbb{M}_{t}\mathbb{M}_{x}|f|^{2}_{H}(t,\bar{y}),$
where (5.9) is used for the last inequality. Thus,
$I(s,y_{1},y_{2})\leq
N\int_{0}^{1}\mathbb{M}_{t}\mathbb{M}_{x}|f|^{2}_{H}(t,\eta
y_{1}+(1-\eta)y_{2})d\eta,$
where $N$ depends only on $d,T,a_{i},\delta_{i}$ $(i=1,2,3)$. Finally, we
conclude that
$\displaystyle\int_{r-\phi(c^{-2})^{-1}}^{r+\phi(c^{-2})^{-1}}\int_{\hat{B}_{c}}\int_{\hat{B}_{c}}I(s,y_{1},y_{2})\,dy_{1}dy_{2}ds$
$\displaystyle\leq$ $\displaystyle
N\phi(c^{-2})^{-1}\int_{0}^{1}\int_{\hat{B}_{c}}\int_{\hat{B}_{c}}\mathbb{M}_{t}\mathbb{M}_{x}|f|^{2}_{H}(t,\eta
y_{1}+(1-\eta)y_{2})dy_{1}dy_{2}d\eta$ $\displaystyle\leq$ $\displaystyle
N\phi(c^{-2})^{-1}\int_{0}^{1}\int_{\hat{B}_{c}}\int_{\eta\hat{B}_{c}+(1-\eta)y_{2}}\mathbb{M}_{t}\mathbb{M}_{x}|f|^{2}_{H}(t,y)dydy_{2}d\eta$
$\displaystyle\leq$ $\displaystyle
N\phi(c^{-2})^{-1}\int_{\hat{B}_{c}}\int_{\hat{B}_{c}}\mathbb{M}_{t}\mathbb{M}_{x}|f|^{2}_{H}(t,y)dydy_{2}$
$\displaystyle\leq$ $\displaystyle
N\phi(c^{-2})^{-1}c^{2d}\mathcal{M}_{x}\mathbb{M}_{t}\mathbb{M}_{x}|f|_{H}^{2}(t,x).$
The lemma is proved. $\Box$
For a measurable function $h(t,x)$ on $\mathbb{R}^{d+1}$ define the sharp
function
$h^{\\#}(t,x):=\sup_{Q_{c}(r,z)\ni(t,x)}\frac{1}{|Q_{c}(r,z)|}\int_{Q_{c}(r,z)}|h(s,y)-h_{Q_{c}(r,z)}|~{}dsdy,$
where
$h_{Q_{c}(r,z)}=\fint_{Q_{c}(r,z)}h(s,y)~{}dsdy:=\frac{1}{|Q_{c}(r,z)|}\int_{Q_{c}(r,z)}h(s,y)~{}dsdy.$
The following two theorems are classical results and can be found in [23].
###### Theorem 5.6.
(Hardy-Littlewood) For $1<p<\infty$ and $f\in L_{p}(\mathbb{R}^{d})$, we have
$\|\mathcal{M}_{x}f\|_{L_{p}(\mathbb{R}^{d})}+\|\mathbb{M}_{x}f\|_{L_{p}(\mathbb{R}^{d})}\leq
N(d,p,\phi)\|f\|_{L_{p}(\mathbb{R}^{d})}.$
###### Theorem 5.7.
(Fefferman-Stein). For any $1<p<\infty$ and $h\in L_{p}(\mathbb{R}^{d+1})$,
$\|h\|_{L_{p}(\mathbb{R}^{d+1})}\leq
N(d,p,\phi)\|h^{\\#}\|_{L_{p}(\mathbb{R}^{d+1})}.$
###### Proof.
We can get this result from Theorem IV.2.2 in [23]. Indeed, due to (3.27) we
can easily check that the balls $Q_{c}(s,y)$ satisfy the conditions (i)-(iv)
in section 1.1 of [23] :
(i) $Q_{c}(t,x)\cap Q_{c}(s,y)\neq\emptyset$ implies $Q_{c}(s,y)\subset
Q_{N_{1}c}(t,x)$ ;
(ii) $|Q_{N_{1}c}(t,x)|\leq N_{2}|Q_{c}(t,x)|$ ;
(iii) $\cap_{c>0}\overline{Q}_{c}(t,x)=\\{(t,x)\\}$ and
$\cup_{c}Q_{c}(t,x)=\mathbb{R}^{d+1}$ ;
(iv) for each open set $U$ and $c>0$, the function $(t,x)\to|Q_{c}(t,x)\cap
U|$ is continuous. $\Box$
Proof of Theorem 1.1.
First assume $f(t,x)=0$ if $t\not\in[0,T]$.
Since the case $p=2$ of the theorem is already proved in Lemma 5.1, we assume $p>2$.
First, we prove
$((\hat{\mathcal{G}}f)^{\\#})^{2}(t,x)\leq N(G(t,x)+G(-t,x)),$ (5.13)
where
$G(t,x):=\mathbb{M}_{t}\mathbb{M}_{x}|f|^{2}_{H}(t,x)+\mathcal{M}_{x}\mathbb{M}_{t}\mathbb{M}_{x}|f|^{2}_{H}(t,x)+\mathcal{M}_{x}\mathbb{M}_{t}\mathbb{M}_{x}|f|_{H}^{2}(t-T,x).$
Because of Jensen’s inequality, to prove (5.13) it suffices to prove that for
each $Q_{c}(r,z)\in\mathcal{F}$ and $(t,x)\in Q_{c}(r,z)$,
$\displaystyle\fint_{Q_{c}(r,z)}\fint_{Q_{c}(r,z)}|\hat{\mathcal{G}}f(s_{1},y_{1})-\hat{\mathcal{G}}f(s_{2},y_{2})|^{2}~{}ds_{1}dy_{1}ds_{2}dy_{2}$
(5.14) $\displaystyle\leq$ $\displaystyle N(G(t,x)+G(-t,x)).$
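The reduction to (5.14) is the standard one; we record it for completeness. Writing $h=\hat{\mathcal{G}}f$ and $Q=Q_{c}(r,z)$, we have $h(s_{1},y_{1})-h_{Q}=\fint_{Q}\big{(}h(s_{1},y_{1})-h(s_{2},y_{2})\big{)}ds_{2}dy_{2}$, so by Jensen's inequality (applied twice)
$\left(\fint_{Q}|h-h_{Q}|~{}ds_{1}dy_{1}\right)^{2}\leq\fint_{Q}\fint_{Q}|h(s_{1},y_{1})-h(s_{2},y_{2})|^{2}~{}ds_{1}dy_{1}ds_{2}dy_{2},$
and taking the supremum over all $Q_{c}(r,z)\ni(t,x)$ shows that (5.14) indeed yields (5.13).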
To prove this we use translation and apply Lemmas 5.3, 5.4, and 5.5.
By the definition of $\hat{\mathcal{G}}f$ and the fact
$\hat{T}_{t}g(y+z)=\hat{T}_{t}g(z+\cdot)(y)$, we see for $s\geq 0$
$\displaystyle\hat{\mathcal{G}}f(s,y+z)$ $\displaystyle=$
$\displaystyle[\int_{0}^{s}|\phi(\Delta)^{1/2}\hat{T}_{s-\rho}f(\rho,\cdot)(y+z)|_{H}^{2}d\rho]^{1/2}$
(5.15) $\displaystyle=$
$\displaystyle[\int_{0}^{s}|\phi(\Delta)^{1/2}\hat{T}_{s-\rho}f(\rho,z+\cdot)(y)|_{H}^{2}d\rho]^{1/2}$
$\displaystyle=$ $\displaystyle\hat{\mathcal{G}}f(\cdot,z+\cdot)(s,y),$
and
$\hat{\mathcal{G}}f(-s,y+z)=\hat{\mathcal{G}}f(s,y+z)=\hat{\mathcal{G}}f(\cdot,z+\cdot)(s,y)=\hat{\mathcal{G}}f(\cdot,z+\cdot)(-s,y).$
Therefore we get
$\displaystyle\fint_{Q_{c}(r,z)}|\hat{\mathcal{G}}\\{f(\cdot,\cdot)\\}(s,y)|^{2}~{}dyds$
$\displaystyle=$
$\displaystyle\frac{1}{|Q_{c}(r)|}\int_{Q_{c}(r)}|\hat{\mathcal{G}}\\{f(\cdot,\cdot)\\}(s,y+z)|^{2}~{}dyds$
$\displaystyle=$
$\displaystyle\frac{1}{|Q_{c}(r)|}\int_{Q_{c}(r)}|\hat{\mathcal{G}}\\{f(\cdot,z+\cdot)\\}(s,y)|^{2}~{}dyds.$
This shows that we may assume that $z=0$ and $Q_{c}(r,z)=Q_{c}(r)$.
If $|r|\leq 2\phi(c^{-2})^{-1}$, then (5.14) follows from Lemma 5.3. Also if
$r>2\phi(c^{-2})^{-1}$ then (5.14) follows from Lemmas 5.4 and 5.5. Therefore
it only remains to consider the case $r<-2\phi(c^{-2})^{-1}$. In this case,
(5.14) follows from the identity
$\displaystyle\fint_{Q_{c}(r)}\fint_{Q_{c}(r)}|\hat{\mathcal{G}}f(s_{1},y_{1})-\hat{\mathcal{G}}f(s_{2},y_{2})|^{2}~{}ds_{1}dy_{1}ds_{2}dy_{2}$
$\displaystyle=$
$\displaystyle\fint_{Q_{c}(-r)}\fint_{Q_{c}(-r)}|\hat{\mathcal{G}}f(s_{1},y_{1})-\hat{\mathcal{G}}f(s_{2},y_{2})|^{2}~{}ds_{1}dy_{1}ds_{2}dy_{2}.$
This is because, for $r^{\prime}:=-r$, we have $r^{\prime}>2\phi(c^{-2})^{-1}$
and this case is already proved above. Thus we have proved (5.13).
Recall that $\mathcal{G}f(t,x)=\hat{\mathcal{G}}f(t,x)$ for $t\in[0,T]$. Thus
by Theorem 5.7 and (5.13)
$\displaystyle\|\mathcal{G}f\|_{L_{p}([0,T]\times\mathbb{R}^{d})}^{p}=\|I_{[0,T]}\hat{\mathcal{G}}f\|_{L_{p}(\mathbb{R}^{d+1})}^{p}\leq
N\|(\hat{\mathcal{G}}f)^{\\#}\|_{L_{p}(\mathbb{R}^{d+1})}^{p}$
$\displaystyle\leq$ $\displaystyle
N\left(\|(\mathbb{M}_{t}\mathbb{M}_{x}|f|^{2}_{H})^{1/2}(t,x)\|_{L_{p}(\mathbb{R}^{d+1})}^{p}+\|(\mathcal{M}_{x}\mathbb{M}_{t}\mathbb{M}_{x}|f|^{2}_{H})^{1/2}(t,x)\|_{L_{p}(\mathbb{R}^{d+1})}^{p}\right),$
where for the last inequality we use (5.13) and the fact that the $L_{p}$-norm
is invariant under reflection and translation. Now we use Theorem 5.6 to get
$\displaystyle\int_{\mathbb{R}^{d}}\int_{-\infty}^{\infty}(\mathbb{M}_{t}\mathbb{M}_{x}|f|_{H}^{2})^{p/2}~{}dtdx+\int_{-\infty}^{\infty}\int_{\mathbb{R}^{d}}(\mathcal{M}_{x}\mathbb{M}_{t}\mathbb{M}_{x}|f|_{H}^{2})^{p/2}~{}dtdx$
$\displaystyle\leq$ $\displaystyle
N\int_{-\infty}^{\infty}\int_{\mathbb{R}^{d}}(\mathbb{M}_{x}|f|_{H}^{2})^{p/2}~{}dtdx+N\int_{\mathbb{R}^{d}}\int_{-\infty}^{\infty}(\mathbb{M}_{t}\mathbb{M}_{x}|f|_{H}^{2})^{p/2}~{}dtdx$
$\displaystyle\leq$ $\displaystyle
N\int_{-\infty}^{\infty}\int_{\mathbb{R}^{d}}(|f|_{H}^{2})^{p/2}~{}dtdx+N\int_{-\infty}^{\infty}\int_{\mathbb{R}^{d}}(\mathbb{M}_{x}|f|_{H}^{2})^{p/2}~{}dtdx$
$\displaystyle\leq$ $\displaystyle
N\|f\|_{L_{p}(\mathbb{R}^{d+1},H)}^{p}=N\|f\|_{L_{p}([0,T]\times\mathbb{R}^{d},H)}^{p}.$
Finally, for the general case, choose $\zeta_{n}\in C_{0}^{\infty}(0,T)$ such
that $|\zeta_{n}|\leq 1$, $\zeta_{n}=1$ on $[1/n,T-1/n]$. Then from the above
inequality and the Lebesgue dominated convergence theorem we conclude that
$\displaystyle\|\mathcal{G}f\|_{L_{p}([0,T]\times\mathbb{R}^{d})}\leq\limsup_{n\to\infty}\|\mathcal{G}(f\zeta_{n})\|_{L_{p}([0,T]\times\mathbb{R}^{d})}$
$\displaystyle\leq N\limsup_{n\to\infty}\|f\zeta_{n}\|_{L_{p}(\mathbb{R}^{d+1},H)}\leq N\|f\|_{L_{p}([0,T]\times\mathbb{R}^{d},H)}.$
Hence the theorem is proved. $\Box$
## 6\. An application
In this section we construct an $L_{p}$-theory for the stochastic integro-differential equation
$du=(\phi(\Delta)u+f)dt+g^{k}dw^{k}_{t},\quad u(0)=0.$ (6.1)
The condition $u(0)=0$ is not necessary and is assumed only for the simplicity
of the presentation.
Let $(\Omega,\mathcal{I},P)$ be a complete probability space, and
$\\{\mathcal{I}_{t},t\geq 0\\}$ be an increasing filtration of $\sigma$-fields
$\mathcal{I}_{t}\subset\mathcal{I}$, each of which contains all
$(\mathcal{I},P)$-null sets. By $\mathcal{P}$ we denote the predictable
$\sigma$-field generated by $\\{\mathcal{I}_{t},t\geq 0\\}$ and we assume that
on $\Omega$ we are given independent one-dimensional Wiener processes
$w^{1}_{t},w^{2}_{t},...$, each of which is a Wiener process relative to
$\\{\mathcal{I}_{t},t\geq 0\\}$.
For $\gamma\in\mathbb{R}$, denote
$H^{\phi,\gamma}_{p}=(1-\phi(\Delta))^{-\gamma/2}L_{p}$, that is $u\in
H^{\phi,\gamma}_{p}$ if
$\|u\|_{H^{\phi,\gamma}_{p}}=\|(1-\phi(\Delta))^{\gamma/2}u\|_{p}:=\|\mathcal{F}^{-1}\\{(1+\phi(|\xi|^{2}))^{\gamma/2}\mathcal{F}(u)(\xi)\\}\|_{p}<\infty,$
(6.2)
where $\mathcal{F}$ is the Fourier transform and $\mathcal{F}^{-1}$ is the
inverse Fourier transform. Similarly, for an $\ell_{2}$-valued function
$u=(u^{1},u^{2},\cdots)$ we define
$\|u\|_{H^{\phi,\gamma}_{p}(\ell_{2})}=\||\mathcal{F}^{-1}\\{(1+\phi(|\xi|^{2}))^{\gamma/2}\mathcal{F}(u)(\xi)\\}|_{\ell_{2}}\|_{p}.$
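As an orientation (this example is ours and is not used below): if $\phi(\lambda)=\lambda^{\alpha/2}$ for some $\alpha\in(0,2)$, so that $\phi(\Delta)$ corresponds to the fractional Laplacian $\Delta^{\alpha/2}:=-(-\Delta)^{\alpha/2}$, then
$\|u\|_{H^{\phi,\gamma}_{p}}=\|\mathcal{F}^{-1}\\{(1+|\xi|^{\alpha})^{\gamma/2}\mathcal{F}(u)(\xi)\\}\|_{p},$
and a standard Fourier multiplier argument (of the same type as in the proof of Lemma 6.1 below) shows that this norm is equivalent to that of the classical Bessel potential space $H^{\alpha\gamma/2}_{p}$.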
The following result can be found, for instance, in [6].
###### Lemma 6.1.
(i) For any $\mu,\gamma\in\mathbb{R}$, the map
$(1-\phi(\Delta))^{\mu/2}:H^{\phi,\gamma}_{p}\to H^{\phi,\gamma-\mu}_{p}$ is
an isometry.
(ii) For any $\gamma\in\mathbb{R}$, $H^{\phi,\gamma}_{p}$ is a Banach space.
(iii) If $\gamma_{1}\leq\gamma_{2}$, then $H^{\phi,\gamma_{2}}_{p}\subset
H^{\phi,\gamma_{1}}_{p}$ and
$\|u\|_{H^{\phi,\gamma_{1}}_{p}}\leq c\|u\|_{H^{\phi,\gamma_{2}}_{p}}.$
(iv) Let $\gamma\geq 0$. Then there is a constant $c>1$ so that
$c^{-1}\|u\|_{H^{\phi,\gamma}_{p}}\leq(\|u\|_{p}+\|\phi(\Delta)^{\gamma/2}u\|_{p})\leq
c\|u\|_{H^{\phi,\gamma}_{p}}.$
###### Proof.
(i) follows from definition (6.2). For (ii), it suffices to prove the
completeness. Let $\\{u_{n}:n=1,2,\cdots\\}$ be a Cauchy sequence in
$H^{\phi,\gamma}_{p}$. Then $f_{n}:=(1-\phi(\Delta))^{\gamma/2}u_{n}$ is a
Cauchy sequence in $L_{p}$, and there exists $f\in L_{p}$ so that $f_{n}\to f$
in $L_{p}$. Define $u:=(1-\phi(\Delta))^{-\gamma/2}f$. Then $u\in
H^{\phi,\gamma}_{p}$ and
$\|u_{n}-u\|_{H^{\phi,\gamma}_{p}}=\|f_{n}-f\|_{p}\to 0.$
Finally (iii) and (iv) are consequences of a Fourier multiplier theorem.
Indeed, due to Theorem 0.2.6 in [22], we only need to show that for any
$\gamma_{1}\leq 0$ and $\gamma_{2}\in\mathbb{R}$
$|D^{n}[1+\phi(|\xi|^{2})^{\gamma_{1}}]|+|D^{n}[\frac{(1+\phi(|\xi|^{2}))^{\gamma_{2}}}{1+\phi(|\xi|^{2})^{\gamma_{2}}}]|+|D^{n}[\frac{(\phi(|\xi|^{2}))^{\gamma_{2}}}{(1+\phi(|\xi|^{2}))^{\gamma_{2}}}]|\leq
N(n)|\xi|^{-n}.$
This comes from Lemma 3.2. The lemma is proved. $\Box$
Denote
$\mathbb{H}^{\phi,\gamma}_{p}(T)=L_{p}(\Omega\times[0,T],\mathcal{P},H^{\phi,\gamma}_{p}),\quad\mathbb{L}_{p}(T):=\mathbb{H}^{\phi,0}_{p}(T),$
$\mathbb{H}^{\phi,\gamma}_{p}(T,\ell_{2})=L_{p}(\Omega\times[0,T],\mathcal{P},H^{\phi,\gamma}_{p}(\ell_{2})).$
###### Definition 6.2.
We write $u\in\mathcal{H}^{\phi,\gamma+2}_{p}(T)$ if
$u\in\mathbb{H}^{\phi,\gamma+2}_{p}(T)$, $u(0)=0$ and for some
$f\in\mathbb{H}^{\phi,\gamma}_{p}(T)$ and
$g=(g^{1},g^{2},\cdots)\in\mathbb{H}^{\phi,\gamma+1}_{p}(T,\ell_{2})$, it
holds that
$du=fdt+g^{k}dw^{k}_{t}$
in the sense of distributions, that is for any $\varphi\in
C^{\infty}_{0}(\mathbb{R}^{d})$, the equality
$(u(t),\varphi)=\int^{t}_{0}(f(s),\varphi)ds+\sum_{k}\int^{t}_{0}(g^{k}(s),\varphi)dw^{k}_{s}$
(6.3)
holds for all $t\leq T$ (a.s.). In this case we write
$f=\mathbb{D}u,\quad g=\mathbb{S}u.$
The norm in $\mathcal{H}^{\phi,\gamma+2}_{p}(T)$ is given by
$\|u\|_{\mathcal{H}^{\phi,\gamma+2}_{p}(T)}=\|u\|_{\mathbb{H}^{\phi,\gamma+2}_{p}(T)}+\|\mathbb{D}u\|_{\mathbb{H}^{\phi,\gamma}_{p}(T)}+\|\mathbb{S}u\|_{\mathbb{H}^{\phi,\gamma+1}_{p}(T,\ell_{2})}.$
###### Remark 6.3.
As explained in [16, Remark 3.2], for any
$g\in\mathbb{H}^{\phi,\gamma+1}_{p}(T,\ell_{2})$ and $\varphi\in
C^{\infty}_{0}(\mathbb{R}^{d})$, the series of stochastic integral
$\sum_{k}(g^{k},\varphi)dw^{k}_{t}$ converges in probability uniformly in
$[0,T]$ and defines a continuous square integrable martingale on $[0,T]$.
###### Theorem 6.4.
For any $\gamma$ and $p\geq 2$, $\mathcal{H}^{\phi,\gamma+2}_{p}(T)$ is a
Banach space. Also, for any $u\in\mathcal{H}^{\phi,\gamma+2}_{p}(T)$, we have
$u\in C([0,T],H^{\phi,\gamma}_{p})$ (a.s.) and
${\mathbb{E}}\sup_{t\leq T}\|u(t,\cdot)\|^{p}_{H^{\phi,\gamma}_{p}}\leq
N\left(\|\mathbb{D}u\|^{p}_{\mathbb{H}^{\phi,\gamma}_{p}(T)}+\|\mathbb{S}u\|^{p}_{\mathbb{H}^{\phi,\gamma}_{p}(T,\ell_{2})}\right).$
(6.4)
In particular, for $t\leq T$,
$\|u\|^{p}_{\mathbb{H}^{\phi,\gamma}_{p}(t)}\leq
N\int^{t}_{0}\|u\|^{p}_{\mathcal{H}^{\phi,\gamma+2}_{p}(s)}\,ds.$
###### Proof.
Since the operator
$(1-\phi(\Delta))^{\gamma/2}:\mathcal{H}^{\phi,\gamma+2}_{p}(T)\to\mathcal{H}^{\phi,2}_{p}(T)$
is an isometry, it suffices to prove the case $\gamma=0$. In this case
$H^{\phi,\gamma}_{p}=L_{p}$, and therefore (6.4) is proved, for instance, in
Theorem 3.4 of [9]. Also, the completeness of the space
$\mathcal{H}^{\phi,\gamma+2}_{p}(T)$ can be proved using (6.4) as in the proof
of Theorem 3.4 of [9]. $\Box$
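We also note that the last assertion of Theorem 6.4 is an immediate consequence of (6.4): since $\|u\|^{p}_{\mathbb{H}^{\phi,\gamma}_{p}(t)}={\mathbb{E}}\int^{t}_{0}\|u(s,\cdot)\|^{p}_{H^{\phi,\gamma}_{p}}ds$,
$\|u\|^{p}_{\mathbb{H}^{\phi,\gamma}_{p}(t)}\leq\int^{t}_{0}{\mathbb{E}}\sup_{\rho\leq s}\|u(\rho,\cdot)\|^{p}_{H^{\phi,\gamma}_{p}}ds\leq N\int^{t}_{0}\left(\|\mathbb{D}u\|^{p}_{\mathbb{H}^{\phi,\gamma}_{p}(s)}+\|\mathbb{S}u\|^{p}_{\mathbb{H}^{\phi,\gamma}_{p}(s,\ell_{2})}\right)ds\leq N\int^{t}_{0}\|u\|^{p}_{\mathcal{H}^{\phi,\gamma+2}_{p}(s)}ds,$
where the middle inequality is (6.4) applied on $[0,s]$.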
The following maximum principle will be used to prove the uniqueness result for
equation (6.1).
###### Lemma 6.5.
Let $\lambda>0$ be a constant. Suppose that $u$ is continuous in
$[0,T]\times\mathbb{R}^{d}$, $u(t,\cdot)\in C^{2}_{b}(\mathbb{R}^{d})$ for
each $t>0$, $u_{t}$ and $\phi(\Delta)u$ are continuous in
$(0,T]\times\mathbb{R}^{d}$, $u_{t}-\phi(\Delta)u+\lambda u=0$ for
$t\in(0,T]$, $u(t_{n},x)\to u(t,x)$ as $t_{n}\to t$ uniformly for
$x\in\mathbb{R}^{d}$, $u(0,x)=0$ for all $x\in\mathbb{R}^{d}$, and, for each
$t$, $u(t,x)\to 0$ as $|x|\to\infty$. Then $u\equiv 0$ in
$[0,T]\times\mathbb{R}^{d}$.
###### Proof.
Suppose $\sup_{(t,x)\in[0,T]\times\mathbb{R}^{d}}u(t,x)>0$. Then we claim that
there exists a $(t_{0},x_{0})\in[0,T]\times\mathbb{R}^{d}$ such that
$u(t_{0},x_{0})=\sup_{(t,x)\in[0,T]\times\mathbb{R}^{d}}u(t,x)$. The
explanation is as follows. Since there exists a sequence $(t_{n},x_{n})$ such
that $u(t_{n},x_{n})\to\sup_{(t,x)\in[0,T]\times\mathbb{R}^{d}}u(t,x)$, one
can choose a subsequence $t_{n_{k}}$ such that $t_{n_{k}}\to t_{0}$ for some
$t_{0}\in[0,T]$ as $k\to\infty$. If $\\{x_{n}\\}$ is unbounded, then there
exists a subsequence $x_{n_{k}}$ such that $|x_{n_{k}}|\to\infty$. Due to the
assumption that $u(t_{n},x)\to u(t,x)$ as $t_{n}\to t$ uniformly for
$x\in\mathbb{R}^{d}$, we have $u(t_{n_{k}},x_{n_{k}})-u(t_{0},x_{n_{k}})\to 0$
as $k\to\infty$. But since $u(t_{0},x_{n_{k}})\to 0$ as $k\to\infty$, this
contradicts the fact that
$u(t_{n_{k}},x_{n_{k}})\to\sup_{(t,x)\in[0,T]\times\mathbb{R}^{d}}u(t,x)>0$.
Therefore, $\\{x_{n}\\}$ is bounded and this means that $x_{n}$ also has a
subsequence $x_{n_{k}}$ such that $x_{n_{k}}\to x_{0}$ for some
$x_{0}\in\mathbb{R}^{d}$. Hence our claim is true. Note that, since
$u(0,x)=0$ for all $x\in\mathbb{R}^{d}$, we have $t_{0}>0$; if $t_{0}\in(0,T)$ then
$u_{t}(t_{0},x_{0})=0$, and otherwise $u_{t}(T,x_{0})\geq 0$. Recall that
$\displaystyle\phi(\Delta)u(t,x)=\lim_{\varepsilon\downarrow
0}\int_{\\{y\in\mathbb{R}^{d}:\,|y|>\varepsilon\\}}(u(t,x+y)-u(t,x))j(|y|)\,dy$
and $j$ is strictly positive. Since $u(t_{0},x_{0}+y)-u(t_{0},x_{0})\leq 0$ for all $y$, it follows that $\phi(\Delta)u(t_{0},x_{0})\leq 0$; together with $u_{t}(t_{0},x_{0})\geq 0$ and $\lambda u(t_{0},x_{0})>0$, this gives $(u_{t}-\phi(\Delta)u+\lambda
u)(t_{0},x_{0})>0$, which contradicts our assumption. So we have
$\sup_{(t,x)\in[0,T]\times\mathbb{R}^{d}}u(t,x)\leq 0$. Similarly, we can
easily show that $\inf_{(t,x)\in[0,T]\times\mathbb{R}^{d}}u(t,x)\geq 0$. The
lemma is proved. $\Box$
The following result will be used to estimate the deterministic part of (6.1).
###### Lemma 6.6.
Let $m(\tau,\xi):=\frac{\phi(|\xi|^{2})}{i\tau+\phi(|\xi|^{2})}$. Then, $m$ is
an $L_{p}(\mathbb{R}^{d+1})$-multiplier. In other words,
$\|\mathcal{F}^{-1}(m\mathcal{F}f)\|_{L_{p}(\mathbb{R}^{d+1})}\leq
N\|f\|_{L_{p}(\mathbb{R}^{d+1})},\quad\forall f\in L_{p}(\mathbb{R}^{d+1}),$
where $N$ depends only on $d$ and $p$.
###### Proof.
First we estimate derivatives of $m$. Let
$\alpha=(\alpha_{1},\cdots,\alpha_{d})\neq 0$ be a $d$-dimensional multi-index
with $\alpha_{i}=0$ or $1$ for $i=1,\ldots,d$. Assume $\beta,\gamma$ are
multi-indices so that $\beta+\gamma=\alpha$. Then from Lemma 3.2 we can easily
get
$\displaystyle|D^{\beta}(\phi(|\xi|^{2}))|\leq
N\phi(|\xi|^{2})|\xi|^{-|\beta|}.$ (6.5)
Suppose $\gamma\neq 0$. Without loss of generality assume $\gamma_{1}=1$. Then
by Leibniz’s rule and (6.5), we get
$\displaystyle|D^{\gamma}(i\tau+\phi(|\xi|^{2}))^{-1}|$ (6.6) $\displaystyle=$
$\displaystyle|D^{\gamma^{\prime}}D^{\gamma_{1}}(i\tau+\phi(|\xi|^{2}))^{-1}|$
$\displaystyle=$
$\displaystyle|D^{\gamma^{\prime}}(i\tau+\phi(|\xi|^{2}))^{-2}D_{\xi^{1}}(\phi(|\xi|^{2}))|$
$\displaystyle\leq$ $\displaystyle
N\sum_{\bar{\gamma}^{\prime}+\hat{\gamma}^{\prime}=\gamma^{\prime}}|(i\tau+\phi(|\xi|^{2}))^{-(|\bar{\gamma}^{\prime}|+2)}$
$\displaystyle\quad\quad\quad\quad\quad\times[D_{\xi^{1}}(\phi(|\xi|^{2}))]^{\bar{\gamma}_{1}^{\prime}}\cdots[D_{\xi^{d}}(\phi(|\xi|^{2}))]^{\bar{\gamma}_{d}^{\prime}}D^{\hat{\gamma}^{\prime}}D_{\xi^{1}}(\phi(|\xi|^{2}))|$
$\displaystyle\leq$ $\displaystyle
N\sum_{\bar{\gamma}^{\prime}+\hat{\gamma}^{\prime}=\gamma^{\prime}}|\tau^{2}+\phi(|\xi|^{2})^{2}|^{-(|\bar{\gamma}^{\prime}|+2)/2}\phi(|\xi|^{2})^{|\bar{\gamma}^{\prime}|}|\xi|^{-|\bar{\gamma}^{\prime}|}\phi(|\xi|^{2})|\xi|^{-(|\hat{\gamma}^{\prime}|+1)}$
$\displaystyle\leq$ $\displaystyle N\
|\tau^{2}+\phi(|\xi|^{2})^{2}|^{-1/2}|\xi|^{-|\gamma|}.$
Obviously even if $\gamma=0$, (6.6) holds. Therefore from (6.5) and (6.6), we
get
$\displaystyle|D^{\alpha}m(\tau,\xi)|\leq
N\frac{\phi(|\xi|^{2})}{|\tau|+\phi(|\xi|^{2})}|\xi|^{-|\alpha|}.$ (6.7)
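In more detail (a short expansion of the step above): by Leibniz's rule together with (6.5) and (6.6),
$|D^{\alpha}m(\tau,\xi)|\leq N\sum_{\beta+\gamma=\alpha}|D^{\beta}(\phi(|\xi|^{2}))|\,|D^{\gamma}(i\tau+\phi(|\xi|^{2}))^{-1}|\leq N\sum_{\beta+\gamma=\alpha}\phi(|\xi|^{2})|\xi|^{-|\beta|}\cdot|\tau^{2}+\phi(|\xi|^{2})^{2}|^{-1/2}|\xi|^{-|\gamma|},$
and it remains to use $|\tau^{2}+\phi(|\xi|^{2})^{2}|^{1/2}\geq\frac{1}{2}(|\tau|+\phi(|\xi|^{2}))$.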
Next let
$\hat{\alpha}=(\alpha_{0},\alpha_{1},\ldots,\alpha_{d})=(\alpha_{0},\alpha)$
be a $(d+1)$-dimensional multi-index with $\alpha_{0}\neq 0$ and
$\alpha_{i}=0$ or $1$ for $i=1,\ldots,d$. Then from (6.7) we get
$\displaystyle|D^{\hat{\alpha}}m(\tau,\xi)|$ $\displaystyle\leq$
$\displaystyle N(|\tau|+\phi(|\xi|^{2}))^{-\alpha_{0}}|D^{\alpha}m(\tau,\xi)|$
(6.8) $\displaystyle\leq$ $\displaystyle
N\frac{\phi(|\xi|^{2})}{(|\tau|+\phi(|\xi|^{2}))^{\alpha_{0}+1}}|\xi|^{-|\alpha|}.$
Now to conclude that $m$ is a multiplier, we use Theorem 4.6$'$ on p. 109 of
[24]. Due to (6.8), we see that for each $0<k\leq d+1$
$\displaystyle|\frac{\partial^{k}m}{\partial\tau\partial\xi_{1}\cdots\partial\xi_{k-1}}|\leq
N\frac{\phi(|\xi|^{2})}{(|\tau|+\phi(|\xi|^{2}))^{2}}|\xi|^{-(k-1)}.$
Therefore, for any dyadic rectangle $A=\Pi_{1\leq i\leq
k}[2^{k_{i}},2^{k_{i}+1}]$, we have
$\displaystyle\int_{A}|\frac{\partial^{k}m}{\partial\tau\partial\xi_{1}\cdots\partial\xi_{k-1}}|\,d\tau d\xi_{1}\cdots d\xi_{k-1}\leq N.$
We can easily check from (6.7) and (6.8) that the above statement remains
valid for every permutation of the variables
$\tau,\xi_{1},\ldots,\xi_{d}$. The lemma is proved. $\Box$
Here is our $L_{p}$-theory.
###### Theorem 6.7.
For any $f\in\mathbb{H}^{\phi,\gamma}_{p}(T)$ and
$g=(g^{1},g^{2},\cdots)\in\mathbb{H}^{\phi,\gamma+1}_{p}(T,\ell_{2})$,
equation (6.1) has a unique solution $u\in\mathcal{H}^{\phi,\gamma+2}_{p}(T)$,
and for this solution
$\|u\|_{\mathcal{H}^{\phi,\gamma+2}_{p}(T)}\leq
N\|f\|_{\mathbb{H}^{\phi,\gamma}_{p}(T)}+N\|g\|_{\mathbb{H}^{\phi,\gamma+1}_{p}(T,\ell_{2})}.$
(6.9)
###### Proof.
Due to the isometry, we may assume $\gamma=0$. Note that if $u$ is a solution
of (6.1) in $\mathcal{H}^{\phi,2}_{p}(T)$, then we have $u\in C([0,T],L_{p})$
(a.s.) by Theorem 6.4.
Step 1. First we prove the uniqueness of Equation (6.1). Let $u_{1}$, $u_{2}$
be solutions of equation (6.1). Then putting $v:=u_{1}-u_{2}$, we see that $v$
satisfies (6.1) with $f=g^{k}=0$.
Take a non-negative smooth function $\varphi\in C^{\infty}_{0}$ with unit
integral. For $\varepsilon>0$, define
$\varphi_{\varepsilon}(x)=\varepsilon^{-d}\varphi(x/\varepsilon)$. Also denote
$w(t,x):=e^{-\lambda t}v(t,x)$ and $w^{\varepsilon}=w*\varphi_{\varepsilon}$.
Then by plugging $\varphi_{\varepsilon}(\cdot-x)$ in (6.3) in place of
$\varphi$, we have $w^{\varepsilon}_{t}-\phi(\Delta)w^{\varepsilon}+\lambda
w^{\varepsilon}=0$. Also, one can easily check that $w^{\varepsilon}$ satisfies
the conditions in Lemma 6.5 and conclude that $w^{\varepsilon}\equiv 0$. This
certainly proves $v\equiv 0$ (a.s.).
Step 2. We consider the case $g=0$. By an approximation argument (see the next
step for details), to prove the existence and (6.9), we may assume that $f$
is sufficiently smooth in $x$ and vanishes if $|x|$ is sufficiently large. In
this case, one can easily check that for each $\omega$,
$u(t,x)=\int^{t}_{0}T_{t-s}f(s,x)ds$ (6.10)
satisfies $u_{t}=\phi(\Delta)u+f$. In addition, denoting
$\bar{p}(t,x)=I_{0\leq t}p(t,x)$ and $\bar{f}=I_{0\leq t\leq T}f(t,x)$, we see
that for $(t,x)\in[0,T]\times\mathbb{R}^{d}$
$\displaystyle u(t,x)=\bar{p}(\cdot,\cdot)\ast\bar{f}(\cdot,\cdot)(t,x).$
(6.11)
We use notation $\mathcal{F}_{d}$ and $\mathcal{F}_{d+1}$ to denote the
Fourier transform for $x$ and $(t,x)$, respectively. Moreover for convenience
we put $\mathcal{F}_{d}u(t,x)=\mathcal{F}_{d}(u(t,\cdot))(x)$. Under this
setting, observe that
$\displaystyle\mathcal{F}_{d+1}(\bar{p})(\tau,\xi)=\int_{\mathbb{R}}e^{-i\tau
t}\mathcal{F}_{d}(\bar{p})(t,\xi)dt=\int_{0}^{\infty}e^{-i\tau
t}e^{-t\phi(|\xi|^{2})}dt=\frac{1}{i\tau+\phi(|\xi|^{2})}.$
So denoting $\bar{u}=\bar{p}(\cdot,\cdot)\ast\bar{f}(\cdot,\cdot)(t,x)$ we see
$\displaystyle\mathcal{F}_{d+1}^{-1}[(1+\phi(|\xi|^{2}))\mathcal{F}_{d+1}\bar{u}](t,x)$
$\displaystyle=$
$\displaystyle\bar{u}+\mathcal{F}_{d+1}^{-1}[\phi(|\xi|^{2})\mathcal{F}_{d+1}(\bar{p})\mathcal{F}_{d+1}(\bar{f})]$
(6.12) $\displaystyle=$
$\displaystyle\bar{u}+\mathcal{F}_{d+1}^{-1}[\frac{\phi(|\xi|^{2})}{i\tau+\phi(|\xi|^{2})}\mathcal{F}_{d+1}(\bar{f})].$
Due to generalized Minkowski’s inequality, we can easily check that
$\|\bar{u}\|_{L_{p}(\mathbb{R}^{d+1})}\leq
N\|f\|_{L_{p}([0,T]\times\mathbb{R}^{d})}$. Moreover we know that
$\frac{\phi(|\xi|^{2})}{i\tau+\phi(|\xi|^{2})}$ is a
$L_{p}(\mathbb{R}^{d+1})$-multiplier from Lemma 6.6. Therefore, from (6.11) we
conclude that
$\displaystyle\|u\|_{\mathbb{H}^{\phi,2}_{p}(T)}\leq N\|f\|_{\mathbb{L}_{p}(T)}.$
Step 3. We consider the case $f=0$. First, assume that $g^{k}=0$ for all
sufficiently large $k$ (say for all $k\geq N_{0}$), and each $g^{k}$ is of the
type
$g^{k}(t,x)=\sum_{i=0}^{m_{k}}{\bf
1}_{(\tau^{k}_{i},\tau^{k}_{i+1}]}(t)g^{k_{i}}(x)\qquad\text{for }k\leq
N_{0},$ (6.13)
where $\tau^{k}_{i}$ are bounded stopping times and $g^{k_{i}}(x)\in
C^{\infty}_{0}(\mathbb{R}^{d})$. Define
$v(t,x):=\sum_{k=1}^{N_{0}}\int^{t}_{0}g^{k}(s,x)dw^{k}_{s}=\sum_{k=1}^{N_{0}}\sum_{i=1}^{m_{k}}g^{k_{i}}(x)(w^{k}_{t\wedge\tau^{k}_{i+1}}-w^{k}_{t\wedge\tau^{k}_{i}})$
and
$u(t,x):=v(t,x)+\int^{t}_{0}\phi(\Delta)T_{t-s}v(s,x)\,ds=v(t,x)+\int^{t}_{0}T_{t-s}\phi(\Delta)v(s,x)\,ds.$
(6.14)
Then $u-v=\int^{t}_{0}T_{t-s}\phi(\Delta)v(s,x)ds$, and therefore (see (6.10))
we have
$(u-v)_{t}=\phi(\Delta)(u-v)+\phi(\Delta)v=\phi(\Delta)u,$
and
$du=d(u-v)+dv=\phi(\Delta)udt+\sum_{k=1}^{N_{0}}g^{k}dw^{k}_{t}.$
Also, by (6.14) and the stochastic Fubini theorem ([18, Theorem 64]), almost
surely,
$\displaystyle u(t,x)$ $\displaystyle=$ $\displaystyle
v(t,x)+\sum_{k=1}^{N_{0}}\int^{t}_{0}\int^{s}_{0}\phi(\Delta)T_{t-s}g^{k}(r,x)dw^{k}_{r}ds$
$\displaystyle=$ $\displaystyle
v(t,x)-\sum_{k=1}^{N_{0}}\int^{t}_{0}\int^{t}_{r}\frac{\partial}{\partial
s}T_{t-s}g^{k}(r,x)dsdw^{k}_{r}$ $\displaystyle=$
$\displaystyle\sum_{k=1}^{N_{0}}\int^{t}_{0}T_{t-s}g^{k}(s,x)dw^{k}_{s}.$
Hence,
$\phi(\Delta)u(t,x)=\sum_{k=1}^{N_{0}}\int^{t}_{0}\phi(\Delta)^{1/2}T_{t-s}\phi(\Delta)^{1/2}g^{k}(s,\cdot)(x)dw^{k}_{s},$
and by Burkholder-Davis-Gundy’s inequality, we have
${\mathbb{E}}\left[\big{|}\phi(\Delta)u(t,x)\big{|}^{p}\right]\leq
c(p){\mathbb{E}}\left[\left(\int^{t}_{0}\sum_{k=1}^{N_{0}}|\phi(\Delta)^{1/2}T_{t-s}\phi(\Delta)^{1/2}g^{k}(s,\cdot)(x)|^{2}ds\right)^{p/2}\right].$
Also, similarly we get
${\mathbb{E}}\left[\big{|}u(t,x)\big{|}^{p}\right]\leq
c(p){\mathbb{E}}\left[\left(\int^{t}_{0}\sum_{k=1}^{N_{0}}|T_{t-s}g^{k}(s,\cdot)(x)|^{2}ds\right)^{p/2}\right].$
Now it is enough to use Theorem 1.1 and Lemma 6.1 to conclude
$\|u\|_{\mathbb{H}^{\phi,2}_{p}(T)}\leq
N\|g\|_{\mathbb{H}^{\phi,1}_{p}(T,\ell_{2})}.$ (6.15)
For general $g$, take a sequence $g_{n}$ so that $g_{n}\to g$ in
$\mathbb{H}^{\phi,1}_{p}(T,\ell_{2})$ and each $g_{n}$ satisfies the conditions
described above. Then, by the above result, for
$u_{n}:=\int^{t}_{0}T_{t-s}g^{k}_{n}dw^{k}_{s}$, we have
$du_{n}=\phi(\Delta)u_{n}dt+g^{k}_{n}dw^{k}_{t}$, and
$\|u_{n}\|_{\mathbb{H}^{\phi,2}_{p}(T)}\leq
N\|g_{n}\|_{\mathbb{H}^{\phi,1}_{p}(T,\ell_{2})},$
$\|u_{n}-u_{m}\|_{\mathbb{H}^{\phi,2}_{p}(T)}\leq
N\|g_{n}-g_{m}\|_{\mathbb{H}^{\phi,1}_{p}(T,\ell_{2})}.$
Thus $u_{n}$ is a Cauchy sequence in $\mathcal{H}^{\phi,2}_{p}(T)$ and
converges to a function $u\in\mathcal{H}^{\phi,2}_{p}(T)$. One easily
gets (6.15) by letting $n\to\infty$, and by Theorem 6.4 it also follows that
${\mathbb{E}}\sup_{t\leq T}\|u_{n}-u_{m}\|^{p}_{L_{p}}\to 0$ as
$n,m\to\infty$. Finally by taking the limit from
$(u_{n}(t),\varphi)=\int^{t}_{0}(\phi(\Delta)u_{n},\varphi)ds+\sum_{k}\int^{t}_{0}(g^{k}_{n},\varphi)dw^{k}_{s},\quad\forall\,t\leq
T\,(a.s.)$
and remembering ${\mathbb{E}}\sup_{t\leq T}\|u_{n}-u\|^{p}_{L_{p}}\to 0$, we
prove that $u$ satisfies
$(u(t),\varphi)=\int^{t}_{0}(\phi(\Delta)u,\varphi)ds+\sum_{k}\int^{t}_{0}(g^{k},\varphi)dw^{k}_{s},\quad\forall\,t\leq
T\,(a.s.)$
Step 4. General case. The uniqueness follows from Step 1. For the existence
and the estimate it is enough to add the solutions in Steps 2 and 3. The
theorem is proved.
$\Box$
## References
* [1] J. Bertoin, Lévy Processes, Cambridge University Press, Cambridge, 1996.
* [2] T. Chang and K. Lee, On a stochastic partial differential equation with a fractional Laplacian operator, Stochastic Process. Appl., 122 (2012), pp. 3288-3311.
* [3] Z.-Q. Chen, P. Kim and T. Kumagai. On heat kernel estimates and parabolic Harnack inequality for jump processes on metric measure spaces, Acta Mathematica Sinica, English Series 25 (2009), 1067–1086.
* [4] Z.-Q. Chen, P. Kim and R. Song. Sharp heat kernel estimates for relativistic stable processes in open sets, Ann. Probab. 40 (2012), 213–244.
* [5] Z.-Q. Chen and T. Kumagai, Heat kernel estimates for jump processes of mixed types on metric measure spaces, Probab. Theory Relat. Fields, 140 (2008), 277-317.
* [6] W. Farkas, N. Jacob and R. L. Schilling, Function spaces related to continuous negative definite functions: $\psi$-Bessel potential spaces, Dissertationes Math. (Rozprawy Mat.) 393 (2001), 62 pp.
* [7] T. Grzywny, On Harnack inequality and Hölder regularity for isotropic unimodal Lévy processes, arXiv:1301.2441 [math.PR] (2013)
* [8] I. Kim and K. Kim, A generalization of the Littlewood-paley inequality for the fractional Laplacian $(-\Delta)^{\alpha/2}$, J. Math. Anal. Appl., 388(1) (2012), 175–190.
* [9] K. Kim and P. Kim, An Lp-theory of stochastic parabolic equations with the random fractional Laplacian driven by Lévy processes, Stochastic Process. Appl., 122(12) (2012), 3921–3952.
* [10] P. Kim and A. Mimica, Harnack inequalities for subordinate Brownian motions, Electron. J. Probab., 17 (2012), #37.
* [11] P. Kim, R. Song and Z. Vondraček, Potential theory of subordinated Brownian motions revisited, Stochastic anal. applications to finance, essays in honour of Jia-an Yan. Interdisciplinary Mathematical Sciences - Vol. 13, World Scientific, 2012, pp. 243–290.
* [12] P. Kim, R. Song and Z. Vondraček, Uniform boundary Harnack principle for rotationally symmetric Lévy processes in general open sets, Sci. China Math., 55, (2012), 2193–2416.
* [13] P. Kim, R. Song and Z. Vondraček, Global uniform boundary Harnack principle with explicit decay rate and its application, arXiv:1212.3092 [math.PR] (2012)
* [14] N.V. Krylov, On the foundation of the $L_{p}$-Theory of SPDEs, Stochastic partial differential equations and applications—VII, 179-191, Lect. Notes Pure Appl. Math., 245, Chapman & Hall/CRC, Boca Raton, FL, 2006.
* [15] N.V. Krylov, A generalization of the Littlewood-Paley inequality and some other results related to stochastic partial differential equations, Ulam Quarterly, 2 (1994), no. 4, 16-26.
* [16] N.V. Krylov, An analytic approach to SPDEs, pp. 185-242 in Stochastic Partial Differential Equations: Six Perspectives, Mathematical Surveys and Monographs, 64 (1999), AMS, Providence, RI.
* [17] R. Mikulevicius and H. Pragarauskas, On Lp-estimates of some singular integrals related to jump processes, SIAM J. Math. Anal., 44 (2012), No. 4, pp. 2305-2328.
* [18] P. E. Protter, Stochastic Integration and Differential Equations. Second edition. Version 2.1. Corrected third printing, Springer-Verlag, Berlin, 2005
* [19] K.-I. Sato, Lévy Processes and Infinitely Divisible Distributions, Cambridge University Press, 1999.
* [20] R. L. Schilling, R. Song and Z. Vondraček, Bernstein Functions: Theory and Applications, de Gruyter Studies in Mathematics 37. Berlin: Walter de Gruyter, 2010.
* [21] A. V. Skorohod. Random Processes with Independent Increments. Kluwer, Dordrecht, 1991.
* [22] C. D. Sogge, Fourier Integrals in Classical Analysis, Cambridge, 1993.
* [23] E. Stein, Harmonic analysis : real-variable methods, orthogonality, and oscillatory integrals, Princeton University Press, 1993.
* [24] E. Stein, Singular integrals and differentiability properties of functions, Princeton. N.J, 1970.
1302.5073
|
# On the existence of solutions to nonlinear systems of higher order Poisson
type
Yifei Pan and Yuan Zhang
††footnotetext: Supported in part by National Science Foundation DMS-1200652.
###### Abstract
In this paper, we study the existence of solutions to higher order Poisson type systems. More
precisely, we prove a Residue type phenomenon for the fundamental solution of the
Laplacian in ${\mathbb{R}}^{n},n\geq 3$. This is analogous to the Residue
theorem for the Cauchy kernel in ${\mathbb{C}}$. With the aid of the Residue
type formula for the fundamental solution, we derive the higher order
derivative formula for the Newtonian potential and obtain its appropriate
$\mathcal{C}^{k,\alpha}$ estimates. The existence of solutions to higher order
Poisson type nonlinear systems is concluded as an application of the fixed
point theorem.
## 1 Introduction and background
We study the existence of solutions ${u}=(u_{1},\ldots,u_{N})$ to the
following nonlinear system in ${\mathbb{R}}^{n}$, $n\geq 3$:
$\triangle^{m}u(x)=a(x,u,\nabla u,\ldots,\nabla^{2m}u).$ (1)
Here $m\in{\mathbb{Z}}^{+}$, $\nabla^{j}u$ represents all $j$-th order partial
derivatives of all the components of $u$, and $a:=(a_{1},\ldots,a_{N})$ is a
vector-valued function of $x$ and the derivatives of $u$ up to order $2m$.
Label the variables of $a$ by $(p_{-1},p_{0},p_{1},\ldots,p_{2m})$, with
$p_{-1}$ representing the position of the variable $x$ and $p_{j}$
representing the position of $\nabla^{j}u$, $0\leq j\leq 2m$.
The solvability has been one of the central problems in the theory of partial
differential equations and has been explored widely since the counterexample
of Hans Lewy [Lw] in 1957. See [Ho1], [NT], [Mo], [Ho2], [BF], [Ln], [De] and
the references therein. Unlike linear equations, there is in general no
systematic theory about the solvability of nonlinear equations, not to
mention nonlinear systems. Most recently, [Pan1] investigated the existence
problem for (1) through the Cauchy-Riemann operator in the case $n=2$. In a
subsequent paper [Pan2], Pan studied the solvability when $m=1$ for general
$n\geq 3$.
Our main theorems are stated as follows.
###### Theorem 1.1.
Let $a\in\mathcal{C}^{1,\alpha}(0<\alpha<1)$. For any given appropriate
initial data $\\{c_{j}\\}_{0\leq j\leq 2m-1}$, there exist infinitely many
solutions in the class of $\mathcal{C}^{2m,\alpha}$ to the initial value
system
$\begin{split}&\triangle^{m}u(x)=a(x,u,\nabla u,\ldots,\nabla^{2m-1}u);\\\
&u(0)=c_{0};\\\ &\nabla u(0)=c_{1};\\\ &\cdots\\\
&\nabla^{2m-1}u(0)=c_{2m-1}\end{split}$ (2)
in some small neighborhood of 0. Moreover, all those solutions are of
vanishing order at most $2m$ and not radially symmetric.
Here we call $\\{c_{j}\\}_{0\leq j\leq 2m-1}$ appropriate initial data if
it satisfies the symmetry conditions that the derivatives of a vector-valued
function necessarily satisfy. This is clearly necessary for the existence of solutions
to the system (2). A function $u$ in the class of $\mathcal{C}^{k}$ is said to
be of vanishing order $m$ ($m\leq k$) at 0 if $\nabla^{j}u(0)=0$ for all
$0\leq j\leq m-1$ and $\nabla^{m}u(0)\neq 0$.
We point out that, since the solutions obtained in Theorem 1.1 are of vanishing
order at most $2m$, they are never trivial solutions. Moreover, since the
solutions are not radially symmetric, they cannot be obtained by
reducing the system to an ODE system with respect only to the radial
variable $r=|x|$.
Due to the flexibility of $a$, Theorem 1.1 can be used to construct local
$m$-harmonic maps from Euclidean space to any given Riemannian manifold. The
resulting image in the target manifold can be either smooth or singular,
depending on the given initial data.
When $a$ also depends on the $p_{2m}$ variable, we obtain the following
existence theorem with some additional assumption on $a$.
###### Theorem 1.2.
If $a\in\mathcal{C}^{2}$ and
$a(0)=\nabla_{p_{2m}}a(0)=\nabla^{2}_{p_{2m}}a(0)=0$, then there exist
infinitely many solutions in the class of $\mathcal{C}^{2m,\alpha}$
($0<\alpha<1$) to the system
$\triangle^{m}u(x)=a(x,u,\nabla u,\ldots,\nabla^{2m}u)$
in some small neighborhood of 0. Moreover, all those solutions are of
vanishing order $2m$ and not radially symmetric.
On the other hand, if the system (1) is autonomous, i.e., independent of the
variable $x$, then there exist solutions over large domains in the following
sense.
###### Theorem 1.3.
If $a\in\mathcal{C}^{2}$ and $a(0)=\nabla a(0)=0$, then for any $R>0$, there
exist infinitely many solutions in the class of $\mathcal{C}^{2m,\alpha}$ to
the autonomous system
$\triangle^{m}u=a(u,\nabla u,\ldots,\nabla^{2m}u)$
in $\\{x\in{\mathbb{R}}^{n}:|x|<R\\}$. Moreover, all those solutions are of
vanishing order $2m$ and are not radially symmetric.
We would like to point out that, even though the autonomous system in Theorem 1.3
is itself translation invariant, none of the solutions obtained there is a
trivial translation of a radial solution, as can be seen from the proof of Theorem 1.3. On
the other hand, the regularity of $a$ in Theorem 1.3 can be reduced to
$\mathcal{C}^{1,\alpha}$ if $a$ is in addition independent of the $\nabla^{2m}u$
variable. This will be seen from the proofs of Theorems 1.2 and 1.3 in Sections 9
and 10. This fact will be used in some of the following examples.
We also note that the neighborhood in Theorem 1.1 where the solutions exist is
necessarily small, as indicated by the following example of Osserman.
###### Remark 1.4.
Consider the initial value system in $n$-dimensions ($n\geq 3$):
$\begin{split}&\triangle u=|u|^{\frac{n+2}{n-2}};\\\ &u(0)=c_{0};\\\ &\nabla
u(0)=c_{1}.\end{split}$
Theorem 1.1 applies to obtain some $\mathcal{C}^{2,\alpha}$ solution over a
small neighborhood of 0, say, $\\{x\in{\mathbb{R}}^{n}:|x|<R\\}$. On the other
hand, by a result of [Os], if the solution exists in
$\\{x\in{\mathbb{R}}^{n}:|x|<R\\}$ and $c_{0}>0$, then $R\leq
nu(0)^{-\frac{2}{n-2}}=nc_{0}^{-\frac{2}{n-2}}$. Consequently, $R\rightarrow
0$ as $c_{0}\rightarrow+\infty$. This does not contradict Theorem 1.3,
since the solutions constructed in Theorem 1.3 are of vanishing
order $2m$ and hence $c_{0}=0$.
As a matter of fact, a large class of systems fits into one or more of the
three theorems. In particular, the following systems are solvable.
###### Example 1.5.
For any $p>1$ and any given $R>0$, the system
$\triangle^{m}u=\pm|u|^{p}$
has infinitely many $\mathcal{C}^{2m,\alpha}$ non-radial solutions over
$\\{x\in{\mathbb{R}}^{n}:|x|<R\\}$, as a consequence of Theorem 1.3. Here
$\alpha=\min\\{1-\epsilon,p-1\\}$ with $\epsilon$ any arbitrarily small
positive number. Those solutions are necessarily smooth after a standard
bootstrap argument.
The following system has been well studied in the literature.
###### Example 1.6.
Let $H\in\mathcal{C}^{3}$ and $H^{\prime}(0)=0$. Consider the system
$\triangle u=\nabla\big{(}H(u)\big{)}.$
According to Theorem 1.3, for any $R>0$, the above system has infinitely many
non-radial solutions in
$\mathcal{C}^{2,\alpha}(\\{x\in{\mathbb{R}}^{n}:|x|<R\\})$ for any
$0<\alpha<1$.
Indeed, a straightforward computation shows that, in the above example,
$a(u,\nabla u)=\nabla\big{(}H(u)\big{)}=H^{\prime}(u)\nabla u$,
$\nabla_{p_{0}}\big{(}a(u,\nabla u)\big{)}=H^{\prime\prime}(u)\nabla u$ and
$\nabla_{p_{1}}\big{(}a(u,\nabla u)\big{)}=H^{\prime}(u)$ and hence the system
satisfies $a\in\mathcal{C}^{2}$ and $a(0)=\nabla a(0)=0$. By Theorem 1.3, for
any $R>0$, there exist infinitely many solutions in the class of
$\mathcal{C}^{2,\alpha}(\\{x\in{\mathbb{R}}^{n}:|x|<R\\})$ and none of them is
radially symmetric.
One similarly has the solvability for the following $m$-th order Poisson type
system.
###### Example 1.7.
Let $H\in\mathcal{C}^{3}$ and $H^{\prime}(0)=0$. Then for any $R>0$,
$\triangle^{m}u=\nabla\big{(}H(u,\nabla u,\ldots,\nabla^{2m-2}u)\big{)}$
has infinitely many non-radial smooth solutions in
$\mathcal{C}^{2m,\alpha}(\\{x\in{\mathbb{R}}^{n}:|x|<R\\})$ for any
$0<\alpha<1$.
To see the solvability of the above example, a similar computation shows
$a(u,\nabla u,\ldots,\nabla^{2m-1}u)=\nabla\big{(}H(u,\nabla
u,\ldots,\nabla^{2m-2}u)\big{)}=\sum_{j=0}^{2m-2}\nabla_{j}H(u,\nabla
u,\ldots,\nabla^{2m-2}u)\nabla^{j+1}u,$
where $\nabla_{j}H$ is the derivative of $H$ with respect to $\nabla^{j}u$
variable. Furthermore,
$\nabla_{p_{0}}\big{(}a(u,\nabla
u,\ldots,\nabla^{2m-1}u)\big{)}=\sum_{j=0}^{2m-2}\nabla_{j}\nabla_{0}H(u,\nabla
u,\ldots,\nabla^{2m-2}u)\nabla^{j+1}u$
and for $k\geq 1$,
$\begin{split}\nabla_{p_{k}}\big{(}a(u,\nabla
u,\ldots,\nabla^{2m-1}u)\big{)}=&\nabla_{p_{k}}\big{(}\sum_{j=0}^{2m-2}\nabla_{j}H(u,\nabla
u,\ldots,\nabla^{2m-2}u)\nabla^{j+1}u\big{)}\\\ =&\sum_{0\leq j,k\leq
2m-2}\nabla_{j}\nabla_{k}H(u,\nabla u,\ldots,\nabla^{2m-2}u)\nabla^{j+1}u\\\
&+\nabla_{k-1}H(u,\nabla u,\ldots,\nabla^{2m-2}u).\end{split}$
Hence $a\in\mathcal{C}^{2}$ and $a(0)=\nabla a(0)=0$. By Theorem 1.3, for any
$R>0$, there exist infinitely many solutions in the class of
$\mathcal{C}^{2,\alpha}(\\{x\in{\mathbb{R}}^{n}:|x|<R\\})$ and none of them is
radially symmetric.
The proof of the main theorems relies largely on a residue-type result for the
fundamental solution $\Gamma(\cdot)$ of the Laplacian in ${\mathbb{R}}^{n},n\geq
3$. This phenomenon is motivated by the Residue theorem for holomorphic
functions in ${\mathbb{C}}$. In fact, the Cauchy integral formula shows in particular
that the integral of the Cauchy kernel against a degree $k$ holomorphic
polynomial over a simple closed curve in ${\mathbb{C}}$ is necessarily a
holomorphic polynomial of the same degree. In this paper, we show that a similar
phenomenon also holds for $\Gamma(\cdot)$. Precisely speaking, denoting by
$\mathcal{P}_{k}$ the space of polynomials of degree $k$ restricted to
$\\{x\in{\mathbb{R}}^{n}:|x|<R\\}$, we have, making use of zonal spherical
harmonics,
###### Theorem 1.8.
For any $f\in\mathcal{P}_{k}$ with $k\geq 0$,
$\int_{|y|=R}\Gamma(\cdot-y)f(y)d\sigma_{y}\in\mathcal{P}_{k}.$
Here $d\sigma_{y}$ is the surface area element of
$\\{y\in{\mathbb{R}}^{n}:|y|=R\\}$.
The above theorem plays an essential role in deriving the higher order
derivatives for the Newtonian potential and the corresponding estimates via an
induction process. Furthermore, as a by-product, the residue formula allows us to define the principal value of the higher order derivatives of the Newtonian potential.
The rest of the paper is outlined as follows. The notations for the function
spaces and the corresponding norms are given in Section 2. In Section 3, we
prove Theorem 1.8. As a consequence, the principal value of higher order
derivatives of the Newtonian potential is well defined and computed in Section
4. As another application of the residue-type phenomenon for the fundamental
solution, we derive the general higher order derivative formula and the
corresponding estimates for the Newtonian potential in Section 5 and Section
6. Section 7 is devoted to the construction of the contraction map with the
corresponding estimates necessary for the application of the fixed point
theorem following the idea of [Pan2]. After a delicate chasing of the
parameters, we show the main theorems hold in the last three sections.
In Appendix A, a formula of the higher order derivative of the Newtonian
potential over any general domain is derived making use of the same argument
as in [GT]. Appendix B computes an interesting integral concerning the
fundamental solution over the sphere making use of Gegenbauer polynomials.
This provides a practical way to compute all the residue-type formulas for the
fundamental solution.
## 2 Notations
Denote by ${\mathbf{B}}_{R}$ the open ball centered at the origin with radius
$R$ in ${\mathbb{R}}^{n},n\geq 3$, and denote by $\partial{\mathbf{B}}_{R}$
its boundary. Namely, ${\mathbf{B}}_{R}=\\{x\in{\mathbb{R}}^{n}:|x|<R\\}$ and
$\partial{\mathbf{B}}_{R}=\\{x\in{\mathbb{R}}^{n}:|x|=R\\}$. Here $|\cdot|$ is
the standard Euclidean norm. We consider the following function spaces and
norms over ${\mathbf{B}}_{R}$ following [Pan2].
Let $\mathcal{C}({\mathbf{B}}_{R})$ be the set of continuous functions in
${\mathbf{B}}_{R}$ and $\mathcal{C}^{\alpha}({\mathbf{B}}_{R})$ the Hölder
space in ${\mathbf{B}}_{R}$ with order $\alpha$. For
$f\in\mathcal{C}^{\alpha}({\mathbf{B}}_{R})$, the norm of $f$ is defined by
$\parallel f\parallel_{\alpha}:=\parallel f\parallel+R^{\alpha}H_{\alpha}[f],$
where
$\begin{split}&\parallel f\parallel:=\sup\\{|f(x)|:x\in{\mathbf{B}}_{R}\\};\\\
&H_{\alpha}[f]:=\sup\big{\\{}\frac{|f(x)-f(x^{\prime})|}{|x-x^{\prime}|^{\alpha}}:x,x^{\prime}\in{\mathbf{B}}_{R}\big{\\}}\end{split}$
if $\parallel f\parallel_{\alpha}$ is finite. We note when $\parallel
f\parallel_{\alpha}$ is finite, the trivial extension of $f$ onto
$\bar{\mathbf{B}}_{R}$ is then in
$\mathcal{C}^{\alpha}(\bar{\mathbf{B}}_{R})$.
$\mathcal{C}^{\alpha}(\bar{\mathbf{B}}_{R})$ is a Banach space under the norm
$\parallel\cdot\parallel_{\alpha}$.
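For orientation, the following crude discrete sketch (illustrative only; the sample function $f(x)=|x|^{1/2}$, the value $\alpha=1/2$, and the random sample points are arbitrary choices) approximates $\parallel f\parallel$, $H_{\alpha}[f]$ and $\parallel f\parallel_{\alpha}$ on finitely many points of ${\mathbf{B}}_{R}\subset{\mathbb{R}}^{3}$.

```python
import numpy as np

# Discrete approximation of ||f||, H_alpha[f] and ||f||_alpha = ||f|| + R^alpha * H_alpha[f]
# on random sample points of B_R in R^3, for the sample function f(x) = |x|^{1/2}.
rng = np.random.default_rng(0)
R, alpha = 0.5, 0.5
x = rng.uniform(-R, R, size=(1500, 3))
x = x[np.linalg.norm(x, axis=1) < R]                  # keep the points inside B_R

f = np.sqrt(np.linalg.norm(x, axis=1))                # genuinely C^{0,1/2}
sup_norm = np.max(np.abs(f))

diff_f = np.abs(f[:, None] - f[None, :])
diff_x = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
mask = diff_x > 0
holder_seminorm = np.max(diff_f[mask] / diff_x[mask] ** alpha)

print(sup_norm, holder_seminorm, sup_norm + R ** alpha * holder_seminorm)
```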
For $k>0$, denote by $\mathcal{C}^{k}({\mathbf{B}}_{R})$ the collection of all
functions in ${\mathbf{B}}_{R}$ whose partial derivatives up to order $k$ exist and are continuous. Denote by $\|\cdot\|_{\mathcal{C}^{k}}$ the
corresponding norm, where
$\|f\|_{\mathcal{C}^{k}}:=\sup\\{\parallel D^{\beta}f\parallel:|\beta|=k\\}$
if $\|f\|_{\mathcal{C}^{k}}$ is finite.
$\mathcal{C}^{k,\alpha}({\mathbf{B}}_{R})$ is the subset of
$\mathcal{C}^{k}({\mathbf{B}}_{R})$ whose $k$-th order derivatives belong to
$\mathcal{C}^{\alpha}({\mathbf{B}}_{R})$. For any multi-index
$\beta=(\beta_{1},\ldots,\beta_{n})$ with nonnegative entries, define
$|\beta|:=\sum_{j=1}^{n}\beta_{j}$ and $\beta!:=\beta_{1}!\cdots\beta_{n}!$.
Given any $f\in\mathcal{C}^{k}({\mathbf{B}}_{R})$, we represent
$D^{\beta}f:=\partial_{1}^{\beta_{1}}\partial_{2}^{\beta_{2}}\cdots\partial_{n}^{\beta_{n}}f$
with $\partial_{j}$ the partial derivative with respect to the variable $x_{j}$.
If $f\in\mathcal{C}^{k,\alpha}({\mathbf{B}}_{R})$, we define the semi-norm
$\parallel f\parallel^{(k)}_{\alpha}:=\sup\\{\parallel
D^{\beta}f\parallel_{\alpha}:|\beta|=k\\}$
if $\parallel f\parallel^{(k)}_{\alpha}$ is finite.
Of special interest, we introduce the subset of
$\mathcal{C}^{k,\alpha}({\mathbf{B}}_{R})$ as follows.
$\mathcal{C}^{k,\alpha}_{0}({\mathbf{B}}_{R}):=\\{f\in\mathcal{C}^{k,\alpha}({\mathbf{B}}_{R}):D^{\beta}f(0)=0,|\beta|\leq
k-1\\}.$
The following lemmas play an important role in the rest of the paper. The
proofs can be found in [Pan2] and are omitted here.
###### Lemma 2.1.
[Pan2] If $f\in\mathcal{C}^{k,\alpha}(\bar{\mathbf{B}}_{R})$, then for any
$x,x^{\prime}\in\bar{\mathbf{B}}_{R}$ and $0<\alpha<1$,
$|f(x^{\prime})-T_{k}^{x}(f)(x^{\prime})|\leq
C|x-x^{\prime}|^{k+\alpha}\big{(}\sum_{|\mu|=k}H_{\alpha}[D^{\mu}f]\big{)}.$
Here $T_{k}^{x}(f)(x^{\prime})$ is the $k$-th order power series expansion of
$f$ at $x$.
###### Lemma 2.2.
[Pan2] If $f\in\mathcal{C}_{0}^{k,\alpha}(\bar{\mathbf{B}}_{R})$, then for any
$l\leq k$ and $0<\alpha<1$,
$\parallel f\parallel_{\alpha}^{(l)}\leq CR^{k-l}\parallel
f\parallel_{\alpha}^{(k)}.$
###### Remark 2.3.
Lemma 2.2 can be shown to hold for $\alpha=0$.
As a consequence of Lemma 2.2,
$\mathcal{C}^{k,\alpha}_{0}(\bar{\mathbf{B}}_{R})$ ($0<\alpha<1$) becomes a
Banach space under the norm $\parallel\cdot\parallel_{\alpha}^{(k)}$.
Here and in the rest of the paper, we use $C$ to denote a positive constant depending only on $n,\alpha$ and $N$, where $0<\alpha<1$, $n\geq 3$ and $N\geq 1$. In particular, we point out that $C$ is independent of $R$, which will later be a key parameter in the proofs.
## 3 Residue-type theorem for the fundamental solution in ${\mathbb{R}}^{n}$
In complex analysis, the Residue theorem or the Cauchy integral formula
states, for any holomorphic function $f$ in
${\mathbf{B}}_{R}\subset{\mathbb{C}}$ and any integer $k\geq 1$,
$z\in{\mathbf{B}}_{R}$,
$\int_{|\xi|=R}\frac{f(\xi)}{\xi-z}d\xi=2\pi if(z),$
and hence
$\int_{|\xi|=R}D_{z}^{k}\big{(}\frac{1}{\xi-z}\big{)}f(\xi)d\xi=k!\int_{|\xi|=R}\frac{f(\xi)}{(\xi-z)^{k+1}}d\xi=2\pi
if^{(k)}(z),$
where $f^{(k)}$ is the $k$-th derivative of $f$ with respect to $z$. Recall
the holomorphic kernel $\frac{1}{z}$ is the Cauchy kernel for $\bar{\partial}$
operator in ${\mathbb{C}}$ and is also related to the first derivative of the
fundamental solution in ${\mathbb{R}}^{2}$. As a special case, if $f$ is a
holomorphic polynomial of degree $k$ in ${\mathbf{B}}_{R}\subset{\mathbb{C}}$,
then for $z\in{\mathbf{B}}_{R}$,
$\int_{|\xi|=R}D_{z}^{k+1}\big{(}\frac{1}{\xi-z}\big{)}f(\xi)d\xi=0.$ (3)
When $n\geq 3$, there is no holomorphic kernel for $\bar{\partial}$ operator
in general.
On the other hand, the fundamental solution of Laplacian in
${\mathbb{R}}^{n},n\geq 3$ is
$\Gamma(x)=c_{n}\frac{1}{|x|^{n-2}}.$
Here $c_{n}=\frac{1}{n(2-n)\omega_{n}}$, with $\omega_{n}$ the surface area of
the unit sphere in ${\mathbb{R}}^{n}$. With respect to the fundamental
solution, the Newtonian potential of $f$ in ${\mathbf{B}}_{R}$ is defined by
$\mathcal{N}(f)(x):=\int_{{\mathbf{B}}_{R}}\Gamma(x-y)f(y)dy$
for any integrable function $f$ in ${\mathbf{B}}_{R}$ and for
$x\in{\mathbf{B}}_{R}$. The Newtonian potential has attracted great attention in physics, and there are many references concerning it; see, for instance, [GT] and [NW].
The proof of Theorem 1.8 makes use of zonal spherical harmonics $Z_{x}^{(l)}$
and their reproducing property for spherical harmonics. See [SW] for the
reference. In detail, let $H_{l}$ be the set of all spherical harmonics of
degree $l$, then for any $f\in H_{l}$,
$f(x)=\int_{\partial{\mathbf{B}}_{1}}Z_{x}^{(l)}(y)f(y)d\sigma_{y}.$
Moreover, if $f\in H_{k}$ with $l\neq k$, then
$0=\int_{\partial{\mathbf{B}}_{1}}Z_{x}^{(l)}(y)f(y)d\sigma_{y}.$
On the other hand, denote by $\mathcal{P}^{h}_{k}$ the space of all
homogeneous polynomials of degree $k$ restricted in ${\mathbf{B}}_{R}$. For
any $f\in\mathcal{P}^{h}_{k}$, there exist $P_{j}$’s, some homogeneous harmonic
polynomials of degree $j$, such that
$f(x)=P_{k}(x)+|x|^{2}P_{k-2}(x)+\cdots+|x|^{k}P_{0}(x),\ \text{when $k$ is
even,}$ (4)
and
$f(x)=P_{k}(x)+|x|^{2}P_{k-2}(x)+\cdots+|x|^{k-1}P_{1}(x),\ \text{when $k$ is
odd.}$ (5)
Note $P_{j}|_{\partial{\mathbf{B}}_{1}}\in H_{j}$. We are now in a position to
prove the residue-type Theorem 1.8 for the fundamental solution $\Gamma$ in
${\mathbb{R}}^{n}$.
Proof of Theorem 1.8: Without loss of generality, we assume $f$ is a monomial
of degree $k$. We also assume that $R=1$. This is due to the following simple
fact that for any $f\in\mathcal{P}^{h}_{k}$,
$\int_{\partial{\mathbf{B}}_{R}}\Gamma(x-y)f(y)d\sigma_{y}=R^{k+1}\int_{\partial{\mathbf{B}}_{1}}\Gamma(\frac{x}{R}-y)f(y)d\sigma_{y}.$
In terms of zonal spherical harmonics, we have, when $x\in{\mathbf{B}}_{1}$,
$\Gamma(x-y)=\sum_{l=0}^{\infty}C_{n,l}\frac{|x|^{l}}{|y|^{l+n-2}}Z^{(l)}_{\frac{x}{|x|}}(\frac{y}{|y|}),$
where $C_{n,l}=\frac{2l+n-2}{(n-2)\omega_{n}}$. Letting
$y\in\partial{\mathbf{B}}_{1}$, the above expression for
$x\in{\mathbf{B}}_{1}$ simplifies as
$\Gamma(x-y)=\sum_{l=0}^{\infty}C_{n,l}|x|^{l}Z^{(l)}_{\frac{x}{|x|}}({y}).$
(6)
When $k$ is odd, letting $y\in\partial{\mathbf{B}}_{1}$ and making use of (5),
one has
$f(y)=P_{k}(y)+P_{k-2}(y)+\cdots+P_{1}(y)$ (7)
for some spherical harmonics $P_{j}\in H_{j}$. Therefore, combining (6) and (7)
together with the reproducing property of the zonal spherical harmonics, we
have
$\begin{split}\int_{\partial{\mathbf{B}}_{1}}\Gamma(x-y)f(y)d\sigma_{y}&=\int_{\partial{\mathbf{B}}_{1}}\bigg{(}\sum_{l=0}^{\infty}C_{n,l}|x|^{l}Z^{(l)}_{\frac{x}{|x|}}({y})\bigg{)}\bigg{(}P_{k}(y)+P_{k-2}(y)+\cdots+P_{1}(y)\bigg{)}d\sigma_{y}\\\
&=C_{n,k}|x|^{k}P_{k}(\frac{x}{|x|})+C_{n,k-2}|x|^{k-2}P_{k-2}(\frac{x}{|x|})+\cdots+C_{n,1}|x|P_{1}(\frac{x}{|x|})\\\
&=C_{n,k}P_{k}(x)+C_{n,k-2}P_{k-2}(x)+\cdots+C_{n,1}P_{1}(x)\in\mathcal{P}_{k}.\end{split}$
The case when $k$ is even can be treated similarly and is omitted here.
We remark that, despite the constructive proof of the residue-type formula in Theorem 1.8 for the fundamental solution, the integral can actually be
computed directly. See Appendix B for a computation of the formula when $k=1$.
The same method can practically be used for general $k>1$.
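In addition, the statement of Theorem 1.8 can be checked numerically. The following Python sketch (illustrative only; it takes $n=3$, $R=1$, $f(y)=y_{1}$, and normalizes the kernel as $1/|x-y|$, the multiplicative constant being irrelevant for the degree statement) evaluates the surface integral at a few interior points; the Legendre expansion of the kernel predicts the value $\frac{4\pi}{3}x_{1}$, a polynomial of degree one, and the computed ratios confirm this.

```python
import numpy as np

# Numerical sanity check of Theorem 1.8 for n = 3, R = 1, f(y) = y_1, with the kernel
# normalized as 1/|x - y|.  The Legendre expansion of the kernel predicts
#     int_{|y|=1} |x - y|^{-1} y_1 dsigma_y = (4*pi/3) * x_1   for |x| < 1,
# a polynomial of degree one, as the theorem asserts.

def surface_integral(x, n_theta=400, n_phi=400):
    theta = (np.arange(n_theta) + 0.5) * np.pi / n_theta
    phi = (np.arange(n_phi) + 0.5) * 2 * np.pi / n_phi
    T, P = np.meshgrid(theta, phi, indexing="ij")
    y = np.stack([np.sin(T) * np.cos(P), np.sin(T) * np.sin(P), np.cos(T)], axis=-1)
    kernel = 1.0 / np.linalg.norm(y - x, axis=-1)
    integrand = kernel * y[..., 0] * np.sin(T)          # f(y) = y_1 and the area element
    return integrand.sum() * (np.pi / n_theta) * (2 * np.pi / n_phi)

for x in [np.array([0.3, 0.0, 0.0]),
          np.array([0.1, 0.2, -0.4]),
          np.array([-0.5, 0.25, 0.1])]:
    val = surface_integral(x)
    print(x, val, val / x[0], 4 * np.pi / 3)            # the ratios agree with 4*pi/3
```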
As an immediate consequence of Theorem 1.8, we have the following corollary,
analogous to (3) with respect to the Cauchy kernel in ${\mathbb{C}}$.
###### Corollary 3.1.
For any $f\in\mathcal{P}_{k}$ and any multi-index $\beta$ with $|\beta|\geq
k+1$,
$\int_{\partial{\mathbf{B}}_{R}}D_{x}^{\beta}\Gamma(x-y)f(y)d\sigma_{y}=0$
for $x\in{\mathbf{B}}_{R}$.
As another consequence of Theorem 1.8, we also have
###### Theorem 3.2.
For any $f\in\mathcal{P}_{k}$ and any multi-index $\beta$ with $|\beta|\geq
k+2$,
$\int_{{\mathbf{B}}_{R}\setminus{\mathbf{B}}_{\epsilon}(z)}D_{x}^{\beta}\Gamma(x-y)f(y)dy=0,$
(8)
when $x\in{\mathbf{B}}_{\epsilon}(z)\subset{\mathbf{B}}_{R}$. Here
${\mathbf{B}}_{\epsilon}(z)$ is the ball centered at $z$ with radius
$\epsilon$.
Proof of Theorem 3.2: Write $\beta=(\beta_{1},\ldots,\beta_{n})$. Without loss
of generality, assume $R=1$, $\beta_{1}>0$ and $f$ is a monomial of degree
$k$. Moreover we write $\beta^{\prime}=(\beta_{1}-1,\ldots,\beta_{n})$. Hence
applying Stokes’ Theorem on $D^{\beta^{\prime}}\Gamma(x-y)f(y)$ over the
domain ${\mathbf{B}}_{1}\setminus{\mathbf{B}}_{\epsilon}(z)$, one has
$\begin{split}&\int_{{\mathbf{B}}_{1}\setminus{\mathbf{B}}_{\epsilon}(z)}D_{y}^{\beta}\Gamma(y-x)f(y)dy\\\
=&-\int_{{\mathbf{B}}_{1}\setminus{\mathbf{B}}_{\epsilon}(z)}D_{y}^{\beta^{\prime}}\Gamma(y-x)\partial_{1}f(y)dy+\int_{\partial{\mathbf{B}}_{1}}D_{y}^{\beta^{\prime}}\Gamma(y-x)f(y)y_{1}d\sigma_{y}\\\
&-\int_{\partial{\mathbf{B}}_{\epsilon}(z)}D_{y}^{\beta^{\prime}}\Gamma(y-x)f(y)\frac{y_{1}-z_{1}}{|y-z|}d\sigma_{y}\\\
\end{split}$ (9)
Write
$I:=\int_{\partial{\mathbf{B}}_{1}}D_{y}^{\beta^{\prime}}\Gamma(y-x)f(y)y_{1}d\sigma_{y}$
and
$II:=\int_{\partial{\mathbf{B}}_{\epsilon}(z)}D_{y}^{\beta^{\prime}}\Gamma(y-x)f(y)\frac{y_{1}-z_{1}}{|y-z|}d\sigma_{y}$.
We show next that $I=II$ in ${\mathbf{B}}_{1}$ and therefore
$\int_{{\mathbf{B}}_{1}\setminus{\mathbf{B}}_{\epsilon}(z)}D_{y}^{\beta}\Gamma(y-x)f(y)dy=-\int_{{\mathbf{B}}_{1}\setminus{\mathbf{B}}_{\epsilon}(z)}D_{y}^{\beta^{\prime}}\Gamma(y-x)\partial_{1}f(y)dy.$
(10)
First note for $II$, after a change of coordinates by letting
$y=z+\epsilon\tau$,
$\begin{split}II=&\epsilon^{2-|\beta|}\int_{\partial{\mathbf{B}}_{1}}D_{y}^{\beta^{\prime}}\Gamma(\frac{z-x}{\epsilon}+\tau)f(z+\epsilon\tau)\tau_{1}d\sigma_{\tau}\\\
=&\epsilon^{2-|\beta|}\int_{\partial{\mathbf{B}}_{1}}D_{y}^{\beta^{\prime}}\Gamma(\frac{z-x}{\epsilon}+\tau)\big{(}f(\epsilon\tau)+P_{k-1}(\tau)\big{)}\tau_{1}d\sigma_{\tau}\\\
=&\epsilon^{2-|\beta|+k}\int_{\partial{\mathbf{B}}_{1}}D_{y}^{\beta^{\prime}}\Gamma(\frac{z-x}{\epsilon}+\tau)f(\tau)\tau_{1}d\sigma_{\tau}.\end{split}$
(11)
Here $P_{k-1}(\cdot)$ is some polynomial of degree $k-1$. The last identity is
due to the fact that $f$ is a monomial together with an application of
Corollary 3.1 onto $P_{k-1}(\tau)\tau_{1}$.
When $|\beta|\geq k+3$ and hence $|\beta^{\prime}|\geq k+2$, $I$ and $II$ are both zero due to Corollary 3.1 and we are done. When $|\beta|=k+2$, from
(11) we have
$II=\int_{\partial{\mathbf{B}}_{1}}D_{y}^{\beta^{\prime}}\Gamma(\tau+\frac{z-x}{\epsilon})f(\tau)\tau_{1}d\sigma_{\tau}.$
On the other hand, $I$ by Theorem 1.8 is constant independent of
$x\in{\mathbf{B}}_{1}$ and therefore $I=II$ when $|\beta|\geq k+2$ and hence
(10) holds.
Now applying the induction process on (10), we get immediately, for
$x\in{\mathbf{B}}_{1}$,
$\displaystyle\int_{{\mathbf{B}}_{1}\setminus{\mathbf{B}}_{\epsilon}(z)}D_{y}^{\beta}\Gamma(y-x)f(y)dy=$
$\displaystyle-\int_{{\mathbf{B}}_{1}\setminus{\mathbf{B}}_{\epsilon}(z)}D_{y}^{\beta^{\prime}}\Gamma(y-x)\partial_{1}f(y)dy$
$\displaystyle=$ $\displaystyle\cdots$ $\displaystyle=$ $\displaystyle
C(f)\int_{{\mathbf{B}}_{1}\setminus{\mathbf{B}}_{\epsilon}(z)}D_{y}^{\mu}\Gamma(y-x)dy$
$\displaystyle=$ $\displaystyle 0.$
Here $\mu$ is some multi-index with $|\mu|\geq 2$ and $C(f)$ is some constant
dependent only on $f$ and $\beta$.
## 4 Principal value of higher order derivatives of the Newtonian potential
As a side product of Theorem 3.2, we show the principal value of the
derivatives of the Newtonian potential exists. Denote by
$\mathcal{C}^{\infty}_{c}({\mathbb{R}}^{n})$ the set of smooth functions in
${\mathbb{R}}^{n}$ with compact supports. For any
$f\in\mathcal{C}^{\infty}_{c}({\mathbb{R}}^{n})$, recall the principal value
of the derivatives of the Newtonian potential is defined as follows.
###### Definition 4.1.
$p.v.\int_{{\mathbb{R}}^{n}}D^{\beta}\Gamma(x-y)f(y)dy:=\lim_{\epsilon\rightarrow
0}\int_{{\mathbb{R}}^{n}\setminus{\mathbf{B}}_{\epsilon}(x)}D^{\beta}\Gamma(x-y)f(y)dy$.
###### Theorem 4.2.
For any multi-index $\beta$ with $|\beta|=k\geq 2$,
$f\in\mathcal{C}_{c}^{\infty}$,
$p.v.\int_{{\mathbb{R}}^{n}}D^{\beta}\Gamma(x-y)f(y)dy$ exists for all
$x\in{\mathbb{R}}^{n}$. Moreover, if $suppf\subset{\mathbf{B}}_{R}$, then
$p.v.\int_{{\mathbb{R}}^{n}}D^{\beta}\Gamma(x-y)f(y)dy=\int_{{\mathbf{B}}_{R}}D^{\beta}\Gamma(x-y)\big{(}f(y)-T^{x}_{k-2}(f)(y)\big{)}dy,$
(12)
where $T^{x}_{j}(f)(y)$ is the $j$-th order power series expansion of $f$ at
$x$. Moreover, the right hand integral of (12) is independent of $R$.
Proof of Theorem 4.2: We first show the independence of the right hand side of
(12) of $R$. Indeed, for any two numbers $R^{\prime}>R>0$ with
$suppf\subset{\mathbf{B}}_{R}\subset{\mathbf{B}}_{R^{\prime}}$ and when $x\in
suppf$,
$\begin{split}&\int_{{\mathbf{B}}_{R^{\prime}}}D^{\beta}\Gamma(x-y)\big{(}f(y)-T^{x}_{k-2}(f)(y)\big{)}dy\\\
=&\int_{{\mathbf{B}}_{R}}D^{\beta}\Gamma(x-y)\big{(}f(y)-T^{x}_{k-2}(f)(y)\big{)}dy+\int_{{\mathbf{B}}_{R^{\prime}}\setminus{\mathbf{B}}_{R}}D^{\beta}\Gamma(x-y)\big{(}f(y)-T^{x}_{k-2}(f)(y)\big{)}dy\\\
=&\int_{{\mathbf{B}}_{R}}D^{\beta}\Gamma(x-y)\big{(}f(y)-T^{x}_{k-2}(f)(y)\big{)}dy-\int_{{\mathbf{B}}_{R^{\prime}}\setminus{\mathbf{B}}_{R}}D^{\beta}\Gamma(x-y)T^{x}_{k-2}(f)(y)dy\end{split}$
and hence by Theorem 3.2, when $x\in
suppf\subset{\mathbf{B}}_{R}\subset{\mathbf{B}}_{R^{\prime}}$,
$\int_{{\mathbf{B}}_{R^{\prime}}}D^{\beta}\Gamma(x-y)\big{(}f(y)-T^{x}_{k-2}(f)(y)\big{)}dy=\int_{{\mathbf{B}}_{R}}D^{\beta}\Gamma(x-y)\big{(}f(y)-T^{x}_{k-2}(f)(y)\big{)}dy.$
If $x\notin suppf$, then
$\begin{split}&\int_{{\mathbf{B}}_{R^{\prime}}}D^{\beta}\Gamma(x-y)\big{(}f(y)-T^{x}_{k-2}(f)(y)\big{)}dy\\\
=&\int_{{\mathbf{B}}_{R^{\prime}}}D^{\beta}\Gamma(x-y)f(y)dy\\\
=&\int_{{\mathbf{B}}_{R}}D^{\beta}\Gamma(x-y)f(y)dy\\\
=&\int_{{\mathbf{B}}_{R}}D^{\beta}\Gamma(x-y)\big{(}f(y)-T^{x}_{k-2}(f)(y)\big{)}dy.\end{split}$
To prove (12), we first note (12) is trivially true if $x\notin suppf$.
When $x\in suppf$, the right hand side of (12) is finite, due to the simple
fact that when $y\in{\mathbf{B}}_{1}(x)$,
$|D^{\beta}\Gamma(x-y)|\leq C|x-y|^{2-n-k}$
and
$|f(y)-T^{x}_{k-2}(f)(y)|\leq C(f)|x-y|^{k-1}$
with $C(f)$ some constant dependent on $f$.
On the other hand, making use of Theorem 3.2, one has
$\int_{{\mathbf{B}}_{R}\setminus{\mathbf{B}}_{\epsilon}(x)}D^{\beta}\Gamma(x-y)T^{x}_{k-2}(f)(y)dy=0.$
As a combination of the above two facts, we obtain when $x\in supp{f}$,
$\displaystyle p.v.\int_{{\mathbb{R}}^{n}}D^{\beta}\Gamma(x-y)f(y)dy=$
$\displaystyle\lim_{\epsilon\rightarrow
0}\int_{{\mathbf{B}}_{R}\setminus{\mathbf{B}}_{\epsilon}(x)}D^{\beta}\Gamma(x-y)f(y)dy$
$\displaystyle=$ $\displaystyle\lim_{\epsilon\rightarrow
0}\int_{{\mathbf{B}}_{R}\setminus{\mathbf{B}}_{\epsilon}(x)}D^{\beta}\Gamma(x-y)\big{(}f(y)-T^{x}_{k-2}(f)(y)\big{)}dy$
$\displaystyle=$
$\displaystyle\int_{{\mathbf{B}}_{R}}D^{\beta}\Gamma(x-y)\big{(}f(y)-T^{x}_{k-2}(f)(y)\big{)}dy.\ \square$
## 5 Higher order derivatives of the Newtonian potential
It is well known that the second order derivatives of the Newtonian potential
in general do not exist, due to the non-integrability of the fundamental solution after it is differentiated twice. However, when $f$ is nice
enough in the sense that $f\in\mathcal{C}^{\alpha}({\mathbf{B}}_{R})$, then
$\mathcal{N}(f)\in\mathcal{C}^{2,\alpha}({\mathbf{B}}_{R})$. Especially, one
has the following formula.
###### Lemma 5.1.
[Fr] Let $f\in\mathcal{C}^{\alpha}({\mathbf{B}}_{R})$. Then for any
$x\in{\mathbf{B}}_{R}$,
$\partial_{i}\partial_{j}\mathcal{N}(f)(x)=\int_{{\mathbf{B}}_{R}}\partial_{x_{i}}\partial_{x_{j}}\Gamma(x-y)(f(y)-f(x))dy-\frac{\delta_{ij}}{n}f(x).$
Moreover, for all $f\in\mathcal{C}^{\alpha}({\mathbf{B}}_{R})$,
$\parallel\mathcal{N}(f)\parallel_{\alpha}^{(2)}\leq C\parallel
f\parallel_{\alpha},$
whenever $\parallel f\parallel_{\alpha}$ is finite.
For higher order derivatives of the Newtonian potential, in the case when
$n=2$, [Pan1] studied it from the point of view of complex analysis. There are
few references in the literature for $n>3$, though. In this section, we derive
higher order derivatives of the Newtonian potential for functions in the
appropriate spaces.
Throughout the rest of the paper, unless otherwise indicated, we always regard
derivatives inside the integration as derivatives with respect to $y$
variables. For instance, inside an integral,
$\partial_{1}\Gamma(x-y):=\frac{\partial(\Gamma(x-y))}{\partial y_{1}}$ while
$\partial_{x_{1}}\Gamma(x-y):=\frac{\partial\Gamma(x-y)}{\partial x_{1}}$.
###### Definition 5.2.
For a given multi-index $\beta$ with $|\beta|=k+2$, $k\geq 0$,
$\mathcal{N}_{\beta}:\mathcal{C}^{k,\alpha}({\mathbf{B}}_{R})\rightarrow\mathcal{C}({\mathbf{B}}_{R})$
is defined as follows.
$\mathcal{N}_{\beta}(f)(x):=\int_{{\mathbf{B}}_{R}}D^{\beta}_{x}\Gamma(x-y)\big{(}f(y)-T^{x}_{k}(f)(y)\big{)}dy,$
for $f\in\mathcal{C}^{k,\alpha}({\mathbf{B}}_{R})$ and $x\in{\mathbf{B}}_{R}$,
where $T^{x}_{k}(f)(y)$ is the $k$-th order power series expansion of $f$ at
$x$.
It is clear that the operator $\mathcal{N}_{\beta}$ is well defined over
$\mathcal{C}^{k,\alpha}({\mathbf{B}}_{R})$.
We next introduce the following notation for the convenience of the statement
of the theorem. Given any two multi-indices
$\beta=(\beta_{1},\ldots,\beta_{n})$ and $\mu=(\mu_{1},\ldots,\mu_{n})$, we
say $\beta<\mu$ if $\beta_{j}\leq\mu_{j}$ for each $1\leq j\leq n$ and
$|\beta|<|\mu|$. Moreover, we define
$\mu-\beta:=(\mu_{1}-\beta_{1},\cdots,\mu_{n}-\beta_{n})$ if $\beta<\mu$. If
in addition $|\mu|=|\beta|+1$ with $\partial_{i}D^{\beta}=D^{\mu}$, we write
$\mu-\beta=i$.
###### Definition 5.3.
* •
Given a multi-index $\beta$, we call $\\{\beta^{(j)}\\}_{j=1}^{k}$ a
continuously increasing nesting of length $k$ for $\beta$ if $|\beta^{(j)}|=j$
for $1\leq j\leq k$ and $\beta^{(j)}<\beta^{(j+1)}\leq\beta$ for $1\leq j\leq
k-1$.
* •
Given two multi-indices $\gamma$ and $\gamma^{\prime}$, we say
$\gamma^{\prime}$ is the dual of $\gamma$ with respect to $\beta$ if
$D^{\beta}=D^{\gamma}D^{\gamma^{\prime}}$.
Making use of Theorem 1.8 together with Theorem A.2, we obtain the following formula for the higher order derivatives of the Newtonian potential.
###### Theorem 5.4.
Let $f\in\mathcal{C}^{k,\alpha}({\mathbf{B}}_{R})$. Let $\beta$ be a multi-
index with $|\beta|=k+2$ and $\\{\beta^{(j)}\\}$ a continuously increasing
nesting of length $k+2$ for $\beta$. Then $D^{\beta}\mathcal{N}(f)(x)$ exists
for $x\in{\mathbf{B}}_{R}$. Moreover,
$D^{\beta}\mathcal{N}(f)=\mathcal{N}_{\beta}(f)-\sum_{j=2}^{k+2}\sum_{|\mu|=j-2}\frac{C(\beta^{(j-1)},\mu,\beta^{(j)}-\beta^{(j-1)})}{\mu!}D^{\mu+\beta^{(j)^{\prime}}}f.$
(13)
Here $\beta^{(j)^{\prime}}$ is the dual of $\beta^{(j)}$ with respect to
$\beta$, and $C(\beta^{(j-1)},\mu,\beta^{(j)}-\beta^{(j-1)})$ is some constant
dependent only on $(\beta^{(j-1)},\mu,\beta^{(j)}-\beta^{(j-1)})$.
We point out, on the right hand side of (13), the order
$|\mu+\beta^{(j)^{\prime}}|$ of the derivative of $f$ in the second term is
always equal to $k$ by definition.
Proof of Theorem 5.4: The case $k=0$ is given by Lemma 5.1. When $k>0$, for any multi-
indices $\beta$ and $\mu$ with $|\beta|=|\mu|+1$, one has by Corollary 3.1,
$\begin{split}\mathcal{I}_{{\mathbf{B}}_{R}}(\beta,\mu,j)(x):=&\int_{\partial{\mathbf{B}}_{R}}D^{\beta}_{x}\Gamma(x-y)(y-x)^{\mu}\nu_{j}d\sigma_{y}\\\
=&\frac{1}{R}\int_{\partial{\mathbf{B}}_{R}}D^{\beta}_{x}\Gamma(x-y)y^{\mu}y_{j}d\sigma_{y}\\\
=&\frac{1}{R}D^{\beta}_{x}\int_{\partial{\mathbf{B}}_{R}}\Gamma(x-y)y^{\mu}y_{j}d\sigma_{y}\\\
=&R^{1+|\mu|}D^{\beta}_{x}\int_{\partial{\mathbf{B}}_{1}}\Gamma(\frac{x}{R}-y)y^{\mu}y_{j}d\sigma_{y}.\end{split}$
According to Theorem 1.8,
$\int_{\partial{\mathbf{B}}_{1}}\Gamma(\frac{x}{R}-y)y^{\mu}y_{j}d\sigma_{y}$
is a polynomial of degree $|\mu|+1$ in $x$ when $|x|<R$ and hence
$\mathcal{I}_{{\mathbf{B}}_{R}}(\beta,\mu,j)(x)\equiv C(\beta,\mu,j)$
with $C(\beta,\mu,j)$ some constant dependent only on $(\beta,\mu,j)$.
Therefore from Theorem A.2 by choosing $\Omega={\mathbf{B}}_{R}$ and
$\Omega^{\prime}={\mathbf{B}}_{R^{\prime}}$ with $R^{\prime}>R>0$, one obtains
$\begin{split}D^{\beta}\mathcal{N}(f)(x)=&\int_{{\mathbf{B}}_{R^{\prime}}}D^{\beta}_{x}\Gamma(x-y)\big{(}f(y)-T^{x}_{k}(f)(y)\big{)}dy\\\
&-\sum_{j=2}^{k+2}D^{\beta^{(j)^{\prime}}}\big{(}\sum_{|\mu|=j-2}\frac{D^{\mu}f(x)}{\mu!}C(\beta^{(j-1)},\mu,\beta^{(j)}-\beta^{(j-1)})\big{)}\\\
=&\int_{{\mathbf{B}}_{R^{\prime}}}D^{\beta}_{x}\Gamma(x-y)\big{(}f(y)-T^{x}_{k}(f)(y)\big{)}dy\\\
&-\sum_{j=2}^{k+2}\sum_{|\mu|=j-2}\frac{C(\beta^{(j-1)},\mu,\beta^{(j)}-\beta^{(j-1)})}{\mu!}D^{\mu+\beta^{(j)^{\prime}}}f(x)\end{split}$
for any $x\in{\mathbf{B}}_{R}$ and any $R^{\prime}>R$. We then get
$\mathcal{N}(f)\in\mathcal{C}^{k+2}({\mathbf{B}}_{R})$. Moreover, for any
$\beta$ with $|\beta|\leq k+2$ and for any $x\in{\mathbf{B}}_{R}$, after letting $R^{\prime}$ tend to $R$,
$\begin{split}D^{\beta}\mathcal{N}(f)(x)=&\mathcal{N}_{\beta}(f)(x)-\sum_{j=2}^{k+2}\sum_{|\mu|=j-2}\frac{C(\beta^{(j-1)},\mu,\beta^{(j)}-\beta^{(j-1)})}{\mu!}D^{\mu+\beta^{(j)^{\prime}}}f(x).\ \square\end{split}$
To simplify the notation, we define an operator $\mathcal{T}_{\beta}$ by
$\mathcal{T}_{\beta}(f)(x):=\sum_{j=2}^{k+2}\sum_{|\mu|=j-2}\frac{C(\beta^{(j-1)},\mu,\beta^{(j)}-\beta^{(j-1)})}{\mu!}D^{\mu+\beta^{(j)^{\prime}}}f(x)$
for any $f\in\mathcal{C}^{k,\alpha}({\mathbf{B}}_{R})$. Then
$\mathcal{T}_{\beta}:\mathcal{C}^{k,\alpha}({\mathbf{B}}_{R})\rightarrow\mathcal{C}^{\alpha}({\mathbf{B}}_{R})$
and
$\parallel\mathcal{T}_{\beta}(f)\parallel_{\alpha}\leq C\parallel
f\parallel_{\alpha}^{(k)}.$ (14)
Under this definition, Theorem 5.4 can be rewritten as
$D^{\beta}\mathcal{N}(f)=\mathcal{N}_{\beta}(f)-\mathcal{T}_{\beta}(f)$ (15)
for any $f\in\mathcal{C}^{k,\alpha}({\mathbf{B}}_{R})$.
## 6 Hölder norm of $D^{\beta}\mathcal{N}(f)$
In the derivation of the higher derivative formula of the Newtonian potential,
the following operator shows up frequently and in itself is of an independent
importance as well.
###### Definition 6.1.
Given multi-indices $\beta$ and $\beta^{\prime}$ with $|\beta|=k+2$, $k\geq 0$
and $D^{\beta}=\partial_{j}D^{\beta^{\prime}}$, a linear operator
$\mathcal{\tilde{S}}_{\beta}:\mathcal{C}^{k,\alpha}({\mathbf{B}}_{R})\rightarrow\mathcal{C}({\mathbf{B}}_{R})$
is defined as follows:
$\mathcal{\tilde{S}}_{\beta}(f)(x):=\int_{\partial{\mathbf{B}}_{R}}D^{\beta^{\prime}}_{x}\Gamma(x-y)f(y)\nu_{j}d\sigma_{y}$
for $f\in\mathcal{C}^{k,\alpha}({\mathbf{B}}_{R})$ and $x\in{\mathbf{B}}_{R}$.
Here $d\sigma_{y}$ is the surface area element of $\partial{\mathbf{B}}_{R}$
with the unit outer normal $(\nu_{1},\ldots,\nu_{n})$.
We point out
$\mathcal{\tilde{S}}_{\beta}(f)=D^{\beta^{\prime}}(\int_{\partial{\mathbf{B}}_{R}}\Gamma(\cdot-y)f(y)\nu_{j}d\sigma_{y})$
is the counterpart in ${\mathbb{R}}^{n}$ of the derivatives of the Cauchy
integral for holomorphic functions in ${\mathbb{C}}$. For the convenience of
computation, we slightly modify the operator $\mathcal{\tilde{S}}_{\beta}$ and
define the following operator.
###### Definition 6.2.
Given multi-indices $\beta$ and $\beta^{\prime}$ with $|\beta|=k+2$, $k\geq 0$
and $D^{\beta}=\partial_{j}D^{\beta^{\prime}}$, a linear operator
$\mathcal{S}_{\beta}:\mathcal{C}^{k,\alpha}({\mathbf{B}}_{R})\rightarrow\mathcal{C}({\mathbf{B}}_{R})$
is defined as follows:
$\mathcal{S}_{\beta}(f)(x):=\int_{\partial{\mathbf{B}}_{R}}D^{\beta^{\prime}}_{x}\Gamma(x-y)\big{(}f(y)-T^{x}_{k}(f)(y)\big{)}\nu_{j}d\sigma_{y}$
for $f\in\mathcal{C}^{k,\alpha}({\mathbf{B}}_{R})$ and $x\in{\mathbf{B}}_{R}$.
Here $T^{x}_{k}(f)(y)$ is the $k$-th order Taylor series expansion of $f$ at
$x$, $d\sigma_{y}$ is the surface area element of $\partial{\mathbf{B}}_{R}$
with the unit outer normal $(\nu_{1},\ldots,\nu_{n})$.
Note that due to Corollary 3.1 and $|\beta^{\prime}|=k+1$,
$\int_{\partial{\mathbf{B}}_{R}}D^{\beta^{\prime}}\Gamma(x-y)T^{x}_{k}(f)(y)\nu_{j}d\sigma_{y}$
as a function of $x\in{\mathbf{B}}_{R}$ is a constant independent of $R$. As a result, $\mathcal{\tilde{S}}_{\beta}(f)$ and $\mathcal{S}_{\beta}(f)$ differ by a constant depending only on $T^{x}_{k}(f)$ and, in particular, independent of $R$.
We are now ready to prove the induction formula for the derivatives of the
Newtonian potential.
###### Lemma 6.3.
Let $f\in\mathcal{C}^{k,\alpha}({\mathbf{B}}_{R})$, and let $\beta,\beta^{\prime}$ be two multi-indices with $|\beta|=k+2$ and $D^{\beta}=\partial_{j}D^{\beta^{\prime}}$. We have
$D^{\beta}\mathcal{N}(f)=D^{\beta^{\prime}}\mathcal{N}(\partial_{j}f)-\mathcal{S}_{\beta}(f)-\mathcal{T}_{\beta}(f).$
Proof of Lemma 6.3: Making use of Stokes’ Theorem in (15) and Corollary 3.1,
we have for $x\in{\mathbf{B}}_{R}$,
$\begin{split}D^{\beta}\mathcal{N}(f)(x)=&\lim_{\epsilon\rightarrow
0}\int_{{\mathbf{B}}_{R}-{\mathbf{B}}_{\epsilon}(x)}D^{\beta}_{x}\Gamma(x-y)\big{(}f(y)-T^{x}_{k}(f)(y)\big{)}dy-\mathcal{T}_{\beta}(f)(x)\\\
=&\lim_{\epsilon\rightarrow
0}\int_{{\mathbf{B}}_{R}-{\mathbf{B}}_{\epsilon}(x)}D_{x}^{\beta^{\prime}}\Gamma(x-y)\partial_{j}\big{(}f(y)-T^{x}_{k}(f)(y)\big{)}dy\\\
&-\lim_{\epsilon\rightarrow
0}\int_{{\mathbf{B}}_{R}-{\mathbf{B}}_{\epsilon}(x)}\partial_{j}\bigg{(}D_{x}^{\beta^{\prime}}\Gamma(x-y)\big{(}f(y)-T^{x}_{k}(f)(y)\big{)}\bigg{)}dy-\mathcal{T}_{\beta}(f)(x)\\\
=&\lim_{\epsilon\rightarrow
0}\int_{{\mathbf{B}}_{R}-{\mathbf{B}}_{\epsilon}(x)}D_{x}^{\beta^{\prime}}\Gamma(x-y)\big{(}\partial_{j}f(y)-T^{x}_{k-1}(\partial_{j}f)(y)\big{)}dy\\\
&-\int_{\partial{\mathbf{B}}_{R}}D_{x}^{\beta^{\prime}}\Gamma(x-y)\big{(}f(y)-T^{x}_{k}(f)(y)\big{)}\nu_{j}d\sigma_{y}\\\
&+\lim_{\epsilon\rightarrow
0}\int_{\partial{\mathbf{B}}_{\epsilon}(x)}D_{x}^{\beta^{\prime}}\Gamma(x-y)\big{(}f(y)-T^{x}_{k}(f)(y)\big{)}\nu_{j}d\sigma_{y}-\mathcal{T}_{\beta}(f)(x)\\\
=&D^{\beta^{\prime}}\mathcal{N}(\partial_{j}f)(x)-\mathcal{S}_{\beta}(f)(x)-\mathcal{T}_{\beta}(f)(x).\end{split}$
The last identity is because the third term is
$O(\epsilon^{2-n-k-1+k+\alpha+n-1})=O(\epsilon^{\alpha})$.
The following lemma shows the operator $\mathcal{S}_{\beta}$ is a bounded
operator from $\mathcal{C}^{k,\alpha}({\mathbf{B}}_{R})$ into
$\mathcal{C}^{\alpha}({\mathbf{B}}_{R})$.
###### Lemma 6.4.
Let $\beta$ be a multi-index with $|\beta|=k+2$. The operator
$\mathcal{S}_{\beta}$ sends $\mathcal{C}^{k,\alpha}({\mathbf{B}}_{R})$ into
$\mathcal{C}^{\alpha}({\mathbf{B}}_{R})$. Moreover, for any
$f\in\mathcal{C}^{k,\alpha}({\mathbf{B}}_{R})$,
$\parallel\mathcal{S}_{\beta}(f)\parallel_{\alpha}\leq C\parallel
f\parallel_{\alpha}^{(k)},$
whenever $\parallel f\parallel_{\alpha}^{(k)}$ is finite.
In order to prove Lemma 6.4, we need the following lemma.
###### Lemma 6.5.
For any $x\in{\mathbf{B}}_{1}$, $0<\alpha<1$,
$\int_{|y|=1}\frac{1}{|x-y|^{n-\alpha}}d\sigma_{y}\leq C(1-|x|)^{\alpha-1}.$
Proof of Lemma 6.5: Assume $x=(r,0,\ldots,0)$ after rotation if necessary. One
can assume in addition that $r\geq\frac{1}{2}$.
Choose spherical coordinates
$y_{1}=\cos\theta_{1},y_{2}=\sin\theta_{1}\cos\theta_{2},\ldots,y_{n}=\sin\theta_{1}\sin\theta_{2}\cdots\sin\theta_{n-1}$,
where $0\leq\theta_{i}\leq\pi$ for $1\leq i\leq n-2$ and $0\leq\theta_{n-1}\leq 2\pi$,
and denote by $\partial{\mathbf{B}}_{1}^{n-1}$ the unit sphere in
${\mathbb{R}}^{n-1}$, then we have,
$\begin{split}\int_{|y|=1}\frac{1}{|x-y|^{n-\alpha}}d\sigma_{y}=&\int_{0}^{\pi}\frac{\sin^{n-2}\theta_{1}}{[(\cos\theta_{1}-r)^{2}+\sin^{2}\theta_{1}]^{\frac{n-\alpha}{2}}}d\theta_{1}\int_{\partial{\mathbf{B}}^{n-1}_{1}}d\sigma_{z}\\\
=&C\int_{0}^{\pi}\frac{\sin^{n-2}\theta}{[1-2r\cos\theta+r^{2}]^{\frac{n-\alpha}{2}}}d\theta\\\
=&C\int_{0}^{\pi}\frac{\sin^{n-2}\theta}{[(1-r)^{2}+4r\sin^{2}\frac{\theta}{2}]^{\frac{n-\alpha}{2}}}d\theta\\\
\leq&C(\int_{0}^{1-r}\frac{\sin^{n-2}\theta}{(1-r)^{n-\alpha}}d\theta+\int_{1-r}^{\pi}\frac{\sin^{n-2}\theta}{(2\sqrt{r}\sin\frac{\theta}{2})^{n-\alpha}}d\theta)\\\
=&A+B.\end{split}$
For $A$, since $\sin\theta\leq\theta$ when $\theta>0$,
$\displaystyle A\leq$
$\displaystyle\frac{C}{(1-r)^{n-\alpha}}\int_{0}^{1-r}\theta^{n-2}d\theta=\frac{C}{(1-r)^{n-\alpha}}(1-r)^{n-1}=C(1-r)^{\alpha-1}.$
For $B$, making use of $\sin\theta\geq C\theta$ when
$0\leq\theta\leq\frac{\pi}{2}$ and the assumption $r\geq\frac{1}{2}$, we get
$B\leq
C\int_{1-r}^{\pi}\frac{\sin^{n-2}\frac{\theta}{2}}{(\sin\frac{\theta}{2})^{n-\alpha}}d\theta=C\int_{1-r}^{\pi}\sin^{\alpha-2}\frac{\theta}{2}d\theta\leq
C\int_{1-r}^{\pi}\big{(}\frac{\theta}{2}\big{)}^{\alpha-2}d\theta\leq
C(1-r)^{\alpha-1}+C\leq C(1-r)^{\alpha-1}.$
The lemma is thus concluded.
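The statement of Lemma 6.5 can also be checked numerically. The sketch below (illustrative only; $n=3$, $x=(r,0,0)$ and a sample value of $\alpha$) reduces the surface integral to a one-dimensional integral in the polar angle and observes that its ratio to $(1-r)^{\alpha-1}$ stays bounded as $r\rightarrow 1$.

```python
import numpy as np
from scipy.integrate import quad

# For n = 3 and x = (r, 0, 0) the surface integral of Lemma 6.5 equals
#     2*pi * int_0^pi sin(t) * (1 - 2 r cos t + r^2)^{(alpha-3)/2} dt,
# and its ratio to (1 - r)^{alpha - 1} should remain bounded as r -> 1.
alpha = 0.3

def boundary_integral(r):
    integrand = lambda t: 2 * np.pi * np.sin(t) * (1 - 2 * r * np.cos(t) + r**2) ** ((alpha - 3) / 2)
    val, _ = quad(integrand, 0.0, np.pi, limit=200)
    return val

for r in [0.5, 0.9, 0.99]:
    print(r, boundary_integral(r) / (1 - r) ** (alpha - 1))   # bounded; tends to 2*pi/(1 - alpha)
```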
The following lemma in [NW], Appendix 6.2a plays an essential role in the
proof of Lemma 6.4.
###### Lemma 6.6.
[NW] If $z$ and $z^{\prime}$ are two points of the open unit disk in
${\mathbb{C}}$, and $\gamma$ is the shorter segment of the circle through $z$
and $z^{\prime}$ and orthogonal to the unit circle, then
$\int_{\gamma}\frac{|dw|}{(1-w\bar{w})^{1-\alpha}}\leq\frac{2}{1-\alpha}|z-z^{\prime}|^{1-\alpha}$
for $0<\alpha<1$.
Proof of Lemma 6.4: Write $g(y):=f(y)-T^{x}_{k}(f)(y)$.
(i) The estimate for $\parallel\mathcal{S}_{\beta}(f)\parallel$. Indeed, by
Lemma 2.1,
$\begin{split}|\int_{\partial{\mathbf{B}}_{R}}D_{x}^{\beta^{\prime}}\Gamma(x-y)g(y)\nu_{j}d\sigma_{y}|\leq&C\parallel
f\parallel_{\alpha}^{(k)}R^{-\alpha}\int_{\partial{\mathbf{B}}_{R}}|y-x|^{2-n-k-1}|y-x|^{k+\alpha}d\sigma_{y}\\\
=&C\parallel
f\parallel_{\alpha}^{(k)}R^{-\alpha}\int_{\partial{\mathbf{B}}_{R}}|y-x|^{1-n+\alpha}d\sigma_{y}\\\
=&C\parallel
f\parallel_{\alpha}^{(k)}\int_{\partial{\mathbf{B}}_{1}}|y-\frac{x}{R}|^{1-n+\alpha}d\sigma_{y}\\\
\leq&C\parallel f\parallel_{\alpha}^{(k)}.\end{split}$
(ii) Given $x,x^{\prime}\in{\mathbf{B}}_{R}$, we estimate
$|\mathcal{S}_{\beta}(f)(x)-\mathcal{S}_{\beta}(f)(x^{\prime})|$. Assume
without loss of generality that $x,x^{\prime}$ lie on the plane
$\\{y_{3}=\cdots=y_{n}=0\\}$ and write $x=Rz,x^{\prime}=Rz^{\prime}$ with
$z,z^{\prime}\in{\mathbf{B}}_{1}$. Then
$\begin{split}\mathcal{S}_{\beta}(f)(x)-\mathcal{S}_{\beta}(f)(x^{\prime})=&\int_{\partial{\mathbf{B}}_{R}}\big{(}D_{x}^{\beta^{\prime}}\Gamma(x-y)-D_{x}^{\beta^{\prime}}\Gamma(x^{\prime}-y)\big{)}g(y)\nu_{j}d\sigma_{y}\\\
=&R^{-k}\int_{\partial{\mathbf{B}}_{1}}\big{(}D_{z}^{\beta^{\prime}}\Gamma(z-y)-D_{z^{\prime}}^{\beta^{\prime}}\Gamma(z^{\prime}-y)\big{)}g(Ry)\nu_{j}d\sigma_{y}.\end{split}$
Let
$\gamma(t)=(\gamma_{1}(t),\gamma_{2}(t),0,\ldots,0):[0,1]\rightarrow\\{y_{3}=\cdots=y_{n}=0\\}\cong{\mathbb{C}}$
be a parametrization of the shorter segment of the circle through $z$ and
$z^{\prime}$ and orthogonal to the unit circle in ${\mathbb{C}}$ with
$\gamma(0)=z^{\prime},\gamma(1)=z$. We then have
$\begin{split}\mathcal{S}_{\beta}(f)(x)-\mathcal{S}_{\beta}(f)(x^{\prime})=&R^{-k}\int_{\partial{\mathbf{B}}_{1}}\int_{0}^{1}\frac{d}{dt}\big{(}D_{\gamma}^{\beta^{\prime}}\Gamma(\gamma(t)-y)\big{)}dtg(Ry)\nu_{j}d\sigma_{y}\\\
=&R^{-k}\int_{0}^{1}\sum_{k=1}^{2}\gamma^{\prime}_{k}(t)dt\int_{\partial{\mathbf{B}}_{1}}\big{(}\partial_{\gamma_{k}}D_{\gamma}^{\beta^{\prime}}\Gamma(\gamma(t)-y)\big{)}g(Ry)\nu_{j}d\sigma_{y}.\end{split}$
Making use of Corollary 3.1, we have for any $0\leq t\leq 1$,
$\begin{split}\int_{\partial{\mathbf{B}}_{1}}\big{(}\partial_{\gamma_{k}}D_{\gamma}^{\beta^{\prime}}\Gamma(\gamma(t)-y)\big{)}g(Ry)\nu_{j}d\sigma_{y}=\int_{\partial{\mathbf{B}}_{1}}\big{(}\partial_{\gamma_{k}}D_{\gamma}^{\beta^{\prime}}\Gamma(\gamma(t)-y)\big{)}\big{(}g(Ry)-T_{k}^{R\gamma(t)}(g)(Ry)\big{)}\nu_{j}d\sigma_{y},\end{split}$
where $T_{k}^{R\gamma(t)}(g)(y)$ is the $k$-th order power series expansion of
$g$ at $R\gamma(t)$. Furthermore, by Lemma 2.1,
$\begin{split}|g(Ry)-T_{k}^{R\gamma(t)}(g)(Ry)|&\leq
C|Ry-R\gamma(t)|^{k+\alpha}\sum_{|\mu|=k}H_{\alpha}[D^{\mu}g]\\\
&=CR^{k+\alpha}|y-\gamma(t)|^{k+\alpha}\sum_{|\mu|=k}H_{\alpha}[D^{\mu}f].\end{split}$
Therefore,
$\begin{split}&|\mathcal{S}_{\beta}(f)(x)-\mathcal{S}_{\beta}(f)(x^{\prime})|\\\
\leq&C\big{(}\sum_{|\mu|=k}H_{\alpha}[D^{\mu}f]\big{)}R^{-k}\int_{0}^{1}\sum_{k=1}^{2}|\gamma^{\prime}_{k}(t)|dt\int_{\partial{\mathbf{B}}_{1}}|\gamma(t)-y|^{2-n-k-2}R^{k+\alpha}|y-\gamma(t)|^{k+\alpha}d\sigma_{y}\\\
\leq&C\big{(}\sum_{|\mu|=k}H_{\alpha}[D^{\mu}f]\big{)}R^{\alpha}\int_{0}^{1}\sum_{k=1}^{2}|\gamma^{\prime}_{k}(t)|dt\int_{\partial{\mathbf{B}}_{1}}|\gamma(t)-y|^{-n+\alpha}d\sigma_{y}.\end{split}$
Applying Lemma 6.5 to
$\int_{\partial{\mathbf{B}}_{1}}|\gamma(t)-y|^{-n+\alpha}d\sigma_{y}$ in the
last expression,
$\begin{split}|\mathcal{S}_{\beta}(f)(x)-\mathcal{S}_{\beta}(f)(x^{\prime})|\leq&C\big{(}\sum_{|\mu|=k}H_{\alpha}[D^{\mu}f]\big{)}R^{\alpha}\int_{0}^{1}\sum_{k=1}^{2}\frac{|\gamma^{\prime}_{k}(t)|}{(1-|\gamma(t)|)^{1-\alpha}}dt\\\
\leq&C\big{(}\sum_{|\mu|=k}H_{\alpha}[D^{\mu}f]\big{)}R^{\alpha}\int_{0}^{1}\frac{|\gamma^{\prime}(t)|dt}{(1-|\gamma(t)|^{2})^{1-\alpha}}\\\
=&C\big{(}\sum_{|\mu|=k}H_{\alpha}[D^{\mu}f]\big{)}R^{\alpha}\int_{\gamma}\frac{|dw|}{(1-w\bar{w})^{1-\alpha}}.\end{split}$
Hence by Lemma 6.6,
$|\mathcal{S}_{\beta}(f)(x)-\mathcal{S}_{\beta}(f)(x^{\prime})|\leq
C\big{(}\sum_{|\mu|=k}H_{\alpha}[D^{\mu}f]\big{)}R^{\alpha}|z-z^{\prime}|^{\alpha}=C\big{(}\sum_{|\mu|=k}H_{\alpha}[D^{\mu}f]\big{)}|x-x^{\prime}|^{\alpha}.$
Namely,
$H_{\alpha}[\mathcal{S}_{\beta}(f)]\leq
C\big{(}\sum_{|\mu|=k}H_{\alpha}[D^{\mu}f]\big{)}.$
We finally have shown, combining (i) and (ii),
$\parallel\mathcal{S}_{\beta}(f)\parallel_{\alpha}\leq C\parallel f\parallel_{\alpha}^{(k)}.\ \square$
###### Remark 6.7.
In the proof of Lemma 6.4(ii), when estimating the Hölder norm of $\mathcal{S}_{\beta}$, a natural choice of $\gamma$ would usually be the straight segment connecting $z$ and $z^{\prime}$. However, the estimate in Lemma 6.6 actually fails if $\gamma$ is chosen to be the segment instead of the geodesic as in [NW].
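To make the role of the geodesic concrete, the following Python sketch (illustrative only; the two sample points and the value of $\alpha$ are arbitrary, generic choices) constructs the circle through $z,z^{\prime}$ orthogonal to the unit circle, integrates $|dw|/(1-w\bar{w})^{1-\alpha}$ along the shorter arc, and compares the result with the bound $\frac{2}{1-\alpha}|z-z^{\prime}|^{1-\alpha}$ of Lemma 6.6.

```python
import numpy as np

# Circle through z, z' orthogonal to the unit circle: its center c solves
#     2 z.c = |z|^2 + 1   and   2 z'.c = |z'|^2 + 1,   with radius r = sqrt(|c|^2 - 1).
alpha = 0.5
z, zp = np.array([0.5, 0.2]), np.array([-0.3, 0.6])

A = 2 * np.array([z, zp])
b = np.array([z @ z + 1, zp @ zp + 1])
c = np.linalg.solve(A, b)
r = np.sqrt(c @ c - 1)

a1 = np.arctan2(*(z - c)[::-1])
a2 = np.arctan2(*(zp - c)[::-1])
d = (a2 - a1 + np.pi) % (2 * np.pi) - np.pi      # signed angle of the shorter arc
t = np.linspace(0.0, 1.0, 20001)
w = c + r * np.stack([np.cos(a1 + d * t), np.sin(a1 + d * t)], axis=-1)
integrand = r * abs(d) / (1 - np.sum(w**2, axis=-1)) ** (1 - alpha)
integral = np.trapz(integrand, t)

bound = 2 / (1 - alpha) * np.linalg.norm(z - zp) ** (1 - alpha)
print(integral, bound)                           # the integral stays below the bound
```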
Applying Lemma 6.3 and Lemma 6.4 inductively, one eventually obtains the
following formula.
###### Theorem 6.8.
Given a multi-index $\beta$ with $|\beta|=k+2$, let $\\{\beta^{(j)}\\}$ be a
continuously increasing nesting for $\beta$ of length $k+2$ and
$\beta^{(j)^{\prime}}$ be the dual of $\beta^{(j)}$ with respect to $\beta$
for $2\leq j\leq k+2$. Then for any
$f\in\mathcal{C}^{k,\alpha}({\mathbf{B}}_{R})$,
$D^{\beta}\mathcal{N}(f)=D^{\beta^{(2)}}\mathcal{N}(D^{\beta^{(2)^{\prime}}}f)-\sum_{j=3}^{k+2}\mathcal{S}_{\beta^{(j)}}(D^{\beta^{(j)^{\prime}}}f)-\mathcal{T}_{\beta}(f),$
in ${\mathbf{B}}_{R}$. Moreover, for any
$f\in\mathcal{C}^{k,\alpha}({\mathbf{B}}_{R})$,
$\parallel\mathcal{N}(f)\parallel_{\alpha}^{(k+2)}\leq C\parallel
f\parallel_{\alpha}^{(k)},$
and consequently, for any $m\in{\mathbb{Z}}^{+}$,
$\parallel\mathcal{N}^{m}(f)\parallel_{\alpha}^{(k+2m)}\leq C\parallel
f\parallel_{\alpha}^{(k)},$
whenever $\parallel f\parallel_{\alpha}^{(k)}$ is finite.
Proof of Theorem 6.8: By Lemma 6.3,
$\begin{split}D^{\beta}\mathcal{N}(f)&=D^{\beta^{(k+1)}}\mathcal{N}(D^{\beta^{(k+1)^{\prime}}}f)-\mathcal{S}_{\beta^{(k+2)}}(f)-\mathcal{T}_{\beta}(f)\\\
&=D^{\beta^{(k)}}\mathcal{N}(D^{\beta^{(k)^{\prime}}}f)-\mathcal{S}_{\beta^{(k+1)}}(D^{\beta^{(k+1)^{\prime}}}f)-\mathcal{S}_{\beta^{(k+2)}}(f)-\mathcal{T}_{\beta}(f)\\\
&=\cdots\\\
&=D^{\beta^{(2)}}\mathcal{N}(D^{\beta^{(2)^{\prime}}}f)-\sum_{j=3}^{k+2}\mathcal{S}_{\beta^{(j)}}(D^{\beta^{(j)^{\prime}}}f)-\mathcal{T}_{\beta}(f).\end{split}$
Hence from the above identity, for any
$f\in\mathcal{C}^{k,\alpha}({\mathbf{B}}_{R})$ as long as $\parallel
f\parallel_{\alpha}^{(k)}$ is finite,
$\begin{split}\parallel\mathcal{N}(f)\parallel_{\alpha}^{(k+2)}:&=\sup_{|\beta|=k+2}\parallel
D^{\beta}\mathcal{N}(f)\parallel_{\alpha}\\\ &\leq
C\sup_{|\beta|=k+2}\big{[}\parallel
D^{\beta^{(2)}}\mathcal{N}(D^{\beta^{(2)^{\prime}}}f)\parallel_{\alpha}+\sum_{j=3}^{k+2}\parallel\mathcal{S}_{\beta^{(j)}}(D^{\beta^{(j)^{\prime}}}f)\parallel_{\alpha}+\parallel\mathcal{T}_{\beta}(f)\parallel_{\alpha}\big{]}.\end{split}$
Since $|\beta^{(j)}|=j$ and $|\beta^{(j)^{\prime}}|=k+2-j$ from definition, by
Lemma 5.1, Lemma 6.4 and (14), we get
$\begin{split}\parallel\mathcal{N}(f)\parallel_{\alpha}^{(k+2)}&\leq
C\sup_{|\beta|=k+2}\big{[}\parallel\mathcal{N}(D^{\beta^{(2)^{\prime}}}f)\parallel^{(2)}_{\alpha}+\sum_{j=3}^{k+2}\parallel
D^{\beta^{(j)^{\prime}}}f\parallel_{\alpha}^{(j-2)}+\parallel
f\parallel_{\alpha}^{(k)}\big{]}\\\ &\leq C\sup_{|\beta|=k+2}\big{[}\parallel
D^{\beta^{(2)^{\prime}}}f\parallel_{\alpha}+\parallel
f\parallel_{\alpha}^{(k+2-j+j-2)}+\parallel
f\parallel_{\alpha}^{(k)}\big{]}\\\ &\leq C\parallel
f\parallel_{\alpha}^{(k)}.\end{split}$
Finally, applying induction in the above expression,
$\begin{split}\parallel\mathcal{N}^{m}(f)\parallel_{\alpha}^{(k+2m)}&=\parallel\mathcal{N}(\mathcal{N}^{m-1}(f))\parallel_{\alpha}^{(k+2m)}\\\
&\leq C\parallel\mathcal{N}^{m-1}(f)\parallel_{\alpha}^{(k+2m-2)}\\\
&\leq\cdots\\\ &\leq C\parallel f\parallel_{\alpha}^{(k)}.\ \square\end{split}$
## 7 Construction of the contraction map
In this section, we construct a contraction map from the system (1). Assume
$a\in\mathcal{C}^{2}$. For any vector-valued function
$f\in(\mathcal{C}^{2m,\alpha}_{0}({\mathbf{B}}_{R}))^{N}$, introduce
$\omega^{(1)}(f):=(\omega_{1}^{(1)}(f),\ldots,\omega_{N}^{(1)}(f))$ with
$\omega_{j}^{(1)}(f)(x)=\int_{{\mathbf{B}}_{R}}\Gamma(x-y)a_{j}(y,f(y),\nabla
f(y),\ldots,\nabla^{2m}f(y))dy$
for $1\leq j\leq N$. According to Theorem 6.8,
$\omega^{(1)}(f)\in(\mathcal{C}^{2,\alpha}({\mathbf{B}}_{R}))^{N}$ and
$\parallel\omega^{(1)}_{j}(f)\parallel_{\alpha}^{(2)}\leq C\parallel
a_{j}(\cdot,f,\ldots,\nabla^{2m}f)\parallel_{\alpha}.$
We define inductively
$\omega^{(l)}(f)=(\omega_{1}^{(l)}(f),\ldots,\omega_{N}^{(l)}(f))$ for $1\leq
l\leq m$ as follows. For each $1\leq j\leq N$ and $x\in{\mathbf{B}}_{R}$,
$\omega^{(l)}_{j}(f)(x):=\mathcal{N}(\omega^{(l-1)}_{j}(f))(x).$
Note that, in terms of the Newtonian potential,
$\omega^{(l)}_{j}(f)=\mathcal{N}^{l}\big{(}a_{j}(\cdot,f,\ldots,\nabla^{2m}f)\big{)}.$
Therefore, by Theorem 6.8,
$\omega^{(l)}(f)\in(\mathcal{C}^{2l,\alpha}({\mathbf{B}}_{R}))^{N}$ and
$\parallel\omega^{(l)}_{j}(f)\parallel_{\alpha}^{(2l)}\leq C\parallel a_{j}(\cdot,f,\ldots,\nabla^{2m}f)\parallel_{\alpha}.$ (16)
We also define $\theta(f):=(\theta_{1}(f),\ldots,\theta_{N}(f))$ from
$\omega^{(m)}(f)$ by removing the terms of degree less than $2m$ and part of the degree $2m$ terms in its power series expansion at 0. Precisely speaking, for
$1\leq j\leq N$ and $x\in{\mathbf{B}}_{R}$,
$\theta_{j}(f)(x)=\omega_{j}^{(m)}(f)(x)-T_{2m-1}(\omega_{j}^{(m)}(f))(x)-\sum_{\beta\in\Lambda}\frac{D^{\beta}(\omega_{j}^{(m)}(f))(0)}{\beta!}x^{\beta},$
(17)
where $T_{2m-1}(\omega_{j}^{(m)}(f))$ is the $(2m-1)$-th power series
expansion of $\omega_{j}^{(m)}(f)$ at 0, $\Lambda=\\{\beta:|\beta|=2m,\ \text{
and at least one of}\ \beta_{j}\ \text{is odd for}\ 1\leq j\leq n\\}$.
From the construction, it is immediate that for any $f\in(\mathcal{C}^{2m,\alpha}_{0}({\mathbf{B}}_{R}))^{N}$,
$\triangle^{m}\theta(f)(x)=a(x,f(x),\nabla f(x),\ldots,\nabla^{2m}f(x))$ when
$x\in{\mathbf{B}}_{R}$. Moreover,
$\omega^{(m)}(f)\in(\mathcal{C}^{2m,\alpha}({\mathbf{B}}_{R}))^{N}$ and so
$\theta(f)\in(\mathcal{C}_{0}^{2m,\alpha}({\mathbf{B}}_{R}))^{N}$. We note
that, because of (16) and (17), $\theta(f)$ is automatically in
$(\mathcal{C}^{2m,\alpha}_{0}(\bar{\mathbf{B}}_{R}))^{N}$ after a trivial extension onto $\bar{\mathbf{B}}_{R}$ if $f\in(\mathcal{C}^{2m,\alpha}_{0}(\bar{\mathbf{B}}_{R}))^{N}$.
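The fact used here—that $\triangle^{m}$ annihilates every polynomial of degree at most $2m-1$ as well as every monomial $x^{\beta}$ with $|\beta|=2m$ and $\beta\in\Lambda$—can be illustrated by the following small symbolic sketch (sample case $n=3$, $m=2$; the monomials chosen are arbitrary).

```python
import sympy as sp

# Delta^m annihilates the terms removed in (17): polynomials of degree <= 2m - 1 and
# monomials x^beta with |beta| = 2m having at least one odd exponent (beta in Lambda).
x1, x2, x3 = sp.symbols('x1 x2 x3')
lap = lambda g: sp.diff(g, x1, 2) + sp.diff(g, x2, 2) + sp.diff(g, x3, 2)

print(lap(lap(x1**3 * x2)))       # beta = (3,1,0) in Lambda:            0
print(lap(lap(x1**2 * x2)))       # degree 3 <= 2m - 1:                  0
print(lap(lap(x1**2 * x2**2)))    # beta = (2,2,0), all exponents even:  8, not 0
```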
Recall
$(\mathcal{C}_{0}^{2m,\alpha}(\bar{\mathbf{B}}_{R}),\parallel\cdot\parallel_{\alpha}^{(2m)})$
is a Banach space. We now have constructed an operator between two Banach
spaces as follows.
$\theta:(\mathcal{C}^{2m,\alpha}_{0}(\bar{\mathbf{B}}_{R}))^{N}\rightarrow(\mathcal{C}^{2m,\alpha}_{0}(\bar{\mathbf{B}}_{R}))^{N}$
with the corresponding norm
$\parallel f\parallel_{\alpha}^{(2m)}=\max_{1\leq j\leq N}\parallel
f_{j}\parallel_{\alpha}^{(2m)}.$
The ball of radius $\gamma$ in $(\mathcal{C}^{2m,\alpha}_{0}({\mathbf{B}}_{R}))^{N}$ is denoted by
$\mathcal{B}(R,\gamma):=\\{f\in(\mathcal{C}^{2m,\alpha}_{0}({\mathbf{B}}_{R}))^{N}:\parallel f\parallel_{\alpha}^{(2m)}<\gamma\\}.$
On the other hand, recall a function $u\in\mathcal{C}^{2k}$ is called
$k$-harmonic if $\triangle^{k}u=0$. Given $h=(h_{1},\ldots,h_{N})$ with
$h_{j}$ any homogeneous $m$-harmonic polynomial of degree $2m$ and for any
$f\in(\mathcal{C}^{2m,\alpha}_{0}({\mathbf{B}}_{R}))^{N}$, consider
$\theta_{h}(f)=h+\theta(f).$
Then $\theta_{h}(f)\in(\mathcal{C}_{0}^{2m,\alpha}({\mathbf{B}}_{R}))^{N}$,
$\triangle^{m}\theta_{h}(f)(x)=\triangle^{m}\theta(f)(x)=a(x,f(x),\nabla
f(x),\ldots,\nabla^{2m}f(x))$ in ${\mathbf{B}}_{R}$ while the $2m$ jets
$D^{\beta}\theta_{h}(f)(0)$ with $\beta\in\Lambda$ coincide with those of the
given $h$.
We will seek the solutions to (1) by making use of the fixed point theorem.
Indeed, we first show there exist $\gamma>0$ and $R>0$ such that
$\theta:\mathcal{B}(R,\gamma)\rightarrow\mathcal{B}(R,\frac{\gamma}{2})$ and
$\theta$ is a contraction map. We then pick some nontrivial $h$ as above with
$h\in\mathcal{B}(R,\frac{\gamma}{2})$ and consider the corresponding operator
$\theta_{h}$. Consequently,
$\theta_{h}:\mathcal{B}(R,\gamma)\rightarrow\mathcal{B}(R,\gamma)$ and is a
contraction map. As an application of the fixed point theorem, there exists
some $u\in(\mathcal{C}_{0}^{2m,\alpha}({\mathbf{B}}_{R}))^{N}$ such that
$\theta_{h}(u)=u$. This $u$ apparently satisfies
$\triangle^{m}u=\triangle^{m}\theta_{h}(u)=a(\cdot,u,\nabla
u,\ldots,\nabla^{2m}u)$ in ${\mathbf{B}}_{R}$ and hence is a solution to (1)
over ${\mathbf{B}}_{R}$.
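The structure of this scheme can be illustrated by the following one-dimensional toy analogue (a minimal sketch only, with $m=1$, $N=1$, and an arbitrarily chosen nonlinearity $a$ and seed $h$; it is not the construction used in the proofs), in which the kernel $(x-s)$ plays the role of the Newtonian potential, the linear seed $h$ stands in for the $m$-harmonic polynomial, and the successive differences of the iterates exhibit the contraction.

```python
import numpy as np

# One-dimensional toy analogue of the contraction scheme: the map
#     theta_h(u)(x) = h(x) + int_0^x (x - s) * a(s, u(s), u'(s)) ds
# is iterated on [0, R]; its fixed point solves u'' = a(x, u, u') with u(0) = 0,
# u'(0) = h'(0).  a, h and R below are illustrative choices.

def theta_h(u, x, a, h):
    du = np.gradient(u, x)
    rhs = a(x, u, du)
    integral = np.array([np.trapz((xi - x[:i + 1]) * rhs[:i + 1], x[:i + 1])
                         for i, xi in enumerate(x)])
    return h(x) + integral

R, n_grid = 0.3, 800
x = np.linspace(0.0, R, n_grid)
a = lambda x, u, du: u**2 + du**2          # a(0) = 0 and grad a(0) = 0
h = lambda x: 0.05 * x                      # annihilated by (d/dx)^2, like Delta^m h = 0
u = np.zeros_like(x)
for k in range(15):
    u_next = theta_h(u, x, a, h)
    print(k, np.max(np.abs(u_next - u)))   # successive differences shrink geometrically
    u = u_next
```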
###### Remark 7.1.
The construction of $\theta_{h}$ guarantees the solution $u$ obtained from the
fixed point theorem is not a trivial solution. More precisely, $u$ is of
vanishing order $2m$.
We divide our proof into two steps. In each of the steps, we need to utilize
Theorem 6.8 and then the estimates in [Pan2].
### 7.1 Estimate of $\parallel\theta(f)-\theta(g)\parallel_{\alpha}^{(2m)}$
First, we note from (17) that for $1\leq j\leq N$, for any
$f,g\in\mathcal{B}(R,\gamma)$,
$\begin{split}\parallel\theta_{j}(f)-\theta_{j}(g)\parallel_{\alpha}^{(2m)}\leq&\parallel\omega_{j}^{(m)}(f)-\omega_{j}^{(m)}(g)\parallel_{\alpha}^{(2m)}+\parallel\nabla^{2m}(\omega_{j}^{(m)}(f)-\omega_{j}^{(m)}(g))\parallel\\\
\leq&2\parallel\omega_{j}^{(m)}(f)-\omega_{j}^{(m)}(g)\parallel_{\alpha}^{(2m)}\\\
=&2\parallel\mathcal{N}(\omega^{(m-1)}_{j}(f))-\mathcal{N}(\omega^{(m-1)}_{j}(g))\parallel_{\alpha}^{(2m)}\\\
=&2\parallel\mathcal{N}\big{(}\omega^{(m-1)}_{j}(f)-\omega^{(m-1)}_{j}(g)\big{)}\parallel_{\alpha}^{(2m)}\\\
=&\cdots\\\ =&2\parallel\mathcal{N}^{m}\big{(}a_{j}(\cdot,f(\cdot),\nabla
f(\cdot),\ldots,\nabla^{2m}f(\cdot))-a_{j}(\cdot,g(\cdot),\nabla
g(\cdot),\ldots,\nabla^{2m}g(\cdot))\big{)}\parallel_{\alpha}^{(2m)}.\end{split}$
Making use of Theorem 6.8 into the above expression, we have then
$\begin{split}\parallel\theta_{j}(f)-\theta_{j}(g)\parallel_{\alpha}^{(2m)}\leq&C\parallel
a_{j}(\cdot,f(\cdot),\nabla
f(\cdot),\ldots,\nabla^{2m}f(\cdot))-a_{j}(\cdot,g(\cdot),\nabla
g(\cdot),\ldots,\nabla^{2m}g(\cdot))\parallel_{\alpha}.\end{split}$ (18)
We next recall, without proof, the estimates derived in Section 4.1 of [Pan2]. Note that, due to Lemma 2.2, when
$f\in\mathcal{B}(R,\gamma)$, then $\parallel\nabla^{j}f\parallel\leq
CR^{2m-j}\gamma$ for $0\leq j\leq 2m$. Therefore, the argument of $a$, viewed as a point $(p_{-1},p_{0},p_{1},\ldots,p_{2m})$, takes values in $E:=\\{p_{-1}\in{\mathbf{B}}_{R},\ p_{j}\in{\mathbf{B}}_{CR^{2m-j}\gamma},\ 0\leq j\leq 2m\\}$ when $f\in\mathcal{B}(R,\gamma)$.
Denote by $A_{j}:=\sup_{E}|\nabla_{p_{j}}a|$,
$Q_{j}:=\sup\big{\\{}\frac{|\nabla_{p_{j}}a(x)-\nabla_{p_{j}}a(x^{\prime})|}{|x-x^{\prime}|^{\alpha}}:x,x^{\prime}\in
E\big{\\}}$ and $L_{j}:=\sup_{E}|\nabla^{2}_{p_{j}p_{2m}}a|$ with $-1\leq
j\leq 2m$. Therefore, for $-1\leq j\leq 2m$,
$\begin{split}A_{j}&\leq
C\parallel\nabla_{p_{j}}a\parallel_{\mathcal{C}(E)}\leq C\parallel
a\parallel_{\mathcal{C}^{1,\alpha}(E)},\\\ Q_{j}&\leq C\parallel
a\parallel_{\mathcal{C}^{1,\alpha}(E)},\\\ L_{j}&\leq
C\parallel\nabla_{p_{2m}}a\parallel_{\mathcal{C}^{1}(E)}\leq C\parallel
a\parallel_{\mathcal{C}^{2}(E)}.\end{split}$ (19)
Here $\parallel a\parallel_{\mathcal{C}^{1,\alpha}(E)}=\parallel
a\parallel_{\mathcal{C}^{1}}+\sup\big{\\{}\frac{|\nabla a(x)-\nabla
a(x^{\prime})|}{|x-x^{\prime}|^{\alpha}}:x,x^{\prime}\in E\big{\\}}$.
###### Lemma 7.2.
[Pan2] For any $f,g\in\mathcal{B}(R,\gamma)$, if $a\in\mathcal{C}^{2}$,
$\parallel a(\cdot,f(\cdot),\nabla
f(\cdot),\ldots,\nabla^{2m}f(\cdot))-a(\cdot,g(\cdot),\nabla
g(\cdot),\ldots,\nabla^{2m}g(\cdot))\parallel_{\alpha}\leq\delta(R,\gamma)\parallel
f-g\parallel_{\alpha},$
where
$\delta(R,\gamma)=C\sum_{j=0}^{2m}R^{2m-j}\big{(}A_{j}+R^{\alpha}(1+R^{\alpha}\gamma^{\alpha}+\gamma)Q_{j}+\gamma
L_{j}\big{)}.$ (20)
Moreover, if $a$ is independent of $p_{2m}$, then when
$a\in\mathcal{C}^{1,\alpha}$,
$\delta(R,\gamma)=C\sum_{j=0}^{2m-1}R^{2m-j}\big{(}A_{j}+R^{\alpha}(1+R^{\alpha}\gamma^{\alpha}+\gamma)Q_{j}\big{)}.$
(21)
We then have obtained from (18), by using Lemma 7.2 that
$\parallel\theta(f)-\theta(g)\parallel_{\alpha}^{(2m)}\leq\delta(R,\gamma)\parallel f-g\parallel_{\alpha},$ (22)
with $\delta(R,\gamma)$ given in (20) or (21).
### 7.2 Estimate of $\parallel\theta(f)\parallel_{\alpha}^{(2m)}$
Similarly, for $f\in\mathcal{B}(R,\gamma)$, $1\leq j\leq N$,
$\begin{split}\parallel\theta_{j}(f)\parallel_{\alpha}^{(2m)}\leq&\parallel\omega_{j}^{(m)}(f)\parallel_{\alpha}^{(2m)}+\parallel\nabla^{2m}(\omega_{j}^{(m)}(f))\parallel\\\
\leq&2\parallel\omega_{j}^{(m)}(f)\parallel_{\alpha}^{(2m)}\\\
=&2\parallel\mathcal{N}^{m}\big{(}a_{j}(\cdot,f(\cdot),\nabla
f(\cdot),\ldots,\nabla^{2m}f(\cdot))\big{)}\parallel_{\alpha}^{(2m)}\\\
\leq&C\parallel a_{j}(\cdot,f(\cdot),\nabla
f(\cdot),\ldots,\nabla^{2m}f(\cdot))\parallel_{\alpha}.\end{split}$ (23)
According to the estimates in Section 4.2 of [Pan2], we have the following.
###### Lemma 7.3.
[Pan2] For any $f\in\mathcal{B}(R,\gamma)$, if $a\in\mathcal{C}^{2}$,
$\parallel a(\cdot,f(\cdot),\nabla
f(\cdot),\ldots,\nabla^{2m}f(\cdot))\parallel_{\alpha}\leq\eta(R,\gamma),$
where
$\eta(R,\gamma)=|a(0)|+C\big{(}R\big{(}A_{-1}+R^{\alpha}(1+R^{\alpha}\gamma^{\alpha}+\gamma)Q_{-1}+\gamma L_{-1}\big{)}+\gamma\delta(R,\gamma)\big{)}$ (24)
with $\delta(R,\gamma)$ given in (20).
Moreover, if $a$ is independent of $p_{2m}$, then when
$a\in\mathcal{C}^{1,\alpha}$,
$\eta(R,\gamma)=|a(0)|+C\big{(}R\big{(}A_{-1}+R^{\alpha}(1+R^{\alpha}\gamma^{\alpha}+\gamma)Q_{-1}\big{)}+\gamma\delta(R,\gamma)\big{)}.$
(25)
with $\delta(R,\gamma)$ given in (21).
Combining Lemma 7.3 and (23), we have
$\parallel\theta(f)\parallel_{\alpha}^{(2m)}\leq\eta(R,\gamma),$ (26)
with $\eta(R,\gamma)$ given in (24) or in (25).
## 8 Proof of Theorem 1.2
We now prove a slightly more general result than the main theorems following
[Pan2].
###### Theorem 8.1.
Let $a\in\mathcal{C}^{2}$ and $a(0)=0$. There is a constant $\delta<1$, depending only on $n,N$ and $\alpha$, such that when
$\begin{split}&|\nabla_{p_{2m}}a(0)|+|\nabla^{2}_{p_{2m}p_{2m}}a(0)|\leq\delta,\end{split}$
the system (1) has, in some small neighborhood of the origin, infinitely many solutions in $\mathcal{C}^{2m,\alpha}$ of vanishing order $2m$ at the origin.
Proof of Theorem 8.1: Our goal is to show $\theta$ sends
$\mathcal{B}(R,\gamma)$ into $\mathcal{B}(R,\frac{\gamma}{2})$ for some
positive $R$ and $\gamma$ and is a contraction map on
$\mathcal{B}(R,\gamma)$. In other words, we show there exist $\gamma>0$ and
$R>0$ such that for any $f,g\in\mathcal{B}(R,\gamma)$,
$\parallel\theta(f)-\theta(g)\parallel_{\alpha}^{(2m)}\leq c\parallel
f-g\parallel_{\alpha}^{(2m)}\ \ \text{with}\ c<1$
and
$\parallel\theta(f)\parallel_{\alpha}^{(2m)}<\frac{\gamma}{2}.$
From (22) and (26), it boils down to showing that there exist $\gamma>0$ and $R>0$
such that
$\begin{split}\delta(R,\gamma)&\leq c<1\\\
\eta(R,\gamma)&<\frac{\gamma}{2}.\end{split}$ (27)
Set $\tau:=|\nabla_{p_{2m}}a(0)|+|\nabla^{2}_{p_{2m}p_{2m}}a(0)|$, use
$\epsilon_{\gamma}(R)$ to represent a constant converging to 0 as
$R\rightarrow 0$ for each fixed $\gamma$, and use $\epsilon(R+\gamma)$ to
represent a constant converging to 0 as both $R$ and $\gamma$ go to 0. Then by
continuity of $a$,
$A_{2m}\leq\tau+\epsilon(R+\gamma),\ \ Q_{2m}\leq C\tau+\epsilon(R+\gamma),\ \
L_{2m}\leq\tau+\epsilon(R+\gamma).$
(20) and (24) can hence be written as
$\displaystyle\delta(R,\gamma)$
$\displaystyle=C_{a}\tau(1+\gamma)+\epsilon_{\gamma}(R)+\epsilon(R+\gamma),$
(28) $\displaystyle\eta(R,\gamma)$
$\displaystyle=C_{a}\gamma\delta(R,\gamma)+\epsilon_{\gamma}(R).$ (29)
with $C_{a}$ dependent on $\parallel a\parallel_{\mathcal{C}^{2}(E)}$.
First, for each $\gamma$, choose $R_{0}$ such that
$\epsilon_{\gamma}(R)\leq\frac{\gamma}{4}$ when $R\leq R_{0}$ in (29). Then we
will choose $\gamma$ and $R$ small enough so $\delta(R,\gamma)\leq
c:=\min\\{\frac{1}{4C_{a}\gamma},\frac{1}{2}\\}<1$. Indeed, by choosing
$\gamma(\leq 1)$ and $R$ small, we can make
$\epsilon_{\gamma}(R)+\epsilon(R+\gamma)<\frac{c}{2}$ in (28) and hence
$\delta(R,\gamma)<2C_{a}\tau+\frac{c}{2}.$
When $\tau\leq\frac{c}{8C_{a}}$, (27) thus holds.
Now recall $\Lambda=\\{\beta:|\beta|=2m,\ \text{ and at least one of}\
\beta_{j}\ \text{is odd for}\ 1\leq j\leq n\\}$. For $R$ and $\gamma$ chosen
as above, pick $h(x)=bx^{\beta}$ with $\beta\in\Lambda$, and take $b>0$ small
enough such that $\parallel h\parallel_{\alpha}^{(2m)}<\frac{\gamma}{2}$ and
hence $h\in\mathcal{B}(R,\frac{\gamma}{2})$. Consider the operator
$\theta_{h}(f):=h+\theta(f)$. Then
$\theta_{h}:\mathcal{B}(R,\gamma)\rightarrow\mathcal{B}(R,\gamma)$ forms a
contraction map by the construction. By the Banach fixed point theorem, there is some $u\in\mathcal{B}(R,\gamma)$ such that $\theta_{h}(u)=u$.
$u$ thus solves the system (1) in the class $\mathcal{C}^{2m,\alpha}$ and is
of vanishing order $2m$ by the construction.
###### Remark 8.2.
None of the solutions constructed in the proof of Theorem 8.1 is radially
symmetric, i.e., none of them is obtained by reducing the system (1)
into an ODE system with respect to the radial variable $r=|x|$ only. Indeed,
if the solution $u(x)=u(r)\in\mathcal{C}_{0}^{2m,\alpha}$, then near 0,
$u(r)=er^{2m}+o(r^{2m})$ for some constant $e$. In particular,
$D^{\beta}u(0)=0$ for all $\beta\in\Lambda$. This apparently can not happen
because from the construction, $h=bx^{\beta_{0}}$ with some
$\beta_{0}\in\Lambda$ and $D^{\beta_{0}}u(0)=D^{\beta_{0}}h(0)\neq 0$.
Proof of Theorem 1.2: Theorem 1.2 is a consequence of Theorem 8.1 and Remark
8.2.
## 9 Proof of Theorem 1.1
When $c_{j}=0,0\leq j\leq 2m-1$ and $a$ is independent of $p_{2m}$,
$A_{2m},Q_{2m}$ and $L_{j}(-1\leq j\leq 2m)$ are all 0 and so (22) and (26)
become
$\begin{split}\delta(R,\gamma)&\leq\epsilon_{\gamma}(R),\\\
\eta(R,\gamma)&\leq|a(0)|+\epsilon_{\gamma}(R).\end{split}$
Here we only need $\mathcal{C}^{1,\alpha}$ regularity for $a$ from the
estimates (21) and (25). Now we choose some positive $\gamma_{0}$ so that
$\gamma_{0}>4|a(0)|$. We then choose $R$ sufficiently small so that
$\epsilon_{\gamma_{0}}(R)\leq
c:=\min\\{\frac{1}{2},\frac{\gamma_{0}}{4}\\}<1$. Hence
$\begin{split}\delta(R,\gamma_{0})&\leq c<1;\\\
\eta(R,\gamma_{0})&<\frac{\gamma_{0}}{2}.\end{split}$
Applying the same strategy as in the proof of Theorem 8.1, we can find a
solution $u\in\mathcal{B}(R,\gamma_{0})$ to the system (2) which is not
radially symmetric.
For general given $c_{\beta}$’s with multi-indices $\beta$, we write
$T_{2m-1}(x):=\sum_{|\beta|\leq 2m-1}\frac{c_{\beta}}{\beta!}x^{\beta}$. Consider
the new system
$\begin{split}&\triangle^{m}\tilde{u}(x)=a\big{(}x,\tilde{u}+T_{2m-1}(x),\nabla(\tilde{u}+T_{2m-1}(x)),\ldots,\nabla^{2m-1}(\tilde{u}+T_{2m-1}(x))\big{)};\\\
&D^{\beta}\tilde{u}(0)=0,\ \ 0\leq|\beta|\leq 2m-1.\\\ \end{split}$
This is a system with all the initial values equal to $0$. We then obtain some
solution $\tilde{u}$ in the class of $\mathcal{C}^{2m,\alpha}$ in some small
neighborhood of 0. Then $u=\tilde{u}+T_{2m-1}$ solves the system (2) in the
class of $\mathcal{C}^{2m,\alpha}$ in some small neighborhood of 0.
Apparently, the solution obtained in this way is of vanishing order at most
$2m$. Moreover, $u$ is not radially symmetric since $\tilde{u}$ is not.
## 10 Proof of Theorem 1.3
Since $a$ is independent of $x$ and $a(0)=0$, $A_{-1}$, $Q_{-1}$ and $L_{-1}$
are 0 and hence in (24),
$\eta(R,\gamma)\leq C_{a}\gamma\delta(R,\gamma).$
In order to prove Theorem 1.3, we need to show for any fixed $R>0$, there
exists some $\gamma_{0}>0$ such that
$\begin{split}\delta(R,\gamma_{0})<1;\\\
\eta(R,\gamma_{0})<\frac{\gamma_{0}}{2},\end{split}$
which is equivalent to showing
$\delta(R,\gamma_{0})\leq c:=\min\\{\frac{1}{2},\frac{1}{2C_{a}}\\}<1.$ (30)
Indeed, since $\nabla a(0)=0$, we have $a\in\mathcal{C}^{2,0}_{0}(E)$ and
hence by Lemma 2.2, for $0\leq j\leq 2m$,
$A_{j}\leq
C\parallel\nabla_{p_{j}}a\parallel_{\mathcal{C}^{1}(E)}R^{2m-j}\gamma\leq
C\parallel a\parallel_{\mathcal{C}^{2}(E)}R^{2m-j}\gamma.$
On the other hand, we also have by definition, for $0\leq j\leq 2m$,
$\begin{split}Q_{j}&\leq
C\parallel\nabla_{p_{j}}a\parallel_{\mathcal{C}^{1}(E)}(R^{2m-j}\gamma)^{1-\alpha}\leq
C\parallel a\parallel_{\mathcal{C}^{2}(E)}(R^{2m-j}\gamma)^{1-\alpha},\\\
L_{j}&\leq C\parallel\nabla_{p_{j}}a\parallel_{\mathcal{C}^{1}(E)}\leq
C\parallel a\parallel_{\mathcal{C}^{2}(E)}.\end{split}$
Therefore, (20) can be written as
$\delta(R,\gamma)=\epsilon_{R}(\gamma),$
where $\epsilon_{R}(\gamma)$ represents some function converging to 0 as
$\gamma$ goes to 0 for each fixed $R>0$. (30) is thus true and the proof of
Theorem 1.3 is complete.
Appendix
## Appendix A Higher order derivatives of the Newtonian potential
We derive the higher order derivatives of the Newtonian potential following
[GT]. Let $\Omega\subset{\mathbb{R}}^{n}$ be bounded.
###### Definition A.1.
Given two multi-indices $\beta$ and $\mu$, and $j$ with $1\leq j\leq n$, we
define for $x\in\Omega$,
$\mathcal{I}_{\Omega}(\beta,\mu,j)(x):=\int_{\partial\Omega}D^{\beta}_{x}\Gamma(x-y)(y-x)^{\mu}\nu_{j}d\sigma_{y},$
where $d\sigma_{y}$ is the surface area element of $\partial\Omega$ with the
unit outer normal $(\nu_{1},\ldots,\nu_{n})$.
It is clear that
$\mathcal{I}_{\Omega}(\beta,\mu,j)\in\mathcal{C}^{\infty}(\Omega)$.
###### Theorem A.2.
Let $\beta$ be a multi-index with $|\beta|=k+2$. Let $\\{\beta^{(j)}\\}$ be a
continuously increasing nesting of length $k+2$ for $\beta$ and let
$\beta^{(j)^{\prime}}$ be the dual of $\beta^{(j)}$ with respect to $\beta$
for $1\leq j\leq k+2$. Then given a bounded and locally
$\mathcal{C}^{k,\alpha}$ function $f$ in $\Omega$ and for any $x\in\Omega$,
$\begin{split}D^{\beta}\mathcal{N}(f)(x)=&\int_{\Omega^{\prime}}D^{\beta}_{x}\Gamma(x-y)\big{(}f(y)-T^{x}_{k}(f)(y)\big{)}dy\\\
&-\sum_{j=2}^{k+2}D^{\beta^{(j)^{\prime}}}\big{(}\sum_{|\mu|=j-2}\frac{D^{\mu}f(x)}{\mu!}\mathcal{I}_{\Omega^{\prime}}(\beta^{(j-1)},\mu,\beta^{(j)}-\beta^{(j-1)})(x)\big{)}.\end{split}$
(31)
Here $\Omega^{\prime}\supset\bar{\Omega}$ is such that Stokes’ Theorem holds
on $\Omega^{\prime}$ and $f$ extends to vanish on
$\Omega^{\prime}\setminus\Omega$.
Proof of Theorem A.2: The theorem is proved by induction on $k$. When $k=0$,
the theorem reduces to the case in [GT]. Assume (31) is true for $k=k_{0}\geq
0$, i.e., for any $f\in\mathcal{C}^{k_{0},\alpha}$, and any $\beta$ with
$|\beta|=k_{0}+2$,
$\begin{split}D^{\beta}\mathcal{N}(f)(x)=&\int_{\Omega^{\prime}}D^{\beta}_{x}\Gamma(x-y)\big{(}f(y)-T^{x}_{k_{0}}(f)(y)\big{)}dy\\\
&-\sum_{j=2}^{{k_{0}}+2}D^{\beta^{(j)^{\prime}}}\big{(}\sum_{|\mu|=j-2}\frac{D^{\mu}f(x)}{\mu!}\mathcal{I}_{\Omega^{\prime}}(\beta^{(j-1)},\mu,\beta^{(j)}-\beta^{(j-1)})(x)\big{)}.\end{split}$
(32)
We want to show it is true for $k=k_{0}+1$. Namely, for any $\beta$ with
$|\beta|=k_{0}+3$ and $f\in\mathcal{C}^{k_{0}+1,\alpha}$,
$\begin{split}D^{\beta}\mathcal{N}(f)(x)=&\int_{\Omega^{\prime}}D^{\beta}_{x}\Gamma(x-y)\big{(}f(y)-T^{x}_{k_{0}+1}(f)(y)\big{)}dy\\\
&-\sum_{j=2}^{k_{0}+3}D^{\beta^{(j)^{\prime}}}\big{(}\sum_{|\mu|=j-2}\frac{D^{\mu}f(x)}{\mu!}\mathcal{I}_{\Omega^{\prime}}(\beta^{(j-1)},\mu,\beta^{(j)}-\beta^{(j-1)})(x)\big{)}.\end{split}$
(33)
Without loss of generality, assume
$D^{\beta}=\partial_{1}D^{{\beta^{(k_{0}+2)}}}$ with
$|{\beta^{(k_{0}+2)}}|=k_{0}+2$. Let
$\begin{split}v_{\epsilon}(x)=&\int_{\Omega^{\prime}}D^{{\beta^{(k_{0}+2)}}}_{x}\Gamma(x-y)\eta_{\epsilon}(x-y)\big{(}f(y)-T^{x}_{k_{0}}(f)(y)\big{)}dy\\\
&-\sum_{j=2}^{{k_{0}}+2}D^{\beta^{(k_{0}+2)}-\beta^{(j)}}\big{(}\sum_{|\mu|=j-2}\frac{D^{\mu}f(x)}{\mu!}\mathcal{I}_{\Omega^{\prime}}(\beta^{(j-1)},\mu,\beta^{(j)}-\beta^{(j-1)})(x)\big{)},\end{split}$
where $\eta_{\epsilon}(x-y)=\eta(\frac{|x-y|}{\epsilon})$ with $\eta$ some
smooth increasing function such that $\eta(t)=0$ when $t\leq 1$ and
$\eta(t)=1$ when $t\geq 2$. Here we choose
$\epsilon\leq\frac{dist\\{\Omega^{\prime c},\Omega\\}}{2}$. When
$\epsilon\rightarrow 0$, $v_{\epsilon}(x)\rightarrow
D^{\beta^{(k_{0}+2)}}\mathcal{N}(f)(x)$ for all $x\in\Omega$ by the induction hypothesis (32).
Now consider
$\begin{split}\partial_{1}v_{\epsilon}(x)=&-\int_{\Omega^{\prime}}\partial_{1}\big{(}D^{{\beta^{(k_{0}+2)}}}_{x}\Gamma(x-y)\eta_{\epsilon}(x-y)\big{)}\big{(}f(y)-T^{x}_{k_{0}}(f)(y)\big{)}dy\\\
&+\int_{\Omega^{\prime}}D^{{\beta^{(k_{0}+2)}}}_{x}\Gamma(x-y)\eta_{\epsilon}(x-y)\partial_{x_{1}}\big{(}f(y)-T^{x}_{k_{0}}(f)(y)\big{)}dy\\\
&-\partial_{1}\big{[}\sum_{j=2}^{{k_{0}}+2}D^{\beta^{(k_{0}+2)}-\beta^{(j)}}\big{(}\sum_{|\mu|=j-2}\frac{D^{\mu}f(x)}{\mu!}\mathcal{I}_{\Omega^{\prime}}(\beta^{(j-1)},\mu,\beta^{(j)}-\beta^{(j-1)})(x)\big{)}\big{]}\\\
=&A+B-\sum_{j=2}^{{k_{0}}+2}D^{\beta^{(j)^{\prime}}}\big{(}\sum_{|\mu|=j-2}\frac{D^{\mu}f(x)}{\mu!}\mathcal{I}_{\Omega^{\prime}}(\beta^{(j-1)},\mu,\beta^{(j)}-\beta^{(j-1)})(x)\big{)}.\end{split}$
(34)
Here
$A:=-\int_{\Omega^{\prime}}\partial_{1}\big{(}D^{{\beta^{(k_{0}+2)}}}_{x}\Gamma(x-y)\eta_{\epsilon}(x-y)\big{)}\big{(}f(y)-T^{x}_{k_{0}}(f)(y)\big{)}dy$
and
$B:=\int_{\Omega^{\prime}}D^{{\beta^{(k_{0}+2)}}}_{x}\Gamma(x-y)\eta_{\epsilon}(x-y)\partial_{x_{1}}\big{(}f(y)-T^{x}_{k_{0}}(f)(y)\big{)}dy$.
We will show as $\epsilon\rightarrow 0$, for all $x\in\Omega$,
$\begin{split}A+B\rightarrow&\int_{\Omega^{\prime}}D^{\beta}_{x}\Gamma(x-y)\big{(}f(y)-T^{x}_{k_{0}+1}(f)(y)\big{)}dy\\\
&-\sum_{|\mu|=k_{0}+1}\frac{D^{\mu}f(x)}{\mu!}\mathcal{I}_{\Omega^{\prime}}(\beta^{(k_{0}+2)},\mu,\beta^{(k_{0}+3)}-\beta^{(k_{0}+2)})(x).\end{split}$
(35)
(34) thus gives for $x\in\Omega$,
$\begin{split}\partial_{1}v_{\epsilon}(x)\rightarrow&\int_{\Omega^{\prime}}D^{\beta}_{x}\Gamma(x-y)\big{(}f(y)-T^{x}_{k_{0}+1}(f)(y)\big{)}dy\\\
&-\sum_{j=2}^{k_{0}+3}D^{\beta^{(j)^{\prime}}}\big{(}\sum_{|\mu|=j-2}\frac{D^{\mu}f(x)}{\mu!}\mathcal{I}_{\Omega^{\prime}}(\beta^{(j-1)},\mu,\beta^{(j)}-\beta^{(j-1)})(x)\big{)}.\end{split}$
and hence (33) follows.
For $A$,
$\begin{split}A=&-\int_{\Omega^{\prime}}\partial_{1}\big{(}D^{{\beta^{(k_{0}+2)}}}_{x}\Gamma(x-y)\eta_{\epsilon}(x-y)\big{)}\big{(}f(y)-T^{x}_{k_{0}+1}(f)(y)\big{)}dy\\\
&-\sum_{|\mu|=k_{0}+1}\frac{D^{\mu}f(x)}{\mu!}\int_{\Omega^{\prime}}\partial_{1}\big{(}D^{{\beta^{(k_{0}+2)}}}_{x}\Gamma(x-y)\eta_{\epsilon}(x-y)\big{)}(y-x)^{\mu}dy.\end{split}$
Applying Stokes’ Theorem to the second term of the above expression, we then
have
$\begin{split}A=&-\int_{\Omega^{\prime}}\partial_{1}\big{(}D^{{\beta^{(k_{0}+2)}}}_{x}\Gamma(x-y)\eta_{\epsilon}(x-y)\big{)}\big{(}f(y)-T^{x}_{k_{0}+1}(f)(y)\big{)}dy\\\
&-\sum_{|\mu|=k_{0}+1}\frac{D^{\mu}f(x)}{\mu!}\int_{\partial\Omega^{\prime}}D^{{\beta^{(k_{0}+2)}}}_{x}\Gamma(x-y)\eta_{\epsilon}(x-y)(y-x)^{\mu}\nu_{1}d\sigma_{y}\\\
&+\sum_{|\mu|=k_{0}+1}\frac{D^{\mu}f(x)}{\mu!}\int_{\Omega^{\prime}}D_{x}^{{\beta^{(k_{0}+2)}}}\Gamma(x-y)\eta_{\epsilon}(x-y)\partial_{1}(y-x)^{\mu}dy.\end{split}$
On the other hand,
$\begin{split}B=&-\int_{\Omega^{\prime}}D^{{\beta^{(k_{0}+2)}}}_{x}\Gamma(x-y)\eta_{\epsilon}(x-y)\partial_{x_{1}}\big{(}T^{x}_{k_{0}}(f)(y)\big{)}dy.\end{split}$
Therefore
$\begin{split}A+B=&-\int_{\Omega^{\prime}}\partial_{1}\big{(}D^{{\beta^{(k_{0}+2)}}}_{x}\Gamma(x-y)\eta_{\epsilon}(x-y)\big{)}\big{(}f(y)-T^{x}_{k_{0}+1}(f)(y)\big{)}dy\\\
&-\sum_{|\mu|=k_{0}+1}\frac{D^{\mu}f(x)}{\mu!}\int_{\partial\Omega^{\prime}}D^{{\beta^{(k_{0}+2)}}}_{x}\Gamma(x-y)\eta_{\epsilon}(x-y)(y-x)^{\mu}\nu_{1}d\sigma_{y}\\\
&+\int_{\Omega^{\prime}}D_{x}^{{\beta^{(k_{0}+2)}}}\Gamma(x-y)\eta_{\epsilon}(x-y)\big{[}\sum_{|\mu|=k_{0}+1}\frac{D^{\mu}f(x)}{\mu!}\partial_{1}(y-x)^{\mu}-\partial_{x_{1}}\big{(}T^{x}_{k_{0}}(f)(y)\big{)}\big{]}dy\\\
=&I+II+III.\end{split}$
As $\epsilon\rightarrow 0$, for $x\in\Omega$,
$\begin{split}I\rightarrow&\int_{\Omega^{\prime}}D^{\beta}_{x}\Gamma(x-y)\big{(}f(y)-T^{x}_{k_{0}+1}(f)(y)\big{)}dy\\\
II\rightarrow&-\sum_{|\mu|=k_{0}+1}\frac{D^{\mu}f(x)}{\mu!}\int_{\partial\Omega^{\prime}}D^{{\beta^{(k_{0}+2)}}}_{x}\Gamma(x-y)(y-x)^{\mu}\nu_{1}d\sigma_{y}\\\
&=-\sum_{|\mu|=k_{0}+1}\frac{D^{\mu}f(x)}{\mu!}\mathcal{I}_{\Omega^{\prime}}({\beta^{(k_{0}+2)}},\mu,1)(x).\end{split}$
(36)
For $III$, notice $T_{k_{0}}^{x}(f)(y)=\sum_{|\mu|\leq
k_{0}}\frac{D^{\mu}f(x)(y-x)^{\mu}}{\mu!}$, so
$\begin{split}\partial_{x_{1}}\big{(}T^{x}_{k_{0}}(f)(y)\big{)}=&\sum_{|\mu|\leq
k_{0}}\frac{\partial_{1}D^{\mu}f(x)(y-x)^{\mu}}{\mu!}+\sum_{|\mu|\leq
k_{0}}\frac{D^{\mu}f(x)\partial_{x_{1}}(y-x)^{\mu}}{\mu!}\end{split}$
On the other hand, one observes
$\begin{split}\sum_{|\mu|\leq
k_{0}}\frac{\partial_{1}D^{\mu}f(x)(y-x)^{\mu}}{\mu!}=&\sum_{|\mu|=k_{0}+1}\frac{D^{\mu}f(x)}{\mu!}\partial_{1}(y-x)^{\mu}+\sum_{|\mu|\leq
k_{0}-1}\frac{\partial_{1}D^{\mu}f(x)(y-x)^{\mu}}{\mu!},\\\ \sum_{|\mu|\leq
k_{0}}\frac{D^{\mu}f(x)\partial_{x_{1}}(y-x)^{\mu}}{\mu!}=&-\sum_{|\mu|\leq
k_{0}-1}\frac{\partial_{1}D^{\mu}f(x)(y-x)^{\mu}}{\mu!}.\end{split}$
Hence
$\begin{split}\sum_{|\mu|=k_{0}+1}\frac{D^{\mu}f(x)}{\mu!}\partial_{1}(y-x)^{\mu}-\partial_{x_{1}}\big{(}T^{x}_{k_{0}}(f)(y)\big{)}=0,\end{split}$
and
$III=0.$ (37)
Combining (36) and (37), (35) thus holds.
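The cancellation that kills $III$ can also be checked symbolically; the following SymPy sketch (with a hypothetical test function $f$ in two variables and $k_{0}=2$) verifies that the degree-$(k_{0}+1)$ sum equals $\partial_{x_{1}}T^{x}_{k_{0}}(f)(y)$:
```python
# Sketch: for a sample smooth f, check the identity used above, i.e.
#   sum_{|mu|=k0+1} D^mu f(x)/mu! * d/dy1 (y-x)^mu  =  d/dx1 T^x_{k0}(f)(y),
# so that the bracket in III vanishes identically.
import itertools
import math
import sympy as sp

x1, x2, y1, y2 = sp.symbols('x1 x2 y1 y2')
f = sp.exp(x1) * sp.cos(x2) + x1**3 * x2     # hypothetical test function
k0 = 2

def D(expr, mu):
    for s, k in zip((x1, x2), mu):
        if k:
            expr = sp.diff(expr, s, k)
    return expr

def indices(k):
    return [mu for mu in itertools.product(range(k + 1), repeat=2) if sum(mu) == k]

mono = lambda mu: (y1 - x1)**mu[0] * (y2 - x2)**mu[1]
fact = lambda mu: math.factorial(mu[0]) * math.factorial(mu[1])

T = sum(D(f, mu) * mono(mu) / fact(mu)
        for k in range(k0 + 1) for mu in indices(k))          # T^x_{k0}(f)(y)

lhs = sum(D(f, mu) / fact(mu) * sp.diff(mono(mu), y1) for mu in indices(k0 + 1))
rhs = sp.diff(T, x1)

print(sp.simplify(lhs - rhs))    # 0, so III = 0 as claimed
```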
## Appendix B Computation of $\mathcal{I}_{B_{1}}(0,0,1)$
We will compute
$\mathcal{I}_{B_{1}}(0,0,1)(x):=\int_{\partial{\mathbf{B}}_{1}}\Gamma(x-y)\nu_{1}d\sigma_{y}$
for $x\in{\mathbf{B}}_{1}$.
Write $x=U\cdot[a,0,\ldots,0]^{t}$, where $U=(u_{ij})_{1\leq i,j\leq n}$ is
some orthogonal matrix and $a=|x|$, and then make a change of coordinates by
letting $y=U\cdot\tilde{y}$ in the expression of $\mathcal{I}_{B_{1}}(0,0,1)$.
We then get
$\mathcal{I}_{B_{1}}(0,0,1)=\int_{\partial{\mathbf{B}}_{1}}\frac{\sum_{1\leq
j\leq
n}u_{1j}\tilde{y}_{j}}{\sqrt{(a-\tilde{y}_{1})^{2}+\tilde{y}_{2}^{2}+\cdots+\tilde{y}_{n}^{2}}^{n-2}}d\sigma_{\tilde{y}}.$
Writing $y$ in place of $\tilde{y}$, with a slight abuse of notation, we then have
$\begin{split}\mathcal{I}_{B_{1}}(0,0,1)=&u_{11}\int_{\partial{\mathbf{B}}_{1}}\frac{y_{1}}{\sqrt{(a-y_{1})^{2}+y_{2}^{2}+\cdots+y_{n}^{2}}^{n-2}}d\sigma_{y}\\\
&+\sum_{2\leq j\leq
n}u_{1j}\int_{\partial{\mathbf{B}}_{1}}\frac{y_{j}}{\sqrt{(a-y_{1})^{2}+y_{2}^{2}+\cdots+y_{n}^{2}}^{n-2}}d\sigma_{y}\end{split}$
Since $\frac{y_{j}}{\sqrt{(a-y_{1})^{2}+y_{2}^{2}+\cdots+y_{n}^{2}}^{n-2}}$ is
odd with respect to $y_{j}$ when $j\geq 2$,
$\int_{\partial{\mathbf{B}}_{1}}\frac{y_{j}}{\sqrt{(a-y_{1})^{2}+y_{2}^{2}+\cdots+y_{n}^{2}}^{n-2}}d\sigma_{y}=0$
when $j\geq 2$ and hence
$\mathcal{I}_{B_{1}}(0,0,1)=u_{11}\int_{\partial{\mathbf{B}}_{1}}\frac{y_{1}}{\sqrt{(a-y_{1})^{2}+y_{2}^{2}+\cdots+y_{n}^{2}}^{n-2}}d\sigma_{y}.$
Next, rewriting the above integral in spherical coordinates, we obtain
$\begin{split}\mathcal{I}_{B_{1}}(0,0,1)=&\omega_{n-1}u_{11}\int_{-\frac{\pi}{2}}^{\frac{\pi}{2}}\frac{\sin
t\cos^{n-2}t}{\sqrt{(a-\sin t)^{2}+\cos^{2}t}^{n-2}}dt\\\
=&\omega_{n-1}u_{11}\int_{-1}^{1}\frac{u(1-u^{2})^{\frac{n-3}{2}}}{(1-2au+a^{2})^{\frac{n-2}{2}}}du\end{split}$
(38)
Here $\omega_{n-1}$ is the surface area of the unit sphere in
${\mathbb{R}}^{n-1}$.
In order to compute (38), we need to use Gegenbauer polynomials. Recall that for
each fixed $\rho$, the Gegenbauer polynomials $\\{C_{n}^{(\rho)}(x)\\}$ on
$[-1,1]\subset{\mathbb{R}}$ satisfy
$\frac{1}{(1-2xt+t^{2})^{\rho}}=\sum_{n=0}^{\infty}C_{n}^{(\rho)}(x)t^{n}$
in $(-1,1)$. In particular,
$\begin{split}C_{0}^{(\rho)}(x)&=1,\\\ C_{1}^{(\rho)}(x)&=2\rho x,\\\
C_{n}^{(\rho)}(x)&=\frac{1}{n}[2x(n+\rho-1)C_{n-1}^{(\rho)}(x)-(n+2\rho-2)C_{n-2}^{(\rho)}(x)].\end{split}$
Moreover, $\\{C_{n}^{(\rho)}(x)\\}$ are orthogonal polynomials on the interval
[-1,1] with respect to the weight function $(1-x^{2})^{\rho-\frac{1}{2}}$. In
other words,
$\begin{split}&\int_{-1}^{1}C_{n}^{(\rho)}(x)C_{m}^{(\rho)}(x)(1-x^{2})^{\rho-\frac{1}{2}}dx=0,\quad m\neq
n,\\\
&\int_{-1}^{1}[C_{n}^{(\rho)}(x)]^{2}(1-x^{2})^{\rho-\frac{1}{2}}dx=\frac{\pi
2^{1-2\rho}\Gamma(n+2\rho)}{n!(n+\rho)\Gamma(\rho)^{2}}.\end{split}$
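As a quick numerical sanity check (a sketch assuming SciPy is available; it plays no role in the argument), the recurrence and the normalization integral above can be verified for a sample value of $\rho$:
```python
# Sketch: check the three-term recurrence and the orthogonality/normalization
# relations for the Gegenbauer polynomials at a sample value rho = 3/2.
import math
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

rho = 1.5

def C(n, x):
    if n == 0:
        return 1.0
    if n == 1:
        return 2.0 * rho * x
    return (2.0 * x * (n + rho - 1.0) * C(n - 1, x)
            - (n + 2.0 * rho - 2.0) * C(n - 2, x)) / n

weight = lambda x: (1.0 - x * x) ** (rho - 0.5)

for n in range(4):
    for m in range(4):
        val, _ = quad(lambda x: C(n, x) * C(m, x) * weight(x), -1.0, 1.0)
        if n != m:
            assert abs(val) < 1e-8                     # orthogonality
        else:
            norm = (np.pi * 2.0 ** (1 - 2 * rho) * gamma(n + 2 * rho)
                    / (math.factorial(n) * (n + rho) * gamma(rho) ** 2))
            assert abs(val - norm) < 1e-8 * max(1.0, norm)
print("Gegenbauer recurrence and orthogonality relations verified for rho = 1.5")
```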
Letting $\rho=\frac{n-2}{2}$, we then have
$\frac{1}{(1-2au+a^{2})^{\frac{n-2}{2}}}=\sum_{k=0}^{\infty}C_{k}^{(\rho)}(u)a^{k}$
and $u=\frac{C_{1}^{(\rho)}(u)}{2\rho}$. (38) can hence be written as
$\begin{split}\mathcal{I}_{B_{1}}(0,0,1)=&\omega_{n-1}u_{11}\int_{-1}^{1}\sum_{k=0}^{\infty}C_{k}^{(\rho)}(u)a^{k}\cdot\frac{C_{1}^{(\rho)}(u)}{2\rho}(1-u^{2})^{\rho-\frac{1}{2}}du\\\
=&\omega_{n-1}u_{11}\int_{-1}^{1}C_{1}^{(\rho)}(u)a\cdot\frac{C_{1}^{(\rho)}(u)}{2\rho}(1-u^{2})^{\rho-\frac{1}{2}}du\\\
=&\omega_{n-1}u_{11}\frac{a}{2\rho}\frac{\pi
2^{1-2\rho}\Gamma(1+2\rho)}{(1+\rho)\Gamma(\rho)^{2}}\\\
=&\frac{4\pi^{\frac{n}{2}}}{n\Gamma(\frac{n-2}{2})}x_{1}.\end{split}$
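A direct numerical check of this closed form (a sketch assuming SciPy, with $\Gamma(x-y)=|x-y|^{2-n}$ as used above and a sample choice $n=5$, $x=(a,0,\ldots,0)$) is:
```python
# Sketch: compare the one-dimensional integral (38) against the closed form
# 4*pi^{n/2}/(n*Gamma((n-2)/2)) * x_1 for a sample dimension and sample point.
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

n = 5                      # sample dimension
a = 0.3                    # |x|; with x = (a, 0, ..., 0) we have u_11 = 1, x_1 = a
rho = (n - 2) / 2.0

# omega_{n-1}: area of the unit sphere in R^{n-1}
omega = 2.0 * np.pi ** ((n - 1) / 2.0) / gamma((n - 1) / 2.0)

integrand = lambda u: u * (1.0 - u * u) ** ((n - 3) / 2.0) / (1.0 - 2.0 * a * u + a * a) ** rho
val, _ = quad(integrand, -1.0, 1.0)

lhs = omega * val
rhs = 4.0 * np.pi ** (n / 2.0) / (n * gamma((n - 2) / 2.0)) * a
print(lhs, rhs)            # the two values agree to quadrature accuracy
```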
###### Remark B.1.
Using the same approach as above, together with the symmetry of the integrand
over the sphere, one can in practice compute
$\mathcal{I}_{{\mathbf{B}}_{R}}(\beta,\mu,j)$ for all $(\beta,\mu,j)$.
## References
* [BF] Beals, R. and Fefferman, C.: On local solvability of linear partial differential equations. Ann. of Math. (2)97(1973), 482-498.
* [De] Dencker, N.: The resolution of the Nirenberg-Treves conjecture. Ann. of Math. (4)163(2006), 405-444.
* [Fr] Fraenkel, L.: Introduction to maximum principles and symmetry in elliptic problems. 128, Cambridge University Press, 2000.
* [GT] Gilbarg, D. and Trudinger, N.: Elliptic partial differential equations of second order. Springer-Verlag, Berlin, 2001.
* [Ho1] Hormander, L.: Propagation of singularities and semiglobal existence theorems for (pseudo)differential operators of principal type. Ann. of Math. (1) (1962), 271-302.
* [Ho2] Hormander, L.: Pseudo-differential operators of principal type, in Singularities in Boundary Value Problems. Proc. NATO Adv. Study Inst. (Maratea, 1980), 69–96, NATO Adv. Study Inst. Ser. C: Math. Phys. Sci. 65, Reidel, Dordrecht, Boston, MA, 1981.
* [Ln] Lerner, N.: Sufficiency of condition (p) for local solvability in two dimensions. Ann. of Math. 128 (1988), 243-256.
* [Lw] Lewy, H.: An example of a smooth linear partial differential equation without solution. Ann. of Math. (6)66 (1957), 155-158.
* [Mo] Moyer, R.: Local solvability in two dimensions: Necessary conditions for the principal type case. Mimeographed manuscript, University of Kansas, 1978.
* [NT] Nirenberg, L. and Treves, F.: On local solvability of linear partial differential equations. Part I: Necessary conditions, Comm. Pure Appl. Math. 23 (1970), 1-38; Part II: Sufficient conditions, ibid. 23 (1970), 459-509; Correction, ibid. 24 (1971).
* [NW] Nijenhuis, A. and Woolf, W.: Some integration problems in almost-complex and complex manifolds. Ann. of Math. (3)77(1963), 426-489.
* [Os] Osserman, R.: On the inequality $\triangle u\geq f(u)$. Pacific J. Math. (4)(1957), 1641-1647.
* [Pan1] Pan, Y.: On existence theorems of nonlinear partial differential systems of dimension two. Preprint, 2012.
* [Pan2] Pan, Y.: On existence of non-radial solutions of nonlinear partial differential system of Poisson type in ${\mathbb{R}}^{n}$. Preprint, 2012.
* [SW] Stein, E. and Weiss, G.: Introduction to Fourier analysis on Euclidean spaces. Princeton Mathematical Series, No. 32. Princeton University Press, Princeton, N.J., 1971.
Yifei Pan, [email protected], Department of Mathematical Sciences, Indiana University - Purdue University, Fort Wayne, IN 46805, USA
Yuan Zhang, [email protected], Department of Mathematical Sciences, Indiana University - Purdue University, Fort Wayne, IN 46805, USA
# Extremal Myers-Perry black holes coupled to Born-Infeld electrodynamics in
five dimensions
Masoud [email protected], José P. S.
[email protected], Ahmad Sheykhi2,[email protected] 1
Centro Multidisciplinar de Astrofísica - CENTRA, Departamento de Física,
Instituto Superior Técnico - IST, Universidade Técnica de Lisboa - Av. Rovisco
Pais 1, 1049-001 Lisboa, Portugal.
2 Physics Department and Biruni Observatory, College of Sciences, Shiraz
University, Shiraz 71454, Iran
3 Research Institute for Astronomy and Astrophysics of Maragha (RIAAM), P.O.
Box 55134-441, Maragha, Iran
###### Abstract
We construct a new class of perturbative extremal charged rotating black hole
solutions in five dimensions in the Einstein–Born-Infeld theory. We start with
an extremal five-dimensional Myers-Perry black hole seed in which the two
possible angular momenta have equal magnitude, and add Born-Infeld electrical
charge keeping the extremality condition as a constraint. The perturbative
parameter is assumed to be the electric charge $q$ and the perturbations are
performed up to 4th order. We also study some physical properties of these
black holes. It is shown that the perturbative parameter $q$ and the Born-
Infeld parameter $\beta$ modify the values of the physical quantities of the
black holes. The solution suggests that the magnetic moment and gyromagnetic
ratio of the black hole spacetime change sign when the Born-Infeld character
of the solution starts to depart strongly from the Maxwell limit. We venture
an interpretation for the effect.
## I Introduction
The field of black hole solutions in higher dimensions was started in 1963 by
Tangherlini Tan . After some discussion of the old problem of the
dimensionality of space he proceeded to find static spherically symmetric
solutions of the $d$-dimensional Einstein-Maxwell equations, with $d\geq 4$,
generalizing thus the Schwarzschild and the Reissner-Nordström solutions.
Tangherlini also included a cosmological term finding the corresponding de
Sitter (dS) and anti-de Sitter (AdS) $d$-dimensional black holes Tan .
Since then, with the belief that at a more fundamental level Einstein’s theory
has to be modified and extra dimensions might be crucial in the process,
static higher-dimensional black holes solutions have been devised in
gravitational theories such as Kaluza-Klein, Gauss-Bonnet theory and its
Lovelock extension, scalar-tensor gravity, supergravity, and low-energy string
theory. Likewise, there have been attempts to modify Maxwell’s electromagnetism
in order to deal in a more consistent manner with the point particle problem
and its field divergences. An important modification is given by the Born-
Infeld nonlinear electrodynamics. Although it became less prominent with the
appearance of quantum electrodynamics and its renormalization scheme, the
Born-Infeld theory has now been appearing repeatedly with the development of
string theory, where the dynamics of D-branes is governed by a Born-Infeld
type action. It is thus natural to study higher dimensional black holes in
these extended gravity-electromagnetic set ups.
For the static case, several examples of higher dimensional black holes can be
given. Black holes in Lovelock gravity and Maxwell electromagnetism were
studied in banteitzadilemos . Black hole solutions in Einstein-Born-Infeld
gravity are less singular in comparison with the Tangherlini-Reissner-
Nordström solution and such solutions with and without a cosmological constant
have been discussed in general relativity Garcia1 ; Wiltshire1 ; Aiello1 ;
Tamaki1 ; Dey1 ; Gunasekaran , and in Gauss-Bonnet GB and Lovelock Lov
gravities. The extension to the cases where the horizon has zero or negative
curvature has also been considered Cai1 . Scalar-tensor theories of gravity
coupled to Born-Infeld black hole solutions have also been studied in yaz3 .
Attempts to find solutions in general relativity coupled to other fields, such
as dilaton, rank three antisymmetric tensor, and Born-Infeld and others, have
been performed to construct exact solutions YI ; Tam ; yaz ; she . In these
theories the dilaton field can be thought of as coming from a scalar field of
low energy limit of string theory. The appearance of the dilaton field changes
the asymptotic behavior of the solutions to be neither asymptotically flat nor
de Sitter or Anti-de Sitter.
Introducing rotation makes the search for solutions much more difficult.
Following the example of the extension of the Schwarzschild and Reissner-
Nordström solutions to higher dimensions in different theories, it was natural
to do the same for the Kerr and the Kerr-Newman solutions. The generalization
of the Kerr solution to higher dimensional Einstein gravity was performed by
Myers and Perry Myer . These Myers-Perry solutions include the non-trivial
cases of several modes of rotation due to the existence of other rotation
planes in higher dimensions. The inclusion of a cosmological constant on these
higher dimensional solutions was performed in many . Rotating black hole
solutions in other gravity theories like Gauss-Bonnet or Lovelock are not
known.
Finally, the generalization of the Kerr-Newman black holes to higher
dimensional Einstein-Maxwell theory has not been found. Likewise, the
generalization of Kerr with some other gauge field, different from Maxwell, to
higher dimensions has not had success. In order to study this problem one has
to resort to approximations, as was done in Aliev2 , when studying charged
rotating black hole solutions in higher dimensions with a single rotation
parameter in the limit of slow rotation (see also Aliev3 ; Aliev4 ; kunz1 ).
Further study, relying on perturbative or numerical methods, to construct
those solutions in asymptotically flat backgrounds has been performed in Kunz2
and in asymptotically anti-de Sitter spacetimes in Kunz3 . Employing higher
perturbation theory with the electric charge as the perturbation parameter,
solutions and properties of charged rotating Einstein-Maxwell asymptotically
flat black holes have been constructed in five dimensions Navarro . Focusing
on extremal black holes, this perturbative method was also applied to obtain
Einstein-Maxwell black holes with equal magnitude angular momenta in odd
dimensions Allahverdi1 . A generalization to include a scalar field in an
Einstein-Maxwell-dilaton theory was performed in Allahverdi2 where black
holes with equal magnitude angular momenta in general odd dimensions were
obtained.
In this paper, we use this perturbative approach to find extremal rotating
Einstein-Born-Infeld black holes in a five dimensional spacetime. Starting
from the Myers-Perry black holes Myer , we evaluate the perturbative series up
to 4th order in the Born-Infeld electric charge parameter $q$. We determine
the physical properties of these black holes for general Born-Infeld coupling
constant $\beta$. In fact, we study the effects of the perturbative parameter
$q$ and the Born-Infeld parameter $\beta$ on the mass, angular momentum,
magnetic moment, gyromagnetic ratio, and horizon radius of these rotating
black holes.
The outline of this paper is as follows: In section II, we present the basic
field equations of nonlinear Born-Infeld theory in Einstein gravity and obtain
a new class of perturbative charged rotating solutions in five dimensions. In
section III, we calculate the physical quantities of the solutions and discuss
their properties. In section IV we study the mass formula for these black
holes. Section V is devoted to conclusions.
## II Metric and Gauge Potential
We start with the Einstein-Hilbert action coupled to the Born-Infeld nonlinear
gauge field in five dimensions
$\displaystyle S$ $\displaystyle=$ $\displaystyle\int
d^{5}x\sqrt{-g}\left(\frac{R}{16\pi G_{5}}\text{ }+L(F)\right),$ (1)
where $G_{5}$ is the five-dimensional Newton constant, ${R}$ is the curvature
scalar and $L(F)$ is the Lagrangian of the nonlinear Born-Infeld gauge field
given by
$\displaystyle L(F)$ $\displaystyle=$ $\displaystyle
4\beta^{2}\left(1-\sqrt{1+\frac{F^{\mu\nu}F_{\mu\nu}}{2\beta^{2}}}\right),$
(2)
where, in turn, $F_{\mu\nu}=\partial_{\mu}A_{\nu}-\partial_{\nu}A_{\mu}$ is
the electromagnetic field tensor, $A_{\mu}$ is the electromagnetic vector
potential, and $\beta$ is the Born-Infeld parameter with unit of mass. In the
limit $\beta\rightarrow\infty$, $L(F)$ reduces to the Lagrangian of the
standard Maxwell field, $L(F)=-{F^{\mu\nu}F_{\mu\nu}}$. The equations of motion
can be obtained by varying the action with respect to the gravitational field
$g_{\mu\nu}$ and the gauge field $A_{\mu}$. This procedure yields the
gravitational field equations
$G_{\mu\nu}=R_{\mu\nu}-\frac{1}{2}g_{\mu\nu}R=\frac{1}{2}g_{\mu\nu}L(F)+\frac{2F_{\mu\eta}F_{\nu}^{\text{
}\eta}}{\sqrt{1+\frac{F^{\mu\nu}F_{\mu\nu}}{2\beta^{2}}}},$ (3)
and the electromagnetic equation
$\partial_{\mu}{\left(\frac{\sqrt{-g}F^{\mu\nu}}{\sqrt{1+\frac{F^{\mu\nu}F_{\mu\nu}}{2\beta^{2}}}}\right)}=0\,.$
(4)
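Before looking for solutions, a quick symbolic check of the Maxwell limit quoted above (a SymPy sketch, with $F2$ standing for the invariant $F^{\mu\nu}F_{\mu\nu}$):
```python
# Sketch: expand the Born-Infeld Lagrangian for large beta and recover the
# Maxwell Lagrangian -F^{mu nu}F_{mu nu} as the leading term.
import sympy as sp

F2, beta = sp.symbols('F2 beta', positive=True)
L = 4 * beta**2 * (1 - sp.sqrt(1 + F2 / (2 * beta**2)))

# leading behaviour as beta -> oo: L = -F2 + F2**2/(8*beta**2) + ...
print(sp.series(L, beta, sp.oo, 3))
```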
Our aim here is to find perturbative extremal charged rotating black hole
solutions of the above field equations in five dimensions. Using coordinates
$(t,r,\theta,\varphi_{1},\varphi_{2})$, the $5$-dimensional Myers-Perry
solution Myer restricted to the case where the two possible angular momenta
have equal magnitude, and following Navarro for the parametrization of the
metric, one has
$\displaystyle ds^{2}$ $\displaystyle=$ $\displaystyle
g_{tt}dt^{2}+\frac{dr^{2}}{W}+r^{2}\left(d\theta^{2}+\sin^{2}\theta
d\varphi^{2}_{1}+\cos^{2}\theta
d\varphi^{2}_{2}\right)+N\left(\varepsilon_{1}\sin^{2}\theta
d\varphi_{1}+\varepsilon_{2}\cos^{2}\theta d\varphi_{2}\right)^{2}$ (5)
$\displaystyle-2B\left(\varepsilon_{1}\sin^{2}\theta
d\varphi_{1}+\varepsilon_{2}\cos^{2}\theta d\varphi_{2}\right)dt,$
where $\varepsilon_{k}$ denotes the sense of rotation in the $k$-th orthogonal
plane of rotation, such that $\varepsilon_{k}=\pm 1$, $k=1,2$. For the
mentioned Myers-Perry solution Myer one has
$g_{tt}=-1+\frac{2\hat{M}}{r^{2}}$,
$W=1-\frac{2\hat{M}}{r^{2}}+\frac{2\hat{J}^{2}}{\hat{M}r^{4}}$, and
$N=\frac{2\hat{J}^{2}}{\hat{M}r^{2}}$, where $\hat{M}$ and $\hat{J}$ are two
constants, namely, the mass and angular momentum parameters, respectively,
related to the mass $M$ and angular momenta $J$ of the Myers-Perry solution,
through the relations $\hat{M}=\frac{16\pi G_{5}}{3A}M$ and
$\hat{J}=\frac{4\pi G_{5}}{A}J$, where $A$ is the area of the unit $3$-sphere.
An adequate parametrization for the gauge potential is given by
$\displaystyle A_{\mu}dx^{\mu}$ $\displaystyle=$ $\displaystyle
a_{0}\,dt+a_{\varphi}\left(\varepsilon_{1}\sin^{2}\theta
d\varphi_{1}+\varepsilon_{2}\cos^{2}\theta d\varphi_{2}\right).$ (6)
We further assume the metric functions $g_{tt}$, $W$, $N$, $B$ and also the
two functions $a_{0}$, $a_{\varphi}$ for the gauge field, depend only on the
radial coordinate $r$.
We consider perturbations around the Myers-Perry solution, with a Born-Infeld
electric charge $q$ as the perturbative parameter, so that $q$ is much less
than $\hat{M}$ or $\hat{J}^{\,2/3}$. Taking into account the symmetry with
respect to charge reversal and the seed solution, the metric and gauge
potentials take the form
$\displaystyle
g_{tt}=-1+\frac{2\hat{M}}{r^{2}}+q^{2}g^{(2)}_{tt}+q^{4}g^{(4)}_{tt}+O(q^{6}),$
(7) $\displaystyle
W=1-\frac{2\hat{M}}{r^{2}}+\frac{2\hat{J}^{2}}{\hat{M}r^{4}}+q^{2}W^{(2)}+q^{4}W^{(4)}+O(q^{6}),$
(8) $\displaystyle
N=\frac{2\hat{J}^{2}}{\hat{M}r^{2}}+q^{2}N^{(2)}+q^{4}N^{(4)}+O(q^{6}),$ (9)
$\displaystyle B=\frac{2\hat{J}}{r^{2}}+q^{2}B^{(2)}+q^{4}B^{(4)}+O(q^{6}),$
(10) $\displaystyle a_{0}=qa^{(1)}_{0}+q^{3}a^{(3)}_{0}+O(q^{5}),$ (11)
$\displaystyle
a_{\varphi}=qa^{(1)}_{\varphi}+q^{3}a^{(3)}_{\varphi}+O(q^{5})\,.$ (12)
Here $g^{(2)}_{tt}$ and $g^{(4)}_{tt}$ are the second and fourth order
perturbative terms, respectively. The other perturbative terms are defined
similarly.
We now fix the angular momenta at any perturbative order, and impose the
extremal condition in all orders. We also assume that the horizon is regular.
With these assumptions we are able to fix all constants of integration. To
simplify the notation we introduce a parameter $\nu$ through the equations
$\hat{M}=2\nu^{2}\quad,\quad\hat{J}=2\nu^{3}\,,$ (13)
which is precisely the extremality condition for the five-dimensional Myers-Perry seed solution. Then,
using the field equations (3)-(4), the perturbative solutions up to 4th order
can be written as,
$\displaystyle g_{tt}$ $\displaystyle=$
$\displaystyle-1+\frac{4\nu^{2}}{r^{2}}+\frac{(r^{2}-4\nu^{2})q^{2}}{3\nu^{2}r^{4}}+\Bigg{\\{}{\frac{11}{135}}\,{\frac{1}{{\nu}^{2}{\beta}^{2}{r}^{8}}}-{\frac{16}{45}}\,{\frac{{\nu}^{2}}{{\beta}^{2}{r}^{12}}}+{\frac{58}{135}}\,{\frac{1}{{\beta}^{2}{r}^{10}}}$
(14)
$\displaystyle-\,{\frac{8\,{\nu}^{2}{\beta}^{2}+3}{9{\beta}^{2}{\nu}^{6}{r}^{4}}}+{\frac{1}{540}}\,{\frac{79+240\,{\nu}^{2}{\beta}^{2}}{{\beta}^{2}{\nu}^{4}{r}^{6}}}+{\frac{1}{720}}\,{\frac{79+220\,{\nu}^{2}{\beta}^{2}}{{\beta}^{2}{\nu}^{8}{r}^{2}}}$
$\displaystyle{\frac{4\left({r}^{2}-2\,{\nu}^{2}\right)^{2}}{27{\beta}^{2}{\nu}^{10}{r}^{4}}}\,\left({\nu}^{2}{\beta}^{2}+\frac{3}{8}\right)\ln\left(1-2\,{\frac{{\nu}^{2}}{{r}^{2}}}\right)\Bigg{\\}}q^{4}+O(q^{6}),$
$\displaystyle W$ $\displaystyle=$ $\displaystyle
1-\frac{4\nu^{2}}{r^{2}}+\frac{4\nu^{4}}{r^{4}}-\frac{q^{2}(r^{2}-2\nu^{2})}{3\nu^{2}r^{4}}$
(15)
$\displaystyle+\Bigg{\\{}{\frac{1}{2160}}\,{\frac{522+480\,{\nu}^{2}{\beta}^{2}}{{\beta}^{2}{\nu}^{10}}}+{\frac{1}{2160}}\,{\frac{-2420\,{\nu}^{4}{\beta}^{2}-2607\,{\nu}^{2}}{{\beta}^{2}{\nu}^{10}{r}^{2}}}+{\frac{1}{2160}}\,{\frac{3840\,{\nu}^{4}+3620\,{\beta}^{2}{\nu}^{6}}{{\beta}^{2}{\nu}^{10}{r}^{4}}}$
$\displaystyle+{\frac{1}{2160}}\,{\frac{-1032\,{\nu}^{6}-1280\,{\nu}^{8}{\beta}^{2}}{{\beta}^{2}{\nu}^{10}{r}^{6}}}-\frac{1}{5}\,{\frac{1}{{r}^{8}{\nu}^{2}{\beta}^{2}}}-{\frac{23}{45}}\,{\frac{1}{{\beta}^{2}{r}^{10}}}+{\frac{44}{45}}\,{\frac{{\nu}^{2}}{{\beta}^{2}{r}^{12}}}-{\frac{56}{45}}\,{\frac{{\nu}^{4}}{{\beta}^{2}{r}^{14}}}$
$\displaystyle+\frac{\left({r}^{2}-2\,{\nu}^{2}\right)^{3}}{9{\beta}^{2}{\nu}^{12}{r}^{4}}\,\left({\frac{87}{80}}+{\nu}^{2}{\beta}^{2}\right)\ln\left(1-2\,{\frac{{\nu}^{2}}{{r}^{2}}}\right)\Bigg{\\}}q^{4}+O(q^{6}),$
$\displaystyle N$ $\displaystyle=$
$\displaystyle\frac{4\nu^{4}}{r^{2}}-\frac{2q^{2}(r^{2}+2\nu^{2})}{3r^{4}}+\Bigg{\\{}-{\frac{1}{360}}\,{\frac{\left(87+80\,{\nu}^{2}{\beta}^{2}\right){r}^{2}}{{\beta}^{2}{\nu}^{10}}}+{\frac{52}{135}}\,{\frac{{\nu}^{2}}{{\beta}^{2}{r}^{10}}}+{\frac{4}{135}}\,{\frac{1}{{\beta}^{2}{r}^{8}}}-{\frac{16}{45}}\,{\frac{{\nu}^{4}}{{\beta}^{2}{r}^{12}}}$
(16)
$\displaystyle+\frac{1}{45}\,{\frac{1+20\,{\nu}^{2}{\beta}^{2}}{{r}^{6}{\nu}^{2}{\beta}^{2}}}-{\frac{1}{90}}\,{\frac{61+40\,{\nu}^{2}{\beta}^{2}}{{\beta}^{2}{\nu}^{4}{r}^{4}}}+{\frac{1}{360}}\,{\frac{87+80\,{\nu}^{2}{\beta}^{2}}{{\nu}^{8}{\beta}^{2}}}+{\frac{1}{180}}\,{\frac{144+35\,{\nu}^{2}{\beta}^{2}}{{\beta}^{2}{\nu}^{6}{r}^{2}}}$
$\displaystyle-\,\ln\left(1-2\,{\frac{{\nu}^{2}}{{r}^{2}}}\right)\left({\frac{87}{80}}\,{r}^{6}+{r}^{6}{\nu}^{2}{\beta}^{2}-{\frac{57}{20}}\,{\nu}^{4}{r}^{2}+{\nu}^{6}+\frac{8}{3}\,{\nu}^{8}{\beta}^{2}\right)\frac{\left({r}^{2}-2\,{\nu}^{2}\right)}{9{\beta}^{2}{\nu}^{12}{r}^{4}}\Bigg{\\}}q^{4}+O(q^{6}),$
$\displaystyle B$ $\displaystyle=$
$\displaystyle\frac{4\nu^{3}}{r^{2}}-\frac{4\nu}{3r^{4}}q^{2}+\Bigg{\\{}{\frac{1}{1440}}\,{\frac{480\,{\nu}^{2}{\beta}^{2}+864}{{\beta}^{2}{\nu}^{7}{r}^{2}}}+{\frac{2}{27}}\,{\frac{1}{{\beta}^{2}\nu\,{r}^{8}}}+{\frac{52}{135}}\,{\frac{\nu}{{\beta}^{2}{r}^{10}}}-{\frac{1}{60}}\,{\frac{29+40\,{\nu}^{2}{\beta}^{2}}{{\beta}^{2}{\nu}^{5}{r}^{4}}}$
(17)
$\displaystyle+{\frac{1}{90}}\,{\frac{9+40\,{\nu}^{2}{\beta}^{2}}{{\beta}^{2}{\nu}^{3}{r}^{6}}}-{\frac{16}{45}}\,{\frac{{\nu}^{3}}{{\beta}^{2}{r}^{12}}}-\frac{1}{45}\,{\frac{9+5\,{\nu}^{2}{\beta}^{2}}{{\beta}^{2}{\nu}^{9}}}+(-{\frac{1}{270}}\,{\frac{27+15\,{\nu}^{2}{\beta}^{2}}{{\beta}^{2}{\nu}^{11}}}$
$\displaystyle+{\frac{1}{270}}\,{\frac{54\,{\nu}^{2}+30\,{\nu}^{4}{\beta}^{2}}{{\beta}^{2}{\nu}^{11}{r}^{2}}}-{\frac{1}{270}}\,{\frac{80\,{\beta}^{2}{\nu}^{6}+30\,{\nu}^{4}}{{\beta}^{2}{\nu}^{11}{r}^{4}}})\left({r}^{2}-2\,{\nu}^{2}\right)\ln\left(1-2\,{\frac{{\nu}^{2}}{{r}^{2}}}\right)\Bigg{\\}}q^{4}+O(q^{6}),$
$\displaystyle a_{0}$ $\displaystyle=$
$\displaystyle\frac{q}{r^{2}}+\Bigg{\\{}{\frac{1}{180}}\,{\frac{40\,{\nu}^{2}{\beta}^{2}+15}{{\beta}^{2}{\nu}^{6}{r}^{2}}}-{\frac{1}{180}}\,{\frac{15\,{\nu}^{2}+40\,{\nu}^{4}{\beta}^{2}}{{\beta}^{2}{\nu}^{6}{r}^{4}}}-{\frac{11}{180}}\,{\frac{1}{{r}^{6}{\nu}^{2}{\beta}^{2}}}-{\frac{26}{45}}\,{\frac{1}{{\beta}^{2}{r}^{8}}}+\frac{2}{3}\,{\frac{{\nu}^{2}}{{\beta}^{2}{r}^{10}}}$
(18)
$\displaystyle+\frac{1}{9}\,\left({\nu}^{2}{\beta}^{2}+\frac{3}{8}\right)\left({r}^{2}-2\,{\nu}^{2}\right)\ln\left(1-2\,{\frac{{\nu}^{2}}{{r}^{2}}}\right){\beta}^{-2}{\nu}^{-8}{r}^{-2}\Bigg{\\}}q^{3}+O(q^{5}),$
$\displaystyle a_{\varphi}$ $\displaystyle=$ $\displaystyle-\frac{\nu
q}{r^{2}}+\Bigg{\\{}-{\frac{1}{720}}\,{\frac{80\,{\nu}^{2}{\beta}^{2}+30}{{\nu}^{7}{\beta}^{2}}}-{\frac{1}{720}}\,{\frac{40\,{\nu}^{4}{\beta}^{2}+27\,{\nu}^{2}}{{\nu}^{7}{\beta}^{2}{r}^{2}}}-{\frac{1}{720}}\,{\frac{-160\,{\beta}^{2}{\nu}^{6}-88\,{\nu}^{4}}{{\nu}^{7}{\beta}^{2}{r}^{4}}}+{\frac{7}{60}}\,{\frac{1}{\nu\,{\beta}^{2}{r}^{6}}}$
(19)
$\displaystyle+{\frac{26}{45}}\,{\frac{\nu}{{\beta}^{2}{r}^{8}}}-\frac{2}{3}\,{\frac{{\nu}^{3}}{{\beta}^{2}{r}^{10}}}-{\frac{1}{144}}\,\left(3+8\,{\nu}^{2}{\beta}^{2}\right)\left(\frac{r^{2}}{\beta^{2}\nu^{9}}-\frac{4}{r^{2}\beta^{2}\nu^{5}}\right)\ln\left(1-2\,{\frac{{\nu}^{2}}{{r}^{2}}}\right)\Bigg{\\}}q^{3}+O(q^{5}).$
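As a small consistency check on the seed (a SymPy sketch, not from the original), the $q=0$ part of $W$ above indeed describes an extremal, degenerate horizon:
```python
# Sketch: at zeroth order in q the metric function W is the extremal
# Myers-Perry one; it factors as a perfect square, giving a degenerate horizon
# at r_H = sqrt(2)*nu, consistent with Eq. (27) below.
import sympy as sp

r, nu = sp.symbols('r nu', positive=True)
W0 = 1 - 4*nu**2/r**2 + 4*nu**4/r**4

print(sp.factor(W0))                      # (r**2 - 2*nu**2)**2/r**4
print(sp.solve(sp.Eq(W0, 0), r))          # [sqrt(2)*nu], a double root
```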
It is seen that there are the usual $1/r^{n}$ polynomial expressions as well
as terms involving logarithmic functions. It is also worth mentioning that the
Born-Infeld parameter $\beta$ appears in terms which are of 3rd and 4th order
in the electric charge parameter $q$. One may note that in Maxwell’s limit,
$\beta\longrightarrow\infty$, these perturbative solutions reduce to the five
dimensional perturbative charged rotating black holes in Einstein-Maxwell
theory presented in Navarro . A consistent check of these solutions can be
provided by Smarr’s formula.
## III Physical Quantities
The mass $M$, the angular momenta $J$, the electric charge $Q$, and the
magnetic moment $\mu_{\rm mag}$ can be read off the asymptotic behavior of the
metric and the gauge potential Kunz2 . The asymptotic forms are,
$\displaystyle g_{tt}=-1+\frac{\tilde{M}}{r^{2}}+...,\quad
B=\frac{2\tilde{J}}{r^{2}}+...,\quad a_{0}=\frac{\tilde{Q}}{r^{2}}+...,\quad
a_{\varphi}=\frac{\tilde{\mu}_{\rm mag}}{r^{2}}+...,$ (20)
where $\tilde{M}$, $\tilde{J}$, $\tilde{Q}$, and $\tilde{\mu}_{\rm mag}$ are
the mass, angular momentum, electric charge, and magnetic moment parameters,
respectively, and we have defined $\tilde{Q}\equiv q$ for notational
consistency. These parameters are related to the real mass $M$, angular
momentum $J$, electric charge $Q$, and magnetic moment ${\mu}_{\rm mag}$,
through the relations,
$\displaystyle\tilde{M}$ $\displaystyle=$ $\displaystyle\frac{16\pi
G_{5}}{3A}M,\ \ \quad\tilde{J}=\frac{4\pi G_{5}}{A}J,$
$\displaystyle\quad\tilde{Q}$ $\displaystyle=$ $\displaystyle\frac{4\pi
G_{5}}{2A}Q,\quad\tilde{\mu}_{\rm mag}=\frac{4\pi G_{5}}{2A}\mu_{\rm mag},$
(21)
Note that when the perturbative parameter $q$ is equal to zero, the tilde
quantities of Eq. (20) reduce to the hat quantities of Eq. (13), i.e.,
$\tilde{M}=\hat{M}$, $\tilde{J}=\hat{J}$, $\tilde{Q}=\hat{Q}=0$, and
$\tilde{\mu}_{\rm mag}=\hat{\mu}_{\rm mag}=0$.
Now, comparing the expansions in Eqs. (20) and (21) with the asymptotic
behavior of the solutions given in Eqs. (14)-(19), we obtain
$\displaystyle M=\frac{3\pi\nu^{2}}{2}+\frac{\pi q^{2}}{8\nu^{2}}+\frac{\pi
q^{4}\left(20\nu^{2}\beta^{2}-3\right)}{5760\beta^{2}\nu^{8}}+O(q^{6}),$ (22)
$\displaystyle J=\pi\nu^{3},$ (23) $\displaystyle Q=\pi q,$ (24)
$\displaystyle\mu_{\rm mag}=\pi\nu q-\frac{\pi
q^{3}\left(40\nu^{2}\beta^{2}+3\right)}{720\nu^{5}\beta^{2}}+O(q^{5}),$ (25)
The gyromagnetic ratio $g$ is then given by
$\displaystyle g=2\,\frac{\mu_{\rm mag}/Q}{J/M}=\frac{2M\mu_{\rm
mag}}{QJ}=3+\frac{q^{2}\left(20\nu^{2}\beta^{2}-3\right)}{240\nu^{6}\beta^{2}}-\frac{q^{4}\left(10\nu^{2}\beta^{2}+3\right)}{1440\nu^{10}\beta^{2}}+O(q^{6})\,.$
(26)
The horizon radius $r_{H}$ is given by
$\displaystyle
r_{H}=\sqrt{2}\nu+{\frac{{q}^{2}\sqrt{2}}{24{\nu}^{3}}}+{\frac{11}{1152}}\,{\frac{{q}^{4}\sqrt{2}}{{\nu}^{7}}}+O(q^{6})\,.$
(27)
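For orientation, here is a small numerical sketch (plain Python, not from the original) evaluating Eqs. (22)-(27) at the parameter values $\nu=1.16$ and $q=0.09$ used in the figures below:
```python
# Sketch: evaluate M, mu_mag, g and r_H from Eqs. (22)-(27) for nu = 1.16,
# q = 0.09, in the near-Maxwell regime (large beta) and at a moderate beta.
import math

nu, q = 1.16, 0.09

def quantities(beta):
    M = (3 * math.pi * nu**2 / 2 + math.pi * q**2 / (8 * nu**2)
         + math.pi * q**4 * (20 * nu**2 * beta**2 - 3) / (5760 * beta**2 * nu**8))
    J = math.pi * nu**3
    Q = math.pi * q
    mu_mag = (math.pi * nu * q
              - math.pi * q**3 * (40 * nu**2 * beta**2 + 3) / (720 * nu**5 * beta**2))
    g = 2 * M * mu_mag / (Q * J)          # gyromagnetic ratio from its definition
    rH = (math.sqrt(2) * nu + q**2 * math.sqrt(2) / (24 * nu**3)
          + 11 * q**4 * math.sqrt(2) / (1152 * nu**7))
    return M, mu_mag, g, rH

for beta in (1.0e3, 1.0):
    M, mu_mag, g, rH = quantities(beta)
    print(f"beta = {beta:g}: M = {M:.4f}, mu_mag = {mu_mag:.5f}, "
          f"g = {g:.5f}, r_H = {rH:.4f}")
```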
All these quantities are worth commenting on.
The mass $M$ of the black holes as a function of the Born-Infeld parameter
$\beta$ has an interesting behavior, as shown in Fig. 1, see also Eq. (22).
The mass $M$ increases with increasing $\beta$ and as
$\beta\rightarrow\infty$, i.e., in the Maxwell limit, the mass takes the value
$M=\frac{3\pi\nu^{2}}{2}+\frac{\pi q^{2}}{8\nu^{2}}+\frac{\pi
q^{4}}{288\nu^{6}}+O(q^{6})$, exactly the result obtained for the five-
dimensional perturbative Einstein-Maxwell black hole Navarro . For very small
$\beta$ the mass turns negative. We do not attach any significance to this
result since the value of $\beta$ for which the mass is zero uses a
perturbative $q^{2}$ term which is much larger than than the zeroth order term
in the expressions for the magnetic moment and gyromagnetic ratio. In fact,
the values of $\beta$ for which the results make thorough sense are values of
$\beta$ larger than the ones which yield the zeros of the magnetic moment and
gyromagnetic ratio.
The angular momenta $J$ in Eq. (23) and the charge $q$ in Eq. (24) are fixed
and do not depend on $\beta$, following our approach. The magnetic dipole
moment $\mu_{\rm mag}$ given in Eq. (25) appears due to the rotation of the
electrically charged black hole. The first order term $\pi\,q\,\nu\ =Q\,\nu$
is equivalent to the magnetic moment of a point particle rotating around an
axis, with charge $q_{\rm particle}=3Q$, and with the same angular
momentum $j_{\rm particle}$ and mass $m_{\rm particle}$ of the black hole,
i.e., $j_{\rm particle}=J$ and $m_{\rm particle}=M$, since for such a system
$\mu_{\rm mag}=\frac{1}{2}\,q_{\rm particle}\frac{j_{\rm particle}}{m_{\rm
particle}}=Q\,\nu$. The higher order terms presumably come from the spacetime
curvature. From Fig. 2 we find out that the magnetic dipole moment $\mu_{\rm
mag}$ increases with increasing $\beta$. In the limit
$\beta\rightarrow\infty$, the magnetic moment reduces to $\mu_{\rm mag}=\pi\nu
q-\frac{\pi q^{3}}{18\nu^{3}}+O(q^{5})$, which is exactly the result obtained
for the five-dimensional perturbative Einstein-Maxwell black hole Navarro .
For small $\beta$ the magnetic moment is negative. We analyze this result
below, when we comment on the gyromagnetic ratio.
Figure 1: The black hole mass $M$ versus the Born-Infeld parameter $\beta$ for
$\nu=1.16$ and $q=0.09$.
Figure 2: The black hole magnetic moment $\mu_{\rm mag}$ versus the Born-
Infeld parameter $\beta$ for $\nu=1.16$ and $q=0.09$.
Figure 3: The black hole gyromagnetic ratio $g$ versus the Born-Infeld
parameter $\beta$ for $\nu=1.16$ and $q=0.09$.
The dimensional gyromagnetic ratio for a given system defined in Eq. (26) is
twice the ratio of the magnetic moment divided by the charge to the angular
momentum divided by the mass. It has the value $1$ for a classical body with
uniform mass and uniform charge distribution rotating about an axis of
symmetry. For an electron it has the value of $2.00$ plus small quantum
corrections, and for the proton $5.59$ and neutron has the value $5.59$ and
$-3.826$, respectively. From Eq. (26) we see that the perturbative parameter
$q$ and the parameters $\beta$ and $\nu$ modify the gyromagnetic ratio of
asymptotically flat five-dimensional charged rotating black holes as compared
to the uncharged extremal Myers-Perry black holes. We want to study in more
detail this modification of the gyromagnetic ratio when one varies the Born-
Infeld parameter $\beta$, see Fig. 3. From the figure we find that the
gyromagnetic ratio $g$ increases with increasing $\beta$, and in the limit
$\beta\rightarrow\infty$, the gyromagnetic ratio reduces to
$g=3+\frac{q^{2}}{12\nu^{4}}-\frac{q^{4}}{144\nu^{8}}+O(q^{6})$ which is
exactly the result obtained for the five-dimensional perturbative Einstein-
Maxwell black hole Navarro . Now, from Fig. 3 one finds that for some low
value of $\beta$ the gyromagnetic ratio is zero, and then turns negative. This
change of sign comes from a perturbative $q^{2}$ term, and thus the result
might not hold for the full exact solution. On the other hand the result is
sufficiently intriguing that deserves some attention. One can speculate that
it is at least qualitatively correct, and perhaps expected. Let us see why. It
is known that the Born-Infeld theory is different from Maxwell’s theory when
the electromagnetic fields are very strong. The Born-Infeld theory gives a
finite total energy $E$ for the field around a point particle with charge
$q_{\rm particle}$, indeed $E\simeq\sqrt{q_{\rm particle}^{3}\,\beta}$. It
also gives an effective radius $r_{0}$ for the charge distribution,
$r_{0}=\sqrt{\frac{q_{\rm particle}}{\beta}}$. Curiously, the reversal of the
gyromagnetic ratio $g$ in Fig. 3 (concomitant to the reversal of the magnetic
dipole moment $\mu_{\rm mag}$ in Fig. 2) happens when
$r_{0}=\sqrt{\frac{Q}{\beta}}$ is of the order or larger than the horizon
radius $r_{H}$. Indeed, for the values of $\nu$ and $q$ used in the figures,
one finds $r_{0}=8.79$ and $r_{H}=1.65$. This reversal could then be
interpreted as follows. For large $\beta$ much of the electrical charge is
distributed in a point-like manner, as in Maxwell’s theory. For small $\beta$
the charge distribution is extended, and for sufficiently small $\beta$ it
even extends outside the horizon. It is known that an object with magnetic
moment ${\vec{\mu}}_{\rm mag}$ placed on a magnetic field $\vec{B}$ suffers a
torque given by ${\vec{\mu}}_{\rm mag}\times\vec{B}$. A black hole with
magnetic moment is also subjected to this kind of torque. So, for large
$\beta$, i.e., the point like case, when a magnetic field is applied to the
black hole spacetime the resultant torque on the black hole tends to rotate it
in the expected sense, and thus the magnetic dipole moment $\mu_{\rm mag}$ and
$g$ are positive. On the other hand, when $\beta$ is small, i.e., the charge
distribution outside the black hole case, it is the region external to the
black hole horizon that is effectively charged and it is this very region that
upon application of a magnetic field tends to rotate in the expected sense. So
here, the black hole rotates in the opposite sense, giving a negative magnetic
dipole moment $\mu_{\rm mag}$ and thus a negative gyromagnetic ratio $g$.
The horizon radius $r_{H}$ in Eq. (27) has the unexpected feature that it does
not depend on $\beta$ at least up to 4th order in the charge. Up to this order
$r_{H}$ is equal to the Einstein-Maxwell case Navarro .
## IV The mass formula
Define $\xi$ as the timelike Killing vector and $\eta_{k}$, $k=1,2$, as the
two azimuthal Killing vectors. The two equal horizon constant angular
velocities $\Omega$ can then be defined by imposing that the Killing vector
field
$\displaystyle\chi=\xi+\Omega\sum^{2}_{k=1}\epsilon_{k}\eta_{k},$ (28)
is null on the horizon and orthogonal to it as well. This yields,
$\displaystyle\Omega=\frac{1}{2\nu}-\frac{q^{2}}{24\nu^{5}}-\frac{q^{4}\left(5\nu^{2}\beta^{2}-1\right)}{1440\beta^{2}\nu^{11}}+O(q^{6})\,.$
(29)
The 3-area of the horizon $A_{H}$ and the electrostatic potential at the
horizon $\Phi_{H}$ are given by
$\displaystyle A_{H}=8\pi^{2}\nu^{3}+O(q^{6})\,.$ (30)
$\displaystyle\Phi_{H}=\frac{q}{4\nu^{2}}+\frac{q^{3}\left(20\beta^{2}\nu^{2}-3\right)}{1440\beta^{2}\nu^{8}}+O(q^{5})\,.$
(31)
The surface gravity $\kappa$ is defined by
$\kappa^{2}=-\frac{1}{2}(\nabla_{\mu}\chi_{\nu})(\nabla^{\mu}\chi^{\nu})$.
Taking into account the conserved quantities obtained in the last section, one
can check that these quantities satisfy the Smarr mass formula up to 4th order
Gunasekaran . Indeed, in general the formula is
$\displaystyle M=\frac{3\kappa_{sg}A_{H}}{16\pi G_{D}}+3\Omega
J+\Phi_{H}Q-\frac{\beta}{2}\frac{\partial M}{\partial\beta}.$ (32)
For an extremal solution with $\kappa=0$, the Smarr mass formula reduces to
$\displaystyle M=3\Omega J+\Phi_{H}Q-\frac{\beta}{2}\frac{\partial
M}{\partial\beta}\,.$ (33)
Taking into account that the mass $M$ of the black holes is given by Eq. (22),
one can determine the last term in (33), and find
$\displaystyle M=3\Omega J+\Phi_{H}Q+\Phi_{\beta}Q^{4},$ (34)
where
$\displaystyle\Phi_{\beta}=-\frac{\beta}{2Q^{4}}\frac{\partial
M}{\partial\beta}=-{\frac{1}{1920}}\,{\frac{1}{{\pi}^{3}{\nu}^{8}{\beta}^{2}}}\,.$
(35)
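The consistency check mentioned above can be carried out symbolically; the following SymPy sketch verifies that the truncated expressions (22)-(24), (29), (31) and (35) satisfy the extremal Smarr relation (34) identically:
```python
# Sketch: check M = 3*Omega*J + Phi_H*Q + Phi_beta*Q^4 for the truncated
# perturbative expressions quoted above.
import sympy as sp

q, nu, beta = sp.symbols('q nu beta', positive=True)
pi = sp.pi

M = (3*pi*nu**2/2 + pi*q**2/(8*nu**2)
     + pi*q**4*(20*nu**2*beta**2 - 3)/(5760*beta**2*nu**8))             # Eq. (22)
J = pi*nu**3                                                            # Eq. (23)
Q = pi*q                                                                # Eq. (24)
Omega = (1/(2*nu) - q**2/(24*nu**5)
         - q**4*(5*nu**2*beta**2 - 1)/(1440*beta**2*nu**11))            # Eq. (29)
Phi_H = q/(4*nu**2) + q**3*(20*beta**2*nu**2 - 3)/(1440*beta**2*nu**8)  # Eq. (31)
Phi_beta = -1/(1920*pi**3*nu**8*beta**2)                                # Eq. (35)

residual = sp.simplify(M - (3*Omega*J + Phi_H*Q + Phi_beta*Q**4))
print(residual)   # 0: the Smarr relation (34) holds for these expressions
```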
## V Conclusions
In conclusion, we have presented a new class of perturbative charged rotating
black hole solutions in five dimensions in the presence of a nonlinear Born-
Infeld gauge field. This class of solutions is restricted to the extremal
black holes with equal angular momenta. At infinity, the metric is
asymptotically locally flat. Our strategy for obtaining these solutions was
through a perturbative method up to the $4$th order for the perturbative
parameter $q$. We have started from rotating Myers-Perry black hole solutions
Myer in five dimensions, and then studied the effects of adding a charge
parameter to the solutions. We have calculated the conserved quantities of the
solutions such as mass, angular momentum, electric charge, magnetic moment,
gyromagnetic ratio, and horizon radius. We found that the Born-Infeld
parameter $\beta$ modifies the values of all the physical quantities, except
the horizon radius, relative to the corresponding Einstein-Maxwell five
dimensional rotating solutions. For large $\beta$ the solutions reduce to the
perturbative rotating Einstein-Maxwell solutions Navarro , as we expected. We
also speculated on what might happen for these solutions in the strong
electromagnetic regime, i.e., when $\beta$ is small. The generalization of the
present work to all higher dimensions is quite an interesting subject which
will be addressed elsewhere.
###### Acknowledgements.
The support of the Fundação para a Ciência e a Tecnologia of Portugal Project
PTDC/FIS/098962/2008 and PEst-OE/FIS/UI0099/2011 is gratefully acknowledged.
M. Allahverdizadeh is supported by an FCT grant. The work of A. Sheykhi has been
supported financially by Research Institute for Astronomy and Astrophysics of
Maragha (RIAAM), Iran.
## References
* (1) F. Tangherlini, Nuovo Cimento 27, 636 (1963).
* (2) M. Bañados, C. Teitelboim and J. Zanelli, Phys. Rev. D 49, 975 (1994);
G. A. S. Dias, S. Gao and J. P. S. Lemos, Phys. Rev. D 75, 024030 (2007).
* (3) A. Garcia, H. Salazar and J. F. Plebanski, Nuovo. Cimento 84, 65 (1984);
N. Breton, Phys. Rev. D 67, 124004 (2003).
* (4) D. L. Wiltshire, Phys. Rev. D 38, 2445 (1988);
H. P. de Oliveira, Class. Quant. Grav. 11, 1469 (1994).
* (5) M. Aiello, R. Ferraro and G. Giribet, Phys. Rev. D 70, 104014 (2004).
* (6) T. Tamaki, Journ. Cosm. Astropart. Physics JCAP 0405, 004 (2004)
* (7) T. K. Dey, Phys. Lett. B 595, 484 (2004).
* (8) S. Gunasekaran, D. Kubiznak and R. B. Mann, Journal of High Energy Physics JHEP 11, 110 (2012).
* (9) M. H. Dehghani and S. H. Hendi, Int. J. Mod. Phys. D 16, 1829 (2007).
* (10) M. Aiello, R. Ferraro and G. Giribet, Phys. Rev. D 70, 104014 (2004);
M. H. Dehghani, N. Alinejadi and S. H. Hendi, Phys. Rev. D 77, 104025 (2008).
* (11) R. G. Cai, D. W. Pang and A. Wang, Phys. Rev. D 70, 124034 (2004).
* (12) I. Stefanov, S. S. Yazadjiev and M. D. Todorov, Phys. Rev. D 75, 084036 (2007);
I. Stefanov, S. S. Yazadjiev and M. D. Todorov, Mod. Phys. Lett. 22, 1217
(2007).
* (13) R. Yamazaki and D. Ida, Phys. Rev. D 64, 024009 (2001).
* (14) T. Tamaki and T. Torii, Phys. Rev. D 62, 061501 (2000);
T. Tamaki and T. Torii, Phys. Rev. D 64, 024027 (2001);
G. Clément and D. Gal’tsov, Phys. Rev. D 62, 124013 (2000).
* (15) S. S. Yazadjiev, Phys. Rev. D 72, 044006 (2005).
* (16) A. Sheykhi, N. Riazi and M. H. Mahzoon, Phys. Rev. D 74, 044025 (2006);
A. Sheykhi, N. Riazi, Phys. Rev. D 75, 024021 (2007);
M. H Dehghani, S. H. Hendi, A. Sheykhi and H. Rastegar Sedehi, Journ. Cosm.
Astropart. Physics JCAP 0702, 020 (2007);
A. Sheykhi, Int.J. Mod.Phys. D 18, 25 (2009);
A. Sheykhi, Phys. Lett. B 662, 7 (2008).
* (17) R. C. Myers and M. J. Perry, Ann. Phys. (N.Y.) 172, 304 (1986).
* (18) G. W. Gibbons, H. Lü, D. N. Page and C. N . Pope, J. Geom. Phys. 53, 49 (2005).
* (19) A. N. Aliev, Phys. Rev. D 74, 024011 (2006).
* (20) A. N. Aliev, Mod. Phys. Lett. A 21, 751 (2006);
A. N. Aliev, Class. Quant. Gravit. 24, 4669 (2007);
A. Sheykhi, Phys. Rev. D 77, 104022 (2008).
* (21) A.N. Aliev and D.K. Ciftci, Phys. Rev. D 79, 044004 (2009).
* (22) J. Kunz, F. Navarro-Lerida and A. K. Petersen, Phys. Lett. B 614, 104 (2005).
* (23) J. Kunz, F. Navarro-Lerida and J. Viebahn, Phys. Lett. B 639, 362 (2006).
* (24) J. Kunz, F. Navarro-Lerida and E. Radu, Phys. Lett. B 649, 463 (2007);
A. N. Aliev, Phys. Rev. D 75, 084041 (2007);
H. C. Kim and R. G. Cai, Phys. Rev. D 77, 024045 (2008);
Y. Brihaye and T. Delsate, Phys. Rev. D 79, 105013 (2009);
A. Sheykhi, M. Allahverdizadeh, Phys. Rev. D 78, 064073 (2008).
* (25) F. Navarro-Lerida, Gen. Relat. Gravit. 42, 2891 (2010).
* (26) M. Allahverdizadeh, J. Kunz and F. Navarro-Lerida, Phys. Rev. D 82, 024030 (2010).
* (27) M. Allahverdizadeh, J. Kunz and F. Navarro-Lerida, Phys. Rev D 82, 064034 (2010).
# An obstruction to subfactor principal graphs from the graph planar algebra
embedding theorem
Scott Morrison, Mathematical Sciences Institute, the Australian National
University. [email protected]. URL: http://tqft.net/
###### Abstract.
We find a new obstruction to the principal graphs of subfactors. It shows that
in a certain family of 3-supertransitive principal graphs, there must be a
cycle by depth 6, with one exception, the principal graph of the Haagerup
subfactor.
A $II_{1}$ subfactor is an inclusion $A\subset B$ of infinite-dimensional von Neumann
algebras with trivial centre and a compatible trace with
$\operatorname{tr}(1)=1$. In this setting, one can analyze the bimodules
$\otimes$-generated by ${}_{A}B_{B}$ and ${}_{B}B_{A}$. The principal graph of
a subfactor has as vertices the simple bimodules appearing, and an edge
between vertices $X$ and $Y$ for each copy of $Y$ appearing inside $X\otimes
B$. It turns out that this principal graph is a very useful invariant of the
subfactor (at least in the amenable case), and many useful combinatorial
constraints on graphs arising in this way have been discovered. As examples,
see [MS12, Jon03, Sny12, Pen13]. Moreover, with sufficiently powerful
combinatorial (and number theoretic, c.f. [Asa07, AY09, CMS11, Ost09])
constraints in hand, it has proved possible to enumerate all possible
principal graphs for subfactors with small index. This approach was pioneered
by Haagerup in [Haa94], and more recently continued, resulting in a
classification of subfactors up to index 5 [MS12, MPPS12, IJMS12, PT12,
JMS13].
In this note we demonstrate the following theorem, providing a combinatorial
constraint on the principal graph of a subfactor, of a rather different nature
than previous results.
###### Theorem.
If the principal graph of a 3-supertransitive $II_{1}$ subfactor begins as
$\Gamma=$ [principal graph diagram: a chain of vertices labelled $0$, $1$, $2$, $3$, branching at depth 4 into two vertices labelled $P$ and $Q$]
}\pgfsys@color@rgb@fill{0}{0}{0}\pgfsys@invoke{ }\hbox{{$P^{\prime}$}}
}}\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}} {{}}
{{{}{}{{}}{}}}{{{}}}{{{{}}{{}}}}{{}}{{{
}}}\hbox{\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{
}{{}{{{}}}{{}}{}{}{}{}{}{}{}{}{}{{}\pgfsys@moveto{145.0924pt}{-28.45276pt}\pgfsys@curveto{145.0924pt}{-26.89055pt}{143.826pt}{-25.62415pt}{142.2638pt}{-25.62415pt}\pgfsys@curveto{140.70158pt}{-25.62415pt}{139.43518pt}{-26.89055pt}{139.43518pt}{-28.45276pt}\pgfsys@curveto{139.43518pt}{-30.01497pt}{140.70158pt}{-31.28137pt}{142.2638pt}{-31.28137pt}\pgfsys@curveto{143.826pt}{-31.28137pt}{145.0924pt}{-30.01497pt}{145.0924pt}{-28.45276pt}\pgfsys@closepath\pgfsys@moveto{142.2638pt}{-28.45276pt}\pgfsys@stroke\pgfsys@invoke{
} }{{{{}}\pgfsys@beginscope\pgfsys@invoke{
}\pgfsys@transformcm{1.0}{0.0}{0.0}{1.0}{142.2638pt}{-28.45276pt}\pgfsys@invoke{
}\hbox{{\definecolor{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@rgb@stroke{0}{0}{0}\pgfsys@invoke{
}\pgfsys@color@rgb@fill{0}{0}{0}\pgfsys@invoke{ }\hbox{}
}}\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
\pgfsys@invoke{\lxSVG@closescope
}\pgfsys@endscope}}}\hbox{{\pgfsys@beginscope\pgfsys@invoke{
}{{}{{{}}}{{}}{}{}{}{}{}{}{}{}{}{ }{{{{}}\pgfsys@beginscope\pgfsys@invoke{
}\pgfsys@transformcm{1.0}{0.0}{0.0}{1.0}{137.54103pt}{-18.63557pt}\pgfsys@invoke{
}\hbox{{\definecolor{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@rgb@stroke{0}{0}{0}\pgfsys@invoke{
}\pgfsys@color@rgb@fill{0}{0}{0}\pgfsys@invoke{ }\hbox{{$Q^{\prime}$}}
}}\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}} {{}}{}{{}}
{{{{{}}{}{}{}{}{{}}}}}{}{{{{{}}{}{}{}{}{{}}}}}{{}}{}{}{}{{}}
{{{{{}}{}{}{}{}{{}}}}}{}{{{{{}}{}{}{}{}{{}}}}}{{}}{}{}{}{{}}
{{{{{}}{}{}{}{}{{}}}}}{}{{{{{}}{}{}{}{}{{}}}}}{{}}{}{}{}{{}}{}{{}}
{{{{{}}{}{}{}{}{{}}}}}{}{{{{{}}{}{}{}{}{{}}}}}{{}}{}{}{}{{}}
{{{{{}}{}{}{}{}{{}}}}}{}{{{{{}}{}{}{}{}{{}}}}}{{}}{}{}{}{}\pgfsys@moveto{3.02861pt}{0.0pt}\pgfsys@lineto{82.32967pt}{0.0pt}\pgfsys@moveto{87.49982pt}{2.14154pt}\pgfsys@lineto{111.6695pt}{26.31122pt}\pgfsys@moveto{116.83965pt}{28.45276pt}\pgfsys@lineto{139.23518pt}{28.45276pt}\pgfsys@moveto{87.49982pt}{-2.14154pt}\pgfsys@lineto{111.6695pt}{-26.31122pt}\pgfsys@moveto{116.83965pt}{-28.45276pt}\pgfsys@lineto{139.23518pt}{-28.45276pt}\pgfsys@stroke\pgfsys@invoke{
} \pgfsys@invoke{\lxSVG@closescope
}\pgfsys@endscope{}{}{}\hss}\pgfsys@discardpath\pgfsys@invoke{\lxSVG@closescope
}\pgfsys@endscope\hss}}\lxSVG@closescope\endpgfpicture}},$
and there is no vertex at depth $6$ which is connected to both $P^{\prime}$
and $Q^{\prime}$, then the subfactor must have the same standard invariant as
the Haagerup subfactor.
(A subfactor being 3-supertransitive merely means that the principal graph
begins with a chain of 3 edges. The hypothesis specifying the graph up to
depth 5 is equivalent to the subfactor being 3-supertransitive and having
‘annular multiplicities’ 10, as described in [Jon03].)
This result uses the graph planar algebra embedding theorem, proved in [JP11]
and alternatively in the forthcoming [MW]. (The first result only holds in
finite depth, while the second only assumes that the principal graph is
locally finite; the result here inherits these restrictions. The principal
graph of a finite index subfactor is always locally finite.)
This appears to be an instance of a potentially new class of obstructions to
principal graphs of subfactors, somewhat different in nature from those
derived by analyzing connections or quadratic tangles. It seems likely that
many generalizations of this result are possible, especially if a more
conceptual proof can be found.
As an example application, we can use this obstruction to rule out the
existence of a subfactor with principal graph
[pair of principal graph diagrams omitted]
at index $3+\sqrt{5}$. The possibility of such a subfactor arose during a
combinatorial search for possible principal graphs, following on from [MS12,
MPPS12, IJMS12, PT12]. There is a bi-unitary connection on this principal
graph, and this result immediately shows that the connection must not be flat.
Given a subfactor with principal graph beginning as $\Gamma$ in the theorem,
in the corresponding planar algebra $P$ we have
$P_{4,+}=TL_{4}\oplus{\mathbb{C}}S$, where $S$ is a lowest weight vector (that
is, in the kernel of each of the cap maps to $P_{3,\pm}$) and also a rotational eigenvector with eigenvalue $\omega$, a fourth root of unity. In
fact, $\omega=\pm 1$, as otherwise $P$ and $Q$ are dual to each other; in this
case, the dual graph must begin the same way, and either Ocneanu’s or Jones’
triple point obstruction [Haa94, Jon03, MPPS12] rules out a possible
subfactor. The results of [Jon03] show that this element $S$, suitably
normalized, satisfies the quadratic identity
(0.1) $S^{2}=(1-r)S+rf^{(4)}$
where $r$ is the ratio $\dim P/\dim Q$ (or its reciprocal, if less than one)
and $f^{(4)}$ is the 4-strand Jones-Wenzl idempotent. See [BMPS12, Theorem
3.9] for details. The main identity of [Jon03] shows that
$r=\begin{cases}\frac{[5]+1}{[5]-1}&\text{when }\omega=+1\text{, and}\\ 1&\text{when }\omega=-1.\end{cases}$
Here $[5]$ denotes the quantum integer $q^{-4}+q^{-2}+1+q^{2}+q^{4}$, where
$q$ is a parameter determined by the (unknown) index of the subfactor
$N\subset M$ by $[M:N]=[2]^{2}=q^{-2}+2+q^{2}$.
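Concretely, these conventions are easy to check with a couple of lines of computer algebra; the following sketch (my own, using sympy, and not part of the original argument) verifies the stated expansions of $[5]$ and of the index $[2]^{2}$.

```python
import sympy as sp

q = sp.symbols('q', positive=True)

def qint(n):
    """Quantum integer [n] = (q^n - q^-n) / (q - q^-1)."""
    return sp.cancel((q**n - q**-n) / (q - q**-1))

# [5] = q^-4 + q^-2 + 1 + q^2 + q^4, as used in the text
assert sp.simplify(qint(5) - (q**-4 + q**-2 + 1 + q**2 + q**4)) == 0

# the index [M:N] = [2]^2 = q^-2 + 2 + q^2
assert sp.simplify(qint(2)**2 - (q**-2 + 2 + q**2)) == 0
```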
The embedding theorems of [JP11, MW] show that there is a faithful map of
planar algebras $\varepsilon:P\to GPA(\Gamma)$. (See [Jon00, BMPS12, MP12a,
MP12b] for more details on the definition of the graph planar algebra, and
examples of calculations therein.) We thus consider the image
$\varepsilon(S)\in GPA(\Gamma)_{4,+}$ in the graph planar algebra for
$\Gamma$. The algebra $GPA(\Gamma)_{4,+}$ splits up as a direct sum of matrix
algebras, corresponding to loops on $\Gamma$ that have their base-point and
mid-point at specified vertices at even depths:
$GPA(\Gamma)_{4,+}\cong\bigoplus_{a,b}{\mathcal{M}}_{a,b}.$
For some, but not all, of these matrix algebras, every component corresponds
to a loop which stays within the first five depths of the principal graph
(that is, does not go above $P^{\prime}$ and $Q^{\prime}$ in the diagram
above). In particular, these matrix algebras are all those
${\mathcal{M}}_{a,b}$ where $a,b\in\{0,2,P,Q\}$, _except_ for
${\mathcal{M}}_{P,P}$ and ${\mathcal{M}}_{Q,Q}$. This condition only holds for
${\mathcal{M}}_{P,Q}$ and ${\mathcal{M}}_{Q,P}$ because of our additional
hypothesis that $P^{\prime}$ and $Q^{\prime}$ are not both connected to some
vertex at depth 6. We call the subalgebra of $GPA(\Gamma)_{4,+}$ comprising
these matrix algebras ${\mathcal{A}}$.
The condition that an element $S\in GPA(\Gamma)_{4,+}$ is a lowest weight
rotational eigenvector with eigenvalue $\omega$ consists of a collection of
linear equations ${\mathcal{L}}_{\omega}$ relating the coefficients of various
loops on $\Gamma$. (Notice that the coefficients in these equations depend on
$q$, because the lowest weight condition depends on the dimensions of the
objects in the principal graph, and these are determined by $q$ and $r$.) Some
of these equations only involve loops supported in the first 5 depths of
$\Gamma$, and we call these equations ${\mathcal{L}}^{+}_{\omega}$. (Thinking
of these equations as functionals on $GPA(\Gamma)_{4,+}$, we are taking the
subset of functionals which are supported on the subalgebra ${\mathcal{A}}$.)
The solutions of ${\mathcal{L}}_{\omega}$ are certainly a subspace of the
solutions of ${\mathcal{L}}^{+}_{\omega}$.
The general strategy is now straightforward.
* •
Identify the Jones-Wenzl idempotent $f^{(4)}$ in the matrix algebras
${\mathcal{A}}$, as a function of the parameter $q$.
* •
Solve the equations ${\mathcal{L}}^{+}_{\omega}$, which must hold for any
lowest weight rotational eigenvector with eigenvalue $\omega$, in the matrix
algebras ${\mathcal{A}}$, finding a linear subspace.
* •
Analyze the equation $S^{2}=(1-r)S+rf^{(4)}$ in this subspace.
This provides us with some quadratic equations in a small vector space over
$\mathbb{Q}(q)$. In fact, by carefully choosing particular equations to
consider, and by appropriate changes of variables, we can understand the
solutions to these equations directly. We find, in §3, that when $\omega=+1$,
no solutions are possible. On the other hand we see in §2 that when
$\omega=-1$ there is a discrete set of solutions if and only if a certain
identity is satisfied by the parameter $q$:
$q^{8}-q^{6}-q^{4}-q^{2}+1=0.$
The real solutions of this equation ensure that
$q^{2}+2+q^{-2}=(5+\sqrt{13})/2$, i.e. the index of our subfactor is exactly
the index of the Haagerup subfactor. It is easy to see that the only principal
graph extending the one we have specified up to depth 5 that has this index is
the principal graph of the Haagerup subfactor. Moreover, previous
classification results (cf. [Haa94, AH99] and also [Pet10] for an alternative
construction) show that there is a unique (up to complex conjugation)
subfactor planar algebra at this index, besides the $A_{\infty}$ subfactor
planar algebra.
The remainder of this paper consists of an analysis of the equations discussed
above. Unfortunately, in the present state of development of computer algebra,
solving them remains an art, not a science. Gröbner basis algorithms,
unsupervised, cannot draw the necessary conclusions. Essentially what follows
is an explanation of how to decide which equations to consider in which order,
so that the solutions are at each step easy to extract. The derivation is,
unfortunately, somewhat opaque, representing the human-readable form of an
argument better carried out in conversation with a computer.
## 1\. Preliminary calculations
We begin by noting the dimensions of all the vertices above, as functions of
$q$ and $r$.
###### Lemma 1.1.
$\dim(0)=1$, $\dim(1)=q+q^{-1}$, $\dim(2)=q^{2}+1+q^{-2}$, $\dim(3)=q^{3}+q+q^{-1}+q^{-3}$,
$\dim(P)=\frac{r}{r+1}(q^{4}+q^{2}+1+q^{-2}+q^{-4})$,
$\dim(Q)=\frac{1}{r+1}(q^{4}+q^{2}+1+q^{-2}+q^{-4})$,
$\dim(P^{\prime})=\frac{r}{r+1}(q^{5}+2q^{3}+2q+2q^{-1}+2q^{-3}+q^{-5})-(q^{3}+q+q^{-1}+q^{-3})=\frac{r}{r+1}(q^{5}+q^{-5})+\frac{r-1}{r+1}(q^{3}+q+q^{-1}+q^{-3})$,
$\dim(Q^{\prime})=\frac{1}{r+1}(q^{5}+2q^{3}+2q+2q^{-1}+2q^{-3}+q^{-5})-(q^{3}+q+q^{-1}+q^{-3})=\frac{1}{r+1}(q^{5}+q^{-5})-\frac{r-1}{r+1}(q^{3}+q+q^{-1}+q^{-3})$.
###### Proof.
This is just the condition that the dimensions form an eigenvector of the
graph adjacency matrix, with eigenvalue $q+q^{-1}$, and the dimensions of the
two vertices at depth 4 have ratio $r$. ∎
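The proof amounts to a handful of polynomial identities, which can be verified symbolically. The sketch below (an independent check of mine, assuming the adjacency structure of $\Gamma$ described above) confirms the eigenvector condition at every vertex of depth at most 4; no condition is imposed at $P^{\prime}$ or $Q^{\prime}$, since the graph may continue past depth 5.

```python
import sympy as sp

q, r = sp.symbols('q r', positive=True)

dim = {
    '0': sp.Integer(1),
    '1': q + 1/q,
    '2': q**2 + 1 + q**-2,
    '3': q**3 + q + q**-1 + q**-3,
    'P': r/(r + 1) * (q**4 + q**2 + 1 + q**-2 + q**-4),
    'Q': 1/(r + 1) * (q**4 + q**2 + 1 + q**-2 + q**-4),
    "P'": r/(r + 1) * (q**5 + 2*q**3 + 2*q + 2*q**-1 + 2*q**-3 + q**-5)
          - (q**3 + q + q**-1 + q**-3),
    "Q'": 1/(r + 1) * (q**5 + 2*q**3 + 2*q + 2*q**-1 + 2*q**-3 + q**-5)
          - (q**3 + q + q**-1 + q**-3),
}

# Edges of the principal graph up to depth 5.
edges = [('0', '1'), ('1', '2'), ('2', '3'),
         ('3', 'P'), ('3', 'Q'), ('P', "P'"), ('Q', "Q'")]

def neighbours(v):
    return [b for a, b in edges if a == v] + [a for a, b in edges if b == v]

# Eigenvector condition (q + 1/q) * dim(v) = sum of dims of the neighbours,
# checked only at vertices of depth at most 4.
for v in ['0', '1', '2', '3', 'P', 'Q']:
    lhs = (q + 1/q) * dim[v]
    rhs = sum(dim[w] for w in neighbours(v))
    assert sp.simplify(lhs - rhs) == 0

# The two depth-4 dimensions have ratio r, as required.
assert sp.simplify(dim['P'] / dim['Q'] - r) == 0
```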
###### Lemma 1.2.
When $\omega=-1$, $r=1$, and
$\dim(P)=\dim(Q)=\frac{1}{2}(q^{4}+q^{2}+1+q^{-2}+q^{-4})$ and $\dim(P^{\prime})=\dim(Q^{\prime})=\frac{1}{2}(q^{5}+q^{-5})$.
When $\omega=+1$,
$r=\frac{q^{8}+q^{6}+2q^{4}+q^{2}+1}{q^{8}+q^{6}+q^{2}+1},$
and
$\dim(P)=\frac{1}{2}(q^{4}+q^{2}+2+q^{-2}+q^{-4})$, $\dim(Q)=\frac{1}{2}(q^{4}+q^{2}+q^{-2}+q^{-4})$, $\dim(P^{\prime})=\frac{1}{2}(q^{5}+q+q^{-1}+q^{-5})$, $\dim(Q^{\prime})=\frac{1}{2}(q^{5}-q-q^{-1}+q^{-5})$.
###### Proof.
The formulas for $r$ follow immediately from Theorem 5.1.11 of [Jon03]. ∎
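As a cross-check (mine, not part of [Jon03]), the $\omega=+1$ formula for $r$ agrees with $\frac{[5]+1}{[5]-1}$ from the main identity quoted earlier, and setting $r=1$ in Lemma 1.1 collapses $\dim(P^{\prime})$ to $\frac{1}{2}(q^{5}+q^{-5})$:

```python
import sympy as sp

q = sp.symbols('q', positive=True)
five = q**4 + q**2 + 1 + q**-2 + q**-4          # the quantum integer [5]

# omega = +1: r = ([5]+1)/([5]-1) equals the polynomial form of Lemma 1.2
r_plus = (five + 1) / (five - 1)
r_poly = (q**8 + q**6 + 2*q**4 + q**2 + 1) / (q**8 + q**6 + q**2 + 1)
assert sp.simplify(r_plus - r_poly) == 0

# omega = -1, r = 1: dim(P') from Lemma 1.1 reduces to (q^5 + q^-5)/2
dimP_prime_r1 = sp.Rational(1, 2) * (q**5 + 2*q**3 + 2*q + 2*q**-1 + 2*q**-3 + q**-5) \
                - (q**3 + q + q**-1 + q**-3)
assert sp.simplify(dimP_prime_r1 - (q**5 + q**-5) / 2) == 0
```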
###### Lemma 1.3.
Suppose $S$ is a 4-box. We will denote by $S(\cdots abc\cdots)$ the evaluation
of $S$ on a certain loop; in each equation, the parts of the loops indicated by
the ellipses are held fixed. If the loop passes through $b$ at either the base
point or the mid point of the loop, we define $\kappa=1$, and otherwise
$\kappa=1/2$.
If $S$ is a lowest weight 4-box,
(1.1) $S(\cdots 010\cdots)=0$
(1.2) $S(\cdots 121\cdots)=-\left(\frac{1}{\dim(2)}\right)^{\kappa}S(\cdots 101\cdots)$
(1.3) $S(\cdots 232\cdots)=-\left(\frac{\dim(1)}{\dim(3)}\right)^{\kappa}S(\cdots 212\cdots)$
(1.4) $S(\cdots P3P\cdots)=-\left(\frac{\dim(3)}{\dim(P^{\prime})}\right)^{\kappa}S(\cdots PP^{\prime}P\cdots),$
(1.5) $S(\cdots Q3Q\cdots)=-\left(\frac{\dim(3)}{\dim(Q^{\prime})}\right)^{\kappa}S(\cdots QQ^{\prime}Q\cdots),$
and further
(1.6) $S(\cdots 3Q3\cdots)=-\left(\frac{\dim(Q)}{\dim(2)}\right)^{\kappa}S(\cdots 323\cdots)-\left(\frac{1}{r}\right)^{\kappa}S(\cdots 3P3\cdots)$
###### Proof.
These follow directly from the definition of lowest weight vector. Calculate
the evaluations $\cap_{i}S(\cdots 1\cdots)$, $\cap_{i}S(\cdots 2\cdots)$,
$\cap_{i}S(\cdots 3\cdots)$ or $\cap_{i}S(\cdots P\cdots)$ as follows:
$0=\cap_{i}S(\cdots\alpha\cdots)=\sum_{\beta}\left(\frac{\dim(\beta)}{\dim(\alpha)}\right)^{\kappa}S(\cdots\alpha\beta\alpha\cdots).$
Thus for example we have $0={\dim(2)}^{\kappa}S(\cdots
121\cdots)+\dim(0)^{\kappa}S(\cdots 101\cdots)$. ∎
It is easy to see that using Equations (1.2), (1.3), (1.4) and (1.5) (and
not needing Equation (1.6)) we can write the coefficient in a lowest weight
4-box of any loop supported in depths at most $5$ as a real multiple of the
coefficient of a corresponding ‘collapsed’ loop which does not leave the
immediate vicinity of the vertex $3$ (that is, supported on $2$, $3$, $P$ and
$Q$). There are 81 such collapsed loops, and 24 orbits of the rotation group
on the collapsed loops. Thus after fixing a rotational eigenvalue of
$\omega=\pm 1$, we may work in a 24-dimensional space (different for each
eigenvalue). Equation (1.1) ensures that any loop confined to the initial arm,
and in particular the collapsed loop supported on $2$ and $3$, is zero. (A
similar analysis of a lowest weight space in a graph planar algebra is
described in more detail in [MP12a, §A].) There remain all the instances of Equation (1.6), which in fact cut the space down to either 4 dimensions when $\omega=-1$ or 3 dimensions when $\omega=+1$. We prefer not to pick bases
for these solution spaces at this point, however.
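The counts above are easy to reproduce by brute force. In the sketch below (an independent enumeration; the encoding of a collapsed loop $a_{0}3a_{1}3a_{2}3a_{3}3$ as the tuple $(a_{0},a_{1},a_{2},a_{3})$ with $a_{i}\in\{2,P,Q\}$ is mine), the rotation acts by cyclic shift, and collecting orbits gives 24.

```python
from itertools import product

# A collapsed loop a0 3 a1 3 a2 3 a3 3 (returning to a0) is determined by
# the 4-tuple of even vertices (a0, a1, a2, a3), each in {2, P, Q}.
even_vertices = ['2', 'P', 'Q']
loops = list(product(even_vertices, repeat=4))
print(len(loops))          # 81 collapsed loops

# The rotation of the planar algebra shifts a loop by one even step,
# i.e. acts on the tuple by a cyclic shift.
def rotate(t):
    return t[1:] + t[:1]

orbits = set()
for t in loops:
    orbit, s = [], t
    for _ in range(4):
        orbit.append(s)
        s = rotate(s)
    orbits.add(frozenset(orbit))
print(len(orbits))         # 24 rotational orbits
```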
We next need certain coefficients of the 4-strand Jones-Wenzl idempotent.
###### Lemma 1.4.
(1.7) $(f^{(4)})_{0123P,0123P}=1$
(1.8) $(f^{(4)})_{0123Q,0123Q}=1$
(1.9) $(f^{(4)})_{2323P,2323P}=\frac{q^{2}-1+q^{-2}}{q^{2}+2+q^{-2}}$
(1.10) $(f^{(4)})_{2323Q,2323Q}=\frac{q^{2}-1+q^{-2}}{q^{2}+2+q^{-2}}$
(1.11) $(f^{(4)})_{2323P,23P3P}=\frac{1}{\sqrt{\dim(2)\dim(P)}}\frac{1}{1+r}\frac{(q^{2}+q^{6})-r(1+q^{4}+q^{8})}{(1+q^{4})^{2}}$
(1.12) $(f^{(4)})_{23232,23P32}=-\sqrt{\frac{\dim(P)}{\dim(2)}}\frac{q^{8}}{(1+q^{4})^{2}(1+q^{2}+q^{4})^{2}}$
(1.13) $(f^{(4)})_{P323Q,P323Q}=\frac{1-q^{2}+q^{4}-q^{6}+q^{8}}{\left(1+q^{4}\right)^{2}}$
###### Proof.
We’ll just illustrate the method of calculation for $(f^{(4)})_{2323P,2323P}$.
We only need to calculate the contribution to $f^{(4)}$ of the Temperley-Lieb
diagrams
[pictures of the five relevant four-strand Temperley-Lieb diagrams omitted]
because when the boundary is labelled with $2323P$ along both top and bottom,
any other diagram connects unequal boundary labels. Using the formulas from
[Mor] (easily derivable from the earlier work of [FK97]), we then have
$f^{(4)}=(\text{identity diagram})-\frac{[2]^{2}}{[4]}(\text{diagram})+\frac{[2]}{[4]}(\text{diagram})-\frac{[3]}{[4]}(\text{diagram})+\frac{[2]}{[4]}(\text{diagram})+\cdots$ [Temperley-Lieb diagram pictures omitted]
and
$(f^{(4)})_{2323P,2323P}=1-\frac{[2]^{2}}{[4]}\frac{\dim(2)}{\dim(3)}+\frac{[2]}{[4]}-\frac{[3]}{[4]}\frac{\dim(3)}{\dim(2)}+\frac{[2]}{[4]}=\frac{q^{2}-1+q^{-2}}{q^{2}+2+q^{-2}}.\qed$
###### Lemma 1.5.
$S_{0123P,0123P}=\kappa=\text{$1$ or $-r$}$
###### Proof.
Just solve Equation (0.1) in ${\mathcal{M}}_{0,P}$, where we only have the
$0123P,0123P$ component. Equation (1.7) tells us the coefficient of the Jones-
Wenzl idempotent. ∎
When $r=1$ Equation (0.1) has a symmetry $S\mapsto-S$, so we may assume
$\kappa=1$ there.
It is now necessary to break into cases, to handle the two possible rotational
eigenvalues.
## 2\. When the rotational eigenvalue is $\omega=-1$ and $r=1$.
We now have a large number of quadratic equations, coming from Equation (0.1)
in 24 variables, along with further linear equations which we know can cut
those 24 dimensions down to 4, and throughout we are working over
$\mathbb{Q}(\sqrt{\mathrm{FPdims}})$. (This field is rather awkward; by using
the lopsided conventions described in [MP12b], we could arrange to work over
$\mathbb{Q}(q)$, but this turns out to give little advantage.)
We begin by asking if we are lucky: is it possible to write any of these
quadratics as a function of a single variable, using the linear equations?
It turns out that many of the components of Equation (0.1) can be rewritten
modulo Equations (1.2) through (1.6) as quadratics in a single variable, and
in particular we can find univariate equations in any of $S_{2323P,23P3P}$,
$S_{2323P,23Q3P}$, $S_{2323Q,23P3Q}$, $S_{2323Q,23Q3Q}$, $S_{23P32,23P32}$,
$S_{23P32,23Q32}$ or $S_{23Q32,23Q32}$. These variables only span a 2-dimensional subspace modulo Equation (1.6), so we choose two components to
analyze. Choosing components corresponding to collapsed loops reduces the
amount of work required to rewrite in terms of a single variable.
###### Lemma 2.1.
The components $23232,23P32$, $2323P,2323P$ of Equation (0.1) can be
simplified using Equations (1.1) through (1.6) and Lemma 1.4 to give
$S_{23P32,23P32}=-\frac{q^{4}}{1+q^{2}+2q^{4}+q^{6}+q^{8}}$
(2.1) $S_{2323P,23P3P}^{2}=-\frac{\left(q^{11}+q\right)^{2}}{2\left(q^{4}+1\right)\left(q^{4}+q^{2}+1\right)^{2}\left(q^{12}+q^{8}+q^{6}+q^{4}+1\right)}.$
###### Proof.
The matrix algebra ${\mathcal{M}}_{2,2}$ is 7-by-7 with rows and columns
indexed by the paths $21012,21212,21232,23212,23232,23P32$ and $23Q32$. Thus
$S^{2}_{23232,23P32}=\sum_{\gamma}S_{23232,\gamma}S_{\gamma,23P32}$
and we easily see that only the $\gamma=23P32$ and $\gamma=23Q32$ terms
contribute to this sum, since otherwise $S_{23232,\gamma}$ is confined to the
initial arm and so is zero by Equations (1.1), (1.2) and (1.3).
Thus we have the equation
$\displaystyle S_{23232,23P32}S_{23P32,23P32}+S_{23232,23Q32}S_{23Q32,23P32}$
$\displaystyle=(f^{(4)})_{23232,23P32}$
$\displaystyle=-\sqrt{\frac{\dim(P)}{\dim(2)}}\frac{q^{8}}{(1+q^{4})^{2}(1+q^{2}+q^{4})^{2}}$
where we’ve used Equation (1.12). It’s now easy to see the strategy. First use
Equation (1.6) to replace $S_{23232,23Q32}$ and $S_{23Q32,23P32}$ with a
linear combination of coefficients which avoid $Q$. Then modulo rotations and
collapsing, we can write $S_{23232,23P32}$ (indeed, any collapsed paths that
enter $P$ or $Q$ exactly once) as a known multiple of $S_{0123P,0123P}$, which
we know is equal to $1$ from Lemma 1.5. After these steps, the left hand side
is linear in $S_{23P32,23P32}$, and collecting terms gives the desired result.
∎
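The index set of ${\mathcal{M}}_{2,2}$ used in this proof can be double-checked by a short path enumeration (an independent sketch of mine; the depth-6 part of $\Gamma$ is omitted because a length-4 path from $2$ back to $2$ cannot reach it):

```python
# Adjacency of the principal graph up to depth 5.
adj = {
    '0': ['1'],
    '1': ['0', '2'],
    '2': ['1', '3'],
    '3': ['2', 'P', 'Q'],
    'P': ['3', "P'"],
    'Q': ['3', "Q'"],
    "P'": ['P'],
    "Q'": ['Q'],
}

def paths(start, end, length):
    """All paths of the given edge-length from start to end."""
    if length == 0:
        return [[start]] if start == end else []
    return [[start] + rest
            for nxt in adj[start]
            for rest in paths(nxt, end, length - 1)]

for p in paths('2', '2', 4):
    print(''.join(p))
# prints 21012, 21212, 21232, 23212, 23232, 23P32, 23Q32: a 7-by-7 matrix algebra
```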
The proofs of the remaining lemmas in the paper are all analogous to the
above, and we omit them for brevity.
###### Lemma 2.2.
The component $2323P,23P3P$ of Equation (0.1) can be simplified using
Equations (1.1) through (1.6), Lemma 1.4 and either solution of the equations
in Lemma 2.1 to give
$S_{23P3P,23P3P}=\frac{q^{18}+3q^{14}-2q^{12}+3q^{10}-2q^{8}+3q^{6}+q^{2}}{2\left(q^{8}+q^{6}+q^{4}+q^{2}+1\right)\left(q^{12}+2q^{8}+2q^{4}+1\right)}$
###### Lemma 2.3.
The $P323Q,P323Q$ component of Equation (0.1) can be simplified (using
everything above, with either solution of Equation (2.1)) to give
(2.2)
$-\frac{\left(q^{8}{-}q^{6}{-}q^{4}{-}q^{2}{+}1\right)\left(q^{8}{-}q^{6}{+}q^{4}{-}q^{2}{+}1\right)\left(q^{8}{+}q^{6}{+}3q^{4}{+}q^{2}{+}1\right)}{\left(q^{2}{-}q{+}1\right)\left(q^{2}{+}q{+}1\right)\left(q^{4}{+}1\right)\left(q^{4}{-}q^{2}{+}1\right)^{2}\left(q^{4}{-}q^{3}{+}q^{2}{-}q{+}1\right)\left(q^{4}{+}q^{3}{+}q^{2}{+}q{+}1\right)}=0$
###### Lemma 2.4.
All of the factors in Equation (2.2) are strictly positive for $q>1$, except
for $1-q^{2}-q^{4}-q^{6}+q^{8}$, which has 4 real roots at $q=\pm q_{0}^{\pm
1}$ where $q_{0}$ is the largest real root, approximately $1.31228...$. The
corresponding index value $q^{2}+2+q^{-2}$ is $\frac{5+\sqrt{13}}{2}$, that
is, the index of the Haagerup subfactor.
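For completeness, the index value can be seen directly: writing $x=q^{2}+q^{-2}$ and dividing $q^{8}-q^{6}-q^{4}-q^{2}+1=0$ by $q^{4}$ gives $(q^{4}+q^{-4})-(q^{2}+q^{-2})-1=x^{2}-x-3=0$, so $x=\frac{1+\sqrt{13}}{2}$ and the index is $q^{2}+2+q^{-2}=x+2=\frac{5+\sqrt{13}}{2}$, with $q_{0}\approx 1.31228$ the corresponding root.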
Thus, by the classification results of [Haa94] the only possibility with
$\omega=-1$ is that the standard invariant of our subfactor is exactly that of
the Haagerup subfactor.
## 3\. When the rotational eigenvalue is $\omega=+1$.
###### Lemma 3.1.
We cannot have $\kappa=-r$ with $\omega=1$ in Lemma 1.5, since otherwise the
$0123Q,0123Q$ component of Equation (0.1) gives
$\frac{8q^{4}\left(q^{8}+q^{6}+q^{4}+q^{2}+1\right)^{2}\left(q^{8}+q^{6}+2q^{4}+q^{2}+1\right)}{\left(q^{2}+1\right)^{8}\left(q^{4}-q^{2}+1\right)^{4}}=0,$
which is only possible if $q$ is a root of unity.
###### Lemma 3.2.
If $\kappa=1$ in Lemma 1.5, then the $23232,23P32$ and $2323P,2323P$
components of Equation (0.1) give
$S_{23P32,23P32}=0$ and
$S_{2323P,23P3P}=\frac{q\left(q^{8}+q^{6}+q^{2}+1\right)}{\sqrt{2}\sqrt{q^{4}+1}\left(q^{4}+q^{2}+1\right)^{2}}\quad\text{or}\quad S_{2323P,23P3P}=\frac{q\sqrt{q^{4}+1}}{\sqrt{2}\left(q^{4}+q^{2}+1\right)}.$
###### Lemma 3.3.
The first case in Lemma 3.2 is impossible, because then the $2323Q,2323Q$
component of Equation (0.1) gives
$\frac{8q^{6}\left(q^{4}-q^{3}+q^{2}-q+1\right)^{2}\left(q^{4}+q^{3}+q^{2}+q+1\right)^{2}}{(q-1)^{2}(q+1)^{2}\left(q^{2}+1\right)^{4}\left(q^{2}-q+1\right)^{2}\left(q^{2}+q+1\right)^{2}\left(q^{4}-q^{2}+1\right)^{2}}=0$
which cannot hold when $q>1$.
###### Lemma 3.4.
In the second case of Lemma 3.2 the $2323P,23P3P$ component of Equation (0.1)
gives
$S_{23P3P,23P3P}=-\frac{q^{2}}{2\left(q^{4}+q^{2}+1\right)}$
###### Lemma 3.5.
Finally, in the second case of Lemma 3.2, using the conclusion of the previous
lemma we find that the $P323Q,P323Q$ component of Equation (0.1) gives
$\frac{-q^{20}+2q^{10}-1}{\left(q^{2}+1\right)^{4}\left(q^{4}-q^{2}+1\right)^{3}}=0$
which is impossible when $q>1$.
Thus we see that the rotational eigenvalue $\omega=+1$ is impossible, and we
have completed the proof of the theorem.
## References
* [AH99] Marta Asaeda and Uffe Haagerup. Exotic subfactors of finite depth with Jones indices $(5+\sqrt{13})/2$ and $(5+\sqrt{17})/2$. Comm. Math. Phys., 202(1):1–63, 1999. arXiv:math.OA/9803044 MR1686551 DOI:10.1007/s002200050574.
* [Asa07] Marta Asaeda. Galois groups and an obstruction to principal graphs of subfactors. Internat. J. Math., 18(2):191--202, 2007. MR2307421 DOI:10.1142/S0129167X07003996 arXiv:math.OA/0605318.
* [AY09] Marta Asaeda and Seidai Yasuda. On Haagerup’s list of potential principal graphs of subfactors. Comm. Math. Phys., 286(3):1141--1157, 2009. arXiv:0711.4144 MR2472028 DOI:10.1007/s00220-008-0588-0.
* [BMPS12] Stephen Bigelow, Scott Morrison, Emily Peters, and Noah Snyder. Constructing the extended Haagerup planar algebra. Acta Math., 209(1):29--82, 2012. arXiv:0909.4099 MR2979509 DOI:10.1007/s11511-012-0081-7.
* [CMS11] Frank Calegari, Scott Morrison, and Noah Snyder. Cyclotomic integers, fusion categories, and subfactors. Communications in Mathematical Physics, 303(3):845--896, 2011. With an appendix by Victor Ostrik. arXiv:1004.0665 DOI:10.1007/s00220-010-1136-2 MR2786219.
* [FK97] Igor B. Frenkel and Mikhail G. Khovanov. Canonical bases in tensor products and graphical calculus for $U_{q}({\mathfrak{s}}{\mathfrak{l}}_{2})$. Duke Math. J., 87(3):409--480, 1997. MR1446615 DOI:10.1215/S0012-7094-97-08715-9.
* [Haa94] Uffe Haagerup. Principal graphs of subfactors in the index range $4<[M:N]<3+\sqrt{2}$. In Subfactors (Kyuzeso, 1993), pages 1--38. World Sci. Publ., River Edge, NJ, 1994. MR1317352.
* [IJMS12] Masaki Izumi, Vaughan F. R. Jones, Scott Morrison, and Noah Snyder. Subfactors of index less than 5, Part 3: Quadruple points. Comm. Math. Phys., 316(2):531--554, 2012. arXiv:1109.3190 MR2993924 DOI:10.1007/s00220-012-1472-5.
* [JMS13] Vaughan F. R. Jones, Scott Morrison, and Noah Snyder. The classification of subfactors of index at most 5, 2013. arXiv:1304.6141, to appear in the Bulletin of the American Mathematical Society.
* [Jon00] Vaughan F. R. Jones. The planar algebra of a bipartite graph. In Knots in Hellas ’98 (Delphi), volume 24 of Ser. Knots Everything, pages 94--117. World Sci. Publ., River Edge, NJ, 2000. MR1865703 (preview at google books).
* [Jon03] Vaughan F. R. Jones. Quadratic tangles in planar algebras, 2003. arXiv:1007.1158, to appear in Duke Math. Journ.
* [JP11] Vaughan F. R. Jones and David Penneys. The embedding theorem for finite depth subfactor planar algebras. Quantum Topol., 2(3):301--337, 2011. arXiv:1007.3173 MR2812459 DOI:10.4171/QT/23.
* [Mor] Scott Morrison. A formula for the Jones-Wenzl projections. Unpublished, available at http://tqft.net/math/JonesWenzlProjections.pdf.
* [MP12a] Scott Morrison and David Penneys. Constructing spoke subfactors using the jellyfish algorithm, 2012. arXiv:1208.3637, to appear in Transactions of the American Mathematical Society.
* [MP12b] Scott Morrison and Emily Peters. The little desert? Some subfactors with index in the interval $(5,3+\sqrt{5})$, 2012. arXiv:1205.2742.
* [MPPS12] Scott Morrison, David Penneys, Emily Peters, and Noah Snyder. Classification of subfactors of index less than 5, part 2: triple points. International Journal of Mathematics, 23(3):1250016, 2012. arXiv:1007.2240 MR2902285 DOI:10.1142/S0129167X11007586.
* [MS12] Scott Morrison and Noah Snyder. Subfactors of index less than 5, part 1: the principal graph odometer. Communications in Mathematical Physics, 312(1):1--35, 2012. arXiv:1007.1730 MR2914056 DOI:10.1007/s00220-012-1426-y.
* [MW] Scott Morrison and Kevin Walker. Planar algebras, connections, and Turaev-Viro theory. preprint available at http://tqft.net/tvc.
* [Ost09] Victor Ostrik. On formal codegrees of fusion categories. Math. Research Letters, 16(5):895--901, 2009. arXiv:0810.3242 MR2576705.
* [Pen13] David Penneys. Chirality and principal graph obstructions, 2013. arXiv:1307.5890.
* [Pet10] Emily Peters. A planar algebra construction of the Haagerup subfactor. Internat. J. Math., 21(8):987--1045, 2010. arXiv:0902.1294 MR2679382 DOI:10.1142/S0129167X10006380.
* [PT12] David Penneys and James Tener. Classification of subfactors of index less than 5, part 4: vines. International Journal of Mathematics, 23(3):1250017 (18 pages), 2012. arXiv:1010.3797 MR2902286 DOI:10.1142/S0129167X11007641.
* [Sny12] Noah Snyder. A rotational approach to triple point obstructions, 2012. arXiv:1207.5090, in press at Analysis & PDE.
This paper is available online at arXiv:1302.5148, under a "Creative Commons-By Attribution" license. It has been accepted for publication in the
_Bulletin of the London Mathematical Society_ as of 2013/10/14. MSC classes:
46L37 (Primary), 18D05, 57M20 (Secondary)
|
arxiv-papers
| 2013-02-20T23:50:02 |
2024-09-04T02:49:41.987472
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Scott Morrison",
"submitter": "Scott Morrison",
"url": "https://arxiv.org/abs/1302.5148"
}
|
1302.5259
|
EUROPEAN ORGANIZATION FOR NUCLEAR RESEARCH (CERN)
CERN-LHCb-DP-2012-005 21 February 2013
Radiation damage in the LHCb Vertex Locator
The LHCb VELO group111Authors are listed on the following pages.
The LHCb Vertex Locator (VELO) is a silicon strip detector designed to
reconstruct charged particle trajectories and vertices produced at the LHCb
interaction region. During the first two years of data collection, the $84$
VELO sensors have been exposed to a range of fluences up to a maximum value of
approximately $\rm{45\times 10^{12}\,1\,MeV}$ neutron equivalent
($\rm{1\,MeV\,n_{eq}}$). At the operational sensor temperature of
approximately $-7\,^{\circ}\rm{C}$, the average rate of sensor current
increase is $18\,\upmu\rm{A}$ per $\rm{fb^{-1}}$, in excellent agreement with
predictions. The silicon effective bandgap has been determined using current
versus temperature scan data after irradiation, with an average value of
$E_{g}=1.16\pm 0.03\pm 0.04\,\rm{eV}$ obtained. The first observation of
$\rm{\it{n^{+}}}\mbox{-}on\mbox{-}{\it{n}}$ sensor type inversion at the LHC
has been made, occurring at a fluence of around $15\times 10^{12}$ of
$1\,\rm{MeV\,n_{eq}}$. The only $\rm{\it{n^{+}}}\mbox{-}on\mbox{-}{\it{p}}$
sensors in use at the LHC have also been studied. With an initial fluence of
approximately $\rm{3\times 10^{12}\,1\,MeV\,n_{eq}}$, a decrease in the
Effective Depletion Voltage (EDV) of around $25$ V is observed, attributed to
oxygen induced removal of boron interstitial sites. Following this initial
decrease, the EDV increases at a comparable rate to the type inverted
$\rm{\it{n^{+}}}\mbox{-}on\mbox{-}{\it{n}}$ type sensors, with rates of
$(1.43\pm 0.16)\times 10^{-12}\,\rm{V}/\,1\,\rm{MeV\,n_{eq}}$ and $(1.35\pm
0.25)\times 10^{-12}\,\rm{V}/\,1\,\rm{MeV\,n_{eq}}$ measured for
$\rm{\it{n^{+}}}\mbox{-}on\mbox{-}{\it{p}}$ and
$\rm{\it{n^{+}}}\mbox{-}on\mbox{-}{\it{n}}$ type sensors, respectively. A
reduction in the charge collection efficiency due to an unexpected effect
involving the second metal layer readout lines is observed.
LHCb VELO group
A. Affolder1, K. Akiba2, M. Alexander3, S. Ali4, M. Artuso5, J. Benton6, M. van
Beuzekom4, P.M. Bjørnstad7, G. Bogdanova8, S. Borghi3,7, T.J.V. Bowcock1, H.
Brown1, J. Buytaert9, G. Casse1, P. Collins9, S. De Capua7, D. Dossett10, L.
Eklund3, C. Farinelli4, J. Garofoli5, M. Gersabeck9, T. Gershon9,10, H.
Gordon11, J. Harrison7, V. Heijne4, K. Hennessy1, D. Hutchcroft1, E. Jans4, M.
John11, T. Ketel4, G. Lafferty7, T. Latham10, A. Leflat8,9, M. Liles1, D.
Moran7, I. Mous4, A. Oblakowska-Mucha12, C. Parkes7, G.D. Patel1, S.
Redford11, M.M. Reid10, K. Rinnert1, E. Rodrigues3,7, M. Schiller4, T.
Szumlak12, C. Thomas11, J. Velthuis6, V. Volkov8, A.D. Webber7, M.
Whitehead10, E. Zverev8.
1Oliver Lodge Laboratory, University of Liverpool, Liverpool, United Kingdom
2Universidade Federal do Rio de Janeiro (UFRJ), Rio de Janeiro, Brasil
3School of Physics and Astronomy, University of Glasgow, Glasgow, United
Kingdom
4Nikhef National Institute for Subatomic Physics, Amsterdam, Netherlands
5Syracuse University, Syracuse, NY, United States
6H.H. Wills Physics Laboratory, University of Bristol, Bristol, United Kingdom
7School of Physics and Astronomy, University of Manchester, Manchester, United
Kingdom
8Institute of Nuclear Physics, Moscow State University (SINP MSU), Moscow,
Russia
9European Organization for Nuclear Research (CERN), Geneva, Switzerland
10Department of Physics, University of Warwick, Coventry, United Kingdom
11Department of Physics, University of Oxford, Oxford, United Kingdom
12AGH University of Science and Technology, Krakow, Poland
## 1 Introduction
The VErtex LOcator (VELO) is a silicon strip detector positioned around the
proton-proton interaction region at the LHCb [1] experiment. To obtain the
precision vertexing required for heavy-flavour physics, the closest active
silicon sensor region is located $\rm{8.2\,mm}$ from the beam axis, while the
silicon edge is located at a distance of $7$ mm. For the luminosity delivered
by the LHC in $2010$ and $2011$, the VELO was exposed to higher particle
fluences than any other silicon detector at the LHC. Careful monitoring of
radiation damage to the sensors is essential to ensure the quality of data for
LHCb physics analyses and to provide information relevant to the eventual
detector replacement and upgrade.
During proton injection and energy ramping the LHC beams are wider and less
stable than the beams used for data taking. To prevent damage to the silicon
sensors, the VELO consists of two halves retractable by $29\,\rm{mm}$ in the
horizontal plane. Each half contains $42$ half-disc shaped silicon-strip
sensors. When the beams are in a stable orbit, the two VELO halves are closed
such that the colliding beams are surrounded by the silicon sensors. Half of
the sensors have strips orientated in an approximately radial direction
($\rm{\phi}$-type) and the other half perpendicular to this (R-type), as shown
in figure 1. A detector module consists of an R-type and a $\rm{\phi}$-type
sensor glued to a common support in a back-to-back configuration. Track
coordinates are measured using the charge collected by the two sensors in a
module. All but two of the VELO sensors are oxygenated
$\rm{\it{n^{+}}}\mbox{-}on\mbox{-}{\it{n}}$ sensors, consisting of an $n^{+}$-type implant on an $n$-type bulk with a backplane $p^{+}$-type implant. Two
oxygenated $\rm{\it{n^{+}}}\mbox{-}on\mbox{-}{\it{p}}$ silicon sensors are
installed at one end of the VELO, intended to be a test of one of the leading
LHC silicon-upgrade candidates in an operational environment. A summary of the
silicon sensor properties is given in table 1.
Figure 1: A schematic representation of an R-type and a $\rm{\phi}$-type
sensor, with the routing lines orientated perpendicular and parallel to the
silicon strips, respectively.
Each $n^{+}$ implant is read out via a capacitively coupled _first metal
layer_ running along its length. The R-type strips and inner $\rm{\phi}$-type
strips do not extend to the outer region of the sensor. Therefore each strip
is connected via a metal routing line to the edge of the sensor where the
readout electronics are located. The routing lines are referred to as the
_second metal layer_ and are insulated from the bulk silicon and first metal
layer by $3.8\pm 0.3\,\upmu\rm{m}$ of $\rm{SiO_{2}}$. For R-type sensors, the
routing lines are positioned perpendicular to the sensor strips, whilst for
the $\rm{\phi}$-type they are positioned directly above and parallel to the
strips.
Table 1: The VELO sensor design parameters. The sensor position along the beam axis is given relative to the beam interaction region.

Parameter | Value
---|---
Silicon thickness | $300\,\upmu\rm{m}$
Strip pitch | $40\mathord{-}120\,\upmu\rm{m}$
Strip width | $11\mathord{-}38\,\upmu\rm{m}$
Routing line width | $\mathord{\sim}11\,\upmu\rm{m}$
Inner silicon edge to beam axis | $7\,\rm{mm}$
Radial distance of active strips from beam axis | $8.2\mathord{-}42\,\rm{mm}$
Sensor position along beam axis | $-300$ to $750\,\rm{mm}$
Oxygen enhancement | $>1\times 10^{17}\,\rm{cm^{-3}}$
This paper presents studies of radiation damage in the VELO sensors using data
collected during $2010$ and $2011$. Section 2 describes studies of sensor
currents as a function of voltage and temperature. In section 3 the effects of
radiation are monitored by measuring the charge collection efficiency and
noise as a function of bias voltage. An unexpected decrease in clustering
efficiency due to an effect involving the second metal layer is described in
section 4. The results from the various studies are summarised in section 5.
## 2 Current evolution
The leakage current in a silicon sensor varies linearly with fluence for a
wide range of silicon substrates [2]. This predictability provides a simple
and accurate method of relating sensor currents to the amount of particle
fluence accumulated by a sensor. This section presents an analysis of sensor
currents in order to monitor radiation damage in the VELO.
Figure 2: Currents measured in the VELO for each sensor as a function of time
(bottom). The luminosity delivered to LHCb and the average sensor temperature
is shown over the same time scale (middle and top). Increases in the delivered
luminosity are matched by increases in the sensor currents. The mean measured
current agrees well with the prediction from simulation, which is described in
Sect.2.4. The mean measured value excludes sensors that are surface-current
dominated.
### 2.1 Interpretation of sensor current measurements
Each VELO sensor is biased through a high-voltage connection to the backplane,
and a common ground connection to the bias line and innermost guard ring on
the strip side. Therefore a single current is measured, corresponding to the
current drawn by the entire sensor. The raw measured currents are shown as a
function of time in figure 2, for sensors operated at the nominal bias voltage
of $150\,\rm{V}$ and at a mean temperature of approximately
$-7\,^{\circ}\rm{C}$. During the early running period, the majority of the
sensors had small currents, with a few exceptions that had currents of up to
$40\,\upmu\rm{A}$. With sensor irradiation the bulk currents have increased.
The spread of the measured currents is partly due to the difference in the
sensor positions relative to the interaction point, but is dominated by
variations in the sensor temperatures. The occasional dips in the currents are
related to short annealing periods in which the cooling systems were switched
off.
The measured currents contain contributions from two dominant sources,
generically referred to as _bulk_ and _surface_ currents. The bulk currents
vary exponentially with temperature and have a precisely predicted
relationship with fluence. Surface currents arise due to irregularities
introduced during sensor production such as process errors, scratches, guard
rings and non-uniformities in the cut edges. Some of these contributions may
have an exponential dependence on temperature [3], however the surface
currents measured in VELO sensors are predominantly characterised by an Ohmic
increase in current with bias voltage. In the majority of sensors the Ohmic
surface current is seen to anneal with particle fluence. An analysis of VELO
sensor currents as a function of bias voltage (IV scan) is described in detail
in ref. [4].
For the VELO sensors the pre-irradiation exponential contribution is very
small and is assumed to consist of a mixture of bulk and surface currents. The
relative contribution of bulk and surface current is identified by measuring
the current as a function of temperature (IT scan), as shown in figure 3. This
method is described in detail in ref. [5].
Figure 3: The current versus temperature for two VELO sensors operated at the
nominal $150\rm{\,V}$ bias. Before irradiation there is a small exponential
component for the sensor shown in figure a), while that shown in figure b) is
dominated by the non-temperature dependent surface current contribution. After
irradiation, a large exponential component is seen for both sensors. These
sensors are now bulk current dominated and the surface current has largely
been annealed.
### 2.2 The effective band gap
In the temperature range of interest, the bulk current is expected to scale
according to,
$I(T)\propto T^{2}\exp\left(-\frac{E_{g}}{2kT}\right),$ (1)
where $T$ is the temperature in Kelvin and $k$ is the Boltzmann constant
($8.62\times 10^{-5}\,\rm{eV\,K^{-1}}$). The constant $E_{g}$ is related to the
bandgap energy and an effective value of $1.21\,\rm{eV}$ is assumed for the
temperature range of interest [6]. Using this function the sensor currents can
be normalised to a common temperature. In addition, the current variation as a
function of temperature can be fitted to measure $E_{g}$ in the VELO sensors.
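As a concrete illustration of eq. 1, the sketch below normalises a current measured at the operating temperature to a reference temperature; it is a minimal example with placeholder numbers, not part of the analysis code.

```python
import math

K_BOLTZMANN = 8.62e-5   # Boltzmann constant in eV/K
E_G_EFF = 1.21          # assumed effective band gap in eV [6]

def normalise_current(i_meas, t_meas_c, t_ref_c=0.0):
    """Scale a bulk current from t_meas_c to t_ref_c (both in Celsius)
    using I(T) ~ T^2 exp(-Eg / 2kT), i.e. eq. 1."""
    t_meas = t_meas_c + 273.15
    t_ref = t_ref_c + 273.15
    ratio = (t_ref / t_meas) ** 2 * math.exp(
        -E_G_EFF / (2.0 * K_BOLTZMANN) * (1.0 / t_ref - 1.0 / t_meas))
    return i_meas * ratio

# Example: a current measured at -7 C, expressed at 0 C (placeholder value)
print(normalise_current(100.0, -7.0, 0.0))
```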
A summary of the results obtained from several IT scans is presented in table
2. The statistical uncertainty quoted is the width of a Gaussian fitted to the
distribution of the measured values from all sensors. The largest source of
systematic uncertainty is related to the accuracy with which sensor
temperatures are known, and the choice of the temperature fitting range [5].
Table 2: The effective band gap, $E_{g}$, measured following various amounts of delivered luminosity. The first uncertainty is statistical and the second is systematic.

Delivered luminosity | Bias voltage | $E_{g}$
---|---|---
$\rm{[\,fb^{-1}\,]}$ | [ V ] | [ eV ]
$0.48$ | $100$ | $1.17\pm 0.07\pm 0.04$
$0.48$ | $150$ | $1.18\pm 0.05\pm 0.04$
$0.82$ | $150$ | $1.14\pm 0.06\pm 0.04$
$1.20$ | $150$ | $1.15\pm 0.04\pm 0.04$
The weighted average of the measured values is $E_{g}=1.16\pm 0.03\pm
0.04\,\rm{eV}$, which is statistically compatible with the expected value of
$1.21\,\rm{eV}$ from the literature [6]. Recent studies [6] have shown that
discrepancies can occur due to sensor self-heating, in which case $E_{g}$ is
measured to be systematically high. In addition, dependencies of $E_{g}$ on
the sensor bias voltage have been observed. However, the VELO sensors used for
these measurements were cooled and sufficiently biased at $150$ V, such that
these effects should not significantly influence the result. This is supported
by the consistency of the cross-check measurement made at $100$ V.
### 2.3 Fluence determination
With a good understanding of the annealing conditions, changes in current can
be related to the particle fluence incident on a sensor. The expected change
in leakage current at room temperature is given by the relation,
$\Delta I=\alpha\phi V_{Si},$ (2)
where $\alpha$ is the annealing parameter in units $\rm{A}\,\rm{cm}^{-1}$,
$\phi$ is the fluence in units of number of particles per $\rm{cm^{2}}$, and
$V_{Si}$ is the silicon volume in $\rm{cm^{3}}$. The annealing parameter is a
logarithmic function of time, and also depends on temperature. It has been
shown to follow an Arrhenius relation,
$\alpha\propto\exp\left(-\frac{E_{a}}{kT}\right),$ (3)
where $E_{a}$ is the activation energy for which a value of $1.33\pm
0.07\,\rm{eV}$ has been derived from a fit [7]. In order to estimate the
appropriate value of $\alpha$ for our data, we proceed as follows. The sensor
temperatures are recorded for each minute of operation and corrected to an
equivalent time at $21\,^{\circ}\rm{C}$. The delivered luminosity,
$\mathcal{L}$, is folded with this equivalent time to produce an effective
value of $\alpha\phi$ to be used in eq. 2. This procedure [5] yields a value
for $\alpha\mathcal{L}$ of $4.8\times 10^{-17}\,\rm{A\,fb^{-1}\,cm^{-1}}$,
corresponding to an effective value for $\alpha$ of $6\times
10^{-17}\,\rm{A\,cm^{-1}}$.
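To make the use of eqs. 2 and 3 explicit, the sketch below predicts a current increase from an effective annealing parameter; the fluence and silicon volume are placeholder values chosen for illustration, and the logarithmic time dependence of $\alpha$ and the equivalent-time correction described above are not reproduced.

```python
import math

K_BOLTZMANN = 8.62e-5  # Boltzmann constant in eV/K
E_A = 1.33             # activation energy in eV, from ref. [7]

def alpha_scaled(alpha_ref, t_ref_c, t_c):
    """Arrhenius scaling of the annealing parameter (eq. 3) from a
    reference temperature to another temperature, both in Celsius."""
    t_ref, t = t_ref_c + 273.15, t_c + 273.15
    return alpha_ref * math.exp(-E_A / K_BOLTZMANN * (1.0 / t - 1.0 / t_ref))

def delta_current(alpha, fluence, volume_cm3):
    """Leakage-current increase from eq. 2: dI = alpha * phi * V_Si."""
    return alpha * fluence * volume_cm3

# Illustrative numbers only (hypothetical sensor region):
alpha_eff = 6e-17       # A/cm, effective value quoted above
fluence_per_fb = 5e12   # placeholder: 1 MeV n_eq cm^-2 per fb^-1
volume_cm3 = 0.03       # placeholder: silicon volume in cm^3
print(delta_current(alpha_eff, fluence_per_fb, volume_cm3), "A per fb^-1")
```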
Fluences simulated with GEANT4 in the LHCb detector are folded with particle
damage factors using the NIEL scaling hypothesis [8]. The damage caused by
high energy particles in the bulk material is parameterised as proportional to
a displacement damage cross section, $D$. A displacement damage cross section
of $95$ MeV mb for $1$ MeV neutrons, $D_{n}(1\,\rm{MeV})$, is assumed and each
particle, $i$, is assigned an energy-dependent hardness factor, $k_{i}$,
$k_{i}(E)=D_{i}(E)/D_{n}(1\,\rm{MeV}),$ (4)
which can be used to estimate the damage in the material. The total damage is
expressed as a multiple of the expected damage of a $1$ MeV neutron (referred
to as $1$ MeV neutron equivalent, or $1$ MeV $\rm{n_{eq}}$). This technique
has been shown to be highly effective for describing the evolution of the
leakage current. For this analysis, every particle in the simulation that is
incident on a VELO sensor is assigned a displacement damage cross-section, or
cross-section per silicon atom, using as reference a set of tabulated values
from ref. [9].
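A minimal sketch of this fluence-weighting step is given below; the damage functions used in the example are invented for illustration and do not reproduce the tabulated values of ref. [9].

```python
def one_mev_neq_fluence(particles, damage_tables, d_n_1mev=95.0):
    """Fold simulated particle fluences with NIEL hardness factors (eq. 4).

    particles: iterable of (species, energy_MeV, fluence_cm2) tuples.
    damage_tables: dict mapping species to a function D(E) returning the
        displacement damage cross section in MeV mb.
    Returns the 1 MeV neutron equivalent fluence in cm^-2.
    """
    total = 0.0
    for species, energy, fluence in particles:
        d = damage_tables[species](energy)   # D_i(E) in MeV mb
        k = d / d_n_1mev                     # hardness factor k_i(E)
        total += k * fluence
    return total

# Illustrative use with invented damage functions:
tables = {"proton": lambda e: 100.0, "neutron": lambda e: 95.0}
particles = [("proton", 1000.0, 1.0e12), ("neutron", 1.0, 5.0e11)]
print(one_mev_neq_fluence(particles, tables))
```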
The fluence varies with the geometric location of the sensors in the VELO, as
shown for simulated events in figure 4. It decreases with radial distance from
the beam with an approximate $1/r^{1.75}$ dependence. The position along the
beam-pipe ($z$-direction) and path length of the particle in the silicon
sensor were both found to significantly affect the fluence incident on a
sensor. The predicted fluence is assigned an uncertainty of
$\mathord{\sim}8\%$ which is dominated by uncertainties due to particles with
no available displacement damage data. The average measured current increase
for the sensors at the operational temperature of approximately
$-7\,^{\circ}\rm{C}$ (and including several annealing periods) is
$\mathord{\sim}18\,\upmu$A per $\rm{fb^{-1}}$.
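The radial dependence quoted above can be extracted with a simple power-law fit of the form $Ar^{k}$, as used for figure 4; the sketch below fits in log-log space and uses toy numbers rather than the simulated fluences.

```python
import numpy as np

def fit_power_law(radius_mm, fluence):
    """Fit fluence = A * r^k by a straight-line fit in log-log space."""
    k, log_a = np.polyfit(np.log(radius_mm), np.log(fluence), 1)
    return np.exp(log_a), k

# Toy points following roughly 1/r^1.75 (illustrative only):
r = np.array([8.0, 10.0, 15.0, 25.0, 40.0])
phi = 5.0e13 / r ** 1.75
a, k = fit_power_law(r, phi)
print(f"A = {a:.3g}, k = {k:.2f}")   # k comes out near -1.75
```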
Figure 4: a) The fluence from $\rm{1\,fb^{-1}}$ of integrated luminosity
versus radius for two VELO sensors, as seen in simulated proton-proton
collisions at a $7$ TeV centre-of-mass energy. b) Top: in each sensor, the
fluence as a function of radius is fitted with the function $\rm{Ar^{k}}$. The
fitted exponent, ${\rm k}$, is shown as a function of the sensor
$z$-coordinate, where $z$ is the distance along the beam-axis between the
sensor and the interaction region. The distribution of the fluence across the
sensor is seen to become flatter with increasing distance from the interaction
region. b) Bottom: the fluence at the innermost radius of the sensor against
the sensor $z$-coordinate.
### 2.4 Predicted currents
With the relationship between luminosity and damaging fluence established, the
expected change in leakage current due to radiation damage can be predicted.
The integrated luminosity is taken from the LHCb online measurement.
Predictions are shown to agree with the measured currents, as shown in figure
5. The predictions have an associated uncertainty of approximately $10\%$,
estimated by adding the uncertainties for the integrated luminosity ($5\%$),
the annealing factor ($3\%$) and the damaging fluence prediction ($8\%$) in
quadrature. The predicted leakage current is on average within $5\%$ of the
measured current.
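For reference, the quoted $10\%$ uncertainty follows from combining the individual contributions in quadrature,

$\sqrt{(5\%)^{2}+(3\%)^{2}+(8\%)^{2}}\approx 9.9\%\approx 10\%.$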
Figure 5: The leakage current against sensor $z$-coordinate after
$\rm{1.20\,fb^{-1}}$ of integrated luminosity, normalised to $0\,^{\circ}{\rm
C}$. The data is in agreement with predictions, represented by the shaded
region. The two VELO halves are referred to as the A and C sides of the VELO.
## 3 Depletion voltage studies
The depletion voltage, defined as the reverse bias voltage required to fully
deplete a sensor, has been monitored as a function of the delivered
luminosity. For fluences delivered to the VELO within the first few years of
operation, the change in depletion voltage for an
$\rm{\it{n^{+}}}\mbox{-}on\mbox{-}{\it{n}}$ type sensor should be accurately
described by the Hamburg model [10]. The effective doping of the $n$-bulk
changes over time due to radiation-induced defects. Dominant mechanisms are
expected to be the inactivation of phosphorus dopants in combination with the
introduction of acceptors. This leads to an initial decrease in the depletion
voltage of the $\rm{\it{n^{+}}}\mbox{-}on\mbox{-}{\it{n}}$ sensors to a value
close to $\rm{0\,V}$. The $n$-type bulk inverts and becomes $p$-type, after
which further irradiation leads to an increase in depletion voltage.
Eventually, the bias voltage required to obtain sufficient charge from a
sensor will cause electrical breakdown in the silicon or exceed the $500$ V
hardware limit, thus limiting the useful lifetime of the silicon detector. For
an oxygenated $\rm{\it{n^{+}}}\mbox{-}on\mbox{-}{\it{p}}$ type sensor
irradiated with charged hadrons there are expected to be competing mechanisms,
with acceptor introduction partially compensated by initial oxygen-induced
acceptor removal [11, 12].
Following manufacture, the depletion voltage of each VELO sensor was measured
by studying the capacitance (C) as a function of the bias voltage (V) [13]. It is not
possible to implement this technique after VELO installation and so
alternative methods are used to extract information on the depletion voltage.
This section presents results from two such studies: Charge Collection
Efficiency and noise scan studies.
### 3.1 Charge Collection Efficiency
The amount of charge collected by an under-depleted silicon strip increases
with the bias voltage applied. When the sensor is fully depleted, any further
increase in bias voltage will not increase the amount of charge collected
(given a sufficient signal collection time). The relationship between the
Charge Collection Efficiency (CCE) and the applied bias voltage has been
exploited to measure a property of the sensor analogous to the depletion
voltage, referred to as the Effective Depletion Voltage (EDV).
#### 3.1.1 Effective Depletion Voltage determination
The nominal operational voltage of the VELO sensors is $\rm{150\,V}$. For the
CCE analysis, collision data is recorded with every fifth module operated at a
voltage ranging between $0$ and $\rm{150\,V}$. The remaining modules are
maintained at $\rm{150\,V}$. Sensors with variable voltage are referred to as
_test_ sensors. The test sensors are removed from the reconstruction
algorithms such that only hits from the $\rm{150\,V}$ operated sensors are
used to reconstruct particle tracks. A track is extrapolated to a coordinate
on the test sensor and the set of five strips nearest to this coordinate are
searched for deposited charge. This provides unbiased information on the
amount of charge deposited by the particle as a function of bias voltage. At
each bias voltage the pedestal-subtracted ADC distribution is fitted using a
Gaussian convoluted with a Landau function. This is used to determine the Most
Probable Value (MPV) of the ADC distribution. At large bias voltages the MPV
of the ADC distribution reaches a plateau. The EDV is defined as the voltage
at which the MPV of a sensor is equal to $80\%$ of the plateau ADC value, as
shown in figure 6. The threshold of $80\%$ was chosen as it gives the closest
agreement with depletion voltages determined from pre-irradiation CV
measurements [13]. For unirradiated sensors, the difference in the values
obtained using the CV method and the EDV method is less than $10\,\rm{V}$.
Differences between the depletion voltage and EDV are expected near to sensor
type-inversion. The depletion voltage is expected to decrease to a value of
approximately $0$ V, whereas the minimum EDV is dictated by the smallest
potential difference required to collect charge from the silicon strips, which
in turn depends on the shaping time of the electronics. When operated with a
$25\,$ns signal shaping time, the smallest EDV that can be measured in the
VELO sensors is approximately $20$ V. The EDV is therefore not an accurate
estimate of the depletion voltage below this value.
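A simplified sketch of the EDV extraction is shown below: it estimates the plateau from the highest bias points and interpolates to the $80\%$ level. It omits the Landau-Gaussian fit of the ADC spectra and uses toy numbers, so it only illustrates the final step of the procedure.

```python
import numpy as np

def effective_depletion_voltage(bias_v, mpv, plateau_fraction=0.80,
                                n_plateau_points=3):
    """Estimate the EDV from MPV-vs-bias-voltage points.

    The plateau is taken as the mean MPV of the highest few bias points;
    the EDV is the interpolated voltage at which the MPV first reaches
    plateau_fraction of that plateau."""
    bias_v = np.asarray(bias_v, dtype=float)
    mpv = np.asarray(mpv, dtype=float)
    order = np.argsort(bias_v)
    bias_v, mpv = bias_v[order], mpv[order]
    plateau = mpv[-n_plateau_points:].mean()
    threshold = plateau_fraction * plateau
    return float(np.interp(threshold, mpv, bias_v))

# Toy MPV curve rising to a plateau (illustrative, not VELO data):
v = [10, 30, 50, 70, 90, 110, 130, 150]
adc = [5, 14, 25, 33, 37, 38, 38, 38]
print(effective_depletion_voltage(v, adc))   # ~63 V for this toy curve
```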
Figure 6: a) The pedestal subtracted ADC distributions for an R-type sensor at
three example bias voltages. b) The MPV of the fit to the ADC distribution vs.
bias voltage. The dashed lines represent the ADC that is $80\%$ of the plateau
value, and the corresponding EDV.
#### 3.1.2 Bulk radiation damage
Between April $2010$ and October $2011$ five dedicated CCE scans were taken,
corresponding to delivered luminosities of $0$, $0.04$, $0.43$, $0.80$ and
$1.22\,\rm{fb^{-1}}$. As the fluence delivered to the sensors varies
significantly with sensor radius, each sensor is divided into $5$ radial
regions such that the fluence does not change by more than a factor of two
across a region. The change in EDV with irradiation for a particular
$\rm{\it{n^{+}}}\mbox{-}on\mbox{-}{\it{n}}$ type sensor is shown in figure 7.
Initially the EDV is found to decrease with fluence across all radial regions,
as predicted by the Hamburg model. The rate of decrease is greater in the
inner radial regions of the sensor, consistent with expectations that these
regions are exposed to higher fluence. The innermost region undergoes an
increase in EDV between $0.80$ and $1.22\,\rm{fb^{-1}}$ of delivered
luminosity, indicating that this part of the sensor has undergone type
inversion. The $\rm{\it{n^{+}}}\mbox{-}on\mbox{-}{\it{p}}$ type sensors
exhibit a decrease in EDV with initial fluence, as shown in figure 7. This is
understood to be caused by oxygen induced removal of boron interstitial
acceptor sites, an effect that has been previously observed [11, 12].
Figure 7: a) The EDV against sensor radius for an
$\rm{\it{n^{+}}}\mbox{-}on\mbox{-}{\it{n}}$ type sensor for each of the CCE
scans. The dashed line shows the mean EDV across all radius regions prior to
sensor irradiation, where some $0\,\rm{fb^{-1}}$ data points are not present
due to low statistics. b) A similar plot for the
$\rm{\it{n^{+}}}\mbox{-}on\mbox{-}{\it{p}}$, $\phi$-type sensor. The minimum
EDV is $\rm{\mathord{\sim}40\,V}$, which is significantly higher than the
minimum at $\rm{\mathord{\sim}20\,V}$ observed for the
$\rm{\it{n^{+}}}\mbox{-}on\mbox{-}{\it{n}}$ type sensor.
The global change in EDV is determined by combining the data from many of the
VELO sensors with the predicted fluence (see section 2.3), as shown in figure
8. Sensors are divided into categories based on their initial EDVs. The
irradiation-induced change in the depletion voltage of
$\rm{\it{n^{+}}}\mbox{-}on\mbox{-}{\it{n}}$ type sensors is modelled as a
function of time, temperature and fluence by the Hamburg model. It has three
components: a short term annealing component, a stable damage component and a
reverse annealing component. Taking into account the LHCb luminosity
measurements and VELO sensor temperature readings, the Hamburg model
predictions can be compared to data, as shown by the overlaid curves in figure
8. Good agreement is found for low fluences, and for higher fluences after
type inversion. It is assumed that the sensors type invert at a fluence near
to the EDV minimum. For all $\rm{\it{n^{+}}}\mbox{-}on\mbox{-}{\it{n}}$ type
sensors this occurs at approximately the same fluence of $\rm{(10-15)\times
10^{12}\,1\,MeV\,n_{eq}}$. The behaviour after inversion is found to be
independent of the initial EDV of the sensor, with an approximately linear
increase in EDV with further fluence. A linear fit to the data gives a voltage
increase with fluence of $\rm{(1.35\pm 0.25)\times
10^{-12}\,V/1\,MeV\,n_{eq}}$. For the
$\rm{\it{n^{+}}}\mbox{-}on\mbox{-}{\it{p}}$ type, the initial decrease in EDV
occurred up to a fluence of approximately $\rm{2\times
10^{12}\,1\,MeV\,n_{eq}}$. After this the EDV has increased with further
fluence. The rate of increase is measured to be $\rm{(1.43\pm 0.16)\,\times
10^{-12}\,V/1\,MeV\,n_{eq}}$, similar to that of the type inverted
$\rm{\it{n^{+}}}\mbox{-}on\mbox{-}{\it{n}}$ type sensors.
Figure 8: The EDV against fluence for VELO sensors of various initial EDV. The
EDV from data is compared to depletion voltages predicted by the Hamburg
model, with good agreement observed prior to, and after sensor type-inversion.
The EDV of the $\rm{\it{n^{+}}}\mbox{-}on\mbox{-}{\it{p}}$ type sensors begins
to increase having received significantly less fluence than the
$\rm{\it{n^{+}}}\mbox{-}on\mbox{-}{\it{n}}$ type sensors. If the comparable
rate of EDV increase is maintained with further fluence then the
$\rm{\it{n^{+}}}\mbox{-}on\mbox{-}{\it{p}}$ type sensors will reach an EDV of
$500\,\rm{V}$, the hardware limit of the VELO system, after receiving
approximately $\rm{35\times 10^{12}\,1\,MeV\,n_{eq}}$ less fluence than the
$\rm{\it{n^{+}}}\mbox{-}on\mbox{-}{\it{n}}$ type sensors. It is expected that
the $\rm{\it{n^{+}}}\mbox{-}on\mbox{-}{\it{n}}$ sensors will reach the
$500\,\rm{V}$ limit following a fluence of approximately $\rm{380\times
10^{12}\,1\,MeV\,n_{eq}}$.
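The quoted lifetime estimate can be checked directly against the fitted slope: extrapolating linearly from the fluence at type inversion (and neglecting the small EDV at the minimum),

$\Phi(500\,{\rm V})\approx\Phi_{\rm inv}+\frac{500\,{\rm V}}{1.35\times 10^{-12}\,{\rm V}/(1\,{\rm MeV\,n_{eq}})}\approx(10\mbox{-}15)\times 10^{12}+370\times 10^{12}\approx 380\times 10^{12}\,1\,{\rm MeV\,n_{eq}}.$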
The amount of charge collected is expected to change with fluence due to
radiation induced changes to the silicon. For $\phi$-type sensors the MPV has
decreased by approximately $4\%$ in the most irradiated regions, having
received a fluence of $\rm{40\times 10^{12}\,1\,MeV\,n_{eq}}$. An even larger
reduction of approximately $8\%$ is found in the inner regions of the R-type
sensors, having received a comparable fluence. This is due to a charge loss
mechanism related to the second metal layer of the R-type sensors, which is
described in detail in section 4. The outer regions of the sensor are most
significantly affected, with decreases of approximately $12\%$ observed
following a fluence of just $\rm{2\times 10^{12}\,1\,MeV\,n_{eq}}$.
### 3.2 Noise scans
The CCE scan data described in section 3.1 requires proton beams, and so is
collected at the expense of physics data. A second method has been developed
to monitor radiation damage, using the relationship between the intrinsic
electronic noise of the pre-amplifier and the capacitance of the sensor. Data
scans for this study can be collected regularly as proton collisions are not
required.
In undepleted silicon, several sources of input capacitance are identified,
the most dominant of which is the inter-strip impedance. For
$\rm{\it{n^{+}}}\mbox{-}on\mbox{-}{\it{n}}$ sensors before type inversion, the
depletion region grows with increasing voltage from the backplane (the
opposite side to the strips). When the sensor is fully depleted the space-
charge reaches the strips and the inter-strip resistance increases by several
orders of magnitude, resulting in a decrease in sensor noise [14, 15]. For
$\rm{\it{n^{+}}}\mbox{-}on\mbox{-}{\it{n}}$ type sensors following type
inversion and $\rm{\it{n^{+}}}\mbox{-}on\mbox{-}{\it{p}}$ type sensors, the
depletion region grows from the strip side of the silicon. In this situation
the strips are immediately isolated at the application of a bias voltage and
the relationship between noise and voltage cannot be exploited to extract
information related to the depletion voltage.
The intrinsic noise in VELO sensors is determined by subtracting the mean ADC
value (or _pedestal_) and a common-mode noise term. Figure 9 shows the inverse
of the intrinsic noise as a function of voltage for an
$\rm{\it{n^{+}}}\mbox{-}on\mbox{-}{\it{n}}$ and
$\rm{\it{n^{+}}}\mbox{-}on\mbox{-}{\it{p}}$ type sensor.
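A minimal sketch of the pedestal and common-mode subtraction is given below; the real VELO processing is more elaborate, so this only illustrates the idea on simulated pure-noise data.

```python
import numpy as np

def intrinsic_noise(raw_adc):
    """Per-strip intrinsic noise estimate from raw ADC data.

    raw_adc: array of shape (n_events, n_strips). The per-strip pedestal
    (mean ADC) and a per-event common-mode offset are subtracted before
    taking the per-strip RMS."""
    raw_adc = np.asarray(raw_adc, dtype=float)
    pedestal = raw_adc.mean(axis=0)                     # per-strip pedestal
    ped_sub = raw_adc - pedestal
    common_mode = ped_sub.mean(axis=1, keepdims=True)   # per-event offset
    return (ped_sub - common_mode).std(axis=0)

# Illustrative use with simulated data:
rng = np.random.default_rng(0)
data = 512.0 + rng.normal(0.0, 2.0, size=(1000, 128))
print(intrinsic_noise(data).mean())   # ~2 ADC counts
```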
Figure 9: The inverse of the sensor noise against bias voltage for a
particular $\rm{\it{n^{+}}}\mbox{-}on\mbox{-}{\it{n}}$ and
$\rm{\it{n^{+}}}\mbox{-}on\mbox{-}{\it{p}}$ type sensor, for two values of
integrated luminosities. The C-V scan measured initial depletion voltages are
shown by the dashed lines.
For the $\rm{\it{n^{+}}}\mbox{-}on\mbox{-}{\it{p}}$ type sensor before and after
irradiation, and the $\rm{\it{n^{+}}}\mbox{-}on\mbox{-}{\it{n}}$ type sensor after irradiation
(having type inverted), the distribution is flat, so little information
related to the sensor depletion voltages can be extracted. For the
$\rm{\it{n^{+}}}\mbox{-}on\mbox{-}{\it{n}}$ type sensor prior to type
inversion, an increase in voltage results in a decrease in noise until a
plateau is reached when the sensor is fully depleted.
The noise scan data can be used to identify whether an
$\rm{\it{n^{+}}}\mbox{-}on\mbox{-}{\it{n}}$ type sensor has undergone type
inversion. Only R-type sensors are investigated as the strip orientation
allows the identification of strips that have been subject to a specific
fluence. Following a delivered luminosity of approximately
$0.80\,\rm{fb^{-1}}$, $40$ of the sensors are identified as having type-
inverted in the first radial region ($8\mathord{-}11{\,\rm mm}$), while in the
second region ($11\mathord{-}16{\,\rm mm}$) $21$ sensors are identified.
Similar information can be extracted from the CCE data, with a sensor region
defined as type-inverted when the measured EDV has reached a minimum and
subsequently begun to increase. Following the same luminosity, the CCE method
identified $21$ and $5$ type-inverted sensors in the first and second radial
regions, considerably fewer than the noise method. This discrepancy is
understood by examination of figure 8, in which the minimum of the Hamburg
model prediction and the point at which the EDVs begin to increase are
separated by a fluence of approximately $10\times
10^{12}\,1\,\rm{MeV\,n_{eq}}$. The noise scan method is not subject to the
same fluence lag. Following a delivered luminosity of $1.2\,\rm{fb^{-1}}$, the
CCE method identifies $39$ and $21$ sensors in the two radial regions. This is
in good agreement with the noise method, with the same $39$ and $21$ sensors
identified by each method.
## 4 Charge loss to the second metal layer
All physics analyses at LHCb rely on efficient track reconstruction using
clusters from the VELO sensors. A cluster is defined as one or several
adjacent strips with charge above a particular threshold. The data samples
described in section 3.1 are also used to measure the Cluster Finding
Efficiency (CFE), by looking for the presence of a cluster at the track
intercept on the test sensor. Before irradiation the mean CFE of the VELO
sensors was greater than $99\%$ [16]. After irradiation the CFE in many
sensors decreased significantly, as shown in figure 10. The inefficiency is
particularly prevalent at large sensor radii and for high bias voltages.
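The CFE measurement described above amounts to a simple pass/total ratio in bins of the quantity of interest; a minimal sketch in radial bins is shown below, with all inputs assumed to come from the track extrapolation described in section 3.1.1.

```python
import numpy as np

def cfe_vs_radius(radii_mm, cluster_found, bin_edges_mm):
    """Cluster Finding Efficiency in radial bins.

    radii_mm: radius of each track intercept on the test sensor.
    cluster_found: boolean per intercept, True if a cluster was matched.
    Returns (bin centres, CFE per bin)."""
    radii_mm = np.asarray(radii_mm, dtype=float)
    cluster_found = np.asarray(cluster_found, dtype=bool)
    edges = np.asarray(bin_edges_mm, dtype=float)
    total, _ = np.histogram(radii_mm, bins=edges)
    passed, _ = np.histogram(radii_mm[cluster_found], bins=edges)
    centres = 0.5 * (edges[:-1] + edges[1:])
    with np.errstate(invalid="ignore", divide="ignore"):
        cfe = passed / total
    return centres, cfe
```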
Figure 10: The CFE for an R-type $\rm{\it{n^{+}}}\mbox{-}on\mbox{-}{\it{n}}$
sensor as a function of sensor radius for a) different amounts of delivered
luminosity and b) several different bias voltages.
The decrease in CFE does not appear to be proportional to the delivered
luminosity, but instead exhibits a rapid drop between the $0.04$ and
$0.43\,\rm{fb^{-1}}$ data scans.
To determine the source of this CFE decrease, a large sample of regular LHCb
physics data has been used to measure the CFE for small spatial regions on a
sensor at the nominal $\rm{150\,V}$ bias. The result of this is shown in
figure 11, displayed beside a diagram illustrating the layout of the second
metal layer readout lines. There is a clear correspondence between the two
figures, with high CFE measured in regions that are devoid of second metal
layer lines. The relative orientation of the strips and routing lines in
R-type sensors is shown in figure 12. In addition, a schematic cross-section
of an R-type sensor is shown in figure 13. The routing lines from inner strips
are seen to pass approximately perpendicularly over the outer strip implants.
Figure 11: a) The layout of the second metal layer routing lines on an R-type
sensor. The darker regions represent the presence of routing lines, and the
lighter regions their absence. b) The CFE shown in small spatial regions of an
R-type sensor. Figure 12: A photograph of the innermost region of an R-type
sensor. Strips run from the bottom-left to the top-right. Each strip is
connected to a routing line orientated perpendicularly to the strip. Figure
13: A schematic cross-section of a portion of an R-type sensor, showing the
relative position of the two metal layers used to carry the readout signals in
$\rm{\it{n^{+}}}\mbox{-}on\mbox{-}{\it{n}}$ type sensors. The $n^{+}$ implants
and strips (into the page) run perpendicularly to the routing lines (left to
right). For clarity the routing line of just one strip is shown.
Using the precision tracking of the VELO, it is possible to investigate the
CFE loss as a function of the distance between a track intercept with a
sensor, and the nearest strip and routing line. This is shown in figure 14.
The CFE is improved for track intercepts that are near to the strip implants.
Conversely, the CFE is reduced when a track intercept is far from a strip and
near to a routing line. Similar effects have been observed in other
experiments [17, 18].
Figure 14: The CFE as a function of the distance between the particle
intercept and the nearest routing line, for several bins of the distance
between the particle intercept and closest strip edge.
The source of the CFE loss is hypothesised in terms of charge induction on the
second metal layer. Prior to irradiation, ionised electrons will drift along
the field lines, most of which terminate at the $n^{+}$ implants. Hence the
majority of the signal will be induced on the implants, which are strongly
capacitively coupled to the readout strips. The drifting charge is expected to
be collected well within the $\mathord{\sim}20$ ns readout period of the
electronics, and no signal is expected on neighbouring electrodes (with the
exception of capacitive coupling and cross-talk effects, which are measured to
be low). However, irradiation may cause modifications to the field line
structure, such that not all field lines terminate on the implants. In
addition, there may be charge trapping effects which delay the drift of
charge, resulting in charge sampling before the electrons have reached the
implants. In both of these situations there will be a net induced charge on
nearby electrodes, such as the second metal layer routing lines. In figure 10
the CFE was seen to worsen with increasing bias voltage. This appears to
disfavour the contribution due to trapping, as an increase in bias voltage
should result in faster collection times. However, the bias voltage may also
affect the field-line structure. In reality, it is likely that the charge loss
to the second metal layer is due to several competing mechanisms.
The CFE loss also exhibits a significant radial dependence, as was shown by
figure 10. This can be understood by considering two competing mechanisms. The
implant strip width and the fractional area covered by the strips increases
with radius, resulting in reduced charge loss, due to greater strip shielding.
However, the fractional area covered by the second metal layer also increases
with radius, due to the greater density of lines, increasing the amount of
pickup. The latter effect is dominant, hence the overall charge loss is
greater at large sensor radii.
In addition to lowering the clustering efficiency, charge induced on a routing
line can introduce a noise cluster. The cluster ADC distribution from R-type
sensors has a peak associated to these low ADC noise clusters that has grown
with fluence, as shown in figure 15. These noise clusters are predominantly
single strip clusters located at small radius regions of R-type sensors. The
fraction of the induced noise clusters increases when tracks traverse a sensor
near to a routing line and far from a strip, as shown in figure 15.
Figure 15: For R-type sensors: a) The ADC spectrum of all clusters seen for
three different integrated luminosities. The limit at $10$ ADC counts is
imposed by the clustering thresholds. b) The fraction of reconstructed
clusters that are induced on routing lines as a function of the distance to
the nearest routing line and strip. This is determined from the number of
track intercepts for which the inner strip associated to the nearest routing
line has a $1$ strip cluster with less than $35$ ADC counts.
The CFE decrease is not observed in $\phi$-type sensors as the routing lines
from inner strips were intentionally placed directly above the outer strips to
minimise pick-up. This is made possible by the strip orientation of the
$\phi$-type sensors. The CFE loss could be partially recovered by lowering the cluster
reconstruction thresholds. However, this comes at the expense of a worse
signal-to-background ratio, which leads to higher rates of fake tracks reconstructed
from noise-induced clusters.
The R-type $\rm{\it{n^{+}}}\mbox{-}on\mbox{-}{\it{p}}$ and
$\rm{\it{n^{+}}}\mbox{-}on\mbox{-}{\it{n}}$ sensors have a similar CFE
dependence on the strip and routing line distance, as shown in figure 16.
Figure 16 shows the MPV of the collected charge distribution as a function of
bias voltage for an $\rm{\it{n^{+}}}\mbox{-}on\mbox{-}{\it{n}}$ and an
$\rm{\it{n^{+}}}\mbox{-}on\mbox{-}{\it{p}}$ type sensor. At $150\rm{\,V}$, the
MPV of the $\rm{\it{n^{+}}}\mbox{-}on\mbox{-}{\it{n}}$ and
$\rm{\it{n^{+}}}\mbox{-}on\mbox{-}{\it{p}}$ are approximately equal, both
before and after irradiation. For the
$\rm{\it{n^{+}}}\mbox{-}on\mbox{-}{\it{n}}$ type sensor post irradiation, the
MPV reaches a maximum at around $60\rm{\,V}$ after which it is observed to
decrease with increasing bias voltage. This decrease in MPV leads to a reduced
CFE and is associated with the second metal layer effect. Therefore less
charge is lost to the second metal layer when operating the
$\rm{\it{n^{+}}}\mbox{-}on\mbox{-}{\it{n}}$ sensor at a lower than nominal
voltage.
Figure 16: a) The CFE as a function of the distance between the particle
intercept and the nearest routing line and strip edge, compared for an
$\rm{\it{n^{+}}}\mbox{-}on\mbox{-}{\it{n}}$ and
$\rm{\it{n^{+}}}\mbox{-}on\mbox{-}{\it{p}}$ sensor. b) The MPV vs. bias
voltage for an $\rm{\it{n^{+}}}\mbox{-}on\mbox{-}{\it{n}}$ and
$\rm{\it{n^{+}}}\mbox{-}on\mbox{-}{\it{p}}$ sensor.
The $\rm{\it{n^{+}}}\mbox{-}on\mbox{-}{\it{p}}$ type sensor does not exhibit
the same dependence of the charge collection loss (and resulting CFE decrease) on
bias voltage. This may be due to the depletion region in the
$\rm{\it{n^{+}}}\mbox{-}on\mbox{-}{\it{p}}$ type sensor growing from the strip
side of the silicon instead of from the sensor backplane. This is supported by
the observation that after type inversion, charge loss in
$\rm{\it{n^{+}}}\mbox{-}on\mbox{-}{\it{n}}$ type sensors no longer depends on
the bias voltage, as shown in figure 17.
Figure 17: The MPV against bias voltage for an R-type sensor at three values
of delivered luminosity. The sensor region has been identified as having type
inverted in the $1.20\,\rm{fb^{-1}}$ scan, in which the MPV dependence on
voltage is no longer present.
## 5 Summary
The effects of radiation damage have been observed in all of the LHCb VELO
sensors. At the operational sensor temperature of approximately
$-7\,^{\circ}\rm{C}$, the average rate of sensor current increase is measured
to be $18\,\upmu\rm{A}$ per $\rm{fb^{-1}}$, in agreement with expectations.
The silicon effective bandgap has been determined using current versus
temperature scan data collected at various levels of radiation, with an
average value of $E_{g}=1.16\pm 0.03\pm 0.04\,\rm{eV}$ found.
Analysis of the Charge Collection Efficiency has proven an effective method
for tracking the evolution of the sensor depletion voltages with fluence.
Measurements of the Effective Depletion Voltage after sensor type inversion
have been shown to agree well with the Hamburg model predictions. A method
relating the sensor noise dependence on bias voltage was used to identify
type-inverted sensors, and was found to be in good agreement with the Charge
Collection Efficiency method.
A significant decrease in the Cluster Finding Efficiency in R-type sensors due
to the second metal layer has been observed following a relatively small
amount of particle fluence. In the worst affected sensor regions the Cluster
Finding Efficiency decreased by over $5\%$. Despite this relatively large
localised inefficiency, studies of the VELO tracking efficiencies show no
degradation associated to this effect, within the errors of $\pm 0.3\%$. For
the $\rm{\it{n^{+}}}\mbox{-}on\mbox{-}{\it{n}}$ type sensors before type
inversion the magnitude of the charge loss is found to increase with sensor
bias voltage. For type-inverted $\rm{\it{n^{+}}}\mbox{-}on\mbox{-}{\it{n}}$
type sensors and $\rm{\it{n^{+}}}\mbox{-}on\mbox{-}{\it{p}}$ type sensors, a
voltage dependence is not observed. Radiation induced modifications to the
field line structure and charge trapping effects are thought to be possible
sources of this charge loss effect.
The two $\rm{\it{n^{+}}}\mbox{-}on\mbox{-}{\it{p}}$ sensors have been studied
in detail, providing valuable information for upgraded detector designs. If
the Effective Depletion Voltage were to continue increasing at the currently
observed rate with further irradiation, then the
$\rm{\it{n^{+}}}\mbox{-}on\mbox{-}{\it{p}}$ sensors would reach the $500$ V hardware
limit having received approximately $\rm{35\times 10^{12}\,1\,MeV\,n_{eq}}$
less fluence than an equivalent $\rm{\it{n^{+}}}\mbox{-}on\mbox{-}{\it{n}}$
type sensor. This corresponds to approximately $1\,\rm{fb}^{-1}$ of integrated
luminosity in the highest particle fluence region of the VELO.
## Acknowledgements
We express our gratitude to our colleagues in the CERN accelerator departments
for the excellent performance of the LHC. We thank the technical and
administrative staff at the LHCb institutes. We acknowledge support from CERN
and from the national agencies: CAPES, CNPq, FAPERJ and FINEP (Brazil); NSFC
(China); CNRS/IN2P3 and Region Auvergne (France); BMBF, DFG, HGF and MPG
(Germany); SFI (Ireland); INFN (Italy); FOM and NWO (The Netherlands); SCSR
(Poland); ANCS/IFA (Romania); MinES, Rosatom, RFBR and NRC “Kurchatov
Institute” (Russia); MinECo, XuntaGal and GENCAT (Spain); SNSF and SER
(Switzerland); NAS Ukraine (Ukraine); STFC (United Kingdom); NSF (USA). We
also acknowledge the support received from the ERC under FP7. The Tier1
computing centres are supported by IN2P3 (France), KIT and BMBF (Germany),
INFN (Italy), NWO and SURF (The Netherlands), PIC (Spain), GridPP (United
Kingdom). We are thankful for the computing resources put at our disposal by
Yandex LLC (Russia), as well as to the communities behind the multiple open
source software packages that we depend on. A special acknowledgement goes to
all our LHCb collaborators who over the years have contributed to obtain the
results presented in this paper.
## References
* [1] LHCb collaboration, A. A. Alves Jr. et al., The LHCb detector at the LHC, JINST 3 (2008) S08005
* [2] M. Moll, Radiation Damage in Silicon Particle Detectors - microscopic defects and macroscopic properties, PhD thesis, Fachbereich Physik der Universität Hamburg, 1999
* [3] Y. Wang, A. Neugroschel, and C. T. Sah, Temperature Dependence of Surface Recombination Current in MOS Transistors, IEEE Trans. Nucl. Sci. 48 (2001) 2095
* [4] A. Gureja et al., Use of IV (current vs voltage) scans to track radiation damage in the LHCb VELO, LHCb-PUB-2011-020
* [5] A. Hickling et al., Use of IT (current vs temperature) scans to study radiation damage in the LHCb VELO, LHCb-PUB-2011-021
* [6] A. Chilingarov, Generation current temperature scaling, Tech. Rep. PH-EP-Tech-Note-2013-001, CERN, Geneva, Jan, 2013
* [7] E. Fretwurst et al., Reverse annealing of the effective impurity concentration and long term operational scenario for silicon detectors in future collider experiments, Nucl. Instrum. Meth. A342 (1994), no. 1 119
* [8] A. Vasilescu, The NIEL hypothesis applied to neutron spectra of irradiation facilities and in the ATLAS and CMS SCT, ROSE/TN/97-2, December, 1999
* [9] A. Vasilescu and G. Lindstroem, Displacement damage in silicon, on-line compilation, tech. rep., http://sesam.desy.de/members/gunnar/Si-dfuncs.html
* [10] R. Wunstorf et al., Results on radiation hardness of silicon detectors up to neutron fluences of $\rm{10^{15}~{}n/cm^{2}}$, Nucl. Instrum. Meth. A315 (1992), no. 1–3 149
* [11] F. Lemeilleur et al., Electrical properties and charge collection efficiency for neutron-irradiated p-type and n-type silicon detectors, Nuclear Physics B - Proceedings Supplements 32 (1993), no. 0 415
* [12] M. Lozano et al., Comparison of radiation hardness P-in-N, N-in-N, N-in-P Silicon Pad Detectors, IEEE Trans. Nucl. Sci. 52 (2005) 1468
* [13] P. R. Turner, VELO module production - sensor testing, LHCb-2007-072
* [14] The CMS Tracker Collaboration, C. Barth, Evolution of silicon sensor characteristics of the CMS tracker, Nucl. Instrum. Meth. A658 (2011), no. 1 6
* [15] The CMS Tracker Collaboration, C. Barth, Evolution of silicon sensor characteristics of the CMS silicon strip tracker, Nucl. Instrum. Meth. In Press (2012), no. 0
* [16] The LHCb VELO Group, Performance of the LHCb Vertex Locator, To be submitted to JINST (2012)
* [17] ATLAS Pixel Collaboration, L. Tommaso, Test beam results of ATLAS Pixel sensors, arXiv:hep-ex/0210045
* [18] T. Rohe et al., Position Dependence of Charge Collection in Prototype Sensors for the CMS Pixel Detector, IEEE Trans. Nucl. Sci. 51 (2004) 1150
1302.5344
# From data towards knowledge: Revealing the architecture of signaling systems
by unifying knowledge mining and data mining of systematic perturbation data
Songjian Lu${}^{\text{}1}$, Bo Jin${}^{\text{}1}$, Ashley
Cowart${}^{\text{}2}$, Xinghua Lu${}^{\text{}1,*}$
1\. Department of Biomedical Informatics, University of Pittsburgh,
Pittsburgh, PA 15232.
2\. Dept Biochemistry and Molecular Biology, Medical University of South
Carolina, Charleston, SC 29425.
$*$ Corresponding Author: Xinghua Lu, Department of Biomedical Informatics,
University of Pittsburgh, 5607 Baum Boulevard, Pittsburgh, PA 15232.
## Abstract
Genetic and pharmacological perturbation experiments, such as deleting a gene
and monitoring gene expression responses, are powerful tools for studying
cellular signal transduction pathways. However, it remains a challenge to
automatically derive knowledge of a cellular signaling system at a conceptual
level from systematic perturbation-response data. In this study, we explored a
framework that unifies knowledge mining and data mining approaches towards the
goal. The framework consists of the following automated processes: 1) applying
an ontology-driven knowledge mining approach to identify functional modules
among the genes responding to a perturbation in order to reveal potential
signals affected by the perturbation; 2) applying a graph-based data mining
approach to search for perturbations that affect a common signal with respect
to a functional module, and 3) revealing the architecture of a signaling
system by organizing signaling units into a hierarchy based on their relationships.
Applying this framework to a compendium of yeast perturbation-response data,
we have successfully recovered many well-known signal transduction pathways;
in addition, our analysis has led to many hypotheses regarding the yeast
signal transduction system; finally, it automatically organized
perturbed genes into a graph reflecting the architecture of the yeast signaling
system. Importantly, this framework transformed molecular findings from a gene
level to a conceptual level, which can readily be translated into computable
knowledge in the form of rules regarding the yeast signaling system, such as
“if genes involved in MAPK signaling are perturbed, genes involved in
pheromone responses will be differentially expressed”.
## Introduction
Model organisms, such as Saccharomyces cerevisiae and Drosophila melanogaster,
are powerful systems to study cellular signal transduction, because they are
amenable to systematic genetic and pharmacological perturbations, enabling
biologists to infer whether a gene is involved in a signal transduction
pathway through studying perturbation-response data. The premise for
elucidating signal transduction pathways from systematic perturbation
experiments is that, if perturbation of a set of genes consistently causes a
common cellular response, e.g., a phenotype presented as the differential
expression of a module of genes, the perturbed genes are likely the members
(or modulators) of the signal transduction pathway that leads to the
phenotype.
In this study, we refer to a signal from an information theory (Cover and
Thomas, 2006) point of view, in which a signal is a latent variable whose
state contains information with respect to another variable, e.g., the
expression state of a gene module or the state of another signal. From the
same viewpoint, a signaling system consists of a set of latent variables
connected as a network, in which an edge exists between a pair of signals if
the state of one signal affects that of the other, i.e., information can be
transmitted between the signals, and the relay of signals along the paths in
the network enables the system to encode complex information. From a cell
biology viewpoint, a signal transduction pathway consists of a collection of
signaling molecules that detect and transmit a signal that has a physical or
chemical form, e.g., the presence of pheromone in the environment. In such a
system, a signal is encoded as a change in the state of a signaling molecule,
often manifested as a change in the structural conformation of a protein,
chemical modification of a signaling molecule, or a change in the
concentration of a signaling molecule. While it would be ideal to find a one-
to-one mapping between the signaling molecules in cells and the signals in the
information theory framework, such a mapping can be difficult to obtain and
too complex to represent. Representing cellular signaling systems within the
abstract information-theory framework provides the following advantages: 1) it
enables us to use latent variables to represent the state of yet unknown
signaling molecules; 2) it allows us to represent the biological signals
encoded by a group of signaling molecules into a single-bit signal, if the
signals encoded by these molecules convey a common piece of information with
respect to other variables. We refer to such a group of signaling molecules as
a signaling unit. The following example illustrates the parallelism between the
biological entities and their counterparts in a computational model. A
pheromone receptor in a yeast cell and its associated G-proteins can be
thought of as one signaling unit, as they function together to detect the
signal of pheromone in an inseparable manner. Another exemplary signaling unit
is the cascade of mitogen-activated protein kinases (MAPKs), which transduce
signals among themselves through a chain of protein phosphorylation reactions
almost in a deterministic fashion. The states of these signaling units can be
represented as two single-bit signals in a computational model. When a yeast
cell is exposed to pheromone, the receptor unit detects the signal and
transmits it to the MAPK unit (Gustin et al., 1998; Herskowitz, 1995),
which further relays the signal to downstream signaling units to regulate
the expression of downstream genes involved in mating. These relationships between
signaling units can be represented as edges in the model. Moreover, in
addition to the pheromone response, the MAPK signaling unit also interacts with
other signaling units to transmit the signals that affect
filamentation/invasion processes (Gustin et al., 1998; Herskowitz, 1995);
such branching and cross-talks between different signaling pathways can be
represented as a network connecting signals in the computational model. Thus,
the general task of using systematic perturbation data to study a cellular
signaling system can be reduced to the following specific tasks: 1) revealing
the signals embedded in the convoluted molecular phenotype data such as
microarrays, 2) identifying perturbed genes that affect a common signal, 3)
grouping perturbed genes into signaling units based on the information they
encode, and 4) inferring the paths between signaling units where a path may or
may not correspond to a signal transduction pathway in conventional cell
biology.
In the seminal work by Hughes et al. (2000), yeast cells were subjected to
over 300 types of systematic perturbations (gene deletions and chemical
treatments; from here on, we refer to such a treatment experiment as a
perturbation instance) and the transcriptional responses to the perturbations
were measured using microarrays. This dataset has been widely used to test
different computational approaches for investigating the relationship between
perturbed genes and responding genes (Hughes et al.,, 2000; Tanay et al.,,
2002; Ourfali et al.,, 2007; Markowetz et al.,, 2007; Yeger-Lotem et al.,,
2009; Huang et al.,, 2009). For example, using a conventional hierarchical
clustering approach, Hughes et al grouped perturbed genes into clusters to
elucidate the cellular functions of some genes, based on the fact that
perturbing these genes produced gene expression profiles similar to those
resulting from perturbing the known members of certain pathways. To relax the
requirement of global similarity by hierarchical clustering, other researchers
have studied approaches to connect a subset of perturbation instances to a
subset of responding genes in order to find context-specific information
between the perturbation and the responses (Tanay et al., 2002). Such a task
is often cast as a biclustering problem (Madeira et al., 2004; Cheng et al.,
2000; Erten et al., 2010). More recently, sophisticated graph-based
algorithms have been applied to the dataset to study potential signaling pathways
(Yeger-Lotem et al., 2009; Huang et al., 2009; Ourfali et al., 2007). The
basic idea underlying the studies by Yeger-Lotem et al and Huang et al is to
model the information flow from perturbed genes to responding genes through a
PPI network by employing graph search algorithms, e.g., prize-collecting
Steiner tree algorithms.
While the above studies have led to many biological insights regarding the
system at a gene level, they did not address the task of discovering signaling
units and representing the findings at a conceptual level in order to derive
computable knowledge, such as the rule: if a gene involved in MAPK pathway is
deleted, the cellular response to pheromone will be affected. Transforming
experimental data into concepts and further elucidating the relationship among
the concepts are critical steps of knowledge acquisition and knowledge
representation. The scale of contemporary biotechnologyies further
necessitates computational approaches to perform such tasks in an automated
manner in order to facilitate knowledge discovery by human experts. Yet, the
development of such techniques is severely lagging behind the pace of data
generation. In this paper, we report a proof of concept framework that unifies
knowledge mining and data mining to derive knowledge regarding a signaling
system in an automatic manner, and we refer to the overall approach as
ontology-driven knowledge discovery of signaling pathways (OKDSP). We tested
the framework using the yeast perturbation-response data by Hughes et al.
(2000) to illustrate its utility.
## Results and Discussion
A key step of “reverse engineering” signaling pathways using systematic
perturbation data is to identify perturbations that convey the same
information, in other words, to first find the “jigsaw puzzle” pieces
belonging to a signal transduction pathway. For example, a classic yeast
genetic approach is to search for deletion strains that exhibit a common
phenotype as a means for identifying genes potentially involved in a signaling
pathway carrying information with respect to the phenotype (Winzeler et al.,
1999). The advent of genome technologies enables biologists to use genome-
scale data, such as gene expression data, as “molecular phenotypes” to study
the impact of systematic perturbations (Hughes et al., 2000). In general, a
perturbation treatment, such as deleting a gene, often affects multiple
biological processes. For example, deleting a gene involved in ergosterol
metabolism will affect the organization of the cell membrane, which in turn will
affect multiple signaling pathways located in the membrane. As such, the
overall cellular response to a perturbation instance, which is often manifested
as a long list of differentially expressed genes, inevitably reflects a
mixture of responses to multiple signals. Thus, we are confronted with two
fundamental tasks when studying systematic perturbation data: 1) dissecting
signals from the convoluted gene expression responses to a perturbation
instance, i.e., finding a module of genes whose expression state reflects the
state of a signal transduced along a signaling pathway, 2) identifying a set
of perturbation instances that affects the signal regulating a common
expression module.
To address the tasks, we hypothesize that, if a module of genes—whose
functions are coherently related—responds to multiple perturbation instances
in a coordinated manner, the genes in the module are likely regulated by a
common signal, and the perturbation instances affect this signal. Based on
this assumption, we can first decompose the overall expression response to a
perturbation instance into functional modules, with each module potentially
responding to a distinct signal; then we can investigate if a functional
module is repeatedly affected in multiple perturbation instances. In this
study, we developed an ontology-based knowledge mining approach to identify
functional modules, and we then developed a novel bipartite-graph-based data
mining approach to search for perturbation instances affecting a common
signal. Based on the results from the steps above, we further identified
signaling units and revealed their organization in a signaling system using a
graph-based algorithm.
### Identifying functional modules through knowledge mining
The Gene Ontology (GO) (Ashburner et al., 2000) contains a collection of
biological concepts (GO terms) describing molecular biology aspects of genes.
The relationship among the concepts are represented in a directed acyclic
graph (DAG). An edge reflecting an “is-a” relationship between a pair of GO
terms indicates that the concept encoded by the parent term is more general
and subsumes the concept of the child term. The GO has been widely used to
annotate the function of genes of different model organisms, therefore it is
natural to treat a set of genes annotated with a common GO term as a
functional module, a widely used approach in bioinformatics analyses (Segal et
al., 2004; Subramanian et al., 2005).
Figure 1: Characterization of the summary GO terms. A. The histograms of the
number of genes associated with each GO term before and after ontology-guided
knowledge mining: 1) the original GO annotations for all responding genes
(blue), and 2) the GO terms returned by the instance-based module search
(red). B. The distribution of the levels of the above GO term sets in the
ontology hierarchy are shown as normalized histograms. Level $0$ represents
the root of the Biological Process namespace.
We first investigated if original GO annotations from the GO database are
suitable to represent the major functional themes of genes responding to
perturbations in our setting. Based on the results of gene expression analysis
performed by Hughes et al, $5,289$ genes were determined to be differentially
expressed in response to one or more perturbation instance(s). We identified
all the GO terms that have been used to annotate these genes and retained a
subset that belong to the Biological Processes domain of the GO, which
consisted of 1,739 unique GO terms. We studied the distribution of the number
of genes annotated by each GO term, and the results are shown as a histogram
in Figure 1. The figure shows that a large number of original GO annotations
were associated with only a few genes; in fact, almost half ($43.93\%$) of the
GO terms were associated with only 1 or 2 genes. The results reflect the fact
that, while original GO annotations are highly specific and informative with
regards to individual genes, they would fail to represent the major functional
themes of a set of genes. Therefore, there is a need to identify more general
terms to represent major functional themes.
We then formulated the task of finding functional modules as follows: given a
list of genes responding to a perturbation instance and their GO annotations,
assign the genes into non-disjoint functional modules, such that genes within
a module participate in coherently related biological processes. This was
achieved by utilizing the hierarchical organization of the GO to group a
subset of genes under a suitable GO term that retains as much of the original
semantic information as possible. We developed novel quantitative metrics
for objectively assessing the fitness of a summarizing GO term, which enabled
us to find a term that covered many genes and yet minimized the loss of
semantic information relative to the original annotations. Our criteria for a summarizing
GO term included: 1) requiring the summarizing term to be statistically
enriched in the input gene list, and 2) requiring the functions of the genes
in a module to be semantically coherent when measured with a functional
coherence metric previously developed by our group (Richards et al. (2010);
see Methods section). This enabled us to dynamically search for suitable terms
along the GO hierarchy and to group genes under suitable summary terms in a
manner that is specific for each input gene list, rather than using pre-fixed
annotations (Subramanian et al., 2005). We refer to this approach as a
knowledge mining approach because it searches for a new representation of the
function of genes through assimilating knowledge represented by the original
annotations.
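The enrichment criterion above can be evaluated with a standard hypergeometric test; the sketch below is only an illustration of that single criterion, with made-up counts, and does not reproduce the functional coherence metric of Richards et al. (2010).

```python
from scipy.stats import hypergeom

def enrichment_p_value(n_genome, n_term_genome, n_list, n_term_list):
    """Hypergeometric upper-tail p-value for a candidate summarizing term.

    n_genome: total number of annotated genes.
    n_term_genome: genes annotated with the term (or its descendants).
    n_list: responding genes in the input list.
    n_term_list: responding genes covered by the term.
    """
    return hypergeom.sf(n_term_list - 1, n_genome, n_term_genome, n_list)

# Illustrative counts for a hypothetical candidate term:
print(enrichment_p_value(n_genome=6000, n_term_genome=80,
                         n_list=300, n_term_list=20))
```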
Figure 2: Functional coherence of modules. A. The cumulative distribution of
functional coherence p-values of the responding modules identified by
different methods: MBSEC with module-based input graphs (red), SAMBA with
module-based input graphs (green), and SAMBA with the global input graph
(blue). B. The cumulative distribution of functional coherence p-values of the
perturbation modules identified by different methods: MBSEC with module-based
input graphs (red), SAMBA with module-based input graphs (green), and SAMBA
with the global input graph (blue).
Applying this approach, we identified functionally coherent modules for each
perturbation experiment. Further, we merged the modules from different
perturbation instances that shared a common GO annotation. The procedure led
to a total of 527 distinct functional modules, each summarized with a distinct
GO term. The statistics of the modules, the number of genes annotated by
summarizing terms and the levels of the terms in the GO hierarchy, are shown
in Figure 1. It is interesting to note that while the summarizing GO terms
tend to annotate more genes than the original ones, the distribution of the
terms along the GO hierarchy is quite close to the original annotations,
indicating that our approach retained a level of semantic specificity similar
to the original annotations.
We further investigated the modules and found the results biologically
sensible. For example, we found that 38 genes were grouped into a module
annotated with the term GO:0008643 (carbohydrate transport) (from here on, we
name a functional module using its summary GO term), including 17 genes in
hexose transport {HXT1, HXT2, …, HXT17}. The original annotations of the genes
in the module included GO:0051594 (detection of glucose, covering 3 genes),
GO:0005536 (glucose binding, covering 3 genes), GO:0005338 (nucleotide-sugar
transmembrane transporter activity, covering 4 genes), GO:0005353 (fructose
transmembrane transporter activity, covering 16 genes), and so on. Our
algorithm summarized the function of the genes using the term GO:0008643
(carbohydrate transport), which we believe does not result in a significant
loss of information regarding the individual genes, thus providing a sensible
representation of the overall function of a larger group of genes. A list of
functional modules is provided on the supplementary website.
Figure 3: Subgraph connectivity. Cumulative distribution of within bipartite
subgraph connectivity of the modules identified in three experiments: MBSEC
with module-based input graphs (red), SAMBA with module-based input graphs
(green), and SAMBA with global input graph (blue).
Figure 4: Protein-protein physical and genetic interactions within modules.
A. The cumulative distribution of the within module PPI/GI connectivity ratios
of responding modules identified by different methods: MBSEC with module-based
input graphs (red), SAMBA with module-based input graphs (green), and SAMBA
with the global input graph (blue). B. The cumulative distribution of the
connectivity ratios within perturbation modules identified by different
methods: MBSEC with module-based input graphs (red), SAMBA with module-based
input graphs (green), and SAMBA with the global input graph (blue).
### Searching for perturbation instances affecting a common signal
Using a functional module from the previous section as a putative unit
responding to a cellular signal, we further searched for the perturbation
instances that affected it. Success in finding a set of
functionally coherent genes that repeatedly co-responded to multiple
perturbation instances would provide a strong indication that the responding
genes are regulated as a unit by a common signal and that the perturbation
instances may have affected such a signal. We addressed the searching task in
the following steps: 1) Given a functional module, we first created a
bipartite graph using all perturbation instances on one side and the genes in
the functional module on the other side, referred to as a functional-module-
based graph. In such a graph, an edge between a perturbation instance and a
responding gene indicates that the gene is differentially expressed in
response to the instance. 2) We then searched for a densely connected subgraph
satisfying the following conditions: a) each vertex was on average connected
to a given fraction, $r$, of the vertices on the opposite side, and b) the size
(number of vertices) of the subgraph was maximized. We refer to the vertices
on the perturbation side of a densely connected subgraph as a perturbation
module, and those on the responding side as a response module. The problem of
finding such a subgraph from a bipartite graph belongs to the family of
biclustering problems (Madeira et al., 2004; Cheng et al., 2000; Erten et
al., 2010), which are NP-hard. There are many approximate algorithms for
solving the problem (see the review by Madeira et al., (2004)), but our
formulation has distinct objectives, which allow us to specify the degree of
connectivity between perturbation and responding modules. We have developed
and implemented a greedy algorithm, referred to as the maximal bipartite
subgraph with expected connectivity (MBSEC) algorithm, to solve this problem,
see Methods.
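To make the construction concrete, the following is a minimal Python sketch (not the code used in this study) of building one functional-module-based bipartite graph with networkx; the module genes, perturbation names, and responding-gene sets are illustrative toy data.

```python
import networkx as nx

# Toy data: the genes of one functional module, and the differentially
# expressed (responding) genes of each perturbation instance.
module_genes = {"HXT1", "HXT2", "HXT3"}
responders = {
    "ste11-del": {"HXT1", "HXT2", "FUS1"},
    "ste4-del": {"HXT2", "HXT3"},
    "erg28-del": {"HXT1"},
}

# Functional-module-based bipartite graph: perturbation instances on one side,
# the genes of the module on the other; an edge means the gene is
# differentially expressed in response to that perturbation instance.
G = nx.Graph()
G.add_nodes_from(responders, bipartite=0)
G.add_nodes_from(module_genes, bipartite=1)
for perturbation, genes in responders.items():
    for gene in genes & module_genes:
        G.add_edge(perturbation, gene)

print(G.number_of_nodes(), G.number_of_edges())
```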
We performed experiments to test the following two hypotheses: 1) using
functional-module-based graphs as inputs for a dense-subnetwork searching
algorithm would enhance the capability of identifying signaling pathways; 2)
specifically pursuing high-density subgraphs enhances the capability of
finding signaling pathways. To test the first hypothesis, we applied an
algorithm referred to as the statistical-algorithmic method for bicluster
analysis (SAMBA) by Tanay et al., (2002) to assess the impact of different
inputs on the quality of perturbation-response modules. SAMBA is a well-
established algorithm that solves the biclustering problem under a bipartite
graph setting, which is similar to our problem setting. We first applied SAMBA
(implemented in the Expander program, v5.2) with default settings to the
global bipartite graph consisting of all 5,289 responding genes and 300
perturbations, which returned a total of $304$ subgraphs. We then applied the
SAMBA program to each of the functional-module-based graphs, and a total of
$131$ subgraphs were returned. To test the second hypothesis, we applied the
MBSEC algorithm to the same functional-module-based graphs as in the previous
experiment, using the following parameter settings: $r\geq 0.75$ and $s\geq
4$. The experiment identified a total of $122$ subgraphs that satisfied the
requirements.
We assessed the overall quality of a perturbation (or a responding) module by
determining the functional coherence score of the module using the method
previously developed by our group (Richards et al.,, 2010). This method
measures functional relatedness of a set of genes based on the semantic
similarity of their functional annotations and provides a p-value of the
coherence score of a gene set. The key idea of this method is as follows:
given a set of genes, map the genes to a weighted graph representing the
ontology structure of the GO, in which the weight of an edge reflects the
semantic distance between the concepts represented by a pair of GO terms;
identify a Steiner tree that connects the GO terms annotating these genes and
measure how closely the genes are located within the graph using the total
length of the tree; apply a statistical model to assess if the genes in the
set are more functionally related than those from a random gene set. A gene
set with a small p-value would indicate that the functions of the genes are
coherently related to each other.
Figure 2 shows the results of functional coherence analysis of responding
modules (Panel A) and perturbation modules (Panel B) by plotting the
cumulative distribution of the modules based on their p-values. Panel A shows
that all responding modules returned by our MBSEC algorithm and those returned
by SAMBA with functional-module-based graphs as input were assessed as
functionally coherent. This is not surprising, as all the input modules were
functionally coherent (p-value $\leq 0.05$), and therefore the returned
responding modules, which were sets of the input modules, were likely to be
coherent. In comparison, when using the global perturbation-response bipartite
graph as input, about 70% of the responding modules identified by SAMBA were
assessed to be coherent. The results indicate that, while the SAMBA algorithm
is capable of identifying biclusters with coherent responding modules, a high
percentage of returned responding modules contains a mixture of genes involved
in diverse biological processes.
Since the goal is to find perturbation instances that likely constitute a
signaling pathway, it is more interesting to inspect if the genes in a
perturbation module are coherently related. We assessed the functional
coherence of the perturbation modules returned from the three experiments for
the impact of different inputs and algorithms on the results (see Panel B of
Figure 2). A higher percentage of perturbation modules was functionally
coherent when functional-module-based graphs were used as inputs for SAMBA
when compared with those from the SAMBA with a global graph, indicating that
indeed perturbation instances densely connected to a functionally coherent
responding module were more coherent themselves, i.e., they were more likely
to function together. When comparing the results from MBSEC algorithm with
those from the SAMBA, our algorithm returned the highest percentage of
functionally coherent perturbation modules. The results indicate that, when
inputs are the same, specifically pursuing high density subgraphs enhances the
quality of identified perturbation modules.
We further inspected the within subgraph connectivity, determined as the
number of edges within a subgraph over the number of maximal possible edges
($n\times m$, with $n$ and $m$ representing the number of vertices on each
side respectively), to investigate if the differences in functional coherence
of the modules were related to the capabilities of the algorithms to find
densely connected graphs. Figure 3 shows that there were striking differences
in the connectivity of the subgraphs returned from three experiments. The
results also support the notion that enhanced capability of finding densely
connected perturbation-response bipartite graph underlies the capability of
identifying coherent modules.
In addition to assessing the functional relationship of the genes, we further
quantified and compared within module physical and genetic interactions, which
provided another line of evidence for assessing if genes in the modules were
functionally related. Using protein-protein physical interaction and genetic
interaction data from the BioGrid (Stark et al.,, 2010), we calculated the
ratio of the number of known interactions within a module containing $N$ genes
over the maximum number of possible interactions for the module
($N(N-1)/2$). We plot the cumulative distributions of modules based on their
interaction ratios in Figure 4. The figure shows that there are more physical
and/or genetic interactions within both perturbation and responding modules
identified by our methods, indicating that indeed the genes in these modules
are more likely to function together.
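For reference, the two ratios used above can be computed as in the following sketch (the helper functions are ours, not from the paper): within-subgraph connectivity is the number of realized edges over the $n\times m$ possible edges, and the interaction ratio is the number of known PPI/GI pairs within a module over the $N(N-1)/2$ possible pairs.

```python
from itertools import combinations

def bipartite_connectivity(n_left, n_right, n_edges):
    # realized edges over the n_left * n_right possible edges
    return n_edges / (n_left * n_right)

def interaction_ratio(module_genes, known_pairs):
    # known_pairs: set of frozensets {gene_a, gene_b}, e.g. parsed from BioGRID
    genes = sorted(module_genes)
    possible = len(genes) * (len(genes) - 1) / 2
    observed = sum(frozenset(p) in known_pairs for p in combinations(genes, 2))
    return observed / possible

print(bipartite_connectivity(16, 8, 100))
print(interaction_ratio({"STE4", "STE5", "STE11"}, {frozenset({"STE4", "STE5"})}))
```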
Taken together, these results indicate that, by constraining the search space
to functionally coherent genes and explicitly requiring a degree of
connectivity of subgraphs, our approach enhances the capability of identifying
perturbation modules in which the genes are more likely to physically interact
with each other to participate in coherently related biological processes.
Thus they likely participate in a common signaling pathway and carry a common
signal.
### Discovering signaling pathways based on perturbation-responding subgraph
A subgraph consisting of a perturbation and a responding module reflects the
fact that the perturbation instances affected the signal controlling the
expression state of the genes in the responding module. It is interesting to
see if a perturbation module contains the members and/or modulators of a
signaling pathway. Indeed, we found many of the identified perturbation
modules corresponded to well-known signaling pathways. For example, our
analysis identified a subgraph consisting of a responding module of $8$ genes
annotated by the GO term GO:0019236 (response to pheromone) and a perturbation
module consisting of $16$ perturbation instances: {$STE11$, $STE4$, $DIG1$,
$DIG2$, $HMG2$, $FUS3$, $KSS1$, $RAD6$, $STE7$, $STE18$, $STE5$, $CDC42$,
$STE12$, $STE24$, $SOD1$, $ERG28$ }. In the list of the perturbation
instances, we highlighted (with blue font) the genes that are known to be
members of the well-studied yeast pheromone response pathway reviewed by
Gustin et al., (1998), which listed 20 gene products as the members of the
pathway. In the study by Hughes et al., (2000), 12 out of those 20 genes were
deleted. We found that 10 out of these 12 perturbation instances were included
in the perturbation module of this subgraph. This result indicates that our
approach is capable of re-constituting the majority of the genes involved in
the pheromone signaling pathway. Inclusion of ergosterol metabolism enzymes,
ERG28 and HMG2, in the perturbation module indicates that our approach can
also identify the modulators of a signaling pathway.
Figure 5: Example perturbation-responding subgraphs. Two example subgraphs
are shown: Panel A GO:0019236 (response to pheromone) and Panel B: GO:0006826
(iron ion transport). For each subgraph, the perturbation instances (green
hexagons) are shown in the top tier; responding genes (blue circles) are shown
in the middle tiers; and the transcription factor modules (grey triangles) are
shown in the bottom tier. To avoid an overly crowded figure, a red dash line
indicates that a perturbation instance and a responding gene is NOT connected.
In addition to “re-discovering” the known signaling pathways, analysis of
subgraphs obtained in this study led to novel hypotheses. For example, in one
subgraph, the responding module was annotated with GO:0006826 (iron ion
transport) and consisted entirely of genes involved in cellular iron
homeostasis, including iron transporters and ferric reductases, shown in Panel
B of Figure 5. These genes are known to be primarily regulated by the iron-
responsive transcription factor Aft1p and partially comprise the iron regulon
in yeast (Yamaguchi-Iwai et al.,, 1996). Intriguingly, the perturbed gene set
consisted largely of proteins involved in mitochondrial translation, including
gene products involved in mitochondrial ribosomal subunits ($RML2$, $RSM18$,
$MRPL33$), translation ($HER2$, $DIA4$, $AEP2$), and RNA processing ($MSU1$).
These data lead to a novel hypothesis that perturbation of mitochondrial
protein synthesis will lead to changes in the iron sensing process. In fact,
such a link has only recently been suggested, in that iron-sulfur complex
synthesis in mitochondria, which requires a set of 10 distinct protein
components (Lill et al.,, 2000), directly impacts cellular iron uptake and
utilization (Hausmann et al.,, 2008; Rutherford et al.,, 2005). Indeed, these
data provide a rationale for the new hypothesis that mitochondrial translation
plays an essential role in cellular iron homeostasis through iron-sulfur complex
synthesis.
We have visualized all the perturbation-responding module pairs identified in
our experiments and show the results on the supplementary website. The data
allow readers, particularly yeast biologists, to inspect the results and
assess the quality of the modules, and more importantly, to explore new
hypotheses regarding yeast signaling systems. In Figure 5, we show the
subgraphs related to GO:0019236 (response to pheromone) and GO:0006826 (iron
ion transport). In this figure, we show the perturbation instances (green
hexagons) and responding modules (blue circles) in two tiers. Because the
connections between the perturbation and the responding modules are very
dense, which would interfere with visualization, we instead indicate the
perturbation instances and responding genes that are NOT connected, shown as
red dashed lines in the figure. Using a graph-based algorithm (Lu et al.,
2011), we further identified transcription factor (red triangles) modules that
are likely responsible for the co-expression of the genes in the responding
modules. Including TF information in data visualization further enhances
interpretation of the subgraphs. For example, the fact that each responding
module in this figure is connected to (thus potentially regulated by) a TF
module further strengthens the hypothesis that the genes are co-regulated
together as a unit responding to a common signal.
### Revealing organization of cellular signals
Our approach enabled us to use responding modules to reflect major signals in
a cellular system and perturbation instances that affect these signals. We
have found that many perturbation instances were involved in multiple
perturbation-response subgraphs, indicating that the signal affected by such a
perturbed instance was connected to multiple signals through cross-talk.
This observation offered us an opportunity to further investigate the
organization of cellular signals by studying what signals each perturbation
instance affects, and how the signals are related to each other. For example,
it is interesting to investigate whether a set of perturbation instances
affects a common set of responding modules— that is, the information encoded
by these genes is identical—so that we can group them as a signaling unit.
Similarly, it is of interest to investigate whether the responding modules
(signals) affected by one perturbed gene are a subset of those affected by
another perturbed gene, and to utilize such a relationship to organize the
signals. The latter task is closely related to that addressed by the nested
effect model (Markowetz et al.,, 2007), which aims to capture the hierarchical
relationship among perturbation instances based on the genes they affect.
Since the nested effect model used an individual gene as a responding unit,
the scale of the problem became intractable (exponential) and a Markov chain
Monte Carlo algorithm was employed. In contrast, our approach used
conceptualized responding modules, which provided two advantages: 1) the
projection of high-dimensional data at the gene level to a low-dimensional and
semantic-rich concept level reduces complexity of the task; 2) the unique
annotation associated with each module makes determining the subset
relationship among perturbation instances trivial. These characteristics
enabled us to develop a polynomial-time algorithm (see Methods) to organize
the perturbation instances into a directed acyclic graph (DAG). In such a graph, each node
is comprised of a set of perturbation instances that share common responding
modules, i.e., a signaling unit; an edge between a pair of nodes indicates
that the signals affected by the parent node subsume those carried by the
child node. We collected all perturbation modules that contained at least $8$
perturbation instances and organized perturbation instances into a DAG as
shown in Figure 6.
Inspecting the perturbation nodes containing multiple genes, we found that the
genes in these nodes tend to participate in coherently related biological
processes, and they often physically interact with each other at high
frequencies (data not shown). For example, one perturbation node (highlighted
with a blue border) in Figure 6 contains multiple STE (sterility) genes, a set
of well-studied genes that mediates pheromone signaling in yeast, and they
share common responding modules annotated with the functions “response to
pheromone” (GO:0019236) and “sexual reproduction” (GO:0019953). Thus our
method is capable of identifying perturbed instances whose information can be
encoded using a one-bit signal—a switch affecting expression of the genes
responding to pheromone signaling.
Visualization of the relationship of perturbation instances in a DAG enables a
biologist to investigate how signals are combined to generate a cellular
response. For example, there is a perturbation node (highlighted with a red
border) in Figure 6 containing $DIG1$, $DIG2$, $SOD1$, $FUS3$ and $KSS1$, all
of which, except $SOD1$, are involved in MAPK activities. Our results show
that there is a path connecting this node to the aforementioned STE node, and
then further to the “response to pheromone” responding module, indicating that
the gene products of the two nodes work together to transmit signals in
response to pheromone. Indeed, it is well known that MAPK activities are
required in the pheromone signaling pathway (Gustin et al.,, 1998;
Herskowitz, 1995). Yet, our results clearly show that the MAPK node also
carries information beyond pheromone response; for example, it also affects
the “proteolysis” (GO:0006508) process.
Figure 6: Organizing perturbation instances and responding modules. In this
graph, responding modules are represented as green oval nodes, with each being
annotated by a GO term. The rectangle nodes are perturbation nodes, which may
contain one or more genes that share a common set of responding modules.
Another interesting observation is that the hierarchical organization of the
perturbation instances reflects their relative position in a signaling
cascade. For example, perturbation of ergosterol metabolism genes, ERG2, ERG3,
HMG2, ERG11, and ERG28, tends to have a broad impact on different signals,
including the pheromone response pathway. This is understandable: as a
critical component of the plasma membrane, ergosterol influences the
organizational compartments of the plasma membrane such as lipid rafts (Simon
and Sampaio,, 2011), which in turn affect the organization of signaling
molecules in the membrane. As such, perturbation of these genes has a broad
impact on diverse cellular signals. Our results indicate that $HMG2$ and
$ERG28$ are connected to the STE node to influence the expression of the
pheromone responding module. The role of ergosterol metabolism in modulating
pheromone response signaling has only recently been studied by Jin et al.,
(2008). More interestingly, our results indicate that perturbation of distinct
enzymes of ergosterol metabolism leads to distinct cellular signals,
presumably by perturbing the production of distinct species of ergosterols.
The view that distinct lipid species encode/regulate disparate signals is
widely accepted in the lipidomics research domain (Parks and Casey,, 1995).
## Summary
In this study, we developed a proof of concept framework for unifying
knowledge mining and data mining to conceptualize the findings from systematic
perturbation experiments in order to enhance the capability of identifying
signal transduction pathways. The innovations of our approach are reflected in
the following aspects: 1) an ontology-driven approach for identifying
functional modules from a gene list in a dynamic and data-driven (instance-
based) manner and projecting molecular findings to a conceptual level, 2)
innovative formulation of the biclustering problem in terms of a constrained
search space and new objective functions, and 3) a novel graph algorithm that
enables organizing signaling molecules at a system level in a tractable manner
for the first time. We have demonstrated that conceptualization of cellular
responses to systematic perturbations enhances the capability of identifying
perturbation instances that participate in specific signal transduction
pathways. To the best of our knowledge, this is the first report of a
computational framework capable of automatically assimilating the information
from systematic perturbation data to reveal the architecture of a cellular
signaling system at a conceptual level that can be readily interpreted by
biologists to gain insights into a system.
More importantly, conceptualization of experimental results is a critical step
towards the ultimate goal of systems biology—acquiring computable knowledge
from experimental data for reasoning and hypothesis generation. Our results
already laid the foundation to derive abstract knowledge. For example, one can
translate a path from a perturbation node to a responding module in Figure 6
into a rule as follows: “if genes involved in MAPK signaling are perturbed,
genes involved in pheromone responses will be differentially expressed”. A
rule like this represents the relationships between perturbed genes and
responding genes at a conceptual level. Equipped with rules and facts, a
computing agent can then make a prediction that perturbation of a newly
discovered gene may lead to the differential expression of genes involved in
pheromone responses, if the gene is found to be involved in MAPK signaling.
Ongoing research is devoted to acquiring and representing facts, assertions
and rules from systems biology data in an accurate and generalizable manner.
Algorithm-1 HDSubgraph$(G,r,s)$
---
Input: $G=(V_{1},V_{2},E)$ – a bipartite graph, $r$ – the connectivity ratio of the subgraph, and $s$ – the minimum number of perturbations in the solution.
Output: A highly dense subgraph
1\. $G_{sub}=\emptyset$; $Score_{best}=-1$;
2\. for each subset $S_{1}$ of size $s-1$ in $V_{1}$ do
3. | $V_{remain}=V_{1}-S_{1}$; $V^{\prime\prime}_{1}=S_{1}$; $Status=1$;
4. | while $Status=1$ do
5. | | $Score_{temp}=-1$; $G^{\prime}_{sub}=\emptyset$; $Status=0$;
6. | | for each $u\in V_{remain}$ do
7. | | | $V^{\prime}_{1}=V^{\prime\prime}_{1}\cup\\{u\\}$; $V^{\prime}_{2}=\\{v|v\in V_{2}$ and $v$ connects to at least $r|V^{\prime}_{1}|$ vertices in $V^{\prime}_{1}\\}$;
8. | | | Calculate the score of induced subgraph $G^{\prime}=(V^{\prime}_{1},V^{\prime}_{2},E^{\prime})$ and save the score to $SC$;
9. | | | if $SC>Score_{temp}$ then
10. | | | | $Score_{temp}=SC$; $G^{\prime}_{sub}=G^{\prime}$;
11. | | if $Score_{best}<Score_{temp}$ then
12. | | | $G_{sub}=G^{\prime}_{sub}$; $Score_{best}=Score_{temp}$; $Status=1$;
13. | | | Assign $V^{\prime}_{1}$ of $G_{sub}$ to $V^{\prime\prime}_{1}$; $V_{remain}=V_{1}-V^{\prime\prime}_{1}$;
14\. return $G_{sub}$;
Note: 1\. $score(G^{\prime})=\sum_{x\in V^{\prime}_{1}}\big((1+0.001)|V^{\prime}_{2}|-\frac{1}{1-r}(|V^{\prime}_{2}|-degree_{G^{\prime}}(x))\big)$. When a new node $x$ is added, there is a score gain if the degree of $x$ in $G^{\prime}$ is at least $r|V^{\prime}_{2}|$; else a penalty will be applied.
2\. The parameter $s$ is smaller than or equal to the minimum number of perturbations in the solution. The growth of $s$ will greatly increase the running time of the algorithm.
Figure 7: Greedy algorithm to find the highly dense bipartite subgraph
## Materials and Methods
The microarray data from the systematic perturbation experiments by Hughes et
al., (2000) were collected, and differentially expressed genes responding to
each perturbation were identified based on the analysis of the original paper.
Given a list of differentially expressed genes responding to a perturbation
instance, we represent the genes and their annotations using a data structure
referred to as GOGene graph (Muller et al.,, 2009). In such a graph, a node
represents a GO term and a directed edge between a pair of nodes reflects an
“is-a” relationship between the GO terms; in addition, each node keeps track
of the genes it annotates, therefore the graph contains information on both GO
terms and genes. The procedure for searching for summarizing GO terms iterates
through the following steps: 1) perform an enrichment analysis (Khatri et
al.,, 2005) for each leaf GO term among the instance-specific responding
genes; 2) select the GO term with the biggest p-value (least enriched) and
merge its genes to the parent node with the shortest semantic distance as
defined by Jin et al (Jin et al.,, 2010); 3) trim the term off the graph; 4)
repeat the above procedures. We stop trimming a GO term once it is
significantly enriched (p-value $\leq$ 0.05) and the genes summarized by the
term remain functionally coherent (Richards et al., 2010); its associated
genes are then treated as a functionally coherent module. Otherwise, all
non-significant terms would eventually be merged to the root node of the GO
hierarchy and their associated genes are deemed as not coherently related.
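The trimming loop can be sketched schematically as follows; this is an illustration rather than our implementation, and enrichment_pvalue, closest_parent, and is_coherent are hypothetical stand-ins for the enrichment test, the shortest-semantic-distance parent selection, and the coherence test of Richards et al., (2010).

```python
def summarize(go_graph, genes_of, enrichment_pvalue, closest_parent,
              is_coherent, alpha=0.05):
    """Iteratively trim a GOGene graph into functionally coherent modules.

    go_graph: networkx.DiGraph of GO terms with edges pointing child -> parent;
    genes_of: dict mapping a GO term to the set of input genes it annotates.
    """
    graph = go_graph.copy()
    genes_of = {t: set(g) for t, g in genes_of.items()}
    modules = {}
    while graph.number_of_nodes() > 0:
        leaves = [t for t in graph if graph.in_degree(t) == 0]
        progressed = False
        for t in leaves:
            genes = genes_of.get(t, set())
            if not genes:
                graph.remove_node(t)      # no input genes: nothing to summarize here
                progressed = True
            elif enrichment_pvalue(t, genes) <= alpha and is_coherent(genes):
                modules[t] = genes        # accept t as a summarizing term
                graph.remove_node(t)
                progressed = True
        if progressed:
            continue
        # Trim the least-enriched leaf and merge its genes into the parent at the
        # shortest semantic distance; genes that reach the root without ever being
        # accepted are deemed not coherently related.
        worst = max(leaves, key=lambda t: enrichment_pvalue(t, genes_of[t]))
        parent = closest_parent(graph, worst)
        if parent is not None:
            genes_of.setdefault(parent, set()).update(genes_of[worst])
        graph.remove_node(worst)
    return modules
```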
To assess the functional coherence, we applied the method developed by
Richards et al., (2010). In this approach, the ontology structure of the GO is
represented as a weighted graph, in which an edge weight represents the
semantic distances between a pair of GO terms. When given a list of genes, the
genes are mapped to their annotating GO terms and a Steiner tree connecting
these terms is identified. Using the total length of the Steiner tree as a
score reflecting the functional relatedness of the genes, a statistical model
is applied to assess the probability of observing such a score if gene sets of
the same size are randomly drawn from the yeast genome. See Richards et al.,
(2010) for details.
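A rough sketch of this scoring scheme, using the approximate Steiner tree routine available in networkx and a permutation-based p-value in place of the statistical model of Richards et al., (2010), is given below; it assumes a connected, edge-weighted GO graph and is illustrative only.

```python
import random
from networkx.algorithms.approximation import steiner_tree

def coherence_score(go_graph, terms_of, genes):
    # total weight of an approximate Steiner tree connecting the GO terms
    # annotating the given genes (terms_of maps gene -> set of GO terms)
    terminals = {t for g in genes for t in terms_of.get(g, ())}
    tree = steiner_tree(go_graph, terminals, weight="weight")
    return tree.size(weight="weight")

def coherence_pvalue(go_graph, terms_of, genes, all_genes, n_perm=1000, seed=0):
    # empirical p-value: how often a random gene set of the same size gives an
    # equally tight (or tighter) tree
    rng = random.Random(seed)
    observed = coherence_score(go_graph, terms_of, genes)
    null = [coherence_score(go_graph, terms_of, rng.sample(list(all_genes), len(genes)))
            for _ in range(n_perm)]
    return (1 + sum(s <= observed for s in null)) / (1 + n_perm)
```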
To search for a densely connected perturbation-responding subgraph in a
bipartite graph, we formulated the task as follows: given a bipartite graph
$G$, find a subgraph $G^{\prime}=(V^{\prime}_{1},V^{\prime}_{2},E^{\prime})$
of $G$ that satisfies the following conditions: 1) $(|V^{\prime}_{1}|\geq
s)\bigcap(|V^{\prime}_{2}|\geq s)$, where $s$ is a user defined threshold for
cluster size; 2) each vertex in $V^{\prime}_{1}$ connects to at least
$|V^{\prime}_{2}|\times r$ vertices in $V^{\prime}_{2}$, and each vertex in
$V^{\prime}_{2}$ connects to at least $|V^{\prime}_{1}|\times r$ vertices in
$V^{\prime}_{1}$, where the parameter $r\in[0,1]$ is a connectivity ratio
defined by users; and 3) the size of the subgraph
($|V^{\prime}_{1}|+|V^{\prime}_{2}|$ ) is maximized. We set the parameters as
follows: $s=4$ and $r=0.75$. The algorithm for searching for the subgraph is
shown in Figure 7.
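For illustration, the following simplified Python sketch follows the greedy strategy of Figure 7, but replaces the exhaustive enumeration over seed subsets of size $s-1$ with a single user-supplied seed; it is not the implementation used in this study, and the input is assumed to be a dictionary mapping each perturbation instance to its set of responding genes.

```python
def grow_subgraph(adj, seed, r=0.75, s=4):
    # adj: perturbation instance -> set of responding genes; seed: initial perturbations
    def responding(p_set):
        # genes connected to at least r*|p_set| of the chosen perturbations
        candidates = set().union(*(adj[p] for p in p_set))
        return {g for g in candidates
                if sum(g in adj[p] for p in p_set) >= r * len(p_set)}

    def score(p_set, g_set):
        # scoring function from the note to Figure 7: gain for well-connected
        # perturbations, penalty otherwise
        return sum(1.001 * len(g_set) - (len(g_set) - len(adj[p] & g_set)) / (1 - r)
                   for p in p_set)

    current = set(seed)
    best_p, best_g = set(current), responding(current)
    best_score = score(best_p, best_g)
    remaining = set(adj) - current
    improved = True
    while improved and remaining:
        improved = False
        sc, p = max((score(current | {q}, responding(current | {q})), q)
                    for q in remaining)
        if sc > best_score:
            current.add(p)
            remaining.discard(p)
            best_p, best_g, best_score = set(current), responding(current), sc
            improved = True
    return (best_p, best_g) if len(best_p) >= s and len(best_g) >= s else None
```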
Algorithm organizing signaling components
---
Input: A set of perturbation-responding subgraphs represented as a dictionary $D$, in which a key is a perturbation instance and its value is a list of the responding modules (RMs) it connects to
Output: A DAG organization of perturbation instances and RMs
# Create a DAG consisting of perturbation instances and RMs
1\. Create an empty graph $G$;
2\. Add all RMs to $G$ as RM nodes;
3\. Combine perturbation instances that connect to an identical set of RMs into a joint perturbation node; add all resulting perturbation nodes into $G$;
4\. Add directed edges between perturbation nodes and RM nodes as specified in $D$;
5\. Add directed edges between a pair of perturbation nodes, $n_{1}$ and $n_{2}$, if the set of RMs associated with $n_{2}$ is a subset of that associated with $n_{1}$;
#Simplify the DAG
6\. for each node $n_{1}$ do
6.1. | for each node $n_{2}$ that is a descendent of node $n_{1}$ do
6.2. | | if $n_{2}$ has a parent node that is a descendant of $n_{1}$ then
6.3. | | | Remove edge $(n_{1},n_{2})$;
7\. return $G$;
Figure 8: Algorithm for organizing perturbation instances and RMs
To organize perturbation instances based on their signals, we developed an
algorithm to organize the perturbed instances into a DAG. In such a graph,
there are two types of nodes: responding module nodes and perturbation nodes.
Our algorithm groups perturbation instances that share identical responding
modules into a common perturbation node, a signaling unit, and connects the
perturbation node to its corresponding responding modules. The algorithm
further organizes perturbation nodes such that, if signals by a perturbation
node subsume those of another, a directed edge pointing to the subsumed node
is added between them. The algorithm is shown in Figure 8.
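A compact sketch of this construction using networkx is shown below (not our implementation); the simplification step is approximated here by a transitive reduction, and the perturbation and module names are illustrative.

```python
import networkx as nx

def organize(D):
    # D: perturbation instance -> set of responding modules (RMs) it connects to
    groups = {}
    for perturbation, rms in D.items():          # merge instances with identical RM sets
        groups.setdefault(frozenset(rms), set()).add(perturbation)

    G = nx.DiGraph()
    for rms, perts in groups.items():
        node = tuple(sorted(perts))              # one perturbation node per group
        G.add_node(node, rms=rms)
        for rm in rms:                           # perturbation node -> its RMs
            G.add_edge(node, rm)
    p_nodes = [n for n, d in G.nodes(data=True) if "rms" in d]
    for n1 in p_nodes:                           # edge when one RM set strictly subsumes another
        for n2 in p_nodes:
            if n1 != n2 and G.nodes[n2]["rms"] < G.nodes[n1]["rms"]:
                G.add_edge(n1, n2)
    return nx.transitive_reduction(G)            # drop edges implied by longer paths

dag = organize({"STE4": {"GO:0019236", "GO:0019953"},
                "STE5": {"GO:0019236", "GO:0019953"},
                "FUS3": {"GO:0019236"}})
print(list(dag.edges()))
```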
## Acknowledgement
The authors would like to thank Ms Vicky Chen and Joyeeta Dutta-Moscato for
reading and editing the manuscript, and Drs. Nabil Matmati and David Montefusco
for discussions.
#### Funding:
This research was partially supported by the following NIH grants: R01LM011155
and R01LM010144.
## Author Contribution
XL conceived the project; SL performed the majority of the analyses; BJ
contributed to the methods of knowledge mining; LAC helped with biological
interpretation of results; XL, SL and LAC drafted the manuscript.
## References
* Ashburner et al., (2000) Ashburner M, Ball CA, Blake JA, Botstein D, Butler H, Cherry JM, Davis AP, Dolinski K, Dwight SS, Eppig JT, Harris MA, Hill DP, Issel-Tarver L, Kasarskis A, Lewis S, Matese JC, Richardson JE, Ringwald M, Rubin GM, Sherlock G. (2000) Gene ontology: tool for the unification of biology. Nature Genetics 25: 25-29.
* Björklund et al., (2006) Björklund A, Husfeldt T, Koivisto M (2006) Set partitioning via Inclusion-Exclusion. SIAM Journal on Computing 39: 546-563.
* Cheng et al., (2000) Cheng Y, Church GM (2000) Biclustering of Expression Data. Proceedings of Pacific Symposium on Biocomputing.
* Colak et al., (2010) Colak R, Moser F, Chu JS, Schönhuth A, Chen N, Ester M (2010) Module Discovery by Exhaustive Search for Densely Connected, Co-Expressed Regions in Biomolecular Interaction Networks. PLoS ONE 5: e13348.
* Cover and Thomas, (2006) Cover, TM. and Thomas JA.. (2006) Elements of Information Theory. 2nd Ed., John Wiley and Sons
* Daignan-Fornier et al., (1992) Daignan-Fornier B, Fink GR (1992) Coregulation of purine and histidine biosynthesis by the transcriptional activators BAS1 and BAS2. Proceedings of National Academy of Sciences USA 89: 6746-6750.
* Denis et al., (1998) Denis V, Boucherie H, Monribot C, Daignan-Fornier B (1998) Role of the Myb-like protein Bas1p in Saccharomyces cerevisiae: a proteome analysis. Molecular Microbiology 30: 557-566.
* Erten et al., (2010) Erten C, Soz̈dinler M (2010) Improving performances of suboptimal greedy iterative biclustering heuristics via localization. Bioinformatics 26: 2594-2600.
* Garnett et al., (2012) Garnett MJ, Edelman EJ, Heidorn SJ, Greenman CD, Dastur A, Lau KW, Greninger P, Thompson IR, Luo X, Soares J, Liu Q, Iorio F, Surdez D, Chen L, Milano RJ, Bignell GR, Tam AT, Davies H, Stevenson JA, Barthorpe S, Lutz SR, Kogera F, Lawrence K, McLaren-Douglas A, Mitropoulos X, Mironenko T, Thi H, Richardson L, Zhou W, Jewitt F, Zhang T, O’Brien P, Boisvert JL, Price S, Hur W, Yang W, Deng X, Butler A, Choi HG, Chang JW, Baselga J, Stamenkovic I, Engelman JA, Sharma SV, Delattre O, Saez-Rodriguez J, Gray NS, Settleman J, Futreal PA, Haber DA, Stratton MR, Ramaswamy S, McDermott U, and Benes CH. (2012) Systematic identification of genomic markers of drug sensitivity in cancer cells. Nature 483:570-5
* Gustin et al., (1998) Gustin MC, Albertyn J, Alexander M, Davenport K (1998) MAP Kinase Pathway in the Yeast Saccharomyces cerevisiae. Microbiology and Molecular Biology Reviews 62: 1264-1300.
* Harbison et al., (2004) Harbison CT, Gordon DB, Lee TI, Rinaldi NJ, Macisaac KD, Danford TW, Hannett NM, Tagne J, Reynolds DB, Yoo J, Jennings EG, Zeitlinger J, Pokholok DK, Kellis M, Rolfe PA, Takusagawa KT, Lander ES, Gifford DK, Fraenkel E, Young RA (2004) Transcriptional regulatory code of a eukaryotic genome. Nature 431: 99-104.
* Hausmann et al., (2008) Hausmann A, Samans B, Lill R, Muḧlenhoff U (2008) Cellular and mitochondrial remodeling upon defects in iron-sulfur protein biogenesis. J. Biol. Chem. 283: 8318-8330.
* Herskowitz, (1995) Herskowitz, I,. (1995) MAP kinase pathways in yeast: For mating and more. Cell, 80: 187-197
* Hua et al., (2009) Hua Q, Yu D, Lau FC, Wang Y (2009) Exact Algorithms for Set Multicover and Multiset Multicover Problems. Lecture Note in Computer Science, ISSAC 2009: 34-44.
* Huang et al., (2009) Huang SC, Fraenkel E (2009) Integrating Proteomic and Transcriptional and Interactome Data Reveals Hidden Components of signaling and Regulatory Networks. Science Signaling 2: Ra40.
* Hughes et al., (2000) Hughes TR, Marton MJ, Jones AR, Roberts CJ, Stoughton R, Armour CD, Bennett HA, Coffey E, Dai H, He YD, Kidd MJ, King AM, Meyer MR, Slade D, Lum PY, Stepaniants SB, Shoemaker DD, Gachotte D, Chakraburtty K, Simon J, Bard M, Friend SH. (2000) Functional Discovery via a Compendium of Expression Profiles. Cell 102: 109-126.
* Jin et al., (2010) Jin B, Lu X (2010) Identifying informative subsets of the Gene Ontology with information bottleneck methods. Bioinformatics 26: 2445-2451.
* Jin et al., (2008) Jin H, McCaffery JM, Grote E (2008) Ergosterol promotes pheromone signaling and plasma membrane fusion in mating yeast. J Cell Biol 180: 813-826.
* Khatri et al., (2005) Khatri P, Draghici S (2005) Ontological analysis of gene expression data: current tools, limitations,and open problems. Bioinformatics 21: 3587-3595.
* Kanehisa et al., (2012) Kanehisa, M., Goto, S., Sato, Y., Furumichi, M., and Tanabe, M.; KEGG for integration and interpretation of large-scale molecular datasets. Nucleic Acids Res 40:D109-D114 .
* Lamb et al., (2006) Lamb, J, Crawford, ED., Peck,D., Modell, JW., Blat, IC., Wrobel,MJ., Lerner, J,Brunet, JP., Subramanian, A., Ross, KN., Reich, M., Hieronymus, H., Wei, G., Armstrong, SA., Haggarty, SJ., Clemons, PA., Wei, R., Carr, SA., Lander, ES., and Golub TR.. (2006) The Connectivity Map: Using Gene-Expression Signatures to Connect Small Molecules, Genes, and Disease. Science 313:1929-1935
* Lill et al., (2000) Lill R, Kispal G (2000) Maturation of cellular Fe-S proteins: an essential function of mitochondria. Trends Biochem. Sci. 25: 352-356.
* Lu et al., (2011) Lu S, Lu X (2011) Using graph model to find transcription factor modules: the hitting set problem and an exact algorithm. Algorithms for Molecular Biology 8: 2.
* MacIsaac et al., (2006) MacIsaac KD, Wang T, Gordon DB, Gifford DK, Stormo GD, Fraenkel E (2006) An improved map of conserved regulatory sites for Saccharomyces cerevisiae. BMC Bioinformatics 7: 113.
* Madeira et al., (2004) Madeira SC, Oliveira AL (2004) Biclustering algorithms for biological data analysis: a survey. IEEE Transaction on Computational Biology and Bioinformatics 1: 24-45.
* Markowetz et al., (2007) Markowetz F, Kostka D, Troyanskaya OG, Spang R (2007) Nested effects models for high-dimensional phenotyping screens. Bioinformatics 23: i305.
* Mewes et al., (2011) Mewes HW, Ruepp A, Theis F, Rattei T, Walter M, Frishman D, Suhre K, Spannagl M, Mayer KF, Stmpflen V, Antonov A (2011) MIPS: curated databases and comprehensive secondary data resources in 2010\. Nuc. Acids Res. 39: D220-D224.
* Muller et al., (2009) Muller B, Richards AJ, Jin B, Lu X (2009) GOGrapher: A Python library for GO graph representation and analysis. BMC Research Notes 2: 122.
* Ourfali et al., (2007) Ourfali O, Shlomi T, Ideker T, Ruppin E, Sharan R (2007) SPINE: a framework for signaling-regulatory pathway inference from cause-effect experiments. Bioinformatics 23: i359-i366.
* Parks and Casey, (1995) Parks LW, Casey WM (1995) Physiological implications of sterol biosynthesis in yeast. Annu Rev Microbiol 49: 95-116.
* Pinson et al., (2000) Pinson B , Kongsrud TL , Ording E , Johansen L , Daignan-Fornier B , Gabrielsen OS (2000) signaling through regulated transcription factor interaction: mapping of a regulatory interaction domain in the Myb-related Bas1p. Nucleic Acids Research 28: 4665-4673.
* Richards et al., (2010) Richards AJ, Muller B, Shotwell M, Cowart LA, Rohrer B, Lu X, (2010) Assessing the functional coherence of gene sets with metrics based on the Gene Ontology graph. Bioinformatics 26: i79-i87.
* Rutherford et al., (2005) Rutherford JC, Ojeda L, Balk J, Muḧlenhoff U, Lill R, Winge DR (2005) Activation of the iron regulon by the yeast Aft1/Aft2 transcription factors depends on mitochondrial but not cytosolic iron-sulfur protein biogenesis. J. Biol. Chem. 280: 10135-10140.
* Segal et al., (2004) Segal E, Friedman N, Koller D, Regev A (2004) A module map showing conditional activity of expression modules in cancer. Nature Genetics. 39: 1090
* Simon and Sampaio, (2011) Simons, K and Sampaio, JL., (2011) Membrane organization and lipid rafts. Cold Spring Harb Perspect Biol. 3:a004697
* Stark et al., (2010) Stark C, Breitkreutz BJ, Chatr-Aryamontri A, Boucher L, Oughtred R, Livstone MS, Nixon J, Van Auken K, Wang X, Shi X, Reguly T, Rust JM, Winter A, Dolinski K, Tyers M (2010) The BioGRID Interaction Database. Nuc. Acids Res. 36: D698-D704.
* Stratton et al., (2009) Stratton MR, Campbell PJ, Futreal PA (2009) The cancer genome. Nature 458: 719-724.
* Subramanian et al., (2005) Subramanian A, Tamayo P, Mootha VK, Mukherjee S, Ebert BL, Gillette MA, Paulovich A, Pomeroy SL, Golub TR, Lander ES, Mesirov JP (2005) Gene set enrichment analysis: A knowledge-based approach for interpreting genome-wide expression profiles. Proc Natl Acad Sci USA 102: 15545-50.
* Tanay et al., (2002) Tanay A, Sharan R, Shamir R (2002) Discovering statistically significant biclusters in gene expression data. Bioinformatics 18: S136-S144.
* Widmann et al., (1999) Widmann C, Gibson S, Jarpe MB, Johnson GL (1999) Mitogen-activated protein kinase: conservation of a three-kinase module from yeast to human. Physiol Rev 79:143-80.
* Winzeler et al., (1999) Winzeler EA, Shoemaker DD, Astromoff A, Liang H, Anderson K, Andre B, Bangham R, Benito R, Boeke JD, Bussey H, Chu AM, Connelly C, Davis K, Dietrich F, Dow SW, El Bakkoury M, Foury F, Friend SH, Gentalen E, Giaever G, Hegemann JH, Jones T, Laub M, Liao H, Liebundguth N, Lockhart DJ, Lucau-Danila A, Lussier M, M’Rabet N, Menard P, Mittmann M, Pai C, Rebischung C, Revuelta JL, Riles L, Roberts CJ, Ross-MacDonald P, Scherens B, Snyder M, Sookhai-Mahadeo S, Storms RK, Véronneau S, Voet M, Volckaert G, Ward TR, Wysocki R, Yen GS, Yu K, Zimmermann K, Philippsen P, Johnston M, Davis RW. (1999) Functional characterization of the S. cerevisiae genome by gene deletion and parallel analysis. Science 181:3058-68
* Yamaguchi-Iwai et al., (1996) Yamaguchi-Iwai Y, Stearman R, Dancis A, Klausner RD (1996) Iron-regulated DNA binding by the AFT1 protein controls the iron regulon in yeast. EMBO. J. 14: 1231-1239.
* Yeger-Lotem et al., (2009) Yeger-Lotem E, Riva L, Su LJ, Gitler AD, Cashikar AG, King OD, Auluck PK, Geddie ML, Valastyan JS, Karger DR, Lindquist S, Fraenkel E (2009) Bridging high-throughput genetic and transcriptional data reveals cellular responses to alpha-synuclein toxicity. Nature Genetics 41: 316-323.
* Zhou and Wong, (2004) Zhou Q, Wong WH (2004) CisModule: de novo discovery of cis-regulatory modules by hierarchical mixture modeling. Proc. Natl. Acad. Sci. USA 101: 12114-12119.
|
arxiv-papers
| 2013-02-21T17:14:59 |
2024-09-04T02:49:42.003017
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Songjian Lu, Bo Jin, Ashley Cowart, Xinghua Lu",
"submitter": "Songjian Lu",
"url": "https://arxiv.org/abs/1302.5344"
}
|
1302.5519
|
# Uniform hyperbolicity of the curve graph via surgery sequences
Matt Clay , Kasra Rafi and Saul Schleimer
###### Abstract.
We prove that the curve graph $\mathcal{C}^{(1)}(S)$ is Gromov-hyperbolic with
a constant of hyperbolicity independent of the surface $S$. The proof is based
on the proof of hyperbolicity of the free splitting complex by Handel and
Mosher, as interpreted by Hilion and Horbez.
The first author is partially supported by NSF grant DMS-1006898. The second
author is partially supported by NSF grant DMS-1007811. The third author is
partially supported by EPSRC grant EP/I028870/1.
This work is in the public domain.
## 1\. Introduction
In recent years the curve graph has emerged as the central object in a variety
of areas, such as Kleinian groups [15, 14, 7], Teichmüller spaces [16, 17, 6]
and mapping class groups [13, 2]. The initial breakthrough was the result of
Masur and Minsky showing that the curve graph is Gromov hyperbolic [12].
In this note, we give a new proof of the hyperbolicity of all curve graphs.
We improve on the original proof by additionally showing that the
hyperbolicity constants are _uniform_ : that is, independent of the topology
of the surface.
We use the same hyperbolicity criterion as defined and used by Masur and
Minsky [12, Definition 2.2]. Suppose $\mathcal{X}$ is a graph, equipped with a
family of paths, and each path $\sigma$ is equipped with a projection map
$\pi_{\sigma}\colon\mathcal{X}\to\sigma$. If the family of paths and
projection maps satisfy the _retraction_ , _Lipschitz_ , and _contraction_
axioms, as stated in Section 5 then $\mathcal{X}$ is hyperbolic [12, Theorem
2.3]. We also provide a proof in Section 6. Bestvina and Feighn recently used
a similar argument to show that the _free factor graph_ of a free group is
Gromov hyperbolic [3].
For the curve graph and for the free factor graph another, more geometric,
space played the key role in the definition of paths and projection maps. For
the curve graph this was _Teichmüller space_ ; for the free factor graph it
was _outer space_. An understanding of geodesics in the geometric spaces was
necessary to define the family of paths and their projection maps.
The _splitting graph_ , another variant of the curve graph for the free group,
was recently shown to be hyperbolic by Handel and Mosher [9]. They also use
the hyperbolicity criterion of Masur and Minsky. A novel aspect of their
approach was to dispense with the ancillary geometric space; instead they
define projection as if the space _were_ hyperbolic, and the family of paths
were geodesics. Specifically, given three points $x$, $y$ and $z$ in the
space, the projection of $z$ to the path $\sigma$ from $x$ to $y$ is the first
point along $\sigma$ that is close (in a uniform sense) to the path from $z$
to $y$. See Figure 1.1.
Figure 1.1. Handel–Mosher projection of a point $z$ to the path from $x$ to
$y$.
The paths used by Handel and Mosher in the splitting graph have a key property
that is very reminiscent of negatively curved spaces: _exponential
divergence_. In the other direction we find exponential convergence. On a
small scale, Handel and Mosher show paths that start distance two apart, and
that have the same target, must “intersect” after a distance depending only on
the rank of the free group. On a larger scale, this implies that the “girth”
of two paths, with the same target, is cut in half after a similar distance.
This property is the main tool used to verify the Masur and Minsky axioms.
Hilion and Horbez [11] gave a geometric spin to Handel and Mosher’s argument;
this led them to an alternative proof of hyperbolicity of the splitting graph
(in their setting called the _sphere graph_). Their paths were surgery
sequences of spheres in the doubled handlebody. We closely follow their set-up
and use surgery sequences of arcs and curves as paths in the curve graph. We
now state our main results.
Let $S=S_{g,n}$ be a surface of genus $g$ with $n$ boundary components, let
$\mathcal{C}(S)$ be the complex of curves, and let $\mathcal{AC}(S)$ be the
complex of curves and arcs; we defer the definitions to Section 2. We add a
superscript $(1)$ to denote the one-skeleton.
There is a constant ${\sf U}$ such that if $3g-3+n\geq 2$ and $n>0$ then
$\mathcal{AC}^{(1)}(S_{g,n})$ is ${\sf U}$–hyperbolic.
The inclusion $\mathcal{C}^{(1)}(S_{g,n})\to\mathcal{AC}^{(1)}(S_{g,n})$ gives
a quasi-isometric embedding with constants independent of $g$ and $n$. We
deduce the following.
There is a constant ${\sf U}$ such that if $3g-3+n\geq 2$ and $n>0$ then
$\mathcal{C}^{(1)}(S_{g,n})$ is ${\sf U}$–hyperbolic.
We also prove uniform hyperbolicity in the closed case, when $n=0$. This
follows from Theorem 6.4, as $\mathcal{C}^{(1)}(S_{g,0})$ isometrically embeds
in $\mathcal{C}^{(1)}(S_{g,1})$.
There is a constant ${\sf U}$ such that if $3g-3\geq 2$ then
$\mathcal{C}^{(1)}(S_{g})$ is ${\sf U}$–hyperbolic.
As noted above, the various constants appearing in our argument are uniform.
This is mostly due to Lemma 3.3 which shows that paths that start distance two
apart, and that have the same target, must “intersect” after a uniform
distance.
After the original paper of Masur and Minsky, Bowditch [4] and Hamenstädt [8]
also gave proofs of the hyperbolicity of the curve graph. In all of these the
upper bound on the hyperbolicity constant depended on the topology of the
surface $S$. During the process of writing this paper, several other proofs of
uniform hyperbolicity emerged. Bowditch [5] has refined his approach to obtain
uniform constants using techniques he developed in [4]; the proof by Aougab
[1] has many common themes with the work of Bowditch. The work of Hensel,
Przytycki, and Webb [10] also uses surgery paths and has other points of
contact with our work. However Hensel, Przytycki, and Webb do not use the
Masur–Minsky criterion; they also obtain much smaller hyperbolicity constants
than given here.
### Acknowledgements
We thank the Centre de Recerca Matemàtica for its hospitality during its 2012
research program on automorphisms of free groups.
## 2\. Background
Let $S=S_{g,n}$ be a connected, compact, oriented surface of genus $g$ with
$n$ boundary components. We make the standing assumption that the _complexity_
of $S$, namely $3g-3+n$, is at least two. This rules out three surfaces:
$S_{0,4},S_{1},S_{1,1}$. In each case the arc and curve complex is a version
of the Farey graph; the Farey graph has hyperbolicity constant one when we
restrict to the vertices, and $3/2$ when we include the edges.
### 2.1. Arcs and curves
A properly embedded curve or arc $\alpha\subset S$ is _essential_ if $\alpha$
does not cut a disk off of $S$. A properly embedded curve $\alpha$ is _non-
peripheral_ if it does not cut an annulus off of $S$. Define $\mathcal{AC}(S)$
to be the set of ambient isotopy classes of essential arcs and essential non-
peripheral curves.
For classes $\alpha,\beta\in\mathcal{AC}(S)$ define the geometric intersection
number $\operatorname{i}(\alpha,\beta)$ to be the minimal intersection number
among representatives. A non-empty subset $A\subset\mathcal{AC}(S)$ is a
_system of arcs and curves_ , or simply a _system_ , if for all
$\alpha,\beta\in A$ we have $\operatorname{i}(\alpha,\beta)=0$. We now give
$\mathcal{AC}(S)$ the structure of a simplicial complex by taking systems for
the simplices. We use $\mathcal{C}(S)$ to denote the subcomplex of
$\mathcal{AC}(S)$ spanned by curves alone. Note that these are flag complexes:
when the one-skeleton of a simplex is present, so is the simplex itself. Let
$\mathcal{K}^{(1)}$ denote the one-skeleton of a simplicial complex
$\mathcal{K}$.
If $\alpha$ and $\beta$ are vertices of $\mathcal{AC}(S)$ then we use
$d_{S}(\alpha,\beta)$ to denote the combinatorial distance coming from
$\mathcal{AC}^{(1)}(S)$. Given two systems $A,B\subset\mathcal{AC}(S)$ we
define their _outer distance_ to be
$\operatorname{outer}(A,B)=\max\\{d_{S}(\alpha,\beta)\mathbin{\mid}\alpha\in
A,\,\beta\in B\\}$
and their _inner distance_ to be
$\operatorname{inner}(A,B)=\min\\{d_{S}(\alpha,\beta)\mathbin{\mid}\alpha\in
A,\,\beta\in B\\}.$
For $\beta\in\mathcal{AC}(S)$ we write $\operatorname{inner}(A,\beta)$ instead
of $\operatorname{inner}(A,\\{\beta\\})$, and similarly for the outer
distance. If $A$ and $B$ are systems and $C\subset B$ is a subsystem then
(2.1)
$\operatorname{inner}(A,B)\leq\operatorname{inner}(A,C)\leq\operatorname{inner}(A,B)+1.$
For any three systems $A$, $B$, and $C$ there is a triangle inequality, up to
an additive error of one, namely
(2.2)
$\operatorname{inner}(A,B)\leq\operatorname{inner}(A,C)+\operatorname{inner}(C,B)+1.$
The additive error can be reduced to zero when $C$ is a singleton.
Suppose $A\subset\mathcal{AC}(S)$ is a system and $\gamma\in\mathcal{AC}(S)$
is an arc or curve. We say $\gamma$ _cuts_ $A$ if there is an element
$\alpha\in A$ so that $\operatorname{i}(\gamma,\alpha)>0$. If $\gamma$ does
not cut $A$ then we say $\gamma$ _misses_ $A$.
A system $A$ _fills_ $S$ if every curve $\gamma\in\mathcal{C}(S)$ cuts $A$.
Note that filling systems are necessarily comprised solely of arcs. A filling
system $A$ is _minimal_ if no proper subsystem is filling.
###### Lemma 2.3.
Suppose $S=S_{g,n}$, with $n>0$, and suppose $A$ is a minimal filling system.
If $S-A$ is a disk then $|A|=2g-1+n$. On the other hand, if $S-A$ is a
collection of peripheral annuli then $|A|=2g-2+n$. ∎
### 2.2. Surgery
If $X$ is a space and $Y\subset X$ is a subspace, let $N=N_{X}(Y)$ denote a
small regular neighborhood of $Y$ taken in $X$. Let
$\operatorname{fr}(N)={\overline{\partial N-\partial X}}$ be the _frontier_ of
$N$ in $X$.
Now suppose $A$ is a system and $\omega$ is a directed arc cutting $A$. Choose
representatives to minimize intersection numbers between elements of $A$ and
$\omega$. Suppose $\delta$ is the component of $\omega-A$ containing the
initial point of $\omega$. Thus $\delta$ meets only one component of $A$, say
$\alpha$; we call $\alpha$ the _active element_ of $A$. Let
$N=N_{S}(\alpha\cup\delta)$ be a neighborhood. Let $N^{\prime}$ be the
component of $N-\alpha$ containing the interior of $\delta$. Let
$\alpha^{\omega}$ be the component(s) of $\operatorname{fr}(N)$ that are
contained in $N^{\prime}$. See Figure 2.1 for the two possible cases.
Figure 2.1. The result of surgery, $\alpha^{\omega}$, is either a pair of arcs
or a single arc as $\alpha$ is an arc or a curve.
We call the arcs of $\alpha^{\omega}$ the _children_ of $\alpha$. Define
$A^{\omega}=(A-\alpha)\cup\alpha^{\omega}$; this is the result of _surgering_
$A$ exactly once along $\omega$.
###### Lemma 2.4.
Suppose $A,B$ are systems and $\omega$ is a directed arc cutting $A$. Then
$|\operatorname{inner}(A^{\omega},B)-\operatorname{inner}(A,B)|\leq 1$.
###### Proof.
Note that $A^{\omega}\cup A$ is again a system. The conclusion now follows
from two applications of Equation 2.1. ∎
When $B=\\{\omega\\}$ a stronger result holds.
###### Proposition 2.5.
Suppose $A$ is a system and $\omega$ is a directed arc cutting $A$. Then
$\operatorname{inner}(A^{\omega},\omega)\leq\operatorname{inner}(A,\omega)$.
###### Proof.
We induct on $\operatorname{inner}(A,\omega)$. Suppose that
$\operatorname{inner}(A,\omega)=n+1$. Let $\alpha$ be the element of $A$
realizing the minimal distance to $\omega$. There are two cases. If $\alpha$
is not the active element then $\alpha\in A^{\omega}$ and the inner distance
remains the same or decreases. For example, this occurs when $n=0$.
Suppose, instead, that $\alpha$ is the active element and that $n>0$. Pick
$\beta\in\mathcal{AC}(S)$ with
* •
$d_{S}(\alpha,\beta)=1$,
* •
$d_{S}(\beta,\omega)=n$, and,
* •
subject to the above, $\beta$ minimizes $\operatorname{i}(\beta,\omega)$.
Consider the system $B=\\{\alpha,\beta\\}$. The induction hypothesis gives
$\operatorname{inner}(B^{\omega},\omega)\leq\operatorname{inner}(B,\omega)$.
If $\beta$ is the active element of $B$ then we contradict the minimality of
$\beta$. Thus $\alpha$ is the active element of $B$. We deduce
$\operatorname{inner}(\alpha^{\omega},\omega)\leq d_{S}(\alpha,\omega)$,
completing the proof. ∎
If $A$ is a system and $\omega$ is a directed arc cutting $A$ then we define a
_surgery sequence_ starting at $A$ with _target_ the directed arc $\omega$, as
follows. Set $A_{0}=A$ and let $A_{i+1}=A_{i}^{\omega}$; that is, we obtain
$A_{i+1}$ by surgering the active element of $A_{i}$ exactly once along
$\omega$. The arc $\omega$ misses the last system $A_{N}$; the resulting
sequence is $\\{A_{i}\\}_{i=0}^{N}$.
Given integers $i\leq j$ we adopt the notation
$[i,j]=\\{k\in\mathbb{Z}\mathbin{\mid}i\leq k\leq j\\}$.
###### Lemma 2.6.
Suppose $\\{A_{i}\\}_{i=0}^{N}$ is a surgery sequence with target $\omega$.
Then for each distance $d\in[0,\operatorname{inner}(A,\omega)-1]$ there is an
index $i\in[0,N]$ such that $\operatorname{inner}(A,A_{i})=d$.
###### Proof.
Since $\operatorname{outer}(A_{N},\omega)\leq 1$ the triangle inequality
$\operatorname{inner}(A,\omega)\leq\operatorname{inner}(A,A_{N})+\operatorname{inner}(A_{N},\omega)$
holds without additive error. Thus
$\operatorname{inner}(A,A_{N})\geq\operatorname{inner}(A,\omega)-1$. The
conclusion now follows from Lemma 2.4. ∎
We can also generalize Proposition 2.5 to sequences. As we do not use this in
the remainder of the paper, we omit the proof.
###### Proposition 2.7.
Suppose $\\{A_{i}\\}_{i=0}^{N}$ is a surgery sequence with target $\omega$.
Let $\alpha_{k}\subset A_{k}$ be the active element and set
$\omega_{k}=\alpha_{k}^{\omega}$. Then
$\operatorname{inner}(A_{i+1},\omega_{k})\leq\operatorname{inner}(A_{i},\omega_{k})$,
for $i<k$. ∎
Suppose $B\subset A$ is a subsystem and $\omega$ is a directed arc cutting
$A$. Let $\\{A_{i}\\}$ be the surgery sequence starting at $A$ with target
$\omega$. Let $B_{0}=B$ and suppose we have defined $B_{i}\subset A_{i}$. If
the active element $\alpha\in A_{i}$ is _not_ in $B_{i}$ then we define
$B_{i+1}=B_{i}$. If the active element $\alpha\in A_{i}$ is in $B_{i}$ then
define $B_{i+1}=B_{i}^{\omega}$. In any case we say that the elements of
$B_{i+1}$ are the _children_ of the elements of $B_{i}$; for $j\geq i$ we say
that the elements of $B_{j}$ are the _descendants_ of $B_{i}$. We call the
sequence $\\{B_{i}\\}$ a surgery sequence with _waiting times_ ; the sequence
$\\{B_{i}\\}$ is _subordinate_ to $\\{A_{i}\\}$.
## 3\. Descendants
The goal of this section is to prove Lemma 3.3: disjoint systems have a common
descendant within constant distance. Recall that a simplex
$A\subset\mathcal{AC}(S)$ is called a system.
###### Lemma 3.1.
Suppose $A$ is a system and $\omega$ is a directed arc cutting $A$. Suppose
$\gamma\in\mathcal{C}(S)$ is a curve. If $\gamma$ cuts $A$ then $\gamma$ cuts
$A^{\omega}$.
###### Proof.
Suppose $\alpha\in A$ is the active element. If $\gamma$ cuts some element of
$A-\alpha$ then there is nothing to prove. If $\gamma$ cuts $\alpha$ then,
consulting Figure 2.1, the curve $\gamma$ also cuts $\alpha^{\omega}$ and so
cuts $A^{\omega}$. ∎
###### Lemma 3.2.
Suppose $\\{A_{i}\\}$ is a surgery sequence with target $\omega$. For any
index $k$, if $\operatorname{outer}(A_{0},A_{k})\geq 3$ then $A_{j}$ is
filling for all $j\geq k$.
###### Proof.
By Lemma 3.1 it suffices to prove that $A_{k}$ is filling. Pick any
$\gamma\in\mathcal{C}(S)$. Since $\operatorname{outer}(A_{0},A_{k})\geq 3$ it
follows that $\gamma$ cuts $A_{0}$ or $A_{k}$, or both. If $\gamma$ cuts
$A_{k}$ we are done. If $\gamma$ cuts $A_{0}$ then we are done by Lemma 3.1. ∎
###### Lemma 3.3.
Suppose $A$ is a system and $\omega$ is a directed arc with
$\operatorname{inner}(A,\omega)\geq 6$. Suppose $B,C\subset A$ are subsystems.
Let $\\{A_{i}\\}_{i=0}^{N}$ be the surgery sequence starting at $A_{0}=A$ with
target $\omega$. Let $\\{B_{i}\\}$ and $\\{C_{i}\\}$ be the subordinate
surgery sequences. Then there is an index $k\in[0,N]$ such that:
1. (1)
$B_{k}\cap C_{k}\neq\emptyset$ and
2. (2)
$\operatorname{inner}(A_{0},A_{i})\leq 5$ for all $i\in[0,k]$.
We paraphrase this as “the subsystems $B$ and $C$ have a common descendant
within constant distance of $A$”.
###### Proof of Lemma 3.3.
Let $\ell$ be the first index with $\operatorname{inner}(A,A_{\ell})=3$. Note
that $\ell$ exists by Lemma 2.6. Also, Lemma 2.4 implies that
$\operatorname{inner}(A,A_{\ell-1})=2$. Suppose $\beta$ is the active element
of $A_{\ell-1}$. It follows that $\operatorname{inner}(A,\beta)=2$ and $\beta$
is the only element of $A_{\ell-1}$ with this inner distance to $A$. Thus
every $\alpha\in A_{\ell}$ has inner distance three to $A$. If $\omega$ misses
some element of $A_{\ell}$ then $\operatorname{inner}(A,\omega)\leq 4$,
contrary to hypothesis. Thus $\omega$ cuts every element of $A_{\ell}$.
Isotope the arcs of $A_{\ell}$ to be pairwise disjoint and to intersect
$\omega$ minimally.
If $B_{\ell}\cap C_{\ell}\neq\emptyset$ then we take $k=\ell$ and we are done.
Suppose instead $B_{\ell}$ and $C_{\ell}$ are disjoint. Since
$\operatorname{inner}(A,A_{\ell})=3$ we have both
$\operatorname{outer}(B,B_{\ell})$ and $\operatorname{outer}(C,C_{\ell})$ are
at least three. Deduce from Lemma 3.2 that $B_{\ell}$ and $C_{\ell}$ both fill
$S$, and thus consist only of arcs. Let $B^{\prime}\subset B_{\ell}$ and
$C^{\prime}\subset C_{\ell}$ be minimal filling subsystems.
Set $x=-\chi(S)=2g-2+n$. Set $b=1$ if $S-B^{\prime}$ is a disk. Set $b=0$ if
$S-B^{\prime}$ is a union of peripheral annuli. Lemma 2.3 implies
$|B^{\prime}|=x+b$. Define $c$ similarly, with respect to $C^{\prime}$. Let
$A^{\prime}=B^{\prime}\cup C^{\prime}$. Let $p$ be the number of peripheral
annuli in $S-A^{\prime}$. Observe that if either $b$ or $c$ is one, then $p$
is zero.
We build a graph $G$, _dual_ to $A^{\prime}$, as follows. For every component
$C\subset S-A^{\prime}$ there is a dual vertex $v_{C}$. For every arc
$\alpha\in A^{\prime}$ there is a dual edge $e_{\alpha}$; the two ends
$e_{\alpha}$ are attached to $v_{C}$ and $v_{D}$ where $C$ and $D$ meet the
two sides of $\alpha$. Note the possibility that $C$ equals $D$. Finally, for
every peripheral annulus component $P\subset S-A^{\prime}$ there is a
peripheral edge $e_{P}$. Both ends of $e_{P}$ are attached to $v_{P}$.
Thus $G$ has $|A^{\prime}|+p=2x+b+c+p$ edges. Since $S$ is homotopy equivalent
to $G$, we deduce that $G$ has $x+b+c+p$ vertices. Since $B^{\prime}\cap
C^{\prime}=\emptyset$, the graph $G$ has no vertices of degree one or two.
###### Claim.
One of the following holds.
1. (1)
The graph $G$ has a vertex of valence three, dual to a disk component of
$S-A^{\prime}$.
2. (2)
Every vertex of $G$ has valence four and every component of $S-A^{\prime}$ is
a disk.
###### Proof of Claim.
Let $V_{d}$ denote the number of vertices of $G$ with degree $d$. As there are
no vertices of valence one or two, twice the number of edges of $G$ equals
$\sum_{d\geq 3}d\cdot V_{d}$. Hence:
$\begin{aligned}
4x+2b+2c+2p &= \sum_{d\geq 3} d\cdot V_{d} \\
&\geq 3V_{3}+4\sum_{d\geq 4}V_{d} \\
&= 4\sum_{d\geq 3}V_{d}-V_{3} \\
&= 4x+4b+4c+4p-V_{3}.
\end{aligned}$
Therefore, $V_{3}\geq 2b+2c+2p$ where equality holds if and only if $V_{d}=0$
for $d\geq 5$. If $p=0$ then either $V_{3}>0$, and we obtain the first
conclusion, or $V_{3}=0$, and we have the second. If $p>0$ then $V_{3}\geq 2p$
and we obtain the first conclusion. ∎
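The counts just established are simple arithmetic in $g$, $n$, $b$, $c$ and $p$; the following small Python sketch (our illustration, not part of the proof) tabulates the edge and vertex counts of $G$ and the resulting lower bound on $V_{3}$.

```python
# A toy bookkeeping check for the dual-graph counts used above (our own
# illustration).  Given g, n and the indicator values b, c, p, it returns the
# number of edges and vertices of G and the lower bound V_3 >= 2b + 2c + 2p
# obtained from the degree-sum argument.

def dual_graph_counts(g, n, b, c, p):
    x = 2 * g - 2 + n              # x = -chi(S)
    edges = 2 * x + b + c + p      # |A'| + p
    vertices = x + b + c + p       # G is homotopy equivalent to S
    # degree sum: 2*edges = sum_d d*V_d >= 4*vertices - V_3, hence
    v3_lower_bound = 4 * vertices - 2 * edges
    return edges, vertices, v3_lower_bound

# Example: a planar surface with g = 0, n = 5 and b = c = p = 0.
print(dual_graph_counts(0, 5, 0, 0, 0))   # (6, 3, 0)
```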
Let $\\{\delta_{i}\\}_{i=1}^{M}$ enumerate the arcs of
$\omega\cap(S-A_{\ell})$, where the order of the indices agrees with the
orientation of $\omega$. So the system $A_{\ell+1}$ is obtained from
$A_{\ell}$ via surgery along $\delta_{1}$. Generically, our strategy is to
find a disk component $R\subset S-A_{\ell}$ and an arc $\delta_{i}\subset R$
so that
* •
$\delta_{i}$ meets both $B_{\ell}$ and $C_{\ell}$ and
* •
$\delta_{i}$ is parallel in $R$ to a subarc of $\partial S$.
That is, $\delta_{i}$ cuts a rectangle off of $R$. Surgery along $\delta_{i}$
then produces a common descendent for the systems $B$ and $C$.
Suppose conclusion (1) of the claim holds. Deduce there is a disk component
$R\subset S-A_{\ell}$ that is combinatorially a hexagon, with sides
alternating between $\partial S$ and $A_{\ell}$. Furthermore, $R$ meets both
$B_{\ell}$ and $C_{\ell}$. As a very special case, if $\delta_{1}$ lies in $R$
then take $k=\ell+1$ and we are done. See the left-hand side of Figure 3.1.
(Figure 3.1 consists of two side-by-side panels; see the caption below.)
Figure 3.1. The lower and the vertical sides of $R$ lie in $\partial S$; the
longer boundary arcs lie in $A_{\ell}$. The arcs in the interior are subarcs
of $\omega$. The arc with the arrow is $\delta_{1}$ on the left and is
$\delta_{m}$ on the right.
If $\delta_{1}$ does not lie in $R$, then let $\delta_{m}$ be the first arc
contained in $R$ that meets both $B_{\ell}$ and $C_{\ell}$. Set $k=\ell+m$.
See the right-hand side of Figure 3.1. One of the arcs in
$\operatorname{fr}(R)$ survives to $A_{k-2}$. Thus
$\operatorname{inner}(A,A_{i})\leq 3$ for all $i\in[\ell,k-2]$. The frontier
of $R$ may be surgered during the interval $[\ell+1,k-2]$, but there is always
a hexagon bounded by the children of $\operatorname{fr}(R)$, containing the
arc $\delta_{m}$. Surgering $\delta_{m}$ produces the desired common
descendants in $A_{k}$. Finally, we note that
$\operatorname{inner}(A,A_{k-1})$ and $\operatorname{inner}(A,A_{k})$ are at
most $4$ as a child of an arc of $\operatorname{fr}(R)$ is in both $A_{k-1}$
and $A_{k}$. Hence the lemma holds in this case.
Suppose instead that conclusion (2) of the claim holds. Thus every component
of $S-A^{\prime}$ is combinatorially an octagon with sides alternating between
$\partial S$ and $A^{\prime}$. If $A_{\ell}\neq A^{\prime}$ then $S-A_{\ell}$
has a disk component that is combinatorially a hexagon, and the above argument
applies. Therefore, we assume
$A^{\prime},B^{\prime},C^{\prime}=A_{\ell},B_{\ell},C_{\ell}$.
Fix a component $R\subset S-A_{\ell}$ that does not contain $\delta_{1}$. We
refer to the four sides of $\operatorname{fr}(R)\subset A_{\ell}$ using the
cardinal directions ${\sf N}$, ${\sf S}$, ${\sf E}$ and ${\sf W}$. Up to
interchanging $B_{\ell}$ and $C_{\ell}$, there are three cases to consider,
depending on how ${\sf N}$, ${\sf S}$, ${\sf E}$ and ${\sf W}$ lie in
$B_{\ell}$ or $C_{\ell}$.
Suppose that ${\sf N}$ lies in $B_{\ell}$ and the three other sides lie in
$C_{\ell}$. Suppose there is an arc $\delta_{i}$ in $R$ connecting ${\sf N}$
to ${\sf E}$ or ${\sf N}$ to ${\sf W}$. Let $\delta_{m}$ be the first such
arc. Arguing as before, under conclusion (1), the lemma holds. If there is no
such arc then, as $\omega$ cuts ${\sf N}$, there is an arc $\delta_{i}$
connecting ${\sf N}$ to ${\sf S}$. Let $\delta_{m}$ be the first such arc; set
$k=\ell+m$. As ${\sf N}\in A_{j}$ for all $j\in[\ell,k-2]$, deduce
$\operatorname{inner}(A,A_{j})\leq 3$ for all such $j$. Also
$\operatorname{inner}(A,A_{k-1})$ and $\operatorname{inner}(A,A_{k})$ are at
most $4$ as a child of an arc of $\operatorname{fr}(R)$ is in both $A_{k-1}$
and $A_{k}$. We now observe that some descendants of $\operatorname{fr}(R)$
cobound a combinatorial hexagon $R^{\prime}$ in $S-A_{k}$. If $\omega$ misses
any arc in the frontier of $R^{\prime}$, then
$\operatorname{inner}(A,\omega)\leq 5$, contrary to the hypothesis. Else,
arguing as in conclusion (1), the lemma holds.
Suppose ${\sf N}$ and ${\sf E}$ lie in $B_{\ell}$ while ${\sf S}$ and ${\sf
W}$ lie in $C_{\ell}$. If there is an arc connecting ${\sf N}$ to ${\sf W}$ or
connecting ${\sf E}$ to ${\sf S}$, then surgery along the first such produces
common descendants. If there is no such arc, then there must be an arc
connecting ${\sf N}$ to ${\sf S}$ or an arc connecting ${\sf E}$ to ${\sf W}$;
if not $\omega$ misses one of the diagonals of $R$, so
$\operatorname{inner}(\omega,A_{\ell})\leq 2$ implying
$\operatorname{inner}(\omega,A)\leq 5$, contrary to assumption. Again, surgery
along the first such arc produces a combinatorial hexagon.
Suppose finally that ${\sf N}$ and ${\sf S}$ lie in $B_{\ell}$ while ${\sf E}$
and ${\sf W}$ lie in $C_{\ell}$. Surgery along the first arc connecting
$B_{\ell}$ to $C_{\ell}$, inside of $R$, produces common descendants. Such an
arc exists because $\omega$ cuts every arc of $A_{\ell}$. ∎
## 4\. Footprints
In this section we define the _footprint_ of an arc or curve on a surgery
sequence. This is not to be confused with the projection, which is defined in
Section 5.
Fix $\gamma\in\mathcal{AC}(S)$. Suppose $A$ is a system and $\omega$ is a
directed arc. Let $\\{A_{i}\\}_{i=0}^{N}$ be the surgery sequence starting at
$A$ with target $\omega$. We define $\phi(\gamma)$, the _footprint_ of
$\gamma$ on $\\{A_{i}\\}$, to be the set
$\phi(\gamma)=\\{i\in[0,N]\mathbin{\mid}\mbox{$\gamma$ misses $A_{i}$}\\}.$
Note that if $\gamma$ is an element of $A_{i}$ then $i$ lies in the footprint
$\phi(\gamma)$.
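Since the footprint is determined by a disjointness test alone, it is easy to compute once such a test is available. The following Python sketch (ours; the crossing table and labels are hypothetical) illustrates the definition on toy data.

```python
# A sketch (ours) of the footprint of Section 4, with a fake disjointness test.
# `misses(gamma, system)` should return True when gamma can be realised
# disjointly from every element of the system; here it just consults a table
# of hypothetical crossings.

CROSSINGS = {("g", "x"), ("g", "y")}

def misses(gamma, system):
    return all((gamma, a) not in CROSSINGS and (a, gamma) not in CROSSINGS
               for a in system)

def footprint(gamma, A_seq):
    return [i for i, A_i in enumerate(A_seq) if misses(gamma, A_i)]

A_seq = [{"x", "y"}, {"y", "z"}, {"z", "w"}]
print(footprint("g", A_seq))   # [2]: an interval, as Lemma 4.1 asserts
```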
###### Lemma 4.1.
With $\gamma,A,\omega$ as above: the footprint $\phi(\gamma)$ is an interval.
###### Proof.
When $\gamma$ is a curve, this follows from Lemma 3.1. So suppose that
$\gamma$ is an arc. Without loss of generality we may assume $\phi(\gamma)$ is
non-empty and $\min\phi(\gamma)=0$. Note that if $\omega$ misses $\gamma$ then
we are done. Isotope $\gamma$, $A$, and $\omega$ to minimize their
intersection numbers.
We now surger $A_{0}=A$. These surgeries are ordered along $\omega$. Let
$\alpha_{i}$ be the active element of $A_{i}$. Let $\delta_{i}\subset\omega$
be the surgery arc for $\alpha_{i}$, in other words, the subarc of $\omega$
with endpoints the initial endpoint of $\omega$ and the initial intersection
point between $\omega$ and $\alpha_{i}$. We define a pair of intervals.
$\begin{aligned}
I &= \\{i\mathbin{\mid}\delta_{i-1}\cap\gamma=\emptyset\\}\cup\\{0\\} \\
J &= \\{i\mathbin{\mid}\delta_{i-1}\cap\gamma\neq\emptyset\\}
\end{aligned}$
The inclusions $\delta_{i-1}\subset\delta_{i}$ and the fact that $\gamma$
misses $A_{0}$ imply that $I\subset\phi(\gamma)$. To finish the proof we
will show $J\cap\phi(\gamma)=\emptyset$, implying that $I=\phi(\gamma)$.
Fix any $k\in J$. Let $\alpha_{k-1}$ be the active element of $A_{k-1}$. As
$\alpha_{k-1}$ is an arc or a curve we consult the left- or right-hand side of
Figure 2.1. Note that $\gamma$ meets $\delta_{k-1}$, and $\gamma$ is an arc,
so it enters and exits the region cobounded by $\alpha_{k-1}$ and its
children. Thus $\gamma$ cuts $A_{k}$ and we are done. ∎
## 5\. Projections to surgery sequences
In Propositions 5.4, 5.5, and 5.6 below we verify that a surgery path has a
projection map satisfying three properties, called here the _retraction axiom_
, the _Lipschitz axiom_ , and the _contraction axiom_. These were first set
out by Masur and Minsky [12, Definition 2.2]. We closely follow Handel and
Mosher [9]. We also refer to the paper of Hilion and Horbez [11]. We emphasize
that the various constants appearing in our argument are _uniform_ , that is,
independent of the surface $S=S_{g,n}$, mainly by virtue of Lemma 3.3.
The relevance of the three axioms is given by the following theorem of Masur
and Minsky [12, Theorem 2.3].
###### Theorem 5.1.
If $\mathcal{X}$ has an almost transitive family of paths, with projections
satisfying the three axioms, then $\mathcal{X}^{(1)}$ is hyperbolic.
Furthermore, the paths in the family are uniform reparametrized quasi-
geodesics.
Before turning to definitions, we remark that the hyperbolicity constant and
the quasi-geodesic constants depend only on the constants coming from almost
transitivity and from the three axioms. In Section 6 we provide a proof of
Theorem 5.1, giving an estimate for the resulting hyperbolicity constant.
### 5.1. Transitivity
Suppose that $\mathcal{X}$ is a flag simplicial complex. A _path_ is a
sequence $\\{\sigma_{i}\\}_{i=0}^{N}$ of simplices in $\mathcal{X}$. A family
of paths in $\mathcal{X}$ is _$d$ –transitive_ (or simply _almost transitive_)
if for any vertices $x,y\in\mathcal{X}^{(0)}$ there exists a path
$\\{\sigma_{i}\\}_{i=0}^{N}$ in the family such that
$\operatorname{inner}(x,\sigma_{0})$,
$\operatorname{inner}(\sigma_{i},\sigma_{i+1})$, and
$\operatorname{inner}(\sigma_{N},y)$ are all at most $d$.
###### Lemma 5.2 (Transitivity).
Surgery sequences form a $2$–transitive family of paths.
###### Proof.
Fix $\alpha,\beta\in\mathcal{AC}(S)$. Pick an oriented arc
$\omega\in\mathcal{AC}(S)$ so that $\operatorname{i}(\beta,\omega)=0$. Let
$\\{A_{i}\\}_{i=0}^{N}$ be the surgery sequence starting at
$A_{0}=\\{\alpha\\}$ with target $\omega$. Since
$\operatorname{inner}(A_{N},\beta)\leq 2$, the lemma is proved. ∎
### 5.2. Projection
We now define the projection map to a surgery sequence, following Handel and
Mosher, see Figure 1.1. We then state and verify the three axioms in our
setting.
###### Definition 5.3 (Projection).
Suppose $\\{A_{i}\\}_{i=0}^{N}$ is a surgery sequence with target $\omega$. We
define the _projection map_ $\pi\colon\mathcal{AC}(S)\to[0,N]$ as follows. Fix
$\beta\in\mathcal{AC}(S)$. Suppose that $\\{B_{j}\\}$ is the surgery sequence
starting at $B=\\{\beta\\}$ with target $\omega$. Define $\pi(\beta)$ to be
the least index $m\in[0,N]$ so that there is an index $k$ with $A_{m}\cap
B_{k}\neq\emptyset$. If no such index $m$ exists then we set $\pi(\beta)=N$.
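The projection is again determined by elementary set intersections once the two surgery sequences are known. A minimal Python sketch of Definition 5.3 (ours, on toy data):

```python
# A sketch (ours) of the projection of Definition 5.3, with surgery sequences
# modelled as lists of sets of labels.

def project(A_seq, B_seq):
    """Least index m such that some A_m meets some B_k; N if none exists."""
    for m, A_m in enumerate(A_seq):
        if any(A_m & B_k for B_k in B_seq):
            return m
    return len(A_seq) - 1

A_seq = [{"a"}, {"b", "c"}, {"c", "d"}]
B_seq = [{"x"}, {"y", "c"}]
print(project(A_seq, B_seq))   # 1, since A_1 and B_1 share "c"
```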
In the following we use the notation $[i,j]=[\min\\{i,j\\},\max\\{i,j\\}]$
when the order is not important. We also write $A[i,j]$ for the union
$\cup_{k\in[i,j]}A_{k}$.
###### Proposition 5.4 (Retraction).
For any surgery sequence $\\{A_{i}\\}_{i=0}^{N}$, index $k\in[0,N]$, and
element $\beta\in A_{k}$ we have the diameter of $A[\pi(\beta),k]$ is at most
two.
###### Proof.
Let $\\{B_{j}\\}_{j=k}^{N}$ be the surgery sequence subordinate to
$\\{A_{i}\\}_{i=k}^{N}$ that starts at $B=\\{\beta\\}$. Set $m=\pi(\beta)$;
note that $m\leq k$, as $\beta\in B_{k}\subset A_{k}$.
Suppose that $A_{m}\cap B_{\ell}\neq\emptyset$ for some $\ell\geq k$. As
$\\{B_{j}\\}$ is subordinate to $\\{A_{i}\\}$ we have $B_{\ell}\subset
A_{\ell}$. Pick any $\gamma\in A_{m}\cap A_{\ell}$. By Lemma 4.1 we have that
$[m,\ell]$ lies in $\phi(\gamma)$, the footprint of $\gamma$. Thus $[m,k]$
lies in $\phi(\gamma)$. Thus the diameter of $A[m,k]$ is at most two,
finishing the proof. ∎
Instead of using footprints, Hilion and Horbez [11, Proposition 5.1] verify
the retraction axiom by using the fact that intersection numbers decrease
monotonically along a surgery sequence.
The verification of the final two axioms is identical to that of Handel and
Mosher [9]: replace their Proposition 6.5 in the argument of Section 6.3 with
Lemma 3.3. Alternatively, in the geometric setting these arguments appear in
Section 7 of [11]: replace their Proposition 7.1 with our Lemma 3.3.
###### Proposition 5.5 (Lipschitz).
For any surgery sequence $\\{A_{i}\\}_{i=0}^{N}$ and any vertices
$\beta,\gamma\in\mathcal{AC}(S)$, if $d_{S}(\beta,\gamma)\leq 1$ then the
diameter of $A[\pi(\beta),\pi(\gamma)]$ is at most $14$.
###### Proof.
Let $m=\pi(\beta)$ and $k=\pi(\gamma)$. Without loss of generality we may
assume that $m\leq k$. There are two cases. Suppose that
$\operatorname{inner}(A_{m},\omega)\leq 6$. By Proposition 2.5, for all $i\geq
m$ we have $\operatorname{inner}(A_{i},\omega)\leq 6$. It follows that the
diameter of $A[m,k]$ is at most $14$.
Suppose instead that $\operatorname{inner}(A_{m},\omega)\geq 7$. Fix some
$\beta^{\prime}\in A_{m}$, a descendent of $\beta$. Thus there is a descendent
$\gamma^{\prime}$ of $\gamma$ with $d_{S}(\beta^{\prime},\gamma^{\prime})\leq
1$. Set $B^{\prime}=\\{\beta^{\prime},\gamma^{\prime}\\}$ and note that
$\operatorname{inner}(B^{\prime},\omega)\geq 6$. Let $\\{B_{i}^{\prime}\\}$ be
the resulting surgery sequence with target $\omega$.
By Lemma 3.3, there is an index $p$ and some $\delta\in B_{p}^{\prime}$ that
is a common descendent of both $\beta^{\prime}$ and $\gamma^{\prime}$.
Additionally, any vertex of $B^{\prime}[0,p]$ has inner distance to
$B^{\prime}=B_{0}^{\prime}$ of at most five. Now, since $\delta$ is a
descendent of $\beta^{\prime}$ there is some least index $q$ so that
$\delta\in A_{q}$. Thus $k\leq q$. It follows that the diameter of $A[m,k]$ is
at most $14$. ∎
###### Proposition 5.6 (Contraction).
There are constants $a,b,c$ with the following property. For any surgery
sequence $\\{A_{i}\\}_{i=0}^{N}$ and any vertices
$\beta,\gamma\in\mathcal{AC}(S)$, if
* •
$\operatorname{inner}(\beta,A[0,N])\geq a$ and
* •
$d_{S}(\beta,\gamma)\leq b\cdot\operatorname{inner}(\beta,A[0,N])$
then the diameter of $A[\pi(\beta),\pi(\gamma)]$ is at most $c$.
In fact, the following values suffice: $a=24$, $b=\frac{1}{8}$ and $c=14$.
###### Proof.
Suppose $\\{A_{i}\\}_{i=0}^{N}$ is a surgery sequence with target $\omega$.
Let $\pi\colon\mathcal{AC}(S)\to[0,N]$ denote the projection to the surgery
sequence $\\{A_{i}\\}$. Let $\\{B_{j}\\}_{j=0}^{M}$ be the surgery sequence
starting with $B_{0}=\\{\beta\\}$ with target $\omega$.
The contraction axiom is verified by repeatedly applying Lemma 3.3: if two
arcs or curves are far from $\\{A_{i}\\}_{i=0}^{N}$ but proportionally close
to one another, then their surgery sequences have a common descendant prior to
intersecting $\\{A_{i}\\}$. An application of the Lipschitz axiom, Proposition
5.5, completes the proof.
We begin with a claim. For the purpose of the claim, we use weaker hypotheses:
$\operatorname{inner}(\beta,A[0,N])\geq 21$ and
$d_{S}(\beta,\gamma)\leq\frac{1}{7}\operatorname{inner}(\beta,A[0,N])$.
###### Claim.
There is an index $k\in[0,M]$ so that
* •
$B_{k}$ contains a descendent of $\gamma$ and
* •
$\operatorname{inner}(\beta,B_{j})\leq 6d_{S}(\beta,\gamma)$ for all
$j\in[0,k]$.
###### Proof of Claim.
Fix $\alpha\in\mathcal{AC}(S)$ such that
$d_{S}(\beta,\alpha)=d_{S}(\beta,\gamma)-1$ and
$\operatorname{i}(\alpha,\gamma)=0$. By induction, there is an index
$\ell\in[0,M]$ such that $B_{\ell}$ contains a descendent of $\alpha$ and such
that $\operatorname{inner}(\beta,B_{j})\leq
6d_{S}(\beta,\alpha)=6d_{S}(\beta,\gamma)-6$ for all $j\in[0,\ell]$. Let
$\beta^{\prime}\in B_{\ell}$ be such a descendent. As
$\operatorname{i}(\alpha,\gamma)=0$, it follows that $\gamma$ has a
descendant, $\gamma^{\prime}$, that misses $\beta^{\prime}$. Let
$B^{\prime}=\\{\beta^{\prime},\gamma^{\prime}\\}$ and let
$\\{B^{\prime}_{i}\\}$ be the resulting surgery sequence with target $\omega$.
We have:
$\begin{aligned}
\operatorname{inner}(B^{\prime},\omega) &\geq \operatorname{inner}(B_{\ell},\omega)-1 \\
&\geq d_{S}(\beta,\omega)-\operatorname{inner}(\beta,B_{\ell})-2 \\
&\geq \operatorname{inner}(\beta,A_{N})-1-\left(6d_{S}(\beta,\gamma)-6\right)-2 \\
&\geq \tfrac{1}{7}\operatorname{inner}(\beta,A_{N})+3 \\
&\geq 6.
\end{aligned}$
As in the proof of Proposition 5.5, we use Lemma 3.3 to obtain an index $p$
and element $\delta\in B^{\prime}_{p}$, so that $\delta$ is a common
descendent of $\beta^{\prime}$ and $\gamma^{\prime}$. Additionally, any
element of $B^{\prime}[0,p]$ has inner distance to $B^{\prime}$ of at most
five. Let $k\in[\ell,M]$ be the first index such that $\delta\in B_{k}$.
What is left to show is that for $j\in[\ell,k]$ we have
$\operatorname{inner}(\beta,B_{j})\leq 6d_{S}(\beta,\gamma)$; by induction it
holds for $j\in[0,\ell]$. As for each $j\in[\ell,k]$ the system $B_{j}$
contains a descendent of $\beta^{\prime}$ we have:
$\begin{aligned}
\operatorname{inner}(\beta,B_{j}) &\leq \operatorname{inner}(\beta,B^{\prime})+\operatorname{inner}(B^{\prime},B_{j})+1 \\
&\leq (6d_{S}(\beta,\gamma)-6)+5+1 \\
&\leq 6d_{S}(\beta,\gamma).
\end{aligned}$
This completes the proof of the claim. ∎
We now complete the verification of the contraction axiom. There are two
cases. Suppose $\pi(\beta)\leq\pi(\gamma)$ and the weaker hypotheses hold:
$\operatorname{inner}(\beta,A[0,N])\geq 21$ and
$d_{S}(\beta,\gamma)\leq\frac{1}{7}\operatorname{inner}(\beta,A[0,N])$. Let
$k\in[0,M]$ be as in the claim and let $\gamma_{1}\in B_{k}$ be a descendent
of $\gamma$. As $\gamma_{1}$ is a descendant of $\gamma$, we have that
$\pi(\gamma)\leq\pi(\gamma_{1})$. Let $\ell\in[0,N]$ be such that
$\operatorname{inner}(\beta,A_{\ell})$ is minimal. For all $j\in[0,k]$, by the
second bullet of the claim we have:
$\begin{aligned}
\operatorname{inner}(\beta,B_{j}) &\leq 6d_{S}(\beta,\gamma) \\
&\leq \tfrac{6}{7}\operatorname{inner}(\beta,A_{\ell}) \\
&\leq \operatorname{inner}(\beta,A_{\ell})-2.
\end{aligned}$
Therefore, we have that $B_{j}\cap A_{i}=\emptyset$ for all $j\in[0,k]$ and
$i\in[0,N]$ and so $\beta$ has a descendant $\beta_{1}\in B_{k}$ such that
$\pi(\beta)=\pi(\beta_{1})$. Hence
$[\pi(\beta),\pi(\gamma)]\subset[\pi(\beta_{1}),\pi(\gamma_{1})]$. By
Proposition 5.5 as $d_{S}(\beta_{1},\gamma_{1})\leq 1$, the diameter of
$A[\pi(\beta_{1}),\pi(\gamma_{1})]$ is at most 14. Therefore the diameter of
$A[\pi(\beta),\pi(\gamma)]$ is also at most 14.
We now deal with the remaining case. Suppose $\pi(\beta)>\pi(\gamma)$,
$\operatorname{inner}(\beta,A[0,N])\geq 24$ and
$d_{S}(\beta,\gamma)\leq\frac{1}{8}\operatorname{inner}(\beta,A[0,N])$. Here
we proceed along the lines of [9, Lemma 3.2]. We find for all $i\in[0,N]$:
$\begin{aligned}
\operatorname{inner}(\gamma,A_{i}) &\geq \operatorname{inner}(\beta,A_{i})-d_{S}(\beta,\gamma) & (5.7) \\
&\geq \tfrac{7}{8}\operatorname{inner}(\beta,A_{i}) \geq 21, & (5.8) \\
d_{S}(\beta,\gamma) &\leq \tfrac{1}{8}\operatorname{inner}(\beta,A_{i}) \leq \tfrac{1}{7}\operatorname{inner}(\gamma,A_{i}).
\end{aligned}$
As $\pi(\gamma)\leq\pi(\beta)$, the above argument now implies that the
diameter of $A[\pi(\beta),\pi(\gamma)]$ is at most 14. ∎
## 6\. Hyperbolicity
In this section, we use the contraction properties of $\mathcal{AC}^{(1)}(S)$
to prove it is Gromov hyperbolic. This is already proven in [12]. However, we
need an explicit estimate for the hyperbolicity constant. Hence, we reproduce
the argument here, keeping careful track of constants.
We say a path $g\colon I\to\mathcal{X}$ is $(\ell,L)$– _Lipschitz_ if
$\frac{|s-t|}{\ell}\leq d_{\mathcal{X}}\big{(}g(s),g(t)\big{)}\leq L|s-t|.$
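For a discrete path the two inequalities above can be checked directly; here is a toy Python sketch (ours, with a hypothetical distance function), included only to make the definition concrete.

```python
# A small check (ours) of the (ell, L)-Lipschitz condition for a discrete path,
# given a distance function d on the target space.

def is_lipschitz_path(points, d, ell, L):
    n = len(points)
    return all(abs(s - t) / ell <= d(points[s], points[t]) <= L * abs(s - t)
               for s in range(n) for t in range(n) if s != t)

# Toy example: integers with the usual distance form a (1, 1)-Lipschitz path.
print(is_lipschitz_path([0, 1, 2, 3], lambda x, y: abs(x - y), 1, 1))   # True
```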
Let $a$, $b$ and $c$ be the constants from Proposition 5.6.
###### Proposition 6.1.
Suppose $g\colon[0,M]\to\mathcal{AC}^{(1)}(S)$ is $(\ell,L)$–Lipschitz and let
$\\{A_{i}\\}_{i=0}^{N}$ be a surgery sequence so that $g(0)$ misses $A_{0}$
and $g(M)$ misses $A_{N}$. Then, for every $t\in[0,M]$,
$d_{\mathcal{AC}}\big{(}g(t),\\{A_{i}\\}\big{)}\leq\frac{4c\ell L(\ell
L+1)}{b},$
assuming $\displaystyle\frac{2c\ell L}{b}\geq a$.
###### Remark 6.2.
Note that the hypothesis $\frac{2c\ell L}{b}\geq a$ holds for the constants
$a$, $b$ and $c$ given by Proposition 5.6 if $\ell,L\geq 1$.
###### Proof of Proposition 6.1.
For $t\in[0,M]$, let $g_{t}=g(t)$. Define
(6.3) $D=\frac{2c\ell L}{b},$
and let $I\subset[0,M]$ be a maximal interval such that
$d_{\mathcal{AC}}(g_{t},\\{A_{i}\\})\geq D$ for all $t\in I$. Divide $I$ into intervals of size
at most $bD/L$, and let $m$ be the number of these intervals, so that
$\frac{(m-1)bD}{L}\leq|I|\leq\frac{mbD}{L}.$
Note that the image under $g$ of every such subinterval $J$ has diameter at most
$L\cdot bD/L=bD$, while every point of $g(I)$ is at distance at least $D\geq a$ from the
surgery path $\\{A_{i}\\}$. Hence Proposition 5.6 applies, so $\pi(g(J))$ has
diameter at most $c$. Let $R$ be the largest distance from a point of
$g(I)$ to the set $\\{A_{i}\\}$. Since the endpoints of $g(I)$ are within distance
$D$ of the set $\\{A_{i}\\}$ (by the maximality of $I$, or because $g(0)$ and $g(M)$ miss $A_{0}$ and $A_{N}$), we have
$R\leq D+\frac{L|I|}{2}.$
Also, since $g$ is an $(\ell,L)$–Lipschitz path, the endpoints of $g(I)$ are
at least $|I|/\ell$ apart, while chaining the bound above over the $m$ subintervals shows they are at most $mc+2D$ apart. That is,
$\frac{(m-1)bD}{\ell L}\leq\frac{|I|}{\ell}\leq mc+2D.$
Thus,
$m(bD-c\ell L)\leq 2\ell LD+bD\quad\Longrightarrow\quad m\leq\frac{D(2\ell
L+b)}{bD-c\ell L}.$
This, in turn, implies that
$R\leq D+\frac{\ell L(mc+2D)}{2}\leq(\ell L+1)D+\frac{c\ell LD(2\ell
L+b)}{2(bD-c\ell L)}.$
From Equation 6.3 we get
$R\leq(\ell L+1)D+D(\ell L+b/2)\leq D(2\ell L+2)=\frac{4c\ell L(\ell
L+1)}{b},$
which is as we claimed. ∎
###### Theorem 6.4.
If $3g-3+n\geq 2$ and $n>0$, then $\mathcal{AC}^{(1)}(S_{g,n})$ is
$\delta$–hyperbolic where
$\delta=\frac{56c}{b}+\frac{c}{2}+1.$
###### Proof.
Consider three points $\alpha,\beta,\gamma\in\mathcal{AC}^{(1)}(S_{g,n})$.
Choose a geodesic segment connecting $\alpha$ to $\beta$ and denote it by
$[\alpha,\beta]$. Let $[\beta,\gamma]$ and $[\alpha,\gamma]$ be similarly
defined. We need to show that the geodesic segment $[\beta,\gamma]$ is
contained in a $\delta$–neighborhood of $[\alpha,\beta]\cup[\alpha,\gamma]$.
Let $\alpha^{\prime}$ be the closest point in $[\beta,\gamma]$ to $\alpha$.
The path $p_{\alpha,\beta}$ obtained from the concatenation
$[\alpha,\alpha^{\prime}]\cup[\alpha^{\prime},\beta]$ is a $(3,1)$–Lipschitz
path [12, page 147]. By Proposition 6.1,
$\text{if}\quad\ell=3,L=1,\quad\text{then}\quad R\leq\frac{48c}{b}.$
That is, $p_{\alpha,\beta}$ stays in a $(48c/b)$–neighborhood of any surgery
path $\\{A_{i}\\}$ that starts next to $\alpha$ and ends next to $\beta$.
(Recall that surgery paths are $2$–transitive.) Also by Proposition 6.1,
$\text{if}\quad\ell=L=1,\text{then}\quad R\leq\frac{8c}{b}.$
That is, the geodesic $[\alpha,\beta]$, which is a $(1,1)$–Lipschitz path,
stays in a $(8c/b)$–neighborhood of $\\{A_{i}\\}$. By the Lipschitz property
of projection, its image is $c$ dense. That is, any point in $\\{A_{i}\\}$ is
at most $8c/b+\frac{c}{2}+1$ away from a point in $[\alpha,\beta]$. Therefore,
the path $p_{\alpha,\beta}$ is contained in a
$\delta=\frac{48c}{b}+\frac{8c}{b}+\frac{c}{2}+1$
neighborhood of $[\alpha,\beta]$. A similar argument shows that the path
$p_{\alpha,\gamma}$ is contained in a $\delta$–neighborhood of
$[\alpha,\gamma]$. Hence, $[\beta,\gamma]$ is contained in a
$\delta$–neighborhood of $[\alpha,\beta]\cup[\alpha,\gamma]$. That is,
$\mathcal{AC}^{(1)}(S)$ is $\delta$–hyperbolic. ∎
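For concreteness, plugging the values $a=24$, $b=\frac{1}{8}$ and $c=14$ from Proposition 5.6 into Proposition 6.1 and Theorem 6.4 gives explicit numbers. The short Python sketch below (our arithmetic only) records them.

```python
# Making the constants explicit (our arithmetic, using a = 24, b = 1/8, c = 14
# from Proposition 5.6 and the bounds of Proposition 6.1 and Theorem 6.4).

a, b, c = 24, 1.0 / 8.0, 14

def R_bound(ell, L):
    """Bound of Proposition 6.1 for an (ell, L)-Lipschitz path,
    valid when 2*c*ell*L/b >= a."""
    assert 2 * c * ell * L / b >= a
    return 4 * c * ell * L * (ell * L + 1) / b

print(R_bound(3, 1))               # 48c/b = 5376.0
print(R_bound(1, 1))               # 8c/b  =  896.0
delta = 56 * c / b + c / 2 + 1     # Theorem 6.4
print(delta)                       # 6280.0
```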
## 7\. Inclusions
In this section, we show that the hyperbolicity of the curve complex follows
from the hyperbolicity of the arc and curve complex.
###### Corollary 7.1.
There is a constant ${\sf U}$ such that if $3g-3+n\geq 2$ and $n>0$ then
$\mathcal{C}^{(1)}(S_{g,n})$ is ${\sf U}$–hyperbolic.
###### Proof.
The surgery relation $\sigma\colon\mathcal{AC}\to\mathcal{C}$ takes curves to
themselves and sends an arc $\alpha$ to a system $A=\sigma(\alpha)$ so that
$\alpha$ is contained in a pants component of $S-A$. For
$\alpha,\beta\in\mathcal{AC}$ we have
$d_{\mathcal{C}}(\sigma(\alpha),\sigma(\beta))\leq
2d_{\mathcal{AC}}(\alpha,\beta)$
by Lemma 2.2 of [13]. On the other hand, for $\alpha,\beta\in\mathcal{C}$ we
have
$d_{\mathcal{AC}}(\alpha,\beta)\leq d_{\mathcal{C}}(\alpha,\beta).$
Thus the inclusion of $\mathcal{C}^{(1)}(S_{g,n})$ into
$\mathcal{AC}^{(1)}(S_{g,n})$ sends geodesics to $(1,2)$–Lipschitz paths.
Continuing as in the proof of Theorem 6.4, we get that the image of a geodesic
in $\mathcal{C}$ is in a uniformly bounded neighborhood of a geodesic in
$\mathcal{AC}$. Hence, the hyperbolicity of $\mathcal{AC}$ implies the
hyperbolicity of $\mathcal{C}$. ∎
We now deal with the case when $S=S_{g}$ is closed.
###### Theorem 7.2.
If $3g-3\geq 2$ then $\mathcal{C}^{(1)}(S_{g})$ is Gromov hyperbolic.
Furthermore, the constant of hyperbolicity is at most that of
$\mathcal{C}^{(1)}(S_{g,1})$.
###### Proof.
Let $\Sigma=S_{g,1}$. By Corollary 7.1 we have $\mathcal{C}^{(1)}(\Sigma)$ is
${\sf U}$–hyperbolic. By Theorem 1.2 of [18], the curve complex
$\mathcal{C}^{(1)}(S)$ isometrically embeds in the curve complex
$\mathcal{C}^{(1)}(\Sigma)$. ∎
## References
* [1] Tarik Aougab. Uniform hyperbolicity of the graphs of curves. December 2012, arXiv:1212.3160.
* [2] Jason Behrstock, Bruce Kleiner, Yair Minsky, and Lee Mosher. Geometry and rigidity of mapping class groups. Geom. Topol., 16(2):781–888, 2012, arXiv:0801.2006.
* [3] Mladen Bestvina and Mark Feighn. Hyperbolicity of the complex of free factors. July 2011, arXiv:1107.3308.
* [4] Brian H. Bowditch. Intersection numbers and the hyperbolicity of the curve complex. J. Reine Angew. Math., 598:105–129, 2006. bhb-curvecomplex.pdf.
* [5] Brian H. Bowditch. Uniform hyperbolicity of the curve graphs, 2012. uniformhyp.pdf.
* [6] Jeffrey Brock, Howard Masur, and Yair Minsky. Asymptotics of Weil-Petersson geodesic. I. Ending laminations, recurrence, and flows. Geom. Funct. Anal., 19(5):1229–1257, 2010, arXiv:0802.1370.
* [7] Jeffrey F. Brock, Richard D. Canary, and Yair N. Minsky. The classification of Kleinian surface groups, II: The ending lamination conjecture. Ann. of Math. (2), 176(1):1–149, 2012, arXiv:math/0412006v2.
* [8] Ursula Hamenstädt. Geometry of the complex of curves and of Teichmüller space. In Handbook of Teichmüller theory. Vol. I, volume 11 of IRMA Lect. Math. Theor. Phys., pages 447–467. Eur. Math. Soc., Zürich, 2007, arXiv:math/0502256.
* [9] Michael Handel and Lee Mosher. The free splitting complex of a free group I: Hyperbolicity. November 2011, arXiv:1111.1994.
* [10] Sebastian Hensel, Piotr Przytycki, and Richard C. H. Webb. Slim unicorns and uniform hyperbolicity for arc graphs and curve graphs. January 2013, arXiv:1301.5577.
* [11] Arnaud Hilion and Camille Horbez. The hyperbolicity of the sphere complex via surgery paths. October 2012, arXiv:1210.6183.
* [12] Howard A. Masur and Yair N. Minsky. Geometry of the complex of curves. I. Hyperbolicity. Invent. Math., 138(1):103–149, 1999, arXiv:math/9804098v2.
* [13] Howard A. Masur and Yair N. Minsky. Geometry of the complex of curves. II. Hierarchical structure. Geom. Funct. Anal., 10(4):902–974, 2000, arXiv:math/9807150v1.
* [14] Yair Minsky. The classification of Kleinian surface groups. I. Models and bounds. Ann. of Math. (2), 171(1):1–107, 2010, arXiv:math/0302208v3.
* [15] Yair N. Minsky. The classification of punctured-torus groups. Ann. of Math. (2), 149(2):559–626, 1999. arXiv:math/9807001.
* [16] Kasra Rafi. A characterization of short curves of a Teichmüller geodesic. Geom. Topol., 9:179–202 (electronic), 2005. arXiv:math/0404227.
* [17] Kasra Rafi. Hyperbolicity in Teichmüller space. November 2010, arXiv:1011.6004.
* [18] Kasra Rafi and Saul Schleimer. Covers and the curve complex. Geom. Topol., 13(4):2141–2162, 2009, arXiv:math/0701719v2.
|
arxiv-papers
| 2013-02-22T08:41:33 |
2024-09-04T02:49:42.014086
|
{
"license": "Public Domain",
"authors": "Matt Clay, Kasra Rafi, Saul Schleimer",
"submitter": "Saul Schleimer",
"url": "https://arxiv.org/abs/1302.5519"
}
|
1302.5578
|
EUROPEAN ORGANIZATION FOR NUCLEAR RESEARCH (CERN)
CERN-PH-EP-2013-020 LHCb-PAPER-2012-057 June 13, 2013
Measurements of the $\Lambda^{0}_{b}\to J/\psi\,\Lambda$ decay amplitudes and the $\Lambda^{0}_{b}$ polarisation in $pp$ collisions at $\sqrt{s}=7\,\mathrm{TeV}$
The LHCb collaboration†††Authors are listed on the following pages.
An angular analysis of $\Lambda^{0}_{b}\to J/\psi\,\Lambda$ decays is performed using a data sample corresponding to $1.0\,\mathrm{fb}^{-1}$ collected in $pp$ collisions at $\sqrt{s}=7\,\mathrm{TeV}$ with the LHCb detector at the LHC. A parity violating asymmetry parameter characterising the $\Lambda^{0}_{b}\to J/\psi\,\Lambda$ decay of $0.05\pm 0.17\pm 0.07$ and a $\Lambda^{0}_{b}$ transverse production polarisation of $0.06\pm 0.07\pm 0.02$ are measured, where the first uncertainty is statistical and the second systematic.
Submitted to Physics Letters B
© CERN on behalf of the LHCb collaboration, license CC-BY-3.0.
LHCb collaboration
R. Aaij40, C. Abellan Beteta35,n, B. Adeva36, M. Adinolfi45, C. Adrover6, A.
Affolder51, Z. Ajaltouni5, J. Albrecht9, F. Alessio37, M. Alexander50, S.
Ali40, G. Alkhazov29, P. Alvarez Cartelle36, A.A. Alves Jr24,37, S. Amato2, S.
Amerio21, Y. Amhis7, L. Anderlini17,f, J. Anderson39, R. Andreassen59, R.B.
Appleby53, O. Aquines Gutierrez10, F. Archilli18, A. Artamonov 34, M.
Artuso56, E. Aslanides6, G. Auriemma24,m, S. Bachmann11, J.J. Back47, C.
Baesso57, V. Balagura30, W. Baldini16, R.J. Barlow53, C. Barschel37, S.
Barsuk7, W. Barter46, Th. Bauer40, A. Bay38, J. Beddow50, F. Bedeschi22, I.
Bediaga1, S. Belogurov30, K. Belous34, I. Belyaev30, E. Ben-Haim8, M.
Benayoun8, G. Bencivenni18, S. Benson49, J. Benton45, A. Berezhnoy31, R.
Bernet39, M.-O. Bettler46, M. van Beuzekom40, A. Bien11, S. Bifani12, T.
Bird53, A. Bizzeti17,h, P.M. Bjørnstad53, T. Blake37, F. Blanc38, J. Blouw11,
S. Blusk56, V. Bocci24, A. Bondar33, N. Bondar29, W. Bonivento15, S. Borghi53,
A. Borgia56, T.J.V. Bowcock51, E. Bowen39, C. Bozzi16, T. Brambach9, J. van
den Brand41, J. Bressieux38, D. Brett53, M. Britsch10, T. Britton56, N.H.
Brook45, H. Brown51, I. Burducea28, A. Bursche39, G. Busetto21,q, J.
Buytaert37, S. Cadeddu15, O. Callot7, M. Calvi20,j, M. Calvo Gomez35,n, A.
Camboni35, P. Campana18,37, A. Carbone14,c, G. Carboni23,k, R. Cardinale19,i,
A. Cardini15, H. Carranza-Mejia49, L. Carson52, K. Carvalho Akiba2, G.
Casse51, M. Cattaneo37, Ch. Cauet9, M. Charles54, Ph. Charpentier37, P.
Chen3,38, N. Chiapolini39, M. Chrzaszcz 25, K. Ciba37, X. Cid Vidal36, G.
Ciezarek52, P.E.L. Clarke49, M. Clemencic37, H.V. Cliff46, J. Closier37, C.
Coca28, V. Coco40, J. Cogan6, E. Cogneras5, P. Collins37, A. Comerma-
Montells35, A. Contu15, A. Cook45, M. Coombes45, S. Coquereau8, G. Corti37, B.
Couturier37, G.A. Cowan38, D. Craik47, S. Cunliffe52, R. Currie49, C.
D’Ambrosio37, P. David8, P.N.Y. David40, I. De Bonis4, K. De Bruyn40, S. De
Capua53, M. De Cian39, J.M. De Miranda1, M. De Oyanguren Campos35,o, L. De
Paula2, W. De Silva59, P. De Simone18, D. Decamp4, M. Deckenhoff9, L. Del
Buono8, D. Derkach14, O. Deschamps5, F. Dettori41, A. Di Canto11, H.
Dijkstra37, M. Dogaru28, S. Donleavy51, F. Dordei11, A. Dosil Suárez36, D.
Dossett47, A. Dovbnya42, F. Dupertuis38, R. Dzhelyadin34, A. Dziurda25, A.
Dzyuba29, S. Easo48,37, U. Egede52, V. Egorychev30, S. Eidelman33, D. van
Eijk40, S. Eisenhardt49, U. Eitschberger9, R. Ekelhof9, L. Eklund50, I. El
Rifai5, Ch. Elsasser39, D. Elsby44, A. Falabella14,e, C. Färber11, G.
Fardell49, C. Farinelli40, S. Farry12, V. Fave38, D. Ferguson49, V. Fernandez
Albor36, F. Ferreira Rodrigues1, M. Ferro-Luzzi37, S. Filippov32, C.
Fitzpatrick37, M. Fontana10, F. Fontanelli19,i, R. Forty37, O. Francisco2, M.
Frank37, C. Frei37, M. Frosini17,f, S. Furcas20, E. Furfaro23, A. Gallas
Torreira36, D. Galli14,c, M. Gandelman2, P. Gandini54, Y. Gao3, J. Garofoli56,
P. Garosi53, J. Garra Tico46, L. Garrido35, C. Gaspar37, R. Gauld54, E.
Gersabeck11, M. Gersabeck53, T. Gershon47,37, Ph. Ghez4, V. Gibson46, V.V.
Gligorov37, C. Göbel57, D. Golubkov30, A. Golutvin52,30,37, A. Gomes2, H.
Gordon54, M. Grabalosa Gándara5, R. Graciani Diaz35, L.A. Granado Cardoso37,
E. Graugés35, G. Graziani17, A. Grecu28, E. Greening54, S. Gregson46, O.
Grünberg58, B. Gui56, E. Gushchin32, Yu. Guz34, T. Gys37, C. Hadjivasiliou56,
G. Haefeli38, C. Haen37, S.C. Haines46, S. Hall52, T. Hampson45, S. Hansmann-
Menzemer11, N. Harnew54, S.T. Harnew45, J. Harrison53, T. Hartmann58, J. He7,
V. Heijne40, K. Hennessy51, P. Henrard5, J.A. Hernando Morata36, E. van
Herwijnen37, E. Hicks51, D. Hill54, M. Hoballah5, C. Hombach53, P. Hopchev4,
W. Hulsbergen40, P. Hunt54, T. Huse51, N. Hussain54, D. Hutchcroft51, D.
Hynds50, V. Iakovenko43, M. Idzik26, P. Ilten12, R. Jacobsson37, A. Jaeger11,
E. Jans40, P. Jaton38, F. Jing3, M. John54, D. Johnson54, C.R. Jones46, B.
Jost37, M. Kaballo9, S. Kandybei42, M. Karacson37, T.M. Karbach37, I.R.
Kenyon44, U. Kerzel37, T. Ketel41, A. Keune38, B. Khanji20, O. Kochebina7, I.
Komarov38,31, R.F. Koopman41, P. Koppenburg40, M. Korolev31, A. Kozlinskiy40,
L. Kravchuk32, K. Kreplin11, M. Kreps47, G. Krocker11, P. Krokovny33, F.
Kruse9, M. Kucharczyk20,25,j, V. Kudryavtsev33, T. Kvaratskheliya30,37, V.N.
La Thi38, D. Lacarrere37, G. Lafferty53, A. Lai15, D. Lambert49, R.W.
Lambert41, E. Lanciotti37, G. Lanfranchi18,37, C. Langenbruch37, T. Latham47,
C. Lazzeroni44, R. Le Gac6, J. van Leerdam40, J.-P. Lees4, R. Lefèvre5, A.
Leflat31,37, J. Lefrançois7, S. Leo22, O. Leroy6, B. Leverington11, Y. Li3, L.
Li Gioi5, M. Liles51, R. Lindner37, C. Linn11, B. Liu3, G. Liu37, J. von
Loeben20, S. Lohn37, J.H. Lopes2, E. Lopez Asamar35, N. Lopez-March38, H. Lu3,
D. Lucchesi21,q, J. Luisier38, H. Luo49, F. Machefert7, I.V.
Machikhiliyan4,30, F. Maciuc28, O. Maev29,37, S. Malde54, G. Manca15,d, G.
Mancinelli6, U. Marconi14, R. Märki38, J. Marks11, G. Martellotti24, A.
Martens8, L. Martin54, A. Martín Sánchez7, M. Martinelli40, D. Martinez
Santos41, D. Martins Tostes2, A. Massafferri1, R. Matev37, Z. Mathe37, C.
Matteuzzi20, E. Maurice6, A. Mazurov16,32,37,e, J. McCarthy44, R. McNulty12,
A. Mcnab53, B. Meadows59,54, F. Meier9, M. Meissner11, M. Merk40, D.A.
Milanes8, M.-N. Minard4, J. Molina Rodriguez57, S. Monteil5, D. Moran53, P.
Morawski25, M.J. Morello22,s, R. Mountain56, I. Mous40, F. Muheim49, K.
Müller39, R. Muresan28, B. Muryn26, B. Muster38, P. Naik45, T. Nakada38, R.
Nandakumar48, I. Nasteva1, M. Needham49, N. Neufeld37, A.D. Nguyen38, T.D.
Nguyen38, C. Nguyen-Mau38,p, M. Nicol7, V. Niess5, R. Niet9, N. Nikitin31, T.
Nikodem11, A. Nomerotski54, A. Novoselov34, A. Oblakowska-Mucha26, V.
Obraztsov34, S. Oggero40, S. Ogilvy50, O. Okhrimenko43, R. Oldeman15,d,37, M.
Orlandea28, J.M. Otalora Goicochea2, P. Owen52, B.K. Pal56, A. Palano13,b, M.
Palutan18, J. Panman37, A. Papanestis48, M. Pappagallo50, C. Parkes53, C.J.
Parkinson52, G. Passaleva17, G.D. Patel51, M. Patel52, G.N. Patrick48, C.
Patrignani19,i, C. Pavel-Nicorescu28, A. Pazos Alvarez36, A. Pellegrino40, G.
Penso24,l, M. Pepe Altarelli37, S. Perazzini14,c, D.L. Perego20,j, E. Perez
Trigo36, A. Pérez-Calero Yzquierdo35, P. Perret5, M. Perrin-Terrin6, G.
Pessina20, K. Petridis52, A. Petrolini19,i, A. Phan56, E. Picatoste Olloqui35,
B. Pietrzyk4, T. Pilař47, D. Pinci24, S. Playfer49, M. Plo Casasus36, F.
Polci8, G. Polok25, A. Poluektov47,33, E. Polycarpo2, D. Popov10, B.
Popovici28, C. Potterat35, A. Powell54, J. Prisciandaro38, V. Pugatch43, A.
Puig Navarro38, G. Punzi22,r, W. Qian4, J.H. Rademacker45, B.
Rakotomiaramanana38, M.S. Rangel2, I. Raniuk42, N. Rauschmayr37, G. Raven41,
S. Redford54, M.M. Reid47, A.C. dos Reis1, S. Ricciardi48, A. Richards52, K.
Rinnert51, V. Rives Molina35, D.A. Roa Romero5, P. Robbe7, E. Rodrigues53, P.
Rodriguez Perez36, S. Roiser37, V. Romanovsky34, A. Romero Vidal36, J.
Rouvinet38, T. Ruf37, F. Ruffini22, H. Ruiz35, P. Ruiz Valls35,o, G.
Sabatino24,k, J.J. Saborido Silva36, N. Sagidova29, P. Sail50, B. Saitta15,d,
C. Salzmann39, B. Sanmartin Sedes36, M. Sannino19,i, R. Santacesaria24, C.
Santamarina Rios36, E. Santovetti23,k, M. Sapunov6, A. Sarti18,l, C.
Satriano24,m, A. Satta23, M. Savrie16,e, D. Savrina30,31, P. Schaack52, M.
Schiller41, H. Schindler37, M. Schlupp9, M. Schmelling10, B. Schmidt37, O.
Schneider38, A. Schopper37, M.-H. Schune7, R. Schwemmer37, B. Sciascia18, A.
Sciubba24, M. Seco36, A. Semennikov30, K. Senderowska26, I. Sepp52, N.
Serra39, J. Serrano6, P. Seyfert11, M. Shapkin34, I. Shapoval42,37, P.
Shatalov30, Y. Shcheglov29, T. Shears51,37, L. Shekhtman33, O. Shevchenko42,
V. Shevchenko30, A. Shires52, R. Silva Coutinho47, T. Skwarnicki56, N.A.
Smith51, E. Smith54,48, M. Smith53, M.D. Sokoloff59, F.J.P. Soler50, F.
Soomro18,37, D. Souza45, B. Souza De Paula2, B. Spaan9, A. Sparkes49, P.
Spradlin50, F. Stagni37, S. Stahl11, O. Steinkamp39, S. Stoica28, S. Stone56,
B. Storaci39, M. Straticiuc28, U. Straumann39, V.K. Subbiah37, S. Swientek9,
V. Syropoulos41, M. Szczekowski27, P. Szczypka38,37, T. Szumlak26, S.
T’Jampens4, M. Teklishyn7, E. Teodorescu28, F. Teubert37, C. Thomas54, E.
Thomas37, J. van Tilburg11, V. Tisserand4, M. Tobin39, S. Tolk41, D.
Tonelli37, S. Topp-Joergensen54, N. Torr54, E. Tournefier4,52, S. Tourneur38,
M.T. Tran38, M. Tresch39, A. Tsaregorodtsev6, P. Tsopelas40, N. Tuning40, M.
Ubeda Garcia37, A. Ukleja27, D. Urner53, U. Uwer11, V. Vagnoni14, G.
Valenti14, R. Vazquez Gomez35, P. Vazquez Regueiro36, S. Vecchi16, J.J.
Velthuis45, M. Veltri17,g, G. Veneziano38, M. Vesterinen37, B. Viaud7, D.
Vieira2, X. Vilasis-Cardona35,n, A. Vollhardt39, D. Volyanskyy10, D. Voong45,
A. Vorobyev29, V. Vorobyev33, C. Voß58, H. Voss10, R. Waldi58, R. Wallace12,
S. Wandernoth11, J. Wang56, D.R. Ward46, N.K. Watson44, A.D. Webber53, D.
Websdale52, M. Whitehead47, J. Wicht37, J. Wiechczynski25, D. Wiedner11, L.
Wiggers40, G. Wilkinson54, M.P. Williams47,48, M. Williams55, F.F. Wilson48,
J. Wishahi9, M. Witek25, S.A. Wotton46, S. Wright46, S. Wu3, K. Wyllie37, Y.
Xie49,37, F. Xing54, Z. Xing56, Z. Yang3, R. Young49, X. Yuan3, O.
Yushchenko34, M. Zangoli14, M. Zavertyaev10,a, F. Zhang3, L. Zhang56, W.C.
Zhang12, Y. Zhang3, A. Zhelezov11, A. Zhokhov30, L. Zhong3, A. Zvyagin37.
1Centro Brasileiro de Pesquisas Físicas (CBPF), Rio de Janeiro, Brazil
2Universidade Federal do Rio de Janeiro (UFRJ), Rio de Janeiro, Brazil
3Center for High Energy Physics, Tsinghua University, Beijing, China
4LAPP, Université de Savoie, CNRS/IN2P3, Annecy-Le-Vieux, France
5Clermont Université, Université Blaise Pascal, CNRS/IN2P3, LPC, Clermont-
Ferrand, France
6CPPM, Aix-Marseille Université, CNRS/IN2P3, Marseille, France
7LAL, Université Paris-Sud, CNRS/IN2P3, Orsay, France
8LPNHE, Université Pierre et Marie Curie, Université Paris Diderot,
CNRS/IN2P3, Paris, France
9Fakultät Physik, Technische Universität Dortmund, Dortmund, Germany
10Max-Planck-Institut für Kernphysik (MPIK), Heidelberg, Germany
11Physikalisches Institut, Ruprecht-Karls-Universität Heidelberg, Heidelberg,
Germany
12School of Physics, University College Dublin, Dublin, Ireland
13Sezione INFN di Bari, Bari, Italy
14Sezione INFN di Bologna, Bologna, Italy
15Sezione INFN di Cagliari, Cagliari, Italy
16Sezione INFN di Ferrara, Ferrara, Italy
17Sezione INFN di Firenze, Firenze, Italy
18Laboratori Nazionali dell’INFN di Frascati, Frascati, Italy
19Sezione INFN di Genova, Genova, Italy
20Sezione INFN di Milano Bicocca, Milano, Italy
21Sezione INFN di Padova, Padova, Italy
22Sezione INFN di Pisa, Pisa, Italy
23Sezione INFN di Roma Tor Vergata, Roma, Italy
24Sezione INFN di Roma La Sapienza, Roma, Italy
25Henryk Niewodniczanski Institute of Nuclear Physics Polish Academy of
Sciences, Kraków, Poland
26AGH University of Science and Technology, Kraków, Poland
27National Center for Nuclear Research (NCBJ), Warsaw, Poland
28Horia Hulubei National Institute of Physics and Nuclear Engineering,
Bucharest-Magurele, Romania
29Petersburg Nuclear Physics Institute (PNPI), Gatchina, Russia
30Institute of Theoretical and Experimental Physics (ITEP), Moscow, Russia
31Institute of Nuclear Physics, Moscow State University (SINP MSU), Moscow,
Russia
32Institute for Nuclear Research of the Russian Academy of Sciences (INR RAN),
Moscow, Russia
33Budker Institute of Nuclear Physics (SB RAS) and Novosibirsk State
University, Novosibirsk, Russia
34Institute for High Energy Physics (IHEP), Protvino, Russia
35Universitat de Barcelona, Barcelona, Spain
36Universidad de Santiago de Compostela, Santiago de Compostela, Spain
37European Organization for Nuclear Research (CERN), Geneva, Switzerland
38Ecole Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
39Physik-Institut, Universität Zürich, Zürich, Switzerland
40Nikhef National Institute for Subatomic Physics, Amsterdam, The Netherlands
41Nikhef National Institute for Subatomic Physics and VU University Amsterdam,
Amsterdam, The Netherlands
42NSC Kharkiv Institute of Physics and Technology (NSC KIPT), Kharkiv, Ukraine
43Institute for Nuclear Research of the National Academy of Sciences (KINR),
Kyiv, Ukraine
44University of Birmingham, Birmingham, United Kingdom
45H.H. Wills Physics Laboratory, University of Bristol, Bristol, United
Kingdom
46Cavendish Laboratory, University of Cambridge, Cambridge, United Kingdom
47Department of Physics, University of Warwick, Coventry, United Kingdom
48STFC Rutherford Appleton Laboratory, Didcot, United Kingdom
49School of Physics and Astronomy, University of Edinburgh, Edinburgh, United
Kingdom
50School of Physics and Astronomy, University of Glasgow, Glasgow, United
Kingdom
51Oliver Lodge Laboratory, University of Liverpool, Liverpool, United Kingdom
52Imperial College London, London, United Kingdom
53School of Physics and Astronomy, University of Manchester, Manchester,
United Kingdom
54Department of Physics, University of Oxford, Oxford, United Kingdom
55Massachusetts Institute of Technology, Cambridge, MA, United States
56Syracuse University, Syracuse, NY, United States
57Pontifícia Universidade Católica do Rio de Janeiro (PUC-Rio), Rio de
Janeiro, Brazil, associated to 2
58Institut für Physik, Universität Rostock, Rostock, Germany, associated to 11
59University of Cincinnati, Cincinnati, OH, United States, associated to 56
aP.N. Lebedev Physical Institute, Russian Academy of Science (LPI RAS),
Moscow, Russia
bUniversità di Bari, Bari, Italy
cUniversità di Bologna, Bologna, Italy
dUniversità di Cagliari, Cagliari, Italy
eUniversità di Ferrara, Ferrara, Italy
fUniversità di Firenze, Firenze, Italy
gUniversità di Urbino, Urbino, Italy
hUniversità di Modena e Reggio Emilia, Modena, Italy
iUniversità di Genova, Genova, Italy
jUniversità di Milano Bicocca, Milano, Italy
kUniversità di Roma Tor Vergata, Roma, Italy
lUniversità di Roma La Sapienza, Roma, Italy
mUniversità della Basilicata, Potenza, Italy
nLIFAELS, La Salle, Universitat Ramon Llull, Barcelona, Spain
oIFIC, Universitat de Valencia-CSIC, Valencia, Spain
pHanoi University of Science, Hanoi, Viet Nam
qUniversità di Padova, Padova, Italy
rUniversità di Pisa, Pisa, Italy
sScuola Normale Superiore, Pisa, Italy
## 1 Introduction
For $\mathchar 28931\relax^{0}_{b}$ baryons originating from energetic
$b$-quarks, heavy-quark effective theory (HQET) predicts a large fraction of
the transverse $b$-quark polarisation to be retained after hadronisation [1,
2], while the longitudinal polarisation should vanish due to parity
conservation in strong interactions. For $\mathchar 28931\relax^{0}_{b}$
baryons produced in $e^{-}e^{+}\to Z^{0}\to b\overline{}b$ transitions, a
substantial polarisation is measured [3, 4, 5], in agreement with the
$Z^{0}b\overline{}b$ coupling of the Standard Model (SM). There is no previous
polarisation measurement for $\mathchar 28931\relax^{0}_{b}$ baryons produced
at hadron colliders. The transverse polarisation is estimated to be
$\mathcal{O}(10\%)$ in Ref. [6] while Ref. [7] mentions it could be as large
as 20%. However, for $\mathchar 28931\relax$ baryons produced in fixed-target
experiments [8, 9, 10], the polarisation was observed to depend strongly on
the Feynman variable $x_{\rm F}=2\,\mbox{$p_{\rm L}$}/\sqrt{s}$, $p_{\rm L}$
being the $\mathchar 28931\relax$ longitudinal momentum and $\sqrt{s}$ the
collision centre-of-mass energy, and to vanish at $x_{\rm F}\approx 0$.
Extrapolating these results and taking into account the very small $x_{\rm
F}\approx 0.02$ value for $\mathchar 28931\relax^{0}_{b}$ produced at the
Large Hadron Collider (LHC) at $\sqrt{s}=7\mathrm{\,Te\kern-1.00006ptV}$, this
could imply a polarisation much smaller than $10\%$.
In this Letter, we perform an angular analysis of $\mathchar
28931\relax^{0}_{b}\\!\to{J\mskip-3.0mu/\mskip-2.0mu\psi\mskip
2.0mu}(\to\mu^{+}\mu^{-})\mathchar 28931\relax(\to p\pi^{-})$ decays using
$1.0\mbox{\,fb}^{-1}$ of $pp$ collision data collected in 2011 with the LHCb
detector [11] at the LHC at $\sqrt{s}=7\mathrm{\,Te\kern-1.00006ptV}$. Owing
to the well-measured $\mathchar 28931\relax\\!\to p\pi^{-}$ decay asymmetry
parameter ($\alpha_{\mathchar 28931\relax}$) [12] and the known behaviour of
the decay of a vector particle into two leptons, the final state angular
distribution contains sufficient information to measure the $\mathchar
28931\relax^{0}_{b}$ production polarisation and the decay amplitudes [13].
The asymmetry of the $\kern 1.00006pt\overline{\kern-1.00006pt\mathchar
28931\relax}$ decay ($\alpha_{\kern
0.70004pt\overline{\kern-0.70004pt\mathchar 28931\relax}}$) is much less
precisely measured [12], however by neglecting possible $C\\!P$ violation
effects, which are predicted to be very small in the SM [14, 15],
$\alpha_{\mathchar 28931\relax}$ and $-\alpha_{\kern
0.70004pt\overline{\kern-0.70004pt\mathchar 28931\relax}}$ can be assumed to
be equal. Similarly, $C\\!P$ violation effects in $\mathchar
28931\relax^{0}_{b}$ decays are neglected, and the decay amplitudes of the
$\mathchar 28931\relax^{0}_{b}$ and $\kern
1.00006pt\overline{\kern-1.00006pt\mathchar 28931\relax}^{0}_{b}$ are
therefore assumed to be equal. Inclusion of charge-conjugated modes is
henceforth implied. The asymmetry parameter $\alpha_{b}$ in $\mathchar
28931\relax^{0}_{b}\\!\to{J\mskip-3.0mu/\mskip-2.0mu\psi\mskip 2.0mu}\mathchar
28931\relax$ decays, defined in Sec. 2, is calculated in many publications as
summarised in Table 1. Most predictions lie in the range from $-21\%$ to
$-10\%$ while Ref. [7] obtains a large positive value using HQET. Note that
the theoretical predictions depend on the calculations of the form-factors and
experimental input that were available at the time they were made.
Table 1: Theoretical predictions for the $\Lambda^{0}_{b}\to J/\psi\,\Lambda$ decay asymmetry parameter $\alpha_{b}$.
Method | Value | Reference
---|---|---
Factorisation | $-0.1$ | [16]
Factorisation | $-0.18$ | [17]
Covariant oscillator quark model | $-0.208$ | [18]
Perturbative QCD | $-0.17$ to $-0.14$ | [19]
Factorisation (HQET) | $0.777$ | [7]
Light front quark model | $-0.204$ | [20]
It should be noted that $\mathchar 28931\relax^{0}_{b}$ baryons can also be
produced in the decay of heavier $b$ baryons [21, 22, 23], where the
polarisation is partially diluted [6]. These strong decays are experimentally
difficult to distinguish from $\mathchar 28931\relax^{0}_{b}$ that hadronise
directly from a $pp$ collision and therefore contribute to the measurement
presented in this study.
A sufficiently large $\mathchar 28931\relax^{0}_{b}$ polarisation would allow
the photon helicity in $\mathchar 28931\relax^{0}_{b}\\!\to\mathchar
28931\relax\gamma$ and $\mathchar 28931\relax^{0}_{b}\\!\to\mathchar
28931\relax^{*}\gamma$ decays to be probed [24, 25, 6]. The photon helicity is
sensitive to contributions from beyond the SM.
## 2 Angular formalism
The $\mathchar 28931\relax^{0}_{b}$ spin has not yet been measured but the
quark model prediction is spin $\frac{1}{2}$. The $\mathchar
28931\relax^{0}_{b}\\!\to{J\mskip-3.0mu/\mskip-2.0mu\psi\mskip 2.0mu}\mathchar
28931\relax$ mode is therefore the decay of a spin $\frac{1}{2}$ particle into
a spin $1$ and a spin $\frac{1}{2}$ particle. In the helicity formalism, the
decay can be described by four $\mathcal{M}_{\lambda_{1}\lambda_{2}}$ helicity
amplitudes ($\mathcal{M}_{+\frac{1}{2},0}$, $\mathcal{M}_{-\frac{1}{2},0}$,
$\mathcal{M}_{-\frac{1}{2},-1}$ and $\mathcal{M}_{+\frac{1}{2},+1}$) where
$\lambda_{1}$ ($\lambda_{2}$) is the helicity of the $\mathchar 28931\relax$
(${J\mskip-3.0mu/\mskip-2.0mu\psi\mskip 2.0mu}$) particle. The angular
distribution of the decay (${\frac{\mathrm{d}\Gamma}{\mathrm{d}\Omega_{5}}}$)
is calculated in Ref. [13] and reported in Ref. [26]. It depends on the five
angles shown in Fig. 1. The first angle, $\theta$, is the polar angle of the
$\mathchar 28931\relax$ momentum in the $\mathchar 28931\relax^{0}_{b}$ rest-
frame with respect to $\vec{n}=(\vec{p}_{\mathchar
28931\relax^{0}_{b}}\times\vec{p}_{{\rm beam}})/|\vec{p}_{\mathchar
28931\relax^{0}_{b}}\times\vec{p}_{{\rm beam}}|$, a unit vector perpendicular
to the production plane. The second and third angles are $\theta_{1}$ and
$\phi_{1}$, the polar and azimuthal angles of the proton in the $\mathchar
28931\relax$ rest-frame and calculated in the coordinate system defined by
$\vec{z}_{1}=\vec{p}_{\mathchar 28931\relax}/|\vec{p}_{\mathchar
28931\relax}|$ and $\vec{y}_{1}=(\vec{n}\times\vec{p}_{\mathchar
28931\relax})/|\vec{n}\times\vec{p}_{\mathchar 28931\relax}|$. The remaining
angles are $\theta_{2}$ and $\phi_{2}$, the polar and azimuthal angles of the
positively-charged muon in the ${J\mskip-3.0mu/\mskip-2.0mu\psi\mskip 2.0mu}$
rest-frame and calculated in the coordinate system defined by
$\vec{z}_{2}=\vec{p}_{{J\mskip-3.0mu/\mskip-2.0mu\psi\mskip
2.0mu}}/|\vec{p}_{{J\mskip-3.0mu/\mskip-2.0mu\psi\mskip 2.0mu}}|$ and
$\vec{y}_{2}=(\vec{n}\times\vec{p}_{{J\mskip-3.0mu/\mskip-2.0mu\psi\mskip
2.0mu}})/|\vec{n}\times\vec{p}_{{J\mskip-3.0mu/\mskip-2.0mu\psi\mskip
2.0mu}}|$. The angular distribution also depends on the four
$\mathcal{M}_{\lambda_{1}\lambda_{2}}$ amplitudes, on the $\alpha_{\mathchar
28931\relax}$ parameter, and on the transverse polarisation parameter $P_{b}$,
the projection of the $\mathchar 28931\relax^{0}_{b}$ polarisation vector on
$\vec{n}$.
Figure 1: Definition of the five angles used to describe the $\mathchar
28931\relax^{0}_{b}\\!\to{J\mskip-3.0mu/\mskip-2.0mu\psi\mskip
2.0mu}(\to\mu^{+}\mu^{-})\mathchar 28931\relax(\to p\pi^{-})$ decay.
Assuming that the detector acceptance over $\phi_{1}$ and $\phi_{2}$ is
uniformly distributed, the analysis can be simplified by integrating over the
two azimuthal angles
$\begin{aligned}
\frac{\mathrm{d}\Gamma}{\mathrm{d}\Omega_{3}}(\cos\theta,\cos\theta_{1},\cos\theta_{2})
&= \int_{-\pi}^{\pi}\int_{-\pi}^{\pi}\frac{\mathrm{d}\Gamma}{\mathrm{d}\Omega_{5}}(\theta,\theta_{1},\theta_{2},\phi_{1},\phi_{2})\,\mathrm{d}\phi_{1}\,\mathrm{d}\phi_{2} \\
&= \frac{1}{16\pi}\sum_{i=0}^{7} f_{i}(|\mathcal{M}_{+\frac{1}{2},0}|^{2},|\mathcal{M}_{-\frac{1}{2},0}|^{2},|\mathcal{M}_{-\frac{1}{2},-1}|^{2},|\mathcal{M}_{+\frac{1}{2},+1}|^{2})\,g_{i}(P_{b},\alpha_{\Lambda})\,h_{i}(\cos\theta,\cos\theta_{1},\cos\theta_{2}).
\end{aligned}$ (1)
The functions describing the decay only depend on the magnitudes of the
$\mathcal{M}_{\lambda_{1}\lambda_{2}}$ amplitudes, on $P_{b}$ and
$\alpha_{\mathchar 28931\relax}$, and on $\cos\theta$, $\cos\theta_{1}$, and
$\cos\theta_{2}$. Using the normalisation condition
${|\mathcal{M}_{+\frac{1}{2},0}|^{2}+|\mathcal{M}_{-\frac{1}{2},0}|^{2}+|\mathcal{M}_{-\frac{1}{2},-1}|^{2}+|\mathcal{M}_{+\frac{1}{2},+1}|^{2}=1}$,
the $f_{i}$ functions can be written in terms of the following three
parameters:
${\alpha_{b}\equiv|\mathcal{M}_{+\frac{1}{2},0}|^{2}-|\mathcal{M}_{-\frac{1}{2},0}|^{2}+|\mathcal{M}_{-\frac{1}{2},-1}|^{2}-|\mathcal{M}_{+\frac{1}{2},+1}|^{2}}$,
${r_{0}\equiv|\mathcal{M}_{+\frac{1}{2},0}|^{2}+|\mathcal{M}_{-\frac{1}{2},0}|^{2}}$
and
${r_{1}\equiv|\mathcal{M}_{+\frac{1}{2},0}|^{2}-|\mathcal{M}_{-\frac{1}{2},0}|^{2}}$.
The functions used to describe the angular distributions are shown in Table 2.
Four parameters ($P_{b}$, $\alpha_{b}$, $r_{0}$ and $r_{1}$) have to be
measured simultaneously from the angular distribution. The $\alpha_{b}$
parameter is the parity violating asymmetry characterising the $\mathchar
28931\relax^{0}_{b}\\!\to{J\mskip-3.0mu/\mskip-2.0mu\psi\mskip 2.0mu}\mathchar
28931\relax$ decay.
Table 2: Functions used to describe the angular distributions in three dimensions.
$i$ | $f_{i}(\alpha_{b},r_{0},r_{1})$ | $g_{i}(P_{b},\alpha_{\Lambda})$ | $h_{i}(\cos\theta,\cos\theta_{1},\cos\theta_{2})$
---|---|---|---
0 | $1$ | $1$ | $1$
1 | $\alpha_{b}$ | $P_{b}$ | $\cos\theta$
2 | $2r_{1}-\alpha_{b}$ | $\alpha_{\Lambda}$ | $\cos\theta_{1}$
3 | $2r_{0}-1$ | $P_{b}\alpha_{\Lambda}$ | $\cos\theta\cos\theta_{1}$
4 | $\frac{1}{2}(1-3r_{0})$ | $1$ | $\frac{1}{2}(3\cos^{2}\theta_{2}-1)$
5 | $\frac{1}{2}(\alpha_{b}-3r_{1})$ | $P_{b}$ | $\frac{1}{2}(3\cos^{2}\theta_{2}-1)\cos\theta$
6 | $-\frac{1}{2}(\alpha_{b}+r_{1})$ | $\alpha_{\Lambda}$ | $\frac{1}{2}(3\cos^{2}\theta_{2}-1)\cos\theta_{1}$
7 | $-\frac{1}{2}(1+r_{0})$ | $P_{b}\alpha_{\Lambda}$ | $\frac{1}{2}(3\cos^{2}\theta_{2}-1)\cos\theta\cos\theta_{1}$
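The sum in Eq. (1) is straightforward to evaluate from Table 2. The following Python sketch (our illustration only, not the analysis code of this Letter) computes the three-angle distribution from the squared helicity amplitudes, $P_{b}$ and $\alpha_{\Lambda}$; the numerical value of $\alpha_{\Lambda}$ used in the example is only indicative.

```python
import math

# A sketch (ours) of the three-angle distribution of Eq. (1), built from the
# f_i, g_i, h_i of Table 2.  The squared amplitudes (ap0, am0, amm, app) stand
# for |M_{+1/2,0}|^2, |M_{-1/2,0}|^2, |M_{-1/2,-1}|^2, |M_{+1/2,+1}|^2 and are
# assumed to be normalised to unit sum.

def angular_pdf(cos_t, cos_t1, cos_t2, P_b, alpha_L, ap0, am0, amm, app):
    alpha_b = ap0 - am0 + amm - app
    r0 = ap0 + am0
    r1 = ap0 - am0
    P2 = 0.5 * (3 * cos_t2 ** 2 - 1)          # second Legendre polynomial
    f = [1, alpha_b, 2 * r1 - alpha_b, 2 * r0 - 1,
         0.5 * (1 - 3 * r0), 0.5 * (alpha_b - 3 * r1),
         -0.5 * (alpha_b + r1), -0.5 * (1 + r0)]
    g = [1, P_b, alpha_L, P_b * alpha_L, 1, P_b, alpha_L, P_b * alpha_L]
    h = [1, cos_t, cos_t1, cos_t * cos_t1,
         P2, P2 * cos_t, P2 * cos_t1, P2 * cos_t * cos_t1]
    return sum(fi * gi * hi for fi, gi, hi in zip(f, g, h)) / (16 * math.pi)

# Example with equal amplitudes (so alpha_b = r1 = 0 and r0 = 1/2):
print(angular_pdf(0.3, -0.2, 0.5, 0.06, 0.642, 0.25, 0.25, 0.25, 0.25))
```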
## 3 Detector, trigger and simulation
The LHCb detector [11] is a single-arm forward spectrometer covering the
pseudorapidity range $2<\eta<5$, designed for the study of particles
containing $b$ or $c$ quarks. The detector includes a high precision tracking
system consisting of a silicon-strip vertex detector (VELO) surrounding the
$pp$ interaction region, a large-area silicon-strip detector located upstream
of a dipole magnet with a bending power of about $4{\rm\,Tm}$, and three
stations of silicon-strip detectors and straw drift tubes placed downstream.
The combined tracking system provides a momentum measurement with relative
uncertainty that varies from 0.4% at 5${\mathrm{\,Ge\kern-1.00006ptV\\!/}c}$
to 0.6% at 100${\mathrm{\,Ge\kern-1.00006ptV\\!/}c}$, and three-dimensional
impact parameter (IP) resolution of 20$\,\upmu\rm m$ for tracks with high
transverse momentum. Charged hadrons are identified using two ring-imaging
Cherenkov detectors (RICH) [27]. Photon, electron and hadron candidates are
identified by a calorimeter system consisting of scintillating-pad and
preshower detectors, an electromagnetic calorimeter and a hadronic
calorimeter. Muons are identified by a system composed of alternating layers
of iron and multiwire proportional chambers [28]. The trigger [29] consists of
a hardware stage, based on information from the calorimeter and muon systems,
followed by a software stage, which applies a full event reconstruction.
The hardware trigger selects events containing a muon with a transverse
momentum $\mbox{$p_{\rm T}$}>1.48{\mathrm{\,GeV\!/}c}$ or two muons with a
product of their $p_{\rm T}$ larger than $(1.3{\mathrm{\,GeV\!/}c})^{2}$. In
the subsequent software trigger, we require two oppositely-charged muons
having an invariant mass larger than $2800{\mathrm{\,MeV\!/}c^{2}}$ and
originating from the same vertex, or a single muon with
$\mbox{$p_{\rm T}$}>1.3{\mathrm{\,GeV\!/}c}$ that is significantly displaced
with respect to all the primary $pp$ interaction vertices (PVs) in the event,
or a single muon with $p>10{\mathrm{\,GeV\!/}c}$ and
$\mbox{$p_{\rm T}$}>1.7{\mathrm{\,GeV\!/}c}$. Displaced muons are identified
by means of their IP and $\chi^{2}_{\rm IP}$, where the $\chi^{2}_{\rm IP}$ is
the $\chi^{2}$ difference when the PV is fitted with or without the muon
track. Finally, we require two oppositely-charged muons with an invariant mass
within $120{\mathrm{\,MeV\!/}c^{2}}$ of the nominal ${J/\psi}$ mass [12]
forming a common vertex which is significantly displaced from the PVs.
Displaced ${J/\psi}$ vertices are identified by computing the vertex
separation $\chi^{2}$, the $\chi^{2}$ difference between the PV and the
${J/\psi}$ vertex. In the $\Lambda^{0}_{b}\!\to{J/\psi}\Lambda$ selection
described below, we use the muon pairs selected by the trigger.
Simulation is used to understand the detector efficiencies and resolutions and
to train the analysis procedure. Proton-proton collisions are generated using
Pythia 6.4 [30] with a specific LHCb configuration [31]. Decays of hadronic
particles are described by EvtGen [32] in which final state radiation is
generated using Photos [33]. The interaction of the generated particles with
the detector and its response are implemented using the Geant4 toolkit [34,
*Agostinelli:2002hh] as described in Ref. [36].
## 4 Signal selection and background rejection
A first set of loose requirements is applied to select
$\Lambda^{0}_{b}\!\to{J/\psi}\Lambda$ decays. Charged tracks are identified as
either protons or pions using information provided by the RICH system.
Candidate $\Lambda$ baryons are reconstructed from oppositely-charged proton
and pion candidates. They are reconstructed either when the $\Lambda$ decays
within the VELO (“long $\Lambda$”), or when the decay occurs outside the VELO
acceptance (“downstream $\Lambda$”). The latter category increases the
acceptance significantly for long-lived $\Lambda$ decays. In both cases, the
two tracks are required to have $p>2{\mathrm{\,GeV\!/}c}$, to be well
separated from the PVs and to originate from a common vertex. In addition,
protons are required to have $\mbox{$p_{\rm T}$}>0.5{\mathrm{\,GeV\!/}c}$ and
pions to have $\mbox{$p_{\rm T}$}>0.1{\mathrm{\,GeV\!/}c}$. Finally, the
invariant mass of the $\Lambda$ candidates is required to be within
${15{\mathrm{\,MeV\!/}c^{2}}}$ of the nominal $\Lambda$ mass [12]. To form
${J/\psi}$ candidates, two oppositely-charged muons with
$\mbox{$p_{\rm T}$}(\mu)>0.5{\mathrm{\,GeV\!/}c}$ are combined and their
invariant mass is required to be within ${80{\mathrm{\,MeV\!/}c^{2}}}$ of the
nominal ${J/\psi}$ mass. Subsequently, $\Lambda^{0}_{b}$ candidates are formed
by combining the $\Lambda$ and ${J/\psi}$ candidates. To improve the
$\Lambda^{0}_{b}$ mass resolution, the muons from the ${J/\psi}$ decay are
constrained to come from a common point and to have an invariant mass equal to
the ${J/\psi}$ mass. We constrain the $\Lambda$ and ${J/\psi}$ candidates to
originate from a common vertex and to have an invariant mass between 5120 and
${6120{\mathrm{\,MeV\!/}c^{2}}}$. Moreover, $\Lambda^{0}_{b}$ candidates must
have their momenta pointing to the associated PV by requiring
${\cos\theta_{\rm d}>0.99}$, where $\theta_{\rm d}$ is the angle between the
$\Lambda^{0}_{b}$ momentum vector and the direction from the PV to the
$\Lambda^{0}_{b}$ vertex. The associated PV is the PV having the smallest
$\chi^{2}_{\rm IP}$ value.
To reduce the combinatorial background, a multivariate selection based on a
boosted decision tree (BDT) [37, 38] with eight variables is used. Five
variables are related to the $\Lambda^{0}_{b}$ candidate:
$\cos\theta_{\rm d}$, the $\chi^{2}_{\rm IP}$, the proper decay time, the
vertex $\chi^{2}$ and the vertex separation $\chi^{2}$ between the PV and the
vertex. Here, the vertex separation $\chi^{2}$ is the difference in $\chi^{2}$
between the nominal vertex fit and a vertex fit where the $\Lambda^{0}_{b}$ is
assumed to have zero lifetime. The proper decay time is the distance between
the associated PV and the $\Lambda^{0}_{b}$ decay vertex divided by the
$\Lambda^{0}_{b}$ momentum. Two variables are related to the ${J/\psi}$
candidate: the vertex $\chi^{2}$ and the invariant mass of the two muons. The
last variable used in the BDT is the invariant mass of the $\Lambda$
candidate. The BDT is trained using simulated events for the signal and
sideband data ($M({J/\psi}\Lambda)>5800{\mathrm{\,MeV\!/}c^{2}}$) for the
background. The optimal BDT requirement is found separately for downstream and
long candidates by maximising the signal significance
$N_{\rm sig}/\sqrt{N_{\rm sig}+N_{\rm bkg}}$, where $N_{\rm sig}$ and
$N_{\rm bkg}$ are the expected signal and background yields in a tight signal
region around the $\Lambda^{0}_{b}$ mass. These two yields are estimated using
the signal and background yields measured in data after the first set of loose
requirements and using the BDT efficiency measured with the training samples.
The BDT selection keeps about $90\%$ of the signal while removing about $80\%$
($90\%$) of the background events for the downstream (long) candidates. Less
background is rejected in the downstream case due to larger contamination from
misreconstructed $B^{0}\!\to{J/\psi}K^{0}_{\rm\scriptscriptstyle S}$
background decays. Candidates with
$5550<M({J/\psi}\Lambda)<5700{\mathrm{\,MeV\!/}c^{2}}$ are used for the final
analysis. In this mass range, the
$B^{0}\!\to{J/\psi}K^{0}_{\rm\scriptscriptstyle S}$ background is found to
have a similar shape to the combinatorial background.
## 5 Fitting procedure
An unbinned extended maximum likelihood fit to the mass distribution of the
$\Lambda^{0}_{b}$ candidates is performed. The likelihood function is defined
as
$\mathcal{L}_{\rm mass}=\frac{e^{-\sum_{j}N_{j}}}{N!}\times\prod_{i=1}^{N}\left(\sum_{j}N_{j}P_{j}(M_{i}({J/\psi}\Lambda))\right),$ (2)
where $i$ runs over the events, $j$ runs over the different signal and
background probability density functions (PDF), $N_{j}$ are the yields and
$P_{j}$ the PDFs. The sum of two Crystal Ball functions [39] with opposite
side tails and common mean and width parameters is used to describe the signal
mass distribution. The mean and width parameters are left free in the fit
while the other parameters are taken from the simulated signal sample. The
background is modelled with a first-order polynomial function. The candidates
reconstructed from downstream and long $\mathchar 28931\relax$ combinations
are fitted separately taking into account that the resolution is worse for the
downstream signal candidates. The results of the fits to the mass
distributions are shown in Fig. 2. We obtain $5346\pm 96$ ($5189\pm 95$)
downstream and $1861\pm 49$ ($761\pm 36$) long signal (background) candidates.
Using the results of this fit, sWeights ($w_{\rm mass}$) are computed by means
of the sPlot technique [40], in order to statistically subtract the background
in the angular distribution.
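In practice the fit minimises the negative logarithm of Eq. (2). A minimal sketch (Python, with generic component PDFs standing in for the double Crystal Ball signal and the first-order polynomial background) is:

```python
import numpy as np

def extended_nll(masses, yields, pdfs):
    """Negative log of the extended likelihood of Eq. (2).

    yields: list of N_j; pdfs: list of callables P_j(m), each normalised over
    the fit range.  The constant log(N!) term is dropped since it does not
    depend on the fit parameters.
    """
    density = np.zeros_like(np.asarray(masses, dtype=float))
    for n_j, p_j in zip(yields, pdfs):
        density += n_j * p_j(masses)
    return np.sum(yields) - np.sum(np.log(density))
```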
To ensure accurate modelling of the signal, corrections to the $p_{\rm T}$ and
rapidity ($y$) spectra are obtained by comparing the simulation with data by
means of the sPlot technique. For the $\Lambda^{0}_{b}$ and $\Lambda$
particles, the simulated data are corrected using two-dimensional
$(\mbox{$p_{\rm T}$},y)$ distributions in order to better reproduce the data.
These distributions do not depend on the polarisation and the decay amplitudes
but have an impact on the reconstruction acceptance. The same procedure is
applied to the pion of $B^{0}\!\to{J/\psi}K^{0}_{\rm\scriptscriptstyle S}$
decays and is subsequently used to calibrate the $(\mbox{$p_{\rm T}$},y)$
spectrum of the pion of the $\Lambda^{0}_{b}\!\to{J/\psi}\Lambda$ decay.
Figure 2: Mass distribution for the $\Lambda^{0}_{b}\!\to{J/\psi}\Lambda$ mode
for the (left) downstream and (right) long candidates. The fitted signal
component is shown as a solid blue curve while the background component is
shown as a dashed red line.
Since the detector acceptance depends on the three decay angles, the
acceptance is modelled with a sum of products of Legendre polynomials
($L_{i}$)
$f_{\rm
acc}=\sum_{i,j,k}c_{ijk}L_{i}(\cos\theta)L_{j}(\cos\theta_{1})L_{k}(\cos\theta_{2}),$
(3)
where $i$ and $k$ are chosen to be even or equal to one. Unbinned maximum
likelihood fits to the simulated signal candidates are performed, separately
for downstream and long candidates. The simulated sample is produced using a
phase-space model and unpolarised $\Lambda^{0}_{b}$ baryons. The three angular
distributions are therefore uniformly generated. Acceptances of the
$\Lambda^{0}_{b}$ and $\overline{\Lambda}{}^{0}_{b}$ decays are found to be
statistically consistent. A common acceptance function is therefore used. The
maximum orders of the Legendre polynomials are chosen by comparing the fit
probability. The requirements $i<5$, $j<4$, $k<5$ and $i+j+k<9$ are chosen.
The results of the fit to the acceptance distributions are shown in Fig. 3.
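A compact way to evaluate the acceptance model of Eq. (3) is with the Legendre utilities of numpy. The sketch below is illustrative; the coefficient container is a placeholder for the fitted $c_{ijk}$.

```python
import numpy as np
from numpy.polynomial import legendre

def acceptance(cos_t, cos_t1, cos_t2, coeffs):
    """Acceptance of Eq. (3): sum_{ijk} c_ijk L_i(cos t) L_j(cos t1) L_k(cos t2).

    coeffs is a dict {(i, j, k): c_ijk}; only terms allowed by the fit
    (i, k even or equal to one, i < 5, j < 4, k < 5, i + j + k < 9) need entries.
    """
    total = 0.0
    for (i, j, k), c_ijk in coeffs.items():
        l_i = legendre.legval(cos_t,  [0.0] * i + [1.0])   # pure L_i
        l_j = legendre.legval(cos_t1, [0.0] * j + [1.0])   # pure L_j
        l_k = legendre.legval(cos_t2, [0.0] * k + [1.0])   # pure L_k
        total += c_ijk * l_i * l_j * l_k
    return total
```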
Figure 3: Projections of the acceptance function together with the simulated
signal data for (top) downstream and (bottom) long candidates.
We then perform an unbinned likelihood fit to the ($\cos\theta$,
$\cos\theta_{1}$, $\cos\theta_{2}$) distribution. Each candidate is weighted
with ${w_{\rm tot}=w_{\rm mass}\times w_{\rm acc}}$ where $w_{\rm mass}$
subtracts the background and ${w_{\rm acc}=1/f_{\rm
acc}(\cos\theta,\cos\theta_{1},\cos\theta_{2})}$ corrects for the angular
acceptance [41]. The sum of the $w_{\rm mass}$ weights over all the events is
by construction equal to the signal yield, and $w_{\rm tot}$ is normalised in
the same way. Since the weighting procedure performs background subtraction
and corrects for acceptance effects, only the signal PDF has to be included in
the fit of the angular distribution. The detector resolution is neglected in
the nominal fit as it is found to have little effect on the results. It will
be considered as a source of systematic uncertainty. The likelihood is therefore
$\mathcal{L}_{\rm ang}=\prod_{i=1}^{N}w_{\rm
tot}^{i}\frac{\mathrm{d}\Gamma}{\mathrm{d}\Omega_{3}}(\cos\theta^{i},\cos\theta_{1}^{i},\cos\theta_{2}^{i}),$
(4)
where $i$ runs over all events. A simultaneous fit to the angular
distributions of the downstream and long samples is performed. The
$\alpha_{\Lambda}$ parameter is fixed to its measured value,
$0.642\pm 0.013$ [12].
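The per-candidate weights enter the fit through a weighted log-likelihood. A minimal sketch (Python, reading the weights as exponents as in the sFit technique of Ref. [41]; the normalised signal PDF values are supplied by the caller) is:

```python
import numpy as np

def weighted_nll(pdf_values, w_tot):
    """Weighted negative log-likelihood for the angular fit (cf. Eq. (4)).

    pdf_values: per-candidate values of the normalised three-angle signal PDF;
    w_tot: per-candidate weights w_mass * w_acc.
    """
    return -np.sum(np.asarray(w_tot) * np.log(np.asarray(pdf_values)))
```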
The accurate modelling of the acceptance is checked with a similar decay,
$B^{0}\!\to{J/\psi}K^{0}_{\rm\scriptscriptstyle S}$. Here, the angular distribution is
known, and $B^{0}$ mesons are unpolarised. These decays are selected in the
same way as signal, and the fitting procedure described above is performed.
Agreement with the expected ($\cos\theta$, $\cos\theta_{1}$, $\cos\theta_{2}$)
distribution is obtained.
## 6 Results
The results of the fits to the angular distributions of the weighted
$\Lambda^{0}_{b}\!\to{J/\psi}\Lambda$ data are shown in Fig. 4. We obtain the following
results: $P_{b}=0.06\pm 0.06$, $\alpha_{b}=0.00\pm 0.10$, $r_{0}=0.58\pm 0.02$
and $r_{1}=-0.58\pm 0.06$, where the uncertainties are statistical only.
The polarisation could be different between $\Lambda^{0}_{b}$ and
$\overline{\Lambda}{}^{0}_{b}$ due to their respective production mechanisms.
The data are separated according to the $\Lambda^{0}_{b}$ flavour and fitted
using the same amplitude parameters but different parameters for the
$\Lambda^{0}_{b}$ and $\overline{\Lambda}{}^{0}_{b}$ polarisations. As
compatible results are obtained within statistical uncertainties, the
polarisations of $\Lambda^{0}_{b}$ and $\overline{\Lambda}{}^{0}_{b}$ baryons
are assumed to be equal.
Figure 4: Projections of the angular distribution of the background subtracted
and acceptance corrected $\Lambda^{0}_{b}\!\to{J/\psi}\Lambda$ data for the
(top) downstream and (bottom) long candidates. The fit is shown as solid
lines.
A possible bias is investigated by fitting samples of generated experiments
with sizes and parameters close to those measured in data. We generate many
samples varying $\alpha_{b}$ between $-0.25$ and $0.25$ while keeping $r_{0}$
equal to $-r_{1}$, thus keeping $|\mathcal{M}_{+\frac{1}{2},+1}|^{2}$ and
$|\mathcal{M}_{+\frac{1}{2},0}|^{2}$ equal to zero. We find that the fitting
procedure biases all parameters toward negative values, slightly for $P_{b}$
and $r_{0}$ ($\sim$$10\%$ of their respective statistical uncertainties) and
more significantly for $\alpha_{b}$ and $r_{1}$ ($\sim$$40\%$ of their
respective statistical uncertainties). For $P_{b}$ and $r_{0}$, the biases do
not change significantly when changing the value of $\alpha_{b}$ used to
generate the simulated samples. On the other hand, the biases on $\alpha_{b}$
and $r_{1}$ do change, and the observed discrepancies are treated as
systematic uncertainties. Moreover, the statistical uncertainties on the four
fit parameters are underestimated: again slightly for $P_{b}$ and $r_{0}$ and
significantly, by a factor of $\sim$$1.7$, for $\alpha_{b}$ and $r_{1}$.
We correct the measured values and statistical uncertainties of the four fit
parameters. The corrected statistical uncertainties are obtained by
multiplying the covariance matrix with a correction matrix obtained from the
study of the simulated samples. This correction matrix contains on its
diagonal the squares of the widths of the pull distributions of the four fit
parameters. The remaining entries of this matrix are set to zero as the
correlation matrix computed with the results of the fits of the generated
samples is found to be very close to the correlation matrix calculated when
fitting the data.
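One way to implement this correction, a sketch under the assumption that the intent is to scale each parameter's uncertainty by its pull width while keeping the correlation structure of the fit to data, is:

```python
import numpy as np

def correct_covariance(cov_fit, pull_widths):
    """Inflate the fit covariance using the pull widths from the pseudo-experiments.

    The correction carries the squared pull widths on its diagonal and zeros
    elsewhere; applied symmetrically, it rescales sigma_i by pull_widths[i]
    while leaving the correlation matrix unchanged.
    """
    w = np.asarray(pull_widths, dtype=float)
    return np.asarray(cov_fit, dtype=float) * np.outer(w, w)  # element-wise scaling
```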
Finally, the corrected result is $P_{b}=0.06\pm 0.07$, $\alpha_{b}=0.05\pm
0.17$, $r_{0}=0.58\pm 0.02$, $r_{1}=-0.56\pm 0.10$, where the uncertainties
are statistical only. The corrected statistical correlation matrix between the
four fit parameters ($P_{b}$, $\alpha_{b}$, $r_{0}$, $r_{1}$) is
$\displaystyle\begin{pmatrix}1&\phantom{-}0.10&-0.07&\phantom{-}0.13\\ &\phantom{-}1&-0.63&\phantom{-}0.95\\ &&\phantom{-}1&-0.56\\ &&&\phantom{-}1\end{pmatrix}.$
No large correlations are seen between the polarisation and the amplitude
parameters. The amplitude parameters, on the other hand, are strongly
correlated with one another, with $\alpha_{b}$ and $r_{1}$ being almost fully
correlated.
## 7 Systematic uncertainties and significance
The systematic uncertainty on each measured physics parameter is evaluated by
repeating the fit to the data varying its input parameters assuming Gaussian
distributions and taking into account correlations when possible. The
systematic uncertainties are summarised in Table 3. They are dominated by the
uncertainty arising from the acceptance function, the calibration of the
simulated signal sample and the fit bias. The uncertainty related to the
acceptance function is obtained by varying the coefficients of the Legendre
function within their uncertainties and taking into account their
correlations. For the calibration of our simulated data, the uncertainty is
obtained when changing the $(\mbox{$p_{\rm T}$},y)$ calibrations of the
$\Lambda^{0}_{b}$, $\Lambda$ and pion particles
within their uncertainties and obtaining a new acceptance function. The
function that is used to fit the data does not include the effect of the
angular resolution. The angular resolution, obtained with simulated samples,
is negligible for $\theta$ and $\theta_{2}$. However, it is large, up to
$\sim$$70\%$, for small values of $\theta_{1}$. The systematic uncertainty is
obtained by fitting simulated samples in which the resolution effect is
introduced. Effects of the deviation from a uniform acceptance in $\phi_{1}$
and $\phi_{2}$ assumed in Eq. (2) are found to be negligible. The
simplification to use only one component to describe the background is found
not to bias the result. Other systematic uncertainties are small or
negligible. These are related to the signal mass PDF parameters, the
background subtraction and $\alpha_{\Lambda}$. The uncertainty related to the
background subtraction is obtained by varying the result of the mass fit and
computing the $w_{\rm mass}$ weights again. The $\alpha_{\Lambda}$ parameter
is varied within its measurement uncertainties [12].
Table 3: Absolute systematic uncertainties on the measured parameters. Source | $P_{b}$ | $\alpha_{b}$ | $r_{0}$ | $r_{1}$
---|---|---|---|---
Acceptance | 0.02 | 0.04 | 0.006 | 0.03
Simulated data calibration | 0.01 | 0.04 | 0.006 | 0.03
Fit bias | 0.004 | 0.04 | 0.001 | 0.02
Angular resolution | 0.002 | 0.01 | $<$0.001 | 0.005
Background subtraction | 0.001 | 0.006 | 0.001 | 0.005
$\alpha_{\Lambda}$ | 0.002 | $<$0.001 | $<$0.001 | 0.01
Total (quadratic sum) | 0.02 | 0.07 | 0.01 | 0.05
To compare our results with a prediction on a parameter $p$, we compute the
significance with respect to a $p_{\rm test}$ value using a profile along $p$
of the likelihood function, i.e. the likelihood value obtained when varying
$p$ and minimising with respect to the other parameters. A Monte Carlo
integration is performed to include the systematic uncertainties in the
likelihood profiles. We perform the fit to the data when varying all
systematic uncertainties and obtain a likelihood profile for each fit of the
data. The likelihood profile which includes all systematic uncertainties is
then the average of all the obtained profiles. The significance is defined as
$\mathcal{S}(p=p_{\rm test})=\sqrt{2(\log\mathcal{L}(p_{0})-\log\mathcal{L}(p_{\rm test}))}$,
where $\mathcal{L}(p_{0})$ is the likelihood value of the nominal fit.
Significances are given in the concluding section of
this Letter.
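A sketch of this significance evaluation (Python; `profile_logl` stands for the systematics-averaged profile log-likelihood described above and is an assumed interface, not analysis code):

```python
import numpy as np

def significance(profile_logl, p_test, p_nominal):
    """S(p = p_test) = sqrt(2 * (log L(p_nominal) - log L(p_test))).

    profile_logl: callable returning the profile log-likelihood at a given
    parameter value; p_nominal: value preferred by the nominal fit.
    """
    return np.sqrt(2.0 * (profile_logl(p_nominal) - profile_logl(p_test)))
```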
## 8 Conclusion
We have performed an angular analysis of about 7200
$\Lambda^{0}_{b}\!\to{J/\psi}(\to\mu^{+}\mu^{-})\Lambda(\to p\pi^{-})$ decays.
The $\Lambda^{0}_{b}\!\to{J/\psi}\Lambda$ decay amplitudes are measured for
the first time, and the $\Lambda^{0}_{b}$ production polarisation for the
first time at a hadron collider. The results are
$\displaystyle P_{b}$ $\displaystyle=\phantom{-}0.06\pm 0.07\pm 0.02,$
$\displaystyle\alpha_{b}$ $\displaystyle=\phantom{-}0.05\pm 0.17\pm 0.07,$
$\displaystyle r_{0}$ $\displaystyle=\phantom{-}0.58\pm 0.02\pm 0.01,$
$\displaystyle r_{1}$ $\displaystyle=-0.56\pm 0.10\pm 0.05,$
which correspond to the four helicity amplitudes
$\displaystyle|\mathcal{M}_{+\frac{1}{2},0}|^{2}$
$\displaystyle=\phantom{-}0.01\pm 0.04\pm 0.03,$
$\displaystyle|\mathcal{M}_{-\frac{1}{2},0}|^{2}$
$\displaystyle=\phantom{-}0.57\pm 0.06\pm 0.03,$
$\displaystyle|\mathcal{M}_{-\frac{1}{2},-1}|^{2}$
$\displaystyle=\phantom{-}0.51\pm 0.05\pm 0.02,$
$\displaystyle|\mathcal{M}_{+\frac{1}{2},+1}|^{2}$ $\displaystyle=-0.10\pm
0.04\pm 0.03,$
where the first uncertainty is statistical and the second systematic. The
reported polarisation and amplitudes are obtained for the combination of
$\Lambda^{0}_{b}$ and $\overline{\Lambda}{}^{0}_{b}$ decays. More
data are required to probe any possible difference.
Our result cannot exclude a transverse polarisation at the order of $10\%$
[6]. However, a value of $20\%$ as mentioned in Ref. [7] is disfavoured at the
level of $2.7$ standard deviations.
For the $\Lambda^{0}_{b}$ asymmetry parameter, our result is compatible with
the predictions ranging from $-21\%$ to $-10\%$ [16, 17, 18, 19, 20] but
disagrees with the HQET prediction of $77.7\%$ [7] at the level of $5.8$
standard deviations.
## Acknowledgements
We express our gratitude to our colleagues in the CERN accelerator departments
for the excellent performance of the LHC. We thank the technical and
administrative staff at the LHCb institutes. We acknowledge support from CERN
and from the national agencies: CAPES, CNPq, FAPERJ and FINEP (Brazil); NSFC
(China); CNRS/IN2P3 and Region Auvergne (France); BMBF, DFG, HGF and MPG
(Germany); SFI (Ireland); INFN (Italy); FOM and NWO (The Netherlands); SCSR
(Poland); ANCS/IFA (Romania); MinES, Rosatom, RFBR and NRC “Kurchatov
Institute” (Russia); MinECo, XuntaGal and GENCAT (Spain); SNSF and SER
(Switzerland); NAS Ukraine (Ukraine); STFC (United Kingdom); NSF (USA). We
also acknowledge the support received from the ERC under FP7. The Tier1
computing centres are supported by IN2P3 (France), KIT and BMBF (Germany),
INFN (Italy), NWO and SURF (The Netherlands), PIC (Spain), GridPP (United
Kingdom). We are thankful for the computing resources put at our disposal by
Yandex LLC (Russia), as well as to the communities behind the multiple open
source software packages that we depend on.
## References
* [1] T. Mannel and G. A. Schuler, Semileptonic decays of bottom baryons at LEP, Phys. Lett. B279 (1992) 194
* [2] A. F. Falk and M. E. Peskin, Production, decay, and polarization of excited heavy hadrons, Phys. Rev. D49 (1994) 3320, arXiv:hep-ph/9308241
* [3] ALEPH collaboration, D. Buskulic et al., Measurement of $\Lambda^{0}_{b}$ polarization in $Z^{0}$ decays, Phys. Lett. B365 (1996) 437
* [4] OPAL collaboration, G. Abbiendi et al., Measurement of the average polarization of $b$ baryons in hadronic $Z^{0}$ decays, Phys. Lett. B444 (1998) 539, arXiv:hep-ex/9808006
* [5] DELPHI collaboration, P. Abreu et al., $\Lambda^{0}_{b}$ polarization in $Z^{0}$ decays at LEP, Phys. Lett. B474 (2000) 205
* [6] G. Hiller, M. Knecht, F. Legger, and T. Schietinger, Photon polarization from helicity suppression in radiative decays of polarized $\Lambda^{0}_{b}$ to spin-3/2 baryons, Phys. Lett. B649 (2007) 152, arXiv:hep-ph/0702191
* [7] Z. Ajaltouni, E. Conte, and O. Leitner, $\Lambda^{0}_{b}$ decays into $\Lambda$-vector, Phys. Lett. B614 (2005) 165, arXiv:hep-ph/0412116
* [8] E799 collaboration, E. Ramberg et al., Polarization of $\Lambda$ and $\overline{\Lambda}$ produced by 800-GeV protons, Phys. Lett. B338 (1994) 403
* [9] NA48 collaboration, V. Fanti et al., A measurement of the transverse polarization of $\Lambda$-hyperons produced in inelastic $pN$ reactions at 450 GeV proton energy, Eur. Phys. J. C6 (1999) 265
* [10] HERA-B collaboration, I. Abt et al., Polarization of $\Lambda$ and $\overline{\Lambda}$ in 920 GeV fixed-target proton-nucleus collisions, Phys. Lett. B638 (2006) 415, arXiv:hep-ex/0603047
* [11] LHCb collaboration, A. A. Alves Jr. et al., The LHCb detector at the LHC, JINST 3 (2008) S08005
* [12] Particle Data Group, J. Beringer et al., Review of particle physics, Phys. Rev. D86 (2012) 010001
* [13] R. Lednicky, On evaluation of polarization of the charmed baryon $\Lambda^{+}_{c}$, Sov. J. Nucl. Phys. 43 (1986) 817
* [14] J. F. Donoghue, X.-G. He, and S. Pakvasa, Hyperon decays and CP nonconservation, Phys. Rev. D34 (1986) 833
* [15] J. F. Donoghue, B. R. Holstein, and G. Valencia, CP violation in low-energy $p\bar{p}$ reactions, Phys. Lett. B178 (1986) 319
* [16] H.-Y. Cheng, Nonleptonic weak decays of bottom baryons, Phys. Rev. D56 (1997) 2799, arXiv:hep-ph/9612223
* [17] Fayyazuddin and Riazuddin, Two-body nonleptonic $\Lambda^{0}_{b}$ decays in quark model with factorization ansatz, Phys. Rev. D58 (1998) 014016, arXiv:hep-ph/9802326
* [18] R. Mohanta et al., Hadronic weak decays of $\Lambda^{0}_{b}$ baryon in the covariant oscillator quark model, Prog. Theor. Phys. 101 (1999) 959, arXiv:hep-ph/9904324
* [19] C.-H. Chou, H.-H. Shih, S.-C. Lee, and H.-n. Li, $\Lambda^{0}_{b}\to{J/\psi}\Lambda$ decay in perturbative QCD, Phys. Rev. D65 (2002) 074030, arXiv:hep-ph/0112145
* [20] Z.-T. Wei, H.-W. Ke, and X.-Q. Li, Evaluating decay rates and asymmetries of $\Lambda^{0}_{b}$ into light baryons in LFQM, Phys. Rev. D80 (2009) 094016, arXiv:0909.0100
* [21] CDF collaboration, T. Aaltonen et al., First observation of heavy baryons $\Sigma_{b}$ and $\Sigma_{b}^{*}$, Phys. Rev. Lett. 99 (2007) 202001, arXiv:0706.3868
* [22] CDF collaboration, T. Aaltonen et al., Measurement of the masses and widths of the bottom baryons $\Sigma_{b}^{+-}$ and $\Sigma_{b}^{*+-}$, Phys. Rev. D85 (2012) 092011, arXiv:1112.2808
* [23] LHCb collaboration, R. Aaij et al., Observation of excited $\Lambda_{b}$ baryons, Phys. Rev. Lett. 109 (2012) 172003, arXiv:1205.3452
* [24] T. Mannel and S. Recksiegel, Flavor changing neutral current decays of heavy baryons: the case $\Lambda^{0}_{b}\to\Lambda\gamma$, J. Phys. G24 (1998) 979, arXiv:hep-ph/9701399
* [25] F. Legger and T. Schietinger, Photon helicity in $\Lambda^{0}_{b}\to pK\gamma$ decays, Phys. Lett. B645 (2007) 204, arXiv:hep-ph/0605245
* [26] J. Hrivnac, R. Lednicky, and M. Smizanska, Feasibility of beauty baryon polarization measurement in the $\Lambda^{0}{J/\psi}$ decay channel with the $pp$ collider experiment, J. Phys. G21 (1995) 629, arXiv:hep-ph/9405231
* [27] M. Adinolfi et al., Performance of the LHCb RICH detector at the LHC, arXiv:1211.6759, submitted to Eur. Phys. J. C
* [28] A. A. Alves Jr et al., Performance of the LHCb muon system, JINST 8 (2013) P02022, arXiv:1211.1346
* [29] R. Aaij et al., The LHCb trigger and its performance in 2011, JINST 8 (2013) P04022, arXiv:1211.3055
* [30] T. Sjöstrand, S. Mrenna, and P. Skands, PYTHIA 6.4 Physics and manual, JHEP 05 (2006) 026, arXiv:hep-ph/0603175
* [31] I. Belyaev et al., Handling of the generation of primary events in Gauss, the LHCb simulation framework, Nuclear Science Symposium Conference Record (NSS/MIC) IEEE (2010) 1155
* [32] D. J. Lange, The EvtGen particle decay simulation package, Nucl. Instrum. Meth. A462 (2001) 152
* [33] P. Golonka and Z. Was, PHOTOS Monte Carlo: a precision tool for QED corrections in $Z$ and $W$ decays, Eur. Phys. J. C45 (2006) 97, arXiv:hep-ph/0506026
* [34] GEANT4 collaboration, J. Allison et al., Geant4 developments and applications, IEEE Trans. Nucl. Sci. 53 (2006) 270
* [35] GEANT4 collaboration, S. Agostinelli et al., GEANT4: A simulation toolkit, Nucl. Instrum. Meth. A506 (2003) 250
* [36] M. Clemencic et al., The LHCb simulation application, Gauss: design, evolution and experience, J. of Phys: Conf. Ser. 331 (2011) 032023
* [37] L. Breiman, J. H. Friedman, R. A. Olshen, and C. J. Stone, Classification and regression trees, Wadsworth international group, Belmont, California, USA, 1984
* [38] R. E. Schapire and Y. Freund, A decision-theoretic generalization of on-line learning and an application to boosting, Jour. Comp. and Syst. Sc. 55 (1997) 119
* [39] T. Skwarnicki, A study of the radiative cascade transitions between the Upsilon-prime and Upsilon resonances, PhD thesis, Institute of Nuclear Physics, Krakow, 1986, DESY-F31-86-02
* [40] M. Pivk and F. R. Le Diberder, sPlot: a statistical tool to unfold data distributions, Nuclear Instruments and Methods in Physics Research A555 (2005) 356, arXiv:physics/0402083
* [41] Y. Xie, sFit: a method for background subtraction in maximum likelihood fit, arXiv:0905.0724
|
arxiv-papers
| 2013-02-22T13:17:20 |
2024-09-04T02:49:42.022413
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "LHCb collaboration: R. Aaij, C. Abellan Beteta, B. Adeva, M. Adinolfi,\n C. Adrover, A. Affolder, Z. Ajaltouni, J. Albrecht, F. Alessio, M. Alexander,\n S. Ali, G. Alkhazov, P. Alvarez Cartelle, A.A. Alves Jr, S. Amato, S. Amerio,\n Y. Amhis, L. Anderlini, J. Anderson, R. Andreassen, R.B. Appleby, O. Aquines\n Gutierrez, F. Archilli, A. Artamonov, M. Artuso, E. Aslanides, G. Auriemma,\n S. Bachmann, J.J. Back, C. Baesso, V. Balagura, W. Baldini, R.J. Barlow, C.\n Barschel, S. Barsuk, W. Barter, Th. Bauer, A. Bay, J. Beddow, F. Bedeschi, I.\n Bediaga, S. Belogurov, K. Belous, I. Belyaev, E. Ben-Haim, M. Benayoun, G.\n Bencivenni, S. Benson, J. Benton, A. Berezhnoy, R. Bernet, M.-O. Bettler, M.\n van Beuzekom, A. Bien, S. Bifani, T. Bird, A. Bizzeti, P.M. Bj{\\o}rnstad, T.\n Blake, F. Blanc, J. Blouw, S. Blusk, V. Bocci, A. Bondar, N. Bondar, W.\n Bonivento, S. Borghi, A. Borgia, T.J.V. Bowcock, E. Bowen, C. Bozzi, T.\n Brambach, J. van den Brand, J. Bressieux, D. Brett, M. Britsch, T. Britton,\n N.H. Brook, H. Brown, I. Burducea, A. Bursche, G. Busetto, J. Buytaert, S.\n Cadeddu, O. Callot, M. Calvi, M. Calvo Gomez, A. Camboni, P. Campana, A.\n Carbone, G. Carboni, R. Cardinale, A. Cardini, H. Carranza-Mejia, L. Carson,\n K. Carvalho Akiba, G. Casse, M. Cattaneo, Ch. Cauet, M. Charles, Ph.\n Charpentier, P. Chen, N. Chiapolini, M. Chrzaszcz, K. Ciba, X. Cid Vidal, G.\n Ciezarek, P.E.L. Clarke, M. Clemencic, H.V. Cliff, J. Closier, C. Coca, V.\n Coco, J. Cogan, E. Cogneras, P. Collins, A. Comerma-Montells, A. Contu, A.\n Cook, M. Coombes, S. Coquereau, G. Corti, B. Couturier, G.A. Cowan, D. Craik,\n S. Cunliffe, R. Currie, C. D'Ambrosio, P. David, P.N.Y. David, I. De Bonis,\n K. De Bruyn, S. De Capua, M. De Cian, J.M. De Miranda, M. De Oyanguren\n Campos, L. De Paula, W. De Silva, P. De Simone, D. Decamp, M. Deckenhoff, L.\n Del Buono, D. Derkach, O. Deschamps, F. Dettori, A. Di Canto, H. Dijkstra, M.\n Dogaru, S. Donleavy, F. Dordei, A. Dosil Su\\'arez, D. Dossett, A. Dovbnya, F.\n Dupertuis, R. Dzhelyadin, A. Dziurda, A. Dzyuba, S. Easo, U. Egede, V.\n Egorychev, S. Eidelman, D. van Eijk, S. Eisenhardt, U. Eitschberger, R.\n Ekelhof, L. Eklund, I. El Rifai, Ch. Elsasser, D. Elsby, A. Falabella, C.\n F\\\"arber, G. Fardell, C. Farinelli, S. Farry, V. Fave, D. Ferguson, V.\n Fernandez Albor, F. Ferreira Rodrigues, M. Ferro-Luzzi, S. Filippov, C.\n Fitzpatrick, M. Fontana, F. Fontanelli, R. Forty, O. Francisco, M. Frank, C.\n Frei, M. Frosini, S. Furcas, E. Furfaro, A. Gallas Torreira, D. Galli, M.\n Gandelman, P. Gandini, Y. Gao, J. Garofoli, P. Garosi, J. Garra Tico, L.\n Garrido, C. Gaspar, R. Gauld, E. Gersabeck, M. Gersabeck, T. Gershon, Ph.\n Ghez, V. Gibson, V.V. Gligorov, C. G\\\"obel, D. Golubkov, A. Golutvin, A.\n Gomes, H. Gordon, M. Grabalosa G\\'andara, R. Graciani Diaz, L.A. Granado\n Cardoso, E. Graug\\'es, G. Graziani, A. Grecu, E. Greening, S. Gregson, O.\n Gr\\\"unberg, B. Gui, E. Gushchin, Yu. Guz, T. Gys, C. Hadjivasiliou, G.\n Haefeli, C. Haen, S.C. Haines, S. Hall, T. Hampson, S. Hansmann-Menzemer, N.\n Harnew, S.T. Harnew, J. Harrison, T. Hartmann, J. He, V. Heijne, K. Hennessy,\n P. Henrard, J.A. Hernando Morata, E. van Herwijnen, E. Hicks, D. Hill, M.\n Hoballah, C. Hombach, P. Hopchev, W. Hulsbergen, P. Hunt, T. Huse, N.\n Hussain, D. Hutchcroft, D. Hynds, V. Iakovenko, M. Idzik, P. Ilten, R.\n Jacobsson, A. Jaeger, E. Jans, P. Jaton, F. Jing, M. John, D. Johnson, C.R.\n Jones, B. Jost, M. Kaballo, S. Kandybei, M. Karacson, T.M. Karbach, I.R.\n Kenyon, U. 
Kerzel, T. Ketel, A. Keune, B. Khanji, O. Kochebina, I. Komarov,\n R.F. Koopman, P. Koppenburg, M. Korolev, A. Kozlinskiy, L. Kravchuk, K.\n Kreplin, M. Kreps, G. Krocker, P. Krokovny, F. Kruse, M. Kucharczyk, V.\n Kudryavtsev, T. Kvaratskheliya, V.N. La Thi, D. Lacarrere, G. Lafferty, A.\n Lai, D. Lambert, R.W. Lambert, E. Lanciotti, G. Lanfranchi, C. Langenbruch,\n T. Latham, C. Lazzeroni, R. Le Gac, J. van Leerdam, J.-P. Lees, R. Lef\\'evre,\n A. Leflat, J. Lefran\\c{c}ois, S. Leo, O. Leroy, B. Leverington, Y. Li, L. Li\n Gioi, M. Liles, R. Lindner, C. Linn, B. Liu, G. Liu, J. von Loeben, S. Lohn,\n J.H. Lopes, E. Lopez Asamar, N. Lopez-March, H. Lu, D. Lucchesi, J. Luisier,\n H. Luo, F. Machefert, I.V. Machikhiliyan, F. Maciuc, O. Maev, S. Malde, G.\n Manca, G. Mancinelli, U. Marconi, R. M\\\"arki, J. Marks, G. Martellotti, A.\n Martens, L. Martin, A. Mart\\'in S\\'anchez, M. Martinelli, D. Martinez Santos,\n D. Martins Tostes, A. Massafferri, R. Matev, Z. Mathe, C. Matteuzzi, E.\n Maurice, A. Mazurov, J. McCarthy, R. McNulty, A. Mcnab, B. Meadows, F. Meier,\n M. Meissner, M. Merk, D.A. Milanes, M.-N. Minard, J. Molina Rodriguez, S.\n Monteil, D. Moran, P. Morawski, M.J. Morello, R. Mountain, I. Mous, F.\n Muheim, K. M\\\"uller, R. Muresan, B. Muryn, B. Muster, P. Naik, T. Nakada, R.\n Nandakumar, I. Nasteva, M. Needham, N. Neufeld, A.D. Nguyen, T.D. Nguyen, C.\n Nguyen-Mau, M. Nicol, V. Niess, R. Niet, N. Nikitin, T. Nikodem, A.\n Nomerotski, A. Novoselov, A. Oblakowska-Mucha, V. Obraztsov, S. Oggero, S.\n Ogilvy, O. Okhrimenko, R. Oldeman, M. Orlandea, J.M. Otalora Goicochea, P.\n Owen, B.K. Pal, A. Palano, M. Palutan, J. Panman, A. Papanestis, M.\n Pappagallo, C. Parkes, C.J. Parkinson, G. Passaleva, G.D. Patel, M. Patel,\n G.N. Patrick, C. Patrignani, C. Pavel-Nicorescu, A. Pazos Alvarez, A.\n Pellegrino, G. Penso, M. Pepe Altarelli, S. Perazzini, D.L. Perego, E. Perez\n Trigo, A. P\\'erez-Calero Yzquierdo, P. Perret, M. Perrin-Terrin, G. Pessina,\n K. Petridis, A. Petrolini, A. Phan, E. Picatoste Olloqui, B. Pietrzyk, T.\n Pila\\v{r}, D. Pinci, S. Playfer, M. Plo Casasus, F. Polci, G. Polok, A.\n Poluektov, E. Polycarpo, D. Popov, B. Popovici, C. Potterat, A. Powell, J.\n Prisciandaro, V. Pugatch, A. Puig Navarro, G. Punzi, W. Qian, J.H.\n Rademacker, B. Rakotomiaramanana, M.S. Rangel, I. Raniuk, N. Rauschmayr, G.\n Raven, S. Redford, M.M. Reid, A.C. dos Reis, S. Ricciardi, A. Richards, K.\n Rinnert, V. Rives Molina, D.A. Roa Romero, P. Robbe, E. Rodrigues, P.\n Rodriguez Perez, S. Roiser, V. Romanovsky, A. Romero Vidal, J. Rouvinet, T.\n Ruf, F. Ruffini, H. Ruiz, P. Ruiz Valls, G. Sabatino, J.J. Saborido Silva, N.\n Sagidova, P. Sail, B. Saitta, C. Salzmann, B. Sanmartin Sedes, M. Sannino, R.\n Santacesaria, C. Santamarina Rios, E. Santovetti, M. Sapunov, A. Sarti, C.\n Satriano, A. Satta, M. Savrie, D. Savrina, P. Schaack, M. Schiller, H.\n Schindler, M. Schlupp, M. Schmelling, B. Schmidt, O. Schneider, A. Schopper,\n M.-H. Schune, R. Schwemmer, B. Sciascia, A. Sciubba, M. Seco, A. Semennikov,\n K. Senderowska, I. Sepp, N. Serra, J. Serrano, P. Seyfert, M. Shapkin, I.\n Shapoval, P. Shatalov, Y. Shcheglov, T. Shears, L. Shekhtman, O. Shevchenko,\n V. Shevchenko, A. Shires, R. Silva Coutinho, T. Skwarnicki, N.A. Smith, E.\n Smith, M. Smith, M.D. Sokoloff, F.J.P. Soler, F. Soomro, D. Souza, B. Souza\n De Paula, B. Spaan, A. Sparkes, P. Spradlin, F. Stagni, S. Stahl, O.\n Steinkamp, S. Stoica, S. Stone, B. Storaci, M. Straticiuc, U. Straumann, V.K.\n Subbiah, S. Swientek, V. 
Syropoulos, M. Szczekowski, P. Szczypka, T. Szumlak,\n S. T'Jampens, M. Teklishyn, E. Teodorescu, F. Teubert, C. Thomas, E. Thomas,\n J. van Tilburg, V. Tisserand, M. Tobin, S. Tolk, D. Tonelli, S.\n Topp-Joergensen, N. Torr, E. Tournefier, S. Tourneur, M.T. Tran, M. Tresch,\n A. Tsaregorodtsev, P. Tsopelas, N. Tuning, M. Ubeda Garcia, A. Ukleja, D.\n Urner, U. Uwer, V. Vagnoni, G. Valenti, R. Vazquez Gomez, P. Vazquez\n Regueiro, S. Vecchi, J.J. Velthuis, M. Veltri, G. Veneziano, M. Vesterinen,\n B. Viaud, D. Vieira, X. Vilasis-Cardona, A. Vollhardt, D. Volyanskyy, D.\n Voong, A. Vorobyev, V. Vorobyev, C. Vo\\ss, H. Voss, R. Waldi, R. Wallace, S.\n Wandernoth, J. Wang, D.R. Ward, N.K. Watson, A.D. Webber, D. Websdale, M.\n Whitehead, J. Wicht, J. Wiechczynski, D. Wiedner, L. Wiggers, G. Wilkinson,\n M.P. Williams, M. Williams, F.F. Wilson, J. Wishahi, M. Witek, S.A. Wotton,\n S. Wright, S. Wu, K. Wyllie, Y. Xie, F. Xing, Z. Xing, Z. Yang, R. Young, X.\n Yuan, O. Yushchenko, M. Zangoli, M. Zavertyaev, F. Zhang, L. Zhang, W.C.\n Zhang, Y. Zhang, A. Zhelezov, A. Zhokhov, L. Zhong, A. Zvyagin",
"submitter": "Jean Wicht",
"url": "https://arxiv.org/abs/1302.5578"
}
|
1302.5660
|
# Variability Improvement by Interface Passivation and EOT Scaling of InGaAs
Nanowire MOSFETs
Jiangjiang J. Gu, Xinwei Wang, Heng Wu,
Roy G. Gordon, and Peide D. Ye This work was supported in part by Air Force
Office of Scientific Research (AFOSR) monitored by Prof. James C. M. Hwang and
in part by Semiconductor Research Corporation (SRC) Focus Center Research
Program (FCRP) Materials, Structures, and Devices (MSD) Focus Center.J. J. Gu,
H. Wu, and P. D. Ye are with the Department of Electrical and Computer
Engineering, Purdue University, West Lafayette, IN, 47907 USA e-mail:
([email protected]).X. Wang, and R. G. Gordon are with the Department of
Chemistry and Chemical Biology, Harvard University, Cambridge, MA, 02138 USA.
###### Abstract
High performance InGaAs gate-all-around (GAA) nanowire MOSFETs with channel
length ($L_{ch}$) down to 20nm have been fabricated by integrating a higher-
_k_ LaAlO3-based gate stack with an equivalent oxide thickness of 1.2nm. It is
found that inserting an ultrathin (0.5nm) Al2O3 interfacial layer between
higher-_k_ and InGaAs can significantly improve the interface quality and
reduce device variation. As a result, a record low subthreshold swing of
63mV/dec has been demonstrated at sub-80nm $L_{ch}$ for the first time, making
InGaAs GAA nanowire devices a strong candidate for future low-power
transistors.
###### Index Terms:
Variability, MOSFET, InGaAs, nanowire.
## I Introduction
III-V compound semiconductors have recently been explored as alternative
channel materials for future CMOS technologies [1]. InxGa1-xAs gate-all-around
(GAA) nanowire MOSFETs fabricated using either bottom-up [2, 3] or top-down
technology [4, 5, 6] are of particular interest due to their excellent
electrostatic control. Although the improvement of on-state and off-state
device metrics has been enabled by nanowire width ($W_{NW}$) scaling, the
scalability of the devices in [4] is greatly limited by the large equivalent
oxide thickness ($EOT$) of 4.5nm. Aggressive $EOT$ scaling is required to meet
the stringent requirements on electrostatic control [7, 8, 5]. It is shown
recently that sub-1nm $EOT$ with good interface quality can be achieved by
Al2O3 passivation on planar InGaAs devices [9]. Considering the inherent 3D
nature of the nanowire structure, whether such a gate stack technology can be
successfully integrated in the InGaAs nanowire MOSFET fabrication process
remains to be shown. In addition, the electron transport in the devices [4]
can be enhanced by increasing the Indium concentration in the InGaAs nanowire
channel, which promises further on-state metrics improvements such as on-
current ($I_{ON}$) and transconductance ($g_{m}$).
In this letter, we fabricated In0.65Ga0.35As GAA nanowire MOSFETs with atomic
layer deposited (ALD) LaAlO3-based gate stack ($EOT$=1.2nm). ALD LaAlO3 is a
promising gate dielectric for future 3D transistors because of its high
dielectric constant (_k_ =16), precise thickness control, excellent uniformity
and conformality [10]. The effect of ultra-thin Al2O3 insertion on the device
on-state and off-state characteristics has been systematically studied. It is
shown that Al2O3 insertion effectively passivates the LaAlO3/InGaAs interface,
leading to the improvement in both device scalability and variability. Record
low subthreshold swing (_SS_) of 63mV/dec has been achieved at sub-80nm
$L_{ch}$, indicating excellent interface quality and gate electrostatic
control. Detailed device variation analysis has been presented for the first
time for InGaAs MOSFETs, which helps identify new manufacturing challenges for
future logic devices with high mobility channels.
## II Experiment
Figure 1: (a) Schematic diagram and (b) cross sectional view of InGaAs GAA
nanowire MOSFETs with ALD Al2O3/LaAlO3 gate stack. (c) Output characteristics
(source current) of InGaAs GAA nanowire MOSFETs ($L_{ch}$=20nm) with
Al2O3-first (solid line) and LaAlO3-first (dashed line) gate stack.
Fig. 1(a) and (b) show the schematic diagram and cross sectional view of
InGaAs GAA nanowire MOSFETs fabricated in this work. The fabrication process
is similar to that described in [4]. An HCl-based wet etch process was used to
release the InGaAs nanowires with minimum $W_{NW}$ of 20nm. Each device had 4
nanowires in parallel as shown in Fig. 1(a). Because of the relatively high
etch selectivity between InAlAs and InP, an additional 100nm InAlAs etch stop
layer was added under the 80nm InP sacrificial layer to improve the control of
the nanowire release process. The InGaAs nanowire channel consists of one 10nm
In0.53Ga0.47As layer sandwiched by two 10nm In0.65Ga0.35As layers shown in
Fig. 1(b), yielding a total nanowire height ($H_{NW}$) of 30nm. Here the
heterostructure design ensures the high quality epitaxial layers grown by
molecular beam epitaxy while maximizing the Indium concentration in the
nanowire. A 0.5nm Al2O3, 4nm LaAlO3, and 40nm WN high-k/metal gate stack were
grown by ALD surrounding all facets of the nanowires. Two samples were
fabricated in parallel with only the sequence of the Al2O3 and LaAlO3 growth
deliberately switched. Both samples were treated with 10% (NH4)2S, and then
transferred into the ALD chamber within 1 minute of air break. Since the
Al2O3-first and LaAlO3-first sample had the same $EOT$ of 1.2nm and underwent
the same process flow, the difference of device performance can be ascribed to
the effect of the Al2O3 passivation. All other fabrication details can be
found in [4]. In this letter, the channel length ($L_{ch}$) is defined as the
width of the electron beam resist in the source/drain implantation process and
has been verified by scanning electron microscopy.
## III Results and discussion
Figure 2: (a) Transfer characteristics (source current) at $V_{ds}$=0.05, 0.5,
and 1V (b) $g_{m}$-$V_{gs}$ of Al2O3-first and LaAlO3-first InGaAs GAA
nanowire MOSFETs with $L_{ch}=$20nm.
Fig. 1(c) shows the output characteristics of two representative Al2O3-first
and LaAlO3-first InGaAs GAA nanowire MOSFETs with $L_{ch}=20nm$. Fig. 2(a) and
(b) show the transfer characteristics and transconductance of the same
devices. Due to the large junction leakage current in the drain, the source
current $I_{s}$ is shown in the current-voltage characteristics and used to
calculate $I_{ON}$ and $g_{m}$. The Al2O3-first device shows higher
$I_{ON}=57\mu$A/wire at $V_{DD}=V_{ds}=V_{gs}-V_{T}=0.5V$ and peak
transconductance $g_{m,max}=165\mu$S/wire at $V_{ds}=0.5V$, compared to
48$\mu$A/wire and 155$\mu$S/wire for the LaAlO3-first device. Both devices
operate in enhancement-mode, with a linearly extrapolated $V_{T}$ of 0.14V and
0.11V, respectively. For the off-state performance, the Al2O3-first device
shows a $SS$ of 75mV/dec and $DIBL$ of 40mV/V, while the LaAlO3-first device
shows a higher $SS$ of 80mV/dec and a higher $DIBL$ of 73mV/V.
Figure 3: $I_{ON}$ and peak $g_{m}$ box plots of Al2O3-first and LaAlO3-first
devices with $L_{ch}=20nm$ and $W_{NW}=20nm$ at $V_{DD}=0.5V$.
To study the statistical distribution of the on-state metrics, the box plots
for $I_{ON}$ and $g_{m,max}$ at $V_{DD}=0.5V$ are shown in Fig. 3. The box
plots include measurements from all 50 devices with $L_{ch}$ of 20nm and
$W_{NW}$ of 20nm. Although only a 12% (10%) increase in mean $I_{ON}$
($g_{m,max}$) is observed for the devices with Al2O3 insertion, a 54% (64%)
reduction in standard deviation of $I_{ON}$ ($g_{m,max}$) is obtained on the
Al2O3-first devices, indicating a significant improvement in device variation
by effective passivation of interface traps. The $I_{ON}$ variation is
impacted by several variation sources including parasitic resistance,
effective mobility and $V_{T}$ variation [11], all of which are sensitive to
the interface quality of the high-_k_ /InGaAs nanowire surface.
To further investigate the scalability and off-state performance variability,
the averages and standard deviations of $SS$, $DIBL$ and $V_{T}$ as a function
of $L_{ch}$ are shown in Fig. 4 for Al2O3-first and LaAlO3-first devices with
$W_{NW}=20nm$. The $SS$ and $DIBL$ remain almost constant with $L_{ch}$
scaling down to 50nm for both samples. This indicates that the current GAA
structure with 1.2nm $EOT$ has yielded a very small geometric screening length
and the devices show excellent resistance to short channel effects. Average
$SS=76$mV/dec and $DIBL=25$mV/V are obtained for Al2O3-first devices with
$L_{ch}$ between 50 and 80nm, compared to 79mV/dec and 39mV/V for the
LaAlO3-first devices, indicating a reduction of interface trap density
($D_{it}$) with Al2O3 passivation. A small increase in $V_{T}$ is also
observed for the Al2O3-first sample, which is ascribed to the reduction in
negative donor-type charges at the interface.
Figure 4: Scaling metrics of $SS$, $DIBL$ and $V_{T}$ and their standard
deviations (STDs) for Al2O3-first and LaAlO3-first InGaAs GAA nanowire MOSFETs
with $W_{NW}=20nm$
Furthermore, larger standard deviations of $SS$, $DIBL$ and $V_{T}$ are
observed for devices without Al2O3 insertion at all $L_{ch}$, indicating that
the relatively low interface quality of the LaAlO3-first devices introduced
additional device variation. It is also shown that the off-state performance
variation increases as $L_{ch}$ scales below 50nm, which is ascribed to the
reduction in electrostatic control.
Figure 5: (a) $SS$ box plot and histogram for all Al2O3-first and LaAlO3-first
devices with $L_{ch}$ between 50$-$80nm and $W_{NW}$ of 20nm. (b) Transfer
characteristic (source current) of a Al2O3-first and a LaAlO3-first InGaAs GAA
nanowire MOSFET with lowest SS of 63mV/dec and 69mV/dec, respectively.
Fig. 5(a) shows the box plot and histogram of $SS$ measured from all the
Al2O3-first and LaAlO3-first devices with $L_{ch}$ between 50$-$80nm and
$W_{NW}$ of 20nm. Although the average $SS$ for Al2O3-first devices is only
1.9mV/dec lower than LaAlO3-first devices, 25% and 46% reduction in standard
deviation and interquartile range has been obtained on Al2O3-first devices,
indicating the effectiveness of Al2O3 passivation. Since these devices are
immune to short channel effects, the $SS$ is dominated by $D_{it}$. Therefore,
$D_{it}$ can be estimated from $SS$ using the following equation,
$SS=\frac{60}{300}T(1+(\frac{qD_{it}}{C_{ox}}))mV/dec$ (1)
where $T$ is the temperature in Kelvin, $q$ is the electronic charge, and
$C_{ox}$ is the oxide capacitance. 90% of the devices with Al2O3 insertion
show $SS$ between $66.0-83.3$mV/dec, corresponding to a $D_{it}$ between
$1.80\times 10^{12}-6.98\times 10^{12}$cm$^{-2}$eV$^{-1}$. Fig. 5(b) shows the transfer
characteristics of an 80nm $L_{ch}$ hero Al2O3-first device and a 60nm
$L_{ch}$ hero LaAlO3-first device with the lowest $SS=63$mV/dec and 69mV/dec,
respectively. The estimated $D_{it}$ for these two devices are $8.98\times
10^{11}$ and $2.69\times 10^{12}$cm$^{-2}$eV$^{-1}$. The near-ideal $SS$ is achieved
because of the surface area of the nanowires, aggressive _EOT_ scaling, and
effective interface passivation.
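Inverting Eq. (1) for $D_{it}$ gives a quick consistency check. The sketch below (Python; the oxide capacitance value is an assumption derived from the quoted $EOT$ of 1.2nm, not a number taken from the paper) reproduces the $D_{it}$ quoted for the 63mV/dec device.

```python
def dit_from_ss(ss_mv_per_dec, c_ox_f_per_cm2, temperature_k=300.0):
    """Estimate D_it (cm^-2 eV^-1) by inverting Eq. (1).

    Any excess of SS over the thermal limit 60*(T/300) mV/dec is attributed to
    interface traps; short-channel contributions are assumed negligible.
    """
    q = 1.602e-19                               # electronic charge (C)
    ss_ideal = 60.0 * temperature_k / 300.0
    return (ss_mv_per_dec / ss_ideal - 1.0) * c_ox_f_per_cm2 / q

# Example (illustrative): EOT = 1.2nm corresponds to C_ox ~ 2.9e-6 F/cm^2, so
# SS = 63 mV/dec gives D_it ~ 9e11 cm^-2 eV^-1, consistent with the value
# quoted above for the best Al2O3-first device.
print(dit_from_ss(63.0, 2.9e-6))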
## IV Conclusion
InGaAs GAA nanowire MOSFETs with $L_{ch}$ down to 20nm and $EOT$ down to 1.2nm
have been demonstrated, showing excellent gate electrostatic control. The
insertion of an ultra-thin 0.5nm Al2O3 layer at the LaAlO3/InGaAs interface has
been shown to effectively improve the scalability and variability of the devices.
Near-60mV/dec $SS$ is achieved on InGaAs nanowires with scaled $EOT$ and
effective interface passivation. The InGaAs GAA nanowire MOSFET is a promising
candidate for low-power logic applications beyond 10nm.
## Acknowledgment
The authors would like to thank A. T. Neal, M. S. Lundstrom, D. A. Antoniadis,
and J. A. del Alamo for the valuable discussions.
## References
* [1] J. A. del Alamo, “Nanometre-scale electronics with III-V compound semiconductors,” _Nature_ , vol. 479, no. 7373, pp. 317–323, Nov. 2011.
* [2] C. Thelander, C. Rehnstedt, L. Froberg, E. Lind, T. Martensson, P. Caroff, T. Lowgren, B. Ohlsson, L. Samuelson, and L.-E. Wernersson, “Development of a vertical wrap-gated InAs FET,” _IEEE Transactions on Electron Devices_ , vol. 55, no. 11, pp. 3030–3036, 2008.
* [3] K. Tomioka, M. Yoshimura, and T. Fukui, “Vertical In0.7Ga0.3As nanowire surrounding-gate transistors with high-k gate dielectric on Si substrate,” in _IEDM Tech. Dig._ , 2011, pp. 33.3.1–33.3.4.
* [4] J. J. Gu, Y. Q. Liu, Y. Q. Wu, R. Colby, R. G. Gordon, and P. D. Ye, “First experimental demonstration of gate-all-around III-V MOSFETs by top-down approach,” in _IEDM Tech. Dig._ , 2011, pp. 769–772.
* [5] J. J. Gu, X. W. Wang, H. Wu, J. Shao, A. T. Neal, M. J. Manfra, R. G. Gordon, and P. D. Ye, “20-80nm channel length InGaAs gate-all-around nanowire MOSFETs with EOT=1.2nm and Lowest SS=63mV/dec,” in _IEDM Tech. Dig._ , 2012, pp. 633–666.
* [6] F. Xue, A. Jiang, Y.-T. Chen, Y. Wang, F. Zhou, Y.-F. Chang, and J. Lee, “Excellent Device Performance of 3D In0.53Ga0.47As Gate-Wrap-Around Field-Effect-Transistors with High-k Gate Dielectrics,” in _IEDM Tech. Dig._ , 2012, pp. 629–632.
* [7] M. Radosavljevic, G. Dewey, D. Basu, J. Boardman, B. Chu-Kung, J. Fastenau, S. Kabehie, J. Kavalieros, V. Le, W. Liu, D. Lubyshev, M. Metz, K. Millard, N. Mukherjee, L. Pan, R. Pillarisetty, W. Rachmady, U. Shah, H. Then, and R. Chau, “Electrostatics improvement in 3-D tri-gate over ultra-thin body planar InGaAs quantum well field effect transistors with high-K gate dielectric and scaled gate-to-drain/gate-to-source separation,” in _IEDM Tech. Dig._ , 2011, pp. 33.1.1–33.1.4.
* [8] M. Egard, L. Ohlsson, B. Borg, F. Lenrick, R. Wallenberg, L.-E. Wernersson, and E. Lind, “High transconductance self-aligned gate-last surface channel In0.53Ga0.47As MOSFET,” in _IEDM Tech. Dig._ , 2011, pp. 13.2.1–13.2.4.
* [9] R. Suzuki, N. Taoka, M. Yokoyama, S. Lee, S. H. Kim, T. Hoshii, T. Yasuda, W. Jevasuwan, T. Maeda, O. Ichikawa, N. Fukuhara, M. Hata, M. Takenaka, and S. Takagi, “1-nm-capacitance-equivalent-thickness HfO2/Al2O3/InGaAs metal-oxide-semiconductor structure with low interface trap density and low gate leakage current density,” _Applied Physics Letters_ , vol. 100, no. 13, p. 132906, 2012.
* [10] J. Huang, N. Goel, H. Zhao, C. Kang, K. Min, G. Bersuker, S. Oktyabrsky, C. Gaspe, M. Santos, P. Majhi, P. Kirsch, H.-H. Tseng, J. Lee, and R. Jammy, “InGaAs MOSFET performance and reliability improvement by simultaneous reduction of oxide and interface charge in ALD (La)AlOx/ZrO2 gate stack,” in _IEDM Tech. Dig._ , 2009, pp. 1–4.
* [11] T. Matsukawa, Y. Liu, S. O’uchi, K. Endo, J. Tsukada, H. Yamauchi, Y. Ishikawa, H. Ota, S. Migita, Y. Morita, W. Mizubayashi, K. Sakamoto, and M. Masahara, “Decomposition of on-current variability of nMOS FinFETs for prediction beyond 20 nm,” _IEEE Transactions on Electron Devices_ , vol. 59, no. 8, pp. 2003–2010, August 2012.
|
arxiv-papers
| 2013-02-22T17:44:15 |
2024-09-04T02:49:42.031576
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Jiangjiang J. Gu, Xinwei Wang, Heng Wu, Roy G. Gordon, and Peide D. Ye",
"submitter": "Jiangjiang J. Gu",
"url": "https://arxiv.org/abs/1302.5660"
}
|
1302.5749
|
arxiv-papers
| 2013-02-23T01:08:36 |
2024-09-04T02:49:42.039517
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Jonathan M. Friedman",
"submitter": "Jonathan Friedman",
"url": "https://arxiv.org/abs/1302.5749"
}
|
|
1302.5761
|
# Path-integral simulations with fermionic and bosonic reservoirs:
Transport and dissipation in molecular electronic junctions
Lena Simine Chemical Physics Theory Group, Department of Chemistry,
University of Toronto, 80 Saint George St. Toronto, Ontario, Canada M5S 3H6
Dvira Segal Chemical Physics Theory Group, Department of Chemistry,
University of Toronto, 80 Saint George St. Toronto, Ontario, Canada M5S 3H6
###### Abstract
We expand iterative numerically-exact influence functional path-integral tools
and present a method capable of following the nonequilibrium time evolution of
subsystems coupled to multiple bosonic and fermionic reservoirs
simultaneously. Using this method, we study the real-time dynamics of charge
transfer and vibrational mode excitation in an electron conducting molecular
junction. We focus on nonequilibrium vibrational effects, particularly, the
development of vibrational instability in a current-rectifying junction. Our
simulations are performed by assuming large molecular vibrational
anharmonicity (or low temperature). This allows us to truncate the molecular
vibrational mode to include only a two-state system. Exact numerical results
are compared to perturbative Master equation calculations demonstrating an
excellent agreement in the weak electron-phonon coupling regime. Significant
deviations take place only at strong coupling. Our simulations allow us to
quantify the contribution of different transport mechanisms, coherent dynamics
and inelastic transport, in the overall charge current. This is done by
studying two model variants: The first admits inelastic electron transmission
only, while the second one allows for both coherent and incoherent pathways.
## I Introduction
Following the quantum dynamics of an open-dissipative many-body system with
multiple bosonic and fermionic reservoirs in a nonequilibrium state, beyond
the linear response regime, is a significant theoretical and computational
challenge. In the realm of molecular conducting junctions, we should describe
the out-of-equilibrium dynamics of the molecular unit while handling both
electrons and molecular vibrations, accounting for many-body effects such as
electron-electron, phonon-phonon and electron-phonon interactions. Given this
complexity, studies in this field are mostly focused on steady-state
properties, using e.g., scattering theory Bonca and Trugmann (1995); Ness _et
al._ (2001); Cízek _et al._ (2004), while ignoring vibrational nonequilibrium
effects. Perturbative treatments (in either the molecule-leads coupling
parameter or the electron-phonon interaction energy) are commonly used,
including the nonequilibrium Green’s function technique Galperin _et al._
(2007a); Mitra _et al._ (2004); Galperin _et al._ (2007b, 2009); Fransson
and Galperin (2010) and Master equation approaches Mitra _et al._ (2004);
Segal and Nitzan (2002); Leijnse and Wegewijs (2008); Hartle and Thoss (2011a,
b); Volkovich _et al._ (2010). For following the real-time dynamics of such
systems, involved methods have been recently developed, e.g., semiclassical
approaches Swenson _et al._ (2011, 2012).
Figure 1: Left Panel: Generic setup considered in this work, including a
subsystem ($S$) coupled to multiple fermionic ($F$) and bosonic ($B$)
reservoirs. Right panel: Molecular electronic realization with two metals, $L$
and $R$, connected by two electronic levels, $D$ and $A$. Electronic
transitions in this junction are coupled to excitation/de-excitation processes
of a particular, anharmonic, vibrational mode that plays the role of the
“subsystem”. This mode may dissipate its excess energy to a secondary phonon
bath $B$.
In this work, we extend numerically-exact path-integral methods, and follow
the dynamics of a subsystem coupled to multiple out-of-equilibrium bosonic and
fermionic reservoirs. The technique is then applied to a molecular junction
realization, with the motivation to address basic problems in the field of
molecular electronics. Particularly, in this work we consider the dynamics and
steady-state properties of a conducting molecular junction acting as a charge
rectifier. A scheme of the generic setup and a particular molecular junction
realization are depicted in Fig. 1.
The time evolution scheme developed in this paper treats both bosonic and
fermionic reservoirs. This is achieved by combining two related iterative
path-integral methods: (i) The quasi-adiabatic path-integral approach (QUAPI)
of Makri et al. Makri and Makarov (1995a); *QUAPI2, applicable for the study
of subsystem-boson models, and (ii) the recently developed influence-functional
path-integral (INFPI) technique Segal _et al._ (2010); *IF2, able to produce
the dynamics of subsystems in contact with multiple Fermi baths. The latter
method (INFPI) essentially generalizes QUAPI. It relies on the observation
that in out-of-equilibrium (and/or finite temperature) situations bath
correlations have a finite range, allowing for their truncation beyond a
memory time dictated by the voltage-bias and the temperature. Taking advantage
of this fact, an iterative-deterministic time-evolution scheme can be
developed, where convergence with respect to the memory length can in
principle be reached.
The principles of the INFPI approach have been detailed in Segal _et al._
(2010); *IF2, where it has been adopted for investigating dissipation effects
in the nonequilibrium spin-fermion model and charge occupation dynamics in
correlated quantum dots. Recently, it was further utilized for examining the
effect of a magnetic flux on the intrinsic coherence dynamics in a double
quantum dot system Bedkihal and Segal (2012), and for studying relaxation and
equilibration dynamics in finite metal grains Kulkarni _et al._ (2012);
*Kunal2.
Numerically-exact methodologies are typically limited to simple models;
analytic results are further restricted to specific parameters. The Anderson-
Holstein (AH) model has been studied extensively in this context. In this
model the electronic structure of the molecule is represented by a single
spinless electronic level, with electron occupation on the dot coupled to the
displacement of a single oscillator mode, representing an internal vibration.
This vibration may connect with a secondary phonon bath, representing a larger
phononic environment (internal modes, solvent). The AH model has been
simulated exactly with the secondary phonon bath, using a real-time path-
integral Monte Carlo approach Muhlbacher and Rabani (2008), and by extending
the multilayer multiconfiguration time-dependent Hartree method to include
fermionic degrees of freedom Wang _et al._ (2011). More recently, the model
has been simulated by adopting the iterative-summation of path-integral
approach Weiss _et al._ (2008); Eckel _et al._ (2010); Hutzen _et al._
(2012).
In this paper, we examine a variant of the AH model, the Donor (D)-Acceptor
(A) electronic rectifier model Aviram and Ratner (1974). This model
incorporates nonlocal electron-vibration interactions: electronic transitions
between the two molecular states, A and D, are coupled to a particular
internal molecular vibrational mode. Within this simple system, we are
concerned with the development of vibrational instability: Significant
molecular heating can take place once the D level is lifted above the A level,
as the excess electronic energy is used to excite the vibrational mode. This
process may ultimately lead to junction instability and breakdown Lu _et al._
(2011). We have recently studied a variant of this model (excluding the direct
D-A tunneling element) using a Master equation method, working in the weak
electron-phonon coupling limit Simine and Segal (2012). An important
observation in that work has been that since the development of this type of
instability is directly linked to the breakdown of the detailed balance
relation above a certain bias (resulting in a vibrational excitation rate
constant that exceeds the relaxation rate), it suffices to describe the
vibrational mode as a truncated two-level system. In this picture, population
inversion in the two-state system signals the development of vibrational
instability.
Our objectives here are threefold: (i) To present a numerically-exact
iterative scheme for following the dynamics of a quantum system driven to a
nonequilibrium steady-state due to its coupling to multiple bosonic and
fermionic reservoirs. (ii) To demonstrate the applicability of the method in
the field of molecular electronics. Particularly, to explore the development
of vibrational instability in conducting molecules. (iii) To evaluate the
performance and accuracy of standard perturbative Master equation treatments,
by comparing their predictions to exact results. Since Master equation
techniques are extensively used for explaining charge transfer phenomenology,
scrutinizing their validity and accuracy is an important task.
The plan of the paper is as follows. In Sec. II we introduce the path-integral
formalism. We describe the iterative time evolution scheme in Sec. III,
illustrating it for the case of a spin subsystem. Sec. IV describes a molecular
electronics application, in which we follow both electronic and vibrational dynamics
in a dissipative molecular rectifier. Sec. V concludes. For simplicity, we use
the conventions $\hbar\equiv 1$, electron charge $e\equiv 1$, and Boltzmann
constant $k_{B}=1$.
## II Path-integral formulation
We consider a multi-level subsystem, with the Hamiltonian $H_{S}$, coupled to
multiple bosonic ($B$) and fermionic ($F$) reservoirs that are prepared in an
out-of-equilibrium initial state. The total Hamiltonian $H$ is written as
$\displaystyle H=H_{S}+H_{B}+H_{F}+V_{SB}+V_{SF}.$ (1)
In the energy representation of the isolated subsystem, its Hamiltonian can be
written as
$\displaystyle H_{S}=\sum_{s}\epsilon_{s}|s\rangle\langle s|+\sum_{s\neq
s^{\prime}}v_{s,s^{\prime}}|s\rangle\langle s^{\prime}|.$ (2)
The Hamiltonian $H_{F}$ may comprise multiple fermionic baths, and
similarly, $H_{B}$ may contain more than a single bosonic reservoir. The terms
$V_{SF}$ and $V_{SB}$ include the coupling of the subsystem to the fermionic
and bosonic environments, respectively. Coupling terms which directly link the
subsystem to both bosonic and fermionic degrees of freedom are not included.
However, $V_{SB}$ and $V_{SF}$ may contain non-additive contributions with
their own set of reservoirs. For example, $V_{SF}$ may admit subsystem-
assisted tunneling terms between separate fermionic baths (metals), see Fig.
1.
We are interested in the time evolution of the reduced density matrix
$\rho_{S}(t)$. This quantity is obtained by tracing the total density matrix
$\rho$ over the bosonic and fermionic reservoirs’ degrees of freedom
$\displaystyle\rho_{S}(t)={\rm Tr}_{B}{\rm
Tr}_{F}\left[e^{-iHt}\rho(0)e^{iHt}\right].$ (3)
We also study the dynamics of certain expectation values, for example, charge
current and energy current. The time evolution of an operator $A$ can be
calculated using the relation
$\displaystyle\langle A(t)\rangle$ $\displaystyle=$ $\displaystyle{\rm
Tr}[\rho(0)A(t)]$ (4) $\displaystyle=$ $\displaystyle\lim_{\lambda\rightarrow
0}\frac{\partial}{\partial\lambda}{\rm Tr}\big{[}\rho(0)e^{iHt}e^{\lambda
A}e^{-iHt}\big{]}.$
Here $\lambda$ is a real number, taken to vanish at the end of the
calculation. When unspecified, the trace is performed over the subsystem
states and all the environmental degrees of freedom. In what follows, we
detail the path-integral approach for the calculation of the reduced density
matrix. Section III.5 presents expressions useful for time-evolving
expectation values of operators.
As in standard path-integral approaches, we decompose the time evolution
operator into a product of $N$ exponentials, $e^{iHt}=\left(e^{iH\delta
t}\right)^{N}$ where $t=N\delta t$, and define the discrete time evolution
operator $\mathcal{G}\equiv e^{iH\delta t}$. Using the Trotter decomposition,
we approximate $\mathcal{G}$ by
$\displaystyle\mathcal{G}\sim\mathcal{G_{F}}\mathcal{G_{B}}\mathcal{G_{S}}\mathcal{G_{B}}\mathcal{G_{F}},$
(5)
where we define
$\displaystyle\mathcal{G_{F}}$ $\displaystyle\equiv$ $\displaystyle
e^{i(H_{F}+V_{SF})\delta t/2},\,\,\,\,\mathcal{G_{B}}\equiv
e^{i(H_{B}+V_{SB})\delta t/2}$ $\displaystyle\mathcal{G_{S}}$
$\displaystyle\equiv$ $\displaystyle e^{iH_{S}\delta t}$ (6)
Note that the breakup of the subsystem-bath term,
$e^{i(H_{F}+V_{SF}+H_{B}+V_{SB})\delta
t/2}\sim\mathcal{G_{B}}\mathcal{G_{F}}$, is exact if the commutator
$[V_{SB},V_{SF}]$ vanishes. This fact allows for an exact separation between
the bosonic and fermionic influence functionals, as we explain below. This
commutator vanishes if the fermionic and bosonic baths couple to commuting
subsystem degrees of freedom, for example, $V_{SB}\propto|s\rangle\langle s|$
and $V_{SF}\propto|s^{\prime}\rangle\langle s^{\prime}|$.
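As a concrete illustration of this breakup (not part of the original formulation; small random Hermitian matrices stand in for the subsystem and bath blocks), the following sketch checks numerically that the symmetric splitting of Eq. (5) reproduces $e^{iH\delta t}$ with an error that shrinks as the time step is reduced:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)

def rand_herm(n):
    """Random Hermitian matrix, standing in for a Hamiltonian block."""
    a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (a + a.conj().T) / 2

n = 8                                  # toy Hilbert-space dimension
HS, HB, HF = rand_herm(n), rand_herm(n), rand_herm(n)
VSB, VSF = rand_herm(n), rand_herm(n)
H = HS + HB + HF + VSB + VSF

def trotter_step(dt):
    """Symmetric breakup of Eq. (5): G ~ G_F G_B G_S G_B G_F."""
    GF = expm(1j * (HF + VSF) * dt / 2)
    GB = expm(1j * (HB + VSB) * dt / 2)
    GS = expm(1j * HS * dt)
    return GF @ GB @ GS @ GB @ GF

for dt in (0.1, 0.05, 0.025):
    err = np.linalg.norm(trotter_step(dt) - expm(1j * H * dt))
    print(f"dt = {dt:5.3f}   splitting error = {err:.2e}")   # shrinks with dt
```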
As an initial condition, we assume that at time $t=0$ the subsystem and the
baths are decoupled, $\rho(0)=\rho_{S}(0)\otimes\rho_{B}\otimes\rho_{F}$, and
the baths are prepared in a nonequilibrium (biased) state. For example, we may
include in $H_{F}$ two Fermi seas that are prepared each in a grand canonical
state with different chemical potentials and temperatures. The overall time
evolution can be represented by a path-integral over the subsystem states,
$\displaystyle\langle s_{N}^{+}|\rho_{S}(t)|s_{N}^{-}\rangle$
$\displaystyle=\sum_{s^{\pm}_{0}}\sum_{s_{1}^{\pm}}...\sum_{s_{N-1}^{\pm}}{\rm
Tr}_{B}{\rm Tr}_{F}\Big{[}\langle
s_{N}^{+}|\mathcal{G}^{\dagger}|s^{+}_{N-1}\rangle\langle
s_{N-1}^{+}|\mathcal{G}^{\dagger}|s^{+}_{N-2}\rangle...\langle
s_{0}^{+}|\rho(0)|s^{-}_{0}\rangle...\langle
s_{N-2}^{-}|\mathcal{G}|s^{-}_{N-1}\rangle\langle
s_{N-1}^{-}|\mathcal{G}|s_{N}^{-}\rangle\Big{]}.$ (7)
Here $s_{k}^{\pm}$ represents the discrete path on the forward ($+$) and
backward ($-$) contour. The calculation of each discrete term is done by
introducing four additional summations, e.g.,
$\displaystyle\langle
s_{k}^{-}|{\mathcal{G}}|s_{k+1}^{-}\rangle=\sum_{f^{-}_{k}}\sum_{g^{-}_{k}}\sum_{m^{-}_{k}}\sum_{n^{-}_{k}}\langle
s_{k}^{-}|\mathcal{G_{F}}|f_{k}^{-}\rangle\langle
f_{k}^{-}|\mathcal{G_{B}}|m_{k}^{-}\rangle\langle
m_{k}^{-}|\mathcal{G_{S}}|n_{k}^{-}\rangle\langle
n_{k}^{-}|\mathcal{G_{B}}|g_{k}^{-}\rangle\langle
g_{k}^{-}|\mathcal{G_{F}}|s_{k+1}^{-}\rangle.$ (8)
We substitute Eq. (8) into Eq. (7), further utilizing the factorized
subsystem-reservoirs initial condition as mentioned above, and find that the
function under the sum can be written as a product of separate terms,
$\displaystyle\langle s_{N}^{+}|\rho_{S}(t)|s_{N}^{-}\rangle=\sum_{\bf
s^{\pm}}\sum_{\bf f^{\pm}}\sum_{\bf g^{\pm}}\sum_{\bf m^{\pm}}\sum_{\bf
n^{\pm}}I_{S}({\bf m^{\pm}},{\bf n^{\pm}},s^{\pm}_{0})I_{F}({\bf
s^{\prime\pm}},{\bf f^{\pm}},{\bf g^{\pm}})I_{B}({\bf f^{\pm}},{\bf
m^{\pm}},{\bf n^{\pm}},{\bf g^{\pm}}).$ (9)
Here $I_{S}$ follows the subsystem ($H_{S}$) free evolution. The term $I_{F}$
is referred to as a fermionic “influence functional” (IF), and it contains the
effect of the fermionic degrees of freedom on the subsystem dynamics.
Similarly, $I_{B}$, the bosonic IF, describes how the bosonic degrees of
freedom affect the subsystem. Bold letters correspond to a path, for example,
${\bf m^{\pm}}=\\{m_{0}^{\pm},m_{1}^{\pm},...,m_{N-1}^{\pm}\\}$. We also
define the path ${\bf
s^{\pm}}=\\{s_{0}^{\pm},s_{1}^{\pm},...,s_{N-1}^{\pm}\\}$, and the associate
path which covers $N+1$ points, ${\bf
s^{\prime\pm}}=\\{s_{0}^{\pm},s_{1}^{\pm},...,s_{N-1}^{\pm},s_{N}^{\pm}\\}$.
Given the product structure of Eq. (9), the subsystem, bosonic and the
fermionic terms can be independently evaluated, while coordinating their paths.
Explicitly, the elements in Eq. (9) are given by
$\displaystyle I_{S}$ $\displaystyle=$ $\displaystyle\langle
s_{0}^{+}|\rho_{S}(0)|s_{0}^{-}\rangle\Pi_{k=0,...,N-1}\langle
m_{k}^{-}|{\mathcal{G}_{S}}|n_{k}^{-}\rangle\langle
n_{k}^{+}|{\mathcal{G}_{S}^{\dagger}}|m_{k}^{+}\rangle$ $\displaystyle I_{F}$
$\displaystyle=$ $\displaystyle{\rm Tr}_{F}\Big{[}\langle
s_{N}^{+}|{\mathcal{G}_{F}^{\dagger}}|g_{N-1}^{+}\rangle\langle
f_{N-1}^{+}|{\mathcal{G}_{F}^{\dagger}}|s_{N-1}^{+}\rangle...$
$\displaystyle\times$ $\displaystyle\langle
s_{1}^{+}|{\mathcal{G}_{F}^{\dagger}}|g_{0}^{+}\rangle\langle
f_{0}^{+}|{\mathcal{G}_{F}^{\dagger}}|s_{0}^{+}\rangle\rho_{F}\langle
s_{0}^{-}|{\mathcal{G}_{F}}|f_{0}^{-}\rangle\langle
g_{0}^{-}|{\mathcal{G}_{F}}|s_{1}^{-}\rangle...$ $\displaystyle\times$
$\displaystyle\langle s_{N-1}^{-}|{\mathcal{G}_{F}}|f_{N-1}^{-}\rangle\langle
g_{N-1}^{-}|{\mathcal{G}_{F}}|s_{N}^{-}\rangle\Big{]}$ $\displaystyle I_{B}$
$\displaystyle=$ $\displaystyle{\rm Tr}_{B}\Big{[}\langle
g_{N-1}^{+}|{\mathcal{G}_{B}^{\dagger}}|n_{N-1}^{+}\rangle\langle
m_{N-1}^{+}|{\mathcal{G}_{B}^{\dagger}}|f_{N-1}^{+}\rangle...$ (10)
$\displaystyle\times$ $\displaystyle\langle
g_{0}^{+}|{\mathcal{G}_{B}^{\dagger}}|n_{0}^{+}\rangle\langle
m_{0}^{+}|{\mathcal{G}_{B}^{\dagger}}|f_{0}^{+}\rangle\rho_{B}\langle
f_{0}^{-}|{\mathcal{G}_{B}}|m_{0}^{-}\rangle\langle
n_{0}^{-}|{\mathcal{G}_{B}}|g_{0}^{-}\rangle...$ $\displaystyle\times$
$\displaystyle\langle f_{N-1}^{-}|{\mathcal{G}_{B}}|m_{N-1}^{-}\rangle\langle
n_{N-1}^{-}|{\mathcal{G}_{B}}|g_{N-1}^{-}\rangle\Big{]}.$
The dynamics in Eq. (9) can be retrieved by following an iterative scheme, by
using the principles of the INFPI approach Segal _et al._ (2010); *IF2. In
the next section we illustrate this evolution with a spin subsystem.
## III Iterative time evolution scheme
We consider here the spin-boson-fermion model. It includes a two-state
subsystem that is coupled through its polarization to bosonic and fermionic
reservoirs. With this relatively simple model, we exemplify the iterative
propagation technique, see Secs. III.1-III.5. Relevant expressions for a
multi-level subsystem and general interaction form are included in Sec. III.6.
### III.1 Spin-boson-fermion model
The spin-fermion model, in which a qubit (spin) couples to a fermionic bath, is
akin to the well-known spin-boson model, which describes a qubit interacting
with a bosonic environment. It is also related to the Kondo model Kondo (1964), only
lacking direct coupling of the reservoir degrees of freedom to spin-flip
processes. It provides a minimal setting for the study of dissipation and
decoherence effects in the presence of nonequilibrium reservoirs Mitra and
Millis (2005, 2007); Segal _et al._ (2007); Lutchyn _et al._ (2008). Here we
combine the spin-boson and spin-fermion models, and present the resulting
Hamiltonian in the general form,
$\displaystyle H_{S}$ $\displaystyle=$
$\displaystyle\Delta\sigma_{x}+B\sigma_{z},\,\,\,\,\,$ $\displaystyle H_{F}$
$\displaystyle=$
$\displaystyle\sum_{j}\epsilon_{j}c_{j}^{\dagger}c_{j}+\sum_{j\neq
j^{\prime}}v_{j,j^{\prime}}^{F}c_{j}^{\dagger}c_{j^{\prime}}$ $\displaystyle
V_{SF}$ $\displaystyle=$
$\displaystyle\sigma_{z}\sum_{j,j^{\prime}}\xi^{F}_{j,j^{\prime}}c_{j}^{\dagger}c_{j^{\prime}}.$
$\displaystyle H_{B}$ $\displaystyle=$
$\displaystyle\sum_{p}\omega_{p}b_{p}^{\dagger}b_{p}+\sum_{p,p^{\prime}}v_{p,p^{\prime}}^{B}b_{p}^{\dagger}b_{p^{\prime}},$
$\displaystyle V_{SB}$ $\displaystyle=$
$\displaystyle\sigma_{z}\sum_{p}\xi_{p}^{B}\left(b_{p}^{\dagger}+b_{p}\right)+\sigma_{z}\sum_{p,p^{\prime}}\zeta_{p,p^{\prime}}^{B}b_{p}^{\dagger}b_{p^{\prime}}.$
(11)
The subsystem includes only two states, with an energy gap $2B$ and a
tunneling splitting $2\Delta$. This minimal subsystem is coupled here through
its polarization to a set of boson and fermion degrees of freedom, where
$\sigma_{z}$ and $\sigma_{x}$ denote the $z$ and $x$ Pauli matrices for a two-
state subsystem, respectively. $b_{p}$ stands for a bosonic operator, to
destroy a mode of frequency $\omega_{p}$, similarly, $c_{j}$ is a fermionic
operator, to annihilate an electron of energy $\epsilon_{j}$ (we assume later
a linear dispersion relation). In this model, spin polarization couples to
harmonic displacements, to scattering events between electronic states in the
metals (Fermi reservoirs), and to scattering events between different modes in
the harmonic bath. Since the commutator between the interaction terms vanishes,
$[V_{SF},V_{SB}]=0$, the separation between the bosonic and fermionic IFs is
exact. Moreover, since the fermionic and bosonic operators couple both to
$\sigma_{z}$, we immediately note that $f_{k}^{\pm}=s_{k}^{\pm}$,
$m_{k}^{\pm}=f_{k}^{\pm}$, $n_{k}^{\pm}=g_{k}^{\pm}$ and
$g_{k}^{\pm}=s_{k+1}^{\pm}$. Eq. (9) then simplifies to
$\displaystyle\langle s_{N}^{+}|\rho_{S}(t)|s_{N}^{-}\rangle=\sum_{\bf
s^{\pm}}I_{S}({\bf s^{\prime\pm}})I_{F}({\bf s^{\prime\pm}})I_{B}({\bf
s^{\prime\pm}}),$ (12)
where we recall the definitions of the paths ${\bf
s^{\pm}}=\\{s_{0}^{\pm},s_{1}^{\pm},...,s_{N-1}^{\pm}\\}$ and ${\bf
s^{\prime\pm}}=\\{s_{0}^{\pm},s_{1}^{\pm},...,s_{N-1}^{\pm},s_{N}^{\pm}\\}$.
The subsystem evolution and the IFs are now given by
$\displaystyle I_{S}({\bf s^{\prime\pm}})=\langle
s_{0}^{+}|\rho_{S}(0)|s_{0}^{-}\rangle
K(s_{N}^{\pm},s_{N-1}^{\pm})...K(s_{2}^{\pm},s_{1}^{\pm})K(s_{1}^{\pm},s_{0}^{\pm})$
$\displaystyle I_{B}({\bf s^{\prime\pm}})={\rm
Tr}_{B}\Big{[}e^{-iW_{B}(s_{N}^{+})\delta t/2}e^{-iW_{B}(s_{N-1}^{+})\delta
t}...e^{-iW_{B}(s_{0}^{+})\delta t/2}\rho_{B}e^{iW_{B}(s_{0}^{-})\delta
t/2}....e^{iW_{B}(s_{N-1}^{-})\delta t}e^{iW_{B}(s_{N}^{-})\delta
t/2}\Big{]}.$ $\displaystyle I_{F}({\bf s^{\prime\pm}})={\rm
Tr}_{F}\Big{[}e^{-iW_{F}(s_{N}^{+})\delta t/2}e^{-iW_{F}(s_{N-1}^{+})\delta
t}...e^{-iW_{F}(s_{0}^{+})\delta t/2}\rho_{F}e^{iW_{F}(s_{0}^{-})\delta
t/2}....e^{iW_{F}(s_{N-1}^{-})\delta t}e^{iW_{F}(s_{N}^{-})\delta
t/2}\Big{]},$ (13)
where
$\displaystyle K(s_{k+1}^{\pm},s_{k}^{\pm})=\langle
s_{k+1}^{+}|e^{-iH_{S}\delta t}|s_{k}^{+}\rangle\langle
s_{k}^{-}|e^{iH_{S}\delta t}|s_{k+1}^{-}\rangle$ (14)
is the propagator matrix for the subsystem. We have also used the short
notation $W$ for bath operators that are evaluated along the path,
$\displaystyle W_{F}(s)$ $\displaystyle=$ $\displaystyle H_{F}+\langle
s|V_{SF}|s\rangle,$ $\displaystyle W_{B}(s)$ $\displaystyle=$ $\displaystyle
H_{B}+\langle s|V_{SB}|s\rangle.$ (15)
In the next sections we explain how we compute the bosonic and fermionic IFs.
The former has a closed analytic form in certain situations. The latter is
computed only numerically.
### III.2 Bosonic IF
We present the structure of the bosonic IF in two separate models,
corresponding to different types of subsystem-boson bath interactions. In both
cases the bosonic bath is prepared in a canonical state of inverse temperature
$\beta_{ph}=1/T_{ph}$,
$\displaystyle\rho_{B}=e^{-\beta_{ph}H_{B}}/{\rm
Tr}_{B}[e^{-\beta_{ph}H_{B}}].$ (16)
Displacement interaction model, $v_{p,p^{\prime}}^{B}=0$ and
$\zeta_{p,p^{\prime}}^{B}=0$. Given the remaining linear displacement-
polarization interaction, an analytic form for the bosonic IF can be written,
the so-called “Feynman-Vernon” influence functional (FV IF) Feynman and Hibbs
(1965). In its time-discrete form, the bosonic IF is given by an exponential with
pairwise interactions along the path Makri and Makarov (1995a); *QUAPI2
$\displaystyle
I_{B}(s_{0}^{\pm},...,s_{N}^{\pm})=\exp\left[-\sum_{k=0}^{N}\sum_{k^{\prime}=0}^{k}(s_{k}^{+}-s_{k}^{-})(\eta_{k,k^{\prime}}s_{k^{\prime}}^{+}-\eta_{k,k^{\prime}}^{*}s_{k^{\prime}}^{-})\right].$
(17)
The coefficients $\eta_{k,k^{\prime}}$ are additive in the number of thermal
baths, and they depend on these baths’ spectral functions and initial
temperatures Makri and Makarov (1995a); *QUAPI2. For completeness, these
coefficients are included in Appendix A.
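To illustrate Eq. (17), here is a minimal sketch that evaluates the discretized FV influence functional for a given forward/backward spin path; the coefficients $\eta_{k,k^{\prime}}$ below are random placeholders, whereas in an actual calculation they follow from the bath spectral function and temperature (Appendix A):

```python
import numpy as np

def feynman_vernon_if(s_plus, s_minus, eta):
    """Discretized FV influence functional, Eq. (17).
    s_plus, s_minus: spin values (+1/-1) along the forward/backward path, length N+1.
    eta: (N+1)x(N+1) complex coefficients; only the k' <= k entries are used."""
    N = len(s_plus) - 1
    phase = 0.0 + 0.0j
    for k in range(N + 1):
        for kp in range(k + 1):
            phase += (s_plus[k] - s_minus[k]) * (
                eta[k, kp] * s_plus[kp] - np.conj(eta[k, kp]) * s_minus[kp])
    return np.exp(-phase)

# toy usage with placeholder coefficients
rng = np.random.default_rng(1)
N = 6
eta = 0.05 * (rng.normal(size=(N + 1, N + 1)) + 1j * rng.normal(size=(N + 1, N + 1)))
s_p = rng.choice([-1, 1], size=N + 1)
s_m = rng.choice([-1, 1], size=N + 1)
print(feynman_vernon_if(s_p, s_m, eta))
```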
Boson scattering model, $\xi_{p}^{B}=0$. The bosonic IF can now be computed
numerically, by using the trace formula for bosons Klich (2003)
$\displaystyle{\rm
Tr}_{B}[e^{M_{1}}e^{M_{2}}...e^{M_{k}}]=\det[1-e^{m_{1}}e^{m_{2}}...e^{m_{k}}]^{-1}.$
(18)
Here $m_{k}$ is a single particle operator corresponding to a quadratic
bosonic operator
$M_{k}=\sum_{p,p^{\prime}}(m_{k})_{p,p^{\prime}}b_{p}^{\dagger}b_{p^{\prime}}$.
Application of the trace formula to the bosonic IF (13) leads to
$\displaystyle I_{B}$ $\displaystyle=$ $\displaystyle{\rm
Tr}_{B}[e^{M_{1}}e^{M_{2}}...e^{M_{k}}\rho_{B}]$ (19) $\displaystyle=$
$\displaystyle{\rm
det}\Big{\\{}[\hat{I}_{B}+f_{B}]-e^{m_{1}}e^{m_{2}}...e^{m_{k}}f_{B}\Big{\\}}^{-1}.$
The matrix $\hat{I}_{B}$ is an identity matrix, and the function $f_{B}$
stands for the Bose-Einstein distribution,
$f_{B}=[e^{\beta_{ph}\omega}-1]^{-1}$. The determinant in Eq. (19) can be
evaluated numerically by taking into account $L_{B}$ modes for the boson bath.
This discretization implies a numerical error. Generalizations to include
more than one bosonic bath are immediate.
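A minimal sketch of how Eq. (19) can be evaluated numerically on a grid of $L_{B}$ modes is given below; the single-particle matrices passed in are placeholders standing in for the $\pm iW_{B}(s)\delta t$ blocks of Eq. (13):

```python
import numpy as np
from functools import reduce
from scipy.linalg import expm

def bosonic_if(m_list, omega, beta_ph):
    """Eq. (19): I_B = det{ (1 + f_B) - e^{m1} e^{m2} ... e^{mk} f_B }^{-1}.
    m_list: list of L_B x L_B single-particle matrices, one per exponential in Eq. (13).
    omega:  mode frequencies of the discretized bath; beta_ph: inverse temperature."""
    LB = len(omega)
    fB = np.diag(1.0 / (np.exp(beta_ph * omega) - 1.0))   # Bose-Einstein occupations
    prod = reduce(np.matmul, [expm(m) for m in m_list])
    return 1.0 / np.linalg.det((np.eye(LB) + fB) - prod @ fB)

# toy usage: placeholder matrices on an L_B = 20 mode grid
rng = np.random.default_rng(2)
LB, beta_ph = 20, 5.0
omega = np.linspace(0.05, 2.0, LB)
m_list = [1j * 0.05 * (np.diag(omega) + 0.1 * rng.normal(size=(LB, LB))) for _ in range(4)]
print(bosonic_if(m_list, omega, beta_ph))
```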
### III.3 Fermionic IF
The fermionic IF is computed numerically since an exact analytic form is not
known in the general strong coupling limit Mitra and Millis (2005, 2007);
Segal _et al._ (2007). It is calculated by using the trace formula for
fermions Klich (2003)
$\displaystyle{\rm
Tr}_{F}[e^{M_{1}}e^{M_{2}}...e^{M_{k}}]=\det[1+e^{m_{1}}e^{m_{2}}...e^{m_{k}}].$
(20)
Here $m_{k}$ is a single particle operator corresponding to a quadratic
operator $M_{k}=\sum_{i,j}(m_{k})_{i,j}c_{i}^{\dagger}c_{j}$. In the next
section we consider a model with two Fermi seas, $H_{F}=H_{L}+H_{R}$, prepared
in a factorized state of distinct grand canonical states,
$\rho_{F}=\rho_{L}\otimes\rho_{R}$, with
$\displaystyle\rho_{\nu}=e^{-\beta_{\nu}(H_{\nu}-\mu_{\nu}N_{\nu})}/{\rm
Tr}_{F}[e^{-\beta_{\nu}(H_{\nu}-\mu_{\nu}N_{\nu})}],\,\,\,\,\ \nu=L,R$ (21)
Here $\beta_{\nu}=1/T_{\nu}$ stands for an inverse temperature, and
$\mu_{\nu}$ denotes the chemical potential of the $\nu$ bath. Application of
the trace formula to the fermionic IF in Eq. (13) leads to
$\displaystyle I_{F}$ $\displaystyle=$ $\displaystyle{\rm
Tr}_{F}[e^{M_{1}}e^{M_{2}}...e^{M_{k}}\rho_{F}]$ (22) $\displaystyle=$
$\displaystyle{\rm
det}\Big{\\{}[\hat{I}_{L}-f_{L}]\otimes[\hat{I}_{R}-f_{R}]+e^{m_{1}}e^{m_{2}}...e^{m_{k}}[f_{L}\otimes
f_{R}]\Big{\\}}.$
The matrices $\hat{I}_{\nu}$ are the identity matrices for the $\nu=L,R$
space. The functions $f_{L}$ and $f_{R}$ are the electron energy
distributions of the two metals, $f_{\nu}=[e^{\beta_{\nu}(\epsilon-\mu_{\nu})}+1]^{-1}$. The
determinant in Eq. (22) can be evaluated numerically by taking into account
$L_{s}$ electronic states for each metal. This discretization implies a
numerical error.
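The fermionic determinant of Eq. (22) can be evaluated along the same lines. The sketch below builds toy single-particle matrices for $W_{F}(s)=H_{F}+\langle s|V_{SF}|s\rangle$ on $L_{s}$ states per lead; the coupling matrix is a placeholder, and the end-slice factors of $\delta t/2$ in Eq. (13) are ignored for brevity:

```python
import numpy as np
from functools import reduce
from scipy.linalg import expm

Ls, dt, beta, kappa = 30, 0.5, 50.0, 0.05     # states per lead, time step, 1/T, coupling
eps_L = np.linspace(-1, 1, Ls)                # single-particle energies, lead L
eps_R = np.linspace(-1, 1, Ls)                # single-particle energies, lead R
mu_L, mu_R = 0.2, -0.2                        # chemical potentials of the biased junction

# single-particle matrices of H_F and of <s|V_SF|s> = s * xi (xi is a placeholder)
h0 = np.diag(np.concatenate([eps_L, eps_R]))
xi = np.zeros((2 * Ls, 2 * Ls))
xi[:Ls, Ls:] = kappa / Ls                     # placeholder L <-> R scattering amplitudes
xi = xi + xi.T

def w_F(s):
    """Single-particle matrix of W_F(s) = H_F + <s|V_SF|s>, Eq. (15)."""
    return h0 + s * xi

occ = np.concatenate([1 / (np.exp(beta * (eps_L - mu_L)) + 1),
                      1 / (np.exp(beta * (eps_R - mu_R)) + 1)])
f = np.diag(occ)                              # grand-canonical occupations, Eq. (21)

def fermionic_if(path_plus, path_minus):
    """Eq. (22) for a short spin path (uniform dt for simplicity).
    Cyclicity of the trace moves rho_F to the right: Tr[U_+ rho_F U_-] = Tr[U_- U_+ rho_F]."""
    mats = [expm(+1j * w_F(s) * dt) for s in path_minus]            # backward branch, s_0 ... s_N
    mats += [expm(-1j * w_F(s) * dt) for s in reversed(path_plus)]  # forward branch,  s_N ... s_0
    prod = reduce(np.matmul, mats)
    return np.linalg.det((np.eye(2 * Ls) - f) + prod @ f)

print(fermionic_if([+1, -1, +1], [+1, +1, -1]))
```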
### III.4 The iterative scheme
The dynamics described by Equation (12) includes long-range interactions along
the path, limiting brute force direct numerical simulations to very short
times. The iterative scheme, developed in Ref. Segal _et al._ (2010); *IF2,
is based on the observation that in standard nonequilibrium situations and at
finite temperatures bath correlations decay exponentially Segal _et al._
(2007); Hutzen _et al._ (2012), thus the IF can be truncated beyond a memory
time $\tau_{c}=N_{s}\delta t$, corresponding to the time over which bath
correlations persist. Here $N_{s}$ is an integer, $\delta t$ is the
discretized time step, and the correlation time $\tau_{c}$ is dictated by the
bias and temperature. Roughly, for a system under a potential bias $\Delta\mu$
and a temperature $T$, $\tau_{c}\sim\max\\{1/T,\,\,1/\Delta\mu\\}$ Segal _et
al._ (2010); *IF2. By recursively breaking the IF to include terms only within
$\tau_{c}$, we reach the following (non-unique) structure for the $\alpha=B,F$
influence functional,
$\displaystyle
I_{\alpha}(s_{0}^{\pm},s_{1}^{\pm},s_{2}^{\pm},...,s_{N}^{\pm})\approx
I_{\alpha}(s_{0}^{\pm},s_{1}^{\pm},...,s_{N_{s}}^{\pm})I_{\alpha}^{(N_{s})}(s_{1}^{\pm},s_{2}^{\pm},...,s_{N_{s}+1}^{\pm})I_{\alpha}^{(N_{s})}(s_{2}^{\pm},s_{3}^{\pm},...,s_{N_{s}+2}^{\pm})...$
$\displaystyle\times
I_{\alpha}^{(N_{s})}(s_{N-N_{s}}^{\pm},s_{N-N_{s}+1}^{\pm},...,s_{N}^{\pm}),$
(23)
where we identify the “truncated IF”, $I_{\alpha}^{(N_{s})}$, as the ratio
between two IFs, with the numerator calculated with an additional time step,
$\displaystyle
I_{\alpha}^{(N_{s})}(s_{k},s_{k+1},...,s_{k+N_{s}})=\frac{I_{\alpha}(s_{k}^{\pm},s_{k+1}^{\pm},...,s_{k+N_{s}}^{\pm})}{I_{\alpha}(s_{k}^{\pm},s_{k+1}^{\pm},...,s_{k+N_{s}-1}^{\pm})}.$
(24)
The truncated IF is the central object in our calculations. For fermions, its
numerator and denominator are separately computed using Eq. (22). The bosonic
IF is similarly computed with the help of Eq. (19) when $\xi_{p}^{B}=0$. In
the complementary case, $\zeta_{p,p^{\prime}}^{B}=0$ and
$v_{p,p^{\prime}}^{B}=0$, the truncated-bosonic IF has a closed analytic form:
Using Eq. (17) we find that it comprises only two-body interactions, of
$s_{k+N_{s}}$ with the preceding spins, down to $s_{k}$,
$\displaystyle
I_{B}^{(N_{s})}(s_{k},s_{k+1},...,s_{k+N_{s}})=\exp\left[-\sum_{k^{\prime}=k}^{k+N_{s}}(s_{k+N_{s}}^{+}-s_{k+N_{s}}^{-})(\eta_{k+N_{s},k^{\prime}}s_{k^{\prime}}^{+}-\eta_{k+N_{s},k^{\prime}}^{*}s_{k^{\prime}}^{-})\right].$
Based on the decompositions (24) and (III.4), we time-evolve Eq. (12)
iteratively, by defining a multi-time reduced density matrix
$\tilde{\rho}_{S}(s_{k},s_{k+1},..,s_{k+N_{s}-1})$. Its initial value is given
by
$\displaystyle\tilde{\rho}_{S}(s_{0}^{\pm},...,s_{N_{s}}^{\pm})=I_{S}(s_{0}^{\pm},...,s_{N_{s}}^{\pm})I_{B}(s_{0}^{\pm},...,s_{N_{s}}^{\pm})I_{F}(s_{0}^{\pm},...,s_{N_{s}}^{\pm}).$
(26)
Its evolution is dictated by
$\displaystyle\tilde{\rho}_{S}(s_{k+1}^{\pm},...,s_{k+N_{s}}^{\pm})=\sum_{s_{k}^{\pm}}\tilde{\rho}_{S}(s_{k}^{\pm},...,s_{k+N_{s}-1}^{\pm})K(s_{k+N_{s}}^{\pm},s_{k+N_{s}-1}^{\pm})$
$\displaystyle\times
I_{F}^{(N_{s})}(s_{k}^{\pm},...,s_{k+N_{s}}^{\pm})I_{B}^{(N_{s})}(s_{k}^{\pm},...,s_{k+N_{s}}^{\pm}).$
(27)
The time-local ($t_{k}=k\delta t$) reduced density matrix, describing the
state of the subsystem at a certain time, is reached by summing over all
intermediate states,
$\displaystyle\rho_{S}(t_{k})=\sum_{s_{k-1}^{\pm}...s_{k-N_{s}+1}^{\pm}}\tilde{\rho}_{S}(s_{k-N_{s}+1}^{\pm},...,s_{k}^{\pm}).$
(28)
The bosonic and fermionic IFs may be (and often this is the case)
characterized by different memory times. Thus, in principle we could truncate
the fermionic IF to include $N_{s}^{F}$ terms, and the bosonic IF to include
$N_{s}^{B}$ elements. However, the efficiency of the computation is dictated
by the longest memory time, thus, for convenience, we truncate both IFs using
the largest value, identified by $N_{s}$.
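The propagation of Eqs. (26)-(28) can be organized as a contraction over a rank-$N_{s}$ tensor whose indices label the forward/backward spin pairs of the last $N_{s}$ time slices. The sketch below is a simplified rendering of this bookkeeping (uniform slices, $N_{s}\geq 2$, the truncated IF supplied as a callable); with the IF set to unity it reduces to free subsystem evolution and conserves the trace, in line with the discussion that follows:

```python
import itertools
import numpy as np
from scipy.linalg import expm

# Each time slice carries a pair (s^+, s^-); a two-state subsystem has 4 such pairs.
PAIRS = [(+1, +1), (+1, -1), (-1, +1), (-1, -1)]
IDX = {+1: 0, -1: 1}                     # basis ordering: sigma_z = +1 -> 0, -1 -> 1

def propagator_K(HS, dt):
    """Subsystem propagator K(s_{k+1}^pm, s_k^pm) of Eq. (14), indexed by pair labels."""
    U = expm(-1j * HS * dt)
    K = np.zeros((4, 4), dtype=complex)
    for a, (sp1, sm1) in enumerate(PAIRS):           # slice k+1
        for b, (sp0, sm0) in enumerate(PAIRS):       # slice k
            K[a, b] = U[IDX[sp1], IDX[sp0]] * np.conj(U[IDX[sm1], IDX[sm0]])
    return K

def evolve(rho_tilde, K, trunc_IF, Ns):
    """One step of Eq. (27): sum out the oldest slice and attach a new one (Ns >= 2).
    trunc_IF(path) returns I_F^{(Ns)} * I_B^{(Ns)} for a path of Ns+1 pair labels."""
    new = np.zeros_like(rho_tilde)
    for tail in itertools.product(range(4), repeat=Ns):        # slices k+1 ... k+Ns
        acc = 0.0 + 0.0j
        for head in range(4):                                  # summed slice k
            path = (head,) + tail
            acc += rho_tilde[path[:-1]] * K[tail[-1], tail[-2]] * trunc_IF(path)
        new[tail] = acc
    return new

def time_local_rho(rho_tilde):
    """Eq. (28): sum over all but the latest slice; return the 2x2 reduced density matrix."""
    marg = rho_tilde.sum(axis=tuple(range(rho_tilde.ndim - 1)))
    rho = np.zeros((2, 2), dtype=complex)
    for a, (sp, sm) in enumerate(PAIRS):
        rho[IDX[sp], IDX[sm]] = marg[a]
    return rho

# toy run with the influence functionals set to one (decoupled baths)
HS = np.array([[0.2, 0.1], [0.1, -0.2]])        # placeholder: B*sigma_z + Delta*sigma_x
dt, Ns = 0.1, 3
K = propagator_K(HS, dt)
rho0 = np.array([[1.0, 0.0], [0.0, 0.0]], dtype=complex)

rho_tilde = np.zeros((4,) * Ns, dtype=complex)   # auxiliary density over the first Ns slices
for path in itertools.product(range(4), repeat=Ns):
    sp0, sm0 = PAIRS[path[0]]
    val = rho0[IDX[sp0], IDX[sm0]]
    for j in range(1, Ns):
        val *= K[path[j], path[j - 1]]
    rho_tilde[path] = val

for step in range(5):
    rho_tilde = evolve(rho_tilde, K, lambda path: 1.0, Ns)
    print(np.trace(time_local_rho(rho_tilde)).real)   # stays 1 by construction
```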
By construction, this iterative approach conserves the trace of the reduced
density matrix, ensuring the stability of the iterative algorithm to long
times Makri and Makarov (1995a); *QUAPI2. This property can be inferred from
Eqs. (12) and (13), by using the formal expressions for the truncated IFs, Eq.
(23) and (24). To prove this property, we trace over the reduced density
matrix at time $t$, identifying $s_{N}=s_{N}^{+}=s_{N}^{-}$,
$\displaystyle{\rm Tr}_{S}[\rho_{S}(t)]$ $\displaystyle\equiv$
$\displaystyle\sum_{s_{N}}\langle s_{N}|\rho_{S}(t)|s_{N}\rangle$
$\displaystyle=$ $\displaystyle\sum_{{\bf s^{\prime\pm}}}I_{S}({\bf
s^{\prime\pm}})I_{F}({\bf s^{\prime\pm}})I_{B}({\bf
s^{\prime\pm}})\delta(s_{N}^{+}-s_{N}^{-})$
Using the cyclic property of the trace, we note that both the fermionic and
bosonic IFs are independent of $s_{N}$, when $s_{N}^{+}=s_{N}^{-}$. Therefore,
the summation over the $s_{N}$ coordinate reduces to a simple sum which can be
performed using the completeness relation for the subsystem states, resulting
in
$\displaystyle\sum_{s_{N}}\langle s_{N}|e^{-iH_{S}\delta
t}|s_{N-1}^{+}\rangle\langle s_{N-1}^{-}|e^{iH_{S}\delta
t}|s_{N}\rangle=\delta(s_{N-1}^{+}-s_{N-1}^{-}).$ (29)
Iterating in this manner we conclude that
$\displaystyle{\rm Tr}_{S}[\rho_{S}(t)]$ $\displaystyle\equiv$
$\displaystyle\sum_{s_{N}}\langle s_{N}|\rho_{S}(t)|s_{N}\rangle$ (30)
$\displaystyle=$ $\displaystyle\sum_{{\bf s^{\prime\pm}}}I_{S}({\bf
s^{\prime\pm}})I_{F}({\bf s^{\prime\pm}})I_{B}({\bf
s^{\prime\pm}})\delta(s_{N}^{+}-s_{N}^{-})\delta(s_{N-1}^{+}-s_{N-1}^{-})...\delta(s_{1}^{+}-s_{1}^{-})\delta(s_{0}^{+}-s_{0}^{-})$
$\displaystyle=$ $\displaystyle\sum_{s_{0}}\langle
s_{0}|\rho_{S}(0)|s_{0}\rangle={\rm Tr}_{S}[\rho_{S}(0)]$
We emphasize that the trace conservation is maintained even with the use of
the truncated form for the IFs. Moreover, it holds irrespective of the details
of the bath and the system-bath interaction form. It is also obeyed in the
more general case, Eq. (9). Equation (27) [and its generalized form, Eq. (34)
below], describe a linear map. Its fixed points are stable if the eigenvalues
of the map have modulus less than one, which is the case here. Thus, our
scheme is expected to approach a stationary state in the long time limit.
### III.5 Expectation values for operators
Besides the reduced density matrix, we can also acquire the time evolution of
several expectation values. Adopting the Hamiltonian (11), we illustrate next
how we obtain the charge current. For simplicity, we consider the
case with only two fermionic reservoirs, $\nu=L,R$. The current operator,
e.g., at the $L$ bath is defined as the time derivative of the number
operator. The expectation value of this current is given by
$\displaystyle j_{L}=-\frac{d}{dt}{\rm Tr}[\rho
N_{L}],\,\,\,\,\,N_{L}\equiv\sum_{j\in L}c_{j}^{\dagger}c_{j}$ (31)
We consider the time evolution of the related exponential operator $e^{\lambda
N_{L}}$, with $\lambda$ a real number that is taken to vanish at the end of
the calculation,
$\displaystyle\langle N_{L}(t)\rangle$ $\displaystyle\equiv$
$\displaystyle{\rm Tr}\left[\rho N_{L}(t)\right]$ (32) $\displaystyle=$
$\displaystyle\lim_{\lambda\rightarrow 0}\frac{\partial}{\partial\lambda}{\rm
Tr}\big{[}\rho(0)e^{iHt}e^{\lambda N_{L}}e^{-iHt}\big{]}.$
As before, the initial condition is factorized at $t=0,$
$\rho(0)=\rho_{S}(0)\otimes\rho_{B}\otimes\rho_{F}$. The trace is performed
over subsystem and reservoirs degrees of freedom. By following the same steps
as in Eqs. (3)-(7), we reach the path-integral expression
$\displaystyle\langle e^{\lambda
N_{L}(t)}\rangle=\sum_{s_{0}^{\pm}}\sum_{s_{1}^{\pm}}...\sum_{s_{N-1}^{\pm}}\sum_{s_{N}}{\rm
Tr}_{B}{\rm Tr}_{F}\Big{[}e^{\lambda N_{L}}\langle
s_{N}|\mathcal{G}^{\dagger}|s^{+}_{N-1}\rangle\langle
s_{N-1}^{+}|\mathcal{G}^{\dagger}|s^{+}_{N-2}\rangle...$ (33)
$\displaystyle\times$ $\displaystyle\langle
s_{0}^{+}|\rho(0)|s_{0}^{-}\rangle...\langle
s^{-}_{N-2}|\mathcal{G}|s^{-}_{N-1}\rangle\langle
s_{N-1}^{-}|\mathcal{G}|s_{N}\rangle\Big{]}.$
Factorizing the time evolution operators using Eq. (5), we arrive at the
compact form
$\displaystyle\langle e^{\lambda N_{L}(t)}\rangle$ $\displaystyle=$
$\displaystyle\sum_{\bf s^{\prime\pm}}I_{S}({\bf s^{\prime\pm}})I_{B}({\bf
s^{\prime\pm}})\tilde{I}_{F}({\bf s^{\prime\pm}})\delta(s_{N}^{+}-s_{N}^{-}).$
The terms $I_{S}$ and $I_{B}$ are given in Eq. (13). The fermionic IF
accommodates an additional exponent,
$\displaystyle\tilde{I}_{F}({\bf s^{\prime\pm}})={\rm Tr}_{F}\Big{[}e^{\lambda
N_{L}}e^{-iW_{F}(s_{N}^{+})\delta t/2}e^{-iW_{F}(s_{N-1}^{+})\delta
t}...e^{-iW_{F}(s_{0}^{+})\delta t/2}\rho_{F}e^{iW_{F}(s_{0}^{-})\delta
t/2}....e^{iW_{F}(s_{N-1}^{-})\delta t}e^{iW_{F}(s_{N}^{-})\delta
t/2}\Big{]}.$
We can time evolve the operator $\langle e^{\lambda N_{L}}\rangle$ by using
the iterative scheme of Sec. III.4, by truncating the bosonic and fermionic
IFs up to the memory time $\tau_{c}=N_{s}\delta t$, for several values of
$\lambda$. We then take the numerical derivative with respect to $\lambda$ and
$t$, to obtain the charge current itself.
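In practice, the two derivatives can be taken by finite differences. A minimal sketch (with synthetic input data standing in for actual path-integral output) is:

```python
import numpy as np

def charge_current(exp_lambda_NL, lambdas, dt):
    """Extract j_L(t) = -d<N_L>/dt from path-integral data for <e^{lambda N_L}(t)>.
    exp_lambda_NL: array of shape (2, n_t) computed at the two lambda values;
    lambdas: small values symmetric about zero, e.g. (-1e-3, +1e-3)."""
    lam_m, lam_p = lambdas
    NL_t = (exp_lambda_NL[1] - exp_lambda_NL[0]) / (lam_p - lam_m)  # d/d lambda at 0
    return -np.gradient(NL_t.real, dt)                              # minus d/dt, Eq. (31)

# toy usage with synthetic data: a linearly decaying <N_L>(t) gives a constant current
dt = 0.05
t = np.arange(0.0, 5.0, dt)
NL = 10.0 - 0.3 * t
lams = (-1e-3, +1e-3)
data = np.array([np.exp(l * NL) for l in lams])
print(charge_current(data, lams, dt)[:5])        # approximately 0.3
```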
The approach explained here could be used to explore several fermionic
operators, for example, the averaged current $j_{av}=(j_{L}-j_{R})/2$. The
minus sign in front of $j_{R}$ originates from our sign convention, with the
current defined positive when flowing $L$ to $R$. The implementation of a heat
current operator, describing the heat current flowing between two bosonic
reservoirs, requires first the derivation of an analytic form for the bosonic
IF, an expression analogous to the FV IF, and the subsequent time
discretization of this IF, to reach an expression analogous to (17).
### III.6 Expression for multilevel subsystems and general interactions
So far we have detailed the iterative time evolution scheme for the spin-
boson-fermion model (11). The procedure can be extended, to treat more complex
cases. Based on the general principles outlined in Sec. III.4, one notes that
the path-integral expression (9) can be evaluated iteratively by generalizing
Eq. (27) to the form
$\displaystyle\tilde{\rho}_{S}(v_{k+1}^{\pm},...,v_{k+N_{s}}^{\pm})=$
$\displaystyle\sum_{v_{k}^{\pm}}\tilde{\rho}_{S}(v_{k}^{\pm},...,v_{k+N_{s}-1}^{\pm})K(m_{k+N_{s}}^{\pm},n_{k+N_{s}}^{\pm})I_{F}^{(N_{S})}(s_{k}^{\pm},f_{k}^{\pm},g_{k}^{\pm},...,s_{k+N_{s}}^{\pm},f_{k+N_{s}}^{\pm},g_{k+N_{s}}^{\pm})$
$\displaystyle\times
I_{B}^{(N_{s})}(f_{k}^{\pm},g_{k}^{\pm},m_{k}^{\pm},n_{k}^{\pm},...,f_{k+N_{s}}^{\pm},g_{k+N_{s}}^{\pm},m_{k+N_{s}}^{\pm},n_{k+N_{s}}^{\pm})$
(34)
where we compact several variables,
$v_{k}^{\pm}=\\{s_{k}^{\pm},f_{k}^{\pm},g_{k}^{\pm},m_{k}^{\pm},n_{k}^{\pm}\\}$.
It should be noted that in cases when the IF is time invariant, as in the
molecular electronics case discussed below, one needs to evaluate
$I_{B}^{(N_{s})}$ and $I_{F}^{(N_{s})}$ only once, then use the saved array to
time-evolve the auxiliary density matrix.
Figure 2: Molecular electronic rectifier setup. A biased donor-acceptor
electronic junction is coupled to an anharmonic mode, represented by the two-
state system with vibrational levels $|0\rangle$ and $|1\rangle$. This
molecular vibrational mode may further relax its energy to a phononic thermal
reservoir. This process is represented by a dashed arrow. Direct electron
tunneling element between D and A is depicted by a dotted double arrow. Top:
$\Delta\mu>0$. In our construction both molecular electronic levels are placed
within the bias window at large positive bias, resulting in a large (resonant)
current. Bottom: At negative bias the energy of A is placed outside the bias
window, thus the total charge current is small.
## IV Application: Molecular Rectifier
The functionality and stability of electron-conducting molecular junctions are
directly linked to heating and cooling effects experienced by molecular
vibrational modes in biased situations Galperin _et al._ (2007a); Yu. _et
al._ (2004); Djukic _et al._ (2005); Kumar _et al._ (2012); Pasupathy _et
al._ (2005); Ioffe _et al._ (2008); Ward _et al._ (2011); Huang _et al._
(2007). In particular, junction heating and breakdown may occur once the bias
voltage exceeds typical molecular vibrational frequencies, when the electronic
levels are situated within the bias window, if energy dissipation from the
molecule to its environment is not efficient.
In this section we study the dynamics and steady-state behavior of electrons
and a specific vibrational mode in a molecular conducting junction serving as
an electrical rectifier. The rectifier Hamiltonian is detailed in Sec. IV.1.
In Sec. IV.2 we show that this model can be mapped onto the spin-boson-fermion
Hamiltonian (11). This allows us to employ the path-integral technique of Sec.
III for simulating the rectifier dynamics. The rectification mechanism is
explained in Sec. IV.3. Relevant expressions of a (perturbative) Master
equation method are described in Sec. IV.4, to be compared to our path-
integral based results in Sec. IV.5. Convergence issues and computational
aspects are discussed in Sec. IV.6.
### IV.1 Rectifier Hamiltonian
The D-A rectifier model includes a biased molecular electronic junction and a
selected (generally anharmonic) internal vibrational mode which is coupled to
an electronic transition in the junction and to a secondary phonon bath,
representing other molecular and environmental degrees of freedom. In the
present study we model the anharmonic mode by a two-state system, and this
model can already capture the essence of the vibrational instability effect
Simine and Segal (2012). For a schematic representation, see Fig. 2. This
model allows us to investigate the exchange of electronic energy with
molecular vibrational heating, and the competition between elastic and
inelastic transport mechanisms. A close variant of it has been adopted in Refs.
Entin-Wohlman _et al._ (2010); Jiang _et al._ (2012); Entin-Wohlman and
Aharony (2012) for studying the thermopower and thermal transport of electrons
in molecular junctions with electron-phonon interactions, within the linear
response regime.
We assume that the D molecular group is strongly attached to the neighboring
$L$ metal surface, and that this unit is overall characterized by the chemical
potential $\mu_{L}$. Similarly, the A group is connected to the metal $R$,
characterized by $\mu_{R}$. At time $t=0$ the D and A states are put into
contact. Experimentally, the $R$ metal may stand for an STM tip decorated by a
molecular group. This tip is approaching the D site which is attached to the
metal surface $L$. Once the D and A molecular groups are put into contact,
electrons can flow across the junction in two parallel pathways: (i) through a
direct D-A tunneling mechanism, and (ii) inelastically, assisted by a
vibration: excess electron energy goes to excite the D-A vibrational motion,
and vice versa.
The rectifier (rec) Hamiltonian includes the electronic Hamiltonian $H_{el}$
with decoupled D and A states, the vibrational, two-state subsystem $H_{vib}$,
electronic-vibrational coupling $H_{I}$, a free phonon Hamiltonian $H_{ph}$,
and the coupling of this secondary phonon bath to the selected vibration,
$\displaystyle\bar{H}_{rec}=H_{el}+H_{vib}+H_{I}+H_{ph}+H_{vib-ph}.$ (35)
The electronic (fermionic) contribution $H_{el}$ accounts for all fermionic
terms besides the direct D and A tunneling term, which for convenience is
included in $H_{I}$,
$\displaystyle H_{el}$ $\displaystyle=$ $\displaystyle
H_{M}+H_{L}^{0}+H_{R}^{0}+H_{C}$ $\displaystyle H_{M}$ $\displaystyle=$
$\displaystyle\epsilon_{d}c_{d}^{\dagger}c_{d}+\epsilon_{a}c_{a}^{\dagger}c_{a}$
$\displaystyle H_{L}^{0}$ $\displaystyle=$ $\displaystyle\sum_{l\in
L}\epsilon_{l}c_{l}^{\dagger}c_{l};\,\,\,\,\,\,\,H_{R}^{0}=\sum_{r\in
R}\epsilon_{r}c_{r}^{\dagger}c_{r}.$ $\displaystyle H_{C}$ $\displaystyle=$
$\displaystyle\sum_{l}v_{l}\left(c_{l}^{\dagger}c_{d}+c_{d}^{\dagger}c_{l}\right)+\sum_{r}v_{r}\left(c_{r}^{\dagger}c_{a}+c_{a}^{\dagger}c_{r}\right).$
(36)
$H_{M}$ stands for the molecular electronic part including two electronic
states, a donor D and an acceptor A. $c_{d/a}^{\dagger}$ ($c_{d/a}$) is a
fermionic creation (annihilation) operator of an electron on the D or A sites,
of energies $\epsilon_{d,a}$. The two metals, $H_{\nu}^{0}$, $\nu=L,R$, are
each composed of a collection of noninteracting electrons. The hybridization
of the D state to the left ($L$) bath, and similarly, the coupling of the A
site to the right ($R$) metal, are described by $H_{C}$. The vibrational
Hamiltonian includes a special nuclear anharmonic vibrational mode of
frequency $\omega_{0}$,
$\displaystyle H_{vib}=\frac{\omega_{0}}{2}\sigma_{z}.$ (37)
The displacement of this mode from equilibrium is coupled to an electron
transition in the system, with an energy cost $\kappa$, resulting in heating
and/or cooling effects,
$\displaystyle
H_{I}=\left(\kappa\sigma_{x}+v_{da}\right)\left(c_{d}^{\dagger}c_{a}+c_{a}^{\dagger}c_{d}\right).$
(38)
Besides the electron-vibration coupling term, $H_{I}$ further includes a
direct electron tunneling element between the D and the A states, of strength
$v_{da}$. Electron transfer between the two metals can therefore proceed
through two mechanisms: coherent tunneling and vibrational-assisted inelastic
transport.
The selected vibrational mode may couple to many other phonons, either
internal to the molecules or external, grouped into a harmonic reservoir,
$\displaystyle H_{ph}$ $\displaystyle=$
$\displaystyle\sum_{p}\omega_{p}b_{p}^{\dagger}b_{p}$ $\displaystyle H_{vib-
ph}$ $\displaystyle=$
$\displaystyle\sigma_{x}\sum_{p}\xi_{p}^{B}\left(b_{p}^{\dagger}+b_{p}\right)$
(39)
The Hamiltonian $H_{vib-ph}$ corresponds to a displacement-displacement
interaction type.
The motivation behind the choice of the two-level system (TLS) mode is
twofold. First, as we showed in Ref. Simine and Segal (2012), the development
of vibrational instability in the D-A rectifier does not depend on the mode
harmonicity, at least in the weak electron-phonon coupling limit. Since it is
easier to simulate a truncated mode with our approach, rather than a harmonic
mode, we settle on the TLS model. Second, while there are many studies where a
perfectly harmonic mode is assumed, for example, see Refs. Muhlbacher and
Rabani (2008); Wang _et al._ (2011); Hutzen _et al._ (2012), to the best of
our knowledge our work is the first to explore electron conduction in the
limit of strong vibrational anharmonicity.
### IV.2 Mapping to the spin-boson-fermion model
We diagonalize the electronic part of the Hamiltonian $H_{el}$ to acquire,
separately, the exact eigenstates for the $L$-half and $R$-half ends of
$H_{el}$,
$\displaystyle H_{el}$ $\displaystyle=$ $\displaystyle H_{L}+H_{R}$
$\displaystyle H_{L}$ $\displaystyle=$
$\displaystyle\sum_{l}\epsilon_{l}a_{l}^{\dagger}a_{l},\,\,\,H_{R}=\sum_{r}\epsilon_{r}a_{r}^{\dagger}a_{r}.$
(40)
Assuming that the reservoirs are dense, their new operators are assigned
energies that are the same as those before diagonalization. The D and A (new)
energies are assumed to be placed within a band of continuous states,
excluding the existence of bound states. The old operators are related to the
new ones by Mahan (2000)
$\displaystyle c_{d}$ $\displaystyle=$
$\displaystyle\sum_{l}\lambda_{l}a_{l},\,\,\,\,\,\,\
c_{l}=\sum_{l^{\prime}}\eta_{l,l^{\prime}}a_{l^{\prime}}$ $\displaystyle
c_{a}$ $\displaystyle=$ $\displaystyle\sum_{r}\lambda_{r}a_{r},\,\,\,\,\,\,\
c_{r}=\sum_{r^{\prime}}\eta_{r,r^{\prime}}a_{r^{\prime}},$ (41)
where the coefficients, e.g., for the $L$ set, are given by
$\displaystyle\lambda_{l}$ $\displaystyle=$
$\displaystyle\frac{v_{l}}{\epsilon_{l}-\epsilon_{d}-\sum_{l^{\prime}}\frac{v_{l^{\prime}}^{2}}{\epsilon_{l}-\epsilon_{l^{\prime}}+i\delta}}$
$\displaystyle\eta_{l,l^{\prime}}$ $\displaystyle=$
$\displaystyle\delta_{l,l^{\prime}}-\frac{v_{l}\lambda_{l^{\prime}}}{\epsilon_{l}-\epsilon_{l^{\prime}}+i\delta}.$
(42)
Similar expressions hold for the $R$ set. It is easy to derive the following
relation,
$\displaystyle\sum_{l^{\prime}}\frac{v_{l^{\prime}}^{2}}{\epsilon_{l}-\epsilon_{l^{\prime}}+i\delta}=PP\sum_{l^{\prime}}\frac{v_{l^{\prime}}^{2}}{\epsilon_{l}-\epsilon_{l^{\prime}}}-i\Gamma_{L}(\epsilon_{l})/2,$
(43)
with the hybridization strength ($v_{j}$ is assumed real),
$\displaystyle\Gamma_{L}(\epsilon)=2\pi\sum_{l}v_{l}^{2}\delta(\epsilon-\epsilon_{l}).$
(44)
With the new operators, the Hamiltonian (35) can be rewritten as
$\displaystyle\bar{H}_{rec}$ $\displaystyle=$
$\displaystyle\sum_{l}\epsilon_{l}a_{l}^{\dagger}a_{l}+\sum_{r}\epsilon_{r}a_{r}^{\dagger}a_{r}+\frac{\omega_{0}}{2}\sigma_{z}$
(45) $\displaystyle+$
$\displaystyle\left(\kappa\sigma_{x}+v_{da}\right)\sum_{l,r}\left[\lambda_{l}^{*}\lambda_{r}a_{l}^{\dagger}a_{r}+\lambda_{r}^{*}\lambda_{l}a_{r}^{\dagger}a_{l}\right]$
$\displaystyle+$
$\displaystyle\sum_{p}\omega_{p}b_{p}^{\dagger}b_{p}+\sigma_{x}\sum_{p}\xi_{p}^{B}\left(b_{p}^{\dagger}+b_{p}\right).$
This Hamiltonian can be transformed into the spin-boson-fermion model of zero
energy spacing, using the unitary transformation
$\displaystyle
U^{\dagger}\sigma_{z}U=\sigma_{x},\,\,\,\,\,U^{\dagger}\sigma_{x}U=\sigma_{z},$
(46)
with $U=\frac{1}{\sqrt{2}}(\sigma_{x}+\sigma_{z})$. The transformed
Hamiltonian $H_{rec}=U^{\dagger}\bar{H}_{rec}U$ includes a $\sigma_{z}$-type
electron-vibration coupling,
$\displaystyle H_{rec}$ $\displaystyle=$
$\displaystyle\sum_{l}\epsilon_{l}a_{l}^{\dagger}a_{l}+\sum_{r}\epsilon_{r}a_{r}^{\dagger}a_{r}+\frac{\omega_{0}}{2}\sigma_{x}$
(47) $\displaystyle+$
$\displaystyle\left(\kappa\sigma_{z}+v_{da}\right)\sum_{l,r}\left[\lambda_{l}^{*}\lambda_{r}a_{l}^{\dagger}a_{r}+\lambda_{r}^{*}\lambda_{l}a_{r}^{\dagger}a_{l}\right]$
$\displaystyle+$
$\displaystyle\sum_{p}\omega_{p}b_{p}^{\dagger}b_{p}+\sigma_{z}\sum_{p}\xi_{p}^{B}\left(b_{p}^{\dagger}+b_{p}\right).$
It describes a spin (TLS) coupled diagonally to two fermionic environments and
to a single boson bath. One can immediately confirm that this Hamiltonian is
accounted for by Eq. (11). To simplify our notation, we further identify the
electronic-vibration effective coupling parameter
$\displaystyle\xi_{l,r}^{F}=\kappa\lambda_{l}^{*}\lambda_{r}.$ (48)
For later use we also define the spectral function of the secondary phonon
bath as
$\displaystyle
J_{ph}(\omega)=\pi\sum_{p}(\xi^{B}_{p})^{2}\delta(\omega-\omega_{p}).$ (49)
In our simulations below we adopt an ohmic function,
$\displaystyle J_{ph}(\omega)=\frac{\pi K_{d}}{2}\omega
e^{-\omega/\omega_{c}},$ (50)
with the dimensionless Kondo parameter $K_{d}$, characterizing subsystem-bath
coupling, and the cutoff frequency $\omega_{c}$.
As an initial condition for the reservoirs, we assume canonical distributions
with the boson-phonon bath distribution following
$\rho_{B}=e^{-\beta_{ph}H_{ph}}/{\rm Tr}_{B}[e^{-\beta_{ph}H_{ph}}]$ and the
electronic-fermionic initial density matrix obeying
$\rho_{F}=\rho_{L}\otimes\rho_{R}$, with
$\rho_{\nu}=e^{-\beta_{\nu}(H_{\nu}-\mu_{\nu}N_{\nu})}/{\rm
Tr}_{F}[e^{-\beta_{\nu}(H_{\nu}-\mu_{\nu}N_{\nu})}]$, $\nu=L,R$. This results
in the expectation values of the exact eigenstates,
$\displaystyle\langle
a_{l}^{\dagger}a_{l^{\prime}}\rangle=\delta_{l,l^{\prime}}f_{L}(\epsilon_{l}),\,\,\,\,\,\langle
a_{r}^{\dagger}a_{r^{\prime}}\rangle=\delta_{r,r^{\prime}}f_{R}(\epsilon_{r}),$
(51)
where $f_{L}(\epsilon)=[\exp(\beta_{L}(\epsilon-\mu_{L}))+1]^{-1}$ denotes the
Fermi distribution function. An analogous expression holds for
$f_{R}(\epsilon)$. The reservoirs’ temperatures are denoted by $1/\beta_{\nu}$;
the chemical potentials are $\mu_{\nu}$.
Figure 3: Left panel: Energy of the donor (full line) and acceptor states
(dashed line). The dotted lines correspond to the chemical potentials at the
left and right sides. Right panel: Damping rate $K_{vib}$. The junction’s
parameters are $\Gamma_{\nu}=1$, $\beta_{\nu}=200$, $\kappa=0.1$,
$\omega_{0}=0.2$, and $\epsilon_{d}(\Delta\mu=0)=-0.2$,
$\epsilon_{a}(\Delta\mu=0)=0.4$. We used fermionic metals with a linear
dispersion relation for the original $H_{\nu}^{0}$ baths and sharp cutoffs at
$\pm 1$. All energy parameters are given in units of eV.
Figure 4: Scheme of the vibrational mode excitation and relaxation processes.
A full circle represents an electron transferred; a hollow circle depicts the
hole that has been left behind.
### IV.3 Rectifying mechanism
We now explain the operation principles of the molecular rectifier. In our
construction the application of a bias voltage linearly shifts the energies of
the molecular electronic levels, D and A. In equilibrium, we set
$\epsilon_{d}<0$ and $\epsilon_{a}>0$. Under positive bias, defined as
$\mu_{L}-\mu_{R}>0$, the energy of the donor level increases, and the acceptor
level drops down, see Fig. 2. When both levels are buried within the bias
window, the junction can support large currents. At negative bias the
electronic level A is positioned above the bias window, resulting in small
currents. For a scheme of the energy organization of the system, see Fig. 3
panel, left panel.
A generic mechanism leading to vibrational instabilities (and eventually
junction rupture) in D-A molecular rectifiers has been discussed in Ref. Lu
_et al._ (2011): At large positive bias, when the D state is positioned above
the acceptor level, electron-hole pair excitations by the molecular vibration
(TLS) dominate the mode dynamics. This can be schematically seen in Fig. 4.
The second-order perturbation theory rate constant, to excite the vibrational
mode, while transferring an electron from $L$ to $R$, $k_{0\rightarrow
1}^{L\rightarrow R}$, overcomes other rates once the density of states at the
left end is positioned above the density of states at the right side. This is
the case at large positive bias, given our construction. The rate
$k_{0\rightarrow 1}^{L\rightarrow R}$ is defined next, in Sec. IV.4.
### IV.4 Master equation ($v_{da}=0$)
In the limit of weak electron-vibration coupling, once the direct tunneling
term is neglected, $v_{da}$=0, it can be shown that the population of the
truncated vibrational mode satisfies a kinetic equation Simine and Segal
(2012),
$\displaystyle\dot{p}_{1}=-\left(k_{1\rightarrow 0}^{e}+k_{1\rightarrow
0}^{b}\right)p_{1}+\left(k_{0\rightarrow 1}^{e}+k_{0\rightarrow
1}^{b}\right)p_{0},$ $\displaystyle p_{0}+p_{1}=1.$ (52)
The excitation ($k_{0\rightarrow 1}$) and relaxation ($k_{1\rightarrow 0}$)
rate constants are given by a Fourier transform of bath correlation functions
of the operators $F_{e}$ and $F_{b}$, defined as
$\displaystyle F_{e}$ $\displaystyle=$
$\displaystyle\sum_{l,r}(\xi_{l,r}^{F}a_{l}^{\dagger}a_{r}+\xi_{r,l}^{F}a_{r}^{\dagger}a_{l}),$
$\displaystyle F_{b}$ $\displaystyle=$
$\displaystyle\sum_{p}\xi_{p}^{B}(b_{p}^{\dagger}+b_{p}),$ (53)
to yield
$\displaystyle k_{s\rightarrow s^{\prime}}^{e}$ $\displaystyle=$
$\displaystyle\int_{-\infty}^{\infty}e^{i(\epsilon_{s}-\epsilon_{s^{\prime}})\tau}{\rm
Tr}_{F}\left[\rho_{F}F_{e}(\tau)F_{e}(0)\right]d\tau$ $\displaystyle
k_{s\rightarrow s^{\prime}}^{b}$ $\displaystyle=$
$\displaystyle\int_{-\infty}^{\infty}e^{i(\epsilon_{s}-\epsilon_{s^{\prime}})\tau}{\rm
Tr}_{B}\left[\rho_{B}F_{b}(\tau)F_{b}(0)\right]d\tau.$ (54)
Here $s=0,1$ and $\epsilon_{1}-\epsilon_{0}=\omega_{0}$. The operators are
given in the interaction representation, e.g.,
$a_{l}^{\dagger}(t)=e^{iH_{L}t}a_{l}^{\dagger}e^{-iH_{L}t}$.
Phonon-bath induced rates. Expression (54) can be simplified, and the
contribution of the phonon bath to the vibrational rates reduces to
$\displaystyle k_{1\rightarrow 0}^{b}$ $\displaystyle=$
$\displaystyle\Gamma_{ph}(\omega_{0})[f_{B}(\omega_{0})+1],$ $\displaystyle
k_{0\rightarrow 1}^{b}$ $\displaystyle=$ $\displaystyle k_{1\rightarrow
0}^{b}e^{-\omega_{0}\beta_{ph}},$ (55)
where $f_{B}(\omega)=[e^{\beta_{ph}\omega}-1]^{-1}$ denotes the Bose-Einstein
distribution function. The damping rate is defined as
$\Gamma_{ph}(\omega)=2J_{ph}(\omega)$,
$\displaystyle\Gamma_{ph}(\omega)=2\pi\sum_{p}(\xi_{p}^{B})^{2}\delta(\omega_{p}-\omega).$
(56)
For brevity, we suppress the explicit frequency argument below.
Electronic-baths induced rates. The electronic rate constants (54) include the
following contributions Simine and Segal (2012),
$\displaystyle k_{1\rightarrow 0}^{e}=k_{1\rightarrow 0}^{L\rightarrow
R}+k_{1\rightarrow 0}^{R\rightarrow L};\,\,\,\ k_{0\rightarrow
1}^{e}=k_{0\rightarrow 1}^{L\rightarrow R}+k_{0\rightarrow 1}^{R\rightarrow
L},$ (57)
satisfying
$\displaystyle k_{1\rightarrow 0}^{L\rightarrow R}$ $\displaystyle=$
$\displaystyle
2\pi\kappa^{2}\sum_{l,r}|\lambda_{l}|^{2}|\lambda_{r}|^{2}f_{L}(\epsilon_{l})(1-f_{R}(\epsilon_{r}))\delta(\omega_{0}+\epsilon_{l}-\epsilon_{r})$
$\displaystyle k_{0\rightarrow 1}^{L\rightarrow R}$ $\displaystyle=$
$\displaystyle
2\pi\kappa^{2}\sum_{l,r}|\lambda_{l}|^{2}|\lambda_{r}|^{2}f_{L}(\epsilon_{l})(1-f_{R}(\epsilon_{r}))\delta(-\omega_{0}+\epsilon_{l}-\epsilon_{r}).$
(58)
Similar relations hold for the right-to-left going excitations. The energy in
the Fermi function $f_{\nu}(\epsilon)$ is measured with respect to the
(equilibrium) Fermi energy, placed at $(\mu_{L}+\mu_{R})/2$, and we assume that
the bias is applied symmetrically, $\mu_{L}=-\mu_{R}$. The rates can be
expressed in terms of the fermionic $\nu=L,R$ spectral density functions
$\displaystyle J_{\nu}(\epsilon)$ $\displaystyle=$ $\displaystyle
2\pi\kappa\sum_{j\in\nu}|\lambda_{j}|^{2}\delta(\epsilon_{j}-\epsilon).$ (59)
Using Eq. (42) we resolve this as a Lorentzian function, centered around
either the D or the A level,
$\displaystyle J_{L}(\epsilon)$ $\displaystyle=$
$\displaystyle\kappa\frac{\Gamma_{L}(\epsilon)}{(\epsilon-\epsilon_{d})^{2}+\Gamma_{L}(\epsilon)^{2}/4}$
$\displaystyle J_{R}(\epsilon)$ $\displaystyle=$
$\displaystyle\kappa\frac{\Gamma_{R}(\epsilon)}{(\epsilon-\epsilon_{a})^{2}+\Gamma_{R}(\epsilon)^{2}/4}.$
(60)
The electronic hybridization $\Gamma_{\nu}(\epsilon)$ is given in Eq. (44).
Using these definitions, we express the electronic rates [Eq. (58)] by
integrals ($s,s^{\prime}$=0,1)
$\displaystyle k_{s\rightarrow s^{\prime}}^{\nu\rightarrow\nu^{\prime}}$
$\displaystyle=$
$\displaystyle\frac{1}{2\pi}\int_{-\infty}^{\infty}f_{\nu}(\epsilon)\left[1-f_{\nu^{\prime}}(\epsilon+(s-s^{\prime})\omega_{0})\right]J_{\nu}(\epsilon)J_{\nu^{\prime}}(\epsilon+(s-s^{\prime})\omega_{0})d\epsilon.$
(61)
Observables. Within this simple kinetic approach, junction stability can be
assessed by monitoring the TLS population in the steady-state limit:
population inversion signals vibrational instability Simine and Segal
(2012). Solving Eq. (52) in the long time limit we find that
$\displaystyle p_{1}=\frac{k_{0\rightarrow 1}^{e}+k_{0\rightarrow
1}^{b}}{k_{0\rightarrow 1}^{e}+k_{0\rightarrow 1}^{b}+k_{1\rightarrow
0}^{e}+k_{1\rightarrow 0}^{b}},\,\,\,\,\,p_{0}=1-p_{1}.$ (62)
A related measure is the damping rate $K_{vib}$ Lu _et al._ (2011), depicted
in Fig. 3, right panel. It is defined as the difference between relaxation and
excitation rates,
$\displaystyle K_{vib}\equiv k_{1\rightarrow 0}^{e}+k_{1\rightarrow
0}^{b}-\left(k_{0\rightarrow 1}^{e}+k_{0\rightarrow 1}^{b}\right).$ (63)
Positive $K_{vib}$ indicates a “normal” thermal-like behavior, when
relaxation processes overcome excitations. In this case, the junction remains
stable in the sense that the population of the ground state is larger than the
population of the excited level. A negative value for $K_{vib}$ signals
uncontrolled heating of the molecular mode, eventually leading
to vibrational instability and junction breakdown.
In the steady-state limit, the charge current $j$, flowing from $L$ to $R$, is
given by Simine and Segal (2012)
$\displaystyle j=p_{1}\left(k_{1\rightarrow 0}^{L\rightarrow
R}-k_{1\rightarrow 0}^{R\rightarrow L}\right)+p_{0}\left(k_{0\rightarrow
1}^{L\rightarrow R}-k_{0\rightarrow 1}^{R\rightarrow L}\right).$ (64)
This relation holds even when the TLS is coupled to an additional boson bath.
Note that in the long time limit the current that is evaluated at the left end
$j_{L}$ is equal to $j_{R}$. Therefore, we simply denote the current by $j$ in
that limit.
Master equation calculations proceed as follows. We set the hybridization
energy $\Gamma_{\nu}$ as an energy independent parameter, and evaluate the
fermionic spectral functions $J_{\nu}(\epsilon)$ of Eq. (60). With this at
hand, we integrate (numerically) Eq. (61), and obtain the fermionic-bath induced
rates. The phonon bath-induced rates (55) are reached by setting the
parameters of the spectral function $J_{ph}$, to directly obtain
$\Gamma_{ph}$, see Eq. (56). Using this set of parameters, we evaluate the
levels occupation and the charge current directly in the steady-state limit.
We can also time evolve the set of differential equations (52), to obtain the
trajectory $p_{1,0}(t)$.
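A compact sketch of this recipe (assuming $v_{da}=0$, no phonon bath, a symmetric bias, and molecular levels that shift linearly with the bias; the numerical values follow Fig. 3, while the energy grid and the precise level-shift rule are assumptions of this illustration) evaluates Eqs. (60)-(64) by direct numerical integration:

```python
import numpy as np

Gamma, kappa, omega0, beta = 1.0, 0.1, 0.2, 200.0      # parameters of Fig. 3
eps_d0, eps_a0 = -0.2, 0.4                              # equilibrium D and A energies
eps = np.linspace(-4.0, 4.0, 8001)                      # energy grid for Eq. (61)

def fermi(e, mu):
    return 1.0 / (np.exp(np.clip(beta * (e - mu), -500, 500)) + 1.0)

def lorentz(e, e0):
    """Fermionic spectral function of Eq. (60) with constant Gamma."""
    return kappa * Gamma / ((e - e0) ** 2 + Gamma ** 2 / 4)

def rate(mu_from, J_from, mu_to, J_to, de):
    """k = (1/2pi) int f(e) [1 - f(e+de)] J_from(e) J_to(e+de) de, Eq. (61)."""
    integrand = (fermi(eps, mu_from) * (1.0 - fermi(eps + de, mu_to))
                 * J_from * np.interp(eps + de, eps, J_to))
    return np.trapz(integrand, eps) / (2 * np.pi)

for dmu in (0.4, 0.8, 1.2):
    muL, muR = dmu / 2, -dmu / 2                        # symmetric bias
    ed, ea = eps_d0 + dmu / 2, eps_a0 - dmu / 2         # assumed linear level shift
    JL, JR = lorentz(eps, ed), lorentz(eps, ea)
    k10_LR = rate(muL, JL, muR, JR, +omega0)            # relaxation, electron L -> R
    k01_LR = rate(muL, JL, muR, JR, -omega0)            # excitation, electron L -> R
    k10_RL = rate(muR, JR, muL, JL, +omega0)
    k01_RL = rate(muR, JR, muL, JL, -omega0)
    k10, k01 = k10_LR + k10_RL, k01_LR + k01_RL         # no phonon bath (K_d = 0)
    p1 = k01 / (k01 + k10)                              # Eq. (62)
    K_vib = k10 - k01                                   # Eq. (63)
    j = p1 * (k10_LR - k10_RL) + (1 - p1) * (k01_LR - k01_RL)   # Eq. (64)
    print(f"dmu = {dmu:3.1f}:  p1 = {p1:.3f}   K_vib = {K_vib:+.4f}   j = {j:.4f}")
```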
Figure 5: Absolute value of the quantity $\pi\rho\xi_{l,r}^{F}$. The figure
was generated by discretizing the reservoirs, using bands extending from $-D$
to $D$, $D=1$, with $N_{L}=200$ states per band, a linear dispersion
relation, and a constant density of states $\rho=N_{L}/2D$ for the
$H_{L,R}^{0}$ reservoirs. Electron-vibration coupling
is given by $\kappa=0.1$.
### IV.5 Results
We simulate the dynamics of the subsystem in the spin-boson-fermion
Hamiltonian (47) using the path-integral approach of Sec. III. In order to
retrieve the vibrational mode occupation in the original basis in which Eq.
(45) is written, we rotate the reduced density matrix $\rho_{S}(t)$ back to
the original basis by applying the transformation
$U=\frac{1}{\sqrt{2}}(\sigma_{x}+\sigma_{z})$,
$\displaystyle\bar{\rho}_{S}(t)=U\rho_{S}(t)U.$ (65)
The diagonal elements of $\bar{\rho}_{S}(t)$ correspond to the occupations of
the vibrational mode in the ground state $|0\rangle$ and the excited state
$|1\rangle$,
$\displaystyle p_{0}(t)=\langle
0|\bar{\rho}_{S}(t)|0\rangle\,\,\,\,\,\,p_{1}(t)=\langle
1|\bar{\rho}_{S}(t)|1\rangle.$ (66)
As an initial condition we usually take
$\rho_{S}(0)=\frac{1}{2}(-\sigma_{x}+\hat{I}_{s})$, where $\hat{I}_{s}$ is the
$2\times 2$ unit matrix. Under this choice, $\bar{\rho}_{S}(0)$ has only its
ground state populated.
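As a small self-contained illustration (our own sketch, not the authors' implementation; the ordering of the basis states, i.e., which diagonal entry corresponds to $|0\rangle$, is an assumption here), the rotation of Eq. (65) and the population read-out of Eq. (66) take the form:

```python
import numpy as np

sx = np.array([[0.0, 1.0], [1.0, 0.0]])
sz = np.array([[1.0, 0.0], [0.0, -1.0]])
I2 = np.eye(2)

U = (sx + sz) / np.sqrt(2)            # Eq. (65); note that U is its own inverse

rho_S = 0.5 * (-sx + I2)              # initial condition quoted in the text
rho_bar = U @ rho_S @ U               # rotate back to the original (vibrational) basis

# We assign the basis indices so that rho_bar(0) is fully in the ground state,
# consistent with the text; this index convention is our assumption.
p0, p1 = rho_bar[1, 1], rho_bar[0, 0]   # Eq. (66)
print(p0, p1)                           # -> 1.0 0.0
```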
Our simulations are performed with the following setup, displayed in the left
panel of Fig. 3: In the absence of a bias voltage we assign the donor the
energy $\epsilon_{d}=-0.2$ and the acceptor the value $\epsilon_{a}=0.4$.
These molecular electronic states are assumed to linearly follow the bias
voltage. The right panel in Fig. 3 depicts the damping rate $K_{vib}$ in the
absence of coupling to the phonon bath, as evaluated using the Master equation
method. This measure becomes negative beyond $\Delta\mu\sim 0.85$, which
corresponds to the situation where the (bias shifted) donor energy exceeds the
acceptor by $\omega_{0}$, $\epsilon_{d}-\epsilon_{a}\gtrsim\omega_{0}$;
$\omega_{0}=0.2$. This results in a significant conversion of electronic
energy into heat, driving the junction towards instability.
#### IV.5.1 Isolated mode
We study the time evolution of the vibrational mode occupation using
$v_{da}=0$ (unless otherwise stated), further decoupling it from a secondary
phonon bath, $K_{d}$=0.
Electron-vibration interaction energy. The interaction energy of the subsystem
(TLS) with the electronic degrees of freedom is encapsulated in the matrix
elements $\xi_{l,r}^{F}\equiv\kappa\lambda_{l}^{*}\lambda_{r}$, see Eq. (48).
The strength of this interaction is measured by the dimensionless parameter
$\pi\rho(\epsilon_{F})\xi_{l,r}^{F}$, which connects to the phase shift
experienced by Fermi sea electrons due to a scattering potential, introduced
here by the vibrational mode Roulet _et al._ (1969). Here,
$\rho(\epsilon_{F})$ stands for the density of states at the Fermi energy.
Using the parameters of Fig. 3, taking $\kappa=0.1$, we show the absolute
value of these matrix elements in Fig. 5. The contour plot is mostly limited
to values smaller than 0.1, so we conclude that this set of parameters
corresponds to the weak coupling limit Roulet _et al._ (1969). In this limit,
path-integral simulations should agree with Master equation calculations, as
we indeed confirm below. Deviations should be expected at larger values,
$\kappa\gtrsim 0.2$, and we study these cases below.
Units. We perform the simulations in arbitrary units with $\hbar\equiv 1$. One
can scale all energies with respect to the molecule-metal hybridization
$\Gamma_{\nu}$. With $\Gamma_{\nu}=1$, the weak coupling limit covers
$\kappa/\Gamma_{\nu}\lesssim 0.2$. To present results in physical units, we
assume that all energy parameters are given in eV, and scale the time unit and
currents correspondingly.
Figure 6: Population dynamics and convergence behavior of the truncated and
isolated vibrational mode (TLS) with increasing $N_{s}$. (a)-(b) Stable
behavior at $\mu_{L}=-\mu_{R}=0.2$. (c)-(d) Population inversion at
$\mu_{L}=-\mu_{R}=0.6$. Other parameters are the same as in Fig. 3. In all
figures $\delta t$=1, $N_{s}=3$ (heavy dotted), $N_{s}=4$ (heavy dashed),
$N_{s}=5$ (dashed-dotted), $N_{s}=6$ (dotted), $N_{s}=7$ (dashed) and
$N_{s}=8$ (full). We used $L_{s}=30$ electronic states at each fermionic bath
with sharp cutoffs at $\pm 1$.
Figure 7: Independence of the population $p_{0}$ on the initial state for
different biases, $\Delta\mu=$ 0.4, 0.8, 1.2 top to bottom. Other parameters
are the same as in Fig. 3 and Fig. 6.
Figure 8: Population dynamics, $p_{0}(t)$. (a) Comparison between exact
simulations (dashed) and Master equation results (dashed-dotted) at
$\kappa=0.2$. (b) Deviations between exact results and Master equations for
$\kappa=0.1$ (dot and $\circ$) and for $\kappa=0.2$ ($+$ and $\times$). Other
parameters are the same as in Fig. 3.
Dynamics. We first focus on two representative values for the bias voltage: In
the low-positive bias limit a stable operation is expected, reflected by a
normal population, $p_{0}>p_{1}$. At large positive bias population inversion
may take place, indicating the onset of instability and potential junction
rupture Simine and Segal (2012).
Fig. 6 displays the TLS dynamics, and we present data for different memory
sizes $N_{s}\delta t$. At small positive bias,
$\epsilon_{d}-\epsilon_{a}<\omega_{0}$, the mode occupation is “normal”,
$p_{0}>p_{1}$. In particular, in panels (a)-(b) we examine the case
$\mu_{L}=-\mu_{R}=0.2$, resulting in the (shifted) electronic energies
$\epsilon_{d}=0$ and $\epsilon_{a}=0.2$. In this case the (converged)
asymptotic long-time populations (representing steady-state values) are
$p_{0}^{ss}=0.76$ and $p_{1}^{ss}=0.24$. In contrast, when the bias is large,
$\mu_{L}=-\mu_{R}=0.6$, the electronic levels are shifted to
$\epsilon_{d}=0.4$ and $\epsilon_{a}=-0.2$, and electrons crossing the
junction discard their excess energy into the vibrational mode. Indeed, we see
in Fig. 6(c)-(d) the process of population inversion, $p_{0}^{ss}=0.43$ and
$p_{1}^{ss}=0.57$. The TLS approaches the steady-state value around
$t_{ss}\sim 0.1$ ps. Regarding convergence behavior, we note that at large
bias convergence is reached with a shorter memory size, compared to the small
bias case, as expected Segal _et al._ (2010).
Fig. 7 exhibits the dynamics with different initial conditions, demonstrating
that the steady-state value is identical, yet the timescale to reach the
stationary limit may depend on the initial state.
We compare the exact dynamics to the Master equation time evolution, obtained
by solving Eq. (52). Panel (a) in Fig. 8 demonstrates excellent agreement for
$\kappa=0.2$, for both positive and negative biases. Below we show that at
this value the Master equation’s predictions for the charge current deviate
from the exact result. Panel (b) in Fig. 8 focuses on the departure of Master
equation data from the exact values. These deviations are small, but their
dynamics indicate the existence of high-order excitation and relaxation rates,
beyond the second-order rates of Sec. IV.4.
Steady-state characteristics. The full bias scan of the steady-state
population is displayed in Fig. 9, and we compare path-integral results with
Master equation calculations, revealing an excellent agreement in this weak
coupling limit ($\kappa=0.1$). The convergence behavior is presented in Fig.
10, and we plot the steady-state values as a function of memory size
($\tau_{c}$) for three different time steps, for representative biases. The
path-integral results converge well at intermediate-to-large positive biases,
$\Delta\mu\gtrsim 0.2$. We had difficulty converging our results in two
domains: (i) At small positive bias, $\Delta\mu<0.2$, a large memory size is
needed to reach full convergence; the decorrelation time approximately scales
with $1/\Delta\mu$. (ii) At large negative biases, $\Delta\mu<-0.4$, the
current is very small, as we show below. This implies poor convergence over
the range of $\tau_{c}$ employed. At these negative biases the data oscillate
with $\tau_{c}$, so at negative bias it is the value averaged over several
large $\tau_{c}$ that is plotted in Fig. 9.
Charge current. We show the current characteristics in Fig. 11, and confirm
that the junction acts as a charge rectifier. The insets display transient
data, affirming that at large bias steady-state is reached faster than in the
low bias case.
Strong coupling. Results at weak-to-strong couplings are shown in Fig. 12. The
value of the current, as obtained from Master equation calculations, scales
with $\kappa^{2}$. In contrast, exact simulations indicate that the current
grows more slowly with $\kappa$, and it displays clear deviations (up to
$50\%$) from the perturbative Master equation result at $\kappa=0.3$.
Interestingly, the vibrational occupation (inset) shows little sensitivity to
the coupling strength, and even at $\kappa=0.3$ the Master equation technique
provides an excellent estimate of the level occupations. This can be
rationalized by noting that the excited-level occupation is given by the ratio
of the excitation rates to the sum of excitation and relaxation rates. Such a
ratio is (apparently) only weakly sensitive to the value of $\kappa$ itself,
even when high-order processes do contribute to the current.
Figure 9: Converged data for the population of the isolated vibrational mode
in the steady-state limit with $\kappa=0.1$. Other parameters are the same as
in Fig. 3. We display path-integral data for $p_{0}$ ($\circ$) and $p_{1}$
($\square$). Master equation results appear as dashed line for $p_{0}$ and
dashed-dotted line for $p_{1}$.
Figure 10: (a) Convergence behavior of the population $p_{0}$ in the steady-
state limit for $\kappa=0.1$. Other parameters are the same as in Fig. 3.
Plotted are the steady-state values using different time steps, $\delta t=0.8$
($\circ$), $\delta t=1.0$ ($\square$), and $\delta t=1.2$ ($\diamond$) at
different biases, as indicated at the right end. (b) Population mean and its
standard deviation, utilizing the last six points from panel (a). (c) Current
mean and its standard deviation, similarly attained from the data in panel
(a).
Figure 11: Charge current in the steady-state limit for $\kappa=0.1$,
$K_{d}=0$. Other parameters are the same as in Fig. 3. Path-integral data
($\circ$), Master equation results (dashed). The insets display transient
results at $\Delta\mu=1.0$ eV (top) and $\Delta\mu=-0.5$ eV (bottom).
Figure 12: Charge current and vibrational occupation in the steady-state limit
at different electron-vibration coupling. Path-integral data is marked by
symbols, $\kappa=0.1$ ($\circ$), $\kappa=0.2$ ($\diamond$) and $\kappa=0.3$
($\square$). The corresponding Master equation results appear as dashed lines.
Inset: The population behavior in the steady-state limit for the three cases
$\kappa=0.1$ ($\circ$), $\kappa=0.2$ ($\diamond$) and $\kappa=0.3$
($\square$), with empty symbols for $p_{0}$ and filled ones for $p_{1}$. Other
parameters are the same as in Fig. 3.
Direct tunneling vs. vibrational assisted transport. Until this point (and
beyond this subsection) we have taken $v_{da}=0$. We now evaluate the
contribution of different transport mechanisms by adding a direct D-A
tunneling term, $v_{da}\neq 0$, to our model Hamiltonian. Electrons can now
cross the junction either coherently or inelastically, by exciting/de-exciting
the vibrational mode. Fig. 13 demonstrates that when the vibration-assisted
transport energy $\kappa$ is identical in strength to the direct tunneling
element $v_{da}$, the overall current is enhanced by about a factor of two
compared to the case when only vibration-assisted processes are allowed. We
also note that the occupation of the vibrational mode is
barely affected by the opening of the new electron transmission route
(deviations are within the convergence error). While we compare IF data to
Master equation results when $v_{da}=0$, in the general case of a nonzero D-A
tunneling term perturbative methods are more involved, and techniques similar
to those developed for the AH model should be used Galperin _et al._ (2007a);
Mitra _et al._ (2004); Galperin _et al._ (2007b, 2009); Fransson and
Galperin (2010); Leijnse and Wegewijs (2008); Hartle and Thoss (2011a, b).
Figure 13: Study of the contribution of different transport mechanisms.
$v_{da}=0$ ($\circ$), with Master equation results noted by the dashed line,
and $v_{da}=0.1$ ($\square$). The main plot displays the charge current. The
inset presents the vibrational levels occupation, with empty symbols for
$p_{0}$ and filled symbols for $p_{1}$. Other parameters are the same as in
Fig. 3, particularly, the vibrational-electronic coupling is $\kappa=0.1$.
#### IV.5.2 Equilibration with a secondary phonon bath
We couple the isolated-truncated vibrational mode to a secondary phonon bath,
and follow the mode equilibration with this bath and the removal of the
vibrational instability effect, as we increase the vibrational mode-phonon
bath coupling. As an initial condition, the boson bath is assumed to be
thermal with an inverse temperature $\beta_{ph}$. This bath is characterized
by an ohmic spectral function (50) with the dimensionless Kondo parameter
$K_{d}$, characterizing subsystem-bath coupling, and the cutoff frequency
$\omega_{c}$.
Population behavior. We follow the mode dynamics to the steady-state limit
using the path-integral approach of Sec. III. The bosonic IF is given in the
appendix. We compare exact results with Master equation predictions, and Fig.
14 depicts our simulations. The following observations can be made: (i) The
vibrational instability effect is removed already for $K_{d}=0.01$, though
nonequilibrium effects are still largely visible in the mode occupation. (ii)
The vibrational mode is close to being equilibrated with the phonon bath once
$K_{d}\sim 0.1$. (iii) For the present range of parameters (large
$\omega_{c}$, weak subsystem-bath couplings), Master equation tools reproduce
the behavior of the vibrational mode.
Charge current. The effect of the secondary phonon bath on the charge current
characteristics is displayed in Fig. 15. There are two main effects related to
the presence of the phonon bath: The step structure about zero bias is
flattened when $K_{d}\sim 0.1$, and the current-voltage characteristics as a
whole is slightly enhanced at finite $K_{d}$, at large bias. Both of these
effects are excellently reproduced with the Master equation, and we conclude
that in this weak-coupling regime the presence of the phonon bath does not
affect the rectifying behavior of the junction. We have also verified (not
shown) that at stronger coupling, $\kappa=0.2$ (where Master equation fails),
the thermal bath similarly affects the current-voltage behavior.
An important observation is that the current itself does not reveal the state
of the vibrational mode, whether it is in a stable or an unstable
nonequilibrium state, and whether it is thermalized. The study of the current
characteristics alone ($j$ vs. $\Delta\mu$) is therefore insufficient to
determine junction stability. More detailed information can be gained from the
structure of the first derivative, $dj/d(\Delta\mu)$, the local density of
states, and the second derivative, $d^{2}j/d(\Delta\mu)^{2}$, which provide
spectral features Mii _et al._ (2003); Galperin _et al._ (2004a, b). In order
to examine these quantities, our simulations should be performed with many
more bath states, to eliminate possible spurious oscillations in the current
(of small amplitude) that may result from the finite discretization of the
Fermi baths.
Figure 14: Equilibration of the molecular vibrational mode with increasing
coupling to a secondary phonon bath. Path-integral results, (full symbols for
$p_{1}$, and empty symbols for $p_{0}$) with $K_{d}=0$ ($\circ$), $K_{d}=0.01$
($\diamond$), $K_{d}=0.1$ ($\square$), and, $K_{d}=0.1$, $\kappa=0$
($\triangleleft$). Unless otherwise specified, $\kappa=0.1$, $\beta_{ph}=5$
and the spectral function follows (50) with $\omega_{c}$=15. All other
electronic parameters are the same as in Fig. 3. Master equation results
appear in dotted lines.
Figure 15: Charge current for an isolated mode, $K_{d}=0$ ($\circ$), and an
equilibrated mode, $K_{d}=0.1$, $\beta_{ph}$=5, $\omega_{c}$=15 ($\square$).
Other electronic parameters are given in Fig. 3. Master equation results
appear in dashed-dotted lines.
### IV.6 Convergence and Computational aspects
Convergence of the path-integral method should be verified with respect to
three numerical parameters: the number of states used to mimic a Fermi sea,
$L_{s}$, the time step adopted, $\delta t$, and the memory time accounted for,
$\tau_{c}$. (i) Fermi sea discretization. We have found that excellent
convergence is achieved for relatively “small” Fermi reservoirs, including
$L_{s}>20$ states for each reservoir. In our simulations we practically
adopted $L_{s}=30$ for each Fermi bath. (ii) Time-step discretization. The
first criterion in selecting the value of the time step $\delta t$ is that
dynamical features of the isolated vibrational mode should be observed. Using
$\omega_{0}=0.2$, the period of the bath-free Rabi oscillation is
$2\pi/\omega_{0}\sim 30$, thus a time step of $\delta t\sim 1$ can capture the
details of the TLS oscillation. This consideration serves as an “upper bound”
criterion. The second consideration concerns the time-discretization error,
which originates from the approximate splitting of the total time evolution
operator into a product of terms, see Eq. (5). For the
particular Trotter decomposition employed, the leading error grows with
$\delta t^{3}\times\left([H_{S},[V,H_{S}]]/12+[V,[V,H_{S}]]/24\right)$ Tannor
(2007) where $V=V_{SB}+V_{SF}+H_{B}+H_{F}$. The decomposition becomes exact
when the coupling of the subsystem to the reservoirs is weak and the time step
is small, $\delta t\rightarrow 0$. For large coupling one should take a
sufficiently small time step in order to avoid significant error buildup. In
the present work, the dimensionless coupling to the Fermi sea
$\pi\rho\kappa\lambda_{l}^{*}\lambda_{r}$ is typically kept below 0.3; the
dimensionless coupling to the boson bath is taken as $K_{d}=0.1$. The
value of $\delta t=0.6-1.2$ is thus sufficiently small for our simulations.
(iii) Memory error. Our approach assumes that bath correlations decay
exponentially, as a result of the finite temperature and the nonequilibrium
condition.
Based on this assumption, the total influence functional was truncated to
include only a finite number of time steps $N_{s}$, where
$\tau_{c}=N_{s}\delta t$. The total IF is retrieved by taking the limit
$N_{s}\rightarrow N$, ($N=t/\delta t$). Our simulations were performed for
$N_{s}=3...9$, covering memory time up to $\tau_{c}=N_{s}\delta t\sim 10$. The
results displayed converged for $N_{s}\sim 7-9$ for $\delta t=1$.
Computational efforts can be partitioned into two parts: In the initialization
step the (time invariant) IFs are computed. The size of the fermionic IF is
$d^{2N_{s}}$, where $d$ is the dimensionality of the subsystem (two in our
simulations). The factor of two in the exponent results from the forward and
backward time evolution operators in the path-integral expression. This
initialization effort thus scales exponentially with the memory size accounted
for. The preparation of the bosonic IF is more efficient if the FV IF is used
Makri and Makarov (1995a); *QUAPI2. In the second, time-evolution, stage, we
iteratively apply the linear map (27) or (34), a multiplication of two objects
of length $d^{2N_{s}}$. This operation scales linearly with the simulated time.
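The structure of this time-evolution stage can be sketched as follows. This is our own illustration of the bookkeeping only; the actual kernel is built from the truncated IF and is represented here by placeholder random numbers.

```python
import numpy as np

d, Ns = 2, 5                      # subsystem dimension and memory length
dim = d * d                       # forward x backward path points per time slice

rng = np.random.default_rng(0)
A = rng.random(dim ** Ns)         # auxiliary array over the last Ns time slices
K = rng.random((dim ** Ns, dim))  # placeholder kernel: extends the path by one slice

def propagate(A, K):
    # Multiply by the kernel (adds the newest slice), then sum over the oldest
    # slice; the array keeps the fixed length dim**Ns, so each additional time
    # step costs the same amount of work.
    ext = (A[:, None] * K).reshape(dim, dim ** (Ns - 1), dim)
    return ext.sum(axis=0).reshape(dim ** Ns)

for _ in range(10):               # ten time steps
    A = propagate(A, K)
print(A.size)                     # dim**Ns: storage grows exponentially in Ns only
```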
We now comment on the simulation time of a convergence analysis as presented
in Fig. 10, covering three different time steps and $N_{s}=3,...,9$. The
MATLAB implementation of the computational algorithm took advantage of the
MATLAB built-in multi-threaded parallel features and utilized 100$\%$ of all
available CPU cores on a node. When executed on one cluster node with two
quad-core 2.2 GHz AMD Opteron CPUs and 16 GB of memory, the convergence
analysis of the full voltage scan took about 4$\times$24 hours and 250 MB of
memory. Computations
performed on the GPC supercomputer at the SciNet HPC Consortium Loken and et
al. (2010) were three times faster. Computational time scales linearly with
the simulated time $t$. For a fixed $N_{s}$ value, the computational effort
does not depend on the system temperature and other parameters employed.
## V Summary
We have developed an iterative numerically-exact path-integral scheme that can
follow the dynamics, to the steady-state limit, of subsystems coupled to
multiple bosonic and fermionic reservoirs in an out-of-equilibrium initial
state. The method is based on the truncation of time correlations in the
influence functional, beyond the memory time dictated by temperature and
chemical biases. It combines two techniques: the QUAPI method Makri and
Makarov (1995a); *QUAPI2, for treating the dynamics of subsystems coupled to
harmonic baths, and the INFPI approach Segal _et al._ (2010); *IF2, useful
for following the evolution of a subsystem when interacting with fermionic
baths.
The method is stable, efficient, and flexible, and it allows one to obtain
transient and steady-state data for both the reduced density matrix of the
subsystem and expectation values of operators, such as the charge current and
energy current. The method can be viewed as an extension of QUAPI, to
incorporate fermions in the dynamics. It could be further expanded to include
time-dependent Hamiltonians, e.g., pulsed fields.
To demonstrate the method's usability in the field of molecular conduction, we
have applied the general scheme, and studied vibrational dynamics in a
molecular rectifier setup, where vibrational equilibration with an additional
phonon bath is allowed. Our main conclusions in this study are the following:
(i) The vibrational instability effect disappears once the vibrational mode is
weakly coupled ($K_{d}\sim 0.01$) to an additional phonon bath that can
dissipate the excess energy. (ii) When $K_{d}\sim 0.1$, the vibrational mode
is equilibrated with the secondary phonon bath. (iii) The charge current does
not reveal vibrational heating and instability. While we have performed
those simulations using a truncated vibrational mode, a TLS, representing an
anharmonic mode, we argue that the main characteristics of the vibrational
instability effect remain intact when the selected mode is made harmonic
Simine and Segal (2012).
Our simulations indicate that Master equation methods can excellently
reproduce exact results at weak coupling, in the Markovian limit. More
significantly, Master equation tools can be used beyond the weak coupling
limit ($\kappa\sim 0.3$) if only a qualitative understanding of the junction
behavior is required. One should note that our Master equation technique
treats the D and A coupling to the metals exactly. It is perturbative only in
the interaction of the vibrational mode with the electrons and with other
phonon degrees of freedom. In the case where tunneling transmission competes
with phonon-assisted transport, only path-integral simulations were provided,
as more involved Master equation methodologies would need to be developed for
this case.
Our future objectives are twofold: (i) to improve the time-evolution
algorithm, and (ii) to employ the method for the study of other problems in
molecular electronics and phononics. By improving the methodology, we would
like to extend the usability of our method to difficult parameter regimes
(strong coupling), e.g., by developing an equation-of-motion for the memory
function Golosov _et al._ (1999); Cohen and Rabani (2011). This will also
allow us to simulate the dynamics of an $n$-level subsystem more feasibly.
Another related objective is the study of heat current characteristics in the
spin-boson molecular junction Segal and Nitzan (2005a); *SegalR2. The single-
bath spin-boson model displays rich dynamics with a complex phase diagram.
Similarly, we expect that the nonequilibrium version, with two harmonic baths
of different temperatures coupled to the TLS, will show complex behavior in
its heat current-temperature characteristics. Recent results, obtained using
an extension of the noninteracting blip approximation to the nonequilibrium
regime Nicolin and Segal (2011), demonstrate rich behavior. Other problems
that could be addressed with our method include plexcitonic systems, where the
coupling between surface plasmons and molecular excitons should be treated
beyond the perturbative regime Manjavacas _et al._ (2011). Finally, we have
discussed the calculation of the reduced density matrix and currents in the
path-integral framework. It is of interest to generalize these expressions and
obtain higher-order cumulants, for the study of current, noise, and
fluctuation relations in many-body out-of-equilibrium systems.
###### Acknowledgements.
DS acknowledges support from an NSERC discovery grant. The work of LS was
supported by an Early Research Award of DS. Computations were performed on the
GPC supercomputer at the SciNet HPC Consortium Loken and et al. (2010). SciNet
is funded by: the Canada Foundation for Innovation under the auspices of
Compute Canada; the Government of Ontario; Ontario Research Fund - Research
Excellence; and the University of Toronto.
## Appendix A: Time-discrete Feynman-Vernon Influence functional
With the discretization of the path, the influence functional takes the form
(17). The coefficients $\eta_{k,k^{\prime}}$ were given in Makri and Makarov
(1995a); *QUAPI2 and we include them here for the completeness of our
presentation. The expressions are given here for the case of a single boson
bath with the initial temperature $1/\beta_{ph}$ and the spectral function
$J_{ph}(\omega)=\pi\sum_{p}(\xi^{B}_{p})^{2}\delta(\omega-\omega_{p})$,
$J_{ph}(\omega)=J_{ph}(-\omega)$,
$\displaystyle\eta_{k,k^{\prime}}$ $\displaystyle=$
$\displaystyle\frac{2}{\pi}\int_{-\infty}^{\infty}d\omega\frac{J_{ph}(\omega)}{\omega^{2}}\frac{\exp(\beta_{ph}\omega/2)}{\sinh(\beta_{ph}\omega/2)}\sin^{2}(\omega\delta
t/2)e^{-i\omega\delta t(k-k^{\prime})},\,\,\,\,\,0<k^{\prime}<k<N$
$\displaystyle\eta_{k,k}$ $\displaystyle=$
$\displaystyle\frac{1}{2\pi}\int_{-\infty}^{\infty}d\omega\frac{J_{ph}(\omega)}{\omega^{2}}\frac{\exp(\beta_{ph}\omega/2)}{\sinh(\beta_{ph}\omega/2)}\left(1-e^{-i\omega\delta
t}\right),\,\,\,\,0<k<N$ $\displaystyle\eta_{k,0}$ $\displaystyle=$
$\displaystyle\frac{2}{\pi}\int_{-\infty}^{\infty}d\omega\frac{J_{ph}(\omega)}{\omega^{2}}\frac{\exp(\beta_{ph}\omega/2)}{\sinh(\beta_{ph}\omega/2)}\sin(\omega\delta
t/4)\sin(\omega\delta t/2)e^{-i\omega(k\delta t-\delta
t/4)},\,\,\,\,\,\,\,0<k<N$ $\displaystyle\eta_{N,k^{\prime}}$ $\displaystyle=$
$\displaystyle\frac{2}{\pi}\int_{-\infty}^{\infty}d\omega\frac{J_{ph}(\omega)}{\omega^{2}}\frac{\exp(\beta_{ph}\omega/2)}{\sinh(\beta_{ph}\omega/2)}\sin(\omega\delta
t/4)\sin(\omega\delta t/2)e^{-i\omega(N\delta t-k^{\prime}\delta t-\delta
t/4)},\,\,\,\,\,\,\,0<k^{\prime}<N$ $\displaystyle\eta_{N,0}$ $\displaystyle=$
$\displaystyle\frac{2}{\pi}\int_{-\infty}^{\infty}d\omega\frac{J_{ph}(\omega)}{\omega^{2}}\frac{\exp(\beta_{ph}\omega/2)}{\sinh(\beta_{ph}\omega/2)}\sin^{2}(\omega\delta
t/4)e^{-i\omega(N\delta t-\delta t/2)}$ $\displaystyle\eta_{0,0}$
$\displaystyle=$
$\displaystyle\eta_{N,N}=\frac{1}{2\pi}\int_{-\infty}^{\infty}d\omega\frac{J_{ph}(\omega)}{\omega^{2}}\frac{\exp(\beta_{ph}\omega/2)}{\sinh(\beta_{ph}\omega/2)}\left(1-e^{-i\omega\delta
t/2}\right)$ (A1)
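For reference, the coefficients above can be evaluated by straightforward numerical quadrature. The sketch below is our own illustration; the Ohmic form and prefactor of $J_{ph}$ are assumptions standing in for Eq. (50), which is not reproduced in this appendix, and only the first two coefficients of Eq. (A1) are shown.

```python
import numpy as np

beta_ph, omega_c, K_d, dt = 5.0, 15.0, 0.1, 1.0   # illustrative parameters

def J_ph(w):
    # assumed Ohmic spectral function with exponential cutoff (stand-in for Eq. 50)
    return (np.pi / 2.0) * K_d * np.abs(w) * np.exp(-np.abs(w) / omega_c)

# symmetric midpoint grid that avoids w = 0, where the integrand is regular
grid = np.linspace(-20 * omega_c, 20 * omega_c, 600001)
w = 0.5 * (grid[1:] + grid[:-1])
dw = w[1] - w[0]
with np.errstate(over="ignore"):
    # exp(beta w/2)/sinh(beta w/2) rewritten as 2/(1 - exp(-beta w)) for stability
    kernel = J_ph(w) / w**2 * 2.0 / (1.0 - np.exp(-beta_ph * w))

def eta_off(k, kp):
    """eta_{k,k'} for 0 < k' < k < N (first line of Eq. A1)."""
    integrand = kernel * np.sin(w * dt / 2) ** 2 * np.exp(-1j * w * dt * (k - kp))
    return (2 / np.pi) * np.sum(integrand) * dw

def eta_diag():
    """eta_{k,k} for 0 < k < N (second line of Eq. A1)."""
    integrand = kernel * (1 - np.exp(-1j * w * dt))
    return (1 / (2 * np.pi)) * np.sum(integrand) * dw

print(eta_off(3, 1), eta_diag())
```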
## References
* Bonca and Trugmann (1995) J. Bonca and S. Trugmann, Phys. Rev. Lett. 75, 2566 (1995).
* Ness _et al._ (2001) H. Ness, S. Shevlin, and A. Fisher, Phys. Rev. B 63, 125422 (2001).
* Cízek _et al._ (2004) M. Cízek, M. Thoss, and W. Domcke, Phys. Rev. B 70, 125406 (2004).
* Galperin _et al._ (2007a) M. Galperin, M. A. Ratner, and A. Nitzan, J. Phys.: Condens. Matter 19, 103201 (2007a).
* Mitra _et al._ (2004) A. Mitra, I. Aleiner, and A. J. Millis, Phys. Rev. B 69, 245302 (2004).
* Galperin _et al._ (2007b) M. Galperin, A. Nitzan, and M. A. Ratner, Phys. Rev. B 75, 155312 (2007b).
* Galperin _et al._ (2009) M. Galperin, M. A. Ratner, and A. Nitzan, J. Chem. Phys 130, 144109 (2009).
* Fransson and Galperin (2010) J. Fransson and M. Galperin, Phys. Rev. B 81, 075311 (2010).
* Segal and Nitzan (2002) D. Segal and A. Nitzan, J. Chem. Phys. 117, 3915 (2002).
* Leijnse and Wegewijs (2008) M. Leijnse and M. R. Wegewijs, Phys. Rev. B 78, 235424 (2008).
* Hartle and Thoss (2011a) R. Hartle and M. Thoss, Phys. Rev. B 83, 125419 (2011a).
* Hartle and Thoss (2011b) R. Hartle and M. Thoss, Phys. Rev. B 83, 115414 (2011b).
* Volkovich _et al._ (2010) R. Volkovich, R. Hartle, and M. Thoss, Phys. Chem. Chem. Phys. 32, 14333 (2010).
* Swenson _et al._ (2011) D. W. H. Swenson, T. Levy, G. Cohen, E. Rabani, and W. H. Miller, J. Chem. Phys. 134, 164103 (2011).
* Swenson _et al._ (2012) D. W. H. Swenson, G. Cohen, and E. Rabani, Molecular Physics 110, 743 (2012).
* Makri and Makarov (1995a) N. Makri and D. E. Makarov, J. Chem. Phys. 102, 4600 (1995a).
* Makri and Makarov (1995b) N. Makri and D. E. Makarov, J. Chem. Phys. 102, 4611 (1995b).
* Segal _et al._ (2010) D. Segal, A. J. Millis, and D. R. Reichman, Phys. Rev. B 82, 205323 (2010).
* Segal _et al._ (2011) D. Segal, A. J. Millis, and D. R. Reichman, Phys. Chem. Chem. Phys. 13, 14378 (2011).
* Bedkihal and Segal (2012) S. Bedkihal and D. Segal, Phys. Rev. B 85, 155324 (2012).
* Kulkarni _et al._ (2012) M. Kulkarni, K. L. Tiwari, and D. Segal, Phys. Rev. B 86, 155424 (2012).
* Kulkarni _et al._ (2013) M. Kulkarni, K. L. Tiwari, and D. Segal, New J. Phys. 15, 013014 (2013).
* Muhlbacher and Rabani (2008) L. Muhlbacher and E. Rabani, Phys. Rev. Lett. 100, 176403 (2008).
* Wang _et al._ (2011) H. Wang, I. Pshenichnyuk, R. Hartle, and M. Thoss, J. Chem. Phys. 135, 244506 (2011).
* Weiss _et al._ (2008) S. Weiss, J. Eckel, M. Thorwart, and R. Egger, Phys. Rev. B 77, 195316 (2008).
* Eckel _et al._ (2010) J. Eckel, F. Heidrich-Meisner, S. G. Jakobs, M. Thorwart, M. Pletyukhov, and R. Egger, New J. Phys. 12, 043042 (2010).
* Hutzen _et al._ (2012) R. Hutzen, S. Weiss, M. Thorwart, and R. Egger, Phys. Rev. B 85, 121408 (2012).
* Aviram and Ratner (1974) A. Aviram and M. A. Ratner, Chem. Phys. Lett. 29, 277 (1974).
* Lu _et al._ (2011) J.-T. Lu, P. Hedegard, and M. Brandbyge, Phys. Rev. Lett. 107, 046801 (2011).
* Simine and Segal (2012) L. Simine and D. Segal, Phys. Chem. Chem. Phys. 14, 13820 (2012).
* Kondo (1964) J. Kondo, Prog. Theor. Phys. 32, 37 (1964).
* Mitra and Millis (2005) A. Mitra and A. J. Millis, Phys. Rev. B 72, 121102(R) (2005).
* Mitra and Millis (2007) A. Mitra and A. J. Millis, Phys. Rev. B 76, 085342 (2007).
* Segal _et al._ (2007) D. Segal, D. R. Reichman, and A. J. Millis, Phys. Rev. B 76, 195316 (2007).
* Lutchyn _et al._ (2008) R. M. Lutchyn, L. Cywinski, C. P. Nave, and S. D. Sarma, Phys. Rev. B 78, 024508 (2008).
* Feynman and Hibbs (1965) R. P. Feynman and A. R. Hibbs, _Quantum Mechanics and Path Integrals_ (McGraw-Hill, New-York, 1965).
* Klich (2003) I. Klich, in _Quantum Noise in Mesoscopic Systems_ (Kluwer, 2003).
* Yu. _et al._ (2004) L. H. Yu., Z. K. Keane, J. W. Ciszek, L. Cheng, M. P. Stewart, J. M. Tour, and D. Natelson, Phys. Rev. Lett. 93, 266802 (2004).
* Djukic _et al._ (2005) D. Djukic, K. S. Thygesen, C. Untiedt, R. H. M. Smit, K. Jacobsen, and J. M. van Ruitenbeek, Phys. Rev. B 71, 161402 (2005).
* Kumar _et al._ (2012) M. Kumar, R. Avriller, A. L. Yeyati, and J. M. van Ruitenbeek, Phys. Rev. Lett. 108, 146602 (2012).
* Pasupathy _et al._ (2005) A. N. Pasupathy, J. Park, C. Chang, A. V. Soldatov, R. C. B. S. Lebedkin, J. E. Grose, L. A. K. Donev, J. P. Sethna, D. C. Ralph, and P. L. McEuen, Nano Lett. 5, 203 (2005).
* Ioffe _et al._ (2008) Z. Ioffe, T. Shamai, A. Ophir, G. Noy, I. Yutsis, K. Kfir, O. Cheshnovsky, and Y. Selzer, Nature Nanotech. 3, 727 (2008).
* Ward _et al._ (2011) D. R. Ward, D. A. Corley, J. M. Tour, and D. Natelson, Nature Nanotech. 6, 33 (2011).
* Huang _et al._ (2007) Z. Huang, F. Chen, R. Dagosta, P. A. Bennett, M. D. Ventra, and N. Tao, Nature Nanotech. 2, 698 (2007).
* Entin-Wohlman _et al._ (2010) O. Entin-Wohlman, Y. Imry, and A. Aharony, Phys. Rev. B 82, 115314 (2010).
* Jiang _et al._ (2012) J.-H. Jiang, O. Entin-Wohlman, and Y. Imry, Phys. Rev. B 85, 075412 (2012).
* Entin-Wohlman and Aharony (2012) O. Entin-Wohlman and A. Aharony, Phys. Rev. B 85, 085401 (2012).
* Mahan (2000) G. D. Mahan, _Many-particle physics_ (Plenum press, New York, 2000).
* Roulet _et al._ (1969) B. Roulet, J. Gavoret, and P. Nozieres, Phys. Rev. 178, 1072 (1969).
* Mii _et al._ (2003) T. Mii, S. Tikhodeev, and H. Ueba, Phys. Rev. B 68, 205406 (2003).
* Galperin _et al._ (2004a) M. Galperin, M. Ratner, and A. Nitzan, J. Chem. Phys. 121, 11965 (2004a).
* Galperin _et al._ (2004b) M. Galperin, M. Ratner, and A. Nitzan, Nano. Lett. 4, 1605 (2004b).
* Tannor (2007) D. J. Tannor, _Introduction to Quantum Mechanics: A Time-Dependent Perspective_ (University Science Books, 2007).
* Loken and et al. (2010) C. Loken and et al., J. Phys.: Conf. Ser. 256, 012026 (2010).
* Golosov _et al._ (1999) A. A. Golosov, R. A. Friesner, and P. Pechukas, J. Chem. Phys 110, 138 (1999).
* Cohen and Rabani (2011) G. Cohen and E. Rabani, Phys. Rev. B 84, 075150 (2011).
* Segal and Nitzan (2005a) D. Segal and A. Nitzan, Phys. Rev. Lett. 94, 034301 (2005a).
* Segal and Nitzan (2005b) D. Segal and A. Nitzan, J. Chem. Phys. 122, 194704 (2005b).
* Nicolin and Segal (2011) L. Nicolin and D. Segal, J. Chem. Phys. 135, 164106 (2011).
* Manjavacas _et al._ (2011) A. Manjavacas, F. J. G. de Abajo, and P. Nordlander, Nano Lett. 11, 2318 (2011).
# Constant Communities in Complex Networks
Tanmoy Chakraborty, Dept. of Computer Science & Engg., Indian Institute of Technology, Kharagpur, India – 721302
Sriram Srinivasan, Dept. of Computer Science, University of Nebraska, Omaha, Nebraska 68106
Niloy Ganguly, Dept. of Computer Science & Engg., Indian Institute of Technology, Kharagpur, India – 721302
Sanjukta Bhowmick, Dept. of Computer Science, University of Nebraska, Omaha, Nebraska 68106
Animesh Mukherjee, Dept. of Computer Science & Engg., Indian Institute of Technology, Kharagpur, India – 721302
Identifying community structure is a fundamental problem in network analysis.
Most community detection algorithms are based on optimizing a combinatorial
parameter, for example modularity. This optimization is generally NP-hard, and
merely changing the vertex order can alter the assignment of vertices to
communities. However, there has been relatively little study of how vertex
ordering influences the results of community detection algorithms. Here we
identify and study the properties of invariant groups of vertices (constant
communities) whose assignment to communities is, quite remarkably, not
affected by vertex ordering. The percentage of constant communities can vary
across different applications, and based on empirical results we propose
metrics to evaluate these communities. Using constant communities as a pre-
processing step, one can significantly reduce the variation of the results.
Finally, we present a case study on a phoneme network and illustrate that
constant communities, quite strikingly, form the core functional units of the
larger communities.
A fundamental problem in understanding the behavior of complex networks is the
ability to correctly detect communities. Communities are groups of entities
(represented as vertices) that are more connected to each other than to other
entities in the system. Mathematically, this question can be translated
to a combinatorial optimization problem with the goal of optimizing a given
metric of interrelation, such as modularity or conductance. The goodness of
community detection algorithms (see lf2009 (1, 2) for a review) is often
objectively measured according to how well they achieve the optimization.
However, these algorithms can be applied to any network, regardless of whether
it possesses a community structure or not. Furthermore, when the optimization
problem is NP-hard, as in the case of modularity ng2002 (1), the order in
which vertices are processed as well as the heuristics can change the results.
These inherent fluctuations of the results associated with modularity have
long been a source of concern among researchers. Indeed, the goodness of
modularity as an indicator of community structure has also been questioned,
and there exist examples gmc2010 (5) which demonstrate that high modularity
does not always indicate the correct community structure. Consequently,
orthogonal metrics, such as conductance lldm2008 (6) (which is also NP-
complete brandes2008 (4)) have been proposed.
Research addressing the fluctuations in the results due to modularity
maximization heuristics includes identifying stability among communities from
the consensus networks built from the successive iterations of a non-
deterministic community detection algorithm (such as by Seifi et al. sjri2012
(7)). Lancichinetti et al. lf2012 (8) proposed consensus clustering by
reweighting the edges based on how many times the pair of vertices were
allocated to the same community, for different identification methods.
Delvenne et al. dyb2010 (9) introduced the notion of the stability of a
partition, a measure of its quality as a community structure based on the
clustered auto-covariance of a dynamic Markov process taking place on the
network. Lai et al. lln2010 (10) proposed a random walk based approach to
enhance the modularity of a community detection algorithm. Ovelgonne et al.
ogy2012 (11) proposed an ensemble learning strategy for graph clustering.
Gfeller et al. gcr2012 (12) investigated the instabilities in the community
structure of complex networks. Finally, several pre-processing techniques
scb2012 (14, 13) have been developed to improve the quality of the solution.
These methods form an initial estimate of the community allocation over a
small percentage of the vertices and then refine this estimate over successive
steps.
All these methods focus on compiling the differences in the results to arrive
at an acceptable solution, and despite these advances a crucial question about
the variance of results remains unanswered – what does the invariance of the
results tell us about the network structure? In this paper, we focus on the
invariance in community detection as obtained by modularity maximization. Our
results, on a set of scale-free networks, show that while the vertex orderings
produce very different sets of communities, some groups of vertices are always
allocated to the same community for all different orderings. We define the
group of vertices that remain invariant as constant communities and the
vertices that are part of the constant communities as constant vertices.
Figure 1 shows a schematic diagram of constant communities. Note that not all
vertices in the network belong to constant communities. This is a key
difference of constant communities with the consensus methods lf2012 (8)
described earlier. Consensus methods attempt to find the best (most stable or
most similar) community among all available results and thus include all the
vertices. Constant communities, on the other hand, focus on finding subgraphs
where the cohesive groups can be unambiguously identified. As discussed
earlier, communities obtained by modularity maximization may include vertices
that can move from one group to another depending on the heuristic or the
vertex ordering. The vertex groups obtained using constant communities are
invariant under these algorithmic parameters and, thereby, provide a lower
bound on the number of uniquely identifiable communities in the network.
Although trivially each vertex can be considered to be a constant community by
itself, our goal is to identify the largest number of vertices (i.e., at least
three or more) that can be included in an invariant group.
Figure 1: Schematic illustration of the formation of constant communities.
Two colors (red and green) indicate two communities of the network formed in
each iteration. Combined results of two algorithms produce two constant
communities (rectangular and circular vertices). The remaining vertex
(hexagonal) is not included since it switches its community between the two
algorithms.
The presence of such invariant structures can be used to evaluate the accuracy
of the communities when other independent methods of verification are
unavailable. However, in many networks, constant communities constitute only a
small percentage of the total number of vertices. To understand how other non-
constant vertices are allocated to communities, we show that by using constant
communities we can significantly reduce the variations in results. Thus,
building from the more accurate results reduces the variance over the larger
network. In brief our main contributions are as follows:
* •
demonstrate the possibility of extreme variance in community structure due to
vertex perturbations
* •
develop metrics to determine whether a network possesses invariant groups of
constant communities
* •
demonstrate how using constant communities as a pre-processing step can reduce
the variance in modularity maximization methods.
## Results
Experimental setup. In this section, we first demonstrate that even for the
same optimization objective (in this case maximizing modularity) and the same
heuristic, the inherent non-determinism of the method can significantly change
the results. Based on our results, we define metrics to estimate the
propensity of a network to form communities. Finally, we show how combining
constant communities as a pre-processing step can help improve the modularity
of the community detection algorithm for the network as a whole.
We selected two popular agglomerative modularity maximization techniques – the
method proposed by Clauset et al. cnm2004 (15) (henceforth referred to as the
CNM method) and the method proposed by Blondel et al. bgll2008 (16)
(henceforth referred to as Louvain method). Both these methods initially start
by assigning one vertex per community. Then at each iterative step, two
communities whose combination most increases the value of modularity are
joined. This process of joining community pairs is continued until the value
of modularity no longer increases. The Louvain method generally produces a
higher value of modularity than CNM, because it allows vertices to migrate
across communities if that leads to a more optimal value.
In order to identify these communities, for each network in the test suite, we
applied the CNM (and Louvain) method over different permutations of the
vertices and then isolated the common groups that were preserved across the
different orderings (see Methods section). These common groups of vertices
were marked as the constant communities for the respective network.
We identified constant communities using both the CNM and Louvain algorithms.
We observed based on the high ($>$ 0.80) Normalized Mutual Information (NMI)
nmi (4) (see the supplementary information for the definition of NMI) values
that the overlap between the constant communities obtained from the two
methods is considerable nmi_1 (5, 6) (see Table III in the supplementary
information). Therefore, in the interest of space and clarity we confine our
discussion about the properties of constant communities to those obtained from
the Louvain method.
Degree preserving order. Ideally, the total number of different orderings to
be tested should be equal to the factorial of the number of vertices in the
network. However, even for the smallest network in our set (Chesapeake with 39
vertices) this value is astronomical. We therefore restrict our permutations
to maintain a degree-preserving order. The vertices are ordered such that if
the degree of $v_{i}$ is greater than the degree of $v_{j}$, then $v_{i}$ is
processed prior to $v_{j}$.
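A minimal sketch of such an ordering (our own illustration on a stand-in graph, not one of the networks in the test suite) is:

```python
import random
import networkx as nx

def degree_preserving_order(G, seed):
    # shuffle first so that only ties (equal-degree vertices) change between runs,
    # then sort by decreasing degree; Python's sort is stable
    rng = random.Random(seed)
    nodes = list(G.nodes())
    rng.shuffle(nodes)
    return sorted(nodes, key=G.degree, reverse=True)

G = nx.karate_club_graph()                  # stand-in network
print(degree_preserving_order(G, seed=0)[:10])
```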
In addition to reducing the number of vertex permutations, degree-preserving
permutation has another important advantage. Recall that the networks in
the test suite have few vertices with high degrees and many with low degrees.
Therefore, arranging the high degree vertices earlier pushes most of the
fluctuations towards the later part of the agglomeration process. This ensures
that the sub-communities formed initially are relatively constant and only
later do the divergence in community memberships take place. Clearly, such
orderings based on decreasing degrees are geared towards facilitating low
variance in communities. If even this ordering does not produce constant
structures, it makes a very strong case about the inherent fluctuations that
underlie modularity maximization methods.
Test suites. Our experiments were conducted on networks obtained from real-
world data as well as on a set of synthetically generated networks using the
LFR model lfr2009 (17). The set of real-world networks is obtained from the
instances available at the 10th DIMACS challenge website dimacs (18). The
networks, which are undirected and unweighted, include – Jazz (network of jazz
musicians; $|V|=198,|E|=2742$) jazz (19), Polbooks (network of books on USA
politics; $|V|=105,|E|=441$) polbooks (20), Chesapeake (Chesapeake bay
mesohaline network; $|V|=39,|E|=340$) chesapeake (21), Dolphin (Dolphin social
network; $|V|=62,|E|=159$) dolphin (22), Football (American college football;
$|V|=115,|E|=1226$) football (23), Celegans (Metabolic network of C. elegans;
$|V|=453,|E|=2025$) celegans (24), Power (topology of the Western States Power
Grid of the USA; $|V|=4941,|E|=6594$) power (25) and Email (e-mail
interchanges between members of the University Rovira i Virgili;
$|V|=1133,|E|=5451$) email (26) (note that $|V|$ refers to the number of
vertices and $|E|$ refers to the number of edges). All these networks exhibit
scale-free degree distribution (see Figure S1 in the supplementary
information).
Networks generated using the LFR model are associated with a mixing parameter
$\mu$ that represents the ratio of the external connections of a node to its
total degree. We created LFR networks based on the following parameters lf2012
(8): number of nodes = 500, average degree = 20, maximum degree = 50, minimum
community size = 10, maximum community size = 50, degree exponent for power
law = 2, community size exponent = 3. We varied the value of $\mu$ from 0.05
to 0.90. Low values of $\mu$ correspond to well-separated communities that are
easy to detect, and consequently these networks contain a larger percentage of
constant communities. As $\mu$ increases, communities become more ambiguous
and community detection algorithms provide more varied results, leading to
fewer vertices being in significantly sized constant communities.
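To make the meaning of $\mu$ concrete, the following sketch (our own illustration on a stand-in graph with a known partition, not an LFR graph) computes the average ratio of external connections to total degree:

```python
import networkx as nx

def mixing_parameter(G, community):
    # average over vertices of (number of external neighbours) / (degree)
    ratios = []
    for v in G:
        ext = sum(1 for u in G[v] if community[u] != community[v])
        ratios.append(ext / G.degree(v))
    return sum(ratios) / G.number_of_nodes()

G = nx.karate_club_graph()                          # stand-in network
community = {v: G.nodes[v]["club"] for v in G}      # its two known factions
print(round(mixing_parameter(G, community), 3))
```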
Sensitivity of community structure to vertex perturbations. In our first
experiment we study how the community structures of the networks change under
vertex perturbations. Since constant communities are the groups of vertices
that remain invariant, we measure the change in community structure based on
the number of constant communities. We define sensitivity ($\phi$) as the
ratio of the number of constant communities to the total number of vertices.
If $\phi$ is 1 then each vertex by itself is a constant community (the trivial
case), thus there is no consensus at all over the set of communities obtained
over different permutations. The higher the sensitivity metric, the fewer the
vertices in each constant community and, therefore, this metric is useful for
identifying networks that do not have a good community structure under
modularity maximization.
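The extraction of constant communities and of the sensitivity $\phi$ can be sketched as follows. This is our own illustration on a stand-in graph; we vary the random seed of the Louvain routine as a proxy for the degree-preserving vertex permutations used in the paper.

```python
from collections import defaultdict
import networkx as nx

def louvain_labels(G, seed):
    comms = nx.community.louvain_communities(G, seed=seed)
    return {v: i for i, c in enumerate(comms) for v in c}

G = nx.karate_club_graph()                       # stand-in network
runs = [louvain_labels(G, seed) for seed in range(100)]

# Vertices with identical label vectors across all runs were never separated;
# these groups are the constant communities (the common refinement of all runs).
groups = defaultdict(list)
for v in G:
    groups[tuple(run[v] for run in runs)].append(v)

constant_communities = [g for g in groups.values() if len(g) >= 3]
phi = len(groups) / G.number_of_nodes()          # sensitivity
print(phi, [sorted(g) for g in constant_communities])
```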
The sensitivity of each network is given in Figure 2. The x-axis indicates the
number of different permutations of the vertices and the y-axis plots the
value of the sensitivity. We observe that for most of the networks the number
of constant communities becomes stable within the first 100 permutations, and
the sensitivity values are low. This indicates that there can potentially
exist very strong groups in these networks that have to be together to achieve
high modularity. However, for networks such as Power grid and Email, the
number of constant communities kept increasing until the values of $\phi$ were
close to 1. Thus, the community detection results for these two networks are
extremely sensitive to the vertex perturbations. This implies that the
communities (if any) in these two networks are not tightly knit, i.e., they
are rather ``amorphous''.
Figure 2: Sensitivity of each network across 5000 permutations. X-axis
indicates the number of permutations. The x-axis is rescaled by a constant
factor of 100 for better visualization. Y-axis indicates the value of
sensitivity as it changes over the permutations. Power and Email networks have
very high sensitivity values indicating that they possibly do not have a
tightly knit community structure.
Percentage of constant communities. We now investigate, in further detail, the
properties of constant communities. We define the relative size ($\xi$) of a
constant community as the ratio of the number of vertices in that constant
community to the total number of vertices in the network and the strength
($\Theta$) as the ratio of the edges internal to the constant community to the
edges external to the constant community.
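These two quantities can be computed directly from a vertex group and the host graph; the sketch below is our own illustration on a stand-in graph with an illustrative vertex group.

```python
import networkx as nx

def relative_size(G, cc):
    return len(cc) / G.number_of_nodes()

def strength(G, cc):
    cc = set(cc)
    internal = sum(1 for u, v in G.edges() if u in cc and v in cc)
    external = sum(1 for u, v in G.edges() if (u in cc) != (v in cc))
    return internal / external if external else float("inf")

G = nx.karate_club_graph()               # stand-in network
cc = [0, 1, 2, 3, 7, 13]                 # an illustrative vertex group
print(relative_size(G, cc), strength(G, cc))
```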
Figure 3 plots the relative size (in percentage) of the constant communities
with respect to their strength. If the strength of a constant community is
above 1 (above 0 in log scale) then the number of internal edges in the
community is larger than the number of external edges. The higher the value,
the more tightly connected is the community. We see that the value of relative
size ranges from 0-34, with a larger cluster of values around 0-5. This shows
that most of the constant communities contain very few vertices with respect
to the network. If the relative size of the constant communities is low then
the remaining vertices have more freedom in migrating across communities,
making the community structure weaker. We observe that, despite there being
more constant communities of low relative size, there are some networks that
have multiple constant communities with relative size over 15% of the total
number of nodes indicating that they have a much stronger community structure.
These include Jazz, followed by Dolphin and then Polbooks and Chesapeake.
Figure 3: Comparison between the relative size and strength of the constant
communities. X-axis plots the relative size in percentage. Y-axis (in
logarithmic scale) plots the strength. Jazz, Dolphin, Polbooks and Chesapeake
show strong constant community structure. But Email and Power hardly have any
constant communities. The plot is vertically divided at x = 17, which helps to
systematically analyze the distribution of the points.
Relative size and strength together provide an estimate of which networks have
good community structure. If we divide the x-axis at roughly the mid-point of
the range and the y-axis at 1, then we obtain four quadrants each representing
different types of community structures. The first quadrant (upper right)
contains communities that have high relative size as well as high strength.
Networks containing a large number of such constant communities are less
likely to be affected by perturbations. Diagonally opposite is the third
quadrant (lower left), which contains communities of low relative size and low
strength. As discussed earlier, networks having communities predominantly from
this quadrant will produce significantly different results under perturbations
and are likely to not have a strong community structure under modularity
maximization. The second quadrant (upper left) contains the groups of vertices
that are strongly connected but have small relative size. This indicates that
there are some pockets of the network with strong community structure. The
fourth quadrant (lower right) represents communities with high relative size
but low strength. In this set of experiments it is empty, and we believe that
this area will be sparsely populated, if at all. This is because networks
having such communities will have a very special structure: strongly connected
groups of very few vertices with many spokes radiating out to account for the
high number of external communities.
Pull from external connections. We note in Figure 3 that there are several
constant communities whose strength is below one, i.e., they have more
external than internal connections. This is counterintuitive to the idea that
a strong community should have more internal connections. Indeed, modularity
maximization methods always tend to create communities whose strengths are
greater than one. However, the structure of some of the constant communities
belies this convention.
We observe that in these cases, the external connections are distributed
across different communities. Furthermore, the number of connections to any
one external community is always lower than the internal connections. Based on
this observation, we hypothesize that a group of vertices is likely to be
placed together so long as the number of internal connections is greater than
the number of connections to any single external community. Then the vertices
within the community do not experience a significant ``pull'' from any of the
external communities that would cause them to migrate, and, therefore, their propensity
to remain within their own communities is high. We quantify this measurement
as follows:
Let $v$ be a vertex in a constant community; further, let $D(v)$ denote the
degree of $v$, and $EN(v)$ and $IN(v)$ denote the number of external and
internal neighbors of $v$ respectively (i.e., $D(v)=IN(v)+EN(v)$). We also
assume that the $EN(v)$ external neighbors are divided into $k$ external
groups, and $ENG(v)$ denote a set of $k$ elements where the ith element in the
set represents the number of neighbors of $v$ belonging to the ith external
group. For instance, considering the vertex $A$ in $CC_{1}$ in Figure 4 (Top),
$D(A)=9,IN(A)=3,EN(A)=6$ and $ENG(A)=\\{3,2,1\\}$ (i.e., three external
neighbors in $CC_{2}$, one external neighbor in $CC_{3}$, and two external
neighbors in $CC_{4}$). Similarly, we calculate $ENG(v)$ for each vertex in
the network and form a list $DENG(G)$ by taking union over all $ENG(v)$, that
is, only unique entries across $ENG(v)$ get listed in $DENG(G)$ (see Figure
4-top). The list is then ranked in ascending order. The intuition behind this
ranking is to identify the diverse range of the sizes of the external groups.
The inverse of the rank would therefore signify the intensity of the pull of
the particular external community. For a particular vertex, if the inverse
rank of each of the external groups is equal to one, it would point to the fact
that all its external neighbors are diversely distributed (i.e., well
spread), and therefore the pull experienced should be minimal; in contrast,
if the value is much lower than one, it would imply that the vertex
experiences a strong pull from its external neighbors. We define the strength
of a vertex $v$, $\theta(v)$, as the ratio of the number of internal neighbors
($IN(v)$) to the number of external neighbors ($EN(v)$) of vertex $v$, similar
to the strength ($\Theta$) of a constant community defined earlier. Mathematically, the
suitably normalized value of $relative\ permanence$, $\Omega(v)$, of a vertex
$v$ in a constant community can be expressed as:
$\Omega(v)=\theta(v)\times\frac{\sum_{i=1}^{k}{\frac{1}{Rank_{i}(ENG(v))}}}{D(v)}$
(1)
where $Rank_{i}(ENG(v))$ denotes the rank (retrieved from the $DENG(G)$ list)
of the ith element in $ENG(v)$. This metric indicates the propensity of a vertex
to remain in the same community regardless of any algorithmic parameters.
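Eq. (1) can be evaluated directly from a community assignment; the following sketch is our own illustration, using the graph-wide $DENG(G)$ list for the ranks and a stand-in graph and partition.

```python
from collections import Counter
import networkx as nx

def relative_permanence(G, community):
    # external-neighbour group sizes ENG(v) for every vertex
    eng = {}
    for v in G:
        counts = Counter(community[u] for u in G[v] if community[u] != community[v])
        eng[v] = list(counts.values())
    # DENG(G): distinct external-group sizes over the whole graph, ranked ascending
    deng = sorted({x for sizes in eng.values() for x in sizes})
    rank = {x: i + 1 for i, x in enumerate(deng)}
    omega = {}
    for v in G:
        D, EN = G.degree(v), sum(eng[v])
        IN = D - EN
        if EN == 0:
            omega[v] = float("inf")   # no external neighbours: no pull at all
        else:
            theta = IN / EN           # theta(v) = IN(v)/EN(v)
            omega[v] = theta * sum(1.0 / rank[x] for x in eng[v]) / D   # Eq. (1)
    return omega

G = nx.karate_club_graph()                       # stand-in network
community = {v: G.nodes[v]["club"] for v in G}   # stand-in partition
omega = relative_permanence(G, community)
print({v: round(o, 3) for v, o in list(omega.items())[:5]})
```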
Figure 4 (Top) presents a schematic diagram for computing relative permanence
of vertices within the communities. Figure 4 (Bottom) plots the cumulative
distribution of the relative permanence over the vertices in all networks. The
x-axis indicates the value of the relative permanence and the y-axis, the
cumulative fraction of vertices having the corresponding relative permanence
value. The nature of the cumulative permanence distribution of the vertices is
roughly same for all networks except Email and Power. The distinguishing
nature of the curves for Email and Power graphs compared to the other graphs
indicates that very few number of vertices in these two networks have higher
relative permanence values and therefore experience more ``pull'' from the
external communities. Another observation is that a high fraction of vertices
in Jazz, Polbooks, Dolphin and Celegans have relative permanence close to one.
These vertices are more ``stable'' compared to the other vertices in the
respective networks.
Figure 4: Top: Schematic diagram illustrating the computation of the relative
permanence of the vertices. Bottom: Distribution of relative permanence
values. X-axis plots the value of $\Omega$ and y-axis plots the cumulative
fraction of vertices ($P(\Omega)$) exhibiting that $\Omega$. Both axes are in
logarithmic scale.
Constant communities for improving the modularity. We note that in many
networks (such as Football and Celegans) constant communities form only a
small percentage of the vertices. Thus, finding only the constant communities
may not provide adequate information about the relationship amongst the rest
of the vertices. We therefore leverage the invariant results in the first
and second quadrants of Figure 3 as building blocks to identify larger
communities.
Table 1: Modularity before and after pre-processing for real networks (left)
and for different values of mixing parameter ($\mu$) over LFR graphs (right)
Real networks, Louvain (left):

Networks | Before pre-processing: Mean ($m_{q}$) | Before pre-processing: Var ($\sigma_{q}$) | After pre-processing: Mean ($m_{q}$) | After pre-processing: Var ($\sigma_{q}$)
---|---|---|---|---
Jazz | 0.448 | 3.13e-6 | 0.452 | 0
Chesapeake | 0.301 | 1.17e-5 | 0.303 | 3.36e-33
Polbooks | 0.539 | 1.74e-5 | 0.557 | 1.24e-32
Dolphin | 0.543 | 1.76e-5 | 0.550 | 0
Football | 0.610 | 2.01e-5 | 0.623 | 0
Celegans | 0.438 | 2.89e-5 | 0.442 | 1.33e-26
Email | 0.542 | 6.89e-5 | 0.568 | 0.95e-12
Power | 0.936 | 1.09e-5 | 0.937 | 2.25e-10

LFR graphs, Louvain (right):

$\mu$ | Planted Modularity | Before pre-processing: Mean ($m_{q}$) | Before pre-processing: Var ($\sigma_{q}$) | After pre-processing: Mean ($m_{q}$) | After pre-processing: Var ($\sigma_{q}$)
---|---|---|---|---|---
0.05 | 0.878 | 0.834 | 1.98e-24 | 0.877 | 0
0.10 | 0.817 | 0.802 | 2.28e-28 | 0.817 | 0
0.20 | 0.716 | 0.690 | 5.74e-7 | 0.686 | 0
0.50 | 0.440 | 0.385 | 2.05e-6 | 0.389 | 1.58e-28
0.70 | 0.223 | 0.298 | 9.70e-10 | 0.219 | 1.04e-28
0.90 | 0.029 | 0.225 | 4.25e-10 | 0.205 | 5.64e-28
We first combine the constant communities into super-vertices. This process
creates a smaller network as well as ensures that the vertices in the constant
communities always stay together. Then we execute a modularity maximization
algorithm over the entire network (see Methods section). We compute the
variance in results by executing the underlying modularity maximization
algorithm over 5000 permutations using the degree-preserving order. As shown
in Table 1 (left), combining constant communities as a pre-processing step
both increases the mean modularity value and reduces the variability
across permutations for real-world networks.
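The collapsing step itself can be sketched as follows, assuming networkx is available; the function name and the super-vertex labels are illustrative, and any modularity-maximization routine (e.g., a Louvain implementation) can then be run on the resulting weighted graph.

```python
import networkx as nx

def collapse_constant_communities(G, constant_communities):
    """Collapse each constant community into one super-vertex. Parallel edges are
    merged into a single weighted edge and internal edges become weighted self-loops,
    so the total edge weight of the network is preserved."""
    rep = {}
    for i, cc in enumerate(constant_communities):
        for v in cc:
            rep[v] = ("CC", i)                 # illustrative super-vertex label
    for v in G.nodes():
        rep.setdefault(v, v)                   # other vertices keep their identity

    H = nx.Graph()
    H.add_nodes_from(set(rep.values()))
    for u, v, data in G.edges(data=True):
        w = data.get("weight", 1)
        ru, rv = rep[u], rep[v]
        if H.has_edge(ru, rv):
            H[ru][rv]["weight"] += w
        else:
            H.add_edge(ru, rv, weight=w)       # ru == rv yields a weighted self-loop
    return H, rep
```

The communities found on the collapsed graph are then unfolded back to the original vertices before the final modularity is reported.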
We also observe that the variance becomes 0 or very low for the networks which
have a significant number of constant communities in the first and second
quadrants of Figure 3. The results obtained from the other networks with high
sensitivity, such as Email and Power, still indicate some variance although
the value is less pronounced.
These observations on real-world networks lead us to believe that pre-
processing using constant communities is more effective if a network has
strong community structure. To test this hypothesis, we created LFR graphs
with mixing parameters from 0.05 to 0.90. Low mixing parameters indicate
strong community structure. As shown in Table 1 (right), pre-processing using
constant communities helps increase the modularity value and reduces
variability of the results.
Another advantage of LFR networks is that we know the ``ground truth'' i.e.,
the correct distribution of communities (exact number of vertices in each
community and the number of in-community connections between them). We used
NMI to compare the communities obtained, with and without using the pre-
processing step. As shown in Figure 5, when the community structure is strong
(low mixing parameter), using constant communities pushes the result towards
the ground truth. In contrast, when the community structure is not well-
defined (high mixing parameter), use of constant communities does not mimic
the community distribution of the ground truth, because there can be many
variations of community distribution in such networks that lead to high
modularity. These results once again highlight the significance of constant
communities.
Figure 5: Variation of NMI for different values of mixing parameters. The
broken line corresponds to the experiment without the pre-processing step and
the solid line to the experiment after using the pre-processing step.
Relative ranking of constant communities. A constant community is strong if it
is large (high $\xi$) or is well-connected (high $\Omega(v)$). We experimented
to see which one of these two properties is more important in determining high
modularity. To do so, we ordered the constant communities according to (a)
decreasing order of $\xi$ and (b) decreasing order of $\Omega$. We combined
the constant communities into super-vertices one by one following the order
obtained from (a) and (b) separately. After each combination, we computed the
modularity and compared the value with the average modularity (over 5000
permutations) obtained by using the Louvain method without any pre-processing.
Figure 6 compares the modularity obtained by collapsing constant communities
according to the order obtained from (a) (dotted blue line) and (b) (dotted
green lines). For almost all the networks, there is a transition where the
modularity values cross over the mean modularity (solid red line). Once this
transition takes place, the modularity values generally remain above (or at
least equal to) the mean modularity. This critical point indicates the
smallest fraction of constant communities required to outperform the original
algorithms. We observe further that the green lines (ordered according to
$\Omega(v)$) generally reach the critical point earlier than the blue lines
(ordered according to $\xi$), indicating that $\Omega(v)$ is a better
indicator of constant communities.
Figure 6: Modularity after partially collapsing the constant communities. The
broken blue lines are in decreasing order of size and the broken green lines
are in decreasing order of relative permanence. The red lines depict the mean
modularities without using constant communities.
Case study. The significance of a constant community in a network can be further
understood if we consider networks where nodes have specific functionalities
associated with them. We hypothesize that in such a network a constant
community would represent indispensable functional blocks that reflect the
defining characteristics of the network. In order to corroborate this
hypothesis we conduct a case study on a specific type of linguistic network
constructed from the speech sound inventories of the world's languages pho (7).
The sound inventory of a language comprises a set of consonants and vowels,
together also known as phonemes. In order to uncover the co-occurrence
principles of consonant inventories, the authors pho (7) constructed a network
(phoneme-phoneme network or PhoNet) where each node is a consonant and an edge
between two nodes denotes that the corresponding consonants have co-occurred in
at least one language. The number of languages in which
the two nodes (read consonants) co-occur defines the weight of the edge
between these nodes. Note that each node here has a functional representation
since it can be represented by means of a set of phonetic features (e.g.,
bilabial, dental, nasal, plosive etc) that indicate how it is articulated.
Since this is a weighted graph, we suitably define a threshold to construct
the unweighted version. We compute constant communities of PhoNet and observe
that each such community (see Table 2) represents a natural class, i.e., a set of
consonants that have a large overlap of the features pho (7). Such groups are
frequently found to appear together across languages, and linguists describe
this observation through the principle of feature economy pho (7). According
to this principle, the speakers of a language tend to be economic in choosing
the features in order to reduce their learning effort. For instance, if they
have learnt to use a set of features by virtue of learning a set of sounds,
they would tend to admit those other sounds in their language that are
combinatorial variations of the features already learnt – if a language has
the phonemes /p/ (voiceless, bilabial, plosive), /b/ (voiced, bilabial,
plosive) and /t/ (voiceless, dental, plosive) in its inventory then the
chances that it will have /d/ (voiced, dental, plosive) is disproportionately
higher compared to any other arbitrary phoneme since by virtue of learning to
articulate /p/, /b/ and /t/ the speakers need to learn no new feature to
articulate /d/. Identification of constant communities therefore
systematically unfolds the natural classes and provides a formal definition
for the same (otherwise absent in the literature). We plot in Figure S2 (see
supplementary information), the average hamming distance between the feature
vectors of phonemes forming a constant community versus the community size.
This average hamming distance is significantly lower than in the case when a set
of randomly chosen phonemes is grouped together and assumed to represent a
community of the same size as the constant community. Further, we
observe that collapsing the constant communities results either in more dilute
groups (still with a certain degree of feature overlap) or reproduces the same
constant communities indicating that no valid dilution is possible for these
functional blocks.
Table 2: A few constant communities of PhoNet and the features they have in common
Constant communities | Features in common
---|---
/ph/, /th/, /kh/ | voiceless, aspirated, plosive
/mb/, /nd/, /ng/ | prenasalized, voiced, plosive
/p/, /t/, /k/ | laryngealized, voiceless, plosive
/t/, /d/, /n/ | dental
/ l/, / n/, / t/, / d/ | retroflex
## Discussion
Constant communities are regions of the network whose community structure is
invariant under different perturbations and community detection algorithms.
They, thereby, represent the core similar relationships in the network. The
existence of multiple results for community detection is well known; however,
this is one of the first studies of the invariant subgraphs that occur in a
network.
Although we currently detect constant communities by comparing across
different permutations, our results have uncovered some interesting facets
about the community structure of networks, which can lead to improved
algorithms for community detection. First, we observe that constant
communities do not always have more internal connections than external
connections. Rather, the strength of the community is determined by the number
of different external communities to which it is connected. We have proposed a
metric to quantify the pull that a vertex experiences from the external
communities and the relative permanence of the said vertex indicating its
inertia to stay in its own community.
Secondly, in most networks, constant communities cover only a subset of the
vertices. Depending on the size of the constant communities it may not be
correct or necessary to assign every vertex to a community, as is the focus of
most community detection algorithms. Furthermore, even when we insist on
assigning a community to each vertex, the constant communities can be
leveraged to produce results with higher modularity and lower variance. Thus,
as discussed earlier, constant communities form the smallest indivisible units
in the networks and particularly in the case of agglomerative methods can be
used to hierarchically build larger communities.
Thirdly, the high functional cohesion among the vertices of the constant
community can render meaning to the community structure of the networks. This
conclusion is much more apparent for labeled graphs where the vertices are
associated with certain functional properties. If we stop at detecting only
the constant communities and treat them as the actual community structure of
the graph, we observe that sometimes it acts as a hard bound since no further
community detection might be possible. Therefore, we suggest that the prior
detection of these building blocks is always significant in order to further
decide to merge them into more coarse-grained communities pertaining to a
diluted functional cohesion.
The fourth and most important observation is that not all networks have
significant constant community structure. The two most striking examples in
our test suite are the Power and Email graphs. The absence of constant
communities in these networks indicates that either communities in general do not
exist or they are highly overlapped and therefore do not have a significant
constant region. The first case is true for Power grid, which as a grid is
unlikely to have communities. We believe that the second reason probably holds
for the Email network. A set of professional emails among correspondents in
the same university is likely to produce overlapping rather than clear-cut
communities.
Finally, we have demonstrated evidence that the modularity measure is not
enough to judge the inherent compartmental structure of a network. For
instance, the Email and Power networks have reasonably high modularities
compared to the others. Still, no consensus is observed in their community
structures. Rather, their sensitivity measures indicate that each node might
act as an individual constant community in further iterations. Therefore, the
goodness metric of the community detection algorithm should be redefined in a
way that can effectively capture the modular structure of the network.
We note that the experiments in this paper focused solely on agglomerative
modularity maximization methods. We plan to continue our studies on the effect
of vertex perturbations on other types of community detection algorithms such
as divisive and spectral methods as well as different optimization objectives.
In particular we are very keen to understand how the randomness of a network
could be quantified in order to develop algorithms that take into account the
variation in randomness of connections for determining the quality of the
communities.
## Methods
Identifying constant communities. In order to identify constant communities we
permute the order of the vertices, and then apply a community detection
algorithm to each of the permuted networks. The results vary across
permutations. We select the groups of vertices that were always allocated
together across all the permutations and mark them as constant communities.
Algorithm 1 in supplementary information formalizes the steps to find out
constant communities (see Figure S3 for the schematic diagram of the
algorithmic steps in the supplementary information). The rationale behind this
process is that these vertices must have some intrinsic connectivity
properties that force them to stay together under all orderings.
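A minimal sketch of this grouping step, assuming the community membership of every vertex has already been recorded for each permutation (names are illustrative):

```python
from collections import defaultdict

def constant_communities(memberships):
    """memberships[p][v]: community label of vertex v in permutation p.
    Two vertices belong to the same constant community iff they received the
    same label in every permutation, i.e. their label tuples are identical."""
    groups = defaultdict(list)
    for v in memberships[0]:
        groups[tuple(m[v] for m in memberships)].append(v)
    # Singleton groups carry no grouping information and are discarded here
    # (assumption: constant communities contain at least two vertices).
    return [g for g in groups.values() if len(g) > 1]
```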
To implement the vertex permutation, we adopt a stochastic degree-preserving
scheme that can arrange the vertices based on the descending order of their
degrees. The ordering of the set of vertices with the same degree is permuted.
By applying this method we preserve the relative ordering of the degrees of
the vertices since it is well-known that node-degrees constitute a fundamental
network property. We have also observed that the random permutations producing
high modularity usually preserve a degree-descending order of vertices and the
ones that result in low modularity usually are outcomes of cases where the
algorithm would start executing from a low-degree vertex. Thus, our
permutations reduce the possibility of getting confined in a local maximum of
the modularity.
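The degree-preserving permutation itself might be generated roughly as follows; this is a sketch under the assumptions above, not the exact implementation used in the paper.

```python
import random
from itertools import groupby

def degree_preserving_permutation(degrees, rng=random):
    """degrees: dict vertex -> degree. Returns a degree-descending vertex order
    in which only vertices of equal degree are shuffled among themselves."""
    by_degree = sorted(degrees, key=lambda v: -degrees[v])
    order = []
    for _, tied in groupby(by_degree, key=lambda v: degrees[v]):
        tied = list(tied)
        rng.shuffle(tied)
        order.extend(tied)
    return order
```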
Combining constant communities for modularity maximization. For these tests,
we first collapse the constant communities to individual nodes (see Figure S3
in the supplementary information). This step ensures that the constant
vertices are always grouped together and are guaranteed to remain within the
same community. The total number of edges between the vertices of the two
collapsed communities is computed and this sum is assigned as the new edge
weight between them. We then apply a community detection method to the new
weighted network to obtain the final modularity.
## References
* (1) Lancichinetti, A. & Fortunato, S. Community detection algorithms: a comparative analysis. Physical Review E. 80, 056117 (2009).
* (2) Porter, M. A., Onnela, J. P. & Mucha, P. J. Communities in networks. Notices of the American Mathematical Society. 56, 1082-1097 & 1164-1166 (2009).
* (3) Newman, M.E.J. & Girvan, M. Finding and evaluating community structure in networks. Physical Review E. 69, 026113 (2002).
* (4) Brandes, U., Gaertler, M. & Wagner, D. Experiments on graph clustering algorithms. 11th Europ. Symp. Algorithms. 2832, 568-579 (2003).
* (5) Good, B. H., Montjoye, Y. A. & Clauset, A. The performance of modularity maximization in practical contexts. Phys. Rev. E. 81, 046106 (2010).
* (6) Leskovec, J., Lang, K. L., Dasgupta, A. & Mahoney, M. W. Community structure in large networks: natural cluster sizes and the absence of large well-defined clusters. CoRR. abs/0810.1355 (2008).
* (7) Seifi, M., Guillaume, J. L., Junier, I., Rouquier, J. B. & Iskrov, S. Stable community cores in complex networks. CompleNet. 87-98 (2012).
* (8) Lancichinetti, A. & Fortunato, S. Consensus clustering in complex networks. Nature Scientific Reports. 2, (2012).
* (9) Delvenne, J. C., Yaliraki, S. N. & Barahona, M. Stability of graph communities across time scales. Proceedings of the National academy of sciences of the United States of America. 107, 12755-12760 (2010).
* (10) Lai, D., Lu, H. & Nardini, C. Enhanced modularity-based community detection by random walk network preprocessing. Phys. Rev. E. 81, 066118 (2010).
* (11) Ovelgonne, M. & Geyer-Schulz, A. An ensemble learning strategy for graph clustering. 10th DIMACS Implementation Challenge Graph Partitioning and Graph Clustering (2012).
* (12) Gfeller, D., Chappelier, J. C., & De Los Rios, P. Finding instabilities in the community structure of complex networks. Phys. Rev. E. 72, 056135 (2005).
* (13) Reidy, J., Bader, D. A., Jiang, K., Pande, P. & Sharma, R. Detecting communities from given seeds in social networks. Technical Report, http://hdl.handle.net/1853/36980.
* (14) Srinivasan, S., Chakraborty, T. & Bhowmick, S. Identifying base clusters and their application to maximizing modularity. Contemporary Mathematics. Graph partitioning and Graph Clustering. (D. A. Bader, H. Meyerhenke, P. Sanders and D. Wagner eds.), AMS-DIMACS (2012) (in press).
* (15) Clauset, A., Newman, M. E. J. & Moore, C. Finding community structure in very large networks. Phys. Rev. E. 70, 066111 (2004).
* (16) Blondel, V. D., Guillaume, J. L., Lambiotte, R. & Lefebvre, E. Fast unfolding of community hierarchies in large networks. J. Stat. Mech. 2008(10):P10008+ (2008).
* (17) Lancichinetti, A. & Fortunato, S. Benchmarks for testing community detection algorithms on directed and weighted graphs with overlapping communities. Phys. Rev. E. 80, 016118 (2009).
* (18) 10th DIMACS Implementation Challenge - Graph Partitioning and Graph Clustering. http://www.cc.gatech.edu/dimacs10/archive/clustering.shtml (12.01.2012).
* (19) Gleiser, P. & Danon, L. Jazz musicians network: List of edges of the network of Jazz musicians. Adv. Complex Syst. 6, 565 (2003).
* (20) Krebs, V. Books on US Politics. http://www.orgnet.com/
* (21) Baird, D. & Ulanowicz, R. E. The seasonal dynamics of the Chesapeake Bay ecosystem. Ecol. Monogr. 59, 329-364 (1989).
* (22) Lusseau, D. et al. The bottlenose dolphin community of doubtful sound features a large proportion of long-lasting associations. Behavioral Ecology and Sociobiology. 54, 396-405 (2003).
* (23) Girvan, M. & Newman, M.E.J. Community structure in social and biological networks. Proc. Natl. Acad. Sci. USA. 99, 7821-7826 (2002).
* (24) Duch, J. & Arenas, A. Community identification using extremal optimization. Phys. Rev. E. 72, (2005).
* (25) Watts, D. J. & Strogatz, S. H. Collective dynamics of `small-world' networks. Nature. 393, 440-442 (1998).
* (26) Guimera, R., Danon, L., Diaz-Guilera, A., Giralt, F. & Arenas, A. Phys. Rev. E. 68, 065103(R) (2003).
* (27) Manning, C. D., Raghavan, P. & Schutze, H. Introduction to Information Retrieval. Cambridge University Press. 1st Edition (2008).
* (28) Vinh, N., Epps, J. & Bailey, J. Information theoretic measures for clusterings comparison: is a correction for chance necessary? Proceedings of the 26th Annual International Conference on Machine Learning. 1073-1080 (2009).
* (29) Strehl, A. & Ghosh, J. Cluster ensembles – a knowledge reuse framework for combining multiple partitions. J. Mach. Learn. Res. 3, 1532-4435 (2003).
* (30) Mukherjee, A., Choudhury, M., Basu, A. & Ganguly, N. Modeling the co-occurrence principles of the consonant inventories: a complex network approach. International Journal of Modern Physics C. World Scientific Publishing Company. 18, 281-295 (2008).
## Acknowledgement
This work has been supported by the Google India PhD fellowship Grant in
Social Computing, the College of IS&T at the University of Nebraska at Omaha (UNO) and
the FIRE and GRACA grants from the Office of Research and Creative Activity at
UNO.
## Author contributions
T.C., S.B., N.G., A.M. designed research; T.C., S.S., S.B., N.G., A.M.
performed research; T.C., S.S., S.B., N.G., A.M. contributed new
reagents/analytic tools; T.C., S.S., S.B., N.G., A.M. analyzed data and T.C.,
S.B., N.G., A.M. wrote the paper.
## Additional information
Supplementary information accompanies this paper.
Competing financial interests: The authors declare no competing financial
interests.
Supplementary Information
## Definitions, formulae and notations
This section contains the definition of some of the terms used in the main
text. Most of the networks considered here are undirected, unweighted and
connected graphs, $G(V,E)$, where $V$ is the set of vertices and $E$ is the
set of edges. An edge $e\in E$ is associated with two vertices $\\{u,v\\}$, which
are called its endpoints. A vertex $u$ is a neighbor of $v$ if they are joined
by an edge. $N(v)$ is the set of neighbors of vertex $v$ and the degree of
$v$, $degree(v)$, is equal to $|N(v)|$, the cardinality of the set of its
neighbors.
### .1 Clustering coefficient
Clustering coefficient measures the propensity of the network to form
clusters. The local clustering coefficient of a vertex $v$ is computed as the
ratio of the number of edges between the neighbors of $v$ to the total possible
number of connections between those neighbors, as follows:
$C(v)=\frac{2\times|e_{ij}|}{|N(v)|\times(|N(v)|-1)};\ \ i,j\in N(v)$ (2)
where $N(v)$ is the set of neighbors of $v$, $e_{ij}$ is the set of edges
between the neighbors of $v$ and $C(v)$ is the clustering coefficient of the
vertex $v$.
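As a small illustration, Eq. (2) can be computed directly from an adjacency dict; the function below is a sketch with illustrative names.

```python
def local_clustering(adj, v):
    """C(v) of Eq. (2): fraction of realized edges among the neighbors of v."""
    nbrs = list(adj[v])
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for i in range(k) for j in range(i + 1, k) if nbrs[j] in adj[nbrs[i]])
    return 2.0 * links / (k * (k - 1))
```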
### .2 Modularity of a network
Newman and Girvan ng2002 (1) proposed a metric called modularity that can
judge the goodness of a community detection method. It is based on the concept
that random networks do not form strong communities. Given a partition of a
network into $M$ groups, let $C_{ij}$ represent the fraction of total links
starting at a node in group $i$ and ending at a node in group $j$. Let
$a_{i}=\sum_{j}C_{ij}$ be the fraction of link ends attached to group $i$.
Under random connections, the probability that a link begins at a node in $i$
is $a_{i}$, and the probability that it ends at a node in $j$ is $a_{j}$. Thus,
the expected fraction of within-community links of group
$i$ (i.e., links between nodes in group $i$) is $a^{2}_{i}$. The actual
fraction of links within each group $i$ is $C_{ii}$. Therefore, a comparison
of the actual and expected values, summed over all the partitions gives us the
modularity, which is the deviation of the partitions from the perfectly random
case: $Q=\sum(C_{ii}-a^{2}_{i})$. Generally, the higher the modularity, the
better is the estimation of the correct community structure in the network.
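A direct transcription of $Q=\sum(C_{ii}-a^{2}_{i})$ into Python might look as follows, assuming an undirected graph stored as a symmetric adjacency dict; names are illustrative.

```python
from collections import defaultdict

def modularity(adj, community):
    """Q = sum_i (C_ii - a_i^2). adj is a symmetric adjacency dict, so every
    undirected edge is seen twice, once from each endpoint."""
    m2 = sum(len(nbrs) for nbrs in adj.values())          # 2 * |E|
    C = defaultdict(float)                                # C_ij, fraction of link ends
    a = defaultdict(float)                                # a_i = sum_j C_ij
    for u, nbrs in adj.items():
        for v in nbrs:
            C[(community[u], community[v])] += 1.0 / m2
            a[community[u]] += 1.0 / m2
    return sum(C[(c, c)] - a[c] ** 2 for c in a)
```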
### .3 Normalized mutual information (NMI)
The problem of comparing different community detection approaches can be
reduced to comparing how good the partitions produced by each of the
approaches are when compared against the ground-truth. One way to test this
goodness would be to compute the Normalized Mutual Information (NMI) nmi (4,
5). Let $C$ be the confusion matrix. Also let $N_{ij}$ (elements of the
confusion matrix $C$) be the number of nodes in the intersection of the
original community $i$ and the generated community $j$. If $C_{A}$ denotes the
number of the communities in the ground truth, $C_{B}$ the number of the
generated communities by an approach, $N_{i}$ the sum of row $i$, $N_{j}$ the
sum of column $j$, and $N$ the sum of all elements in $C$, then the NMI score
between the ground truth partition $A$, and the generated partition $B$ can be
computed as shown in the following equation.
$NMI(A,B)=\frac{-2\sum\limits_{i=1}^{C_{A}}\sum\limits_{j=1}^{C_{B}}N_{ij}log\frac{N_{ij}N}{N_{i}N_{j}}}{\sum\limits_{i=1}^{C_{A}}{N_{i}}log\frac{N_{i}}{N}+\sum\limits_{j=1}^{C_{B}}{N_{j}}log\frac{N_{j}}{N}}$
(3)
The values of NMI range between 0 and 1 where 0 refers to no match with the
ground truth and 1 refers to a perfect match.
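Equation (3) can be computed directly from two partitions; the sketch below assumes the partitions are given as dicts from vertex to community label and uses natural logarithms (the base cancels in the ratio).

```python
import math
from collections import Counter

def nmi(part_a, part_b):
    """NMI of Eq. (3); part_a and part_b map each vertex to a community label."""
    n = len(part_a)
    N = Counter((part_a[v], part_b[v]) for v in part_a)   # confusion matrix N_ij
    Ni = Counter(part_a.values())                          # row sums
    Nj = Counter(part_b.values())                          # column sums
    num = -2.0 * sum(nij * math.log(nij * n / (Ni[i] * Nj[j]))
                     for (i, j), nij in N.items())
    den = (sum(x * math.log(x / n) for x in Ni.values())
           + sum(x * math.log(x / n) for x in Nj.values()))
    return num / den if den else 1.0   # both partitions trivial: treat as perfect match
```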
All the notations that are used in the paper are tabulated in Table 3.
Table 3: Notations used and their descriptions
Notation | Name | Description/Functionality
---|---|---
$\mu$ | Mixing parameter of the LFR graph | The ratio of the external connections of a node to its total degree
$\phi$ | Sensitivity of a network | The number of constant communities to the total number of vertices
$\xi$ | Relative size of the constant community | Number of vertices in the constant community to the total number of vertices in the network
$D(v)$ | Degree | Degree of vertex $v$
$IN(v)$ | Internal neighbor | Neighbors of vertex $v$ internal to the community of $v$
$EN(v)$ | External neighbor | Neighbors of vertex $v$ outside the community of $v$
$ENG(v)$ | External neighbor group of $v$ | A set of elements each of which represents the number of external neighbors of $v$ distributed among the communities other than that of $v$
$DENG(G)$ | Distinct external neighbor group of the graph $G$ | The union set of the $ENG$ of all vertices in the network
$Rank_{i}(ENG(v))$ | Rank of the $i$th entry of the $ENG(v)$ | Rank of the $i$th entry of the $ENG(v)$ obtained by sorting the elements of the set $DENG(G)$ in ascending order
$\Theta$ | Strength of a constant community | Ratio of the edges internal to the constant community to the edges external to the constant community
$\theta(v)$ | Internal strength of vertex $v$ | Ratio of the number of internal neighbors to the number of external neighbors of $v$
$\Omega(v)$ | Relative permanence of vertex $v$ | The propensity of the vertex $v$ to stay in a single community despite any vertex perturbations or different algorithms used
$m_{q}$ | Mean modularity | Average of the modularity values obtained from the different permutations of the input sequences
$\sigma_{q}$ | Variance of the modularity | Variance of the modularity values obtained from the different permutations of the input sequences
$C(v)$ | Clustering coefficient of a vertex | Clustering coefficient of a vertex
$\tilde{C}$ | Avg. clustering coefficient of a network | Average clustering coefficient of the network obtained by averaging the clustering coefficient of all vertices
$k$ | Degree of a vertex | Degree of a vertex
$P(k)$ | Cumulative degree distribution | Fraction of vertices having degree greater than or equal to $k$
$H(f_{i},f_{j})$ | Hamming distance between binary vectors $f_{i}$ and $f_{j}$ | The number of positions at which the corresponding symbols of the two equal-length binary vectors differ
## Comparing properties of the real-world networks
The results in this section demonstrate that the real-world networks in our
test suite possess characteristics such as power-law degree distribution and
high average clustering coefficient. However, we also see that when comparing
with the data in Figure 2 and Figure 3 in the main document, the above
characteristics do not necessarily guarantee that the network has strong
community structure.
### .4 Degree distribution
An important characteristic of many real-world networks is that they exhibit
power-law degree distribution bara (2). That is, if the fraction of nodes
having degree greater than or equal to $k$ is $P(k)$, then $P(k)\approx
ck^{-\gamma}$, where $c$ is a constant and the value of $\gamma$ generally lies
in the range $2\leq\gamma\leq 3$. Figure S1 shows that all the networks in our
test-suite exhibit power law distribution; however not all of them are found
to possess constant communities (see Figures 2 and 3 in the main document).
Moreover, the slope of the curve does not provide an indication of the
presence of constant communities. For example, Email and Polbook have nearly
similar slopes, but Email does not exhibit any constant communities, while
Polbook has about three large constant communities.
Figure 7: [S1] Cumulative degree distributions of the real-world networks.
The networks, regardless of the number of constant communities present,
exhibit power-law degree distribution.
### .5 Average clustering coefficient
We computed the average clustering coefficient for a network with $n$ vertices
as $\tilde{C}=(1/n)\times\sum_{i=1}^{i=n}C(i)$. We created the random graphs
using the Erdos-Renyi graph er (3) generator in MatlabBGL, with the probability
of connection between nodes chosen such that the expected number of edges is
close to that of the original network. Table 4 compares the clustering coefficients
obtained from the original graphs and the corresponding Erdos-Renyi (ER) graphs.
The values indicate that the networks in the set are indeed more densely
packed than the random graphs.
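For reference, the ER baseline used for this comparison can be reproduced approximately with networkx (the paper used the MatlabBGL generator); this sketch draws a single random instance rather than averaging over many, and the function name is illustrative.

```python
import networkx as nx

def er_clustering_baseline(G, seed=0):
    """Average clustering of G versus an Erdos-Renyi graph G(n, p) whose expected
    number of edges matches that of G (cf. Table 4)."""
    n, m = G.number_of_nodes(), G.number_of_edges()
    p = 2.0 * m / (n * (n - 1))
    R = nx.erdos_renyi_graph(n, p, seed=seed)
    return nx.average_clustering(G), nx.average_clustering(R)
```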
Table 4: Average clustering coefficients of the real-world networks. The values are higher than those obtained from a random network of nearly the same size.
Name | $\tilde{C}$ (Original) | Vertices; Edges of ER | $\tilde{C}$ (ER)
---|---|---|---
Jazz | 0.6174 | 198; 2042 | 0.0407
Chesapeake | 0.4502 | 39; 340 | 0.2134
Dolphin | 0.2589 | 62 ; 156 | 0.0183
Football | 0.4032 | 115; 1226 | 0.1026
Polbooks | 0.4875 | 105; 426 | 0.0443
Celegans | 0.6464 | 453; 2048 | 0.0102
Email | 0.2201 | 1133; 5170 | 0.0021
Power | 0.0801 | 4941; 6386 | 4e-04
Table 5: Comparison between the constant communities obtained from the Louvain and CNM algorithms using NMI
Networks | Jazz | Chesapeake | Dolphin | Football | Polbooks | Celegans | Email | Power
---|---|---|---|---|---|---|---|---
NMI | 0.8856 | 0.8429 | 0.8663 | 0.8765 | 0.8950 | 0.9232 | 0.8103 | 0.8097
## Comparing constant communities obtained from two algorithms
The primary intuition behind constant communities is that these sub-modules are
invariant under any circumstance, i.e., across any ordering of the vertices or
any non-deterministic, optimized algorithm used to detect the community
structure from the network. We have judged the invariability of the structure
of the constant community for two algorithms – Louvain and CNM. The comparison
of the constant community structure for these two methods using NMI is
tabulated in Table 5. For all the cases, the NMI value is greater than 0.80,
which is considered reasonably high nmi_1 (5, 6), indicating a high overlap
between the partition structures of the constant communities detected by the
two different algorithms. This supports our initial claim that the
constant communities are nearly invariant across different community detection
algorithms.
## Feature overlaps of constant communities
We conduct constant community analysis of PhoNet and compute the average
hamming distance between the feature vectors of the constituent members of the
community. We report in Figure S2 the average hamming distance
($H(f_{i},f_{j})$, see Table 3) versus the size of the communities and compare
the results with randomly constructed same-sized groups of phonemes showing
that the constant communities of PhoNet are far from being arbitrary. In
addition, we observe that collapsing the constant communities produces
communities that are functionally diluted and at times could be quite relevant
for certain applications. Note that the larger the size, the smaller the
feature overlap, since a large group has a higher chance of admitting more
feature variations.
Figure 8: [S2] Feature overlap of constant communities, communities after
collapsing and random communities of different size. X-axis denotes the size
of the community and y-axis denotes the average pair-wise hamming distance of
the feature vectors. Figure 9: [S3] Schematic diagram of the proposed
algorithm (Algorithm 1) for modularity maximization using constant
communities.
## Modularity Maximization Using Constant Communities
We provide a schematic diagram of the Algorithm 1 in Figure S3. The process
consists of two steps: first, the pre-processing step for finding constant
communities, and then enhancing the performance of the community detection
algorithm using the detected constant communities. Initially, the vertices are
ordered according to their degrees (Line 2 in Algorithm 1). The permutations
of the vertices preserve this order, that is, if vertex $v_{i}$ is placed before
$v_{j}$ in the sequence then $degree(v_{i})\geq degree(v_{j})$ (Lines 3 and
4). We then compute the communities obtained for each permutation $i$ (Lines
7-11). The constant communities constitute those vertices which are always
assigned together (Lines 13-20).
The second step consists of collapsing the constant communities into a single
super-vertex (Lines 23-25). The edges from the super-vertex are weighted to
reflect the number of connections from that vertex to the rest of the network.
Self loops are also included to represent internal connections within the
constant communities. The network with the super-vertex is called the
collapsed network. We again permute the vertices according to their descending
degree (Line 26) and find the communities (Line 27). We then unfold the super-
vertices back to their constituent vertices (Line 28), and compute the
modularity on the network.
We compute the variance in the modularity values and the arithmetic mean, and
compare the results of the computation with and without using constant
communities in the pre-processing step. The results of Table I in the main
document show that pre-processing leads to higher modularity values on average
as well as less variance among the results.
Algorithm 1: Modularity Maximization Using Constant Communities
Input: A network (graph) $G=(V,E)$; Community Detection Algorithm $A$.
Output: Set of Constant Communities ${CC_{1}}$, …${CC_{k}}$; Modularity $Q$
1:procedure Finding Constant Communities
2: Sort vertices in $V$ in degree descending order
3: Apply degree preserving permutation $P$ to vertices such that
degree($v_{i}$) $\geq$ degree($v_{i+1}$) in $P$.
4: $|P|$ is number of degree preserving permutations applied.
5: Initialize array $Vertex[|V|][|P|]$ to -1 $\triangleright$
$Vertex[|V|][|P|]$ will store the community membership of the vertices in each
permutation
6: Set $i=0$ $\triangleright$ This variable indicates the permutation index
7: for all $P_{i}\in P$ do $\triangleright$ Detect community memberships of
the vertices in each permutation using $A$ and store them in $Vertex$
8: Apply algorithm $A$ to find the communities of the permuted network
$G_{P_{i}}$
9: if Vertex $v$ is in community $c$ then
10: $Vertex[v][i]=c$ $\triangleright$ Vertex $v$ in permutation $P_{i}$
belongs to community c after applying $A$ to $P_{i}$
11: end if
12: $i=i+1$
13: end for
14: Set $j=0$ $\triangleright$ This variable indicates the index of the
constant community
15: for all $v\in V$ do $\triangleright$ Detecting constant communities using
the community information stored in $Vertex$
16: if vertex $v$ is not in a constant community then
17: Create constant community $CC_{j}$
18: Insert $v$ into $CC_{j}$ $\triangleright$ All the $CC_{j}$'s are the
constant communities
19: for all $u\in V\setminus CC_{j}$ do
20: if $Vertex[v][i]=Vertex[u][i]$, $\forall$ $i=1$ to $|P|$ then
$\triangleright$ Check for the exact matching of community memberships of u
and v
21: Insert $u$ to $CC_{j}$
22: end if
23: end for
24: end if
25: $j=j+1$
26: end for
27:end procedure
28:procedure Computing Modularity
29: Set of constant communities in $CC$
30: for all $CC_{j}\in CC$ do $\triangleright$ Create intermediate small,
weighted network
31: Combine vertices in $CC_{j}$ into a super-vertex $X_{j}$
32: Replace edges from $X_{j}$ to another vertex $X_{i}$ by their aggregate
weight $\triangleright$ For the self-loop, i=j
33: end for
34: Sort vertices of collapsed network, $G^{\prime}$, in degree descending
order
35: Apply community detection method $A$
36: Unfold all $X_{j}$ in $G^{\prime}$ and compute the modularity $Q$
37:end procedure
## References
* (1) Newman, M.E.J. & Girvan, M. Finding and evaluating community structure in networks. Physical Review E. 69, 026113 (2002).
* (2) Barabasi, A.L. et al. Evolution of the social network of scientific collaborations. Physica. A. 311, 590614 (2002).
* (3) Gilbert, E.N. Random graphs. Annals of Mathematical Statistics. 30, 1141-1144 (1959).
* (4) Manning, C. D., Raghavan, P. & Schutze, H. Introduction to Information Retrieval. Cambridge University Press. 1st Edition (2008).
* (5) Vinh, N., Epps, J. & Bailey, J. Information theoretic measures for clusterings comparison: is a correction for chance necessary? Proceedings of the 26th Annual International Conference on Machine Learning. 1073-1080 (2009).
* (6) Strehl, A. & Ghosh, J. Cluster ensembles – a knowledge reuse framework for combining multiple partitions. J. Mach. Learn. Res. 3, 1532-4435 (2003).
* (7) Mukherjee, A., Choudhury, M., Basu, A. & Ganguly, N. Modeling the co-occurrence principles of the consonant inventories: a complex network approach. International Journal of Modern Physics C. World Scientific Publishing Company. 18, 281-295 (2008).
|
arxiv-papers
| 2013-02-23T13:02:20 |
2024-09-04T02:49:42.056337
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Tanmoy Chakraborty, Sriram Srinivasan, Niloy Ganguly, Sanjukta\n Bhowmick and Animesh Mukherjee",
"submitter": "Tanmoy Chakraborty",
"url": "https://arxiv.org/abs/1302.5794"
}
|
1302.5820
|
# An exact algorithm with the time complexity of $O^{*}(1.299^{m})$ for the
weighed mutually exclusive set cover problem
Songjian Lu and Xinghua Lu
###### Abstract
In this paper, we introduce an exact algorithm with a time complexity of
$O^{*}(1.299^{m})^{{\dagger}}$ for the weighted mutually exclusive set cover
problem, where $m$ is the number of subsets in the problem. This problem has
important applications in recognizing mutated genes that cause different
cancers.
††footnotetext: †Note: Following the recent convention, we use a star $*$ to
indicate that the polynomial part of the time complexity is neglected.
Department of Biomedical Informatics,
University of Pittsburgh, Pittsburgh, PA 15219, USA
Email: [email protected], [email protected]
## 1 Introduction
The set cover problem is the following: given a ground set $X$ of $n$ elements
and a collection ${\cal F}$ of $m$ subsets of $X$, find a minimum number of
subsets $S_{1},S_{2},\ldots,S_{h}$ in ${\cal F}$ such that
$\cup_{i=1}^{h}S_{i}=X$. If we add the additional constraint that all
subsets in the solution are pairwise disjoint, then the set cover problem
becomes the mutually exclusive set cover problem. If we further assign each
subset in ${\cal F}$ a real-number weight and search for the solution with the
minimum weight, i.e. the sum of the weights of the subsets in the solution is
minimized, then the problem becomes the weighted mutually exclusive set cover
problem.
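As a small illustration (not taken from the paper): let $X=\\{1,2,3,4\\}$ and ${\cal F}=\\{S_{1},S_{2},S_{3}\\}$ with $S_{1}=\\{1,2\\}$, $S_{2}=\\{3,4\\}$, $S_{3}=\\{2,3,4\\}$. Then $\\{S_{1},S_{3}\\}$ covers $X$ but is not mutually exclusive since $S_{1}\cap S_{3}=\\{2\\}$, while $\\{S_{1},S_{2}\\}$ is a mutually exclusive set cover; with weights $w(S_{1})=1$, $w(S_{2})=2$, $w(S_{3})=1$, it is also the minimum-weight one, with weight $3$.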
Recently, the weighted mutually exclusive set cover problem has found
important applications in cancer studies for identifying driver mutations [4, 12],
i.e. somatic mutations that cause cancers. Somatic mutations change
the structures (and therefore the functions) of signaling proteins and thus
perturb cancer pathways that regulate the expression of genes in certain
important biological processes, such as cell death and cell proliferation.
The perturbations within a common cancer pathway are often found to be
mutually exclusive in a single cancer cell, i.e. each tumor usually has only
one perturbation on a given cancer pathway (one perturbation is enough to
cause the disease; hence, there is no need to wait for another perturbation).
Modern lab techniques can identify the somatic mutations and gene expressions of
cancer cells. After preprocessing the data, we obtain the following
information for an important biological process, e.g. cell death: 1) which
cancer cells have disturbed expression of genes in the biological
process; 2) which genes have been mutated in those cancer cells; 3) how
likely each mutation is to be related to the given biological process (i.e. each
mutation is assigned a real-number weight). The next step is to find a set of
mutations such that each cancer cell has one and only one mutation in the
solution set (mutually exclusive) and the sum of the weights of all genes in the
solution set is minimized, which is exactly the weighted mutually exclusive set
cover problem.
While there has not been much research on the mutually exclusive set cover or the
weighted mutually exclusive set cover problems, the set cover problem has
received much attention. The set cover problem, which is equivalent to the
hitting set problem, is a fundamental NP-hard problem among Karp's 21
NP-complete problems [8]. One research direction for the set cover problem is
approximation algorithms, e.g. papers [1, 5, 9, 11] gave polynomial-time
approximation algorithms that find solutions whose sizes are at most $c\log n$
times the size of the optimal solution, where $c$ is a constant. A second
direction is to use $k$, the number of subsets in the solution, as the parameter
to design fixed-parameter tractable (FPT) algorithms for the equivalent problem,
the hitting set problem. Those algorithms have the constraint that each element
in $X$ is included in at most $d$ subsets in ${\cal F}$, i.e. the sizes of all
subsets in the hitting set problem are upper bounded by $d$; this is also called
the $d$-hitting set problem. For example, paper [13] gave an
$O^{*}(2.270^{k})$ algorithm for the $3$-hitting set problem, and paper [6]
further improved the time complexity to $O^{*}(2.179^{k})$. The third
direction is to design algorithms that use $n$ as the parameter under the
condition that $n$ is much less than $m$. Papers [2, 7] designed algorithms with
time complexities of $O^{*}(2^{n})$ for the problem. Paper [2] also extended
the algorithm to solve the weighted mutually exclusive set cover problem with
the same time complexity. Paper [10] improved the time complexity to
$O^{*}(2^{\frac{\log_{2}d}{1+\log_{2}d}n})$ under the condition that at least
$\frac{n}{1+\log_{2}n}$ elements in $X$ are included in at most $d$ subsets in
${\cal F}$. This algorithm can also be extended to the weighted mutually
exclusive set cover problem with the same time complexity. However, in the
application of cancer study, neither is $n$ less than $m$ nor is each element in
$X$ included in a bounded number of subsets in ${\cal F}$. Hence, there is a
need to design new algorithms.
In this paper, we design a new algorithm that uses $m$ as the parameter (in the
application to cancer study, $m$ is smaller than $n$, where $n$ can be as
large as several hundred). Trivially, using $m$ as the parameter, we can solve
the problem in $O^{*}(2^{m})$ time, where the algorithm basically just
tests every combination of subsets in ${\cal F}$. To the best of our knowledge,
no algorithm better than this trivial one is known when $m$ is used as the
parameter. This paper gives the first nontrivial algorithm, with a time
complexity of $O^{*}(1.299^{m})$, for the weighted mutually exclusive set cover
problem. We have tested this algorithm in the cancer study, and the program can
practically finish the computation when $m$ is less than 100.
## 2 The weighted mutually exclusive set cover problem is NP-hard
The formal definition of the weighted mutually exclusive set cover problem is:
given a ground set $X$ of $n$ elements, a collection ${\cal F}$ of $m$ subsets
of $X$, and a weight function $w:{\cal F}\rightarrow[0,\infty)$, if ${\cal
F^{\prime}}=\\{S_{1},S_{2},\ldots,S_{h}\\}\subset{\cal F}$ such that
$\cup_{i=1}^{h}S_{i}=X$, and $S_{i}\cap S_{j}=\emptyset$ for any $i\neq j$,
then we say ${\cal F^{\prime}}$ is a mutually exclusive set cover of $X$ and
$\sum_{i=1}^{h}w(S_{i})$ is the weight of ${\cal F^{\prime}}$; the goal of the
problem is to find a mutually exclusive set cover of $X$ with the minimum
weight, or report that no such solution exists.
As we have not found a proof of NP-hardness for the weighted mutually
exclusive set cover problem in the literature, in this section we prove that the
mutually exclusive set cover problem is NP-hard, which implies that the weighted
mutually exclusive set cover problem is NP-hard as well.
We will prove the NP-hardness of the mutually exclusive set cover problem by
reducing another NP-hard problem, the maximum set packing problem, to it.
Remember that the maximum set packing problem is: given a collection ${\cal
F}$ of subsets, try to find an ${\cal S}\subset{\cal F}$ such that subsets in
${\cal S}$ are pairwise disjoint and $|{\cal S}|$ is maximized.
###### Theorem 2.1
The mutually exclusive set cover problem is NP-hard.
Proof. Let ${\cal S}=\\{S_{1},S_{2},\ldots,S_{m}\\}$ be an instance of the
maximum set packing problem, where
$X^{\prime}=\cup_{i=1}^{m}S_{i}=\\{x_{1},x_{2},\ldots,x_{n}\\}$. We create an
instance of the mutually exclusive set cover problem such that:
* •
$X=X^{\prime}\cup\\{T_{1},T_{2},\ldots,T_{m}\\}$, where
$T_{i}=\\{t_{i1},t_{i2},\ldots,t_{i(n+1)}\\}$ for all $1\leq i\leq m$;
* •
${\cal F}={\cal F^{\prime}}\cup{\cal F^{\prime\prime}}\cup{\cal
F^{\prime\prime\prime}}$, where ${\cal
F^{\prime}}=\\{\\{x_{1}\\},\\{x_{2}\\},\ldots,\\{x_{n}\\}\\}$, ${\cal
F^{\prime\prime}}=\\{S_{1}\cup T_{1},S_{2}\cup T_{2},\ldots,S_{m}\cup
T_{m}\\}$, and ${\cal
F^{\prime\prime\prime}}=\cup_{i=1}^{m}\\{\\{t_{i1}\\},\\{t_{i2}\\},\ldots,\\{t_{i(n+1)}\\}\\}$.
Next, we will prove that if ${\cal P}$ is a solution of the mutually exclusive
set cover problem, then ${\cal
S^{\prime}}=\\{S^{\prime}_{1},S^{\prime}_{2},\ldots,S^{\prime}_{k}\\}$
is a solution of the maximum set packing problem, where ${\cal P}\cap{\cal
F^{\prime\prime}}=\\{S^{\prime}_{1}\cup T^{\prime}_{1},S^{\prime}_{2}\cup
T^{\prime}_{2},\ldots,S^{\prime}_{k}\cup T^{\prime}_{k}\\}$ (so $k$ is the
number of subsets of ${\cal F^{\prime\prime}}$ used by ${\cal P}$).
Thus we will prove that the time to solve the maximum set packing problem is
bounded by the total time of transforming the maximum set packing problem into
the mutually exclusive set cover, and of solving the mutually exclusive set
cover problem. Therefore, the mutually exclusive set cover problem is NP-hard.
As subsets in ${\cal P}$ are pairwise disjoint, it is obvious that subsets in
${\cal S^{\prime}}$ are pairwise disjoint. Hence, if we suppose that ${\cal
S^{\prime}}$ is not the solution of the maximum set packing problem, then
there must exist an ${\cal
S^{\prime\prime}}=\\{S^{\prime\prime}_{1},S^{\prime\prime}_{2},\ldots,S^{\prime\prime}_{k^{\prime}}\\}\subset{\cal
S}$ such that subsets in ${\cal S^{\prime\prime}}$ are pairwise disjoint and
$k^{\prime}>k$. Thus we can make a new solution ${\cal P^{\prime}}$ of the
mutually exclusive set cover problem such that ${\cal P^{\prime}}$ includes
$\\{S^{\prime\prime}_{1}\cup T^{\prime\prime}_{1},S^{\prime\prime}_{2}\cup
T^{\prime\prime}_{2},\ldots,S^{\prime\prime}_{k^{\prime}}\cup
T^{\prime\prime}_{k^{\prime}}\\}\subset{\cal F^{\prime\prime}}$ and other
subsets in ${\cal F^{\prime}}$ and ${\cal F^{\prime\prime\prime}}$. If we let
$|X^{\prime}-\cup_{i=1}^{k}S^{\prime}_{i}|=n_{1}$ and
$|X^{\prime}-\cup_{i=1}^{k^{\prime}}S^{\prime\prime}_{i}|=n_{2}$ (Note: any
$T_{i}$, which is not covered by a subset in ${\cal F^{\prime\prime}}$, needs
$n+1$ subsets in ${\cal F^{\prime\prime\prime}}$ to cover it; any $x_{i}\in
X^{\prime}$, which is not covered by a subset in ${\cal F^{\prime\prime}}$,
needs a subset in ${\cal F^{\prime}}$ to cover it), then
$|{\cal P}|=k+(m-k)(n+1)+n_{1},$
and
$|{\cal P^{\prime}}|=k^{\prime}+(m-k^{\prime})(n+1)+n_{2}.$
Therefore $|{\cal P}|-|{\cal P^{\prime}}|=(k^{\prime}-k)n+n_{1}-n_{2}>0$, i.e.
${\cal P^{\prime}}$ is a cover that uses fewer subsets of ${\cal F}$, which
contradicts the assumption that ${\cal P}$ is a (minimum) solution of the
mutually exclusive set cover problem. Hence, ${\cal S^{\prime}}$ is a solution
of the maximum set
packing problem.
## 3 The main Algorithm
In this section, we will introduce our new algorithm to solve the weighted
mutually exclusive set cover problem.
Let $(X,{\cal F},w)$ be an instance of the weighted mutually exclusive set
cover problem. We can use a bipartite graph to represent $(X,{\cal F},w)$ such
that the nodes on one side are the subsets in ${\cal F}$ while the nodes on the
other side are the elements in $X$, and if an element $u$ of $X$ is in subset
$U$, i.e. $u\in U$, then an edge is added between $u$ and $U$. For convenience,
let us introduce some notations; Figure 1 may help in understanding and
remembering them.
Figure 1: Graph representation and some notations of the problem
For any $x\in X$, let $neighbor(x)=\\{S|S\in{\cal F}\text{ and }x\in S\\}$,
$degree(x)=|neighbor(x)|$, $partner(x)=\cup_{S\in neighbor(x)}S$. For any $y$
in $partner(x)$, let $neighbor_{in}(y)=neighbor(y)\cap neighbor(x)$,
$degree_{in}(y)=|neighbor_{in}(y)|$, $neighbor_{out}(y)=neighbor(y)-neighbor(x)$,
and $degree_{out}(y)=|neighbor_{out}(y)|$.
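These notations translate directly into code; a small sketch, assuming ${\cal F}$ is given as a dict from subset name to a Python set (all names are illustrative):

```python
def neighbor(x, F):
    """neighbor(x): names of the subsets in F (a dict name -> set) containing x."""
    return {name for name, S in F.items() if x in S}

def partner(x, F):
    """partner(x): union of all subsets that contain x."""
    subs = [F[name] for name in neighbor(x, F)]
    return set().union(*subs) if subs else set()

def degrees_in_out(y, x, F):
    """(degree_in(y), degree_out(y)) for a y in partner(x)."""
    nb_x, nb_y = neighbor(x, F), neighbor(y, F)
    return len(nb_y & nb_x), len(nb_y - nb_x)
```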
Algorithm-1 WMES-Cover$((X,{\cal F},w),Solution_{partial},Solution_{final})$
---
Input: An instance of the weighted mutually exclusive set cover problem, two
variables,
| | | where $Solution_{final}$ is a global variable to keep the best solution.
Output: A minimum weight mutually exclusive set cover or “No Solution”.
1 | if $X==\emptyset$ then
1.1 | | if $weight(Solution_{partial})<weight(Solution_{final})$ then replace $Solution_{final}$ with $Solution_{partial}$;
2 | Find $x\in X$ such that $d=degree(x)$ is minimized;
3 | if $d==0$ then return “No Solution”;
4 | if $d==1$ then WMES-Cover$((X-\\{x\\},{\cal F}-neighbor(x),w),Solution_{partial}\cup neighbor(x),Solution_{final})$;
5 | if $degree_{out}(y)==0$ for all $y\in partner(x)$ then
5.1 | | if there exists $S\in neighbor(x)$ such that $S==partner(x)$ then
5.1.1 | | | WMES-Cover$((X-S,{\cal F}-neighbor(x),w),Solution_{partial}\cup\\{S\\},Solution_{final})$;
| | else
5.1.2 | | | return “No Solution”;
6 | if $d==2$ then // Suppose $neighbor(x)=\\{S_{1},S_{2}\\}$; note that $S_{1}\subset X$ and $S_{2}\subset X$.
6.1 | | WMES-Cover$((X-S_{1},{\cal F}-\cup_{u\in S_{1}}neighbor(u),w),Solution_{partial}\cup\\{S_{1}\\},Solution_{final})$;
6.2 | | WMES-Cover$((X-S_{2},{\cal F}-\cup_{u\in S_{2}}neighbor(u),w),Solution_{partial}\cup\\{S_{2}\\},Solution_{final})$;
| else // (Note: $d>2$)
6.3 | | if there exists a $y\in partner(x)$ such that $degree_{out}(y)=1$ then
6.3.1 | | | Let $y\in partner(x)$ such that $degree_{out}(y)=1$ and $W^{\prime}\in neighbor_{out}(y)$;
6.3.2 | | | if $|neighbor(x)-neighbor(y)|>0$ then // (Note: $|neighbor(x)-neighbor(y)|\leq 1$)
6.3.2.1 | | | | Find any $W\in neighbor(x)-neighbor(y)$;
6.3.2.2 | | | | WMES-Cover$((X-W^{\prime}\cup W,{\cal F}-\cup_{u\in W^{\prime}\cup W}neighbor(u),w),Solution_{partial}\cup\\{W^{\prime},W\\},Solution_{final})$;
6.3.2.3 | | | | WMES-Cover$((X,{\cal F}-\\{W^{\prime},W\\},w),Solution_{partial},Solution_{final})$;
| | | else
6.3.2.4 | | | | Find any $W\in neighbor(x)$;
6.3.2.5 | | | | WMES-Cover$((X-W,{\cal F}-\cup_{u\in W}neighbor(u),w),Solution_{partial}\cup\\{W\\},Solution_{final})$;
6.3.2.6 | | | | WMES-Cover$((X,{\cal F}-\\{W^{\prime},W\\},w),Solution_{partial},Solution_{final})$;
| | else
6.3.3 | | | Find a $y\in partner(x)$ such that $degree_{out}(y)$ is maximized;
6.3.4 | | | Find a $Z\in neighbor_{in}(y)$;
6.3.5 | | | WMES-Cover$((X-Z,{\cal F}-\cup_{u\in Z}neighbor(u),w),Solution_{partial}\cup\\{Z\\},Solution_{final})$;
6.3.6 | | | WMES-Cover$((X,{\cal F}-\\{Z\\},w),Solution_{partial},Solution_{final})$;
Figure 2: Algorithm for the weighted mutually exclusive set cover problem.
The main algorithm, Algorithm-1, is shown in Figure 2. Basically, Algorithm-1
first finds an $x\in X$ with minimum degree and then branches on one subset in
$neighbor(x)$ (such as in steps 6.2.2 and 6.2.3). For convenience, if
$degree(x)=d$, then we say that Algorithm-1 is doing a $d$-branch. Because of
steps 3, 4, and 5, when the program arrives at step 6, we
must have: 1) $d=degree(x)\geq 2$; 2) for any $u\in X$, $degree(u)\geq d$; 3)
there exists a $y\in partner(x)$ such that $degree_{out}(y)>0$.
Algorithm-1 basically searches for the solution by going through a search
tree; hence, once we know the number of leaves in the search tree, we obtain
the time complexity of Algorithm-1. Next, we estimate the number of leaves in
the search tree by studying the different cases of branching. We begin with the
$2$-branch.
###### Proposition 3.1
The search tree has at most $1.273^{m}$ leaves if only $2$-branches are applied
in Algorithm-1.
Proof. Suppose that $degree(x)=2$ and $y\in partner(x)$ such that
$degree_{out}(y)>0$. Let $neighbor(x)=\\{S_{1},S_{2}\\}$.
In the case of $degree_{out}(y)=1$, let
$neighbor_{out}(y)=\\{S^{\prime\prime}\\}$. In the branches of choosing either
$S_{1}$ or $S_{2}$ into the solution, if $y$ is covered, then
$S^{\prime\prime}$ will be removed from ${\cal F}$; otherwise, if $y$ is not
yet covered, then $S^{\prime\prime}$ will be chosen into the solution in order
to cover $y$ (note: after $S_{1},S_{2}$ are removed, $degree(y)=1$ in the new
instance (at line 6.1.1 and 6.1.2 of Algorithm-1); thus, $S^{\prime\prime}$
will be included into the solution in the next call of the Algorithm-1 in this
branch). Hence, in any case, $3$ subsets in ${\cal F}$ will be removed. If
we let $T(k)$ be the number of leaves in the search tree when $|{\cal F}|=k$,
then we obtain the following recurrence relation
$T(k)\leq 2T(k-3).\qquad\text{(1)}$
The characteristic equation of this recurrence relation is $r^{3}-2=0$‡; hence,
we have $T(m)<1.260^{m}$.
††footnotetext: ‡Note: Given a recurrence relation
$T(k)\leq\sum_{i=0}^{k-1}c_{i}T(i)$ such that all $c_{i}$ are nonnegative real
numbers, $\sum_{i=0}^{k-1}c_{i}>0$, and $T(0)$ represents the leaves, then
$T(k)\leq r^{k}$, where $r$ is the unique positive root of the characteristic
equation $t^{k}-\sum_{i=0}^{k-1}c_{i}t^{i}=0$ deduced from the recurrence
relation [3].
In the case of $degree_{out}(y)>1$, we consider following sub-cases.
Sub-case 1. Suppose $degree_{in}(y)=1$, and $y\in S_{1}$. Then at least
$S_{1}$ and $S_{2}$ will be removed from ${\cal F}$ for the branch of choosing
$S_{2}$ into the solution; at least $S_{1}$, $S_{2}$, and all subsets (at
least two) in $neighbor_{out}(y)$ will be removed for the branch of choosing
$S_{1}$ into the solution. Thus the recurrence relation of $T(k)$ is
$T(k)\leq T(k-2)+T(k-4).\qquad\text{(2)}$
which leads to $T(m)<1.273^{m}$.
Sub-case 2. Suppose $degree_{in}(y)=2$. Then in either branch, $y$ is covered
by $S_{1}$ or $S_{2}$, which is chosen into the solution. Hence,
$S_{1},S_{2}$, and all subsets (at least two) in $neighbor_{out}(y)$ will be
removed from ${\cal F}$. Thus we will obtain the recurrence relation
$T(k)\leq 2T(k-4).\qquad\text{(3)}$
which leads to $T(m)<1.190^{m}$.
By considering all the above cases, we obtain $T(m)\leq 1.273^{m}$.
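The bases quoted in this and the following propositions can be checked numerically by solving the corresponding characteristic equations; a small sketch (the recurrences are taken from the text, the bisection routine is illustrative):

```python
def branching_root(coeffs, degree):
    """Positive root of t^degree = sum of c * t^(degree - d) over (d, c) in coeffs,
    i.e. the base of the bound T(m) <= root^m for T(k) <= sum c * T(k - d)."""
    f = lambda r: r ** degree - sum(c * r ** (degree - d) for d, c in coeffs)
    lo, hi = 1.0, 2.0
    for _ in range(100):                      # bisection; f(1) < 0 < f(2) here
        mid = (lo + hi) / 2.0
        lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
    return hi

print(branching_root([(3, 2)], 3))            # T(k) <= 2T(k-3)        -> ~1.260
print(branching_root([(2, 1), (4, 1)], 4))    # T(k) <= T(k-2)+T(k-4)  -> ~1.273
print(branching_root([(4, 2)], 4))            # T(k) <= 2T(k-4)        -> ~1.190
print(branching_root([(5, 2), (3, 1)], 5))    # T(k) <= 2T(k-5)+T(k-3) -> ~1.299
```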
Now, we consider the case of doing a $3$-branch. Remember that when Algorithm-1
is doing a $3$-branch, $degree(x)\geq 3$ for all $x\in X$.
###### Proposition 3.2
The search tree has at most $1.299^{m}$ leaves if only the $d$-branches for
$d\leq 3$ are applied in Algorithm-1.
Proof. The cases of $2$-branches are considered in the last proposition. Now
we consider the cases of $3$-branches. Suppose that $degree(x)=3$ and $y\in
partner(x)$ such that $degree_{out}(y)>0$. Let
$neighbor(x)=\\{S_{1},S_{2},S_{3}\\}$.
If $degree_{out}(y)=1$, then $degree_{in}(y)\geq 2$ (as $degree(y)\geq 3$).
Let $\\{S^{\prime}\\}=neighbor_{out}(y)$. We further consider the following sub-cases.
Sub-case 1. Suppose $degree_{in}(y)=2$. Let $S_{1}\in
neighbor(x)-neighbor(y)$. Algorithm-1 branches at $S_{1}$. Branch one
includes $S_{1}$ into the solution; thus, $S_{2},S_{3}$ will be removed. This
will further make $degree(y)=1$. Hence, $S^{\prime}$ will also be included
into the solution. Totally, in this branch, we will remove at least $4$
subsets from ${\cal F}$. In branch two, we will exclude $S_{1}$ from the
solution. Then either $S_{2}$ or $S_{3}$ must be included into the solution.
Thus $y$ is covered by $S_{2}$ or $S_{3}$, and $S^{\prime}$ will not be in the
solution. Therefore, in this branch, we know that at least $S_{1}$ and
$S^{\prime}$ will be removed. So we will obtain the recurrence relation
$T(k)\leq T(k-2)+T(k-4),\qquad\text{(4)}$
which leads to $T(m)<1.273^{m}$.
Sub-case 2. Suppose $degree_{in}(y)=3$. Then $S^{\prime}$ will not be in the
solution, and any one of $S_{1},S_{2},S_{3}$ (one and only one of them must be
included into the solution to cover $x$) will cover $y$. Algorithm-1 will
branch at any one of $S_{1},S_{2},S_{3}$. Without loss of generality, we
branch at $S_{1}$. In the branch of including $S_{1}$ into the solution,
$S_{1},S_{2},S_{3}$ will be removed, which will totally remove at least $4$
subsets. In the branch of excluding $S_{1}$ from the solution, $S_{1}$ will be
removed, and $S^{\prime}$ will also be removed once $y$ is covered by one of
$S_{2},S_{3}$. Thus $2$ subsets will be removed. We will obtain the following
recurrence relation
$T(k)\leq T(k-2)+T(k-4),\qquad\text{(5)}$
which leads to $T(m)<1.273^{m}$.
In the case of $degree_{out}(y)>1$, let $S_{1}\in neighbor_{in}(y)$.
Algorithm-1 branches at $S_{1}$. In the first branch, $S_{1}$ is included into
the solution. Then $S_{1},S_{2},S_{3}$ and at least $2$ subsets in
$neighbor_{out}(y)$ will be removed. In the second branch, $S_{1}$ is
excluded, which will make $degree(x)=2$ in the new instance; hence, in this
branch, a $2$-branch will follow. Thus even considering the worst case of the
$2$-branch (the recurrence relation (2)), we will have
$T(k)\leq 2T(k-5)+T(k-3),\qquad\text{(6)}$
which will lead to $T(m)\leq 1.299^{m}$.
From all above cases and Proposition 3.1, we will have $T(m)\leq 1.299^{m}$.
Let us consider the case of a $d$-branch for $d>3$.
###### Proposition 3.3
The search tree in Algorithm-1 has at most $1.299^{m}$ leaves.
Proof. We only need to consider the cases of $d$-branches for $d>3$. Suppose
that $degree(x)=d$ and $y\in partner(x)$ such that $degree_{out}(y)>0$. Let
$neighbor(x)=\\{S_{1},S_{2},\ldots,S_{d}\\}$.
In the case of $degree_{out}(y)=1$, $degree_{in}(y)$ can only be $d-1$ or $d$.
Sub-case 1. Suppose $degree_{in}(y)=d-1$. Then there is one and only one
subset in $neighbor(x)-neighbor_{in}(y)$. Without loss of generality, we
suppose $S_{1}\not\in neighbor_{in}(y)$. Algorithm-1 will branch on $S_{1}$
such that in the branch of including $S_{1}$ into the solution, all $d$
subsets in $neighbor(x)$ and the one subset in $neighbor_{out}(y)$ will be
removed (i.e., in this branch, at least $5$ subsets will be removed); in the
branch of excluding $S_{1}$ from the solution, one subset in
$\\{S_{2},S_{3},\ldots,S_{d}\\}$ will be included into the solution, so $y$
will be covered and the only subset in $neighbor_{out}(y)$ will be removed
(i.e., in this branch, two subsets will be removed). Therefore, we will have
the following recurrence relation
$T(k)\leq T(k-5)+T(k-2),\qquad\text{(7)}$
which leads to $T(m)<1.237^{m}$.
Sub-case 2. Suppose $degree_{in}(y)=d$. Without loss of generality, we suppose
that Algorithm-1 branches on $S_{1}$. Then it is easy to see that we
will have the following recurrence relation
$T(k)\leq T(k-5)+T(k-2),\qquad\text{(8)}$
which leads to $T(m)<1.237^{m}$.
In the case of $degree_{out}(y)>1$, suppose $S_{1}\in neighbor_{in}(y)$ and
Algorithm-1 branches on $S_{1}$. Then in the branch of including $S_{1}$ into
the solution, all subsets in $neighbor(x)$ and $neighbor_{out}(y)$ will be
removed (at least $6$ subsets will be removed). In the branch of excluding
$S_{1}$ from the solution, at least one subset, namely $S_{1}$, will be removed. Hence,
we will have the recurrence relation
$T(k)\leq T(k-6)+T(k-1),\qquad\text{(9)}$
which leads to $T(m)<1.286^{m}$.
Considering all above cases, Proposition 3.1, and Proposition 3.2, we have
$T(m)\leq 1.299^{m}$.
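For reference, the branching factors quoted in Propositions 3.1-3.3 can be double-checked numerically. The sketch below (a cross-check of ours, assuming NumPy is available, not part of the original proofs) computes the largest positive root of each characteristic polynomial and confirms that none of them exceeds $1.299$:

```python
import numpy as np

def branch_factor(poly):
    """Largest positive real root of a characteristic polynomial,
    given as a coefficient list in numpy.roots convention."""
    roots = np.roots(poly)
    return max(r.real for r in roots if abs(r.imag) < 1e-9 and r.real > 0)

# characteristic polynomials of recurrences (1)-(9)
cases = {
    "(1): t^3 - 2":               [1, 0, 0, -2],
    "(2),(4),(5): t^4 - t^2 - 1": [1, 0, -1, 0, -1],
    "(3): t^4 - 2":               [1, 0, 0, 0, -2],
    "(6): t^5 - t^2 - 2":         [1, 0, 0, -1, 0, -2],
    "(7),(8): t^5 - t^3 - 1":     [1, 0, -1, 0, 0, -1],
    "(9): t^6 - t^5 - 1":         [1, -1, 0, 0, 0, 0, -1],
}
for name, poly in cases.items():
    print(name, round(branch_factor(poly), 4))   # all values stay below 1.299
```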
###### Theorem 3.4
The weighted mutually exclusive set cover problem can be solved by an
algorithm with a time complexity of $O^{*}(1.299^{m})$.
Proof. Let $({\cal F},X,w)$ be an instance of the weighted mutually exclusive
set cover problem, where $X$ is a ground set of $n$ elements, $\cal{F}$ is a
collection of $m$ subsets of $X$, and $w:{\cal F}\rightarrow[0,\infty)$ is the
weight function. Now we prove that the problem can be solved by the
Algorithm-1 in time $O^{*}(1.299^{m})$.
The correctness of the algorithm is easy to understand. If there is an $x\in
X$ such that $degree(x)=0$, then $x$ cannot be covered by any subset in ${\cal
F}$. Thus, the problem has no solution. Step 3 of Algorithm-1 deals
with this situation. If $degree(x)=1$ for some $x\in X$, then there
exists one and only one subset in ${\cal F}$ that covers $x$, i.e.
$neighbor(x)$ must be included into the solution. Thus $x$ and $neighbor(x)$
will be removed from the problem. This situation is dealt with in step 4. If
for all $y$ in $partner(x)$, $degree_{out}(y)=0$, then $partner(x)$ can only
be covered by subset(s) in $neighbor(x)$. By the exclusivity, at most one
subset in $neighbor(x)$ can be chosen into the solution. Thus, if there is a
subset $S$ in $neighbor(x)$ such that $S=partner(x)$, then Algorithm-1 will
include $S$ into the solution; otherwise the problem has no solution. Step 5
of Algorithm-1 deals with this situation.
After the Algorithm-1 reaches step 6, we have: 1) for all $x^{\prime}\in X$,
$degree(x^{\prime})\geq degree(x)>1$ (as $x$ is the element in $X$ with the
minimum degree); 2) there is a $y\in partner(x)$ such that
$degree_{out}(y)>0$. If $d=degree(x)=2$, then one and only one subset in
$neighbor(x)$ will be in the solution. Steps 6.1 and 6.2 correctly deal
with this situation. For the cases after step 6.2, Algorithm-1 basically
chooses one subset $S$ in $neighbor(x)$ and branches on $S$ such that one
branch includes $S$ into the solution and the other branch excludes $S$ from
the solution (note: when $degree_{out}(y)=1$, we used a small trick to include
or exclude the additional subset in $neighbor_{out}(y)$ into or from the
solution; please refer to sub-case 1 and sub-case 2 in Proposition 3.3).
Therefore, Algorithm-1 will go through the search tree and find the solution
with the minimum weight (if the solution exists), which is saved in step 1.1.
By Proposition 3.3, the search tree has at most $1.299^{m}$ leaves. Hence, the
time complexity of the algorithm is bounded by $O^{*}(1.299^{m})$. If we
further notice that the time to process each node is bounded by $O(mn)$, then
the more accurate time complexity of the algorithm is $O(1.299^{m}mn)$.
## 4 Problem extension
In this paper, we first proved that the weighted mutually exclusive set cover
problem is NP-hard. Then we designed the first non-trivial algorithm for the
problem, which uses $m$ as the parameter and has a time complexity of
$O^{*}(1.299^{m})$. The weighted mutually exclusive set cover problem has been
used to find the driver mutations in cancers [4, 12]. Our new algorithm can
find the optimal solution for the problem, which is better than the solutions
found by the heuristic algorithms in the previous research [4, 12]. Exclusivity
is the extreme case: in practical applications, a cancer cell may have more
than one mutation that perturbs a common pathway. Hence, a modified model is to
find a set of mutations with minimum weight sum such that each cancer cell has
at least one and at most $t$ ($t=2$ or $3$) mutations in the solution, which
leads to the small overlapped set cover problem. Also, in applications, some
mutations in cancer cells may not be detected because of errors. Thus, it is
not always ideal to find a set of mutations that covers all cancer cells. A
modified model is to find a set of mutually exclusive mutations that covers at
least $r$ percent ($90\%$ or $95\%$) of the cancer cells, which leads to the
maximal set cover problem. Our next research will design efficient algorithms
for the above two new problems.
## References
* [1] N. Alon, D. Moshkovitz, and S. Safra, Algorithmic Construction of Sets for $k$-Restrictions, ACM Transaction on Algorithms, 2(2), pp. 153-177, 2006.
* [2] A. Björklund, T. Husfeldt, M. Koivisto, Set Partitioning via Inclusion-Exclusion, SIAM Journal on Computing, Special Issue for FOCS 2006.
* [3] J. Chen, I. Kanj, and W. Jia, Vertex Cover: Further Observations and Further Improvements, Journal of Algorithms, 41, pp. 280-301, 2001.
* [4] G. Ciriello, E. Cerami, C. Sander, N. Schultz, Mutual exclusivity analysis identifies oncogenic network modules, Genome Research, 22(2), pp. 398-406, 2012.
* [5] U. Feige, A Threshold of $\ln n$ for Approximation Set Cover, J. of the ACM, 45(4), pp. 634-652, 1998.
* [6] H. Fernau, A top-down approach to search-trees: Improved algorithmics for $3$-Hitting Set, Algorithmica, 57, pp. 97-118, 2010.
* [7] Q. Hua, Y. Wang, D. Yu, F. Lau, Dynamic programming based algorithms for set multicover and multiset multicover problem. Theoretical Computer Science V411, pp. 2467-2474, 2010.
* [8] R. Karp, Reducibility Among Combinatorial Problems, In R. E. Miller and J. W. Thatcher (editors). Complexity of Computer Computations. New York: Plenum, pp. 85-103, 1972.
* [9] S. Kolliopoulos, N. Young, Approximation algorithms for covering/packing integer programs. J. Comput. Syst. Sci. 71(4), pp.495-505, 2005.
* [10] S. Lu, X. Lu, A graph model and an exact algorithm for finding transcription factor modules, 2nd ACM Conference on Bioinformatics, Computational Biology and Biomedicine, pp. 355-359, 2011.
* [11] C. Lund, and M. Yannakakis, On the Hardness of Approximating Minimization Problem, J. of the Association for Computing Machinery, 45(5), pp. 960-981, 1994.
* [12] C. Miller, S. Settle, E. Sulman, K. Aldape, A. Milosavljevic, Discovering functional modules by identifying recurrent and mutually exclusive mutational patterns in tumors, BMC Medical Genomics, 4, pp. 34, 2011.
* [13] R. Niedermeier and P. Rossmanith, An Efficient Fixed-parameter Algorithm for 3-Hitting Set, J. of Discrete Algorithms, 1(1), pp. 89-102, 2003.
|
arxiv-papers
| 2013-02-23T15:55:48 |
2024-09-04T02:49:42.065163
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Songjian Lu and Xinghua Lu",
"submitter": "Songjian Lu",
"url": "https://arxiv.org/abs/1302.5820"
}
|
1302.5847
|
# Characterizing Branching Processes from Sampled Data
Fabricio Murai, Bruno Ribeiro, and Don Towsley (School of Computer Science, University of Massachusetts Amherst, MA 01003) and Krista Gile (Department of Mathematics and Statistics, University of Massachusetts Amherst, MA 01003)
{fabricio, ribeiro, towsley}@cs.umass.edu, [email protected]
###### Abstract
Branching processes model the evolution of populations of agents that randomly
generate offspring. These processes, most notably Galton-Watson processes,
are widely used to model biological, social, cognitive, and technological
phenomena, such as the diffusion of ideas, knowledge, chain letters, viruses,
and the evolution of humans through their Y-chromosome DNA or mitochondrial
RNA. A practical challenge of modeling real phenomena using a Galton-Watson
process is knowing the offspring distribution, which must be measured from the
population. In most cases, however, directly measuring the offspring
distribution is unrealistic due to lack of resources or the death of agents.
So far, researchers have relied on informed guesses to guide their choice of
offspring distribution. In this work we propose two methods to estimate the
offspring distribution from real sampled data. Using a small sampled fraction
of the agents, each instrumented with the identities of its ancestors, we show
that accurate offspring distribution estimates can be
obtained by sampling as little as 14% of the population.
## 1 Introduction
Branching processes, most notably Galton-Watson (GW) processes, have been
used to model a variety of phenomena, ranging from human Y-chromosome DNA and
mitochondrial RNA evolution [5], to epidemics on complex networks [6], to
block dissemination in peer-to-peer networks [8]. The GW process can be
represented as a growing tree, where agents are nodes connected to their
offspring by edges. The number of offspring is a random variable associated
with a distribution function. An example of a GW branching process is a family
tree considering either only the females or only the males in the family
(which represent the transmission of mitochondrial RNA or Y-chromosome DNA,
respectively). A GW process is completely characterized by its offspring
distribution. A practical challenge when modeling real world systems from a GW
process is knowing the offspring distribution of the process, which must be
measured from the population.
In most applications, however, directly measuring the offspring distribution
is unrealistic due to the lack of resources or the inaccessibility of agents
(e.g. death). It is not reasonable to assume that one can collect genetic
material from the entire human population or that in the branching process of
chain letter signatures (see Chierichetti et al. [3] for further details), one
may collect all possible branches of the chain letter created by forwarding
the letter. So far, researchers have relied on informed guesses to guide their
choice of offspring distribution.
In this work we propose a collection of methods to estimate the offspring
distribution from real sampled data. Our goal is to accurately estimate the
offspring distribution by sampling and collecting ancestors ids of a small
fraction of the agents. We study the case where a sampled agent reveals the
identity of its ancestors and the trees are generated in the supercritical
regime (i.e., average offspring > 1) when the maximum offspring and the
maximum tree height are upperbounded by a (possibly large) constant. We show
that accurate offspring distribution estimates can be obtained by sampling as
little as 14% of the population.
A related problem is characterizing graphs using traceroute sampling.
Traceroute sampling from a single source can be thought of as sampling a tree
where nodes have different offspring (degree) distributions depending on their
position with respect to the source. This is an important, well-known hard
problem [4, 1] that remains open to date. Our results have the added benefit
of shedding some light on the traceroute problem as well.
The outline of this work is as follows. Section 2 describes the network and
sampling models. In Section 3 we first show how to estimate the offspring
distribution through exact inference, showing it does not scale. We then
propose an MCMC method of performing approximate inference that works for
small and medium sized trees (up to 2,000 nodes). In Section 4 we evaluate
both methods using a set of 900 synthetic datasets, comprising small and
medium trees. For small trees, exact inference yields accurate estimates and
outperforms the approximate estimator. On the other hand, approximate
inference can handle larger trees, while obtaining significant improvement
over more naïve approaches. Finally Section 5 presents our conclusions and
future work.
## 2 Model
We assume that the underlying tree comes from a Galton-Watson (GW) process.
The GW process models the growth of a population of individuals that evolves
in discrete-time ($n=0,1,2,\dots$) as follows. The population starts with one
individual at the $0$-th generation ($n=0$). Each individual $i$ at the $n$-th
generation produces a random number of individuals at the $(n+1)$st
generation, called offspring. The offspring counts of all individuals are
assumed to be i.i.d. random variables. An instance of the GW process is
therefore described by a sequence of integers which denote the number of
individuals at each generation.
Formally, the GW process is a discrete-time Markov Chain
$\\{X_{n}\\}_{n=1}^{L}$, where $L$ is the number of generations, given by the
following recursion
$X_{n+1}=\sum_{i=1}^{X_{n}}Y_{i}^{(n)}\,,$
with $X_{0}=1$, where the $Y_{i}^{(n)}\geq 0$ are i.i.d. random variables with
distribution ${\boldsymbol{\theta}}=(\theta_{0},\dots,\theta_{W})$, $\forall
i,n\geq 1$, where $W$ is the maximum number of offspring of an agent. The GW
process can be seen as a generative process of a tree $G=(V,E)$, where $X_{n}$
is the number of nodes at the $n$th generation and $Y_{i}^{(n)}$ is the
offspring count of the $i$th node at the $n$th generation. For simplicity, we
assume that $\theta_{0}=0$ and that the number of generations is fixed, so
that all tree leaves sit exactly at generation $L$. Our results, however, can
be easily adapted to the case where $\theta_{0}>0$ and the leaves have
different levels. But the above assumptions lead to a simpler model in the
sense that we can have average offspring greater than one without worrying
about infinite trees.
Since the numbers of offspring are mutually independent, the probability of a
given tree $G$ is
$P(G|{\boldsymbol{\theta}})=\prod_{j=1}^{W}\theta_{j}^{c_{j}}\,,$ (1)
where $c_{j}=\sum_{i,n}\mathbf{1}\\{Y_{i}^{(n)}=j\\}$ is the number of nodes
with offspring count $j$. Fig. 1a depicts an example of tree generated from
${\boldsymbol{\theta}}=(0.3,0.6,0.1)$ with $L=3$. In this case,
$P(G|{\boldsymbol{\theta}})=0.3^{1}\cdot 0.6^{2}=0.108$.
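To make the generative model concrete, here is a minimal Python sketch (the nested-list representation and function names are ours, not from the paper) that grows a GW tree under the stated assumptions ($\theta_{0}=0$, all leaves at generation $L$) and evaluates Eq. (1):

```python
import random

def sample_gw_tree(theta, L):
    """Generate a GW tree as nested lists, assuming theta = (theta_1, ..., theta_W)
    with theta_0 = 0; a leaf at the last generation is the empty list."""
    if L == 0:
        return []
    k = random.choices(range(1, len(theta) + 1), weights=theta)[0]
    return [sample_gw_tree(theta, L - 1) for _ in range(k)]

def offspring_counts(tree, W):
    """c_j = number of internal nodes with exactly j children (cf. Eq. (1))."""
    counts = [0] * (W + 1)
    stack = [tree]
    while stack:
        node = stack.pop()
        if node:                       # internal node
            counts[len(node)] += 1
            stack.extend(node)
    return counts

def tree_probability(tree, theta):
    """P(G | theta) = prod_j theta_j^{c_j}, as in Eq. (1)."""
    c = offspring_counts(tree, len(theta))
    p = 1.0
    for j, theta_j in enumerate(theta, start=1):
        p *= theta_j ** c[j]
    return p

# example with theta = (0.3, 0.6, 0.1) and L = 3, the setting of Fig. 1a
random.seed(1)
G = sample_gw_tree((0.3, 0.6, 0.1), 3)
print(tree_probability(G, (0.3, 0.6, 0.1)))
```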
(a) Original
(b) Examples of samples
Figure 1: (a) Branching process tree. (b) Samples with 2 and 3 targets,
respectively.
### Sampling Model
A node is said to be observed when the sampling process explicitly reveals its
presence in the original graph (e.g. a node lookup is performed or a node
spontaneously advertises its presence). An observed path consists of
the observed node and its path to the root. A sample is a set of observed
paths.
Let $V^{\prime}\subset V$ be a set of randomly observed nodes of the unlabeled
graph $G$. Let $S$ be the sampled tree formed by the union of the paths from
all nodes $v\in V^{\prime}$ to the root of $G$. For instance, Fig. 1b shows
sampled trees $S_{1}$ formed by $V^{\prime}=\\{a,b\\}$ and $S_{2}$ formed by
$V^{\prime}=\\{a,b,c\\}$. We assume that nodes in $V^{\prime}$ are sampled
from $V$ with probability $p$.
We now show how to compute $P(S|G)$, assuming that $V^{\prime}$ is known,
i.e., we know which nodes in $S$ are observed. However, it is easy to modify
the following analysis to the cases where (1) we only know $|V^{\prime}|$ or
(2) we know the topology of $S$, but not which or how many nodes are observed.
Let $C_{G,S}$ be the number of ways in which $S$ can be mapped onto $G$.
Clearly, $C_{G,S}=0$ if $S$ is not a subgraph of $G$. Conditioning on a given
mapping, we must have exactly $|V^{\prime}|$ nodes chosen as targets and
$|V\setminus V^{\prime}|$ not chosen as such. Therefore,
$P(S|G)=C_{G,S}\ p^{|V^{\prime}|}(1-p)^{|V\setminus V^{\prime}|}.$ (2)
Computing $C_{G,S}$ can be done recursively by first computing $c_{ij}$, the
number of ways the $i$-th subtree connected to the root of $S$ can be mapped
to $j$-th subtree connected to the root of $G$, for all $i,j$. Now consider
the matrix $\mathbf{C}=[c_{ij}]_{n\times m}$. If we define the operator
$|\mathbf{C}_{n\times
m}|=\begin{cases}\sum_{j=1}^{m}c_{1j}|\mathbf{C}_{1j}|,&n>1\\\
\sum_{j=1}^{m}c_{1j},&n=1,\end{cases}$ (3)
where $\mathbf{C}_{1j}$ is $\mathbf{C}$ after removal of the 1st row and $j$th
column, then we can show that $C_{G,S}=|\mathbf{C}|$. Consider the simple case
of $G$ and $S_{2}$ shown in Fig. 1b. Here we have
$\mathbf{C}=\left[\begin{array}[]{cc}1&1\\\ 2&1\end{array}\right]$ and hence,
$|\mathbf{C}|=1\cdot 1+1\cdot 2=3$. We can visually check that this is indeed
the number of ways to map $S_{2}$ onto $G$. Therefore,
$P(S_{2}|G)=3p^{3}(1-p)^{3}$.
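The operator in Eq. (3) is a permanent-style expansion along the first row. As an illustration, the following short Python sketch (our own helper, not from the paper) evaluates it recursively and reproduces the worked example above:

```python
def count_mappings(C):
    """Evaluate |C| from Eq. (3): expand along the first row, removing the
    first row and the chosen column at each recursive step."""
    if len(C) == 1:
        return sum(C[0])
    total = 0
    for j, c in enumerate(C[0]):
        minor = [row[:j] + row[j + 1:] for row in C[1:]]
        total += c * count_mappings(minor)
    return total

print(count_mappings([[1, 1], [2, 1]]))   # 3, matching P(S2|G) = 3 p^3 (1-p)^3
```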
Inference on the structure of the tree $G$ from the partial observation $S$ is
possible because we can compute $P(S|G^{\prime})$ for any $G^{\prime}$. This,
in turn, allows us to do inference on the offspring distribution by
considering how likely $G^{\prime}$ is to be generated from
${\boldsymbol{\theta}}$ by using $P(G^{\prime}|{\boldsymbol{\theta}})$ and
weighting by how likely $S$ is to be sampled given $G^{\prime}$. In the next
section we propose two estimation methods based on this idea.
## 3 Estimators
We consider the problem of estimating the offspring distribution
${\boldsymbol{\theta}}$ of the GW process that generates a tree $G$ given a
sample $S$ consisting of the union of random observed paths when nodes are
observed with probability $p$.
Two approaches to this problem based on Maximum Likelihood Estimation are
proposed in this paper. While the former consists of the exact computation of
the likelihood function $P(S|{\boldsymbol{\theta}})$, the latter approximates
this function via Metropolis-Hastings with importance sampling.
### 3.1 Exact inference
Figure 2: Graphical model representing network generation and sampling. White
nodes are unobservable and shaded node is observable.
The graphical model in Fig. 2 depicts the statistical relationship between
$S$, ${\boldsymbol{\theta}}$ and $G$. The shaded node, $S$, is the only
observable variable, while the white nodes, ${\boldsymbol{\theta}}$ and $G$
are unobservable. This figure shows that to find the relationship between $S$
and ${\boldsymbol{\theta}}$, we have to sum over the variable $G$, i.e., over
all possible trees given the number of generations $L$ and the maximum degree
$W$. Let $\mathcal{G}_{L,W}$ be the set of all possible trees given $L$ and
$W$. It follows that
$\displaystyle P(S|{\boldsymbol{\theta}})$ $\displaystyle=$
$\displaystyle\sum_{G\in\mathcal{G}_{L,W}}P(S,G|{\boldsymbol{\theta}})$ (4)
$\displaystyle=$
$\displaystyle\sum_{G\in\mathcal{G}_{L,W}}P(S|G,{\boldsymbol{\theta}})P(G|{\boldsymbol{\theta}})$
$\displaystyle=$
$\displaystyle\sum_{G\in\mathcal{G}_{L,W}}P(S|G)P(G|{\boldsymbol{\theta}}),$
where from line 2 to line 3 we use the fact that $S$ is conditionally
independent of ${\boldsymbol{\theta}}$ given $G$ (see Fig. 2). However,
$|\mathcal{G}_{L,W}|$ grows exponentially both in $L$ and $W$, which limits
this approach to very small trees. In fact, we can show that
$|\mathcal{G}_{L,W}|=\begin{cases}W,&L=1\\\
\sum_{i=1}^{W}|\mathcal{G}_{L-1,W}|^{i}>|\mathcal{G}_{L-1,W}|^{W},&L>1.\end{cases}$
Solving the recursion yields $\log^{(L-1)}|\mathcal{G}_{L,W}|=O(W)$, where
$\log^{(\cdot)}$ denotes the logarithm iterated the indicated number of times. Note however that isomorphic trees are
being counted more than once. Therefore, we can reduce the computational cost
by counting only non-isomorphic trees (appropriately weighted by their
multiplicity).
$L$ | 1 | 2 | 3 | 4 | 5
---|---|---|---|---|---
$|\mathcal{G}_{L,3}|\approx$ | 3 | 39 | $6\times 10^{4}$ | $2.3\times 10^{14}$ | $1.2\times 10^{43}$
$|\mathcal{G}_{L,3}^{\textrm{non-iso}}|\approx$ | 3 | 19 | $1.5\times 10^{3}$ | $6.1\times 10^{8}$ | $3.8\times 10^{25}$
Table 1: Growth of the space of trees as a function of $\mathbf{L}$, for
$\mathbf{W=3}$.
Let $\mathcal{G}_{L,W}^{\textrm{non-iso}}$ be the maximal set of non-
isomorphic trees of $\mathcal{G}_{L,W}$. It is possible to show that
$|\mathcal{G}_{L,W}^{\textrm{non-iso}}|=\begin{cases}W,&L=1\\\
\frac{(W+1)\binom{W+|\mathcal{G}_{L-1,W}^{\textrm{non-
iso}}|}{W+1}}{|\mathcal{G}_{L-1,W}^{\textrm{non-iso}}|}-1,&L>1.\end{cases}$
Table 1 illustrates some values of $|\mathcal{G}_{L,W}|$ and
$|\mathcal{G}_{L,W}^{\textrm{non-iso}}|$ for $W=3$ and $L=1,\dots,5$. As we
can see, counting only non-isomorphic trees reduces significantly the state
space, but it is still not feasible to compute eq. (4) except for rather small
numbers such as $W=3$ and $L=4$. Nevertheless, we utilize this approach to
perform inference more efficiently. In the following, we explain how to
enumerate trees in $\mathcal{G}_{L,W}^{\textrm{non-iso}}$ and how to compute
their multiplicities.
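As a sanity check, both counting recursions above can be evaluated directly. The short sketch below (ours; it assumes Python 3.8+ for `math.comb`) reproduces the entries of Table 1 for $W=3$:

```python
from math import comb

def count_trees(L, W):
    """|G_{L,W}|: number of trees with all leaves exactly at generation L."""
    if L == 1:
        return W
    g = count_trees(L - 1, W)
    return sum(g ** i for i in range(1, W + 1))

def count_noniso_trees(L, W):
    """|G_{L,W}^{non-iso}|: number of non-isomorphic such trees."""
    if L == 1:
        return W
    g = count_noniso_trees(L - 1, W)
    return (W + 1) * comb(W + g, W + 1) // g - 1

for L in range(1, 6):
    print(L, count_trees(L, 3), count_noniso_trees(L, 3))
# L=2 gives 39 and 19; L=3 gives 60879 and 1539, matching Table 1
```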
#### Counting only non-isomorphic trees
A straightforward way to enumerate all trees in
$\mathcal{G}_{L,W}^{\textrm{non-iso}}$ is: (1) to enumerate non-isomorphic
trees in $\mathcal{G}_{L-1,W}^{\textrm{non-iso}}$ and assign a numeric id to
each of them; and (2) construct trees in $\mathcal{G}_{L,W}^{\textrm{non-
iso}}$ by attaching to a root node trees from
$\mathcal{G}_{L-1,W}^{\textrm{non-iso}}$ where ids are in non-increasing
order. Note that two trees are isomorphic in this construction if the sets of
ids of the subtrees connected to the root node are permutations of each other,
which cannot occur due to the ordering.
In what follows we compute the probability that sample $S$ is observed given
the offspring distribution ${\boldsymbol{\theta}}$ through the enumeration of
non-isomorphic trees. Let $m_{i}^{(L)}$ denote the multiplicity of the $i$-th
tree, say $G_{i}$, in the labeled space $\mathcal{G}_{L,W}^{\textrm{non-
iso}}$. Eq. (4) is equivalent to
$P(S|\mathbf{\theta})=\sum_{G_{i}\in\mathcal{G}_{L,W}^{\textrm{non-
iso}}}m_{i}^{(L)}P(S|G_{i})P(G_{i}|\mathbf{\theta}).$ (5)
The multiplicity $m_{i}^{(L)}$ can be calculated from the ids of subtrees
directly connected to the root node in $G_{i}$ and their multiplicities. More
precisely, $m_{i}^{(L)}$ is simply the number of permutations of the ids
multiplied by the product of the multiplicities of each subtree. For instance,
if there are $j$ subtrees connected to the root with distinct ids
$(1),\dots,(j)$, then $m_{i}^{(L)}=j!\times\prod_{k=1}^{j}m_{(k)}^{(L-1)}$. In
the general case, where ids can appear more than once, we have
$m_{i}^{(L)}=\frac{j!\times\prod_{k=1}^{j}m_{(k)}^{(L-1)}}{\prod_{id=1}^{|\mathcal{G}_{L-1,W}^{\textrm{non-
iso}}|}\left(\sum_{k=1}^{j}\mathbf{1}\\{id=k\\}\right)!}.$
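For concreteness, the multiplicity formula can be coded directly. The helper below (an illustration of ours, not the authors' implementation) takes the ids of the subtrees attached to the root together with their level-$(L-1)$ multiplicities:

```python
from math import factorial
from collections import Counter

def multiplicity(subtree_ids, sub_mults):
    """m_i^{(L)}: j! times the product of the subtree multiplicities,
    divided by the factorial of the number of repeats of each id."""
    j = len(subtree_ids)
    value = factorial(j)
    for sid in subtree_ids:
        value *= sub_mults[sid]
    for repeats in Counter(subtree_ids).values():
        value //= factorial(repeats)
    return value

# e.g. three subtrees with ids (7, 7, 2), each of multiplicity 1:
# 3! * 1 * 1 * 1 / (2! * 1!) = 3
print(multiplicity([7, 7, 2], {7: 1, 2: 1}))
```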
The first estimator we propose is
$\hat{{\boldsymbol{\theta}}}_{\textrm{Exact}}=\arg\max_{\boldsymbol{\theta}}P(S|{\boldsymbol{\theta}}),$
(6)
where $P(S|{\boldsymbol{\theta}})$ is computed as in (5).
#### Maximum Likelihood Estimation
After obtaining a sample, we write the summation in Eq. (5) as a function of
${\boldsymbol{\theta}}$. Unfortunately, this likelihood function is a sum of a
potentially enormous number of terms and using the log-likelihood is not
helpful in this case. We apply several tricks to solve this optimization task.
One simple trick to reduce the number of terms consists of grouping together
trees that have the same configuration in terms of offspring counts, i.e.,
that account for the same $P(G|\mathbf{\theta})$. Note that there are many
such trees even when considering non-isomorphic trees only, although they
correspond to different values of $P(S|G)$.
Also note that this is a constrained maximization problem. Since
${\boldsymbol{\theta}}$ is a probability distribution, $0\leq\theta_{i}\leq 1$
for $i\in\\{1,\dots,W\\}$ and $\sum_{i=1}^{W}\theta_{i}=1$. We can turn it
into an unconstrained maximization problem by replacing
$\theta_{i}=\frac{e^{\alpha_{i}}}{Z}$ for $i\in\\{1,\dots,W\\}$ where
$Z=\sum_{i=1}^{W}e^{\alpha_{i}}$, setting $\alpha_{W}=1$ (for regularization
purposes) and then maximizing w.r.t. ${\boldsymbol{\alpha}}$. Note that
$\mathbf{\alpha}_{i}$ can now assume any value in $\mathbb{R}$ for
$i\in\\{1,\dots,W-1\\}$. Nevertheless, one must be careful when using this
parameter transformation since the products of the exponentials can quickly
lead to overflows. Therefore, we use log representation and the logsumexp
trick.
After this transformation, the maximization problem becomes
$\max_{\alpha}l(\alpha)=\sum_{j}c_{j}\frac{e^{\sum_{i=1}^{W}x_{ji}\alpha_{i}}}{Z^{y_{j}}}$
where $c_{j}$ is the sum of the coefficients of the terms corresponding to the
same $j$-th configuration of $P(G|\theta)$, $x_{ji}$ is the number of nodes
with offspring $i$ in the $j$-th configuration and
$y_{j}=\sum_{i=1}^{W}x_{ji}$. In order to compute the likelihood function and
its gradient more efficiently, we express them in matrix notation as
$l(\alpha)=\mathbf{c}^{T}\cdot(\exp(\mathbf{X}{\boldsymbol{\alpha}})/Z^{\mathbf{y}})$
$\nabla
l(\alpha)=\exp(\mathbf{X}{\boldsymbol{\alpha}})/Z^{\mathbf{y}}-\exp(\mathbf{X}{\boldsymbol{\alpha}})/Z^{\mathbf{y}+\mathbf{1}}$
where $\mathbf{c}=[c_{j}]$, $\mathbf{X}=[x_{ji}]$,
${\boldsymbol{\alpha}}=[\alpha_{i}]$, $\mathbf{y}=[y_{j}]$,
$Z^{\mathbf{y}}=[Z^{y_{j}}]$, the “$/$” symbol corresponds to division of two
vectors element-wise and $\mathbf{1}$ is a column vector with all entries
equal to 1.
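As an illustration of the log-space evaluation, the sketch below (our own code, using SciPy's `logsumexp`; the variable names mirror the notation above) computes $\log l({\boldsymbol{\alpha}})$ without overflow:

```python
import numpy as np
from scipy.special import logsumexp

def log_likelihood(alpha, X, c, y):
    """log l(alpha) with l(alpha) = sum_j c_j exp(X_j . alpha) / Z^{y_j},
    Z = sum_i exp(alpha_i); all sums are done in log space."""
    logZ = logsumexp(alpha)                       # log of the normalizer Z
    log_terms = np.log(c) + X @ alpha - y * logZ  # log of each summand
    return logsumexp(log_terms)

# toy example with W = 3 and two configurations (values are illustrative only)
alpha = np.array([0.2, -0.5, 0.0])
X = np.array([[2.0, 1.0, 0.0],    # x_{j,i}: offspring-count profile of config j
              [1.0, 1.0, 1.0]])
c = np.array([5.0, 2.0])          # summed coefficients per configuration
y = X.sum(axis=1)                 # y_j = sum_i x_{j,i}
print(np.exp(log_likelihood(alpha, X, c, y)))
```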
The maximization then goes as follows. We sample $10,000$ points uniformly
from $\mathbb{R}^{W-1}$. The one with the maximum value of $l(.)$ will be
${\boldsymbol{\alpha}}^{(0)}$, the starting point to be used with BFGS (we use
the R implementation in package stats; limited to 100 iterations,
relative convergence tolerance of $10^{-8}$, step size $10^{-3}$). The
estimate $\hat{{\boldsymbol{\theta}}}$ can be obtained from
$\hat{{\boldsymbol{\alpha}}}$ by exponentiating and then normalizing the
latter.
### 3.2 Approximate inference with MCMC
The previous approach only applies to small problems due to the enormous
number of terms in the summation (5). To solve larger problems, we approximate
eq. (4) using MCMC.
Let $h=P(S|G)$ and $f(G)=P(G|{\boldsymbol{\theta}})$. Since $f(G)$ defines a
probability distribution on the space $\mathcal{G}_{L,W}$, it follows that
$P(S|{\boldsymbol{\theta}})=\sum_{G}hf(G)=E_{f}[h],$ (7)
where $E_{f}[.]$ denotes expectation w.r.t. distribution $f$.
Monte Carlo simulation approximates expectations (integrals, more generally)
by sampling from a desired distribution $f$ [2]. The problem here is that we
cannot sample from $f$ because we don’t know ${\boldsymbol{\theta}}$. However,
we can sample from some other distribution $g$ and compensate for the fact
that in $g$ some trees are more (or less) likely to appear than in $f$ by
using importance sampling. More precisely,
$P(S|{\boldsymbol{\theta}})=\sum_{G}hf(G)=\sum_{G}h\frac{f(G)}{g(G)}g(G)=E_{g}\left[h\frac{f(G)}{g(G)}\right].$
(8)
Recall from Section 2 that we can generate trees using the GW process from a
given offspring distribution ${\boldsymbol{\theta}}_{0}$. Hence we can set
$g(G)=\frac{1}{\mathcal{Z}}P(S|G)P(G|{\boldsymbol{\theta}}_{0})\,,$ (9)
where $\mathcal{Z}$ is a normalizing constant (we could have set
$g(G)=P(G|{\boldsymbol{\theta}}_{0})$ instead, but our approach restricts us to
generating trees that are consistent with the sample and thus is more
efficient). Substituting eq. (9) into (8) yields
$P(S|{\boldsymbol{\theta}})=E_{g}\left[\frac{P(S|G)P(G|{\boldsymbol{\theta}})}{\frac{1}{\mathcal{Z}}P(S|G)P(G|{\boldsymbol{\theta}}_{0})}\right]\approx\frac{\mathcal{Z}}{m}\sum_{i=1}^{m}\frac{P(G_{i}|{\boldsymbol{\theta}})}{P(G_{i}|{\boldsymbol{\theta}}_{0})},$
where $G_{i}\sim g(G)$. Note that $\frac{\mathcal{Z}}{m}$ is not a function of
${\boldsymbol{\theta}}$ and does not need to be considered when maximizing over
${\boldsymbol{\theta}}$. Therefore, the second estimator we propose is
$\hat{{\boldsymbol{\theta}}}_{\textrm{Approximate}}=\arg\max_{\boldsymbol{\theta}}\sum_{i=1}^{m}\frac{P(G_{i}|{\boldsymbol{\theta}})}{P(G_{i}|{\boldsymbol{\theta}}_{0})}.$
(10)
In order to draw $G_{i}\sim g(G)$, we use the Metropolis-Hastings algorithm
where each state $X_{j}$ of the Markov Chain is a tree. We start the chain in
a state $X_{0}$ consistent with $S$, in particular, we set $X_{0}=S$. The
transition kernel $X_{i}\rightarrow X_{i+1}$ we use is shown in Algorithm 1.
Algorithm 1 Transition Kernel($X_{i},X_{i+1}$)
$v\leftarrow$ internal node selected uniformly at random from $X_{i}$
$d_{v}\leftarrow$ degree($v$)
if $d_{v}=1$ then
$action\leftarrow add$
else if $d_{v}=W$ then
$action\leftarrow remove$
else$\triangleright$ $1<d_{v}<W$
if $U(0,1)<0.5$ then $\triangleright$ $U(0,1)$ is the uniform dist.
$action\leftarrow add$
else
$action\leftarrow remove$
end if
end if
if $action=add$ then
$T_{v}\leftarrow$ GaltonWatson(${\boldsymbol{\theta}}_{0},L-l$)
$v.child[d_{v}+1]\leftarrow T_{v}$ $\triangleright$ adds new branch
$d_{v}\leftarrow d_{v}+1$
else if $action=remove$ then
shuffle($v.child$) $\triangleright$ shuffle children
$v.child[d_{v}]\leftarrow$ nil$\triangleright$ removes “right-most” branch
$d_{v}\leftarrow d_{v}-1$
end if
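A compact Python rendering of this proposal move might look as follows (the tree representation and names are ours; the Hastings acceptance step of Eq. (11) and the evaluation of $P(S|G)$ are omitted, and a full implementation would operate on a copy of the tree so that a rejected proposal can be discarded):

```python
import random

class Node:
    """Tree node; leaves sit at depth L and have no children (theta_0 = 0)."""
    def __init__(self, depth):
        self.depth = depth
        self.child = []

def galton_watson(theta0, depth, L):
    """Grow a subtree rooted at `depth` down to depth L, drawing offspring
    counts from theta0 = (theta_1, ..., theta_W)."""
    node = Node(depth)
    if depth < L:
        k = random.choices(range(1, len(theta0) + 1), weights=theta0)[0]
        node.child = [galton_watson(theta0, depth + 1, L) for _ in range(k)]
    return node

def iter_nodes(node):
    yield node
    for c in node.child:
        yield from iter_nodes(c)

def propose(tree, theta0, W, L):
    """One proposal move of Algorithm 1: pick an internal node uniformly at
    random, then either attach a freshly generated GW branch or shuffle the
    children and drop the last one."""
    v = random.choice([n for n in iter_nodes(tree) if n.child])
    d = len(v.child)
    if d == 1:
        action = "add"
    elif d == W:
        action = "remove"
    else:
        action = "add" if random.random() < 0.5 else "remove"
    if action == "add":
        v.child.append(galton_watson(theta0, v.depth + 1, L))
    else:
        random.shuffle(v.child)   # shuffle children, then drop the last branch
        v.child.pop()
    return tree
```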
The new tree $X_{i+1}$ is accepted with probability
$r=\min\left(1,\frac{P(S|X_{i+1})P(X_{i+1}|{\boldsymbol{\theta}}_{0})q(X_{i+1}\rightarrow
X_{i})}{P(S|X_{i})P(X_{i}|{\boldsymbol{\theta}}_{0})q(X_{i}\rightarrow
X_{i+1})}\right).$ (11)
where $q(X_{i}\rightarrow X_{j})$ is the probability that the transition
kernel proposes transition $X_{i}\rightarrow X_{j}$. It is easy to include the
calculation of $q(X_{i}\rightarrow X_{i+1})$ and $q(X_{i+1}\rightarrow X_{i})$
in the transition kernel implementation. In particular, let $N_{i}$ and
$L_{i}$ denote the number of nodes and leaves in $X_{i}$, respectively. Hence,
if $action=add$,
$\displaystyle q(X_{i}\rightarrow X_{i+1})$ $\displaystyle=$
$\displaystyle\frac{0.5^{\mathbf{1}\\{d_{v}>1\\}}\times
P(T_{v}|\theta_{0})}{N_{i}-L_{i}-1},$ $\displaystyle q(X_{i+1}\rightarrow
X_{i})$ $\displaystyle=$
$\displaystyle\frac{0.5^{\mathbf{1}\\{d_{v}+1<W\\}}(d_{v}+1)^{-1}}{N_{i+1}-L_{i+1}-1},$
otherwise,
$\displaystyle q(X_{i}\rightarrow X_{i+1})$ $\displaystyle=$
$\displaystyle\frac{0.5^{\mathbf{1}\\{d_{v}<W\\}}d_{v}^{-1}}{N_{i}-L_{i}-1},$
$\displaystyle q(X_{i+1}\rightarrow X_{i})$ $\displaystyle=$
$\displaystyle\frac{0.5^{\mathbf{1}\\{d_{v}-1>1\\}}\times
P(T_{v}|\theta_{0})}{N_{i+1}-L_{i+1}-1},$
where $0.5^{\mathbf{1}\\{d_{v}>1\\}}$ accounts for the fact that if $v$ has
degree $>1$, action add is chosen with probability $0.5$, but when $d_{v}=1$,
add is always chosen. The case for remove is similar (all calculations should
be performed in log space).
#### Maximum Likelihood Estimation
After obtaining roughly independent samples $G_{i}\sim g(G)$, we write the
summation in the RHS of eq. (8) and perform maximization as in the case of
exact inference.
## 4 Experiments and Results
We first describe the experiments used to assess the performance of the two
estimation methods, henceforth referred to as Exact and Approximate,
respectively. We then compare methods w.r.t. the KL-divergence of the
estimated distribution from ${\boldsymbol{\theta}}$. In addition, we show some
results in detail to illustrate the Mean Squared Error (MSE) per distribution
parameter and how performance increases with the sampling probability. In
general, Exact performs best but is only feasible for small datasets.
Nevertheless, Approximate exhibits comparable performance and can cope with
larger datasets (up to 2,000 nodes).
### 4.1 Experiments description
Based on the size of $\mathcal{G}_{L,W}$, we define two classes of estimation
problems: small and medium size problems. For medium size ones, we would like
to compare the methods’ performance for short and long tail offspring
distributions, hereby represented by truncated Poisson and Zipf distributions,
respectively (here truncated means that we took the original probability mass
function for values between $1$ and $W$ and normalized by their sum, while
setting the probability mass of other values to zero). Parameters of these
distributions were chosen so that their average is $\bar{d}$.
Concerning the sampling process, we choose three sampling probabilities
representing low, medium and high sampling rates for each class. The set of
values of $p$ has to be different for each class for two reasons. The
practical reason is that as the tree size grows, the cost to sample it grows
linearly on $p$ and we may be limited by a budget. The second reason is that,
if there is no such constraint, while values of $p$ such as $0.5$ are
reasonable for small problems, they will likely reveal all nodes from the top
levels for large problems. Hence, taking the empirical distribution from the
first levels per se would be an accurate estimator. Within each class, we
consider the following distributions and sampling probabilities:
1. Small size: $W=3,L=3,\bar{d}=2.1$
   * ${\boldsymbol{\theta}}^{(1)}=(0.2,0.5,0.3)$
   * $p\in\\{0.1,0.2,0.5\\}$
2. Medium size: $W=10,L=5,\bar{d}=3.15$
   * ${\boldsymbol{\theta}}^{(2)}\sim$ truncated Poisson($\lambda=3$)
   * ${\boldsymbol{\theta}}^{(3)}\sim$ Zipf($\alpha=1.132$, $N=10$)
   * $p\in\\{0.5,1.0,5.0\\}\times 10^{-2}$
Average tree sizes per class are $\approx 17$ and $\approx 454$, respectively.
In order to test the inference methods, we build a set of estimation problems
as follows. For each distribution ${\boldsymbol{\theta}}^{(i)},\,i=1,\dots,3$,
we generate 10 trees $t_{ij},\,j=1,\dots,10$ from a GW process with height
$L+1$ (30 trees in total). Next, for each of the 90 pairs
$(t_{ij},p_{ik}),\,k=1,2,3$, we generate 10 samples $s_{ijkl},\,l=1,\dots,10$
(900 samples in total).
We assume each sample $s_{ijkl}$ constitutes a separate estimation problem
(also referred to as dataset to avoid confusion with MCMC samples). This can
be interpreted as if we had one tree (originated from the GW process), and a
single opportunity to sample it. No other samples can be obtained from the
same tree, nor are other trees available for sampling. Ideally, we would like
to try both methods with each problem, but Exact is only feasible for small
problems. Before presenting the results, we briefly discuss implementation
issues related to Approximate.
### 4.2 Implementation issues of APPROXIMATE
The main difficulty in the Approximate method is knowing when to stop the
approximation as, without knowing the true distribution, we need a mechanism
that tells us how close we are to the steady state distribution of the Markov
chain.
Recall that we use the Metropolis Hastings (MH) algorithm to sample graphs
from $g(G)$ (see Eq. (8)). As with any MCMC method, three questions must be
addressed: (1) How long should the burn-in period be? (2) What should the
thinning ratio be? (3) What is the minimum number of uncorrelated samples that
we need? We use the Raftery-Lewis (RL) Diagnostic [7] to address these
issues (we use the R implementation in package coda).
The RL Diagnostic attempts to determine the necessary conditions to estimate a
quantile $q$ of the measure of interest, within a tolerance $r$ with
probability $s$. We take the likelihood of the MH samples as the measure of
interest. The diagnostic was then applied individually to each dataset with
default parameters ($q=0.025$, $r=0.005$ and $s=0.95$). Results concerning the
burn-in period and thinning ratio are subsumed by the required number of
samples and hence will be omitted. In summary, the minimum number of MCMC
samples was less than 50,000 graphs for small datasets and less than 500,000
for medium datasets. We conducted some experiments with more MCMC samples than
those values, but there was no significant improvement w.r.t. the estimation
accuracy. Therefore, the results described in the following refer to the
minimum number of samples suggested by the Raftery-Lewis test.
Last, recall from Section 3.2 that ${\boldsymbol{\theta}}_{0}$ can be any
distribution. However, the closer it is to ${\boldsymbol{\theta}}$, the better
the convergence of the MCMC. When estimating the offspring distribution in
medium size problems, we will assume that ${\boldsymbol{\theta}}_{0}$ is
binomial and set its parameters so that the average is $\bar{d}$. This implies
assuming that the average number of offspring can be estimated, but in fact a
rough estimate can be obtained by simply taking the average of the observed
node degrees from the first generations in the sample, whose edges have a
relatively high probability of being sampled. For small-sized trees, we simply
set ${\boldsymbol{\theta}}_{0}$ to be uniform.
### 4.3 Results
The estimation results span a number of dimensions equal to the number of
parameters assumed in the multinomial distribution. We use the Kullback-
Leibler (KL) divergence as an objective criterion to compare the estimation
methods in a single dimension.
Let the estimated offspring distribution be
$\hat{{\boldsymbol{\theta}}}=(\hat{\theta}_{1},\dots,\hat{\theta}_{W})$. The
KL-divergence of $\hat{{\boldsymbol{\theta}}}$ from ${\boldsymbol{\theta}}$ is
defined by
$D_{\textrm{KL}}({\boldsymbol{\theta}}||\hat{{\boldsymbol{\theta}}})=\sum_{i=1}^{W}(\log\theta_{i}-\log\hat{\theta}_{i})\theta_{i},$
(12)
when $\hat{\theta}_{i}>0,\,i=1,\dots,W$. When this condition does not always
hold, as in our case, absolute discounting is frequently used to smooth
$\hat{\boldsymbol{\theta}}$. Hence, we distribute $\epsilon=10^{-7}$ of
probability mass among the zero estimates, discounting this value equally from
the non-zero estimates.
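A small sketch of this smoothing step (our own code, with $\epsilon=10^{-7}$ as stated) is:

```python
import numpy as np

def smoothed_kl(theta, theta_hat, eps=1e-7):
    """KL(theta || theta_hat) after absolute discounting: spread eps over the
    zero entries of theta_hat and subtract it evenly from the nonzero ones."""
    theta_hat = np.asarray(theta_hat, dtype=float).copy()
    zero = theta_hat == 0
    if zero.any():
        theta_hat[zero] = eps / zero.sum()
        theta_hat[~zero] -= eps / (~zero).sum()
    theta = np.asarray(theta, dtype=float)
    return float(np.sum(theta * (np.log(theta) - np.log(theta_hat))))

# example: the third estimate is zero and receives the discounted eps mass
print(smoothed_kl([0.2, 0.5, 0.3], [0.3, 0.7, 0.0]))
```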
Table 2 shows the median KL-divergence obtained for each set of problems
(indexed by ${\boldsymbol{\theta}}^{(i)},\,i=1,\dots,3$), for Exact and
Approximate, when the sampling probability $p$ is medium.
| ${\boldsymbol{\theta}}^{(1)}$ | ${\boldsymbol{\theta}}^{(2)}$ | ${\boldsymbol{\theta}}^{(3)}$
---|---|---|---
Exact | 1.86 | - | -
Approximate | 2.98 | 0.58 | 0.78
Table 2: Median KL-divergence of estimators
Dashes indicate that Exact could not find estimates for medium size problems
in a reasonable amount of time. However, it outperformed Approximate in the
estimation of ${\boldsymbol{\theta}}^{(1)}$. Note that although the
KL-divergence implies some ordering within each column in terms of accuracy,
neither do the relative ratios have a direct interpretation, nor can values
across different columns be compared. We will next evaluate the results w.r.t. the MSE of
each parameter estimate, which will allow us to conclude that the performance
of Approximate is in fact very close to the one of Exact for small datasets.
### The effect of sampling probabilities
As we increase $p$, we gather more information about the original graph and
hence estimators will clearly perform better. We study the performance gains
w.r.t. the MSE of the parameter estimates.
Figs. 3(a-b) show boxplots of the MSE of the estimates
$\hat{\theta_{i}},\,i=1,\dots,W$ obtained by Exact and Approximate,
respectively, for datasets coming from ${\boldsymbol{\theta}}^{(1)}$. Each
boxplot shows minimum, 1st quartile, median, 3rd quartile and maximum values,
computed over 100 estimates (10 samples for each of the 10 trees). Colors
correspond to different sampling probabilities. In both cases, the median MSE
increases as we decrease $p$, as expected.
(a) Exact
(b) Approximate
Figure 3: Boxplots of the MSE per parameter for ${\boldsymbol{\theta}}^{(1)}$.
Similarly, Fig. 4 shows the results obtained by Approximate for datasets that
come from ${\boldsymbol{\theta}}^{(2)}$. In general, increasing the sampling
probability reduces the MSE, but not by a significant amount. Results for
${\boldsymbol{\theta}}^{(3)}$ are similar and will be omitted.
We conjecture that most of the information that allows us to estimate
${\boldsymbol{\theta}}$ comes from the top levels of the tree. As we increase
$p$, we obtain many more observations from the bottom levels of the tree, but
only a few new observations from the top levels. While edges closer to the
root are observed with higher probability, edges from lower levels are more
rarely sampled and there is much more uncertainty in those samples. This
implies that increasing $p$ should not improve the estimates significantly
after a certain point.
This short digression might lead the reader to wonder whether the values of $p$
we use sample so many edges from the top levels that it would be enough to
take the empirical distribution of the observed degrees at those levels as an
estimate for ${\boldsymbol{\theta}}$. Hence, we compare the MSE results for
Approximate with the empirical distribution of the observed degrees from the
top 1, 2 and 3 levels in a cumulative fashion. Intuitively, the empirical
distribution is biased towards smaller degrees, especially if lower levels are
taken into account, which is why we stop at 3 levels.
Fig. 5 shows the median values of the MSE (also seen in the previous figure),
but only for “small” and “large” $p$ values, for the sake of clarity. In
addition, dashed lines display the median MSE obtained when the empirical
distributions are used as estimators. Estimates for $p=5\times 10^{-3}$
exhibit a one-order magnitude gain in accuracy (for most parameters) relative
to the best empirical estimate, but estimates for $p=5\times 10^{-2}$ only
yield significant improvements at the tail of the distribution. In general,
empirical distributions are not good estimates, especially for distribution
tails, due to their bias towards small degrees. One exception we found was in the
case of ${\boldsymbol{\theta}}^{(3)}$, where the probability mass at the tail
is so large that high degree nodes are likely to be observed at the top
levels. However, we observed in additional experiments that this is not the
case for long tailed distributions with larger support, such as $W=100$.
Figure 4: Boxplots of the MSE of Approximate for
${\boldsymbol{\theta}}^{(2)}$. Figure 5: Median MSE of Approximate and
empirical estimates for ${\boldsymbol{\theta}}^{(2)}$.
## 5 Conclusions
In this paper we propose and analyze two methods to estimate the offspring
distribution of a branching process from a sample of random observed paths to
the root. The former, based on exact inference, is limited to small problems
since the number of terms to be computed in the likelihood function grows
exponentially with the maximum degree and the number of levels. The latter
approximates the likelihood function using MCMC samples and was able to
handle both small and medium size problems. For small problems, its
performance was similar to that of exact inference.
## References
* [1] D. Achlioptas, A. Clauset, D. Kempe, and C. Moore. On the bias of traceroute sampling: Or, power-law degree distributions in regular graphs. J. ACM, 56(4):21:1–21:28, July 2009.
* [2] P. Beerli and J. Felsenstein. Maximum-Likelihood Estimation of Migration Rates and Effective Population Numbers in Two Populations Using a Coalescent Approach. Genetics, 152(2):763–773, June 1999.
* [3] F. Chierichetti, J. Kleinberg, and D. Liben-Nowell. Reconstructing patterns of information diffusion from incomplete observations. In NIPS’11, pages 792–800, 2011.
* [4] A. Lakhina, J.W. Byers, M. Crovella, and P. Xie. Sampling biases in IP topology measurements. In INFOCOM 2003, volume 1, pages 332-341, March-April 2003.
* [5] A. Neves and C. Moreira. Applications of the Galton-Watson process to human DNA evolution and demography. Physica A, 368(1):132-146, 2006.
* [6] R. Pastor-Satorras and A. Vespignani. Evolution and Structure of the Internet: A Statistical Physics Approach. Cambridge University Press, New York, NY, USA, 2004.
* [7] A. Raftery and S. Lewis. The number of iterations, convergence diagnostics and generic Metropolis algorithms. In Practical Markov Chain Monte Carlo (W.R. Gilks, D.J. Spiegelhalter and S. Richardson, eds.), pages 115-130, 1995.
* [8] X. Yang and G. de Veciana. Service capacity of peer to peer networks. In INFOCOM, pages 2242-2252, March 2004.
|
arxiv-papers
| 2013-02-23T21:49:53 |
2024-09-04T02:49:42.072068
|
{
"license": "Public Domain",
"authors": "Fabricio Murai, Bruno Ribeiro, Don Towsley, Krista Gile",
"submitter": "Bruno Ribeiro",
"url": "https://arxiv.org/abs/1302.5847"
}
|
1302.5854
|
EUROPEAN ORGANIZATION FOR NUCLEAR RESEARCH (CERN)
CERN-PH-EP-2013-018 LHCb-PAPER-2012-050 Feb. 23, 2013
First observations of $\overline{B}{}^{0}_{s}\rightarrow D^{+}D^{-}$,
$D^{+}_{s}D^{-}$ and $D^{0}\overline{D}{}^{0}$ decays
The LHCb collaboration (authors are listed on the following pages).
First observations and measurements of the branching fractions of the
$\overline{B}{}^{0}_{s}\rightarrow D^{+}D^{-}$,
$\overline{B}{}^{0}_{s}\rightarrow D^{+}_{s}D^{-}$ and
$\overline{B}{}^{0}_{s}\rightarrow D^{0}\overline{D}{}^{0}$ decays are presented using $1.0$
$\rm fb^{-1}$ of data collected by the LHCb experiment. These branching
fractions are normalized to those of
$\overline{B}{}^{0}\rightarrow D^{+}D^{-}$,
$B^{0}\rightarrow D^{+}_{s}D^{-}$ and $B^{-}\rightarrow D^{0}D^{-}_{s}$,
respectively. An excess of events consistent with the decay
$\overline{B}{}^{0}\rightarrow D^{0}\overline{D}{}^{0}$ is also seen, and its branching
fraction is measured relative to that of $B^{-}\rightarrow D^{0}D^{-}_{s}$.
Improved measurements of the branching fractions
${\cal{B}}(\overline{B}{}^{0}_{s}\rightarrow D^{+}_{s}D^{-}_{s})$
and ${\cal{B}}(B^{-}\rightarrow D^{0}D^{-}_{s})$ are reported, each relative
to ${\cal{B}}(B^{0}\rightarrow D^{+}_{s}D^{-})$. The ratios of branching
fractions are
$\displaystyle{{\cal{B}}(\overline{B}{}^{0}_{s}\rightarrow D^{+}D^{-})\over{\cal{B}}(\overline{B}{}^{0}\rightarrow D^{+}D^{-})}$ $\displaystyle=1.08\pm 0.20\pm 0.10,$
$\displaystyle{{\cal{B}}(\overline{B}{}^{0}_{s}\rightarrow D^{+}_{s}D^{-})\over{\cal{B}}(B^{0}\rightarrow D^{+}_{s}D^{-})}$ $\displaystyle=0.050\pm 0.008\pm 0.004,$
$\displaystyle{{\cal{B}}(\overline{B}{}^{0}_{s}\rightarrow D^{0}\overline{D}{}^{0})\over{\cal{B}}(B^{-}\rightarrow D^{0}D^{-}_{s})}$ $\displaystyle=0.019\pm 0.003\pm 0.003,$
$\displaystyle{{\cal{B}}(\overline{B}{}^{0}\rightarrow D^{0}\overline{D}{}^{0})\over{\cal{B}}(B^{-}\rightarrow D^{0}D^{-}_{s})}$ $\displaystyle=0.0014\pm 0.0006\pm 0.0002,$
$\displaystyle{{\cal{B}}(\overline{B}{}^{0}_{s}\rightarrow D^{+}_{s}D^{-}_{s})\over{\cal{B}}(B^{0}\rightarrow D^{+}_{s}D^{-})}$ $\displaystyle=0.56\pm 0.03\pm 0.04,$
$\displaystyle{{\cal{B}}(B^{-}\rightarrow D^{0}D^{-}_{s})\over{\cal{B}}(B^{0}\rightarrow D^{+}_{s}D^{-})}$ $\displaystyle=1.22\pm 0.02\pm 0.07,$
where the uncertainties are statistical and systematic, respectively.
Submitted to Physical Review D
© CERN on behalf of the LHCb collaboration, license CC-BY-3.0.
LHCb collaboration
R. Aaij40, C. Abellan Beteta35,n, B. Adeva36, M. Adinolfi45, C. Adrover6, A.
Affolder51, Z. Ajaltouni5, J. Albrecht9, F. Alessio37, M. Alexander50, S.
Ali40, G. Alkhazov29, P. Alvarez Cartelle36, A.A. Alves Jr24,37, S. Amato2, S.
Amerio21, Y. Amhis7, L. Anderlini17,f, J. Anderson39, R. Andreassen59, R.B.
Appleby53, O. Aquines Gutierrez10, F. Archilli18, A. Artamonov 34, M.
Artuso56, E. Aslanides6, G. Auriemma24,m, S. Bachmann11, J.J. Back47, C.
Baesso57, V. Balagura30, W. Baldini16, R.J. Barlow53, C. Barschel37, S.
Barsuk7, W. Barter46, Th. Bauer40, A. Bay38, J. Beddow50, F. Bedeschi22, I.
Bediaga1, S. Belogurov30, K. Belous34, I. Belyaev30, E. Ben-Haim8, M.
Benayoun8, G. Bencivenni18, S. Benson49, J. Benton45, A. Berezhnoy31, R.
Bernet39, M.-O. Bettler46, M. van Beuzekom40, A. Bien11, S. Bifani12, T.
Bird53, A. Bizzeti17,h, P.M. Bjørnstad53, T. Blake37, F. Blanc38, J. Blouw11,
S. Blusk56, V. Bocci24, A. Bondar33, N. Bondar29, W. Bonivento15, S. Borghi53,
A. Borgia56, T.J.V. Bowcock51, E. Bowen39, C. Bozzi16, T. Brambach9, J. van
den Brand41, J. Bressieux38, D. Brett53, M. Britsch10, T. Britton56, N.H.
Brook45, H. Brown51, I. Burducea28, A. Bursche39, G. Busetto21,q, J.
Buytaert37, S. Cadeddu15, O. Callot7, M. Calvi20,j, M. Calvo Gomez35,n, A.
Camboni35, P. Campana18,37, A. Carbone14,c, G. Carboni23,k, R. Cardinale19,i,
A. Cardini15, H. Carranza-Mejia49, L. Carson52, K. Carvalho Akiba2, G.
Casse51, M. Cattaneo37, Ch. Cauet9, M. Charles54, Ph. Charpentier37, P.
Chen3,38, N. Chiapolini39, M. Chrzaszcz 25, K. Ciba37, X. Cid Vidal36, G.
Ciezarek52, P.E.L. Clarke49, M. Clemencic37, H.V. Cliff46, J. Closier37, C.
Coca28, V. Coco40, J. Cogan6, E. Cogneras5, P. Collins37, A. Comerma-
Montells35, A. Contu15, A. Cook45, M. Coombes45, S. Coquereau8, G. Corti37, B.
Couturier37, G.A. Cowan38, D. Craik47, S. Cunliffe52, R. Currie49, C.
D’Ambrosio37, P. David8, P.N.Y. David40, I. De Bonis4, K. De Bruyn40, S. De
Capua53, M. De Cian39, J.M. De Miranda1, M. De Oyanguren Campos35,o, L. De
Paula2, W. De Silva59, P. De Simone18, D. Decamp4, M. Deckenhoff9, L. Del
Buono8, D. Derkach14, O. Deschamps5, F. Dettori41, A. Di Canto11, H.
Dijkstra37, M. Dogaru28, S. Donleavy51, F. Dordei11, A. Dosil Suárez36, D.
Dossett47, A. Dovbnya42, F. Dupertuis38, R. Dzhelyadin34, A. Dziurda25, A.
Dzyuba29, S. Easo48,37, U. Egede52, V. Egorychev30, S. Eidelman33, D. van
Eijk40, S. Eisenhardt49, U. Eitschberger9, R. Ekelhof9, L. Eklund50, I. El
Rifai5, Ch. Elsasser39, D. Elsby44, A. Falabella14,e, C. Färber11, G.
Fardell49, C. Farinelli40, S. Farry12, V. Fave38, D. Ferguson49, V. Fernandez
Albor36, F. Ferreira Rodrigues1, M. Ferro-Luzzi37, S. Filippov32, C.
Fitzpatrick37, M. Fontana10, F. Fontanelli19,i, R. Forty37, O. Francisco2, M.
Frank37, C. Frei37, M. Frosini17,f, S. Furcas20, E. Furfaro23, A. Gallas
Torreira36, D. Galli14,c, M. Gandelman2, P. Gandini54, Y. Gao3, J. Garofoli56,
P. Garosi53, J. Garra Tico46, L. Garrido35, C. Gaspar37, R. Gauld54, E.
Gersabeck11, M. Gersabeck53, T. Gershon47,37, Ph. Ghez4, V. Gibson46, V.V.
Gligorov37, C. Göbel57, D. Golubkov30, A. Golutvin52,30,37, A. Gomes2, H.
Gordon54, M. Grabalosa Gándara5, R. Graciani Diaz35, L.A. Granado Cardoso37,
E. Graugés35, G. Graziani17, A. Grecu28, E. Greening54, S. Gregson46, O.
Grünberg58, B. Gui56, E. Gushchin32, Yu. Guz34, T. Gys37, C. Hadjivasiliou56,
G. Haefeli38, C. Haen37, S.C. Haines46, S. Hall52, T. Hampson45, S. Hansmann-
Menzemer11, N. Harnew54, S.T. Harnew45, J. Harrison53, T. Hartmann58, J. He7,
V. Heijne40, K. Hennessy51, P. Henrard5, J.A. Hernando Morata36, E. van
Herwijnen37, E. Hicks51, D. Hill54, M. Hoballah5, C. Hombach53, P. Hopchev4,
W. Hulsbergen40, P. Hunt54, T. Huse51, N. Hussain54, D. Hutchcroft51, D.
Hynds50, V. Iakovenko43, M. Idzik26, P. Ilten12, R. Jacobsson37, A. Jaeger11,
E. Jans40, P. Jaton38, F. Jing3, M. John54, D. Johnson54, C.R. Jones46, B.
Jost37, M. Kaballo9, S. Kandybei42, M. Karacson37, T.M. Karbach37, I.R.
Kenyon44, U. Kerzel37, T. Ketel41, A. Keune38, B. Khanji20, O. Kochebina7, I.
Komarov38,31, R.F. Koopman41, P. Koppenburg40, M. Korolev31, A. Kozlinskiy40,
L. Kravchuk32, K. Kreplin11, M. Kreps47, G. Krocker11, P. Krokovny33, F.
Kruse9, M. Kucharczyk20,25,j, V. Kudryavtsev33, T. Kvaratskheliya30,37, V.N.
La Thi38, D. Lacarrere37, G. Lafferty53, A. Lai15, D. Lambert49, R.W.
Lambert41, E. Lanciotti37, G. Lanfranchi18,37, C. Langenbruch37, T. Latham47,
C. Lazzeroni44, R. Le Gac6, J. van Leerdam40, J.-P. Lees4, R. Lefèvre5, A.
Leflat31,37, J. Lefrançois7, S. Leo22, O. Leroy6, B. Leverington11, Y. Li3, L.
Li Gioi5, M. Liles51, R. Lindner37, C. Linn11, B. Liu3, G. Liu37, J. von
Loeben20, S. Lohn37, J.H. Lopes2, E. Lopez Asamar35, N. Lopez-March38, H. Lu3,
D. Lucchesi21,q, J. Luisier38, H. Luo49, F. Machefert7, I.V.
Machikhiliyan4,30, F. Maciuc28, O. Maev29,37, S. Malde54, G. Manca15,d, G.
Mancinelli6, U. Marconi14, R. Märki38, J. Marks11, G. Martellotti24, A.
Martens8, L. Martin54, A. Martín Sánchez7, M. Martinelli40, D. Martinez
Santos41, D. Martins Tostes2, A. Massafferri1, R. Matev37, Z. Mathe37, C.
Matteuzzi20, E. Maurice6, A. Mazurov16,32,37,e, J. McCarthy44, R. McNulty12,
A. Mcnab53, B. Meadows59,54, F. Meier9, M. Meissner11, M. Merk40, D.A.
Milanes8, M.-N. Minard4, J. Molina Rodriguez57, S. Monteil5, D. Moran53, P.
Morawski25, M.J. Morello22,s, R. Mountain56, I. Mous40, F. Muheim49, K.
Müller39, R. Muresan28, B. Muryn26, B. Muster38, P. Naik45, T. Nakada38, R.
Nandakumar48, I. Nasteva1, M. Needham49, N. Neufeld37, A.D. Nguyen38, T.D.
Nguyen38, C. Nguyen-Mau38,p, M. Nicol7, V. Niess5, R. Niet9, N. Nikitin31, T.
Nikodem11, A. Nomerotski54, A. Novoselov34, A. Oblakowska-Mucha26, V.
Obraztsov34, S. Oggero40, S. Ogilvy50, O. Okhrimenko43, R. Oldeman15,d,37, M.
Orlandea28, J.M. Otalora Goicochea2, P. Owen52, B.K. Pal56, A. Palano13,b, M.
Palutan18, J. Panman37, A. Papanestis48, M. Pappagallo50, C. Parkes53, C.J.
Parkinson52, G. Passaleva17, G.D. Patel51, M. Patel52, G.N. Patrick48, C.
Patrignani19,i, C. Pavel-Nicorescu28, A. Pazos Alvarez36, A. Pellegrino40, G.
Penso24,l, M. Pepe Altarelli37, S. Perazzini14,c, D.L. Perego20,j, E. Perez
Trigo36, A. Pérez-Calero Yzquierdo35, P. Perret5, M. Perrin-Terrin6, G.
Pessina20, K. Petridis52, A. Petrolini19,i, A. Phan56, E. Picatoste Olloqui35,
B. Pietrzyk4, T. Pilař47, D. Pinci24, S. Playfer49, M. Plo Casasus36, F.
Polci8, G. Polok25, A. Poluektov47,33, E. Polycarpo2, D. Popov10, B.
Popovici28, C. Potterat35, A. Powell54, J. Prisciandaro38, V. Pugatch43, A.
Puig Navarro38, G. Punzi22,r, W. Qian4, J.H. Rademacker45, B.
Rakotomiaramanana38, M.S. Rangel2, I. Raniuk42, N. Rauschmayr37, G. Raven41,
S. Redford54, M.M. Reid47, A.C. dos Reis1, S. Ricciardi48, A. Richards52, K.
Rinnert51, V. Rives Molina35, D.A. Roa Romero5, P. Robbe7, E. Rodrigues53, P.
Rodriguez Perez36, S. Roiser37, V. Romanovsky34, A. Romero Vidal36, J.
Rouvinet38, T. Ruf37, F. Ruffini22, H. Ruiz35, P. Ruiz Valls35,o, G.
Sabatino24,k, J.J. Saborido Silva36, N. Sagidova29, P. Sail50, B. Saitta15,d,
C. Salzmann39, B. Sanmartin Sedes36, M. Sannino19,i, R. Santacesaria24, C.
Santamarina Rios36, E. Santovetti23,k, M. Sapunov6, A. Sarti18,l, C.
Satriano24,m, A. Satta23, M. Savrie16,e, D. Savrina30,31, P. Schaack52, M.
Schiller41, H. Schindler37, M. Schlupp9, M. Schmelling10, B. Schmidt37, O.
Schneider38, A. Schopper37, M.-H. Schune7, R. Schwemmer37, B. Sciascia18, A.
Sciubba24, M. Seco36, A. Semennikov30, K. Senderowska26, I. Sepp52, N.
Serra39, J. Serrano6, P. Seyfert11, M. Shapkin34, I. Shapoval42,37, P.
Shatalov30, Y. Shcheglov29, T. Shears51,37, L. Shekhtman33, O. Shevchenko42,
V. Shevchenko30, A. Shires52, R. Silva Coutinho47, T. Skwarnicki56, N.A.
Smith51, E. Smith54,48, M. Smith53, M.D. Sokoloff59, F.J.P. Soler50, F.
Soomro18,37, D. Souza45, B. Souza De Paula2, B. Spaan9, A. Sparkes49, P.
Spradlin50, F. Stagni37, S. Stahl11, O. Steinkamp39, S. Stoica28, S. Stone56,
B. Storaci39, M. Straticiuc28, U. Straumann39, V.K. Subbiah37, S. Swientek9,
V. Syropoulos41, M. Szczekowski27, P. Szczypka38,37, T. Szumlak26, S.
T’Jampens4, M. Teklishyn7, E. Teodorescu28, F. Teubert37, C. Thomas54, E.
Thomas37, J. van Tilburg11, V. Tisserand4, M. Tobin39, S. Tolk41, D.
Tonelli37, S. Topp-Joergensen54, N. Torr54, E. Tournefier4,52, S. Tourneur38,
M.T. Tran38, M. Tresch39, A. Tsaregorodtsev6, P. Tsopelas40, N. Tuning40, M.
Ubeda Garcia37, A. Ukleja27, D. Urner53, U. Uwer11, V. Vagnoni14, G.
Valenti14, R. Vazquez Gomez35, P. Vazquez Regueiro36, S. Vecchi16, J.J.
Velthuis45, M. Veltri17,g, G. Veneziano38, M. Vesterinen37, B. Viaud7, D.
Vieira2, X. Vilasis-Cardona35,n, A. Vollhardt39, D. Volyanskyy10, D. Voong45,
A. Vorobyev29, V. Vorobyev33, C. Voß58, H. Voss10, R. Waldi58, R. Wallace12,
S. Wandernoth11, J. Wang56, D.R. Ward46, N.K. Watson44, A.D. Webber53, D.
Websdale52, M. Whitehead47, J. Wicht37, J. Wiechczynski25, D. Wiedner11, L.
Wiggers40, G. Wilkinson54, M.P. Williams47,48, M. Williams55, F.F. Wilson48,
J. Wishahi9, M. Witek25, S.A. Wotton46, S. Wright46, S. Wu3, K. Wyllie37, Y.
Xie49,37, F. Xing54, Z. Xing56, Z. Yang3, R. Young49, X. Yuan3, O.
Yushchenko34, M. Zangoli14, M. Zavertyaev10,a, F. Zhang3, L. Zhang56, W.C.
Zhang12, Y. Zhang3, A. Zhelezov11, A. Zhokhov30, L. Zhong3, A. Zvyagin37.
1Centro Brasileiro de Pesquisas Físicas (CBPF), Rio de Janeiro, Brazil
2Universidade Federal do Rio de Janeiro (UFRJ), Rio de Janeiro, Brazil
3Center for High Energy Physics, Tsinghua University, Beijing, China
4LAPP, Université de Savoie, CNRS/IN2P3, Annecy-Le-Vieux, France
5Clermont Université, Université Blaise Pascal, CNRS/IN2P3, LPC, Clermont-
Ferrand, France
6CPPM, Aix-Marseille Université, CNRS/IN2P3, Marseille, France
7LAL, Université Paris-Sud, CNRS/IN2P3, Orsay, France
8LPNHE, Université Pierre et Marie Curie, Université Paris Diderot,
CNRS/IN2P3, Paris, France
9Fakultät Physik, Technische Universität Dortmund, Dortmund, Germany
10Max-Planck-Institut für Kernphysik (MPIK), Heidelberg, Germany
11Physikalisches Institut, Ruprecht-Karls-Universität Heidelberg, Heidelberg,
Germany
12School of Physics, University College Dublin, Dublin, Ireland
13Sezione INFN di Bari, Bari, Italy
14Sezione INFN di Bologna, Bologna, Italy
15Sezione INFN di Cagliari, Cagliari, Italy
16Sezione INFN di Ferrara, Ferrara, Italy
17Sezione INFN di Firenze, Firenze, Italy
18Laboratori Nazionali dell’INFN di Frascati, Frascati, Italy
19Sezione INFN di Genova, Genova, Italy
20Sezione INFN di Milano Bicocca, Milano, Italy
21Sezione INFN di Padova, Padova, Italy
22Sezione INFN di Pisa, Pisa, Italy
23Sezione INFN di Roma Tor Vergata, Roma, Italy
24Sezione INFN di Roma La Sapienza, Roma, Italy
25Henryk Niewodniczanski Institute of Nuclear Physics Polish Academy of
Sciences, Kraków, Poland
26AGH University of Science and Technology, Kraków, Poland
27National Center for Nuclear Research (NCBJ), Warsaw, Poland
28Horia Hulubei National Institute of Physics and Nuclear Engineering,
Bucharest-Magurele, Romania
29Petersburg Nuclear Physics Institute (PNPI), Gatchina, Russia
30Institute of Theoretical and Experimental Physics (ITEP), Moscow, Russia
31Institute of Nuclear Physics, Moscow State University (SINP MSU), Moscow,
Russia
32Institute for Nuclear Research of the Russian Academy of Sciences (INR RAN),
Moscow, Russia
33Budker Institute of Nuclear Physics (SB RAS) and Novosibirsk State
University, Novosibirsk, Russia
34Institute for High Energy Physics (IHEP), Protvino, Russia
35Universitat de Barcelona, Barcelona, Spain
36Universidad de Santiago de Compostela, Santiago de Compostela, Spain
37European Organization for Nuclear Research (CERN), Geneva, Switzerland
38Ecole Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
39Physik-Institut, Universität Zürich, Zürich, Switzerland
40Nikhef National Institute for Subatomic Physics, Amsterdam, The Netherlands
41Nikhef National Institute for Subatomic Physics and VU University Amsterdam,
Amsterdam, The Netherlands
42NSC Kharkiv Institute of Physics and Technology (NSC KIPT), Kharkiv, Ukraine
43Institute for Nuclear Research of the National Academy of Sciences (KINR),
Kyiv, Ukraine
44University of Birmingham, Birmingham, United Kingdom
45H.H. Wills Physics Laboratory, University of Bristol, Bristol, United
Kingdom
46Cavendish Laboratory, University of Cambridge, Cambridge, United Kingdom
47Department of Physics, University of Warwick, Coventry, United Kingdom
48STFC Rutherford Appleton Laboratory, Didcot, United Kingdom
49School of Physics and Astronomy, University of Edinburgh, Edinburgh, United
Kingdom
50School of Physics and Astronomy, University of Glasgow, Glasgow, United
Kingdom
51Oliver Lodge Laboratory, University of Liverpool, Liverpool, United Kingdom
52Imperial College London, London, United Kingdom
53School of Physics and Astronomy, University of Manchester, Manchester,
United Kingdom
54Department of Physics, University of Oxford, Oxford, United Kingdom
55Massachusetts Institute of Technology, Cambridge, MA, United States
56Syracuse University, Syracuse, NY, United States
57Pontifícia Universidade Católica do Rio de Janeiro (PUC-Rio), Rio de
Janeiro, Brazil, associated to 2
58Institut für Physik, Universität Rostock, Rostock, Germany, associated to 11
59University of Cincinnati, Cincinnati, OH, United States, associated to 56
aP.N. Lebedev Physical Institute, Russian Academy of Science (LPI RAS),
Moscow, Russia
bUniversità di Bari, Bari, Italy
cUniversità di Bologna, Bologna, Italy
dUniversità di Cagliari, Cagliari, Italy
eUniversità di Ferrara, Ferrara, Italy
fUniversità di Firenze, Firenze, Italy
gUniversità di Urbino, Urbino, Italy
hUniversità di Modena e Reggio Emilia, Modena, Italy
iUniversità di Genova, Genova, Italy
jUniversità di Milano Bicocca, Milano, Italy
kUniversità di Roma Tor Vergata, Roma, Italy
lUniversità di Roma La Sapienza, Roma, Italy
mUniversità della Basilicata, Potenza, Italy
nLIFAELS, La Salle, Universitat Ramon Llull, Barcelona, Spain
oIFIC, Universitat de Valencia-CSIC, Valencia, Spain
pHanoi University of Science, Hanoi, Viet Nam
qUniversità di Padova, Padova, Italy
rUniversità di Pisa, Pisa, Italy
sScuola Normale Superiore, Pisa, Italy
## 1 Introduction
Double-charm decays of $B$ mesons can be used to probe the Cabibbo-Kobayashi-
Maskawa matrix [1, 2] elements, and provide a laboratory to study final state
interactions. The time-dependent $C\\!P$ asymmetry in the $B^{0}\rightarrow
D^{+}D^{-}$ decay provides a way to measure the $B^{0}$ mixing phase [3, 4],
where information from other double-charm final states can be used to account
for loop (penguin) contributions and other non-factorizable effects [5, 6, 7,
8, 9]. Double-charm decays of $B$ mesons can also be used to measure the weak
phase $\gamma$, assuming $U$-spin symmetry [10, 11]. The purely $C\\!P$-even
$\kern 1.79993pt\overline{\kern-1.79993ptB}{}^{0}_{s}\rightarrow
D^{+}_{s}D^{-}_{s}$ decay is also of interest, as it can be used to measure
the $B^{0}_{s}$ mixing phase. Moreover, a lifetime measurement using the
$\kern 1.79993pt\overline{\kern-1.79993ptB}{}^{0}_{s}\rightarrow
D^{+}_{s}D^{-}_{s}$ decay provides complementary information on
$\Delta\Gamma_{s}$ [11, 12, 13] to that obtained from direct measurements
[14], or from lifetime measurements in other $C\\!P$ eigenstates [15, 16].
The study of $B\rightarrow D\overline{D}{}^{\prime}$ decays (throughout this
paper, the notation $D$ is used to refer to a $D^{+}$, $D^{0}$ or $D^{+}_{s}$
meson, and $B$ represents either a $B^{0}$, $B^{-}$ or $B^{0}_{s}$ meson) can
also provide a better theoretical understanding of the processes that
contribute to $B$ meson decay. Feynman diagrams contributing to the decays
considered in this paper are shown in Fig. 1. The
$\overline{B}{}^{0}_{s}\rightarrow D^{0}\overline{D}{}^{0}$,
$\overline{B}{}^{0}_{s}\rightarrow D^{+}D^{-}$ and
$\overline{B}{}^{0}\rightarrow D^{0}\overline{D}{}^{0}$ decays are mediated by
the $W$-exchange amplitude, along with penguin-annihilation contributions and
rescattering [17]. The only other observed $B$ meson decays of this type are
$\overline{B}{}^{0}\rightarrow D_{s}^{(*)+}K^{(*)-}$ and
$\overline{B}{}^{0}_{s}\rightarrow\pi^{+}\pi^{-}$, with branching fractions of
the order of $10^{-5}$ [18] and $10^{-6}$ [19], respectively. Predictions of
the $\overline{B}{}^{0}_{s}\rightarrow D^{+}D^{-}$ branching fraction using
perturbative approaches yield $3.6\times 10^{-3}$ [20], while the use of
non-perturbative approaches has led to a smaller value of $1\times 10^{-3}$
[21]. More recent phenomenological studies, which assume a dominant
contribution from rescattering, predict a significantly lower branching
fraction of ${\cal{B}}(\overline{B}{}^{0}_{s}\rightarrow
D^{+}D^{-})={\cal{B}}(\overline{B}{}^{0}_{s}\rightarrow
D^{0}\overline{D}{}^{0})=(7.8\pm 4.7)\times 10^{-5}$ [17].
This paper reports the first observations of the $\kern
1.79993pt\overline{\kern-1.79993ptB}{}^{0}_{s}\rightarrow D^{+}D^{-}$, $\kern
1.79993pt\overline{\kern-1.79993ptB}{}^{0}_{s}\rightarrow D^{+}_{s}D^{-}$ and
$\kern 1.79993pt\overline{\kern-1.79993ptB}{}^{0}_{s}\rightarrow D^{0}\kern
1.99997pt\overline{\kern-1.99997ptD}{}^{0}$ decays, and measurements of their
branching fractions normalized relative to those of $\kern
1.79993pt\overline{\kern-1.79993ptB}{}^{0}\rightarrow D^{+}D^{-}$,
$B^{0}\rightarrow D^{+}_{s}D^{-}$ and $B^{-}\rightarrow D^{0}D^{-}_{s}$,
respectively. An excess of events consistent with $\kern
1.79993pt\overline{\kern-1.79993ptB}{}^{0}\rightarrow D^{0}\kern
1.99997pt\overline{\kern-1.99997ptD}{}^{0}$ is also seen, and its branching
fraction is reported. Improved measurements of the ratios of branching
fractions ${\cal{B}}(\kern
1.79993pt\overline{\kern-1.79993ptB}{}^{0}_{s}\rightarrow
D^{+}_{s}D^{-}_{s})/{\cal{B}}(B^{0}\rightarrow D^{+}_{s}D^{-})$ and
${\cal{B}}(B^{-}\rightarrow D^{0}D^{-}_{s})/{\cal{B}}(B^{0}\rightarrow
D^{+}_{s}D^{-})$ are also presented. All results are based upon an integrated
luminosity of 1.0 $\rm fb^{-1}$ of $pp$ collision data at $\sqrt{s}=7$
$\mathrm{\,Te\kern-1.00006ptV}$ recorded by the LHCb experiment in 2011.
Inclusion of charge conjugate final states is implied throughout.
Figure 1: Feynman diagrams contributing to the double-charm final states
discussed in this paper. They include (a) tree, (b) $W$-exchange and (c)
penguin diagrams.
## 2 Data sample and candidate selection
The LHCb detector [22] is a single-arm forward spectrometer covering the
pseudorapidity range $2<\eta<5$, designed for the study of particles
containing $b$ or $c$ quarks. The detector includes a high precision tracking
system consisting of a silicon-strip vertex detector surrounding the $pp$
interaction region, a large-area silicon-strip detector located upstream of a
dipole magnet with a bending power of about $4{\rm\,Tm}$, and three stations
of silicon-strip detectors and straw drift tubes placed downstream. The
combined tracking system has a momentum resolution ($\Delta p/p$) that varies
from 0.4% at 5${\mathrm{\,Ge\kern-1.00006ptV\\!/}c}$ to 0.6% at
100${\mathrm{\,Ge\kern-1.00006ptV\\!/}c}$, and an impact parameter (IP)
resolution of 20$\,\upmu\rm m$ for tracks with high transverse momentum
($p_{\rm T}$). The impact parameter is defined as the distance of closest
approach of a given particle to the primary $pp$ interaction vertex (PV).
Charged hadrons are identified using two ring-imaging Cherenkov detectors
[23]. Photons, electrons and charged particles are identified by a calorimeter
system consisting of scintillating-pad and preshower detectors, an
electromagnetic calorimeter and a hadronic calorimeter. Muons are identified
by a system composed of alternating layers of iron and multiwire proportional
chambers.
The trigger [24] consists of a hardware stage, based on information from the
calorimeter and muon systems, followed by a software stage that performs a
partial event reconstruction (only tracks with $\mbox{$p_{\rm T}$}>0.5$
${\mathrm{\,Ge\kern-1.00006ptV\\!/}c}$ are reconstructed and used). The
software trigger requires a two-, three- or four-track secondary vertex with a
large track $p_{\rm T}$ sum and a significant displacement from any of the
reconstructed PVs. At least one track must have $\mbox{$p_{\rm
T}$}>1.7{\mathrm{\,Ge\kern-1.00006ptV\\!/}c}$ and IP $\chi^{2}$ greater than
16 with respect to all PVs. The IP $\chi^{2}$ is defined as the difference
between the $\chi^{2}$ of the PV reconstructed with and without the considered
particle. A multivariate algorithm [25] is used to identify secondary vertices
that originate from the decays of $b$ hadrons.
For the ratios of branching fractions between modes with identical final
states, no requirements are made on the hardware trigger decision. When the
final states differ, a trigger selection is applied to facilitate the
determination of the relative trigger efficiency. The selection requires that
either (i) at least one of the tracks from the reconstructed signal decay is
associated with energy depositions in the calorimeters that passed the
hardware trigger requirements, or (ii) the event triggered independently of
the signal decay particles, e.g., on the decay products of the other $b$
hadron in the event. Events that do not fall into either of these two
categories ($\sim$5%) are discarded.
Signal efficiencies and specific backgrounds are studied using simulated
events. Proton-proton collisions are generated using Pythia 6.4 [26] with a
specific LHCb configuration [27]. Decays of hadronic particles are described
by EvtGen [28] in which final state radiation is generated using Photos [29].
The interaction of the generated particles with the detector and its response
are implemented using the Geant4 toolkit [30, *Agostinelli:2002hh] as
described in Ref. [32]. Efficiencies for identifying $K^{+}$ and $\pi^{+}$
mesons are determined using $D^{*+}$ calibration data, with kinematic
quantities reweighted to match those of the signal particles [23].
Signal $B$ candidates are formed by combining pairs of $D$ meson candidates
reconstructed in the following decay modes: ${D^{0}\rightarrow K^{-}\pi^{+}}$
or ${K^{-}\pi^{+}\pi^{-}\pi^{+}}$, ${D^{+}\rightarrow K^{-}\pi^{+}\pi^{+}}$
and ${D^{+}_{s}\rightarrow K^{+}K^{-}\pi^{+}}$. The $D^{0}\rightarrow
K^{-}\pi^{+}\pi^{-}\pi^{+}$ decay is only used for $\kern
1.79993pt\overline{\kern-1.79993ptB}{}^{0}_{(s)}\rightarrow D^{0}\kern
1.99997pt\overline{\kern-1.99997ptD}{}^{0}$ candidates, where a single
$D^{0}\rightarrow K^{-}\pi^{+}\pi^{-}\pi^{+}$ decay in the final state is
allowed, which approximately doubles the total signal efficiency. A refit of
signal candidates with $D$ mass and vertex constraints is performed to improve
the $B$ mass resolution.
Due to similar kinematics of the $D^{+}\rightarrow K^{-}\pi^{+}\pi^{+}$,
$D^{+}_{s}\rightarrow K^{+}K^{-}\pi^{+}$ and
$\Lambda_{c}^{+}\rightarrow pK^{-}\pi^{+}$ decays, there is cross-feed
between various $b$-hadron decays that have two charm particles in the final
state. Cross-feed between $D^{+}$ and $D^{+}_{s}$ occurs when the
$K^{-}\pi^{+}h^{+}$ invariant mass is within 25
${\mathrm{\,Me\kern-1.00006ptV\\!/}c^{2}}$ ($\sim 3$ times the experimental
resolution) of both the $D^{+}$ and $D^{+}_{s}$ masses under the
$h^{+}=\pi^{+}$ and $h^{+}=K^{+}$ hypotheses, respectively. In such cases, an
arbitration is performed as follows: if either $|M(K^{+}K^{-})-m_{\phi}|<10$
${\mathrm{\,Me\kern-1.00006ptV\\!/}c^{2}}$ or $h^{+}$ satisfies a stringent
kaon particle identification (PID) requirement, the $D$ candidate is assigned
to be a $D^{+}_{s}$ meson. Conversely, if $h^{+}$ passes a stringent pion PID
requirement, the $D$ candidate is taken to be a $D^{+}$ meson. Candidates that
do not pass either of these selections are rejected. A similar veto is applied
to $D^{+}$ and $D^{+}_{s}$ decays that are consistent with the
$\Lambda_{c}^{+}\rightarrow pK^{-}\pi^{+}$ decay hypothesis if the proton
is misidentified as a $\pi^{+}$ or $K^{+}$, respectively. The efficiencies of
these $D$ selections are determined using simulated signal decays to model the
kinematics of the decay and $D^{*+}\rightarrow D^{0}\pi^{+}$ calibration data
for the PID efficiencies. Their values are given in Table 1.
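As a rough illustration of this arbitration, a minimal sketch is given below; the PID discriminants and threshold values are hypothetical placeholders, not those used in the analysis.

```python
# Illustrative sketch of the D+ / Ds+ cross-feed arbitration described above.
# Masses in MeV/c^2; the PID thresholds are placeholders, not the analysis values.
M_DPLUS, M_DS, M_PHI = 1869.7, 1968.3, 1019.5

def arbitrate(m_as_dplus, m_as_ds, m_kk, kaon_pid, pion_pid):
    """Classify a K- pi+ h+ candidate as 'D+', 'Ds+' or None (rejected).

    m_as_dplus : K- pi+ h+ mass with h+ taken as a pion
    m_as_ds    : K- pi+ h+ mass with h+ taken as a kaon
    m_kk       : K+ K- mass under the kaon hypothesis for h+
    kaon_pid, pion_pid : PID discriminants for h+ (assumed conventions)
    """
    in_dplus = abs(m_as_dplus - M_DPLUS) < 25.0
    in_ds = abs(m_as_ds - M_DS) < 25.0
    if not (in_dplus and in_ds):
        # unambiguous candidate: keep whichever mass window (if any) it falls in
        return 'D+' if in_dplus else ('Ds+' if in_ds else None)
    # ambiguous candidate: apply the arbitration of Sect. 2
    if abs(m_kk - M_PHI) < 10.0 or kaon_pid > 5.0:   # stringent kaon PID (placeholder cut)
        return 'Ds+'
    if pion_pid > 5.0:                               # stringent pion PID (placeholder cut)
        return 'D+'
    return None
```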
To suppress contributions from non-$D\kern
1.99997pt\overline{\kern-1.99997ptD}{}^{\prime}$ final states, the
reconstructed $D$ decay vertex is required to be downstream of the
reconstructed $B$ decay vertex, and the $B$ and $D$ decay vertices are
required to have a vertex separation (VS) $\chi^{2}$ larger than two. Here,
the VS $\chi^{2}$ is the difference in $\chi^{2}$ between the nominal vertex
fit and a vertex fit where the $D$ is assumed to have zero lifetime. The
efficiencies of this set of requirements are obtained from simulation and are
included in Table 1.
To further improve the purity of the $B\rightarrow D\kern
1.99997pt\overline{\kern-1.99997ptD}{}^{\prime}$ samples, a boosted decision
tree (BDT) discriminant is used to distinguish signal $D$ mesons from
backgrounds [33, 34]. The BDT uses five variables for the $D$ meson and 23 for
each of its children. The variables include kinematic quantities, track
quality, and vertex and PID information. The signal and background
distributions used to train the BDT are obtained from $\kern
1.79993pt\overline{\kern-1.79993ptB}{}^{0}\rightarrow D^{+}\pi^{-}$,
$B^{-}\rightarrow D^{0}\pi^{-}$ and $\kern
1.79993pt\overline{\kern-1.79993ptB}{}^{0}_{s}\rightarrow D^{+}_{s}\pi^{-}$
decays from data. The signal distributions are background subtracted using
weights [35] obtained from a fit to the $B$ candidate invariant mass
distribution. The background distributions are taken from the high $B$ mass
sidebands in the same data sample.
It is found that making a requirement on the product of the two $D$ meson BDT
responses provides better discrimination than applying one to each BDT
response individually. The optimal BDT requirement in each decay is chosen by
maximizing $N_{\rm S}/\sqrt{N_{\rm S}+N_{\rm B}}$. The number of signal
events, $N_{\rm S}$, is computed using the known (or estimated, if unknown)
branching fractions, selection efficiencies from simulated events, and the BDT
efficiencies from the $\kern
1.79993pt\overline{\kern-1.79993ptB}{}^{0}\rightarrow D^{+}\pi^{-}$,
$B^{-}\rightarrow D^{0}\pi^{-}$ and $\kern
1.79993pt\overline{\kern-1.79993ptB}{}^{0}_{s}\rightarrow D^{+}_{s}\pi^{-}$
calibration samples, reweighted to account for small differences in kinematics
between the calibration and signal samples. The number, $N_{\rm B}$, is the
expected background yield for a given BDT requirement. The efficiencies
associated with the optimal BDT cut values, determined from an independent
subset of the $B\rightarrow D\pi^{-}$ data, are listed in Table 1.
Correlations between the BDT values for the two $D$ mesons are taken into
account.
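A minimal sketch of this optimization, assuming efficiency curves derived from calibration-like samples and externally supplied expected yields, is shown below; the inputs are illustrative stand-ins rather than the analysis values.

```python
import numpy as np

def optimize_bdt_cut(sig_response, bkg_response, n_sig_expected, n_bkg_expected):
    """Scan a cut on the product of the two D-meson BDT responses and return the
    value maximizing N_S / sqrt(N_S + N_B)."""
    cuts = np.linspace(-1.0, 1.0, 201)
    best_cut, best_fom = None, -np.inf
    for c in cuts:
        eff_s = np.mean(sig_response > c)   # signal efficiency from calibration sample
        eff_b = np.mean(bkg_response > c)   # background efficiency from sidebands
        n_s = n_sig_expected * eff_s
        n_b = n_bkg_expected * eff_b
        if n_s + n_b == 0:
            continue
        fom = n_s / np.sqrt(n_s + n_b)
        if fom > best_fom:
            best_cut, best_fom = c, fom
    return best_cut, best_fom

# Toy usage with stand-in BDT-product distributions.
rng = np.random.default_rng(1)
sig = rng.normal(0.4, 0.3, 10_000)
bkg = rng.normal(-0.4, 0.3, 10_000)
print(optimize_bdt_cut(sig, bkg, n_sig_expected=450, n_bkg_expected=2000))
```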
For the purpose of measuring ${\cal{B}}(\kern
1.79993pt\overline{\kern-1.79993ptB}{}^{0}_{s}\rightarrow
D^{+}_{s}D^{-}_{s})/{\cal{B}}(B^{0}\rightarrow D^{+}_{s}D^{-})$, only loose
BDT requirements are imposed since the expected yields are relatively large.
On the other hand, for ${\cal{B}}(\kern
1.79993pt\overline{\kern-1.79993ptB}{}^{0}_{s}\rightarrow
D^{+}_{s}D^{-})/{\cal{B}}(B^{0}\rightarrow D^{+}_{s}D^{-})$, the expected
signal yield of $\kern
1.79993pt\overline{\kern-1.79993ptB}{}^{0}_{s}\rightarrow D^{+}_{s}D^{-}$
decays is small; in this case both the signal and normalization modes are
required to pass the same tighter BDT requirement. The different BDT
selections applied to the $B^{0}\rightarrow D^{+}_{s}D^{-}$ decay are referred
to as the “loose selection” and the “tight selection.” Since the final state
is identical for the tight selection, the BDT efficiency cancels in the ratio
of branching fractions, and is not included in Table 1.
Table 1: Individual contributions to the efficiency for selecting the various $B\rightarrow D\overline{D}{}^{\prime}$ final states. Shown are the efficiencies to reconstruct and trigger on the final state, and to pass the charm cross-feed veto, the VS $\chi^{2}$ and BDT selection requirements. The total selection efficiency is the product of these four values. The relative uncertainty on the selection efficiency for each decay mode due to the finite simulated sample sizes is 2%. Entries with a dash indicate that the efficiency factor is not applicable.

Decay mode | Rec.$\times$Trig. (%) | Cross-feed veto (%) | VS $\chi^{2}$ (%) | BDT (%)
---|---|---|---|---
$\overline{B}{}^{0}_{s}\rightarrow D^{+}_{s}D^{-}_{s}$ | 0.140 | 88.4 | 75.4 | 97.5
$B^{0}\rightarrow D^{+}_{s}D^{-}$ (loose selection) | 0.130 | 77.8 | 82.9 | 100.0
$\overline{B}{}^{0}_{(s)}\rightarrow D^{0}\overline{D}{}^{0},~(K^{-}\pi^{+},K^{+}\pi^{-})$ | 0.447 | $-$ | 73.7 | 57.8
$\overline{B}{}^{0}_{(s)}\rightarrow D^{0}\overline{D}{}^{0},~(K^{-}\pi^{+},K^{+}\pi^{-}\pi^{+}\pi^{-})$ | 0.128 | $-$ | 74.6 | 63.6
$B^{-}\rightarrow D^{0}D^{-}_{s}$ | 0.238 | 92.5 | 75.0 | 99.2
For $\kern 1.79993pt\overline{\kern-1.79993ptB}{}^{0}_{(s)}\rightarrow
D^{0}\kern 1.99997pt\overline{\kern-1.99997ptD}{}^{0}$ candidates, a peaking
background from $B\rightarrow D^{*+}\pi^{-}\rightarrow(D^{0}\pi^{+})\pi^{-}$
decays, where the $\pi^{+}$ is misidentified as a $K^{+}$, is observed. This
contribution is removed by requiring the mass difference,
$M(K^{-}\pi^{+}\pi^{+})-M(K^{-}\pi^{+})>150~{}{\mathrm{\,Me\kern-1.00006ptV\\!/}c^{2}}$,
where the $K^{+}$ in the reconstructed decay is taken to be a $\pi^{+}$. After
the final selection, around $2\%$ of events in the $\kern
1.79993pt\overline{\kern-1.79993ptB}{}^{0}_{s}\rightarrow D^{+}_{s}D^{-}_{s}$
decay mode contain multiple candidates; for all other modes the multiple
candidate rate is below $1\%$. All candidates are kept for the final analysis.
## 3 Signal and background shapes
The $B\rightarrow D\kern 1.99997pt\overline{\kern-1.99997ptD}{}^{\prime}$
signal shapes are all similar after the $D$ mass and vertex constraints. The
signal shape is parameterized as the sum of two Crystal Ball (CB) functions
[36], which account for non-Gaussian tails on both sides of the signal peak.
The asymmetric shapes account for both non-Gaussian mass resolution effects
(on both sides) and energy loss due to final state radiation. The two CB
shapes are constrained to have equal area and a common mean. Separate sets of
shape parameters are determined for $B^{0}\rightarrow D^{+}_{s}D^{-}$, $\kern
1.79993pt\overline{\kern-1.79993ptB}{}^{0}_{s}\rightarrow D^{+}_{s}D^{-}_{s}$
and $B^{-}\rightarrow D^{0}D^{-}_{s}$ using simulated signal decays. In the
fits to data, the signal shape parameters are fixed to the simulated values,
except for a smearing factor that is added in quadrature to the widths from
simulation. This number is allowed to vary independently in each fit, but is
consistent with about 4.6 ${\mathrm{\,Me\kern-1.00006ptV\\!/}c^{2}}$ across
all modes, resulting in a mass resolution of about 9
${\mathrm{\,Me\kern-1.00006ptV\\!/}c^{2}}$. For the rarer
$\overline{B}{}^{0}_{(s)}\rightarrow D^{0}\overline{D}{}^{0}$ and
$\overline{B}{}^{0}_{(s)}\rightarrow D^{+}D^{-}$ decay modes, the
$\overline{B}{}^{0}_{s}\rightarrow D^{+}_{s}D^{-}_{s}$ signal shape parameters
are used. In determining the
signal significances, the signal shape is fixed to that for $\kern
1.79993pt\overline{\kern-1.79993ptB}{}^{0}_{s}\rightarrow D^{+}_{s}D^{-}_{s}$,
including an additional smearing of 4.6
${\mathrm{\,Me\kern-1.00006ptV\\!/}c^{2}}$. The impact of using the
$B^{0}\rightarrow D^{+}_{s}D^{-}$ or $B^{-}\rightarrow D^{0}D^{-}_{s}$ signal
shapes on the signal significances is negligible.
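A minimal sketch of this double Crystal Ball parameterization is given below. The tail parameters are hypothetical placeholders; the $4.6~{\rm MeV}/c^{2}$ smearing and the resulting resolution of about $9~{\rm MeV}/c^{2}$ follow the numbers quoted above.

```python
import numpy as np

def crystal_ball(x, mean, sigma, alpha, n):
    """Unnormalized Crystal Ball shape: Gaussian core with a power-law tail on
    the low side for alpha > 0, mirrored to the high side for alpha < 0."""
    t = (x - mean) / sigma
    if alpha < 0:
        t, alpha = -t, -alpha
    tail_base = np.maximum(n / alpha - alpha - t, 1e-12)   # keep the power well defined
    gauss = np.exp(-0.5 * t ** 2)
    tail = (n / alpha) ** n * np.exp(-0.5 * alpha ** 2) / tail_base ** n
    return np.where(t > -alpha, gauss, tail)

def signal_shape(x, mean, sigma_sim, smear, alpha_lo, n_lo, alpha_hi, n_hi):
    """Sum of two Crystal Ball functions with a common mean and equal area, one
    tail on each side; the smearing factor is added in quadrature to the width
    taken from simulation, as described in the text."""
    sigma = np.hypot(sigma_sim, smear)
    return 0.5 * (crystal_ball(x, mean, sigma, alpha_lo, n_lo)
                  + crystal_ball(x, mean, sigma, -alpha_hi, n_hi))

# Illustrative evaluation; sigma_sim and the tail parameters are placeholders.
m = np.linspace(5300.0, 5440.0, 281)
shape = signal_shape(m, mean=5366.8, sigma_sim=7.7, smear=4.6,
                     alpha_lo=1.5, n_lo=3.0, alpha_hi=2.0, n_hi=5.0)
```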
Several specific backgrounds contribute to the $D\kern
1.99997pt\overline{\kern-1.99997ptD}{}^{\prime}$ mass spectra. In particular,
decays such as $B\rightarrow D^{(*)}\kern
1.99997pt\overline{\kern-1.99997ptD}{}^{*}$, where the $D^{*}$ mesons decay
through pion or photon emission, produce distinct structures in all decays
under consideration. The shapes of these backgrounds are derived from
simulation, which are corrected for known resolution differences between data
and simulated events, and then fixed in fits to the data. The relative yield
of the two peaks in the characteristic structure from the decay
$D^{*}\rightarrow D^{0}\pi$ is allowed to vary freely, to enable better
modeling of the background in the low mass region. Since this mass region is
significantly below the signal peaks, the impact on the signal yield
determinations is negligible.
A source of peaking background that contributes to the $B\rightarrow
DD^{+}_{s}$ modes is the ${B\rightarrow
D\overline{K}{}^{*0}K^{+}\rightarrow DK^{-}\pi^{+}K^{+}}$ decay, where the
$\overline{K}{}^{*0}K^{+}$ pair is not produced in a $D^{+}_{s}$ decay.
Although the branching fractions for these decays [37] are
about twice as large as that of the $B\rightarrow DD^{+}_{s}\rightarrow
DK^{+}K^{-}\pi^{+}$ decay channel, the 25
${\mathrm{\,Me\kern-1.00006ptV\\!/}c^{2}}$ mass window around the known
$D^{+}_{s}$ mass and the VS $\chi^{2}$ $>2$ requirement reduce this
contribution to about 1% of the signal yield. This expectation is corroborated
by studying the $D^{+}_{s}$ candidate mass sidebands. The shape of this
background is obtained from simulation, and is described by a single Gaussian
function which has a width about 2.5 times larger than that of the signal
decay and peaks at the nominal $B$ meson mass.
After the charm cross-feed vetoes (see Sect. 2), the cross-feed rate from
$B^{0}\rightarrow D^{+}_{s}D^{-}$ decays into the $\kern
1.79993pt\overline{\kern-1.79993ptB}{}^{0}_{s}\rightarrow D^{+}_{s}D^{-}_{s}$
sample is $(0.7\pm 0.2)\%$. The shape of this misidentification background is
obtained from simulation. A similar cross-feed background contribution from
$\Lambda_{b}^{0}\rightarrow\Lambda_{c}^{+}D^{-}_{s}$ decays is
also expected due to events passing the $\Lambda_{c}^{+}$ veto.
Taking into account the observed yields of these decays in data, we fix the
$B^{0}\rightarrow D^{+}_{s}D^{-}$ and
$\Lambda_{b}^{0}\rightarrow\Lambda_{c}^{+}D^{-}_{s}$ cross-feed yields to 35 and 15 events,
respectively. Investigation of the $D$ mass sidebands reveals no additional
contributions from non-$D\kern
1.99997pt\overline{\kern-1.99997ptD}{}^{\prime}$ backgrounds.
The combinatorial background shape is described by an exponential function
whose slope is determined from wrong-sign candidates. Wrong-sign candidates
include the $D_{s}^{+}D_{s}^{+}$, $D^{0}D^{0}$, or $\kern
1.99997pt\overline{\kern-1.99997ptD}{}^{0}(K^{+}\pi^{-})D^{-}_{s}$ final
states, in which no signal excesses should be present (neglecting the small
contribution from the doubly Cabibbo suppressed $B^{-}\rightarrow
D^{0}(K^{+}\pi^{-})D^{-}_{s}$ decay). For the $\kern
1.79993pt\overline{\kern-1.79993ptB}{}^{0}_{(s)}\rightarrow D^{+}D^{-}$ decay,
the exponential shape parameter is allowed to vary in the fit due to an
insufficient number of wrong-sign $D^{+}D^{+}$ candidates.
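The slope determination can be illustrated as an unbinned maximum-likelihood fit of an exponential to the wrong-sign sample; the sketch below is a simplified stand-in for the analysis fitter, run here on a toy data set.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def fit_background_slope(masses, lo, hi):
    """Unbinned maximum-likelihood fit of an exponential slope (per MeV/c^2) to
    candidates in the window [lo, hi]."""
    x = np.asarray(masses)
    x = x[(x >= lo) & (x <= hi)] - lo
    width = hi - lo

    def nll(lam):
        norm = (np.exp(lam * width) - 1.0) / lam   # pdf normalization on [0, width]
        return -lam * x.sum() + x.size * np.log(norm)

    res = minimize_scalar(nll, bounds=(-0.05, -1e-6), method="bounded")
    return res.x

# Toy usage: stand-in wrong-sign sample with a slope of roughly -0.002 per MeV/c^2.
rng = np.random.default_rng(7)
toy = 5000.0 + rng.exponential(500.0, 5000)
print(fit_background_slope(toy, 5000.0, 5600.0))
```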
## 4 Fit results
Figure 2 shows the invariant mass spectra for $\kern
1.79993pt\overline{\kern-1.79993ptB}{}^{0}_{s}\rightarrow D^{+}_{s}D^{-}_{s}$
and $B^{0}\rightarrow D^{+}_{s}D^{-}$ candidates. The results of unbinned
extended maximum likelihood fits to the distributions are overlaid with the
signal and background components indicated in the legends. Signal yields of
$451\pm 23$ $\kern 1.79993pt\overline{\kern-1.79993ptB}{}^{0}_{s}\rightarrow
D^{+}_{s}D^{-}_{s}$ and $5157\pm 64$ $B^{0}\rightarrow D^{+}_{s}D^{-}$ decays
are observed.
Figure 2: Invariant mass distributions for (left) $\kern
1.61993pt\overline{\kern-1.61993ptB}{}^{0}_{s}\rightarrow D^{+}_{s}D^{-}_{s}$
and (right) $B^{0}\rightarrow D^{+}_{s}D^{-}$ candidates in the data with the
loose BDT selection applied to the latter. The signal and background
components are indicated in the legend. The
$\Lambda_{b}^{0}\rightarrow\mathchar 28931\relax_{c}^{+}D^{-}_{s}$, $\kern
1.61993pt\overline{\kern-1.61993ptB}{}^{0}_{s}\rightarrow
D^{+}_{s}K^{-}K^{+}\pi^{-}$ and $B^{0}\rightarrow D^{-}K^{+}K^{-}\pi^{+}$
background components are too small to be seen, and are excluded from the
legends.
Figure 3 shows the invariant mass spectrum for $B^{0}\rightarrow
D^{+}_{s}D^{-}$ and $\kern
1.79993pt\overline{\kern-1.79993ptB}{}^{0}_{s}\rightarrow D^{+}_{s}D^{-}$
candidates, where the tight BDT selection requirements have been applied as
discussed previously. We observe $36\pm 6$ $\kern
1.79993pt\overline{\kern-1.79993ptB}{}^{0}_{s}\rightarrow D^{+}_{s}D^{-}$
signal decays, with $2832\pm 53$ events in the $B^{0}\rightarrow
D^{+}_{s}D^{-}$ normalization mode. The statistical significance of the $\kern
1.79993pt\overline{\kern-1.79993ptB}{}^{0}_{s}\rightarrow D^{+}_{s}D^{-}$
signal corresponds to $10\sigma$, computed as
$\sqrt{-2\ln(\mathcal{L}_{0}/\mathcal{L}_{\rm max})}$, where
$\mathcal{L}_{\rm max}$ and $\mathcal{L}_{0}$ are the fit likelihoods with the signal yields
allowed to vary and fixed to zero, respectively. Variations in the signal and
background model have only a marginal impact on the signal significance. The
$\kern 1.79993pt\overline{\kern-1.79993ptB}{}^{0}_{s}\rightarrow
D^{-}D^{+}_{s}$ decay is thus observed for the first time.
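For reference, this significance estimate reduces to a likelihood-ratio test; a minimal sketch with hypothetical negative log-likelihood values is shown below.

```python
import math

def significance(nll_signal_fixed_to_zero, nll_max):
    """sqrt(-2 ln(L0 / Lmax)) = sqrt(2 * (NLL0 - NLLmax)), with the signal yield
    fixed to zero in the numerator fit, as described in the text."""
    return math.sqrt(2.0 * (nll_signal_fixed_to_zero - nll_max))

# Hypothetical negative log-likelihood values, for illustration only.
print(significance(nll_signal_fixed_to_zero=1234.0, nll_max=1180.0))  # ~10.4 sigma
```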
Figure 3: Invariant mass distribution for $B^{0}\rightarrow D^{+}_{s}D^{-}$
and $\kern 1.61993pt\overline{\kern-1.61993ptB}{}^{0}_{s}\rightarrow
D^{+}_{s}D^{-}$ candidates in the data, with the tight BDT selection applied.
The distribution is plotted on a (left) linear and (right) logarithmic scale
to highlight the suppressed $\kern
1.61993pt\overline{\kern-1.61993ptB}{}^{0}_{s}\rightarrow D^{+}_{s}D^{-}$
signal. Signal and background components are indicated in the legend.
The invariant mass spectrum for $\kern
1.79993pt\overline{\kern-1.79993ptB}{}^{0}_{(s)}\rightarrow D^{+}D^{-}$
candidates is shown in Fig. 4 (left). Peaks are seen at both the $B^{0}$ and
$B^{0}_{s}$ meson masses, with yields of $165\pm 13$ and $43\pm 7$ signal
events, respectively. In the lower mass region, two prominent peaks from
$\kern 1.79993pt\overline{\kern-1.79993ptB}{}^{0}\rightarrow D^{*+}D^{-}$ and
$\kern 1.79993pt\overline{\kern-1.79993ptB}{}^{0}\rightarrow D^{+}D^{*-}$
decays are also evident. The significance of the $\kern
1.79993pt\overline{\kern-1.79993ptB}{}^{0}_{s}\rightarrow D^{+}D^{-}$ signal
yield is computed as described above, and corresponds to $11\sigma$,
establishing the first observation of this decay mode.
Figure 4 (right) shows the $D^{0}\kern
1.99997pt\overline{\kern-1.99997ptD}{}^{0}$ invariant mass distribution and
the results of the fit. Both ($K^{-}\pi^{+}$, $K^{+}\pi^{-}$) and
($K^{-}\pi^{+}$, $K^{+}\pi^{-}\pi^{+}\pi^{-}$) combinations are included. A
$\kern 1.79993pt\overline{\kern-1.79993ptB}{}^{0}_{s}\rightarrow D^{0}\kern
1.99997pt\overline{\kern-1.99997ptD}{}^{0}$ signal is seen with a significance
of 11$\sigma$, which establishes the first observation of this decay mode. The
data also show an excess of events at the $B^{0}$ mass. The significance of
that excess corresponds to 2.4$\sigma$, including both the statistical and
systematic uncertainty. The fitted yields in the $\kern
1.79993pt\overline{\kern-1.79993ptB}{}^{0}_{s}\rightarrow D^{0}\kern
1.99997pt\overline{\kern-1.99997ptD}{}^{0}$ and $\kern
1.79993pt\overline{\kern-1.79993ptB}{}^{0}\rightarrow D^{0}\kern
1.99997pt\overline{\kern-1.99997ptD}{}^{0}$ decay modes are $45\pm 8$ and
$13\pm 6$ events, respectively. If both the $\kern
1.79993pt\overline{\kern-1.79993ptB}{}^{0}_{s}\rightarrow D^{0}\kern
1.99997pt\overline{\kern-1.99997ptD}{}^{0}$ and $\kern
1.79993pt\overline{\kern-1.79993ptB}{}^{0}\rightarrow D^{0}\kern
1.99997pt\overline{\kern-1.99997ptD}{}^{0}$ decays proceed through
$W$-exchange diagrams, one would expect the signal yield in $\kern
1.79993pt\overline{\kern-1.79993ptB}{}^{0}\rightarrow D^{0}\kern
1.99997pt\overline{\kern-1.99997ptD}{}^{0}$ to be
$\sim(f_{d}/f_{s})\times|V_{cd}/V_{cs}|^{2}\simeq 0.2$ of the yield in $\kern
1.79993pt\overline{\kern-1.79993ptB}{}^{0}_{s}\rightarrow D^{0}\kern
1.99997pt\overline{\kern-1.99997ptD}{}^{0}$, where we have used
$|V_{cd}/V_{cs}|^{2}=0.054$ [18] and $f_{s}/f_{d}=0.256\pm 0.020$ [38]. The
fitted yields are consistent with this expectation. The decay
$B^{-}\rightarrow D^{0}D^{-}_{s}$ is used as the normalization channel for
both the $\kern 1.79993pt\overline{\kern-1.79993ptB}{}^{0}_{s}\rightarrow
D^{0}\kern 1.99997pt\overline{\kern-1.99997ptD}{}^{0}$ and $\kern
1.79993pt\overline{\kern-1.79993ptB}{}^{0}\rightarrow D^{0}\kern
1.99997pt\overline{\kern-1.99997ptD}{}^{0}$ branching fraction measurements,
where only the $D^{0}\rightarrow K^{-}\pi^{+}$ decay mode is used. The fitted
invariant mass distribution for $B^{-}\rightarrow D^{0}D^{-}_{s}$ candidates
is shown in Fig. 5. The fitted signal yield is $5152\pm 73$ events.
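The expectation quoted above follows from simple arithmetic with the numbers given in the text, as the short check below illustrates.

```python
# Cross-check of the expected B0 -> D0 D0bar yield relative to Bs -> D0 D0bar,
# using |Vcd/Vcs|^2 = 0.054 [18] and f_s/f_d = 0.256 [38].
vcd_over_vcs_sq = 0.054
fd_over_fs = 1.0 / 0.256
expected = fd_over_fs * vcd_over_vcs_sq
observed = 13.0 / 45.0
observed_err = observed * ((6.0 / 13.0) ** 2 + (8.0 / 45.0) ** 2) ** 0.5
print(f"expected N(B0)/N(Bs) ~ {expected:.2f}")                         # ~0.21
print(f"observed N(B0)/N(Bs) = {observed:.2f} +- {observed_err:.2f}")   # ~0.29 +- 0.14
```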
Figure 4: Invariant mass distributions for (left) $\kern
1.61993pt\overline{\kern-1.61993ptB}{}^{0}_{(s)}\rightarrow D^{+}D^{-}$ and
(right) $\kern 1.61993pt\overline{\kern-1.61993ptB}{}^{0}_{(s)}\rightarrow
D^{0}\kern 1.79997pt\overline{\kern-1.79997ptD}{}^{0}$ candidates in the data.
Signal and background components are indicated in the legend. Figure 5:
Invariant mass distribution for $B^{-}\rightarrow D^{0}D^{-}_{s}$ candidates
in the data. Signal and background components are indicated in the legend. The
$B^{-}\rightarrow D^{0}K^{-}K^{+}\pi^{-}$ background components are too small
to be seen, and are excluded from the legend.
The measured yields, $N_{B\rightarrow D\kern
1.39998pt\overline{\kern-1.39998ptD}{}^{\prime}}$, relevant for the branching
fraction measurements are summarized in Table 2. The branching fractions are
related to the measured yields by
$\displaystyle\frac{{\cal{B}}(\overline{B}{}^{0}_{s}\rightarrow D^{+}_{s}D^{-}_{s})}{{\cal{B}}(B^{0}\rightarrow D^{+}_{s}D^{-})}=\frac{f_{d}}{f_{s}}\cdot\epsilon_{\rm rel}^{B^{0}/B^{0}_{s}}\cdot\kappa\cdot\frac{{\cal{B}}(D^{+}\rightarrow K^{-}\pi^{+}\pi^{+})}{{\cal{B}}(D^{+}_{s}\rightarrow K^{+}K^{-}\pi^{+})}\cdot\frac{N_{\overline{B}{}^{0}_{s}\rightarrow D^{+}_{s}D^{-}_{s}}}{N_{B^{0}\rightarrow D^{+}_{s}D^{-}}},$ (1)
$\displaystyle\frac{{\cal{B}}(\overline{B}{}^{0}_{s}\rightarrow D^{+}_{s}D^{-})}{{\cal{B}}(B^{0}\rightarrow D^{+}_{s}D^{-})}=\frac{f_{d}}{f_{s}}\cdot\epsilon_{\rm rel}\cdot\frac{N_{\overline{B}{}^{0}_{s}\rightarrow D^{+}_{s}D^{-}}}{N_{B^{0}\rightarrow D^{+}_{s}D^{-}}},$ (2)
$\displaystyle\frac{{\cal{B}}(\overline{B}{}^{0}_{s}\rightarrow D^{+}D^{-})}{{\cal{B}}(\overline{B}{}^{0}\rightarrow D^{+}D^{-})}=\frac{f_{d}}{f_{s}}\cdot\epsilon_{\rm rel}\cdot\kappa\cdot\frac{N_{\overline{B}{}^{0}_{s}\rightarrow D^{+}D^{-}}}{N_{\overline{B}{}^{0}\rightarrow D^{+}D^{-}}},$ (3)
$\displaystyle\frac{{\cal{B}}(\overline{B}{}^{0}_{s}\rightarrow D^{0}\overline{D}{}^{0})}{{\cal{B}}(B^{-}\rightarrow D^{0}D^{-}_{s})}=\frac{f_{d}}{f_{s}}\cdot\epsilon_{\rm rel}^{\prime}\cdot\kappa\cdot\frac{N_{\overline{B}{}^{0}_{s}\rightarrow D^{0}\overline{D}{}^{0}}}{N_{B^{-}\rightarrow D^{0}D^{-}_{s}}},$ (4)
$\displaystyle\frac{{\cal{B}}(\overline{B}{}^{0}\rightarrow D^{0}\overline{D}{}^{0})}{{\cal{B}}(B^{-}\rightarrow D^{0}D^{-}_{s})}=\epsilon_{\rm rel}^{\prime}\cdot\frac{N_{\overline{B}{}^{0}\rightarrow D^{0}\overline{D}{}^{0}}}{N_{B^{-}\rightarrow D^{0}D^{-}_{s}}},$ (5)
$\displaystyle\frac{{\cal{B}}(B^{-}\rightarrow D^{0}D^{-}_{s})}{{\cal{B}}(B^{0}\rightarrow D^{+}_{s}D^{-})}=\epsilon_{\rm rel}^{B^{0}/B^{-}}\cdot\frac{{\cal{B}}(D^{+}\rightarrow K^{-}\pi^{+}\pi^{+})}{{\cal{B}}(D^{0}\rightarrow K^{-}\pi^{+})}\cdot\frac{N_{B^{-}\rightarrow D^{0}D^{-}_{s}}}{N_{B^{0}\rightarrow D^{+}_{s}D^{-}}}.$ (6)
Here, it is assumed that $B^{-}$ and $\kern
1.79993pt\overline{\kern-1.79993ptB}{}^{0}$ mesons are produced in equal
numbers. The relative efficiencies, $\epsilon_{\rm rel}$, are given in Table
2. They account for geometric acceptance, detection and trigger efficiencies,
and the additional VS $\chi^{2}$, BDT, and charm cross-feed veto requirements.
The first four of these relative efficiencies are obtained from simulation,
and the last two are data-driven. The indicated uncertainties on the relative
efficiencies are due only to the finite sizes of the simulated signal decays.
The average selection efficiency for $B^{-}\rightarrow D^{0}D^{-}_{s}$
relative to $\overline{B}{}^{0}_{(s)}\rightarrow D^{0}\overline{D}{}^{0}$ is
$\displaystyle\epsilon_{\rm rel}^{\prime}=\frac{\epsilon_{B^{-}\rightarrow D^{0}D^{-}_{s}}\,{\cal{B}}(D^{+}_{s}\rightarrow K^{+}K^{-}\pi^{+})\,{\cal{B}}(D^{0}\rightarrow K^{-}\pi^{+})}{\epsilon_{K\pi,K\pi}\,[{\cal{B}}(D^{0}\rightarrow K^{-}\pi^{+})]^{2}+2\,\epsilon_{K\pi\pi\pi,K\pi}\,{\cal{B}}(D^{0}\rightarrow K^{-}\pi^{+})\,{\cal{B}}(D^{0}\rightarrow K^{-}\pi^{+}\pi^{-}\pi^{+})},$ (7)
where the quantities $\epsilon_{B^{-}\rightarrow D^{0}D^{-}_{s}}=(0.166\pm
0.003)\%$, $\epsilon_{K\pi,K\pi}=(0.190\pm 0.003)\%$ and
$\epsilon_{K\pi\pi\pi,K\pi}=(0.061\pm 0.002)\%$ are the selection efficiencies
for the ${B^{-}\rightarrow D^{0}D^{-}_{s}}$,
${\overline{B}{}^{0}_{s}\rightarrow(D^{0}\rightarrow K^{-}\pi^{+},\overline{D}{}^{0}\rightarrow K^{+}\pi^{-})}$ and
${\overline{B}{}^{0}_{s}\rightarrow(D^{0}\rightarrow K^{-}\pi^{+},\overline{D}{}^{0}\rightarrow K^{+}\pi^{-}\pi^{+}\pi^{-})}$
decays, respectively. The $D$ branching
fractions, ${{\cal{B}}(D^{0}\rightarrow K^{-}\pi^{+})=(3.88\pm 0.05)\%}$,
${{\cal{B}}(D^{0}\rightarrow K^{-}\pi^{+}\pi^{-}\pi^{+})=(8.07\pm 0.20)\%}$,
${{\cal{B}}(D^{+}_{s}\rightarrow K^{+}K^{-}\pi^{+})=(5.49\pm 0.27)\%}$, and
${{\cal{B}}(D^{+}\rightarrow K^{-}\pi^{+}\pi^{+})=(9.13\pm 0.19)\%}$ are taken
from Ref. [18].
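As a numerical cross-check, Eqs. (7) and (4) can be evaluated directly with the efficiencies and branching fractions quoted above; small differences with respect to Table 2 reflect rounding of the inputs.

```python
# All efficiencies and D branching fractions as quoted in the text.
eps_b_to_d0ds = 0.166e-2
eps_kpi_kpi = 0.190e-2
eps_kpipipi_kpi = 0.061e-2
br_d0_kpi = 3.88e-2
br_d0_kpipipi = 8.07e-2
br_ds_kkpi = 5.49e-2

# Eq. (7): average relative efficiency eps_rel'
eps_rel_prime = (eps_b_to_d0ds * br_ds_kkpi * br_d0_kpi) / (
    eps_kpi_kpi * br_d0_kpi ** 2
    + 2.0 * eps_kpipipi_kpi * br_d0_kpi * br_d0_kpipipi
)
print(f"eps_rel' = {eps_rel_prime:.3f}")   # ~0.53, cf. 0.523 +- 0.016 in Table 2

# Eq. (4): branching fraction ratio from the fitted yields
fd_over_fs = 1.0 / 0.256
kappa = 1.058
n_bs_d0d0, n_bm_d0ds = 45.0, 5152.0
ratio = fd_over_fs * eps_rel_prime * kappa * n_bs_d0d0 / n_bm_d0ds
print(f"B(Bs -> D0 D0bar) / B(B- -> D0 Ds-) = {ratio:.3f}")   # ~0.019
```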
Table 2: Summary of the observed signal and normalization mode yields and their relative efficiencies, as used in the measurements of the ratios of branching fractions. The quoted uncertainties are statistical only.

Measurement | Signal yield | Norm. yield | Rel. eff. $\epsilon_{\rm rel}^{(\prime)}$
---|---|---|---
${\cal{B}}(\overline{B}{}^{0}_{s}\rightarrow D^{+}_{s}D^{-}_{s})/{\cal{B}}(B^{0}\rightarrow D^{+}_{s}D^{-})$ | $451\pm 23$ | $5157\pm 64$ | $0.928\pm 0.027$
${\cal{B}}(\overline{B}{}^{0}_{s}\rightarrow D^{+}_{s}D^{-})/{\cal{B}}(B^{0}\rightarrow D^{+}_{s}D^{-})$ | $36\pm 6$ | $2832\pm 53$ | 1.0
${\cal{B}}(\overline{B}{}^{0}_{s}\rightarrow D^{+}D^{-})/{\cal{B}}(\overline{B}{}^{0}\rightarrow D^{+}D^{-})$ | $43\pm 7$ | $165\pm 13$ | 1.0
${\cal{B}}(\overline{B}{}^{0}_{s}\rightarrow D^{0}\overline{D}{}^{0})/{\cal{B}}(B^{-}\rightarrow D^{0}D^{-}_{s})$ | $45\pm 8$ | $5152\pm 73$ | $0.523\pm 0.016$
${\cal{B}}(\overline{B}{}^{0}\rightarrow D^{0}\overline{D}{}^{0})/{\cal{B}}(B^{-}\rightarrow D^{0}D^{-}_{s})$ | $13\pm 6$ | $5152\pm 73$ | $0.523\pm 0.016$
${\cal{B}}(B^{-}\rightarrow D^{0}D^{-}_{s})/{\cal{B}}(B^{0}\rightarrow D^{+}_{s}D^{-})$ | $5152\pm 73$ | $5157\pm 64$ | $0.508\pm 0.011$
The factor $\kappa$ is a correction that accounts for the lower selection
efficiency associated with the shorter-lifetime $C\\!P$-even eigenstates of
the $B^{0}_{s}$ system compared to flavor-specific final states [14]. The
impact on the $B^{0}_{s}$ acceptance is estimated by convolving an exponential
distribution that has a 10% smaller lifetime than that in flavor-specific
decays with the simulated lifetime acceptance. The resulting correction is
$\kappa=1.058\pm 0.029$. In the $B^{0}$ sector, $\Delta\Gamma_{d}/\Gamma_{d}$
is below 1% [39], and the lifetime acceptance is well described by the
simulation.
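The structure of this correction can be illustrated with a toy calculation. The decay-time acceptance function below is a hypothetical stand-in for the simulated acceptance, so the numerical value is indicative only and should be compared with $\kappa=1.058\pm 0.029$ quoted above.

```python
import numpy as np

def kappa_correction(acceptance, tau_fs=1.5, delta=0.10, n=2_000_000, seed=0):
    """Toy estimate of the lifetime-acceptance correction: ratio of the average
    acceptance for a flavour-specific lifetime tau_fs (in ps) to that for a
    CP-even lifetime (1 - delta) * tau_fs."""
    rng = np.random.default_rng(seed)
    t_fs = rng.exponential(tau_fs, n)
    t_cp = rng.exponential((1.0 - delta) * tau_fs, n)
    return np.mean(acceptance(t_fs)) / np.mean(acceptance(t_cp))

# Hypothetical turn-on acceptance from the displacement requirements (illustration only).
acc = lambda t: t ** 2 / (t ** 2 + 0.3 ** 2)
print(kappa_correction(acc))   # a few per cent above unity for this toy acceptance
```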
The measured ratios of branching fractions are computed to be
$\displaystyle\frac{{\cal{B}}(\overline{B}{}^{0}_{s}\rightarrow D^{+}D^{-})}{{\cal{B}}(\overline{B}{}^{0}\rightarrow D^{+}D^{-})}=1.08\pm 0.20\,({\rm stat})\pm 0.10\,({\rm syst}),$
$\displaystyle\frac{{\cal{B}}(\overline{B}{}^{0}_{s}\rightarrow D^{+}_{s}D^{-})}{{\cal{B}}(B^{0}\rightarrow D^{+}_{s}D^{-})}=0.050\pm 0.008\,({\rm stat})\pm 0.004\,({\rm syst}),$
$\displaystyle\frac{{\cal{B}}(\overline{B}{}^{0}_{s}\rightarrow D^{0}\overline{D}{}^{0})}{{\cal{B}}(B^{-}\rightarrow D^{0}D^{-}_{s})}=0.019\pm 0.003\,({\rm stat})\pm 0.003\,({\rm syst}),$
$\displaystyle\frac{{\cal{B}}(\overline{B}{}^{0}\rightarrow D^{0}\overline{D}{}^{0})}{{\cal{B}}(B^{-}\rightarrow D^{0}D^{-}_{s})}=0.0014\pm 0.0006\,({\rm stat})\pm 0.0002\,({\rm syst})\quad[\,<0.0024~{\rm at}~90\%~{\rm CL}\,],$
$\displaystyle\frac{{\cal{B}}(\overline{B}{}^{0}_{s}\rightarrow D^{+}_{s}D^{-}_{s})}{{\cal{B}}(B^{0}\rightarrow D^{+}_{s}D^{-})}=0.56\pm 0.03\,({\rm stat})\pm 0.04\,({\rm syst}),$
$\displaystyle\frac{{\cal{B}}(B^{-}\rightarrow D^{0}D^{-}_{s})}{{\cal{B}}(B^{0}\rightarrow D^{+}_{s}D^{-})}=1.22\pm 0.02\,({\rm stat})\pm 0.07\,({\rm syst}).$
For ${{\cal{B}}(\kern
1.79993pt\overline{\kern-1.79993ptB}{}^{0}_{s}\rightarrow D^{0}\kern
1.99997pt\overline{\kern-1.99997ptD}{}^{0})/{\cal{B}}(B^{-}\rightarrow
D^{0}D^{-}_{s})}$, the results obtained using the $D^{0}(K^{-}\pi^{+})\kern
1.99997pt\overline{\kern-1.99997ptD}{}^{0}(K^{+}\pi^{-}\pi^{+}\pi^{-})$ and
$D^{0}(K^{-}\pi^{+})\kern
1.99997pt\overline{\kern-1.99997ptD}{}^{0}(K^{+}\pi^{-})$ final states differ
by less than one standard deviation. For the $\kern
1.79993pt\overline{\kern-1.79993ptB}{}^{0}\rightarrow D^{0}\kern
1.99997pt\overline{\kern-1.99997ptD}{}^{0}$ decay, we provide both the central
value and the 90% confidence level (CL) upper limit. The upper limit is
obtained by convolving the fitted likelihood with a Gaussian function whose
width is the total systematic error, and integrating over the physical region.
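A minimal sketch of this limit-setting procedure, on a grid of branching-fraction values and assuming a Gaussian toy likelihood, is given below; the actual limit uses the fitted likelihood shape.

```python
import numpy as np

def upper_limit_90(branching_values, neg_log_like, syst_width):
    """Convolve the likelihood on a grid with a Gaussian of width equal to the
    total systematic uncertainty and return the 90% quantile over the physical
    region (branching fraction >= 0)."""
    like = np.exp(-(neg_log_like - neg_log_like.min()))
    dx = branching_values[1] - branching_values[0]
    kernel_x = np.arange(-5 * syst_width, 5 * syst_width + dx, dx)
    kernel = np.exp(-0.5 * (kernel_x / syst_width) ** 2)
    smeared = np.convolve(like, kernel, mode="same")
    physical = branching_values >= 0
    cdf = np.cumsum(smeared[physical])
    cdf /= cdf[-1]
    return branching_values[physical][np.searchsorted(cdf, 0.90)]

# Toy usage: Gaussian likelihood centred at 0.0014 with statistical width 0.0006.
x = np.linspace(-0.005, 0.01, 1501)
nll = 0.5 * ((x - 0.0014) / 0.0006) ** 2
print(upper_limit_90(x, nll, syst_width=0.0002))  # ~0.0022 for this Gaussian toy
```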
## 5 Systematic uncertainties
A number of systematic uncertainties contribute to the measurements of the
ratios of branching fractions. The sources and their values are summarized in
Table 3. The dominant source of uncertainty on the branching fraction ratios
comes from the $b$ fragmentation fraction ratio, $f_{d}/f_{s}$, which has a
total uncertainty of 7.8% [38], of which 5.3% is from the ratio of branching
fractions ${\cal{B}}(D^{+}_{s}\rightarrow
K^{+}K^{-}\pi^{+})/{\cal{B}}(D^{+}\rightarrow K^{-}\pi^{+}\pi^{+})$. For
clarity, we have removed that portion of the uncertainty from $f_{d}/f_{s}$,
and included its contribution in the row labeled ${\cal{B}}(D)$ in Table 3.
For ${\cal{B}}(\kern 1.79993pt\overline{\kern-1.79993ptB}{}^{0}_{s}\rightarrow
D^{+}_{s}D^{-}_{s})/{\cal{B}}(B^{0}\rightarrow D^{+}_{s}D^{-})$, the above
$D^{+}_{s}/D^{+}$ branching fraction ratio from $f_{d}/f_{s}$ cancels with the
corresponding inverted ratio in Eq. 1. On the other hand, in the ratio
${\cal{B}}(\kern 1.79993pt\overline{\kern-1.79993ptB}{}^{0}_{(s)}\rightarrow
D^{0}\kern
1.99997pt\overline{\kern-1.99997ptD}{}^{0})/{\cal{B}}(B^{-}\rightarrow
D^{0}D^{-}_{s})$, the $D^{+}_{s}\rightarrow K^{+}K^{-}\pi^{+}$ branching
fraction enters as the square, after considering the $D$ branching fractions
used in computing $f_{d}/f_{s}$ (see Eq. 4). As a result, the uncertainty from
${\cal{B}}(D^{+}_{s}\rightarrow K^{+}K^{-}\pi^{+})$ contributes 9.8% to the
total uncertainty on ${\cal{B}}(\kern
1.79993pt\overline{\kern-1.79993ptB}{}^{0}_{(s)}\rightarrow D^{0}\kern
1.99997pt\overline{\kern-1.99997ptD}{}^{0})/{\cal{B}}(B^{-}\rightarrow
D^{0}D^{-}_{s})$; smaller contributions from the limited knowledge of
${\cal{B}}(D^{0}\rightarrow K^{-}\pi^{+})$ [1.3%], ${\cal{B}}(D^{0}\rightarrow
K^{-}\pi^{+}\pi^{-}\pi^{+})$ [2.5%] and ${\cal{B}}(D^{+}\rightarrow
K^{-}\pi^{+}\pi^{+})$ [2.1%] are also included in the ${\cal{B}}(D)$
uncertainties.
Another significant uncertainty results from the precision on $b$-hadron
lifetimes and decays of $B^{0}$ and $B^{0}_{s}$ to $C\\!P$ eigenstates. Using
the measured value of the width difference, $\Delta\Gamma_{s}=0.116\pm
0.018\pm 0.006~{}{\rm\,ps}^{-1}$ [40] we conservatively assume the
$C\\!P$-even lifetime to be in the range from 0.85 to 0.95 times the flavor-
specific decay lifetime. With this allowed range a 2.9% uncertainty on the
efficiencies for $\kern 1.79993pt\overline{\kern-1.79993ptB}{}^{0}_{s}$ decays
to $C\\!P$ eigenstates is found. The average $B^{0}_{s}$ lifetime is known
only to a precision of 3%, which leads to a 1.5% uncertainty on the selection
efficiencies for $B^{0}_{s}$ decays to flavor-specific final states. The
$B^{0}$ and $B^{-}$ lifetimes are known with sufficient precision that the
associated uncertainty is negligible.
Several of the efficiency factors are estimated from simulation. Most, but not
all, of the associated systematic uncertainties cancel due to the similar or
identical final states for the signal and normalization modes. For modes with
an unequal number of tracks in the final state, a 1% uncertainty due to small
differences in the IP resolution between data and simulation is assigned. The
efficiency of the VS $\chi^{2}$ requirement is checked using the large
$B^{0}\rightarrow D^{+}_{s}D^{-}$ signal in data, and the agreement to within
1% with the efficiency from simulation is the assigned uncertainty. For
${\cal{B}}(B^{-}\rightarrow D^{0}D^{-}_{s})/{\cal{B}}(B^{0}\rightarrow
D^{+}_{s}D^{-})$, a 1% uncertainty is attributed to the efficiency of track
reconstruction. For ${\cal{B}}(\kern
1.79993pt\overline{\kern-1.79993ptB}{}^{0}_{s}\rightarrow D^{0}\kern
1.99997pt\overline{\kern-1.99997ptD}{}^{0})/{\cal{B}}(B^{-}\rightarrow
D^{0}D^{-}_{s})$, the one fewer track in the $D^{0}(K\pi)\kern
1.99997pt\overline{\kern-1.99997ptD}{}^{0}(K\pi)$ final state is offset by the
one extra track in $D^{0}(K\pi)\kern
1.99997pt\overline{\kern-1.99997ptD}{}^{0}(K\pi\pi\pi)$, relative to
$D^{0}(K\pi)D^{-}_{s}(KK\pi)$, leading to a negligible tracking uncertainty.
The mass resolution in data is slightly larger than in simulation, resulting
in slightly different efficiencies for the reconstructed $D^{0}$, $D^{+}$ and
$D^{+}_{s}$ invariant masses to lie within 25
${\mathrm{\,Me\kern-1.00006ptV\\!/}c^{2}}$ of their known masses. This
introduces a maximum of 1% uncertainty on the relative branching fractions. To
estimate the uncertainty on the trigger efficiencies determined from
simulation, the hadron trigger efficiency ratios were also determined using
data. These efficiencies were measured using trigger-unbiased samples of kaons
and pions identified in $D^{*+}\rightarrow D^{0}\pi^{+}$ decays. Using this
alternative procedure, we find that the simulated trigger efficiency ratios
have an uncertainty of 2%. The combined systematic uncertainties in the
efficiencies obtained from simulation are given in Table 3.
The limited sizes of the $B\rightarrow D\pi^{-}$ calibration samples lead to
uncertainties in the BDT efficiencies. The uncertainties on the ratios vary
from 1.0% to 2.0%. The uncertainty on the efficiency of the $D_{(s)}$ and
$\Lambda_{c}^{+}$ vetoes is dominated by the PID efficiencies,
but these only apply to the subset of $D$ candidates that fall within the mass
windows of two charm hadrons, e.g., both the $D^{+}$ and $D^{+}_{s}$ mesons,
which occurs about 20% of the time for $D^{+}_{s}$ decays. Taking this
fraction and the uncertainty in the PID efficiency into account, the veto
efficiencies are estimated to have uncertainties of 1.0% for the $D^{+}$ veto,
$0.5\%$ for the $D^{+}_{s}$ veto, and $0.3\%$ for the $\Lambda_{c}^{+}$ veto.
The fit model is validated using simulated experiments, and is found to be
unbiased. To assess the uncertainty due to the imperfect knowledge of the
various parameters used in the fit model, a number of variations are
investigated. The only non-negligible uncertainties are due to the
$B\rightarrow DK^{-}K^{+}\pi^{-}$ background contribution, which is varied
from 0% to 2%, and the cross-feed from ${\kern
1.79993pt\overline{\kern-1.79993ptB}{}^{0}_{s}\rightarrow D^{+}_{s}D^{-}}$
decays into the ${\kern
1.79993pt\overline{\kern-1.79993ptB}{}^{0}_{s}\rightarrow D^{+}_{s}D^{-}_{s}}$
sample. The uncertainty varies from 1.7% to 2.1%. For ${\cal{B}}(\kern
1.79993pt\overline{\kern-1.79993ptB}{}^{0}_{s}\rightarrow
D^{+}D^{-})/{\cal{B}}(\kern
1.79993pt\overline{\kern-1.79993ptB}{}^{0}\rightarrow D^{+}D^{-})$ and
${\cal{B}}(\kern 1.79993pt\overline{\kern-1.79993ptB}{}^{0}_{s}\rightarrow
D^{+}_{s}D^{-})/{\cal{B}}(B^{0}\rightarrow D^{+}_{s}D^{-})$, we assign an
uncertainty of 0.5%, which accounts for potentially small differences in the
signal shape for $\kern 1.79993pt\overline{\kern-1.79993ptB}{}^{0}$ and $\kern
1.79993pt\overline{\kern-1.79993ptB}{}^{0}_{s}$ decays (due to the
$B^{0}$-$B^{0}_{s}$ mass difference). Lastly, the finite size of the samples
of simulated decays contributes 3% uncertainty to all the measurements. In
total, the systematic uncertainties on the branching fraction ratios range
from 5.5% to 13.0%, as indicated in Table 3.
Table 3: Sources of systematic uncertainty and their values (in %) for the ratios of branching fractions of the indicated decays. For ${\cal{B}}(\overline{B}{}^{0}_{(s)}\rightarrow D^{0}\overline{D}{}^{0})/{\cal{B}}(B^{-}\rightarrow D^{0}D^{-}_{s})$, the error on $f_{d}/f_{s}$ only applies to the $\overline{B}{}^{0}_{s}\rightarrow D^{0}\overline{D}{}^{0}$ decay, as indicated by the values in parentheses.

Source | $\frac{\overline{B}{}^{0}_{s}\rightarrow D^{+}_{s}D^{-}_{s}}{B^{0}\rightarrow D^{+}_{s}D^{-}}$ | $\frac{\overline{B}{}^{0}_{s}\rightarrow D^{+}_{s}D^{-}}{B^{0}\rightarrow D^{+}_{s}D^{-}}$ | $\frac{\overline{B}{}^{0}_{s}\rightarrow D^{+}D^{-}}{\overline{B}{}^{0}\rightarrow D^{+}D^{-}}$ | $\frac{\overline{B}{}^{0}_{(s)}\rightarrow D^{0}\overline{D}{}^{0}}{B^{-}\rightarrow D^{0}D^{-}_{s}}$ | $\frac{B^{-}\rightarrow D^{0}D^{-}_{s}}{B^{0}\rightarrow D^{+}_{s}D^{-}}$
---|---|---|---|---|---
$f_{d}/f_{s}$ | 5.7 | 5.7 | 5.7 | $-$ (5.7) | $-$
${\cal{B}}(D)$ | $-$ | 5.3 | 5.3 | 10.2 | 2.5
$B$ meson lifetimes | 2.9 | 1.5 | 2.9 | 2.9 | $-$
Eff. from simulation | 2.4 | $-$ | $-$ | 2.2 | 2.6
BDT selection | 1.4 | $-$ | $-$ | 2.2 | 1.4
Cross-feed vetoes | 0.6 | $-$ | $-$ | 0.5 | 1.0
$D$ mass resolution | 1.0 | $-$ | $-$ | 1.0 | 1.0
Fit model | 2.1 | 0.5 | 0.5 | 1.7 | 2.1
Simulated sample size | 3.0 | 3.0 | 3.0 | 3.0 | 3.0
Total | 8.0 | 8.5 | 8.9 | 11.7 (13.0) | 5.5
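The totals in Table 3 follow from combining the individual contributions in quadrature, as the short check below illustrates for the first column.

```python
import math

def total_systematic(contributions):
    """Combine independent systematic uncertainties (in %) in quadrature."""
    return math.sqrt(sum(c * c for c in contributions))

# Bs -> Ds+ Ds- / B0 -> Ds+ D- column of Table 3
print(round(total_systematic([5.7, 2.9, 2.4, 1.4, 0.6, 1.0, 2.1, 3.0]), 1))  # 8.0
```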
## 6 Discussion and summary
First observations and measurements of the relative branching fractions for
the decays ${\kern 1.79993pt\overline{\kern-1.79993ptB}{}^{0}_{s}\rightarrow
D^{+}D^{-}}$, ${\kern
1.79993pt\overline{\kern-1.79993ptB}{}^{0}_{s}\rightarrow D^{+}_{s}D^{-}}$ and
${\kern 1.79993pt\overline{\kern-1.79993ptB}{}^{0}_{s}\rightarrow D^{0}\kern
1.99997pt\overline{\kern-1.99997ptD}{}^{0}}$ have been presented, along with
measurements of ${\cal{B}}(\kern
1.79993pt\overline{\kern-1.79993ptB}{}^{0}_{s}\rightarrow D^{+}_{s}D^{-}_{s})$
and ${\cal{B}}(B^{-}\rightarrow D^{0}D^{-}_{s})$. Taking the world average
value of ${\cal{B}}(B^{0}\rightarrow D^{+}_{s}D^{-})=(7.2\pm 0.8)\times
10^{-3}$ [18], the absolute branching fractions are
$\displaystyle{\cal{B}}(B^{-}\rightarrow D^{0}D^{-}_{s})=(8.6\pm 0.2\,({\rm stat})\pm 0.4\,({\rm syst})\pm 1.0\,({\rm norm}))\times 10^{-3},$
$\displaystyle{\cal{B}}(\overline{B}{}^{0}_{s}\rightarrow D^{+}_{s}D^{-}_{s})=(4.0\pm 0.2\,({\rm stat})\pm 0.3\,({\rm syst})\pm 0.4\,({\rm norm}))\times 10^{-3}.$
The third uncertainty reflects the precision of the branching fraction for the
normalization mode. These measurements are consistent with, and more precise
than, both the current world average measurements [18] as well as the more
recent measurement of ${{\cal{B}}(\kern
1.79993pt\overline{\kern-1.79993ptB}{}^{0}_{s}\rightarrow
D^{+}_{s}D^{-}_{s})}$ [41].
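The conversion from a measured ratio to an absolute branching fraction, with the statistical, systematic and normalization uncertainties propagated separately, amounts to the following sketch.

```python
def absolute_bf(ratio, ratio_stat, ratio_syst, bf_norm, bf_norm_err):
    """Scale a measured ratio of branching fractions by the normalization-mode
    branching fraction, keeping the three uncertainty sources separate."""
    bf = ratio * bf_norm
    return (bf, ratio_stat * bf_norm, ratio_syst * bf_norm, ratio * bf_norm_err)

# B(Bs -> Ds+ Ds-) from the measured ratio and B(B0 -> Ds+ D-) = (7.2 +- 0.8)e-3
print(absolute_bf(0.56, 0.03, 0.04, 7.2e-3, 0.8e-3))
# ~ (4.0, 0.2, 0.3, 0.4) x 10^-3 after rounding, as quoted above
```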
The measured value of ${\cal{B}}(\overline{B}{}^{0}_{s}\rightarrow D^{+}_{s}D^{-}_{s})/{\cal{B}}(B^{0}\rightarrow D^{+}_{s}D^{-})=0.55\pm 0.06$ is significantly lower than the naive expectation of unity for the case that both decays are dominated by tree amplitudes (see Fig. 1(a)), assuming small non-factorizable effects and comparable magnitudes of the $B_{(s)}\rightarrow D^{+}_{(s)}$ form factors [42]. Unlike $B^{0}\rightarrow D^{+}_{s}D^{-}$, the $\overline{B}{}^{0}_{s}\rightarrow D^{+}_{s}D^{-}_{s}$ decay receives a contribution from the $W$-exchange process (see Fig. 1(b)), suggesting that this amplitude may not be negligible. Interestingly, when comparing the $\overline{B}{}^{0}_{s}\rightarrow D^{+}_{s}D^{-}_{s}$ and $\overline{B}{}^{0}\rightarrow D^{+}D^{-}$ decays, which have the same set of amplitudes, one finds $|V_{cd}/V_{cs}|^{2}\cdot{\cal{B}}(\overline{B}{}^{0}_{s}\rightarrow D^{+}_{s}D^{-}_{s})/{\cal{B}}(\overline{B}{}^{0}\rightarrow D^{+}D^{-})\sim 1$. Using ${\cal{B}}(\overline{B}{}^{0}\rightarrow D^{+}D^{-})=(2.11\pm 0.31)\times 10^{-4}$ and ${\cal{B}}(B^{-}\rightarrow D^{0}D^{-}_{s})=(10.0\pm 1.7)\times 10^{-3}$ [18], the following values for the branching fractions are obtained

$${\cal{B}}(\overline{B}{}^{0}_{s}\rightarrow D^{+}D^{-})=(2.2\pm 0.4\,({\rm stat})\pm 0.2\,({\rm syst})\pm 0.3\,({\rm norm}))\times 10^{-4},$$
$${\cal{B}}(\overline{B}{}^{0}_{s}\rightarrow D^{0}\overline{D}{}^{0})=(1.9\pm 0.3\,({\rm stat})\pm 0.3\,({\rm syst})\pm 0.3\,({\rm norm}))\times 10^{-4},$$
$${\cal{B}}(\overline{B}{}^{0}\rightarrow D^{0}\overline{D}{}^{0})=(1.4\pm 0.6\,({\rm stat})\pm 0.2\,({\rm syst})\pm 0.2\,({\rm norm}))\times 10^{-5}.$$

The first of these results disfavors the predicted values for ${\cal{B}}(\overline{B}{}^{0}_{s}\rightarrow D^{+}D^{-})$ in Refs. [20, 21], which are about 5–15 times larger than our measured value. The measured branching fractions are about a factor of 2–3 larger than the predictions obtained by assuming that these decay amplitudes are dominated by rescattering [17]. As discussed above for the ${\cal{B}}(\overline{B}{}^{0}_{s}\rightarrow D^{+}_{s}D^{-}_{s})$ measurement, this may also suggest that the $W$-exchange amplitude contribution is not negligible in $B\rightarrow D\overline{D}{}^{\prime}$ decays. For precise quantitative comparisons of these $B^{0}_{s}$ branching fraction measurements to theoretical predictions, one should account for the different total widths of the $C\\!P$-even and $C\\!P$-odd final states [12].
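A quick numerical check of the approximate relation $|V_{cd}/V_{cs}|^{2}\cdot{\cal{B}}(\overline{B}{}^{0}_{s}\rightarrow D^{+}_{s}D^{-}_{s})/{\cal{B}}(\overline{B}{}^{0}\rightarrow D^{+}D^{-})\sim 1$ quoted above, using the central values given in the text and approximate CKM magnitudes (a rough sketch, not part of the analysis):

```python
# Rough check of the Cabibbo-scaling relation using central values only (no uncertainties).
V_cd, V_cs = 0.225, 0.973          # approximate CKM magnitudes [18]
B_Bs_DsDs = 4.0e-3                 # B(Bsbar0 -> Ds+ Ds-), this measurement
B_B0_DD = 2.11e-4                  # B(B0bar -> D+ D-), world average [18]

print((V_cd / V_cs) ** 2 * B_Bs_DsDs / B_B0_DD)   # ~ 1.0
```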
The Cabibbo suppressed $\overline{B}{}^{0}_{s}\rightarrow D^{+}_{s}D^{-}$ decay is also observed for the first time. Its absolute branching fraction is

$${\cal{B}}(\overline{B}{}^{0}_{s}\rightarrow D^{+}_{s}D^{-})=(3.6\pm 0.6\,({\rm stat})\pm 0.3\,({\rm syst})\pm 0.4\,({\rm norm}))\times 10^{-4}.$$

This value is consistent with the expected suppression of $|V_{cd}/V_{cs}|^{2}$.
The results reported here are based on an integrated luminosity of 1.0 $\rm
fb^{-1}$. A data sample with approximately 2.5 times larger yields in these
modes has already been collected in 2012, and larger samples are anticipated
in the next few years. These samples give good prospects for $C\\!P$-violation
measurements, lifetime studies, and obtaining a deeper understanding of the
decay mechanisms that contribute to $b$-hadron decays.
## Acknowledgements
We express our gratitude to our colleagues in the CERN accelerator departments
for the excellent performance of the LHC. We thank the technical and
administrative staff at the LHCb institutes. We acknowledge support from CERN
and from the national agencies: CAPES, CNPq, FAPERJ and FINEP (Brazil); NSFC
(China); CNRS/IN2P3 and Region Auvergne (France); BMBF, DFG, HGF and MPG
(Germany); SFI (Ireland); INFN (Italy); FOM and NWO (The Netherlands); SCSR
(Poland); ANCS/IFA (Romania); MinES, Rosatom, RFBR and NRC “Kurchatov
Institute” (Russia); MinECo, XuntaGal and GENCAT (Spain); SNSF and SER
(Switzerland); NAS Ukraine (Ukraine); STFC (United Kingdom); NSF (USA). We
also acknowledge the support received from the ERC under FP7. The Tier1
computing centres are supported by IN2P3 (France), KIT and BMBF (Germany),
INFN (Italy), NWO and SURF (The Netherlands), PIC (Spain), GridPP (United
Kingdom). We are thankful for the computing resources put at our disposal by
Yandex LLC (Russia), as well as to the communities behind the multiple open
source software packages that we depend on.
## References
* [1] N. Cabibbo, Unitary symmetry and leptonic decays, Phys. Rev. Lett. 10 (1963) 531
* [2] M. Kobayashi and T. Maskawa, CP violation in the renormalizable theory of weak interaction, Prog. Theor. Phys. 49 (1973) 652
* [3] BaBar collaboration, B. Aubert et al., Measurements of time-dependent CP asymmetries in $B^{0}\rightarrow D^{(*)+}D^{(*)-}$ decays, Phys. Rev. D79 (2009) 032002, arXiv:0808.1866
* [4] Belle collaboration, S. Fratina et al., Evidence for $C\\!P$ violation in $B^{0}\rightarrow D^{+}D^{-}$ decays, Phys. Rev. Lett. 98 (2007) 221802, arXiv:hep-ex/0702031
* [5] R. Aleksan et al., The decay $B\rightarrow D\kern 1.99997pt\overline{\kern-1.99997ptD}{}^{*}+D^{*}\kern 1.99997pt\overline{\kern-1.99997ptD}{}$ in the heavy quark limit and tests of CP violation, Phys. Lett. B317 (1993) 173
* [6] A. I. Sanda and Z. Z. Xing, Towards determining $\phi_{1}$ with $B\rightarrow D^{(*)}\kern 1.99997pt\overline{\kern-1.99997ptD}{}^{(*)}$, Phys. Rev. D56 (1997) 341, arXiv:hep-ph/9702297
* [7] Z. Z. Xing, Measuring CP violation and testing factorization in $B_{d}\rightarrow D^{*\pm}D^{*\mp}$ and $B_{s}\rightarrow D_{s}^{*\pm}D_{s}^{*\mp}$ decays, Phys. Lett. B443 (1998) 365, arXiv:hep-ph/9809496
* [8] Z. Z. Xing, CP violation in $B_{d}\rightarrow D^{+}D^{-},~{}D^{*+}D^{-},~{}D^{+}D^{*-}$, and $D^{*+}D^{*-}$ decays, Phys. Rev. D61 (2000) 014010, arXiv:hep-ph/9907455
* [9] X. Y. Pham and Z. Z. Xing, CP asymmetries in ${B_{d}\rightarrow D^{*+}D^{*-}}$ and ${B_{s}\rightarrow D^{*+}_{s}D^{*-}_{s}}$ decays: P wave dilution, penguin and rescattering effects, Phys. Lett. B458 (1999) 375, arXiv:hep-ph/9904360
* [10] A. Datta and D. London, Extracting $\gamma$ from $B_{d}^{0}(t)\rightarrow D^{(*)+}D^{(*)-}$ and $B_{d}^{0}\rightarrow D_{s}^{(*)+}D_{s}^{(*)-}$ decays, Phys. Lett. B584 (2004) 81, arXiv:hep-ph/0310252
* [11] R. Fleischer, Exploring CP violation and penguin effects through $B_{d}^{0}\rightarrow D^{+}D^{-}$ and $B_{s}^{0}\rightarrow D_{s}^{+}D_{s}^{-}$, Eur. Phys. J. C51 (2007) 849, arXiv:0705.4421
* [12] K. De Bruyn et al., Branching ratio measurements of $B_{s}$ decays, Phys. Rev. D86 (2012) 014027, arXiv:1204.1735
* [13] R. Fleischer and R. Knegjens, Effective lifetimes of $B_{s}$ decays and their constraints on the $B_{s}^{0}$-$\bar{B}_{s}^{0}$ mixing parameters, Eur. Phys. J. C71 (2011) 1789, arXiv:1109.5115
* [14] Y. Amhis et al., Averages of b-hadron, c-hadron, and tau-lepton properties as of early 2012, arXiv:1207.1158, More information is available at www.slac.stanford.edu/xorg/hfag
* [15] LHCb collaboration, R. Aaij et al., Measurement of the effective $B^{0}_{s}\rightarrow K^{+}K^{-}$ lifetime, Phys. Lett. B716 (2012) 393, arXiv:1207.5993
* [16] LHCb collaboration, R. Aaij et al., Measurement of the $B^{0}_{s}$ effective lifetime in the ${J\mskip-3.0mu/\mskip-2.0mu\psi\mskip 2.0mu}f_{0}(980)$ final state, Phys. Rev. Lett. 109 (2012) 152002, arXiv:1207.0878
* [17] M. Gronau, D. London, and J. Rosner, Rescattering contributions to rare B meson decays, arXiv:1211.5785
* [18] Particle Data Group, J. Beringer et al., Review of particle physics (RPP), Phys. Rev. D86 (2012) 010001
* [19] LHCb collaboration, R. Aaij et al., Measurement of $b$-hadron branching fractions for two-body decays into charmless charged hadrons, JHEP 10 (2012) 037, arXiv:1206.2794
* [20] Y. Li, C.-D. Lu, and Z.-J. Xiao, Rare decays $B^{0}\rightarrow D_{s}^{(*)+}D_{s}^{(*)-}$ and $B^{0}_{s}\rightarrow D^{(*)+}D^{(*)-}$ in perturbative QCD approach, J. Phys. G31 (2005) 273, arXiv:hep-ph/0308243
* [21] J. Eeg, S. Fajfer, and A. Hiorth, On the color suppressed decay modes $\kern 1.79993pt\overline{\kern-1.79993ptB}{}^{0}\rightarrow D^{+}_{s}D^{-}_{s}$ and $\kern 1.79993pt\overline{\kern-1.79993ptB}{}^{0}_{s}\rightarrow D^{+}D^{-}$, Phys. Lett. B570 (2003) 46, arXiv:hep-ph/0304112
* [22] LHCb collaboration, A. A. Alves Jr. et al., The LHCb detector at the LHC, JINST 3 (2008) S08005
* [23] M. Adinolfi et al., Performance of the LHCb RICH detector at the LHC, arXiv:1211.6759, (submitted to Eur. Phys. J. C)
* [24] R. Aaij et al., The LHCb trigger and its performance, arXiv:1211.3055, (submitted to JINST)
* [25] V. V. Gligorov and M. Williams, Efficient, reliable and fast high-level triggering using a bonsai boosted decision tree, arXiv:1210.6861, (submitted to JINST)
* [26] T. Sjöstrand, S. Mrenna, and P. Z. Skands, PYTHIA 6.4 Physics and manual, JHEP 05 (2006) 026, arXiv:hep-ph/0603175
* [27] I. Belyaev et al., Handling of the generation of primary events in Gauss, the LHCb simulation framework, Nuclear Science Symposium Conference Record (NSS/MIC) IEEE (2010) 1155
* [28] D. J. Lange, The EvtGen particle decay simulation package, Nucl. Instrum. Meth. A462 (2001) 152
* [29] P. Golonka and Z. Was, PHOTOS Monte Carlo: a precision tool for QED corrections in $Z$ and $W$ decays, Eur. Phys. J. C45 (2006) 97, arXiv:hep-ph/0506026
* [30] GEANT4 collaboration, J. Allison et al., Geant4 developments and applications, IEEE Trans. Nucl. Sci. 53 (2006) 270
* [31] GEANT4 collaboration, S. Agostinelli et al., GEANT4: A simulation toolkit, Nucl. Instrum. Meth. A506 (2003) 250
* [32] M. Clemencic et al., The LHCb simulation application, Gauss: design, evolution and experience, J. of Phys. : Conf. Ser. 331 (2011) 032023
* [33] I. Narsky, Optimization of signal significance by bagging decision trees, arXiv:physics/0507157
* [34] I. Narsky, StatPatternRecognition: a C++ package for statistical analysis of high energy physics data, arXiv:physics/0507143
* [35] M. Pivk and F. R. Le Diberder, sPlot: a statistical tool to unfold data distributions, Nucl. Instrum. Meth. A555 (2005) 356, arXiv:physics/0402083
* [36] T. Skwarnicki, A study of the radiative cascade transitions between the Upsilon-prime and Upsilon resonances, PhD thesis, Institute of Nuclear Physics, Krakow, 1986, DESY-F31-86-02
* [37] Belle Collaboration, A. Drutskoy et al., Observation of $B\rightarrow D^{(*)}K^{-}K^{0(*)}$ decays, Phys. Lett. B542 (2002) 171, arXiv:hep-ex/0207041
* [38] LHCb collaboration, R. Aaij et al., Measurement of the ratio of fragmentation fractions $f_{s}/f_{d}$ and dependence on $B$ meson kinematics, arXiv:1301.5286, (submitted to JHEP)
* [39] A. Lenz and U. Nierste, Numerical updates of lifetimes and mixing parameters of B mesons, arXiv:1102.4274, proceedings of the $6^{th}$ International Workshop in the CKM Unitarity Triangle, Warwick, U.K., Sept. 6-10, 2010
* [40] LHCb collaboration, Tagged time-dependent angular analysis of ${B^{0}_{s}\rightarrow{J\mskip-3.0mu/\mskip-2.0mu\psi\mskip 2.0mu}\phi}$ decays at LHCb, LHCb-CONF-2012-002
* [41] Belle collaboration, S. Esen et al., Precise measurement of the branching fractions for $B_{s}\rightarrow D_{s}^{(*)+}D_{s}^{(*)-}$ and first measurement of the $D_{s}^{*+}D_{s}^{*-}$ polarization using $e^{+}e^{-}$ collisions, arXiv:1208.0323
* [42] J. A. Bailey et al., $B_{s}\rightarrow D_{s}/B\rightarrow D$ semileptonic form-factor ratios and their application to ${\cal{B}}(B^{0}_{s}\rightarrow\mu^{+}\mu^{-})$, Phys. Rev. D85 (2012) 114502, arXiv:1202.6346
|
arxiv-papers
| 2013-02-23T22:52:28 |
2024-09-04T02:49:42.078382
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "LHCb collaboration: R. Aaij, C. Abellan Beteta, B. Adeva, M. Adinolfi,\n C. Adrover, A. Affolder, Z. Ajaltouni, J. Albrecht, F. Alessio, M. Alexander,\n S. Ali, G. Alkhazov, P. Alvarez Cartelle, A.A. Alves Jr, S. Amato, S. Amerio,\n Y. Amhis, L. Anderlini, J. Anderson, R. Andreassen, R.B. Appleby, O. Aquines\n Gutierrez, F. Archilli, A. Artamonov, M. Artuso, E. Aslanides, G. Auriemma,\n S. Bachmann, J.J. Back, C. Baesso, V. Balagura, W. Baldini, R.J. Barlow, C.\n Barschel, S. Barsuk, W. Barter, Th. Bauer, A. Bay, J. Beddow, F. Bedeschi, I.\n Bediaga, S. Belogurov, K. Belous, I. Belyaev, E. Ben-Haim, M. Benayoun, G.\n Bencivenni, S. Benson, J. Benton, A. Berezhnoy, R. Bernet, M.-O. Bettler, M.\n van Beuzekom, A. Bien, S. Bifani, T. Bird, A. Bizzeti, P.M. Bj{\\o}rnstad, T.\n Blake, F. Blanc, J. Blouw, S. Blusk, V. Bocci, A. Bondar, N. Bondar, W.\n Bonivento, S. Borghi, A. Borgia, T.J.V. Bowcock, E. Bowen, C. Bozzi, T.\n Brambach, J. van den Brand, J. Bressieux, D. Brett, M. Britsch, T. Britton,\n N.H. Brook, H. Brown, I. Burducea, A. Bursche, G. Busetto, J. Buytaert, S.\n Cadeddu, O. Callot, M. Calvi, M. Calvo Gomez, A. Camboni, P. Campana, A.\n Carbone, G. Carboni, R. Cardinale, A. Cardini, H. Carranza-Mejia, L. Carson,\n K. Carvalho Akiba, G. Casse, M. Cattaneo, Ch. Cauet, M. Charles, Ph.\n Charpentier, P. Chen, N. Chiapolini, M. Chrzaszcz, K. Ciba, X. Cid Vidal, G.\n Ciezarek, P.E.L. Clarke, M. Clemencic, H.V. Cliff, J. Closier, C. Coca, V.\n Coco, J. Cogan, E. Cogneras, P. Collins, A. Comerma-Montells, A. Contu, A.\n Cook, M. Coombes, S. Coquereau, G. Corti, B. Couturier, G.A. Cowan, D. Craik,\n S. Cunliffe, R. Currie, C. D'Ambrosio, P. David, P.N.Y. David, I. De Bonis,\n K. De Bruyn, S. De Capua, M. De Cian, J.M. De Miranda, M. De Oyanguren\n Campos, L. De Paula, W. De Silva, P. De Simone, D. Decamp, M. Deckenhoff, L.\n Del Buono, D. Derkach, O. Deschamps, F. Dettori, A. Di Canto, H. Dijkstra, M.\n Dogaru, S. Donleavy, F. Dordei, A. Dosil Su\\'arez, D. Dossett, A. Dovbnya, F.\n Dupertuis, R. Dzhelyadin, A. Dziurda, A. Dzyuba, S. Easo, U. Egede, V.\n Egorychev, S. Eidelman, D. van Eijk, S. Eisenhardt, U. Eitschberger, R.\n Ekelhof, L. Eklund, I. El Rifai, Ch. Elsasser, D. Elsby, A. Falabella, C.\n F\\\"arber, G. Fardell, C. Farinelli, S. Farry, V. Fave, D. Ferguson, V.\n Fernandez Albor, F. Ferreira Rodrigues, M. Ferro-Luzzi, S. Filippov, C.\n Fitzpatrick, M. Fontana, F. Fontanelli, R. Forty, O. Francisco, M. Frank, C.\n Frei, M. Frosini, S. Furcas, E. Furfaro, A. Gallas Torreira, D. Galli, M.\n Gandelman, P. Gandini, Y. Gao, J. Garofoli, P. Garosi, J. Garra Tico, L.\n Garrido, C. Gaspar, R. Gauld, E. Gersabeck, M. Gersabeck, T. Gershon, Ph.\n Ghez, V. Gibson, V.V. Gligorov, C. G\\\"obel, D. Golubkov, A. Golutvin, A.\n Gomes, H. Gordon, M. Grabalosa G\\'andara, R. Graciani Diaz, L.A. Granado\n Cardoso, E. Graug\\'es, G. Graziani, A. Grecu, E. Greening, S. Gregson, O.\n Gr\\\"unberg, B. Gui, E. Gushchin, Yu. Guz, T. Gys, C. Hadjivasiliou, G.\n Haefeli, C. Haen, S.C. Haines, S. Hall, T. Hampson, S. Hansmann-Menzemer, N.\n Harnew, S.T. Harnew, J. Harrison, T. Hartmann, J. He, V. Heijne, K. Hennessy,\n P. Henrard, J.A. Hernando Morata, E. van Herwijnen, E. Hicks, D. Hill, M.\n Hoballah, C. Hombach, P. Hopchev, W. Hulsbergen, P. Hunt, T. Huse, N.\n Hussain, D. Hutchcroft, D. Hynds, V. Iakovenko, M. Idzik, P. Ilten, R.\n Jacobsson, A. Jaeger, E. Jans, P. Jaton, F. Jing, M. John, D. Johnson, C.R.\n Jones, B. Jost, M. Kaballo, S. Kandybei, M. Karacson, T.M. Karbach, I.R.\n Kenyon, U. 
Kerzel, T. Ketel, A. Keune, B. Khanji, O. Kochebina, I. Komarov,\n R.F. Koopman, P. Koppenburg, M. Korolev, A. Kozlinskiy, L. Kravchuk, K.\n Kreplin, M. Kreps, G. Krocker, P. Krokovny, F. Kruse, M. Kucharczyk, V.\n Kudryavtsev, T. Kvaratskheliya, V.N. La Thi, D. Lacarrere, G. Lafferty, A.\n Lai, D. Lambert, R.W. Lambert, E. Lanciotti, G. Lanfranchi, C. Langenbruch,\n T. Latham, C. Lazzeroni, R. Le Gac, J. van Leerdam, J.-P. Lees, R. Lef\\`evre,\n A. Leflat, J. Lefran\\c{c}ois, S. Leo, O. Leroy, B. Leverington, Y. Li, L. Li\n Gioi, M. Liles, R. Lindner, C. Linn, B. Liu, G. Liu, J. von Loeben, S. Lohn,\n J.H. Lopes, E. Lopez Asamar, N. Lopez-March, H. Lu, D. Lucchesi, J. Luisier,\n H. Luo, F. Machefert, I.V. Machikhiliyan, F. Maciuc, O. Maev, S. Malde, G.\n Manca, G. Mancinelli, U. Marconi, R. M\\\"arki, J. Marks, G. Martellotti, A.\n Martens, L. Martin, A. Mart\\'in S\\'anchez, M. Martinelli, D. Martinez Santos,\n D. Martins Tostes, A. Massafferri, R. Matev, Z. Mathe, C. Matteuzzi, E.\n Maurice, A. Mazurov, J. McCarthy, R. McNulty, A. Mcnab, B. Meadows, F. Meier,\n M. Meissner, M. Merk, D.A. Milanes, M.-N. Minard, J. Molina Rodriguez, S.\n Monteil, D. Moran, P. Morawski, M.J. Morello, R. Mountain, I. Mous, F.\n Muheim, K. M\\\"uller, R. Muresan, B. Muryn, B. Muster, P. Naik, T. Nakada, R.\n Nandakumar, I. Nasteva, M. Needham, N. Neufeld, A.D. Nguyen, T.D. Nguyen, C.\n Nguyen-Mau, M. Nicol, V. Niess, R. Niet, N. Nikitin, T. Nikodem, A.\n Nomerotski, A. Novoselov, A. Oblakowska-Mucha, V. Obraztsov, S. Oggero, S.\n Ogilvy, O. Okhrimenko, R. Oldeman, M. Orlandea, J.M. Otalora Goicochea, P.\n Owen, B.K. Pal, A. Palano, M. Palutan, J. Panman, A. Papanestis, M.\n Pappagallo, C. Parkes, C.J. Parkinson, G. Passaleva, G.D. Patel, M. Patel,\n G.N. Patrick, C. Patrignani, C. Pavel-Nicorescu, A. Pazos Alvarez, A.\n Pellegrino, G. Penso, M. Pepe Altarelli, S. Perazzini, D.L. Perego, E. Perez\n Trigo, A. P\\'erez-Calero Yzquierdo, P. Perret, M. Perrin-Terrin, G. Pessina,\n K. Petridis, A. Petrolini, A. Phan, E. Picatoste Olloqui, B. Pietrzyk, T.\n Pila\\v{r}, D. Pinci, S. Playfer, M. Plo Casasus, F. Polci, G. Polok, A.\n Poluektov, E. Polycarpo, D. Popov, B. Popovici, C. Potterat, A. Powell, J.\n Prisciandaro, V. Pugatch, A. Puig Navarro, G. Punzi, W. Qian, J.H.\n Rademacker, B. Rakotomiaramanana, M.S. Rangel, I. Raniuk, N. Rauschmayr, G.\n Raven, S. Redford, M.M. Reid, A.C. dos Reis, S. Ricciardi, A. Richards, K.\n Rinnert, V. Rives Molina, D.A. Roa Romero, P. Robbe, E. Rodrigues, P.\n Rodriguez Perez, S. Roiser, V. Romanovsky, A. Romero Vidal, J. Rouvinet, T.\n Ruf, F. Ruffini, H. Ruiz, P. Ruiz Valls, G. Sabatino, J.J. Saborido Silva, N.\n Sagidova, P. Sail, B. Saitta, C. Salzmann, B. Sanmartin Sedes, M. Sannino, R.\n Santacesaria, C. Santamarina Rios, E. Santovetti, M. Sapunov, A. Sarti, C.\n Satriano, A. Satta, M. Savrie, D. Savrina, P. Schaack, M. Schiller, H.\n Schindler, M. Schlupp, M. Schmelling, B. Schmidt, O. Schneider, A. Schopper,\n M.-H. Schune, R. Schwemmer, B. Sciascia, A. Sciubba, M. Seco, A. Semennikov,\n K. Senderowska, I. Sepp, N. Serra, J. Serrano, P. Seyfert, M. Shapkin, I.\n Shapoval, P. Shatalov, Y. Shcheglov, T. Shears, L. Shekhtman, O. Shevchenko,\n V. Shevchenko, A. Shires, R. Silva Coutinho, T. Skwarnicki, N.A. Smith, E.\n Smith, M. Smith, M.D. Sokoloff, F.J.P. Soler, F. Soomro, D. Souza, B. Souza\n De Paula, B. Spaan, A. Sparkes, P. Spradlin, F. Stagni, S. Stahl, O.\n Steinkamp, S. Stoica, S. Stone, B. Storaci, M. Straticiuc, U. Straumann, V.K.\n Subbiah, S. Swientek, V. 
Syropoulos, M. Szczekowski, P. Szczypka, T. Szumlak,\n S. T'Jampens, M. Teklishyn, E. Teodorescu, F. Teubert, C. Thomas, E. Thomas,\n J. van Tilburg, V. Tisserand, M. Tobin, S. Tolk, D. Tonelli, S.\n Topp-Joergensen, N. Torr, E. Tournefier, S. Tourneur, M.T. Tran, M. Tresch,\n A. Tsaregorodtsev, P. Tsopelas, N. Tuning, M. Ubeda Garcia, A. Ukleja, D.\n Urner, U. Uwer, V. Vagnoni, G. Valenti, R. Vazquez Gomez, P. Vazquez\n Regueiro, S. Vecchi, J.J. Velthuis, M. Veltri, G. Veneziano, M. Vesterinen,\n B. Viaud, D. Vieira, X. Vilasis-Cardona, A. Vollhardt, D. Volyanskyy, D.\n Voong, A. Vorobyev, V. Vorobyev, C. Vo\\ss, H. Voss, R. Waldi, R. Wallace, S.\n Wandernoth, J. Wang, D.R. Ward, N.K. Watson, A.D. Webber, D. Websdale, M.\n Whitehead, J. Wicht, J. Wiechczynski, D. Wiedner, L. Wiggers, G. Wilkinson,\n M.P. Williams, M. Williams, F.F. Wilson, J. Wishahi, M. Witek, S.A. Wotton,\n S. Wright, S. Wu, K. Wyllie, Y. Xie, F. Xing, Z. Xing, Z. Yang, R. Young, X.\n Yuan, O. Yushchenko, M. Zangoli, M. Zavertyaev, F. Zhang, L. Zhang, W.C.\n Zhang, Y. Zhang, A. Zhelezov, A. Zhokhov, L. Zhong, A. Zvyagin",
"submitter": "Steven R. Blusk",
"url": "https://arxiv.org/abs/1302.5854"
}
|
1302.5987
|
Hitting Time Distribution for finite states Markov Chain

Wenming Hong and Ke Zhou

School of Mathematical Sciences & Laboratory of Mathematics and Complex Systems, Beijing Normal University, Beijing 100875, P.R. China. Email: [email protected], [email protected]

The project is partially supported by the National Natural Science Foundation of China (Grant No. 11131003).
Abstract
Consider a Markov chain with finite state space $\\{0,1,\cdots,d\\}$ that is absorbing at state $d$. We give the generating functions (or Laplace transforms) of the absorbing time when the chain starts from any state $i$. The results generalize the well-known theorems for the birth-death (Karlin and McGregor [7], 1959) and the skip-free (Brown and Shao [1], 1987) Markov chains started from state $0$. Our proof is direct and simple.
Keywords: Markov chain, absorbing time, generating functions, Laplace transforms, eigenvalues.
Mathematics Subject Classification (2010): 60E10, 60J10, 60J27, 60J35.
## 1 Introduction
For the birth-death and the skip-free (upward jumps may be only of unit size, and there is no restriction on downward jumps) Markov chains with finite state space $\\{0,1,\cdots,d\\}$ that are absorbing at state $d$, a well-known and interesting property of the absorbing time is that it is distributed as a sum of $d$ independent geometric (or exponential) random variables.
Many authors have given different proofs of these results. For the birth and death chain, the well-known result can be traced back to Karlin and McGregor ([7], 1959) and Keilson ([8], 1971; [9]). Kent and Longford ([10], 1983) proved the result for the discrete-time version (nearest-neighbor random walk), although they did not state the result in the usual form (Section 2 of [10]). Fill ([4], 2009) gave the first stochastic proof for both the nearest-neighbor random walk and the birth and death chain via the duality established in [2]. Diaconis and Miclo ([3], 2009) presented another probabilistic proof for the birth and death chain. Very recently, Gong, Mao and Zhang ([6], 2012) gave a similar result in the case where the state space is $\mathbb{Z^{+}}$. For the skip-free chain, Brown and Shao ([1], 1987) first proved the result in the continuous time setting; Fill ([5], 2009) gave a stochastic proof for both the discrete and continuous time cases, also by using the duality, and considered the general finite-state Markov chain when the chain starts from state $0$.
The purpose of this paper is to consider a general Markov chain with finite state space $\\{0,1,\cdots,d\\}$ that is absorbing at state $d$. We give the generating functions (or Laplace transforms) of the absorbing time when the chain starts from any state $i$ (not just from state $0$). In particular, the results generalize the well-known theorems for the birth-death (Karlin and McGregor [7], 1959) and the skip-free (Brown and Shao [1], 1987) Markov chains.
Our proof calculates directly the generating functions (or Laplace transforms) of the absorbing times by an iteration method, which has been used for the skip-free Markov chain by Zhou ([11]). After revising the method of [11], we found that the proof becomes very simple and enables us to deal with a general finite-state Markov chain started from any state $i$. The key idea is to consider directly the absorbing time $\tau_{i,d}$ starting from any state $i$.
## 2 Discrete time
Define the transition probability matrix $P$ as
$P=\left(\begin{array}[]{ccccccc}r_{0}&p_{0,1}&p_{0,2}&&\cdots&p_{0,d-1}&p_{0,d}\\\
q_{1,0}&r_{1}&p_{1,2}&&\cdots&p_{1,d-1}&p_{1,d}\\\
\vdots&\ddots&\ddots&&\ddots&\ddots&\vdots\\\
q_{d-1,0}&q_{d-1,1}&q_{d-1,2}&&\cdots&r_{d-1}&p_{d-1,d}\\\
0&0&0&&\cdots&0&1\\\ \end{array}\right)_{(d+1)\times(d+1)},$ (2.1)
and for $1\leq j\leq d+1$, we denote by $A_{j}(s)$ the $d\times d$ sub-matrix obtained by deleting the $(d+1)^{th}$ row and the $j^{th}$ column of the matrix $I_{d+1}-sP$.
Let $\tau_{i,d}$ be the absorbing time at state $d$ starting from state $i$, and let $f_{i}(s)$ be the generating function of $\tau_{i,d}$:
$f_{i}(s)=\mathbb{E}s^{\tau_{i,d}}\quad\text{for }0\leq i\leq d-1.$ (2.2)
###### Theorem 2.1.
For $0\leq i\leq d-1$,
$f_{i}(s)=(-1)^{d-i}\frac{\det A_{i+1}(s)}{\det A_{d+1}(s)}.$ (2.3)
###### Proof.
By decomposing on the first step, the generating functions of $\tau_{i,d}$ satisfy, for $0\leq i\leq d-1$,
$\begin{split}f_{i}(s)&=r_{i}sf_{i}(s)+p_{i,i+1}sf_{i+1}(s)+p_{i,i+2}sf_{i+2}(s)+\cdots+p_{i,d-1}sf_{d-1}(s)+p_{i,d}s\\ &\quad+q_{i,i-1}sf_{i-1}(s)+q_{i,i-2}sf_{i-2}(s)+\cdots+q_{i,0}sf_{0}(s).\end{split}$ (2.4)
This is a system of $d$ linear equations in $f_{0}(s),f_{1}(s),\dots,f_{d-1}(s)$. Using Cramer's rule, we can solve the system and obtain (2.3). $\Box$
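As an illustration, the following Python sketch (using a small, randomly generated transition matrix, purely hypothetical) evaluates the determinant formula (2.3) and compares it with a direct solution of the linear system (2.4):

```python
# Numerical check of Theorem 2.1 on a hypothetical absorbing chain.
import numpy as np

rng = np.random.default_rng(0)
d = 4
P = rng.random((d + 1, d + 1))          # random transition matrix on {0,...,d}
P[d, :] = 0.0
P[d, d] = 1.0                           # state d is absorbing
P /= P.sum(axis=1, keepdims=True)

def f_cramer(i, s):
    """f_i(s) via (2.3): A_j(s) deletes the (d+1)-th row and j-th column of I - sP."""
    M = np.eye(d + 1) - s * P
    A = lambda j: np.delete(np.delete(M, d, axis=0), j - 1, axis=1)
    return (-1) ** (d - i) * np.linalg.det(A(i + 1)) / np.linalg.det(A(d + 1))

def f_direct(i, s):
    """f_i(s) by solving the linear system (2.4) directly."""
    M = (np.eye(d + 1) - s * P)[:d, :d]
    b = s * P[:d, d]
    return np.linalg.solve(M, b)[i]

s = 0.7
print(all(np.isclose(f_cramer(i, s), f_direct(i, s)) for i in range(d)))   # True
```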
Immediately, we obtain the result for the skip-free discrete-time Markov chain (Fill [5], 2009).
###### Corollary 2.1.
Assume that $p_{i,j}=0$ for $j-i>1$. Then we have
$f_{0}(s)=\prod_{i=0}^{d-1}\left[\frac{(1-\lambda_{i})s}{1-\lambda_{i}s}\right],$
(2.5)
where $\lambda_{0},\cdots,\lambda_{d-1}$ are the $d$ non-unit eigenvalues of
$P$.
In particular, if all of the eigenvalues are real and nonnegative, then the
hitting time is distributed as the sum of $d$ independent geometric random
variables with parameters $1-\lambda_{i}$.
###### Proof.
Note that $1$ is evidently an eigenvalue of $P$. So on the one hand
${\det(I_{d+1}-sP)}=(1-s)\prod_{i=0}^{d-1}(1-\lambda_{i}s)$ (where
$\lambda_{0},\cdots,\lambda_{d-1}$ are the $d$ non-unit eigenvalues of $P$);
on the other hand from (2.1) we have ${\det(I_{d+1}-sP)}=(1-s)\times\det
A_{d+1}(s)$; as a consequence we get
${\det A_{d+1}(s)}=\prod_{i=0}^{d-1}(1-\lambda_{i}s).$ (2.6)
From (2.1) and the definition of $A_{j}$, it is easy to get
$\det A_{1}(s)=(-1)^{d}p_{0,1}p_{1,2}\cdots p_{d-1,d}s^{d}.$ (2.7)
By (2.3) and (2.6) we have
$\displaystyle\det A_{1}(1)=(-1)^{d}f_{0}(1)\cdot{\det
A_{d+1}(1)}=(-1)^{d}f_{0}(1)\cdot\prod_{i=0}^{d-1}(1-\lambda_{i}).$
On the other hand, from (2.7) we get $\det A_{1}(1)=(-1)^{d}p_{0,1}p_{1,2}\cdots p_{d-1,d}$ and, recalling that $f_{0}(1)=1$ by (2.2), we obtain that
$p_{0,1}p_{1,2}\cdots p_{d-1,d}=\prod_{i=0}^{d-1}(1-\lambda_{i}).$ (2.8)
Then by (2.7) and (2.8)
$\text{det}A_{1}(s)=(-1)^{d}\prod_{i=0}^{d-1}(1-\lambda_{i})s^{d},$ (2.9)
and (2.5) is immediate from (2.6) and (2.9). $\Box$
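A similar sketch for Corollary 2.1, on a hypothetical skip-free chain (upward jumps of unit size only), compares (2.5) with the determinant formula; in this example the $d$ non-unit eigenvalues of $P$ are obtained as the eigenvalues of the sub-matrix of $P$ on the transient states $\\{0,\cdots,d-1\\}$.

```python
# Numerical check of Corollary 2.1 on a hypothetical skip-free chain.
import numpy as np

rng = np.random.default_rng(1)
d = 4
P = np.zeros((d + 1, d + 1))
for i in range(d):
    row = rng.random(i + 2)             # allowed targets from state i: 0,...,i and i+1
    P[i, : i + 2] = row / row.sum()
P[d, d] = 1.0                           # absorbing state

def f0(s):
    """f_0(s) via the determinant formula (2.3) with i = 0."""
    M = np.eye(d + 1) - s * P
    A = lambda j: np.delete(np.delete(M, d, axis=0), j - 1, axis=1)
    return (-1) ** d * np.linalg.det(A(1)) / np.linalg.det(A(d + 1))

lam = np.linalg.eigvals(P[:d, :d])      # the d non-unit eigenvalues of P
s = 0.6
print(np.isclose(f0(s), np.prod((1 - lam) * s / (1 - lam * s))))   # True
```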
## 3 Continuous time
Define the generator $Q$ as
$Q=\left(\begin{array}[]{ccccccc}-\gamma_{0}&\alpha_{0,1}&\alpha_{0,2}&&\cdots&\alpha_{0,d-1}&\alpha_{0,d}\\\
\beta_{1,0}&-\gamma_{1}&\alpha_{1,2}&&\cdots&\alpha_{1,d-1}&\alpha_{1,d}\\\
\vdots&\ddots&\ddots&&\ddots&\ddots&\vdots\\\
\beta_{d-1,0}&\beta_{d-1,1}&\beta_{d-1,2}&&\cdots&-\gamma_{d-1}&\alpha_{d-1,d}\\\
0&0&0&&\cdots&0&0\\\ \end{array}\right)_{(d+1)\times(d+1)},$
and for $1\leq j\leq d+1$, we denote by $\widetilde{A}_{j}(s)$ the $d\times d$ sub-matrix obtained by deleting the $(d+1)^{th}$ row and the $j^{th}$ column of the matrix $sI_{d+1}-Q$. Let $\tau_{i,d}$ be the absorbing time at state $d$ starting from state $i$, and let $\widetilde{f}_{i}(s)$ be the Laplace transform of $\tau_{i,d}$:
$\widetilde{f}_{i}(s)=\mathbb{E}e^{-s\tau_{i,d}}.$
It is well known that a chain on a finite state space has a simple structure. The process starts at $i$, stays there for an $\mbox{Exponential}(\gamma_{i})$ time, then jumps to $i+j$ with probability $\frac{\alpha_{i,i+j}}{\gamma_{i}}$ and to $i-k$ with probability $\frac{\beta_{i,i-k}}{\gamma_{i}}$.
###### Theorem 3.1.
$\widetilde{f}_{i}(s)=(-1)^{d-i}\frac{\det\widetilde{A}_{i+1}(s)}{\det\widetilde{A}_{d+1}(s)}\quad\text{for }0\leq i\leq d-1.$ (3.1)
###### Proof.
By decomposing the trajectory at the first jump, for $0\leq i\leq d-1$,
$\begin{split}\widetilde{f}_{i}(s)&=\frac{\gamma_{i}}{\gamma_{i}+s}\frac{\alpha_{i,i+1}}{\gamma_{i}}\widetilde{f}_{i+1}(s)+\frac{\gamma_{i}}{\gamma_{i}+s}\frac{\alpha_{i,i+2}}{\gamma_{i}}\widetilde{f}_{i+2}(s)+\cdots+\frac{\gamma_{i}}{\gamma_{i}+s}\frac{\alpha_{i,d-1}}{\gamma_{i}}\widetilde{f}_{d-1}(s)+\frac{\gamma_{i}}{\gamma_{i}+s}\frac{\alpha_{i,d}}{\gamma_{i}}\\ &\quad+\frac{\gamma_{i}}{\gamma_{i}+s}\frac{\beta_{i,i-1}}{\gamma_{i}}\widetilde{f}_{i-1}(s)+\frac{\gamma_{i}}{\gamma_{i}+s}\frac{\beta_{i,i-2}}{\gamma_{i}}\widetilde{f}_{i-2}(s)+\cdots+\frac{\gamma_{i}}{\gamma_{i}+s}\frac{\beta_{i,0}}{\gamma_{i}}\widetilde{f}_{0}(s)\\ &=\frac{\alpha_{i,i+1}}{\gamma_{i}+s}\widetilde{f}_{i+1}(s)+\cdots+\frac{\alpha_{i,d-1}}{\gamma_{i}+s}\widetilde{f}_{d-1}(s)+\frac{\alpha_{i,d}}{\gamma_{i}+s}+\frac{\beta_{i,i-1}}{\gamma_{i}+s}\widetilde{f}_{i-1}(s)+\cdots+\frac{\beta_{i,0}}{\gamma_{i}+s}\widetilde{f}_{0}(s).\end{split}$
This is a system of $d$ linear equations in $\widetilde{f}_{0}(s),\widetilde{f}_{1}(s),\dots,\widetilde{f}_{d-1}(s)$. Using Cramer's rule, we can solve the system and obtain (3.1). $\Box$
Immediately, we obtain the result for the skip-free continuous-time Markov chain (Brown and Shao [1], 1987).
###### Corollary 3.1.
Assume that $\alpha_{i,j}=0$ for $j-i>1$. Then we have
$\widetilde{f}_{0}(s)=\prod_{i=0}^{d-1}\frac{\lambda_{i}}{s+\lambda_{i}},$
where $\lambda_{0},\cdots,\lambda_{d-1}$ are the $d$ non-zero eigenvalues of $-Q$.
In particular, if all of the eigenvalues are real and nonnegative, then the hitting time is distributed as the sum of $d$ independent exponential random variables with parameters $\lambda_{i}$.
###### Proof.
The proof is similar to that of Corollary 2.1; we can calculate that
${\det\widetilde{A}_{d+1}(s)}=\prod_{i=0}^{d-1}(s+\lambda_{i})$ and
$\det\widetilde{A}_{1}=(-1)^{d}\alpha_{0,1}\alpha_{1,2}\cdots\alpha_{d-1,d}=(-1)^{d}\prod_{i=0}^{d-1}\lambda_{i}$.
$\Box$
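An analogous sketch for the continuous-time case (again with a hypothetical skip-free generator) evaluates the determinant formula (3.1) from the sub-matrices of $sI_{d+1}-Q$ and compares it with the product form of Corollary 3.1:

```python
# Numerical check of Theorem 3.1 / Corollary 3.1 on a hypothetical skip-free generator.
import numpy as np

rng = np.random.default_rng(2)
d = 4
Q = np.zeros((d + 1, d + 1))
for i in range(d):
    Q[i, :i] = rng.random(i)            # downward rates beta_{i,k}
    Q[i, i + 1] = rng.random() + 0.1    # single upward rate alpha_{i,i+1}
    Q[i, i] = -Q[i].sum()               # generator rows sum to zero
# row d is identically zero: state d is absorbing

def f_tilde(i, s):
    """Laplace transform of tau_{i,d} via the determinant formula (3.1)."""
    M = s * np.eye(d + 1) - Q
    A = lambda j: np.delete(np.delete(M, d, axis=0), j - 1, axis=1)
    return (-1) ** (d - i) * np.linalg.det(A(i + 1)) / np.linalg.det(A(d + 1))

lam = np.linalg.eigvals(-Q[:d, :d])     # the d non-zero eigenvalues of -Q
s = 1.3
print(np.isclose(f_tilde(0, s), np.prod(lam / (s + lam))))   # True
```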
## References
* [1] Brown, M. and Shao, Y. S. Identifying coefficients in the spectral representation for first passage time distributions. _Probab. Eng. Inform. Sci._ 1 (1987), 69–74.
* [2] Diaconis, P. and Fill, J. A. Strong stationary times via a new form of duality. _Ann. Probab._ 18 (1990), 1483–1522.
* [3] Diaconis, P. and Miclo, L. On times to quasi-stationarity for birth and death processes. _J. Theoret. Probab._ 22 (2009), 558–586.
* [4] Fill, J. A. The passage time distribution for a birth-and-death chain: Strong stationary duality gives a first stochastic proof. _J. Theoret. Probab._ 22 (2009), 543–557.
* [5] Fill, J. A. On hitting times and fastest strong stationary times for skip-free and more general chains. _J. Theoret. Probab._ 22 (2009), 587–600.
* [6] Gong, Y. Mao, Y. H and Zhang, C. Hitting time distributions for denumerable birth and death processes. _J. Theoret. Probab._ 25 (2012), 950–980.
* [7] Karlin, S. and McGregor, J. Coincidence properties of birth and death processes. _Pacific J. Math._ 9 (1959), 1109–1140.
* [8] Keilson, J. Log-concavity and log-convexity in passage time densities of diffusion and birth-death processes. _J. Appl. Probab._ 8 (1971), 391–398.
* [9] Keilson, J. _Markov Chain Models—Rarity and Exponentiality._ Springer, New York, 1979.
* [10] Kent, J. T. and Longford, N. T An eigenvalue decomposition for first hitting times in random walk. _Z. Wahrscheinlichkeitstheorie verw. Gebiete_ 63 (1983), 71–84.
* [11] Zhou, K (2013), Hitting Time Distribution for Skip-Free Markov Chains: A Simple Proof. _submitted._
|
arxiv-papers
| 2013-02-25T03:25:05 |
2024-09-04T02:49:42.088576
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Wenming Hong and Ke Zhou",
"submitter": "Wenming Hong",
"url": "https://arxiv.org/abs/1302.5987"
}
|
1302.6030
|
A Fast Template Based Heuristic For Global Multiple Sequence Alignment
Srikrishnan Divakaran, Arpit Mithal, and Namit Jain
DA-IICT, Gandhinagar, Gujarat, India 382007, srikrishnan-
[email protected], [email protected], and [email protected]
###### Abstract
Advances in bio-technology have made available massive amounts of functional,
structural and genomic data for many biological sequences. This increased
availability of heterogeneous biological data has resulted in biological
applications where a multiple sequence alignment (msa) is required for
aligning similar features, where a feature is described in structural,
functional or evolutionary terms. In these applications, for a given set of
sequences, depending on the feature of interest the optimal msa is likely to
be different, and sequence similarity can only be used as a rough initial
estimate on the accuracy of an msa. This has motivated the growth in template
based heuristics that supplement the sequence information with evolutionary,
structural and functional data and exploit feature similarity instead of
sequence similarity to construct multiple sequence alignments that are
biologically more accurate. However, current frameworks for designing template
based heuristics do not allow the user to explicitly specify information that
can help to classify features into types and associate weights signifying the
relative importance of a feature with respect to other features, even though
in many instances this additional information is readily available. This has
resulted in the use of ad hoc measures and algorithms to define feature
similarity and msa construction respectively.
In this paper, we first provide a mechanism where as a part of the template
information the user can explicitly specify for each feature, its type, and
weight. The type is to classify the features into different categories based
on their characteristics and the weight signifies the relative importance of a
feature with respect to other features in that sequence. Second, we exploit
the above information to define scoring models for pair-wise sequence
alignment that assume segment conservation as opposed to single character
(residue) conservation. Finally, we present a fast progressive alignment based
heuristic framework that helps in constructing a global msa by first
constructing an msa involving only the informative segments using exact
methods, and then stitch into this the alignment of non-informative segments
constructed using fast approximate methods.
Key words: Analysis of algorithms; Bioinformatics; Computational Biology;
Multiple Sequence Alignment; Template Based Heuristics
## 1 Introduction
A global multiple sequence alignment (msa) [7, 17, 29] of a set $\cal{S}$ =
$\\{S_{1},S_{2},...,S_{k}\\}$ of $k$ related protein sequences is a way of
arranging the characters in $\cal{S}$ into a rectangular grid of columns by
introducing zero or more spaces into each sequence so that similar sequence
features occur in the same column, where a feature can be any relevant
biological information like secondary/tertiary structure, function, domain
decomposition, or homology to the common ancestor. The goal in attempting to
construct a global msa is either to identify conserved features that may
explain their functional, structural, evolutionary or phenotypic similarity,
or identify mutations that may explain functional, structural, evolutionary or
phenotypic variability.
Until recently, sequence information was the only information that was easily
available for many proteins. So, the measures that were used to evaluate the
quality (accuracy) of a msa were mostly based on sequence similarity. The sum
of pairs score (SP-score) and Tree score were two such measures that were
widely used. For both these measures, the computation of an optimal msa is
known to be NP-Complete [54]. So, in practice most of the focus is on
designing fast approximation algorithms and heuristics. From the perspective
of approximation algorithms, constant-factor polynomial-time approximation algorithms are known for the SP-score [17, 55] and polynomial time approximation schemes (PTAS) [55] are known for the Tree score. However, in practice, these approximation algorithms have large run-times that make them of little use even for moderate-sized problem instances. From the perspective of heuristics,
most heuristics are based on progressive alignment [18, 20, 48, 49, 51],
iterative alignment [10, 11, 12, 14, 15, 19, 24, 47], branch and bound [45],
genetic algorithms [36, 37], simulated annealing [26] or on Hidden Markov
Modeling (HMM) [8, 9, 21]. For an extensive review of the various heuristics
for msa construction, we refer the reader to excellent survey articles of
Kemena and Notredame [25], Notredame [33, 34, 35], Edger and Batzoglou [12],
Gotoh [15], Wallace et al. [52], Blackshields et al. [5].
In heuristics based on progressive alignment, the msa is constructed by first
computing pair-wise sequence distances using optimal pair-wise global
alignment scores. Second, a clustering algorithm (UPGMA or NJ [46]) uses these
pair-wise sequence distances to construct a rooted binary tree, usually
referred to as guide tree. Finally, an agglomerative algorithm uses this guide
tree to progressively align sequences a pair at a time to construct a msa. The
pair-wise global alignments scores are usually computed using a substitution
matrix and a gap penalty scheme that is based on sequence similarity. ClustalW
[51] was among the first widely used progressive alignment tools, and many of the current-day progressive aligners are based on it. In this paper, our focus is
on heuristics that are based on progressive alignment mainly because in this
method the computation of pair-wise sequence distances, guide tree and the
choice of agglomerative algorithm for progressively pair-wise aligning
sequences can be essentially split into three independent steps. This helps to
provide a flexible algorithmic framework for designing simple parameterized
greedy algorithms that are computationally scalable and whose parameters can
be tuned easily to improve its accuracy. In addition, the alignments obtained
through this approach are usually a good starting point for other popular
approaches like iterative, branch and bound, and HMM. However, progressive aligners, because of their greedy approach, commit mistakes early in the alignment process that are usually very hard to correct, even with sophisticated iterative aligners. This problem can be addressed by incorporating into the pair-wise scoring scheme, for every pair of sites, the frequency with which the residues at these sites are aligned in alignments involving the other sequences in $\cal{S}$. However, incorporating this information for all pairs of sites based on all sequences in $\cal{S}$ is computationally infeasible.
The consistency based heuristics [6, 10, 11, 24, 28, 38, 39, 40, 42, 43, 47]
tackle this problem by incorporating a larger fraction of this information at
a reasonable computational cost as follows: The score for aligning residues at
a pair of sites is estimated from a collection of pair-wise residue alignments
named the library. The library consists of pair-wise alignments whose residue alignment characteristics are implicitly assumed to be similar to those of an optimal msa or of a reference alignment constructed using sequence-independent methods. For a given library, any pair of residues receives an alignment score equal to the number of times these two residues have been found aligned, either directly or indirectly through a third residue. Consistency-based progressive aligners generally construct msas that are more accurate than pure progressive aligners like ClustalW. However, it is not very clear how to construct a library of alignments whose residue alignment characteristics are guaranteed to be similar to those of an optimal msa. In addition, the increased accuracy of consistency-based aligners comes at a computational cost that is on average $k$ times that of a pure
progressive aligner. T-Coffee [38], ProbCons [6], MAFFT [24], M-Coffee [53],
MUMMALS [41], EXPRESSO [2], PRALINE [42], T-Lara [4] are some of the widely
used consistency based aligners.
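To make the library-based consistency score concrete, the toy sketch below counts, for a candidate residue pair, the direct support in a purely illustrative library plus the indirect support obtained through residues of a third sequence; it is a didactic sketch, not the scoring used by any of the tools cited above.

```python
# Toy consistency scoring from a library of pair-wise residue alignments (illustrative data).
# library[(a, b)] holds aligned residue-index pairs (i, j) between sequences a and b.
library = {
    ("S1", "S2"): {(0, 0), (1, 1), (2, 3)},
    ("S1", "S3"): {(0, 0), (2, 2)},
    ("S2", "S3"): {(0, 0), (3, 2)},
}

def aligned(a, i, b, j):
    if (a, b) in library:
        return (i, j) in library[(a, b)]
    return (j, i) in library.get((b, a), set())

def consistency_score(a, i, b, j, sequences=("S1", "S2", "S3"), length=4):
    """Direct support plus indirect support through any residue of a third sequence."""
    score = 1 if aligned(a, i, b, j) else 0
    for c in sequences:
        if c in (a, b):
            continue
        score += sum(1 for k in range(length) if aligned(a, i, c, k) and aligned(b, j, c, k))
    return score

print(consistency_score("S1", 2, "S2", 3))   # direct pair plus one path through S3 -> 2
```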
Currently, advances in bio-technology have made available massive amounts of
functional, structural and genomic data for many biological sequences. This
increased availability of heterogeneous biological data has resulted in
biological applications where an msa is required for aligning similar
features, where a feature is described in structural, functional or
evolutionary terms. In these applications, for a given set of sequences,
depending on the feature of interest the optimal msa is likely to be
different, and sequence similarity can only be used as a rough initial
estimate on the accuracy of an msa. In addition, from evolutionary studies we
know that structure and function of biological sequences are usually more
conserved than the sequence itself. This has motivated the growth in template
based heuristics [50] that supplement the sequence information with
evolutionary, structural and functional data and exploit feature similarity
instead of sequence similarity to construct multiple sequence alignments that
are biologically more accurate. In these methods, each sequence is associated
with a template, where a template can either be a 3-D structure, a profile or
prediction of any kind. Once a template is mapped onto a sequence, its
information content can be used to guide the sequence alignment in a sequence
independent fashion. Depending on the nature of the template one refers to its
usage as structural extension or homology extension. Structural extension
takes advantage of the increasing number of sequences with an experimentally
characterized homolog in the PDB database, whereas homology extension uses
profiles. 3-D Coffee [3], EXPRESSO [3], PROMALS [42, 44] and PRALINE [47] are
some widely used aligners that employ template based methods. For more details about template based methods we refer the reader to
Kemena and Notredame [25] and Notredame [34].
In template based methods, we can view each template once mapped to a sequence
as essentially partitioning the sequence into segments, where each segment
corresponds to a feature described by the template. Then, we construct a msa
by essentially aligning segments that share similar features. The current
frameworks for describing templates do not allow the user to explicitly
specify information that can help (i) classify features into types and (ii)
associate a weight signifying the relative importance of a feature with
respect to other features, even though in many instances this additional
information is readily available. This has resulted in the use of ad hoc
measures and algorithms to define feature similarity and msa construction
respectively.
In this paper, we
* -
provide a mechanism whereby, as part of the template information, the user can explicitly specify for each feature its type and weight. The type is to
classify the features into different categories based on their characteristics
and the weight signifies the relative importance of a feature with respect to
other features in that sequence.
* -
define scoring models for pair-wise sequence alignment that assume segment
conservation as opposed to single character (residue) conservation. Our
scoring schemes for aligning pairs of segments are based on segment type,
segment weight, information content of an optimal local alignment involving
that segment pair, and its supporting context. This is an attempt to define
scoring schemes that evaluate a pair-wise global alignment through information
content of a global segment alignment, where segments correspond to features
within sequences. For example, in a structurally correct alignment the focus
is on aligning residues that play a similar role in the 3D structure of the
sequences, whereas a correct alignment from an evolutionary viewpoint focuses
on aligning two residues that share a similar relation to their closest common
ancestor, and in a functionally correct alignment the focus is on aligning
residues that are known to be responsible for the same function. The
supporting context consists of set of sequences that are known to belong to
the same family (i.e. share similar structure, function or homology to a
common ancestor) as the given sequence pair and can help determine to what
extent the alignment of the features in that pair-wise alignment is consistent
with the alignment of these features with other sequences in the family.
* -
present a fast progressive alignment based heuristic that essentially
constructs global msa by first classifying segments into informative or non-
informative segments based on their information content determined using
segment scoring matrices. Then, using exact methods, we construct a global msa
involving only the informative segments. Finally, using approximate methods we
construct the alignment of non-informative segments and stitch them into the
alignment of informative segments.
Remark: The statistical theory for evaluating alignments in terms of its
information content was developed for local alignments by Karlin and Altschul
[23]. However, their theory does not extend to the case of global alignments. Pair-HMMs provide a framework for statistical analysis of pair-wise global
alignments for complex scoring schemes using standard methods like Baum-Welch
and Viterbi training. However, determining the right set of parameters for
optimal statistical support is highly non-trivial and involves dynamic
programming algorithms with computational complexity that is quadratic in the
length of the given sequences.
The rest of this paper is structured as follows. In Section $2$, we define the
problem and introduce the relevant terms and notations to define our segment
scoring schemes and heuristics. In Section $3$, we present our segment scoring
schemes. In Section $4$, we present our heuristics, in Section $5$, we
describe our experimental set-up and summarize our preliminary experimental
results, and in Section $6$, we present our conclusions and future work.
## 2 Preliminaries
In this section, we first define the problem of msa construction for a given
set of sequences and their segment decompositions, where each segment is
classified into one of many types and is associated with a weight reflecting
its importance relative to other segments within that sequence. Then, we
introduce some basic terms and definitions that are required for defining our
scoring models and heuristics.
### 2.1 Problem Definition
Let $\cal{S}$ = { $S_{1},…,S_{k}$ } be a set of $k$ related protein sequences
each of length $n$. For $i\in[1..k]$, let
$B_{i}=\\{B_{i}^{1},…,B_{i}^{n_{i}}\\}$ be the decomposition of $S_{i}$ into
$n_{i}$ segments. Each segment $s\in B_{i}$ is classified into one of many
types based on the type of features that are known/predicted to be present in
that segment, and is associated with a non-negative real number weight that
reflects the importance of the feature associated with that segment relative
to other features in that sequence. That is, each segment $s\in B_{i}$,
$i\in[1..k]$, is associated with a type $type(s)$, and a non-negative real
number weight $weight(s)$.
Example: If the sequences in $\cal{S}$ are partitioned into segments based on
their predicted secondary structure, then each segment is classified into one of three types (helix, strand, or coil) and is associated with a non-negative weight in the interval $[1,10]$ that reflects the confidence in its secondary structure classification.
Given a set $\cal{S}$ of $k$ biologically related sequences, their decomposition
into segments, and the type and weight associated with each of these segments,
our goal is to design fast progressive alignment based heuristics that exploit
the information content in these segments to build a biologically significant
multiple sequence alignment.
### 2.2 Basic Terms and Definitions
Now we introduce some terms and definitions that will be necessary for defining
our segment scoring models and heuristics.
###### Definitions 2.1
For $i\in[1..k]$, we define
* -
$B_{i}^{inf}=\\{s\in B_{i}:weight(s)\geq\alpha\\}$ to be the segments in
$S_{i}$ whose weights are greater than or equal to $\alpha$, where $\alpha$ is
a non-negative user specified real number parameter. We refer to the segments
in $B_{i}^{inf}$ as informative segments of $S_{i}$;
* -
$S_{i}^{inf}$ to be the subsequence of $S_{i}$ obtained by concatenating the
segments in $B_{i}^{inf}$ in the order in which they appear in $S_{i}$. We
refer to this subsequence as the informative sequence of $S_{i}$.
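As a small illustration of Definitions 2.1, the sketch below uses a hypothetical segment decomposition and an assumed threshold $\alpha$; the Segment class and the example residues are invented for illustration only.

```python
# Minimal sketch of a segment decomposition and of Definitions 2.1 (hypothetical data).
from dataclasses import dataclass

@dataclass
class Segment:
    residues: str   # the stretch of the sequence covered by the segment
    type: str       # e.g. "helix", "strand", "coil"
    weight: float   # relative importance of the feature

def informative_segments(B_i, alpha):
    """B_i^inf: the segments of S_i whose weight is at least alpha."""
    return [s for s in B_i if s.weight >= alpha]

def informative_sequence(B_i, alpha):
    """S_i^inf: concatenation of the informative segments in their original order."""
    return "".join(s.residues for s in informative_segments(B_i, alpha))

B1 = [Segment("MKT", "helix", 8.0), Segment("AYV", "coil", 2.5), Segment("LLF", "strand", 6.0)]
print(informative_sequence(B1, alpha=5.0))   # -> "MKTLLF"
```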
###### Definition 2.2
For a pair of segments $s\in B_{i}^{inf}$ and $t\in B_{j}^{inf}$ of the same
type, $i\neq j\in[1..k]$, we define $L^{H}(s,t)$ to be the local alignment
between $s$ and $t$ constructed using heuristic $H$ and BLOSUM62 scoring
matrix, and $SEG^{H}(s,t)$ to be the bit score corresponding to $L^{H}(s,t)$.
###### Definitions 2.3
For $i\neq j\in[1..k]$, we define
* -
$\alpha_{i,j}$ to be a real number in the interval [0,2] that reflects the
level of divergence between $S_{i}$ and $S_{j}$. We estimate the level of
divergence between $S_{i}$ and $S_{j}$ using the bit score of a local
alignment between $S_{i}^{inf}$ and $S_{j}^{inf}$ constructed using heuristic
$H$ and $BLOSUM62$ scoring matrix;
* -
$c:[0,2]\rightarrow R^{+}$ is a function that maps a given level of divergence to the information threshold for an alignment to be informative.
###### Definitions 2.4
For a segment $s\in B_{i}^{inf}$ and $j\neq i\in[1..k]$, we define
* -
$Neighbor_{j}(s)=\\{t\in B_{j}^{inf}:type(t)=type(s)\land SEG^{H}(s,t)\geq
c(\alpha_{i,j})*|s|\\}$ to be the set of informative segments $t$ in $S_{j}$
of the same type as $s$ with bit score of a local alignment between $s$ and
$t$ greater than or equal to $c(\alpha_{i,j})*|s|$. We refer to the segments
in $Neighbor_{j}(s)$ to be the neighbors of $s$ in $S_{j}$.
* -
$Closest-neighbor_{j}(s)=\\{u^{\prime}\in
B_{j}^{inf}:SEG^{H}(s,u^{\prime})=max_{t\in Neighbor_{j}(s)}SEG^{H}(s,t)\\}$
is the neighbor of $s$ in $S_{j}$ that maximizes the bit score of a pair-wise
local alignment with $s$. We refer to such a segment to be the closest
neighbor of $s$ in $S_{j}$.
* -
$Neighborhood(s)=\bigcup_{j\in[1..k]}Closest-neighbor_{j}(s)$ to be the set
consisting of a closest neighbor of $s$ from each sequence in $\cal{S}$.
###### Definitions 2.5
For $i\in[1..k]$,
* -
$B_{i}^{nei}=\\{s\in B_{i}$ : $s$ is a neighbor of some segment in $\cal{S}$
$\setminus S_{i}\\}$.
* -
$S_{i}^{nei}$ to denote the subsequence obtained by concatenating the segments
in $B_{i}^{nei}$ in the order in which they appear in $S_{i}$. We refer to
this subsequence as the neighbor sequence of $S_{i}$.
###### Definitions 2.6
For each pair of segments $s\in S_{i}^{nei}$ and $t\in S_{j}^{nei}$ of the
same type from distinct sequences in $\cal{S}$, and $l\neq i,j\in[1..k]$, we
define
* -
$Mutual-neighbors_{l}(s,t)=\\{u\in B_{l}^{nei}:u\in Neighbor_{l}(s)\bigcap Neighbor_{l}(t)\\}$ to be the segments in $S_{l}$ that are neighbors of both
$s$ and $t$.
* -
$Closest-mutual-neighbor_{l}(s,t)=\\{u^{\prime}\in B_{l}^{nei}:SEG^{H}(s,u^{\prime})+SEG^{H}(t,u^{\prime})=max_{u\in Mutual-neighbors_{l}(s,t)}(SEG^{H}(s,u)+SEG^{H}(t,u))\\}$ to be the mutual neighbor $u$
of $s$ and $t$ in $S_{l}$ that maximizes $SEG^{H}(s,u)+SEG^{H}(t,u)$. We refer
to such a segment as the closest mutual neighbor of $s$ and $t$ in $S_{l}$.
* -
$Mutual-neighborhood(s,t)=\bigcup_{j\in[1..k]}Closest-mutual-
neighbor_{j}(s,t)$ to be the set consisting of a closest mutual neighbor of
$s$ and $t$ from each sequence in $\cal{S}$.
## 3 Scoring Models for Global Segment Alignment
In this section, we define scoring models for pair-wise segment alignment of
sequences. We classify segments into informative and non-informative based on
their weight and construct segment scoring matrix entries only for informative
segments. Restricting the segment scoring matrix entries to only informative
segments helps to significantly reduce the computational time of our
heuristics with minimal impact on alignment accuracy. In Section $3.1$, we
introduce scoring schemes for aligning pairs of informative segments. In
Section $3.2$, we introduce scoring schemes for aligning a segment with a gap.
### 3.1 Scoring Schemes for Aligning Pairs of Segments
We now introduce the following three scoring schemes for aligning a pair $s,t$
of informative segments of the same type: (i) Progressive scoring; (ii) Linear
Consistency scoring, and (iii) Quadratic Consistency scoring.
Progressive scoring: $SCORE(s,t)=SEG^{H}(s,t)$. In this scheme, we only make
use of the information content of a pair wise local alignment between segments
$s$ and $t$ constructed using heuristic $H$ and BLOSUM62 scoring matrix.
Linear consistency scoring:
$SCORE(s,t)$ = $|Mutual-neighborhood(s,t)|$ * $\sum_{u\in Mutual-
neighborhood(s,t)}(SEG^{H}(s,u)+SEG^{H}(t,u))$. In this scheme, we make use of
the information from both (i) pair-wise local alignment between $s$ and $t$,
and (ii) pair-wise alignments involving the segments $s$ and $t$ with segments
in their mutual neighborhood.
Quadratic consistency scoring:
$SCORE(s,t)=|Mutual-neighborhood(s,t)|^{2}$ $*\sum_{u\in Mutual-
neighborhood(s,t)}$ $(SEG^{H}(s,u)+SEG^{H}(t,u))$. In this scheme, the
information obtained through the alignment of two conserved segments of two
diverging sequences is weighed more than the information obtained through the
alignment of two non-conserved segments of two closely related sequences.
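The three schemes translate directly into code. In the sketch below, SEG and mutual_neighborhood are caller-supplied callables standing in for $SEG^{H}(\cdot,\cdot)$ and $Mutual$-$neighborhood(\cdot,\cdot)$ as defined in Section 2; the function names are ours, not part of the original framework.

```python
# Sketch of the three segment-pair scoring schemes of Section 3.1.

def progressive_score(s, t, SEG):
    # Only the information content of the pair-wise local alignment of s and t.
    return SEG(s, t)

def linear_consistency_score(s, t, SEG, mutual_neighborhood):
    nbhd = mutual_neighborhood(s, t)
    return len(nbhd) * sum(SEG(s, u) + SEG(t, u) for u in nbhd)

def quadratic_consistency_score(s, t, SEG, mutual_neighborhood):
    nbhd = mutual_neighborhood(s, t)
    return len(nbhd) ** 2 * sum(SEG(s, u) + SEG(t, u) for u in nbhd)
```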
### 3.2 Scoring Schemes for Aligning a Segment with a Gap
We now introduce the following two scoring schemes for aligning an informative
segment with a gap: (i) Zero gap penalty, and (ii) Maximum gap penalty.
Zero gap penalty: $SCORE(s,-)=0$. In this scheme, we do not penalize the
deletion of any informative segment.
Maximum gap penalty: $SCORE(s,-)=\max_{t\in Neighborhood(s)}SEG^{H}(s,t)$. In
this scheme, the gap penalty of $s\in B_{i}$ is based on the informative segment
$t\in\cal{S}\setminus S_{i}$ of the same type that maximizes the bit score of
a pair-wise local alignment between $s$ and $t$.
## 4 Heuristics for msa Construction
We now present a generic framework for designing template based fast
progressive alignment heuristics that construct global msa as follows:
* (i)
construct $DIST^{nei}$, a matrix of pair-wise sequence distances, based on
scores of pair-wise global segment alignment involving only the informative
segments;
* (ii)
construct a guide tree $G^{nei}$ using $NJ$ algorithm using $DIST^{nei}$ and
build $MSA^{nei}$, a msa of the informative segments, by progressively pair-
wise segment aligning sequences consistent with $G^{nei}$;
* (iii)
construct the pair-wise global alignment of the residues in non-informative
segments using fast approximate methods and stitch them back into $MSA^{nei}$.
In Section $4.1$, we describe our heuristic and in Section $4.2$, we present
our Heuristic $A$.
### 4.1 Description of Our Heuristic
Construction of pair-wise sequence distances: We now describe how we compute
the pair-wise sequence distances for each pair of sequences in $\cal{S}$.
###### Definitions 4.1
For $i,j\in[1..k]$, we define
* -
a global segment alignment between two sequences $S_{i}$ and $S_{j}$ to be an
alignment where a segment in $S_{i}$ is either aligned to a gap or another
segment in $S_{j}$ of the same type;
* -
$G^{nei}(i,j)$ to be the optimal global segment alignment between
$S_{i}^{nei}$ and $S_{j}^{nei}$ constructed using the segment scoring matrix
$SCORE$;.
* -
$DIST^{nei}(i,j)$ to be the score corresponding to $G^{nei}(i,j)$.
Notice that if each segment consists of a single amino acid then the global
segment alignment is the same as a traditional global alignment. In this case,
the traditional scoring matrices can be used to score alignments between
segments. Otherwise, one needs to determine an appropriate segment scoring
matrix and then use Needleman-Wunsch's [32] dynamic program to construct an
optimal global pair-wise segment alignment.
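A sketch of such a global segment alignment follows: Needleman-Wunsch run over segments rather than residues, where two segments may be matched only if they have the same type. Here score and gap stand for any of the segment scoring and gap-penalty schemes of Section 3, and segments are assumed to carry a type attribute (as in the earlier Segment sketch); this is an illustrative sketch, not the authors' implementation.

```python
# Needleman-Wunsch over segments: returns the optimal global segment alignment score,
# which plays the role of DIST^nei(i, j); a traceback would recover G^nei(i, j).
def global_segment_alignment_score(A, B, score, gap):
    n, m = len(A), len(B)
    dp = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        dp[i][0] = dp[i - 1][0] + gap(A[i - 1])
    for j in range(1, m + 1):
        dp[0][j] = dp[0][j - 1] + gap(B[j - 1])
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            best = max(dp[i - 1][j] + gap(A[i - 1]),      # align segment A[i-1] to a gap
                       dp[i][j - 1] + gap(B[j - 1]))      # align segment B[j-1] to a gap
            if A[i - 1].type == B[j - 1].type:            # only same-type segments may be matched
                best = max(best, dp[i - 1][j - 1] + score(A[i - 1], B[j - 1]))
            dp[i][j] = best
    return dp[n][m]
```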
Construction of Guide Tree and msa of Informative Segments: We construct the
guide tree $G^{nei}$ from pair-wise sequence distance matrix $DIST^{nei}$
using the Neighbor Joining (NJ) algorithm. Then, we construct $M^{nei}$, the
msa of the informative sequences in $\cal{S}$, by progressively pair-wise globally segment aligning the sequences $S_{1}^{nei},S_{2}^{nei},\ldots,S_{k}^{nei}$ consistent with $G^{nei}$.
Stitching the sites in non-informative segments into msa of informative
sequences: We now describe, for each pair of sequences $S_{i}$ and $S_{j}$ that were progressively aligned while constructing $M^{nei}$, how we stitch into $M^{nei}$ the alignment of the sites in $S_{i}$ and $S_{j}$ that lie either in non-informative segments or in non-aligned portions of informative segments. First, we introduce some necessary definitions.
###### Definitions 4.2
For a pair of informative segments $s\in B_{i}^{inf}$ and $t\in B_{j}^{inf}$
of the same type, let $L^{H}(s,t)$ be the local alignment of $s$ and $t$
constructed using heuristic $H$ and the BLOSUM62 matrix. We now define
* -
$PREFIX_{s}(L^{H}(s,t))$ to be the prefix of segment $s$ that is not part of
the local alignment $L^{H}(s,t)$;
* -
$SUFFIX_{s}(L^{H}(s,t))$ to be the suffix of segment $s$ that is not part of
the local alignment $L^{H}(s,t)$.
Let $G^{nei}(i,j),i\neq j\in[1..k]$, denote an optimal global segment
alignment between sequence $S_{i}^{nei}$ and $S_{j}^{nei}$ constructed using
the segment scoring matrix $SCORE$. For $G^{nei}(i,j)$, we say a segment $s\in
S_{i}$ to be a matched segment if in $G^{nei}(i,j)$ it is aligned to a segment
$t\in S_{j}^{nei}$, otherwise it is an unmatched segment. We now present a
procedure $stitch$ that stitches the alignment between the sites in $S_{i}$
that occur between any two consecutive matched segments $s$ and $\hat{s}$ and
the sites in $S_{j}$ that occur between the corresponding matched segments $t$
and $\hat{t}$ into $G^{nei}(i,j)$.
Procedure $Stitch(s,\hat{s})$:
* -
Let $p,\hat{p}$ ($q,\hat{q}$) be the respective indices of segments
$s,\hat{s}$ ($t,\hat{t}$) in $S_{i}$ and $S_{j}$;
* -
Let $A=SUFFIX_{i}(B_{i}^{p},B_{j}^{q})\bigcup_{l=p+1}^{\hat{p}-1}B_{i}^{l}$
$\bigcup PREFIX_{i}(B_{i}^{\hat{p}},B_{j}^{\hat{q}})$ be the sequence of sites
in $S_{i}$ that are either in non-informative segments or unaligned portions
of informative segments in $G^{nei}(i,j)$;
* -
Let $B=SUFFIX_{j}(B_{i}^{p},B_{j}^{q})\bigcup_{l=q+1}^{\hat{q}-1}B_{j}^{l}$
$\bigcup PREFIX_{j}(B_{i}^{\hat{p}},B_{j}^{\hat{q}})$ be the sequence of sites
in $S_{j}$ that are either in non-informative segments or unaligned portions
of informative segments in $G^{nei}(i,j)$;
* -
Globally align segments $A$ and $B$ using the BLOSUM62 scoring matrix and any fast linear-time heuristic, and then insert this alignment between segments $s$ and $\hat{s}$ in $G^{nei}(i,j)$ (a minimal sketch of this step is given below).
### 4.2 Heuristic $A(\alpha,H,c)$
* Parameters:
* (1)
$\alpha$: a non-negative real number;
* (2)
$H$: an algorithm/heuristic for pair-wise local alignment of sequences;
* (3)
$c$: a function that maps any given level of divergence in the interval
$[0,2]$ to the information threshold for an alignment to be informative.
* Inputs:
* (1)
$S=\\{S_{1},...,S_{k}\\}$: the set of k input sequences;
* (2)
$B=\\{B_{1},...,B_{k}\\}$: the set consisting of the segment decompositions of
the sequences in $\cal{S}$, where each segment $s$ is associated with a type
$type(s)$ and weight $weight(s)$;
* Main Heuristic
* (1)
For $i\in[1..k]$, construct $B_{i}^{inf}$ = {$s\in B_{i}$ :
$weight(s)\geq\alpha$ } and $S_{i}^{inf}$, the sequence of informative
segments of $S_{i}$.
* (2)
For each pair of informative segments $s\in B_{i}^{inf}$ and $t\in
B_{j}^{inf}$ of the same type, using heuristic $H$ and BLOSUM62 scoring matrix
construct $L^{H}(s,t)$ and compute $SEG^{H}(s,t)$.
* (3)
For $i\neq j\in[1..k]$, set $\alpha_{i,j}$ to the bit score per unit length
corresponding to $L^{H}(S_{i}^{inf},S_{j}^{inf})$, the local alignment between
$S_{i}^{inf}$ and $S_{j}^{inf}$ constructed using heuristic $H$ and BLOSUM62
scoring matrix.
* (4)
For each informative segment $s\in B_{i}^{inf}$ and $j\neq i\in[1..k]$,
compute the following:
* (i)
$Neighbor_{j}(s)=\\{t\in B_{j}^{inf}:type(t)=type(s)\land SEG^{H}(s,t)\geq
c(\alpha_{i,j})*|s|\\}$.
* (ii)
$Closest-neighbor_{j}(s)=\\{u^{\prime}\in
B_{j}^{inf}:SEG^{H}(s,u^{\prime})=max_{t\in Neighbor_{j}(s)}SEG^{H}(s,t)\\}$.
* (iii)
$Neighborhood(s)=\bigcup_{j\in[1..k]}Closest-neighbor_{j}(s)$.
* (5)
For each sequence $S_{i}\in\cal{S}$, $i\in[1..k]$, construct
$B_{i}^{nei}=\\{s\in B_{i}$ : $s$ is a neighbor of some segment in $\cal{S}$
$\setminus S_{i}\\}$ and $S_{i}^{nei}$, the neighbor sequence of $S_{i}$.
* (6)
For each pair of segments $s\in S_{i}^{nei}$ and $t\in S_{j}^{nei}$ from
distinct sequences and $l\neq i,j\in[1..k]$, compute
* (i)
$Mutual-neighbors_{l}(s,t)=\\{u\in B_{l}^{nei}:s\in Neighbor_{i}(u)\land t\in
Neighbor_{j}(u)\\}$.
* (ii)
$Closest-mutual-neighbor_{l}(s,t)=\\{u^{\prime}\in
B_{l}^{nei}:SEG^{H}(s,u^{\prime})+SEG^{H}(t,u^{\prime})=max_{u\in mutual-
neighbors_{l}(s,t)}(SEG^{H}(s,u)+SEG^{H}(t,u))\\}$.
* (iii)
$Mutual-neighborhood(s,t)=\bigcup_{l\in[1..k],\,l\neq i,j}Closest-mutual-
neighbor_{l}(s,t)$.
* (7)
For each segment $s\in B_{i}^{nei}$, $i\in[1..k]$, compute $SCORE(s,-)$.
* (8)
For each pair of segments $s\in B_{i}^{nei}$ and $t\in B_{j}^{nei}$ of the
same type, compute $SCORE(s,t)$.
* (9)
For $i\neq j\in[1..k]$, compute $DIST^{nei}(i,j)$ by globally segment aligning
$S_{i}^{nei}$ and $S_{j}^{nei}$ using Needleman-Wunsch’s dynamic program and
segment scoring matrix $SCORE$.
* (10)
We now construct the msa of $\cal{S}$ as follows:
* (i)
Construct guide tree $T^{nei}$ from $DIST^{nei}$ using the Neighbor Joining
(NJ) algorithm (a minimal NJ sketch is given after the heuristic).
* (ii)
Construct $M^{nei}$ by progressively globally segment aligning the sequences
$S_{1}^{nei},...,S_{k}^{nei}$ a pair at a time consistent with $T^{nei}$.
* (iii)
For each pair of sequences $S_{i}^{nei}$ and $S_{j}^{nei}$ that were
progressively aligned while constructing $M^{nei}$,
* -
Let $G^{nei}(i,j)$ denote the global segment alignment of $S_{i}^{nei}$ and
$S_{j}^{nei}$;
* -
For each pair $s,\hat{s}$ of consecutive matched segments of $S_{i}^{nei}$ in
$G^{nei}(i,j)$ (where $t,\hat{t}$ are the corresponding matched segments of
$S_{j}^{nei}$), use procedure stitch to stitch the alignment between the sites
in $S_{i}$ that occur between $s$ and $\hat{s}$ and the sites in $S_{j}$ that
occur between $t$ and $\hat{t}$ into $G^{nei}(i,j)$.
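As referenced in step (10)(i) above, here is a minimal Python sketch of how the distance matrix $DIST^{nei}$ can be turned into a guide-tree topology with Neighbor Joining [46]. Only the topology is produced, which is all the progressive alignment order of step (10)(ii) needs; branch lengths, rooting, and the dictionary representation of $DIST^{nei}$ are simplifying choices of ours.

```python
def neighbor_joining(dist, labels):
    """Minimal Neighbor Joining: build an unrooted guide-tree topology
    from a symmetric distance matrix.

    dist   : dict mapping frozenset({a, b}) -> DIST^nei(a, b); it is
             extended in place with distances to new internal nodes
    labels : list of sequence labels (e.g. ["S1", ..., "Sk"])
    Returns the tree as nested tuples; leaves are the original labels.
    """
    nodes = list(labels)

    def d(a, b):
        return 0.0 if a == b else dist[frozenset((a, b))]

    while len(nodes) > 2:
        n = len(nodes)
        r = {a: sum(d(a, c) for c in nodes) for a in nodes}   # row sums
        best, best_pair = None, None
        for idx, a in enumerate(nodes):                        # Q-criterion
            for b in nodes[idx + 1:]:
                q = (n - 2) * d(a, b) - r[a] - r[b]
                if best is None or q < best:
                    best, best_pair = q, (a, b)
        a, b = best_pair
        u = (a, b)                                             # new internal node
        for c in nodes:
            if c != a and c != b:
                dist[frozenset((u, c))] = 0.5 * (d(a, c) + d(b, c) - d(a, b))
        nodes.remove(a); nodes.remove(b); nodes.append(u)
    return tuple(nodes) if len(nodes) > 1 else nodes[0]


# Toy usage with a hypothetical 4-sequence distance matrix.
labels = ["S1", "S2", "S3", "S4"]
D = {frozenset(p): v for p, v in [
    (("S1", "S2"), 2.0), (("S1", "S3"), 4.0), (("S1", "S4"), 4.0),
    (("S2", "S3"), 4.0), (("S2", "S4"), 4.0), (("S3", "S4"), 2.0)]}
print(neighbor_joining(D, labels))      # (('S1', 'S2'), ('S3', 'S4'))
```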
## 5 Experimental Results
In this section, we first describe our experimental set-up, then we describe
how we evaluate the performance of our heuristic, and finally we summarize our
preliminary experimental results.
### 5.1 Experimental Set-up
Our computational experiments have been set up with a focus on analyzing the
performance of our heuristics for sequences from protein families in the PFAM
[13] database for which (i) accurate reference alignments were available
either through structural aligners or through other sequence independent
biological methods, and (ii) annotations describing the salient biological
features were available for each sequence. We chose 12 sets of sequences
ranging from 5 to 23 sequences with sequence similarity ranging from 20% to
80%. For these sequences, we used PSIPRED [22], a widely used structure
prediction tool, to partition each sequence into segments based on their
secondary structure characteristics. PSIPRED classifies each segment into one
of three types (helix, strand, or coil) and associates with it a non-negative
weight in the interval $[1,10]$ reflecting the confidence in its partitioning
and classification. Then, for these sets of sequences, we construct an msa using
our heuristic $A(\alpha,H,c)$, where $\alpha$ is a non-negative real number
parameter for classifying segments based on their weights into informative and
non-informative segments, $H$ is an algorithm/heuristic for pair-wise local
alignment of segments, and $c$ is a function that maps any given level of
divergence in the interval $[0,2]$ to the information threshold for an
alignment to be informative. In our experiments, we set $\alpha=6$; that is, a
segment is considered informative if its average segment weight is at least
$6$ and its length is at least $5$. In
addition, if two informative segments of the same type are separated by less
than 4 residues we merged the two segments with the intervening residues into
a single informative segment. We set $H$, the algorithm/heuristic for local
alignment, to be BLASTP [1, 30] with slight modifications to handle alignments
involving short sequences. (The quality of alignments constructed using
Smith-Waterman’s dynamic program was not significantly different from that
obtained using BLASTP.) We defined the function $c$ based on the average bit
scores of BLOSUM matrices corresponding to different levels of sequence
divergence.
### 5.2 Evaluating the Performance of Our Heuristic
We evaluate the performance of our heuristic based on (i) the accuracy of its
msa in comparison with a reference alignment, and (ii) its computational
efficiency for an appropriate choice of its parameters $\alpha$, $H$ and $c$.
Evaluating accuracy of an msa: The traditional sequence similarity based
measures like SP score and Tree score have only been helpful in providing a
crude estimate of the alignment quality and measures based on structurally
correct alignments are likely to be better alternatives for evaluating
alignment accuracy. So, for sequences for which their 3D structure is known,
the accuracy of an msa can be evaluated in comparison with reference
alignments constructed through a structure aligner. We also observe instances
of homologous sequences that share only a few features and yet preserve their
overall structure and function. In these instances, local feature conservation
is another good predictor of alignment accuracy. So, we measure the accuracy
of an msa constructed by our heuristic in terms of the percentage correlation
between its columns and the columns of the reference alignment that correspond
to conserved-feature sites; a small sketch of this column-match computation is
given below.
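The sketch below uses conventions of our own: gaps are written '-', the conserved-feature columns of the reference are given by index, and a reference column counts as recovered only if exactly the same residues are placed together in the test msa.

```python
def aligned_columns(msa):
    """Map each alignment column to the set of (sequence, residue-index)
    pairs it places together; gaps are ignored.

    msa: dict {seq_id: aligned string with '-' gaps}, all equal length.
    """
    ids = list(msa)
    length = len(msa[ids[0]])
    pos = {sid: 0 for sid in ids}            # next ungapped residue index
    cols = []
    for c in range(length):
        col = set()
        for sid in ids:
            if msa[sid][c] != "-":
                col.add((sid, pos[sid]))
                pos[sid] += 1
        cols.append(frozenset(col))
    return cols


def column_agreement(test_msa, ref_msa, feature_cols):
    """Percentage of reference columns at conserved-feature positions that
    are reproduced exactly in the test msa."""
    ref = aligned_columns(ref_msa)
    test = set(aligned_columns(test_msa))
    wanted = [ref[c] for c in feature_cols]
    hits = sum(1 for col in wanted if col in test)
    return 100.0 * hits / len(wanted) if wanted else 0.0


# Toy usage with a hypothetical 3-sequence reference and test alignment.
ref = {"S1": "AC-DE", "S2": "ACFDE", "S3": "AC-DE"}
test = {"S1": "ACDE-", "S2": "ACFDE", "S3": "ACD-E"}
print(column_agreement(test, ref, feature_cols=[0, 1, 3]))
# ~66.7: two of the three feature columns are recovered
```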
Note: Our heuristics make use of the secondary structure predictions from
PSIPRED. So, any inaccuracies in the secondary structure prediction of PSIPRED
should also be factored in when evaluating the accuracy of an msa constructed
by our heuristics. We account for this in terms of the correlation between the
informative sites in our heuristic and the sites in the reference alignment
that correspond to conserved features. We also limit the impact of
inaccuracies in secondary structure prediction on msa accuracy through a
conservative choice of the information threshold function $c$ (i.e., higher
than if we had an accurate partitioning and correct classification of segments).
Evaluation of Computational Efficiency: Our heuristic attempts to minimize
computational time with minimal impact on accuracy by first classifying the
segments within each sequence as informative (non-informative) based on
whether their weight exceeds (does not exceed) $\alpha$. The msa is then
constructed by first progressively pair-wise aligning the sites in informative
segments using exact methods and then using linear-time approximate heuristics
to align the sites in non-informative segments and stitch them back into the
alignment of the informative segments. So, the saving in
computational time depends on the fraction of the segments that are
informative. This in turn depends mainly on the choice of the information
threshold $\alpha$.
### 5.3 Summary of Preliminary Experimental Results
Protein Family | # of Sequences | Sequence Lengths | % Sequence Similarity | # of Informative Segments | Avg Length of Informative Segment | % Local Similarity
---|---|---|---|---|---|---
PF13420 | 21 | 152-164 | 20%-70% | 4 | 10 | $<70\%^{a}$
PF13652 | 11 | 131-152 | 65%-85% | 4 | 12 | $>90\%$
PF13693 | 22 | 77-81 | 55%-83% | 3 | 12 | $>90\%$
PF13733 | 5 | 133-142 | 55%-61% | 2 | 8 | $>90\%$
PF13844 | 6 | 449-481 | 68%-78% | 7 | 12 | $>90\%$
PF13856 | 23 | 90-112 | 30%-73% | 3 | 10 | $>80\%^{a}$
PF13944 | 21 | 120-146 | 30%-85% | 3 | 10 | $>90\%$
PF14186 | 11 | 152-157 | 38%-68% | 4 | 8 | $>90\%$
PF14263 | 10 | 120-129 | 50%-66% | 3 | 10 | $>90\%$
PF14274 | 20 | 155-165 | 36%-71% | 3 | 12 | $>90\%$
PF14323 | 18 | 485-548 | 36%-43% | 6 | 11 | $>90\%$
$a$: the Quadratic Consistency and Max Gap Penalty scoring schemes were employed.
Table 1: Summary of msa results using Linear Consistency and Max Gap Penalty schemes.
## 6 Conclusions and Future Work
Our preliminary experimental results indicate that our template based
heuristic framework can help in designing heuristics that can exploit template
based information to construct msas that are biologically accurate in a
computationally efficient manner. However, we would like to (i) make use of
the extreme value distribution [16] to define the function $c$ that maps a
given level of sequence divergence to the information threshold for an
alignment to be informative; (ii) understand how to define segment scoring
schemes for aligning sequences that are highly divergent; and (iii) evaluate
the accuracy of the alignments constructed by our heuristics using
sequence-independent measures [2, 25, 34] on challenging datasets in BAliBASE [27, 28].
## References
* [1] Altschul SF, Gish W, Miller W, Myers EW and Lipman DJ. Basic local alignment search tool. J. Mol. Biol. 1990;215:403-410.
* [2] Armougom F, Moretti S, Keduas V, Notredame C., The iRMSD: a local measure of sequence alignment accuracy using structural information. Bioinformatics. 2006 Jul 15;22(14):e35-e39.
* [3] Armougom F, Moretti S, Poirot O, Audic S, Dumas P, Schaeli B, Keduas V and Notredame C. Expresso: automatic incorporation of structural information in multiple sequence alignments using 3D-Coffee. Nucleic Acids Res, 2007;Jul 1-34.
* [4] Bauer M, Klau GW, and Reinert K. Accurate multiple sequence-structure alignment of RNA sequences using combinatorial optimization. BMC Bioinformatics, 2007;8(271).
* [5] Blackshields G, Wallace IM, Larkin M, and Higgins DG. Analysis and comparison of benchmarks for multiple sequence alignment. In Silico Biol. 2006;6(4):321-39.
* [6] Do CB, Mahabhashyam MS, Brudno M and Batzoglou S. ProbCons: Probabilistic consistency-based multiple sequence alignment. Genome Res. 2005 Feb;15(2):330-40.
* [7] Carrillo H and Lipman DJ. The Multiple Sequence Alignment Problem in Biology. SIAM Journal of Applied Mathematics. 1988;Vol.48, No. 5, 1073-1082.
* [8] Durbin R, Eddy S, Krogh A and Mitchison G. Biological sequence analysis: probabilistic models of proteins and nucleic acids, Cambridge University Press. 1998.
* [9] Eddy SR. Multiple alignment using hidden Markov models. Proc of Int Conf Intell Syst Mol Biol (ISMB). 1995;3:114-20.
* [10] Edgar RC. MUSCLE: a multiple sequence alignment method with reduced time and space complexity. BMC Bioinformatics. 2004a;5:113.
* [11] Edgar RC. MUSCLE: multiple sequence alignment with high accuracy and high throughput. Nucleic Acids Res. 2004b;32:1792–1797.
* [12] Edgar RC and Batzoglou S. Multiple sequence alignment. Curr. Opin. Struct. Biol. 2006;16:368–373.
* [13] Finn RD, Mistry J, Tate J, Coggill P, Heger A, Pollington JE, Gavin OL, Gunesekaran P, Ceric G, Forslund K, Holm L, Sonnhammer EL, Eddy SR and , Bateman A. The Pfam protein families database. Nucleic Acids Res. 2010;Database Issue 38:D211-222
* [14] Gotoh O. Consistency of optimal sequence alignments. Bull. Math. Biol. 1990;52:509–525.
* [15] Gotoh O. Significant improvement in accuracy of multiple protein sequence alignments by iterative refinements as assessed by reference to structural alignments. J. Mol. Biol. 1996;264:823–838.
* [16] Gumbel EJ. Statistics of extremes. Columbia University Press, New York, NY. 1958.
* [17] Gusfield D. Algorithms on Strings, Trees and Sequences: Computer Science and Computational Biology. Cambridge University Press. 1997.
* [18] Higgins DG and Sharp PM. CLUSTAL: a package for performing multiple sequence alignment on a microcomputer. Gene 1988;73(1):237–244.
* [19] Hirosawa M, Totoki Y, Hoshida M and Ishikawa M. Comprehensive study on iterative algorithms of multiple sequence alignment. Comput Appl Biosci 1995;11 (1): 13–18.
* [20] Hogeweg P and Hesper B. The alignment of sets of sequences and the construction of phylogenetic trees. An integrated method. J. Mol. Evol. 1984;20:175–186.
* [21] Hughey R and Krogh A. Hidden Markov models for sequence analysis: extension and analysis of the basic method. CABIOS 1996;12 (2): 95–107.
* [22] Jones DT. Protein secondary structure prediction based on position-specific scoring matrices. J. Mol. Biol. 1999;292:195-202.
* [23] Karlin S and Altschul SF. Methods for assessing the statistical significance of molecular sequence features by using general scoring schemes. Proc Natl Acad Sci USA 1990;87(6):2264–8.
* [24] Katoh K and Toh H. Recent developments in the MAFFT multiple sequence alignment program. Brief. Bioinform. 2008;9:286–298.
* [25] Kemena C and Notredame C. Upcoming challenges for multiple sequence alignment methods in the high-throughput era. Bioinformatics. 2009;Oct 1;25(19):2455-65.
* [26] Kim J, Pramanik S and Chung MJ. Multiple sequence alignment using simulated annealing. Comput Appl Biosci 1994;10 (4): 419–26.
* [27] Lassmann T and Sonnhammer EL. Quality assessment of multiple alignment programs. FEBS Lett. 2002;18:126–130.
* [28] Lassmann T and Sonnhammer EL. Automatic assessment of alignment quality. Nucleic Acids Res. 2005a;33:7120–7128.
* [29] Lipman DJ, Altschul SF and Kececioglu JD. A tool for multiple sequence alignment. Proc Natl Acad Sci U S A 1988:86 (12): 4412–4415.
* [30] McWilliam H, Valentin F, Goujon M, Li W, Narayanasamy M, Martin J, Miyar T and Lopez R. Web services at the European Bioinformatics Institute - 2009 Nucleic Acids Res. 2009;37: W6-W10.
* [31] Morgenstern B. Multiple DNA and protein sequence alignment based on segment-to-segment comparison. Proc. Natl Acad. Sci. USA. 1996;93:12098–12103.
* [32] Needleman SB and Wunsch CD. A general method applicable to the search for similarities in the amino acid sequence of two proteins. J. Mol. Biol. 1970;48:443–453.
* [33] Notredame C. Recent evolutions of multiple sequence alignment. PLoS Comput. Biol. 2007;3:e123.
* [34] Notredame C. Computing Multiple Sequence Alignment with Template-Based Methods. Sequence Alignment: Methods, Models and Strategies, Edited by Michael S. Rosenberg, University of California Press. 2011;Chapter 4:56-68.
* [35] Notredame C. Recent evolutions of multiple sequence alignment. PLoS Comput. Biol. 2007;3:e123.
* [36] Notredame C, O’Brien EA and Higgins DG. SAGA: RNA sequence alignment by genetic algorithm. Nucleic Acids Res. 1997 25 (22): 4570–80.
* [37] Notredame C and Higgins DG. SAGA: sequence alignment by genetic algorithm. Nucleic Acids Res. 1996;24:1515–1524.
* [38] Notredame C, Higgins DG and Heringa J. T-Coffee: A novel method for fast and accurate multiple sequence alignment. J Mol Biol. 2000 Sep 8;302(1):205-17.
* [39] O’Sullivan O, Suhre K, Abergel C, Higgins DG and Notredame C. 3DCoffee: combining protein sequences and structures within multiple sequence alignments. J. Mol. Biol. 2004;340:385–395.
* [40] Pei J. Multiple protein sequence alignment. Curr. Opin. Struct. Biol. 2008;18:382–386.
* [41] Pei J and Grishin NV. MUMMALS: multiple sequence alignment improved by using hidden Markov models with local structural information. Nucleic Acids Res. 2006;34:4364–4374.
* [42] Pei J and Grishin NV. PROMALS: towards accurate multiple sequence alignments of distantly related proteins. Bioinformatics. 2007;23:802–808.
* [43] Pei J, Sadreyev R and Grishin NV. PCMA: fast and accurate multiple sequence alignment based on profile consistency. Bioinformatics. 2003;19:427–428.
* [44] Pei J, Kim BH and Grishin NV. PROMALS3D: a tool for multiple protein sequence and structure alignments. Nucleic Acids Res. 2008;36:2295–2300.
* [45] Reinert K, Lenhof H, MutzelP, Mehlhorn K and Kececioglou JD. A branch-and-cut algorithm for multiple sequence alignment. Recomb97. 1997;241-249.
* [46] Saitou N and Nei M. The neighbor-joining method: a new method for reconstructing phylogenetic trees. Molecular Biology and Evolution 1987;4(4):406-425.
* [47] Simossis VA and Heringa J. PRALINE: a multiple sequence alignment toolbox that integrates homology-extended and secondary structure information. Nucleic Acids Res.
* [48] Subramanian AR, Weyer-Menkhoff J, Kaufmann M and Morgenstern B. DIALIGN-T: an improved algorithm for segment-based multiple sequence alignment. BMC Bioinformatics. 2005 Mar 22;6:66.
* [49] Subramanian AR, Kaufmann M and Morgenstern B. DIALIGN-TX: greedy and progressive approaches for segment-based multiple sequence alignment. Algorithms Mol Biol. 2008 May 27;3:6. doi: 10.1186/1748-7188-3-6.
* [50] Taylor WR. Identification of protein sequence homology by consensus template alignment. J. Mol. Biol. 1986;188:233–258.
* [51] Thompson JD, Higgins DG and Gibson TJ. CLUSTAL W: improving the sensitivity of progressive multiple sequence alignment through sequence weighting, position-specific gap penalties and weight matrix choice. Nucleic Acids Res. 1994 Nov 11;22(22):4673-80.
* [52] Wallace IM, O’Sullivan O and Higgins DG. Evaluation of iterative alignment algorithms for multiple alignment. Bioinformatics. 2005b;21:1408–1414.
* [53] Wallace IM, O'Sullivan O, Higgins DG and Notredame C. M-Coffee: combining multiple sequence alignment methods with T-Coffee. Nucleic Acids Res. 2006;34:1692–1699.
* [54] Wang L and Jiang T. On the complexity of multiple sequence alignment. J Comput Biol. 1994;1 (4): 337–348.
* [55] Wang L, Jiang T and Gusfield D. A more Efficient Approximation Scheme for Tree Alignment. SIAM Journal on Computing. 1997;Vol 30, No. 1, 283-299.
1302.6082
# A Note on Inextensible Flows of Curves in $E_{1}^{n}$
ÖNDER GÖKMEN YILDIZ Department of Mathematics, Faculty of Sciences and Arts,
Bilecik Şeyh Edebali University, Bilecik, TURKEY
[email protected] and MURAT TOSUN Department of Mathematics,
Faculty of Sciences and Arts, Sakarya University, Sakarya, TURKEY
[email protected]
###### Abstract.
In this paper, we study inextensible flows of non-null curves in $E_{1}^{n}$.
We give necessary and sufficient conditions for inextensible flow of non-null
curve in $E_{1}^{n}$.
###### Key words and phrases:
Curvature flows, inextensible, Minkowskian n-space.
###### 2010 Mathematics Subject Classification:
53C44, 51B20, 53A35.
## 1\. Introduction
The flow of curves plays a very important role in industrial applications such as
modeling ship hulls, buildings, airplane wings, garments, ducts, and automobile
parts. Moreover, Chirikjian and Burdick describe the kinematics of a
hyper-redundant (or “serpentine”) robot as the flow of a plane curve [2]. The
flow of a curve is said to be inextensible if its arclength is preserved.
Kwon and Park first studied inextensible flows of curves and developable
surfaces, in which arclength is preserved, in Euclidean 3-space [10].
In physical science, inextensible curve flows give rise to motions in which no
strain energy is induced. For example, the swinging motion of a cord of fixed
length can be represented by this type of curve flow. Inextensible flows of
curves are also of great importance in computer vision, computer animation,
and structural mechanics (see [4], [9], [11]).
There are many studies in the literature on plane curve flows, especially on
evolving curves in the direction of their curvature vector field (referred to
by various names such as “curve shortening”, “flow by curvature” and “heat
flow”). Among them, perhaps the most important case (but already a very subtle
one) is the curve-shortening flow in the plane studied by Gage and Hamilton
[5] and Grayson [6]. Another study of curve flows is due to Chirikjian [3].
Inextensible flows of curves have been studied in many different spaces. For
example, Gürbüz examined inextensible flows of spacelike, timelike and null
curves in [7]. After this work, Öğrenmiş et al. studied inextensible curves in
Galilean space [12], and Yıldız et al. studied inextensible flows of curves
according to the Darboux frame in Euclidean 3-space [13] and investigated
inextensible flows of curves in Euclidean n-space [14].
In the present paper, following [10], [7], [12], [13] and [14], we study
inextensible flows of non-null curves in $E_{1}^{n}.$ Further, we give
necessary and sufficient conditions for inextensible flows of non-null curves
in $E_{1}^{n}.$
## 2\. Preliminaries and Notations
Let $E_{1}^{n\text{ }}$ be the $n-$dimensional pseudo-Euclidean space with
index 1 endowed with the indefinite inner product given by
$\left\langle
X,Y\right\rangle=-x_{1}y_{1}+{\displaystyle\sum\limits_{i=2}^{n}}x_{i}y_{i},$
where
$X=\left(x_{1},x_{2},...,x_{n}\right),Y=\left(y_{1},y_{2},...,y_{n}\right)$ is
the usual coordinate system. An arbitrary vector
$X=\left(x_{1},x_{2},...,x_{n}\right)$ in $E_{1}^{n\text{ }}$ can have one of
three Lorentzian causal characters; it can be spacelike if $\left\langle
X,X\right\rangle>0$ or $X=0$ , timelike if $\left\langle X,X\right\rangle<0$
and null (lightlike) if $\left\langle X,X\right\rangle=0$ and $X\neq 0$. The
category into which a given tangent vector falls is called its causal
character. These definitions can be generalized for curves as follows. A curve
$\alpha$ in $E_{1}^{n\text{ }}$ is said to be spacelike if all of its velocity
vectors $\alpha^{\prime}$ are spacelike, similarly for timelike and null [1].
Let $\alpha:I\subset R\mathbb{\longrightarrow}E_{1}^{n\text{ }}$ be non-null
curve in $E_{1}^{n\text{ }}$. A non-null curve $\alpha(s)$ is said to be a
unit speed curve if
$\left\langle\alpha^{\prime}(s),\alpha^{\prime}(s)\right\rangle=\varepsilon_{0}$,
($\varepsilon_{0}$ being $+1$ or $-1$ according to $\alpha$ is spacelike or
timelike respectively). Let $\left\\{V_{1},V_{2},...,V_{n}\right\\}$ be the
moving Frenet frame along the unit speed curve $\alpha$, where
$V_{i}\left(i=1,2,...,n\right)$ denote $i^{th}$ Frenet vector fields and
$k_{i}\left(i=1,2,...,n-1\right)$ denotes the $i^{th}$ curvature function of
the curve. Then the Frenet formulas are given as
$\displaystyle V_{1}^{\prime}$ $\displaystyle=k_{1}V_{2},$ $\displaystyle
V_{i}^{\prime}$
$\displaystyle=-\varepsilon_{i-2}\varepsilon_{i-1}k_{i-1}V_{i-1}+k_{i}V_{i+1},\text{
\ }1<i<n,$ $\displaystyle V_{n}^{\prime}$
$\displaystyle=-\varepsilon_{n-2}\varepsilon_{n-1}k_{n-1}V_{n-1},$
where $\left\langle V_{i},V_{i}\right\rangle=\varepsilon_{i-1}=\mp 1$ [8].
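For instance (an illustration of ours, not taken from [8]), in the Minkowski 3-space $E_{1}^{3}$ the formulas above reduce to
$V_{1}^{\prime}=k_{1}V_{2},\ \ \ V_{2}^{\prime}=-\varepsilon_{0}\varepsilon_{1}k_{1}V_{1}+k_{2}V_{3},\ \ \ V_{3}^{\prime}=-\varepsilon_{1}\varepsilon_{2}k_{2}V_{2},$
which, for a spacelike curve with spacelike principal normal ($\varepsilon_{0}=\varepsilon_{1}=1$, so that $\varepsilon_{2}=-1$), become $V_{1}^{\prime}=k_{1}V_{2}$, $V_{2}^{\prime}=-k_{1}V_{1}+k_{2}V_{3}$, $V_{3}^{\prime}=k_{2}V_{2}$.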
## 3\. Inextensible Flows of Curves in $E_{1}^{n}$
Unless otherwise stated, we assume that
$\alpha:\left[0,l\right]\times\left[0,w\right)\mathbb{\longrightarrow}E_{1}^{n\text{
}}$
is a one parameter family of smooth non-null curves in $E_{1}^{n\text{ }}$,
where $l$ is the arclength of the initial curve. Suppose that $u$ is the curve
parametrization variable, $0\leq u\leq l$. If the speed of the non-null curve
$\alpha$ is given by $v=\left\|\frac{d\alpha}{du}\right\|$, then the arclength
of $\alpha$ is given as a function of $u$ by
$s(u)={\displaystyle\int\limits_{0}^{u}}\left\|\frac{\partial\alpha}{\partial
u}\right\|du={\displaystyle\int\limits_{0}^{u}}vdu.$
The operator $\frac{\partial}{\partial s}$ is given by
$\frac{\partial}{\partial s}=\frac{1}{v}\frac{\partial}{\partial u}.$ (3.1)
In this case; the arclength is as follows $ds=vdu$.
###### Definition 3.1.
Let $\alpha$ be a differentiable non-null curve and
$\left\\{V_{1},V_{2},...,V_{n}\right\\}$ be the Frenet frame of $\alpha$ in
Euclidean n-space. Any flow of the non-null curve can be expressed as follows
$\frac{\partial\alpha}{\partial
t}={\displaystyle\sum\limits_{i=1}^{n}}f_{i}V_{i}.$
Here, $f_{i}$ is the $i^{th}$ scalar speed of the non-null curve $\alpha.$
Let the arclength variation be
$s(u,t)={\displaystyle\int\limits_{0}^{u}}vdu.$
In $E_{1}^{n\text{ }}$, the requirement that the non-null curve not be subject
to any elongation or compression can be expressed by the condition
$\frac{\partial}{\partial
t}s(u,t)={\displaystyle\int\limits_{0}^{u}}\frac{\partial v}{\partial
t}du=0,\text{ \ }u\in\left[0,l\right].$ (3.2)
###### Definition 3.2.
Let $\alpha$ be a non-null curve in $E_{1}^{n\text{ }}.$ A non-null curve
evolution $\alpha(u,t)$ and its flow $\frac{\partial\alpha}{\partial t}$ are
said to be inextensible if
$\frac{\partial}{\partial t}\left\|\frac{\partial\alpha}{\partial
u}\right\|=0.$
Before deriving the necessary and sufficient condition for inelastic non-null
curve flow, we need the following lemma.
###### Lemma 3.3.
Let $\left\\{V_{1},V_{2},...,V_{n}\right\\}$ be the Frenet frame of non-null
curve $\alpha\ $and $\frac{\partial\alpha}{\partial
t}={\displaystyle\sum\limits_{i=1}^{n}}f_{i}V_{i}$ be a smooth flow of
$\alpha$ in $E_{1}^{n\text{ }}.$ Then we have the following equality:
$\frac{\partial v}{\partial t}=\varepsilon_{0}\frac{\partial f_{1}}{\partial
u}-\varepsilon_{1}f_{2}vk_{1}.$ (3.3)
###### Proof.
As $\frac{\partial}{\partial u}$ and $\frac{\partial}{\partial t}$ commute and
$v^{2}=\left\langle\frac{\partial\alpha}{\partial
u},\frac{\partial\alpha}{\partial u}\right\rangle,$ we have
$\displaystyle 2v\frac{\partial v}{\partial t}$
$\displaystyle=\frac{\partial}{\partial
t}\left\langle\frac{\partial\alpha}{\partial u},\frac{\partial\alpha}{\partial
u}\right\rangle$ $\displaystyle=2\left\langle\frac{\partial\alpha}{\partial
u},\frac{\partial}{\partial
u}\left({\displaystyle\sum\limits_{i=1}^{n}}f_{i}V_{i}\right)\right\rangle$
$\displaystyle=2\left\langle
vV_{1},{\displaystyle\sum\limits_{i=1}^{n}}\frac{\partial f_{i}}{\partial
u}V_{i}+{\displaystyle\sum\limits_{i=1}^{n}}f_{i}\frac{\partial
V_{i}}{\partial u}\right\rangle$ $\displaystyle=2\left\langle
vV_{1},\frac{\partial f_{1}}{\partial u}V_{1}+f_{1}\frac{\partial
V_{1}}{\partial u}+...+\frac{\partial f_{n}}{\partial
u}V_{n}+f_{n}\frac{\partial V_{n}}{\partial u}\right\rangle$
$\displaystyle=2\left\langle vV_{1},\frac{\partial f_{1}}{\partial
u}V_{1}+f_{1}vk_{1}V_{2}+...+\frac{\partial f_{n}}{\partial
u}V_{n}-f_{n}\varepsilon_{n-2}\varepsilon_{n-1}vk_{n-1}V_{n-1}\right\rangle$
$\displaystyle=2\left(\varepsilon_{0}\frac{\partial f_{1}}{\partial
u}-\varepsilon_{1}f_{2}vk_{1}\right).$
This clearly forces
$\frac{\partial v}{\partial t}=\varepsilon_{0}\frac{\partial f_{1}}{\partial
u}-\varepsilon_{1}f_{2}vk_{1}.$
∎
###### Theorem 3.4.
Let $\left\\{V_{1},V_{2},...,V_{n}\right\\}$ be the moving Frenet frame of the
non-null curve $\alpha$ and $\frac{\partial\alpha}{\partial
t}={\displaystyle\sum\limits_{i=1}^{n}}f_{i}V_{i}$ be a differentiable flow of
$\alpha$ in $E_{1}^{n\text{ }}$. In this case, the flow is inextensible if and
only if
$\frac{\partial f_{1}}{\partial s}=\varepsilon_{0}\varepsilon_{1}f_{2}k_{1}.$
(3.4)
###### Proof.
Let us assume that the non-null curve flow is inextensible. From equations
(3.2) and (3.3) it follows that
$\frac{\partial}{\partial
t}s(u,t)={\displaystyle\int\limits_{0}^{u}}\frac{\partial v}{\partial
t}du={\displaystyle\int\limits_{0}^{u}}\left(\varepsilon_{0}\frac{\partial
f_{1}}{\partial u}-\varepsilon_{1}f_{2}vk_{1}\right)du=0,\text{ \
}u\in\left[0,l\right].$
This clearly forces
$\varepsilon_{0}\frac{\partial f_{1}}{\partial
u}-\varepsilon_{1}f_{2}vk_{1}=0.$
Combining the last equation with (3.1) yields
$\frac{\partial f_{1}}{\partial s}=\varepsilon_{0}\varepsilon_{1}f_{2}k_{1}.$
Conversely, following a similar argument as above, the proof can be completed. ∎
Now, suppose that the non-null curve $\alpha$ is an arclength-parametrized
curve; that is, $v=1$ and the local coordinate $u$ corresponds to the curve
arclength $s$.
###### Lemma 3.5.
Let $\left\\{V_{1},V_{2},...,V_{n}\right\\}$ be the moving Frenet frame of the
non-null curve $\alpha$. The derivatives of
$\left\\{V_{1},V_{2},...,V_{n}\right\\}$ with respect to $t$ are
$\displaystyle\frac{\partial V_{1}}{\partial t}$
$\displaystyle=\left[{\displaystyle\sum\limits_{i=2}^{n-1}}\left(f_{i-1}k_{i-1}+\frac{\partial
f_{i}}{\partial
s}-\varepsilon_{i-1}\varepsilon_{i}f_{i+1}k_{i}\right)V_{i}\right]+\left(f_{n-1}k_{n-1}+\frac{\partial
f_{n}}{\partial s}\right)V_{n},$ $\displaystyle\frac{\partial V_{j}}{\partial
t}$
$\displaystyle=-\varepsilon_{0}\left(\varepsilon_{j-1}f_{j-1}k_{j-1}+\varepsilon_{j-1}\frac{\partial
f_{j}}{\partial
s}-\varepsilon_{j}f_{j+1}k_{j}\right)V_{1}+\left[{\displaystyle\sum\limits_{\begin{subarray}{c}k=2\\\
k\neq j\end{subarray}}^{n}}\Psi_{kj}V_{k}\right],\text{ \ }1<j<n,$
$\displaystyle\frac{\partial V_{n}}{\partial t}$
$\displaystyle=-\varepsilon_{0}\varepsilon_{n-1}\left(f_{n-1}k_{n-1}+\frac{\partial
f_{n}}{\partial
s}\right)V_{1}+\left[{\displaystyle\sum\limits_{k=2}^{n-1}}\Psi_{kn}V_{k}\right],$
where $\Psi_{kj}=\left\langle\frac{\partial V_{j}}{\partial
t},V_{k}\right\rangle,$ $k$ $\neq j,$ $1\leq j,k\leq n$ and
$\varepsilon_{i-1}=\left\langle V_{i},V_{i}\right\rangle=\pm 1,$ $1\leq i\leq
n$.
###### Proof.
As $\frac{\partial}{\partial t}$ and $\frac{\partial}{\partial s}$ commute, we
have
$\displaystyle\frac{\partial V_{1}}{\partial t}$
$\displaystyle=\frac{\partial}{\partial t}\left(\frac{\partial\alpha}{\partial
s}\right)=\frac{\partial}{\partial s}\left(\frac{\partial\alpha}{\partial
t}\right)=\frac{\partial}{\partial
s}\left({\displaystyle\sum\limits_{i=1}^{n}}f_{i}V_{i}\right)={\displaystyle\sum\limits_{i=1}^{n}}\frac{\partial
f_{i}}{\partial
s}V_{i}+{\displaystyle\sum\limits_{i=1}^{n}}f_{i}\frac{\partial
V_{i}}{\partial s}$ $\displaystyle=\frac{\partial f_{1}}{\partial
s}V_{1}+f_{1}\frac{\partial V_{1}}{\partial s}+\frac{\partial f_{2}}{\partial
s}V_{2}+f_{2}\frac{\partial V_{2}}{\partial s}+...+\frac{\partial
f_{n}}{\partial s}V_{n}+f_{n}\frac{\partial V_{n}}{\partial s}$
$\displaystyle=\frac{\partial f_{1}}{\partial
s}V_{1}+f_{1}k_{1}V_{2}+\frac{\partial f_{2}}{\partial
s}V_{2}+f_{2}\left(-\varepsilon_{0}\varepsilon_{1}k_{1}V_{1}+k_{2}V_{3}\right)+...+\frac{\partial
f_{n}}{\partial
s}V_{n}-f_{n}\varepsilon_{n-2}\varepsilon_{n-1}k_{n-1}V_{n-1}.$
Substituting the equation (3.4) into the last equation yields
$\frac{\partial V_{1}}{\partial
t}=\left[{\displaystyle\sum\limits_{i=2}^{n-1}}\left(f_{i-1}k_{i-1}+\frac{\partial
f_{i}}{\partial
s}-\varepsilon_{i-1}\varepsilon_{i}f_{i+1}k_{i}\right)V_{i}\right]+\left(f_{n-1}k_{n-1}+\frac{\partial
f_{n}}{\partial s}\right)V_{n}.$
Now, differentiating the Frenet frame with respect to $t$ for $1<j<n$, we obtain
$\displaystyle 0$ $\displaystyle=\frac{\partial}{\partial t}\left\langle
V_{1},V_{j}\right\rangle=\left\langle\frac{\partial V_{1}}{\partial
t},V_{j}\right\rangle+\left\langle V_{1},\frac{\partial V_{j}}{\partial
t}\right\rangle$
$\displaystyle=\left(\varepsilon_{j-1}f_{j-1}k_{j-1}+\varepsilon_{j-1}\frac{\partial
f_{j}}{\partial s}-\varepsilon_{j}f_{j+1}k_{j}\right)+\left\langle
V_{1},\frac{\partial V_{j}}{\partial t}\right\rangle.$ (3.5)
Thus, from the last equation we get
$\frac{\partial V_{j}}{\partial
t}=-\varepsilon_{0}\left(\varepsilon_{j-1}f_{j-1}k_{j-1}+\varepsilon_{j-1}\frac{\partial
f_{j}}{\partial
s}-\varepsilon_{j}f_{j+1}k_{j}\right)V_{1}+\left[{\displaystyle\sum\limits_{\begin{subarray}{c}k=2\\\
k\neq j\end{subarray}}^{n}}\Psi_{kj}V_{k}\right].$
Since $\left\langle V_{1},V_{n}\right\rangle=0,$ the result follows by the
same method as above:
$\frac{\partial V_{n}}{\partial
t}=-\varepsilon_{0}\varepsilon_{n-1}\left(f_{n-1}k_{n-1}+\frac{\partial
f_{n}}{\partial
s}\right)V_{1}+\left[{\displaystyle\sum\limits_{k=2}^{n-1}}\Psi_{kn}V_{k}\right].$
∎
###### Theorem 3.6.
Let the non-null curve flow $\frac{\partial\alpha}{\partial
t}={\displaystyle\sum\limits_{i=1}^{n}}f_{i}V_{i}$ be inextensible in
$E_{1}^{n\text{ }}$. Then the following system of partial differential
equations holds:
$\displaystyle\frac{\partial k_{1}}{\partial t}$
$\displaystyle=\varepsilon_{0}\varepsilon_{1}f_{2}k_{1}^{2}+f_{1}\frac{\partial
k_{1}}{\partial s}+\frac{\partial^{2}f_{2}}{\partial
s^{2}}-2\varepsilon_{1}\varepsilon_{2}\frac{\partial f_{3}}{\partial
s}k_{2}-\varepsilon_{1}\varepsilon_{2}f_{3}\frac{\partial k_{2}}{\partial
s}-\varepsilon_{1}\varepsilon_{2}f_{2}k_{2}^{2}-\varepsilon_{1}\varepsilon_{3}f_{4}k_{2}k_{3},$
$\displaystyle\frac{\partial k_{i-1}}{\partial t}$
$\displaystyle=-\varepsilon_{i-2}\varepsilon_{i-1}\frac{\partial\Psi_{(i-1)i}}{\partial
s}-\varepsilon_{i-2}\varepsilon_{i-1}\Psi_{(i-2)i}k_{i-2},$
$\displaystyle\frac{\partial k_{i}}{\partial t}$
$\displaystyle=\frac{\partial\Psi_{(i+1)i}}{\partial
s}-\varepsilon_{i}\varepsilon_{i+1}\Psi_{(i+2)i}k_{i+1},$
$\displaystyle\frac{\partial k_{n-1}}{\partial t}$
$\displaystyle=-\varepsilon_{n-2}\varepsilon_{n-1}\frac{\partial\Psi_{(n-1)n}}{\partial
s}-\varepsilon_{n-2}\varepsilon_{n-1}\Psi_{(n-2)n}k_{n-2}.$
###### Proof.
Noting that $\frac{\partial}{\partial s}\frac{\partial V_{1}}{\partial
t}=\frac{\partial}{\partial t}\frac{\partial V_{1}}{\partial s}$, we have
$\displaystyle\frac{\partial}{\partial s}\frac{\partial V_{1}}{\partial t}$
$\displaystyle=\frac{\partial}{\partial
s}\left[{\displaystyle\sum\limits_{i=2}^{n-1}}\left(f_{i-1}k_{i-1}+\frac{\partial
f_{i}}{\partial
s}-\varepsilon_{i-1}\varepsilon_{i}f_{i+1}k_{i}\right)V_{i}+\left(f_{n-1}k_{n-1}+\frac{\partial
f_{n}}{\partial s}\right)V_{n}\right]$
$\displaystyle={\displaystyle\sum\limits_{i=2}^{n-1}}\left[\left(\frac{\partial
f_{i-1}}{\partial s}k_{i-1}+f_{i-1}\frac{\partial k_{i-1}}{\partial
s}+\frac{\partial^{2}f_{i}}{\partial
s^{2}}-\varepsilon_{i-1}\varepsilon_{i}\frac{\partial f_{i+1}}{\partial
s}k_{i}-\varepsilon_{i-1}\varepsilon_{i}f_{i+1}\frac{\partial k_{i}}{\partial
s}\right)V_{i}\right]$
$\displaystyle+{\displaystyle\sum\limits_{i=2}^{n-1}}\left[\left(f_{i-1}k_{i-1}+\frac{\partial
f_{i}}{\partial
s}-\varepsilon_{i-1}\varepsilon_{i}f_{i+1}k_{i}\right)\frac{\partial
V_{i}}{\partial s}\right]$ (3.6) $\displaystyle+\left(\frac{\partial
f_{n-1}}{\partial s}k_{n-1}+f_{n-1}\frac{\partial k_{n-1}}{\partial
s}+\frac{\partial^{2}f_{n}}{\partial
s^{2}}\right)V_{n}+\left(f_{n-1}k_{n-1}+\frac{\partial f_{n}}{\partial
s}\right)\frac{\partial V_{n}}{\partial s}$
while
$\frac{\partial}{\partial t}\frac{\partial V_{1}}{\partial
s}=\frac{\partial}{\partial t}\left(k_{1}V_{2}\right)=\frac{\partial
k_{1}}{\partial t}V_{2}+k_{1}\frac{\partial V_{2}}{\partial t}.$ (3.7)
Therefore, from the equation (3.6) and (3.7) it is seen that
$\frac{\partial k_{1}}{\partial
t}=\varepsilon_{0}\varepsilon_{1}f_{2}k_{1}^{2}+f_{1}\frac{\partial
k_{1}}{\partial s}+\frac{\partial^{2}f_{2}}{\partial
s^{2}}-2\varepsilon_{1}\varepsilon_{2}\frac{\partial f_{3}}{\partial
s}k_{2}-\varepsilon_{1}\varepsilon_{2}f_{3}\frac{\partial k_{2}}{\partial
s}-\varepsilon_{1}\varepsilon_{2}f_{2}k_{2}^{2}-\varepsilon_{1}\varepsilon_{3}f_{4}k_{2}k_{3}.$
Since $\frac{\partial}{\partial s}\frac{\partial V_{i}}{\partial
t}=\frac{\partial}{\partial t}\frac{\partial V_{i}}{\partial s}$, we obtain
$\displaystyle\frac{\partial}{\partial s}\frac{\partial V_{i}}{\partial t}$
$\displaystyle=\frac{\partial}{\partial
s}\left[-\varepsilon_{0}\left(\varepsilon_{i-1}f_{i-1}k_{i-1}+\varepsilon_{i-1}\frac{\partial
f_{i}}{\partial
s}-\varepsilon_{i}f_{i+1}k_{i}\right)V_{1}+\left({\displaystyle\sum\limits_{\begin{subarray}{c}k=2\\\
k\neq i\end{subarray}}^{n}}\Psi_{ki}V_{k}\right)\right]$
$\displaystyle=\varepsilon_{0}\left(-\varepsilon_{i-1}\frac{\partial
f_{i-1}}{\partial s}k_{i-1}-\varepsilon_{i-1}f_{i-1}\frac{\partial
k_{i-1}}{\partial s}-\varepsilon_{i-1}\frac{\partial^{2}f_{i}}{\partial
s^{2}}+\varepsilon_{i}\frac{\partial f_{i+1}}{\partial
s}k_{i}+\varepsilon_{i}f_{i+1}\frac{\partial k_{i}}{\partial s}\right)V_{1}$
$\displaystyle+\left(-\varepsilon_{0}\varepsilon_{i-1}f_{i-1}k_{i-1}-\varepsilon_{0}\varepsilon_{i-1}\frac{\partial
f_{i}}{\partial
s}+\varepsilon_{0}\varepsilon_{i}f_{i+1}k_{i}\right)\frac{\partial
V_{1}}{\partial s}+{\displaystyle\sum\limits_{\begin{subarray}{c}k=2\\\ k\neq
i\end{subarray}}^{n}}\left(\frac{\partial\Psi_{ki}}{\partial
s}V_{k}+\Psi_{ki}\frac{\partial V_{k}}{\partial s}\right)$
while
$\displaystyle\frac{\partial}{\partial t}\frac{\partial V_{i}}{\partial s}$
$\displaystyle=\frac{\partial}{\partial
t}\left(-\varepsilon_{i-2}\varepsilon_{i-1}k_{i-1}V_{i-1}+k_{i}V_{i+1}\right)$
$\displaystyle=-\varepsilon_{i-2}\varepsilon_{i-1}\frac{\partial
k_{i-1}}{\partial
t}V_{i-1}-\varepsilon_{i-2}\varepsilon_{i-1}k_{i-1}\frac{\partial
V_{i-1}}{\partial t}+\frac{\partial k_{i}}{\partial
t}V_{i+1}+k_{i}\frac{\partial V_{i+1}}{\partial t}.$
Hence
$\frac{\partial k_{i-1}}{\partial
t}=-\varepsilon_{i-2}\varepsilon_{i-1}\frac{\partial\Psi_{(i-1)i}}{\partial
s}-\varepsilon_{i-2}\varepsilon_{i-1}\Psi_{(i-2)i}k_{i-2}$
and
$\frac{\partial k_{i}}{\partial t}=\frac{\partial\Psi_{(i+1)i}}{\partial
s}-\varepsilon_{i}\varepsilon_{i+1}\Psi_{(i+2)i}k_{i+1}.$
In the same way as above, considering $\frac{\partial}{\partial
s}\frac{\partial V_{n}}{\partial t}=\frac{\partial}{\partial t}\frac{\partial
V_{n}}{\partial s}$, we obtain
$\frac{\partial k_{n-1}}{\partial
t}=-\varepsilon_{n-2}\varepsilon_{n-1}\frac{\partial\Psi_{(n-1)n}}{\partial
s}-\varepsilon_{n-2}\varepsilon_{n-1}\Psi_{(n-2)n}k_{n-2}.$
∎
## References
* [1] M. Barros, General helices and a theorem of Lancret, Proc. AMS, 125, 1503–9, (1997).
* [2] G. Chirikjian, J. Burdick, A modal approach to hyper-redundant manipulator kinematics, IEEE Trans. Robot. Autom. 10, 343–354 (1994).
* [3] G.S. Chirikjian, Closed-form primitives for generating volume preserving deformations, ASME J.Mechanical Design 117, 347–354 (1995).
* [4] M. Desbrun, M.-P. Cani-Gascuel, Active implicit surface for animation, in: Proc. Graphics Interface Canadian Inf. Process. Soc., 143–150 (1998).
* [5] M. Gage, R.S. Hamilton, The heat equation shrinking convex plane curves, J. Differential Geom. 23, 69–96 (1986).
* [6] M. Grayson, The heat equation shrinks embedded plane curves to round points, J. Differential Geom. 26, 285–314 (1987).
* [7] N. Gürbüz, Inextensible flows of spacelike, timelike and null curves, Int. J. Contemp. Math. Sciences, Vol. 4, (2009), no. 32, 1599-1604.
* [8] K. İlarslan, Some special curves on non-Euclidean manifolds, Ph.D. Thesis, Ankara University, Graduate School of Natural and Applied Sciences, (2002).
* [9] M. Kass, A. Witkin, D. Terzopoulos, Snakes: active contour models, in: Proc. 1st Int. Conference on Computer Vision, 259–268 (1987).
* [10] D. Y. Kwon, F.C. Park, D.P. Chi, Inextensible flows of curves and developable surfaces, Appl. Math. Lett. 18 (2005) 1156-1162.
* [11] H.Q. Lu, J.S. Todhunter, T.W. Sze, Congruence conditions for nonplanar developable surfaces and their application to surface recognition, CVGIP, Image Underst. 56, 265–285 (1993).
* [12] A.O. Öğrenmiş, M. Yeneroğlu, Inextensible curves in the Galilean Space, International Journal of the Physical Sciences, 5(9), (2010), 1424-1427.
* [13] Ö. G. Yıldız, S. Ersoy, M. Masal, A note on inextensible flows of curves on oriented surface, arXiv:1106.2012v1.
* [14] Ö. G. Yıldız, M. Tosun, S. Ö. Karakuş, A note on inextensible flows of curves in $E^{n}$, arXiv:1207.1543v1
1302.6132
# Topological Edge States and Fractional Quantum Hall Effect from Umklapp
Scattering
Jelena Klinovaja Department of Physics, University of Basel,
Klingelbergstrasse 82, CH-4056 Basel, Switzerland Daniel Loss Department of
Physics, University of Basel, Klingelbergstrasse 82, CH-4056 Basel,
Switzerland
###### Abstract
We study anisotropic lattice strips in the presence of a magnetic field in the
quantum Hall effect regime. At specific magnetic fields, causing resonant
Umklapp scattering, the system is gapped in the bulk and supports chiral edge
states in close analogy to topological insulators. In electron gases with
stripes, these gaps result in plateaus for the Hall conductivity exactly at
the known fillings $n/m$ (both positive integers and $m$ odd) for the integer
and fractional quantum Hall effect. For double strips we find topological
phase transitions with phases that support midgap edge states with flat
dispersion. The topological effects predicted here could be tested directly in
optical lattices.
###### pacs:
71.10.Fd; 05.30.Pr; 71.10.Pm; 73.43.-f
Introduction. Condensed matter systems with topological properties have
attracted wide attention over the years [Wilczek; Hasan_RMP; Zhang_RMP;
Alicea_2012]. E.g., the integer and fractional quantum Hall effects (IQHE and
FQHE) [Klitzing; Tsui_82] find their origin in the topology of the system
[QHE_Review_Prange; book_Jain; Hofstadter; Laughlin; Streda; Thouless;
Halperin; Laughlin_FQHI; Halperin_crystall; oded_2013; Kane_lines].
Similarly, band insulators with topological properties have become of central
interest recently [Hasan_RMP; Zhang_RMP; Topological_class_Ludwig], as well
as exotic topological states like fractionally charged fermions [Jackiw_Rebbi;
FracCharge_Su; FracCharge_Kivelson; FracCharge_Chamon; CDW;
Two_field_Klinovaja_Stano_Loss_2012; Klinovaja_Loss_FF_1D;
Frac_graphene_2007; Franz_2009] or Majorana fermions [Read_2000; Nayak; fu;
Nagaosa_2009; Sato; lutchyn_majorana_wire_2010; oreg_majorana_wire_2010;
Klinovaja_CNT; Sticlet_2012; bilayer_MF_2012; MF_nanoribbon].
Here, we study two-dimensional (2D) strips in magnetic fields, both
analytically and numerically, modeled by an anisotropic tight-binding lattice.
We identify a striking mechanism by which the magnetic field induces resonant
Umklapp scattering (across Brillouin zones) that opens a gap in the bulk
spectrum and results in chiral edge states in analogy to topological
insulators. Quite remarkably, the resonant scattering occurs at well-known
filling factors for the IQHE [Klitzing] and FQHE [Tsui_82], $\nu=n/m$, where
$n,m$ are positive integers and $m$ odd. We argue below that this mechanism
could shed new light on the QHE for 2D electron gases as well, where the
formation e.g. of a periodic structure (energetically favored also by a
Peierls transition) might support the periodic structure needed for the
Umklapp scattering.
Finally, we consider a double strip of spinless fermions, or, equivalently, a
single strip with spinful fermions. Here, we find two topological phase
transitions accompanied by a closing and reopening of the bulk gap, and, as a
result, three distinct phases. The trivial phase is without edge states. The
first topological phase is similar to the one discussed above and carries two
propagating chiral modes at each edge for $\nu=1$. The second topological
phase has only one state at each edge. Quite remarkably, its dispersion is
flat throughout the Brillouin zone, making this phase an attractive playground
for studying interaction effects.
Figure 1: (a) Strip: two-dimensional lattice of width in $x$-direction, $W$,
with unit cell defined by the lattice constants $a_{x}$ and $a_{y}$. The
hopping amplitudes in $y$-direction, $t_{y1}$ and $t_{y2}$, carry the phase
$\phi$, arising from a perpendicular magnetic field ${\bf B}$, and we assume
$t_{x}\gg t_{y1},t_{y2}$. (b) Doubly degenerate spectrum of $H_{x}$ [see Eqs.
(1)] for the rows with $\sigma=1$ (upper blue) and with $\sigma=\bar{1}$
(lower green). The hoppings $t_{y1}$ (dashed line) and $t_{y2}$ (dotted line)
induce resonant scattering between right ($R_{\sigma}$) and left
($L_{\bar{\sigma}}$) movers, which open gaps at the Fermi wavevectors $\pm
k_{F}$ defined by the chemical potential $\mu$.
Anisotropic tight-binding model. We consider a 2D tight-binding model of a
strip that is of width $W$ in $x$\- and extended in $y$-direction, see Fig.
1a. The unit cell is composed of two lattice sites ($\sigma=\pm 1$) along $y$
that are distinguished by two hopping amplitudes, $t_{y1}$ and $t_{y2}$. Every
site is labeled by three indices $n,m$, and $\sigma$, where $n$ ($m$) denotes
the position of the unit cell along the $x$\- ($y$-) axis. The hopping along
$x$ is described by
$\displaystyle H_{x}=-t_{x}\sum_{n,m,\sigma}(c^{\dagger}_{n+1,m,\sigma}$
$\displaystyle c_{n,m,\sigma}+h.c.),$ (1)
where $t_{x}$ is the hopping amplitude in $x$-direction and $c_{n,m,\sigma}$
the annihilation operator acting on a spinless fermion at site $(n,m,\sigma)$,
and the sum runs over all sites. The hopping along $y$ is described by
$\displaystyle
H_{y}=\sum_{n,m}(t_{y1}e^{-in\phi}c^{\dagger}_{n,m,1}c_{n,m,\bar{1}}$
$\displaystyle\hskip
80.0pt+t_{y2}e^{in\phi}c^{\dagger}_{n,m+1,1}c_{n,m,\bar{1}}+h.c.),$ (2)
Without loss of generality, we consider $t_{y2}\geq t_{y1}\geq 0$. The phase
$\phi$ is generated by a uniform magnetic field ${\bf B}$ applied in
perpendicular $z$-direction, see Fig. 1. We choose the corresponding vector
potential $\bf A$, to be along the $y$-axis, ${\bf A}=(Bx){\bf e}_{y}$,
yielding the phase $\phi=eBa_{x}a_{y}/2\hbar c$. Here, $a_{x,y}$ are the
corresponding lattice constants.
Chiral edge states. Taking into account translational invariance of the system
in $y$-direction, we introduce the momentum $k_{y}$ via Fourier
transformation, see Appendix A. The Hamiltonians become diagonal in $k_{y}$,
i.e.,
${H}_{x}=-t_{x}\sum_{n,k_{y},\sigma}(c^{\dagger}_{n+1,k_{y},\sigma}c_{n,k_{y},\sigma}+h.c.)$,
and ${H}_{y}=\sum_{n,k_{y}}[(t_{y1}e^{-in\phi}+t_{y2}e^{i(n\phi-
k_{y}a_{y})})c^{\dagger}_{n,k_{y},1}c_{n,k_{y},\bar{1}}+h.c.]$. Thus, the
eigenfunctions of ${H}={H}_{x}+{H}_{y}$ factorize as
$e^{ik_{y}y}\psi_{k_{y}}(x)$, where we focus now on $\psi_{k_{y}}(x)$ and
treat $k_{y}$ as parameter.
Figure 2: Spectrum $E(k_{y})$ of left edge state (red line or dots)
propagating along $y$ for a strip ($t_{y1}/t_{x}=0.02$, $t_{y2}/t_{x}=0.1$) of
width $W/a_{x}=801$ and with phase $\phi=\pi/2$, obtained (a) analytically
[see Eq. (4)] and (b) numerically [see Eqs. (1) and (2)]. For
${\bar{k}}_{-}<k_{y}<{\bar{k}}_{+}$, there exists one edge state at each edge.
The left (red dots) and the right (blue dots) edge states are chiral and
propagate in opposite $y$-directions. For each $k_{y}$ [(c) $k_{y}a_{y}=\pi$,
(d) $k_{y}a_{y}=13\pi/12$] there is one left (red dots) and one right (blue
dots) edge state if Eq. (5) is satisfied. Here, $\epsilon(N)$ corresponds to
the $N$th energy level. The probability density $|\psi_{\sigma}|^{2}$ (e)
[(f)] of the left [right] localized state decays exponentially in agreement
with the analytical result, Eq. (14) in Appendix C.
Assuming for the moment periodic boundary conditions also in $x$-direction, we
introduce a momentum $k_{x}$, see Appendix A. Immediately, the well-known
spectrum of $H_{x}$ follows, $\epsilon_{\sigma}=-2t_{x}\cos(k_{x}a_{x})$,
which is twofold degenerate in $\sigma$. The chemical potential $\mu$ is fixed
such that the Fermi wavevector $k_{F}$ is connected to the phase by
$\phi=2k_{F}a_{x}$. Next, we allow for hopping along $y$ as a small
perturbation to the $x$-hopping, i.e., $t_{x}\gg t_{y1},t_{y2}$, see Fig. 1b.
To obtain analytical solutions, it is most convenient to go to the continuum
description Two_field_Klinovaja_Stano_Loss_2012 ;
MF_wavefunction_klinovaja_2012 . The annihilation operator $\Psi(x)$ close to
the Fermi level can be represented in terms of slowly varying right
[$R_{\sigma}(x)$] and left [$L_{\sigma}(x)$] movers,
$\Psi(x)=\sum_{\sigma}R_{\sigma}(x)e^{ik_{F}x}+L_{\sigma}(x)e^{-ik_{F}x}$. The
corresponding Hamiltonian density $\mathcal{H}$ can be rewritten in terms of
the Pauli matrices $\tau_{i}$ ($\sigma_{i}$), acting on the right-left mover
(lattice) subspace (see Appendix B)
$\Psi=(R_{1},L_{1},R_{\bar{1}},L_{\bar{1}})$, as
$\displaystyle\mathcal{H}=\hbar\upsilon_{F}\hat{k}\tau_{3}+\frac{t_{y1}}{2}(\sigma_{1}\tau_{1}+\sigma_{2}\tau_{2})+\frac{t_{y2}}{2}\Big{[}(\sigma_{1}\tau_{1}-\sigma_{2}\tau_{2})$
$\displaystyle\hskip
10.0pt\times\cos(k_{y}a_{y})+(\sigma_{2}\tau_{1}+\sigma_{1}\tau_{2})\sin(k_{y}a_{y})\Big{]}.$
(3)
Here, $\hbar\hat{k}=-i\hbar\partial_{x}$ is the momentum operator with
eigenvalues $k$ taken from the corresponding Fermi points $\pm k_{F}$, and
$\upsilon_{F}=2(t_{x}/\hbar)a_{x}\sin(k_{F}a_{x})$ is the Fermi velocity. The
spectrum with periodic boundary conditions in $x$\- and $y$-directions is
given by $\epsilon_{l,\pm}=\pm\sqrt{(\hbar\upsilon_{F}k)^{2}+t_{yl}^{2}}$,
where $l=1,2$. This mechanism of opening a gap by oscillatory terms causing
resonant scattering between the Fermi points is similar to a Peierls
transition [Braunecker_Loss_Klin_2009]. Next, we turn to a strip of finite
width $W$, see Fig. 1. We note that the bulk spectrum $\epsilon_{l,\pm}$ is
fully gapped, so states localized at the edges can potentially exist. To
explore this possibility we consider a semi-infinite nanowire ($x\geq 0$) and
follow the method developed in Refs. [MF_wavefunction_klinovaja_2012;
Two_field_Klinovaja_Stano_Loss_2012], assuming that the localization length
of bound states $\xi$ is much smaller than $W$. This allows us to impose
vanishing boundary conditions only at $x=0$,
$\psi_{k_{y}}(x)|_{x=0}\equiv(\psi_{1},\psi_{\bar{1}})|_{x=0}=0$. This
boundary condition is fulfilled only at one energy inside the gap
$|E|<t_{y1}$,
$E(k_{y})=\frac{t_{y1}t_{y2}\sin(k_{y}a_{y})}{\sqrt{t_{y1}^{2}+t_{y2}^{2}-2t_{y1}t_{y2}\cos(k_{y}a_{y})}},$
(4)
if the following condition is satisfied,
$t_{y1}>t_{y2}\cos(k_{y}a_{y}).$ (5)
The edge states exist for momenta $k_{y}\in({\bar{k}}_{-},{\bar{k}}_{+})$,
where ${\bar{k}}_{\pm}a_{y}=\pi\pm\arcsin(t_{y1}/t_{y2})$. An edge state
touches a boundary of the gap at ${\bar{k}}_{\pm}$ and afterwards disappears
in the bulk spectrum of the delocalized states, see Fig. 2. The only regime in
which the edge state exists for all momenta corresponds to the uniform strip
with $t_{y1}=t_{y2}$. The localization length $\xi$ is determined by
$\xi=\hbar\upsilon_{F}/\sqrt{t_{y1}^{2}-E^{2}}$, with wavefunction given in
Appendix C. The edge state gets delocalized if its energy is close to the
boundary of the gap, so that $\xi$ becomes comparable to $W$. Similarly, we
can search for the solution decaying to the right, $x\leq 0$, and obtain Eq.
(4) with reversed sign, $E(k_{y})\to-E(k_{y})$.
We have confirmed above results by diagonalizing the tight-binding Hamiltonian
${H}$ (in $k_{y}$-representation) numerically, see Fig. 2. The spectrum
$E(k_{y})$ of the edge states localized along $x$ and propagating along $y$
shows that at any fixed energy inside the gap there can be only one edge state
at a given edge, see Fig. 2. Moreover, the edge states are chiral, as can be
seen from the velocity, $\upsilon=\partial E/\partial k_{y}$, which is
negative (positive) for the left (right) edge state. This means that transport
along a given edge of the strip can occur only in one direction determined by
the direction of the $\bf B$-field, see Fig. 1. Quite remarkably, the obtained
spectrum of edge states is of the same form as for topological insulators
Volkov_Pankratov ; Hasan_RMP with a single Dirac cone consisting of two
crossing non-degenerate subgap modes. Due to the macroscopic separation of
opposite edges, $\xi\ll W$, these modes are protected from getting scattered
into each other by impurities, phonons or interaction effects, so that the
Dirac cone cannot be eliminated by perturbations that are local and smaller
than the gap. Thus, the edge states are topologically stable.
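For readers who wish to reproduce this check, here is a rough, self-contained numerical sketch of ours (not the authors' code): it assembles the strip Hamiltonian of Eqs. (1) and (2) at fixed $k_{y}$ as a $2N\times 2N$ matrix over the $(n,\sigma)$ sites with open boundaries in $x$ and diagonalizes it while scanning $k_{y}$. The parameters mirror Fig. 2 except for a smaller width, and in-gap chiral edge branches are expected near $E\approx-\sqrt{2}\,t_{x}$, the chemical potential fixed by $\phi=2k_{F}a_{x}$.

```python
import numpy as np

def strip_hamiltonian(ky, N, tx=1.0, ty1=0.02, ty2=0.1, phi=np.pi / 2, ay=1.0):
    """H(k_y) of Eqs. (1)-(2): N unit cells along x (open boundaries),
    two sites sigma = +/-1 per cell; basis index 2*n + s with s = 0
    (sigma = 1) and s = 1 (sigma = -1)."""
    H = np.zeros((2 * N, 2 * N), dtype=complex)
    for n in range(N):
        # y-coupling within cell n: t_y1 e^{-i n phi} + t_y2 e^{i(n phi - k_y a_y)}
        tyy = ty1 * np.exp(-1j * n * phi) + ty2 * np.exp(1j * (n * phi - ky * ay))
        H[2 * n, 2 * n + 1] = tyy
        H[2 * n + 1, 2 * n] = np.conjugate(tyy)
        if n + 1 < N:
            for s in (0, 1):          # hopping along x, sigma-conserving
                H[2 * (n + 1) + s, 2 * n + s] = -tx
                H[2 * n + s, 2 * (n + 1) + s] = -tx
    return H

N = 201                                 # smaller than W/a_x = 801 of Fig. 2, for speed
kys = np.linspace(0.0, 2 * np.pi, 81)   # k_y a_y scanned over the Brillouin zone
spectra = np.array([np.linalg.eigvalsh(strip_hamiltonian(ky, N)) for ky in kys])

# Count states inside a window around mu = -sqrt(2) t_x; states found there
# for intermediate k_y should be the chiral edge branches of Fig. 2(b).
mu = -np.sqrt(2.0)
in_gap = np.abs(spectra - mu) < 0.015
print(spectra.shape, int(in_gap.sum()))
```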
Umklapp scattering. We note that the system considered here is equivalent to a
2D system in the QHE regime. The above choice of magnetic field corresponds to
the IQHE with filling factor $\nu=1$, which is in agreement with one chiral
mode at each edge. To explore the possibility of inducing quantum Hall physics
at other filling factors, we fix the chemical potential $\mu=-\sqrt{2}t_{x}$,
so that the system has local particle-hole symmetry, and change the B-field.
Above, the phase $\phi$, generated by the magnetic field, was equal to $\pi/2$
for $k_{F}=\pi/4a_{x}$. However, this is not the only choice of phase leading
to the opening of a gap $\Delta_{g}$ at the Fermi level. Due to the
periodicity of the spectrum, resonant scattering between branches of
$\epsilon_{\sigma}$ occurs also via Umklapp scattering between different
Brillouin zones, with a phase $\phi=\pm\frac{p\pi}{2n}+2\pi q$, where $q$ is
an integer, $n$ a positive integer, and $p$ a positive odd integer with $p<2n$
and coprime to $n$. As a result, the Fermi level lies in the bulk gap for the
filling factors [footnote_filling] $\nu=n/(4qn\pm p)$, which can be rewritten
as $\nu=n/m$, where $m>0$ is an odd integer. The size of the gap can be
estimated as $\Delta_{g}\propto t_{yl}(t_{yl}/t_{x})^{(n-1)}$ (assuming for
simplicity $t_{y1}=t_{y2}$). Finally, we remark that we checked numerically
that the gap $\Delta_{g}$ never closes for any finite ratio of $t_{x}$ and
$t_{yl}$ larger or smaller than one.
FQHE in 2D electron gas. We conjecture that the same mechanism of resonant
Umklapp scattering can also lead to the integer or fractional QHE in 2D
electron gases. At high magnetic fields interaction effects get strongly
enhanced and electrons tend to order themselves into periodic structures.
Wigner_Girvin ; Wigner_Kivelson ; Wigner_Jain ; book_Jain ; anisotropy_CDW ;
strips_Review ; anisotropy_West ; QHE_strips In particular, we assume the
formation of stripes that are aligned along $x$ and periodically repeated in
$y$. While particles can hop between stripes, they move now continuously
inside them with quadratic dispersion. Thus, the perturbative solutions found
above in terms of right- and left-movers still apply. In addition, we assume
that the interaction generates a charge-density wave at wavevector $K$ inside
the stripe, providing an effective periodic potential in $x$, which will lead
to gaps. Thus, $K$ becomes the period of the Brillouin zone, and, at
$1/4$-filling of the lowest subband, we have $K=8k_{F}$ [footnote:B_period].
Again, the $B$-field leads to a gap at $k_{F}$ only if it results in phases
commensurate with $k_{F}$, i.e. $eBa_{y}/2\hbar c=\pm 2k_{F}\frac{p}{n}+qK$,
which is equivalent to $\nu=n/(4qn\pm p)\equiv n/m$. In this regime, there is
an additional energy gain due to a Peierls transition, favoring even more a
formation of periodic structures with gaps. Moreover, from this mean field
scenario it follows that the IQHE is more stable against disorder than the
FQHE since the latter requires Umklapp scattering through higher Brillouin
zones. The gap and the edge states can be tested in transport experiments. For
example, the Hall conductance $\sigma_{H}$ exhibits plateaus on the classical
dependence curve $\sigma_{H}\propto 1/B$, if the Fermi level lies in the gap.
This can be shown by using the Streda formula [Streda],
$\sigma_{H}=ec\left(\frac{\partial{\bar{n}}}{\partial B}\right)_{\mu}$, where
${\bar{n}}$ is the bulk particle density which is uniquely determined by the
magnetic field via the relation $\nu eBa_{y}/2\hbar c=2k_{F}$ (for this it is
crucial that $K$ depends on $k_{F}$). If $\mu$ lies in the gap opened by the
Umklapp scattering, the change in the density for fixed $\mu$, $d{\bar{n}}$,
due to a change in the magnetic field, $dB$, is given by
$d{\bar{n}}=\frac{dk_{F}}{\pi a_{y}/2}=\nu\frac{e}{hc}dB$. Hence, the
conductance assumes the FQHE plateaus, $\sigma_{H}=\nu{e^{2}}/{hc}$, with
$\nu=n/m$ and independent of any lattice parameters. The width of the plateaus
is determined by the gap size $\Delta_{g}\propto t_{yl}(t_{yl}/\mu)^{(n-1)}$.
We note that the FQHE can be mapped back to the IQHE by redefining the charge
$e$ by $e^{\star}=e/m$ that allows us to keep all scattering events inside the
first Brillouin zone. Finally, the distance between stripes can be estimated
as ${a_{y}}/{2}={k_{F}}/{\pi\bar{n}}$.
Figure 3: Double strip for spinless fermions. The intra-strip couplings are
the same as in Fig. 1. The inter-strip coupling along $z$ is described by the
hopping amplitude $t_{z}$.
Figure 4: Spectrum of a double strip obtained by numerical diagonalization of
the tight-binding Hamiltonian $H_{2}=H_{x}+H_{y}+H_{z}$ for the same
parameters as in Fig. 2. (a) If $|t_{z}|<t_{y1}$ [$t_{z}/t_{x}=0.01$], there
are four edge states at any energy within the gap; two of them localized at
the left and two at the right edge. (b) If $t_{y1}<|t_{z}|<t_{y2}$
[$t_{z}/t_{x}=0.05$], there is one zero-energy edge state (i.e. with flat
dispersion) at each edge. (c) If $|t_{z}|>t_{y2}$ [$t_{z}/t_{x}=1.5$], there
are no edge states in the gap.
Double strip. Now we consider a double strip, consisting of two coupled strips
for spinless fermions, see Fig. 3. This system is equivalent to a single strip
but for spinful fermions. Below we focus on the double strip but we note that
one can identify the upper (lower) strip with spin up (down) state labeled by
$\eta=1$ ($\eta=-1$). The chemical potentials $\mu_{\eta}$ are opposite for
the two strips, $\mu_{1}=-\mu_{\bar{1}}=\sqrt{2}t_{x}$, and are chosen such
that the system is at half-filling. For the spinful strip the role of
$\mu_{\eta}$ is played by the Zeeman term, $\mu_{\eta}=\eta g\mu_{B}B$,
arising from the magnetic field $\bf B$ along $z$. Here, $g$ is the
$g$-factor, and $\mu_{B}$ is the Bohr magneton. The inter-strip hopping
amplitude $t_{z}$ is also accompanied by the phase $\phi^{z}$ arising from a
uniform magnetic field ${\bf B}_{2}$ applied along $y$,
$H_{z}=\sum_{n,m,\sigma,\eta}t_{z}e^{in\eta\phi^{z}}c_{n,m,\sigma,\eta}^{\dagger}c_{n,m,\sigma,\bar{\eta}}.$
(6)
The amplitude of ${\bf B}_{2}$ is chosen so that $\phi^{z}=(e/\hbar
c)B_{2}a_{x}a_{z}=\pi$. This amounts to applying a total field ${\bf
B}_{tot}={\bf B}+{\bf B}_{2}$ in the $yz$-plane. Moreover, the same $H_{z}$ is
generated in the spinful case by a $B_{2}$-field applied along $y$ with an
amplitude that oscillates in space along $x$ with period $2a_{x}$, or,
alternatively, by Rashba spin-orbit interaction [Braunecker_Loss_Klin_2009].
Again, we search for wavefunctions in terms of right and left mover fields
defined around two Fermi points, $k_{F1}=\pi/4a_{x}$ (upper strip) and
$k_{F\bar{1}}=3\pi/4a_{x}$ (lower strip). The linearized Hamiltonian density
for this extended model in terms of the Pauli matrices $\eta_{i}$ acting on
the upper/lower strip subspace is given by [see Eq. (3)]
$\displaystyle\mathcal{H}_{2}=\mathcal{H}(\sigma_{2}\to\sigma_{2}\eta_{3})+t_{z}\eta_{1}\tau_{1}\,.$
(7)
The resulting spectrum is
$\epsilon_{l,\pm,p}=\pm\sqrt{(\hbar\upsilon_{F}k)^{2}+(t_{yl}+pt_{z})^{2}}$,
with $p=\pm 1$. We note that the gap vanishes if $|t_{z}|=t_{y1}$ or
$|t_{z}|=t_{y2}$. The closing and reopening of a gap often signals a
topological phase transition. Indeed, imposing vanishing boundary conditions
at the edges, we find that there are two edge states (one at each edge) at
zero energy, $E=0$, if the following topological criterion is satisfied,
$t_{y1}<|t_{z}|<t_{y2}$, see Fig. 4. The wavefunction of the left edge state
for $t_{z}>0$ is given by (with $x=na_{x}$)
$\displaystyle\psi^{L}_{E=0}=(f(x),-if^{*}(x),i(-1)^{n}f(x),(-1)^{n+1}f^{*}(x)),$
$\displaystyle f(x)=e^{-ik_{y}a_{y}/2}e^{-(k_{2-}+ik_{F1})x}-\cos(k_{y}a_{y}/2)\,e^{-(k_{1-}-ik_{F1})x}+i\sin(k_{y}a_{y}/2)\,e^{-(k_{1+}-ik_{F1})x}.$ (8)
The basis
$(\psi_{1,1},\psi_{1,\bar{1}},\psi_{\bar{1},1},\psi_{\bar{1},\bar{1}})$ is
composed of wavefunctions $\psi_{\eta,\sigma}$ defined at the $\sigma$-unit
lattice site of the $\eta$-strip. The smallest wavevectors
$k_{l,\pm}=|t_{l}\pm t_{z}|/\hbar\upsilon_{F}$ determine the localization
length of the edge state. We note that the probability densities
$|\psi_{\eta,\sigma}(x)|^{2}$ are uniform inside the unit cell.
If $|t_{z}|<t_{y1}$, there are two edge states at each edge for $E$ inside the
gap; see Fig. 4. These states, propagating in $y$, have a momentum $k_{y}$
determined by $E$. This case is similar to one strip with spinless particles
discussed above. We note that the edge states found here are the higher-
dimensional extensions of the end bound states found in one-dimensional
nanowires [Two_field_Klinovaja_Stano_Loss_2012; footnote1] and ladders
[Klinovaja_Loss_FF_1D]. For $|t_{z}|>t_{y2}$, there are no edge states; see
Fig. 4. Finally, the most interesting regime here is $t_{y1}<|t_{z}|<t_{y2}$,
where there is one zero-energy edge state at each edge, see Fig. 4. Such
states with flat dispersion are expected to be strongly affected by
interactions.
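A minimal sketch of the gap structure implied by the spectrum $\epsilon_{l,\pm,p}$ is given below; the values of $t_{y1}$, $t_{y2}$, and $t_{z}$ are illustrative and simply reproduce the three regimes of Fig. 4.

```python
# Sketch: bulk gap of the double strip from the linearized spectrum
# eps_{l,pm,p} = ±sqrt((hbar*v_F*k)^2 + (t_yl + p*t_z)^2), p = ±1.
# The gap closes at |t_z| = t_y1 and |t_z| = t_y2; zero-energy edge states
# are expected for t_y1 < |t_z| < t_y2. Parameter values are illustrative.
t_y1, t_y2 = 0.03, 0.1        # in units of t_x (illustrative)

def bulk_gap(t_z):
    """Minimal |eps| over k, reached at k = 0: min over l, p of |t_yl + p*t_z|."""
    return min(abs(t_yl + p * t_z) for t_yl in (t_y1, t_y2) for p in (+1, -1))

for t_z in (0.01, 0.05, 1.5):   # the three values used in Fig. 4
    gap = bulk_gap(t_z)
    topological = t_y1 < abs(t_z) < t_y2
    print(f"t_z = {t_z}: gap = {gap:.3f}, zero-energy edge states expected: {topological}")
```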
Conclusions. We have studied topological regimes of strips with modulated
hopping amplitudes in the presence of magnetic fields. We found topological
regimes with chiral edge states at filling factors that correspond to integer
and fractional QHE regimes. We showed that double strips sustain topological
phases with mid-gap edge states with flat dispersion. Optical lattices
[Lewenstein] seem to be promising candidates for implementing directly the
anisotropic tight-binding models considered here.
This work is supported by the Swiss NSF, NCCR Nanoscience, and NCCR QSIT.
## Appendix A Fourier transformation
We introduce the momentum $k_{y}$ via Fourier transformation,
$c_{n,m,\sigma}=\frac{1}{\sqrt{N_{y}}}\sum_{k_{y}}e^{imk_{y}a_{y}}c_{n,k_{y},\sigma},$
(9)
where $N_{y}$ is the number of lattice sites in $y$-direction. By analogy, we
introduce a momentum $k_{x}$,
$c_{n,k_{y},\sigma}=(1/\sqrt{N_{x}})\sum_{k_{x}}e^{ink_{x}a_{x}}c_{k_{x}k_{y},\sigma},$
(10)
where $N_{x}$ is the number of lattice sites along $x$.
## Appendix B Effective Hamiltonian
Here we derive the spectrum of $H=H_{x}+H_{y}$ in the continuum limit
following Refs. [MF_wavefunction_klinovaja_2012; Two_field_Klinovaja_Stano_Loss_2012].
The annihilation operator in position
space $\Psi(x)$ close to the Fermi level is expressed in terms of slowly
varying right [$R_{\sigma}(x)$] and left [$L_{\sigma}(x)$] movers as
$\Psi(x)=\sum_{\sigma}[R_{\sigma}(x)e^{ik_{F}x}+L_{\sigma}(x)e^{-ik_{F}x}].$
(11)
As a consequence, $H_{x}$ results in the kinetic term,
$\displaystyle H_{x}^{lin}=-i\hbar\upsilon_{F}\sum_{\sigma}\int dx\ [R_{\sigma}^{\dagger}(x)\partial_{x}R_{\sigma}(x)-L_{\sigma}^{\dagger}(x)\partial_{x}L_{\sigma}(x)],$ (12)
and $H_{y}$ results in a term that couples right and left movers,
$\displaystyle H_{y}^{lin}=t_{y1}\int dx\ \big[L_{1}^{\dagger}(x)R_{\bar{1}}(x)+h.c.\big]+t_{y2}\int dx\ \big[e^{-ik_{y}a_{y}}R_{1}^{\dagger}(x)L_{\bar{1}}(x)+h.c.\big].$ (13)
Here, we used the specific choice of the parameters,
$2k_{F}a_{x}=\phi\in(0,\pi)$. It is this term that leads to resonant
scattering and opens the gaps at the Fermi points. The Fermi velocity
$\upsilon_{F}$ depends on the Fermi wavevector,
$\hbar\upsilon_{F}=2t_{x}a_{x}\sin(k_{F}a_{x})$. The Hamiltonian density
$\mathcal{H}$ corresponding to $H^{lin}=H_{x}^{lin}+H_{y}^{lin}=\int dx\
\Psi^{\dagger}\mathcal{H}\Psi$, can be rewritten in terms of the Pauli
matrices $\tau_{i}$ ($\sigma_{i}$), acting on the right-left mover (lattice)
subspace, $\Psi=(R_{1},L_{1},R_{\bar{1}},L_{\bar{1}})$, leading directly to
Eq. (3) in the main text.
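As a cross-check of this construction (a sketch based on our reading of Eqs. (12) and (13), not the authors' code), one can assemble the $4\times 4$ Hamiltonian density in the basis $\Psi=(R_{1},L_{1},R_{\bar{1}},L_{\bar{1}})$ and verify that its eigenvalues are $\pm\sqrt{(\hbar\upsilon_{F}k)^{2}+t_{yl}^{2}}$; all numerical values below are illustrative.

```python
# Sketch (our reading of Eqs. (12)-(13), not the paper's code): assemble the
# 4x4 linearized Hamiltonian density in the basis Psi = (R_1, L_1, R_1bar, L_1bar)
# and check that its eigenvalues are ±sqrt((hbar*v_F*k)^2 + t_yl^2), l = 1, 2.
import numpy as np

hbar_vF = 1.0                  # units with hbar*v_F = 1 (illustrative)
t_y1, t_y2 = 0.03, 0.1         # illustrative hopping amplitudes
ky_ay = 0.3                    # phase k_y * a_y entering the t_y2 term

def H_lin(k):
    vk = hbar_vF * k
    H = np.zeros((4, 4), dtype=complex)
    H[0, 0], H[1, 1], H[2, 2], H[3, 3] = vk, -vk, vk, -vk   # kinetic term (R: +, L: -)
    H[1, 2] = H[2, 1] = t_y1                                # t_y1 L_1^dag R_1bar + h.c.
    H[0, 3] = t_y2 * np.exp(-1j * ky_ay)                    # t_y2 e^{-i k_y a_y} R_1^dag L_1bar
    H[3, 0] = np.conj(H[0, 3])                              # + h.c.
    return H

k = 0.05
eigs = np.sort(np.linalg.eigvalsh(H_lin(k)))
expected = np.sort([s * np.sqrt((hbar_vF * k) ** 2 + t ** 2)
                    for s in (+1, -1) for t in (t_y1, t_y2)])
print(np.allclose(eigs, expected))   # True
```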
## Appendix C Wavefunction of left edge state
The wavefunction of the state localized at the left edge of a spinless strip
is given by
$\displaystyle\psi_{k_{y}}(x)=\begin{pmatrix}e^{-k_{2}x-i(k_{F}x+\theta)}-e^{-k_{1}x+i(k_{F}x-\theta)}\\ e^{-k_{2}x+ik_{F}x}-e^{-k_{1}x-ik_{F}x}\end{pmatrix},$ (14)
where we suppress the normalization factor. Here, we introduced the notations
$e^{i\theta}=(E+i\sqrt{t_{y1}^{2}-E^{2}})/t_{y1}$ and
$k_{l}=\sqrt{t_{yl}^{2}-E^{2}}/\hbar\upsilon_{F}$.
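A short numerical sketch (with illustrative parameter values) confirms that this wavefunction vanishes at $x=0$ and decays into the bulk on the scale $1/k_{l}$:

```python
# Sketch: evaluate the unnormalized left-edge wavefunction of Eq. (14) and
# verify that it vanishes at x = 0 and decays on the scale 1/k_l,
# k_l = sqrt(t_yl^2 - E^2)/(hbar*v_F). Parameter values are illustrative.
import numpy as np

hbar_vF = 1.0
t_y1, t_y2 = 0.03, 0.1
E = 0.01                                   # energy inside the gap, |E| < t_y1
k_F = np.pi / 4

theta = np.angle((E + 1j * np.sqrt(t_y1 ** 2 - E ** 2)) / t_y1)
k1 = np.sqrt(t_y1 ** 2 - E ** 2) / hbar_vF
k2 = np.sqrt(t_y2 ** 2 - E ** 2) / hbar_vF

def psi(x):
    c1 = np.exp(-k2 * x - 1j * (k_F * x + theta)) - np.exp(-k1 * x + 1j * (k_F * x - theta))
    c2 = np.exp(-k2 * x + 1j * k_F * x) - np.exp(-k1 * x - 1j * k_F * x)
    return np.array([c1, c2])

print(np.allclose(psi(0.0), 0.0))          # True: vanishing boundary condition
print(np.abs(psi(5.0 / k1)))               # exponentially small far from the edge
```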
## References
* (1) F. Wilczek, Nat. Phys. 5, 614 (2009).
* (2) M. Z. Hasan and C. L. Kane, Rev. Mod. Phys. 82, 3045 (2010).
* (3) X. Qi and S. Zhang, Rev. Mod. Phys. 83, 1057 (2011).
* (4) J. Alicea, Rep. Prog. Phys. 75, 076501 (2012).
* (5) K. v. Klitzing, G. Dorda, and M. Pepper, Phys. Rev. Lett. 45, 494 (1980).
* (6) D. C. Tsui, H. L. Stormer, and A. C. Gossard, Phys. Rev. Lett. 48, 1559 (1982).
* (7) D. R. Hofstadter, Phys. Rev. B 14, 2239 (1976).
* (8) R. B. Laughlin, Phys. Rev. B 23, 5632 (1981).
* (9) D. J. Thouless, M. Kohmoto, M. P. Nightingale, and M. den Nijs, Phys. Rev. Lett. 49, 405 (1982).
* (10) P. Streda, J. Phys. C: Solid State Phys. 15, L717 (1982).
* (11) B. I. Halperin, Phys. Rev. B 25, 2185 (1982).
* (12) R. B. Laughlin, Phys. Rev. Lett. 50, 1395 (1983).
* (13) Z. Tesanovic, F. Axel, and B. I. Halperin, Phys. Rev. B 39, 8525 (1989).
* (14) R. E. Prange and S. M. Girvin, The Quantum Hall Effect (Springer, New York, 1990).
* (15) J. K. Jain, Composite Fermions (Cambridge University Press, Cambridge, 2007).
* (16) J. C. Y. Teo and C. L. Kane, arXiv:1111.2617.
* (17) Y. E. Kraus, Z. Ringel, and O. Zilberberg, arXiv:1302.2647.
* (18) S. Ryu, A. P. Schnyder, A. Furusaki, and A. W. W. Ludwig, New J. Phys. 12, 065010 (2010).
* (19) R. Jackiw and C. Rebbi, Phys. Rev. D 13, 3398 (1976).
* (20) W. P. Su, J. R. Schrieffer, and A. J. Heeger, Phys. Rev. Lett. 42, 1698 (1979).
* (21) S. Kivelson and J. R. Schrieffer, Phys. Rev. B 25, 6447, (1982).
* (22) C. Hou, C. Chamon, and C. Mudry, Phys. Rev. Lett. 98, 186809 (2007).
* (23) B. Seradjeh, J. E. Moore, and M. Franz, Phys. Rev. Lett. 103, 066402 (2009).
* (24) L. Santos, Y. Nishida, C. Chamon, and C. Mudry, Phys. Rev. B 83, 104522 (2011).
* (25) S. Gangadharaiah, L. Trifunovic, and D. Loss, Phys. Rev. Lett. 108, 136803 (2012).
* (26) J. Klinovaja, P. Stano, and D. Loss, Phys. Rev. Lett. 109, 236801 (2012).
* (27) J. Klinovaja and D. Loss, Phys. Rev. Lett. 110, 126402 (2013).
* (28) N. Read and D. Green, Phys. Rev. B 61, 10267 (2000).
* (29) C. Nayak, S. H. Simon, A. Stern, M. Freedman, and S. Das Sarma, Rev. Mod. Phys. 80, 1083 (2008).
* (30) L. Fu and C. L. Kane, Phys. Rev. Lett. 100, 096407 (2008).
* (31) Y. Tanaka, T. Yokoyama, and N. Nagaosa, Phys. Rev. Lett. 103, 107002 (2009).
* (32) M. Sato and S. Fujimoto, Phys. Rev. B 79, 094504 (2009).
* (33) R. M. Lutchyn, J. D. Sau, and S. Das Sarma, Phys. Rev. Lett. 105, 077001 (2010).
* (34) Y. Oreg, G. Refael, and F. von Oppen, Phys. Rev. Lett. 105, 177002 (2010).
* (35) J. Klinovaja, S. Gangadharaiah, and D. Loss, Phys. Rev. Lett. 108, 196804 (2012).
* (36) D. Sticlet, C. Bena, and P. Simon, Phys. Rev. Lett. 108, 096802 (2012).
* (37) J. Klinovaja, G. J. Ferreira, and D. Loss, Phys. Rev. B 86, 235416 (2012).
* (38) J. Klinovaja and D. Loss, Phys. Rev. X 3, 011008 (2013).
* (39) J. Klinovaja and D. Loss, Phys. Rev. B 86, 085408 (2012).
* (40) B. Braunecker, G. I. Japaridze, J. Klinovaja, and D. Loss, Phys. Rev. B 82, 045127 (2010).
* (41) B. A. Volkov and O. A. Pankratov, Pis’ma Zh. Eksp. Teor. Fiz. 42, 145 (1985) [JETP Lett. 42, 178 (1985)].
* (42) Here we use the standard definition of the filling factor $\nu=N/(BS/(hc/e))$, where $S$ is the strip area.
* (43) P. K. Lam and S. M. Girvin, Phys. Rev. B 30, 473 (1984).
* (44) S. Kivelson, C. Kallin, D. P. Arovas, and J. R. Schrieffer, Phys. Rev. Lett. 56, 873 (1986).
* (45) C. Chang, C. Toeke, G. Jeon, and J. K. Jain, Phys. Rev. B 73, 155323 (2006).
* (46) A. A. Koulakov, M. M. Fogler, and B. I. Shklovskii, Phys. Rev. Lett. 76, 499 (1996).
* (47) C. Wexler and O. Ciftja, Int. J. Mod. Phys. B 20, 747 (2006).
* (48) M. P. Lilly, K. B. Cooper, J. P. Eisenstein, L. N. Pfeiffer, and K. W. West, Phys. Rev. Lett. 83, 824 (1999).
* (49) I. V. Kukushkin, V. Umansky, K. v. Klitzing, and J. H. Smet, Phys. Rev. Lett. 106, 206804 (2011).
* (50) This can be seen by identifying $k_{y}a_{y}$ in Eq. (4) with the phase shift $\theta$ in Eq. (9) of Ref. [Two_field_Klinovaja_Stano_Loss_2012].
* (51) At given $\nu$, the $x$ periodicity of the full stripe model is given by $n\pi/k_{F}$ (with reciprocal vector $2k_{F}/n$). However, in leading order (in $t_{y}$) only the $K$-periodicity of the unperturbed Bloch functions enters. We emphasize that, in contrast to the lattice model, the period of our stripe model depends explicitly on $k_{F}$. This is possible only in the presence of interactions: one additional electron changes the period for all electrons.
* (52) M. Lewenstein, A. Sanpera, V. Ahufinger, B. Damski, A. Sen, and U. Sen, Adv. Phys. 56, 243 (2007).
|
arxiv-papers
| 2013-02-25T16:06:27 |
2024-09-04T02:49:42.106664
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Jelena Klinovaja and Daniel Loss",
"submitter": "Jelena Klinovaja",
"url": "https://arxiv.org/abs/1302.6132"
}
|
1302.6269
|
EUROPEAN ORGANIZATION FOR NUCLEAR RESEARCH (CERN)
CERN-PH-EP-2013-017 LHCb-PAPER-2013-001 25 February 2013
Determination of the $X(3872)$ meson quantum numbers
The LHCb collaboration†††Authors are listed on the following pages.
The quantum numbers of the $X(3872)$ meson are determined to be
$J^{PC}=1^{++}$ based on angular correlations in $B^{+}\rightarrow
X(3872)K^{+}$ decays, where $X(3872)\rightarrow\pi^{+}\pi^{-}J/\psi$ and
$J/\psi\rightarrow\mu^{+}\mu^{-}$. The data correspond to 1.0 fb-1 of $pp$
collisions collected by the LHCb detector. The only alternative assignment
allowed by previous measurements, $J^{PC}=2^{-+}$, is rejected with a
confidence level equivalent to more than eight Gaussian standard deviations
using the likelihood-ratio test in the full angular phase space. This result
favors exotic explanations of the $X(3872)$ state.
Submitted to Physical Review Letters
© CERN on behalf of the LHCb collaboration, license CC-BY-3.0.
LHCb collaboration
R. Aaij40, C. Abellan Beteta35,n, B. Adeva36, M. Adinolfi45, C. Adrover6, A.
Affolder51, Z. Ajaltouni5, J. Albrecht9, F. Alessio37, M. Alexander50, S.
Ali40, G. Alkhazov29, P. Alvarez Cartelle36, A.A. Alves Jr24,37, S. Amato2, S.
Amerio21, Y. Amhis7, L. Anderlini17,f, J. Anderson39, R. Andreassen59, R.B.
Appleby53, O. Aquines Gutierrez10, F. Archilli18, A. Artamonov 34, M.
Artuso56, E. Aslanides6, G. Auriemma24,m, S. Bachmann11, J.J. Back47, C.
Baesso57, V. Balagura30, W. Baldini16, R.J. Barlow53, C. Barschel37, S.
Barsuk7, W. Barter46, Th. Bauer40, A. Bay38, J. Beddow50, F. Bedeschi22, I.
Bediaga1, S. Belogurov30, K. Belous34, I. Belyaev30, E. Ben-Haim8, M.
Benayoun8, G. Bencivenni18, S. Benson49, J. Benton45, A. Berezhnoy31, R.
Bernet39, M.-O. Bettler46, M. van Beuzekom40, A. Bien11, S. Bifani12, T.
Bird53, A. Bizzeti17,h, P.M. Bjørnstad53, T. Blake37, F. Blanc38, J. Blouw11,
S. Blusk56, V. Bocci24, A. Bondar33, N. Bondar29, W. Bonivento15, S. Borghi53,
A. Borgia56, T.J.V. Bowcock51, E. Bowen39, C. Bozzi16, T. Brambach9, J. van
den Brand41, J. Bressieux38, D. Brett53, M. Britsch10, T. Britton56, N.H.
Brook45, H. Brown51, I. Burducea28, A. Bursche39, G. Busetto21,q, J.
Buytaert37, S. Cadeddu15, O. Callot7, M. Calvi20,j, M. Calvo Gomez35,n, A.
Camboni35, P. Campana18,37, A. Carbone14,c, G. Carboni23,k, R. Cardinale19,i,
A. Cardini15, H. Carranza-Mejia49, L. Carson52, K. Carvalho Akiba2, G.
Casse51, M. Cattaneo37, Ch. Cauet9, M. Charles54, Ph. Charpentier37, P.
Chen3,38, N. Chiapolini39, M. Chrzaszcz 25, K. Ciba37, X. Cid Vidal36, G.
Ciezarek52, P.E.L. Clarke49, M. Clemencic37, H.V. Cliff46, J. Closier37, C.
Coca28, V. Coco40, J. Cogan6, E. Cogneras5, P. Collins37, A. Comerma-
Montells35, A. Contu15, A. Cook45, M. Coombes45, S. Coquereau8, G. Corti37, B.
Couturier37, G.A. Cowan38, D. Craik47, S. Cunliffe52, R. Currie49, C.
D’Ambrosio37, P. David8, P.N.Y. David40, I. De Bonis4, K. De Bruyn40, S. De
Capua53, M. De Cian39, J.M. De Miranda1, M. De Oyanguren Campos35,o, L. De
Paula2, W. De Silva59, P. De Simone18, D. Decamp4, M. Deckenhoff9, L. Del
Buono8, D. Derkach14, O. Deschamps5, F. Dettori41, A. Di Canto11, H.
Dijkstra37, M. Dogaru28, S. Donleavy51, F. Dordei11, A. Dosil Suárez36, D.
Dossett47, A. Dovbnya42, F. Dupertuis38, R. Dzhelyadin34, A. Dziurda25, A.
Dzyuba29, S. Easo48,37, U. Egede52, V. Egorychev30, S. Eidelman33, D. van
Eijk40, S. Eisenhardt49, U. Eitschberger9, R. Ekelhof9, L. Eklund50, I. El
Rifai5, Ch. Elsasser39, D. Elsby44, A. Falabella14,e, C. Färber11, G.
Fardell49, C. Farinelli40, S. Farry12, V. Fave38, D. Ferguson49, V. Fernandez
Albor36, F. Ferreira Rodrigues1, M. Ferro-Luzzi37, S. Filippov32, C.
Fitzpatrick37, M. Fontana10, F. Fontanelli19,i, R. Forty37, O. Francisco2, M.
Frank37, C. Frei37, M. Frosini17,f, S. Furcas20, E. Furfaro23, A. Gallas
Torreira36, D. Galli14,c, M. Gandelman2, P. Gandini54, Y. Gao3, J. Garofoli56,
P. Garosi53, J. Garra Tico46, L. Garrido35, C. Gaspar37, R. Gauld54, E.
Gersabeck11, M. Gersabeck53, T. Gershon47,37, Ph. Ghez4, V. Gibson46, V.V.
Gligorov37, C. Göbel57, D. Golubkov30, A. Golutvin52,30,37, A. Gomes2, H.
Gordon54, M. Grabalosa Gándara5, R. Graciani Diaz35, L.A. Granado Cardoso37,
E. Graugés35, G. Graziani17, A. Grecu28, E. Greening54, S. Gregson46, O.
Grünberg58, B. Gui56, E. Gushchin32, Yu. Guz34, T. Gys37, C. Hadjivasiliou56,
G. Haefeli38, C. Haen37, S.C. Haines46, S. Hall52, T. Hampson45, S. Hansmann-
Menzemer11, N. Harnew54, S.T. Harnew45, J. Harrison53, T. Hartmann58, J. He7,
V. Heijne40, K. Hennessy51, P. Henrard5, J.A. Hernando Morata36, E. van
Herwijnen37, E. Hicks51, D. Hill54, M. Hoballah5, C. Hombach53, P. Hopchev4,
W. Hulsbergen40, P. Hunt54, T. Huse51, N. Hussain54, D. Hutchcroft51, D.
Hynds50, V. Iakovenko43, M. Idzik26, P. Ilten12, R. Jacobsson37, A. Jaeger11,
E. Jans40, P. Jaton38, F. Jing3, M. John54, D. Johnson54, C.R. Jones46, B.
Jost37, M. Kaballo9, S. Kandybei42, M. Karacson37, T.M. Karbach37, I.R.
Kenyon44, U. Kerzel37, T. Ketel41, A. Keune38, B. Khanji20, O. Kochebina7, I.
Komarov38,31, R.F. Koopman41, P. Koppenburg40, M. Korolev31, A. Kozlinskiy40,
L. Kravchuk32, K. Kreplin11, M. Kreps47, G. Krocker11, P. Krokovny33, F.
Kruse9, M. Kucharczyk20,25,j, V. Kudryavtsev33, T. Kvaratskheliya30,37, V.N.
La Thi38, D. Lacarrere37, G. Lafferty53, A. Lai15, D. Lambert49, R.W.
Lambert41, E. Lanciotti37, G. Lanfranchi18,37, C. Langenbruch37, T. Latham47,
C. Lazzeroni44, R. Le Gac6, J. van Leerdam40, J.-P. Lees4, R. Lefèvre5, A.
Leflat31,37, J. Lefrançois7, S. Leo22, O. Leroy6, B. Leverington11, Y. Li3, L.
Li Gioi5, M. Liles51, R. Lindner37, C. Linn11, B. Liu3, G. Liu37, J. von
Loeben20, S. Lohn37, J.H. Lopes2, E. Lopez Asamar35, N. Lopez-March38, H. Lu3,
D. Lucchesi21,q, J. Luisier38, H. Luo49, F. Machefert7, I.V.
Machikhiliyan4,30, F. Maciuc28, O. Maev29,37, S. Malde54, G. Manca15,d, G.
Mancinelli6, U. Marconi14, R. Märki38, J. Marks11, G. Martellotti24, A.
Martens8, L. Martin54, A. Martín Sánchez7, M. Martinelli40, D. Martinez
Santos41, D. Martins Tostes2, A. Massafferri1, R. Matev37, Z. Mathe37, C.
Matteuzzi20, E. Maurice6, A. Mazurov16,32,37,e, J. McCarthy44, R. McNulty12,
A. Mcnab53, B. Meadows59,54, F. Meier9, M. Meissner11, M. Merk40, D.A.
Milanes8, M.-N. Minard4, J. Molina Rodriguez57, S. Monteil5, D. Moran53, P.
Morawski25, M.J. Morello22,s, R. Mountain56, I. Mous40, F. Muheim49, K.
Müller39, R. Muresan28, B. Muryn26, B. Muster38, P. Naik45, T. Nakada38, R.
Nandakumar48, I. Nasteva1, M. Needham49, N. Neufeld37, A.D. Nguyen38, T.D.
Nguyen38, C. Nguyen-Mau38,p, M. Nicol7, V. Niess5, R. Niet9, N. Nikitin31, T.
Nikodem11, A. Nomerotski54, A. Novoselov34, A. Oblakowska-Mucha26, V.
Obraztsov34, S. Oggero40, S. Ogilvy50, O. Okhrimenko43, R. Oldeman15,d,37, M.
Orlandea28, J.M. Otalora Goicochea2, P. Owen52, B.K. Pal56, A. Palano13,b, M.
Palutan18, J. Panman37, A. Papanestis48, M. Pappagallo50, C. Parkes53, C.J.
Parkinson52, G. Passaleva17, G.D. Patel51, M. Patel52, G.N. Patrick48, C.
Patrignani19,i, C. Pavel-Nicorescu28, A. Pazos Alvarez36, A. Pellegrino40, G.
Penso24,l, M. Pepe Altarelli37, S. Perazzini14,c, D.L. Perego20,j, E. Perez
Trigo36, A. Pérez-Calero Yzquierdo35, P. Perret5, M. Perrin-Terrin6, G.
Pessina20, K. Petridis52, A. Petrolini19,i, A. Phan56, E. Picatoste Olloqui35,
B. Pietrzyk4, T. Pilař47, D. Pinci24, S. Playfer49, M. Plo Casasus36, F.
Polci8, G. Polok25, A. Poluektov47,33, E. Polycarpo2, D. Popov10, B.
Popovici28, C. Potterat35, A. Powell54, J. Prisciandaro38, V. Pugatch43, A.
Puig Navarro38, G. Punzi22,r, W. Qian4, J.H. Rademacker45, B.
Rakotomiaramanana38, M.S. Rangel2, I. Raniuk42, N. Rauschmayr37, G. Raven41,
S. Redford54, M.M. Reid47, A.C. dos Reis1, S. Ricciardi48, A. Richards52, K.
Rinnert51, V. Rives Molina35, D.A. Roa Romero5, P. Robbe7, E. Rodrigues53, P.
Rodriguez Perez36, S. Roiser37, V. Romanovsky34, A. Romero Vidal36, J.
Rouvinet38, T. Ruf37, F. Ruffini22, H. Ruiz35, P. Ruiz Valls35,o, G.
Sabatino24,k, J.J. Saborido Silva36, N. Sagidova29, P. Sail50, B. Saitta15,d,
C. Salzmann39, B. Sanmartin Sedes36, M. Sannino19,i, R. Santacesaria24, C.
Santamarina Rios36, E. Santovetti23,k, M. Sapunov6, A. Sarti18,l, C.
Satriano24,m, A. Satta23, M. Savrie16,e, D. Savrina30,31, P. Schaack52, M.
Schiller41, H. Schindler37, M. Schlupp9, M. Schmelling10, B. Schmidt37, O.
Schneider38, A. Schopper37, M.-H. Schune7, R. Schwemmer37, B. Sciascia18, A.
Sciubba24, M. Seco36, A. Semennikov30, K. Senderowska26, I. Sepp52, N.
Serra39, J. Serrano6, P. Seyfert11, M. Shapkin34, I. Shapoval42,37, P.
Shatalov30, Y. Shcheglov29, T. Shears51,37, L. Shekhtman33, O. Shevchenko42,
V. Shevchenko30, A. Shires52, R. Silva Coutinho47, T. Skwarnicki56, N.A.
Smith51, E. Smith54,48, M. Smith53, M.D. Sokoloff59, F.J.P. Soler50, F.
Soomro18,37, D. Souza45, B. Souza De Paula2, B. Spaan9, A. Sparkes49, P.
Spradlin50, F. Stagni37, S. Stahl11, O. Steinkamp39, S. Stoica28, S. Stone56,
B. Storaci39, M. Straticiuc28, U. Straumann39, V.K. Subbiah37, S. Swientek9,
V. Syropoulos41, M. Szczekowski27, P. Szczypka38,37, T. Szumlak26, S.
T’Jampens4, M. Teklishyn7, E. Teodorescu28, F. Teubert37, C. Thomas54, E.
Thomas37, J. van Tilburg11, V. Tisserand4, M. Tobin39, S. Tolk41, D.
Tonelli37, S. Topp-Joergensen54, N. Torr54, E. Tournefier4,52, S. Tourneur38,
M.T. Tran38, M. Tresch39, A. Tsaregorodtsev6, P. Tsopelas40, N. Tuning40, M.
Ubeda Garcia37, A. Ukleja27, D. Urner53, U. Uwer11, V. Vagnoni14, G.
Valenti14, R. Vazquez Gomez35, P. Vazquez Regueiro36, S. Vecchi16, J.J.
Velthuis45, M. Veltri17,g, G. Veneziano38, M. Vesterinen37, B. Viaud7, D.
Vieira2, X. Vilasis-Cardona35,n, A. Vollhardt39, D. Volyanskyy10, D. Voong45,
A. Vorobyev29, V. Vorobyev33, C. Voß58, H. Voss10, R. Waldi58, R. Wallace12,
S. Wandernoth11, J. Wang56, D.R. Ward46, N.K. Watson44, A.D. Webber53, D.
Websdale52, M. Whitehead47, J. Wicht37, J. Wiechczynski25, D. Wiedner11, L.
Wiggers40, G. Wilkinson54, M.P. Williams47,48, M. Williams55, F.F. Wilson48,
J. Wishahi9, M. Witek25, S.A. Wotton46, S. Wright46, S. Wu3, K. Wyllie37, Y.
Xie49,37, F. Xing54, Z. Xing56, Z. Yang3, R. Young49, X. Yuan3, O.
Yushchenko34, M. Zangoli14, M. Zavertyaev10,a, F. Zhang3, L. Zhang56, W.C.
Zhang12, Y. Zhang3, A. Zhelezov11, A. Zhokhov30, L. Zhong3, A. Zvyagin37.
1Centro Brasileiro de Pesquisas Físicas (CBPF), Rio de Janeiro, Brazil
2Universidade Federal do Rio de Janeiro (UFRJ), Rio de Janeiro, Brazil
3Center for High Energy Physics, Tsinghua University, Beijing, China
4LAPP, Université de Savoie, CNRS/IN2P3, Annecy-Le-Vieux, France
5Clermont Université, Université Blaise Pascal, CNRS/IN2P3, LPC, Clermont-
Ferrand, France
6CPPM, Aix-Marseille Université, CNRS/IN2P3, Marseille, France
7LAL, Université Paris-Sud, CNRS/IN2P3, Orsay, France
8LPNHE, Université Pierre et Marie Curie, Université Paris Diderot,
CNRS/IN2P3, Paris, France
9Fakultät Physik, Technische Universität Dortmund, Dortmund, Germany
10Max-Planck-Institut für Kernphysik (MPIK), Heidelberg, Germany
11Physikalisches Institut, Ruprecht-Karls-Universität Heidelberg, Heidelberg,
Germany
12School of Physics, University College Dublin, Dublin, Ireland
13Sezione INFN di Bari, Bari, Italy
14Sezione INFN di Bologna, Bologna, Italy
15Sezione INFN di Cagliari, Cagliari, Italy
16Sezione INFN di Ferrara, Ferrara, Italy
17Sezione INFN di Firenze, Firenze, Italy
18Laboratori Nazionali dell’INFN di Frascati, Frascati, Italy
19Sezione INFN di Genova, Genova, Italy
20Sezione INFN di Milano Bicocca, Milano, Italy
21Sezione INFN di Padova, Padova, Italy
22Sezione INFN di Pisa, Pisa, Italy
23Sezione INFN di Roma Tor Vergata, Roma, Italy
24Sezione INFN di Roma La Sapienza, Roma, Italy
25Henryk Niewodniczanski Institute of Nuclear Physics Polish Academy of
Sciences, Kraków, Poland
26AGH University of Science and Technology, Kraków, Poland
27National Center for Nuclear Research (NCBJ), Warsaw, Poland
28Horia Hulubei National Institute of Physics and Nuclear Engineering,
Bucharest-Magurele, Romania
29Petersburg Nuclear Physics Institute (PNPI), Gatchina, Russia
30Institute of Theoretical and Experimental Physics (ITEP), Moscow, Russia
31Institute of Nuclear Physics, Moscow State University (SINP MSU), Moscow,
Russia
32Institute for Nuclear Research of the Russian Academy of Sciences (INR RAN),
Moscow, Russia
33Budker Institute of Nuclear Physics (SB RAS) and Novosibirsk State
University, Novosibirsk, Russia
34Institute for High Energy Physics (IHEP), Protvino, Russia
35Universitat de Barcelona, Barcelona, Spain
36Universidad de Santiago de Compostela, Santiago de Compostela, Spain
37European Organization for Nuclear Research (CERN), Geneva, Switzerland
38Ecole Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
39Physik-Institut, Universität Zürich, Zürich, Switzerland
40Nikhef National Institute for Subatomic Physics, Amsterdam, The Netherlands
41Nikhef National Institute for Subatomic Physics and VU University Amsterdam,
Amsterdam, The Netherlands
42NSC Kharkiv Institute of Physics and Technology (NSC KIPT), Kharkiv, Ukraine
43Institute for Nuclear Research of the National Academy of Sciences (KINR),
Kyiv, Ukraine
44University of Birmingham, Birmingham, United Kingdom
45H.H. Wills Physics Laboratory, University of Bristol, Bristol, United
Kingdom
46Cavendish Laboratory, University of Cambridge, Cambridge, United Kingdom
47Department of Physics, University of Warwick, Coventry, United Kingdom
48STFC Rutherford Appleton Laboratory, Didcot, United Kingdom
49School of Physics and Astronomy, University of Edinburgh, Edinburgh, United
Kingdom
50School of Physics and Astronomy, University of Glasgow, Glasgow, United
Kingdom
51Oliver Lodge Laboratory, University of Liverpool, Liverpool, United Kingdom
52Imperial College London, London, United Kingdom
53School of Physics and Astronomy, University of Manchester, Manchester,
United Kingdom
54Department of Physics, University of Oxford, Oxford, United Kingdom
55Massachusetts Institute of Technology, Cambridge, MA, United States
56Syracuse University, Syracuse, NY, United States
57Pontifícia Universidade Católica do Rio de Janeiro (PUC-Rio), Rio de
Janeiro, Brazil, associated to 2
58Institut für Physik, Universität Rostock, Rostock, Germany, associated to 11
59University of Cincinnati, Cincinnati, OH, United States, associated to 56
aP.N. Lebedev Physical Institute, Russian Academy of Science (LPI RAS),
Moscow, Russia
bUniversità di Bari, Bari, Italy
cUniversità di Bologna, Bologna, Italy
dUniversità di Cagliari, Cagliari, Italy
eUniversità di Ferrara, Ferrara, Italy
fUniversità di Firenze, Firenze, Italy
gUniversità di Urbino, Urbino, Italy
hUniversità di Modena e Reggio Emilia, Modena, Italy
iUniversità di Genova, Genova, Italy
jUniversità di Milano Bicocca, Milano, Italy
kUniversità di Roma Tor Vergata, Roma, Italy
lUniversità di Roma La Sapienza, Roma, Italy
mUniversità della Basilicata, Potenza, Italy
nLIFAELS, La Salle, Universitat Ramon Llull, Barcelona, Spain
oIFIC, Universitat de Valencia-CSIC, Valencia, Spain
pHanoi University of Science, Hanoi, Viet Nam
qUniversità di Padova, Padova, Italy
rUniversità di Pisa, Pisa, Italy
sScuola Normale Superiore, Pisa, Italy
It has been almost ten years since the narrow $X(3872)$ state was discovered
in $B^{+}$ decays by the Belle experiment [1]. (The inclusion of charge-conjugate
states is implied in this Letter.) Subsequently, its existence has
been confirmed by several other experiments [2, 3, 4]. Recently, its
production has been studied at the LHC [5, 6]. However, the nature of this
state remains unclear. Among the open possibilities are conventional
charmonium and exotic states such as $D^{*0}\bar{D}^{0}$ molecules [7], tetra-
quarks [8] or their mixtures [9]. Determination of the quantum numbers, total
angular momentum $J$, parity $P$, and charge-conjugation $C$, is important to
shed light on this ambiguity. The $C$-parity of the state is positive since
the $X(3872)\rightarrow\gamma{J\mskip-3.0mu/\mskip-2.0mu\psi\mskip 2.0mu}$
decay has been observed [10, 11].
The CDF experiment analyzed three-dimensional (3D) angular correlations in a
relatively high-background sample of $2292\pm 113$ inclusively-reconstructed
$X(3872)\rightarrow\pi^{+}\pi^{-}{J\mskip-3.0mu/\mskip-2.0mu\psi\mskip
2.0mu}$, ${J\mskip-3.0mu/\mskip-2.0mu\psi\mskip
2.0mu}\rightarrow\mu^{+}\mu^{-}$ decays, dominated by prompt production in
$p\bar{p}$ collisions. The unknown polarization of the $X(3872)$ mesons
limited the sensitivity of the measurement of $J^{PC}$ [12]. A $\chi^{2}$ fit
of $J^{PC}$ hypotheses to the binned 3D distribution of the
${J\mskip-3.0mu/\mskip-2.0mu\psi\mskip 2.0mu}$ and $\pi\pi$ helicity angles
($\theta_{{J\mskip-3.0mu/\mskip-2.0mu\psi\mskip 2.0mu}}$, $\theta_{\pi\pi}$)
[13, 14, 15], and the angle between their decay planes
($\Delta\phi_{{J\mskip-3.0mu/\mskip-2.0mu\psi\mskip
2.0mu},\pi\pi}=\phi_{{J\mskip-3.0mu/\mskip-2.0mu\psi\mskip
2.0mu}}-\phi_{\pi\pi}$), excluded all spin-parity assignments except for
$1^{++}$ or $2^{-+}$. The Belle collaboration observed $173\pm 16$
$B\rightarrow X(3872)K$ ($K=K^{\pm}$ or $K^{0}_{S}$),
$X(3872)\rightarrow\pi^{+}\pi^{-}{J\mskip-3.0mu/\mskip-2.0mu\psi\mskip
2.0mu}$, ${J\mskip-3.0mu/\mskip-2.0mu\psi\mskip
2.0mu}\rightarrow\ell^{+}\ell^{-}$ decays [16]. The reconstruction of the full
decay chain resulted in a small background and polarized $X(3872)$ mesons,
making their helicity angle ($\theta_{X}$) and orientation of their decay
plane ($\phi_{X}$) sensitive to $J^{PC}$ as well. By studying one-dimensional
distributions in three different angles, they concluded that their data were
equally well described by the $1^{++}$ and $2^{-+}$ hypotheses. The BaBar
experiment observed $34\pm 7$
$X(3872)\rightarrow\omega{J\mskip-3.0mu/\mskip-2.0mu\psi\mskip 2.0mu}$,
$\omega\rightarrow\pi^{+}\pi^{-}\pi^{0}$ events [17]. The observed
$\pi^{+}\pi^{-}\pi^{0}$ mass distribution favored the $2^{-+}$ hypothesis,
which had a confidence level (CL) of $68\%$, over the $1^{++}$ hypothesis, but
the latter was not ruled out (CL $=7\%$).
In this Letter, we report the first analysis of the complete five-dimensional
angular correlations of the $B^{+}\rightarrow X(3872)K^{+}$,
$X(3872)\rightarrow\pi^{+}\pi^{-}{J\mskip-3.0mu/\mskip-2.0mu\psi\mskip
2.0mu}$, ${J\mskip-3.0mu/\mskip-2.0mu\psi\mskip
2.0mu}\rightarrow\mu^{+}\mu^{-}$ decay chain using $\sqrt{s}=7$ TeV $pp$
collision data corresponding to $1.0$ fb-1 collected in 2011 by the LHCb
experiment. The LHCb detector [18] is a single-arm forward spectrometer
covering the pseudorapidity range $2<\eta<5$, designed for the study of
particles containing $b$ or $c$ quarks. The detector includes a high precision
tracking system consisting of a silicon-strip vertex detector surrounding the
$pp$ interaction region, a large-area silicon-strip detector located upstream
of a dipole magnet with a bending power of about $4{\rm\,Tm}$, and three
stations of silicon-strip detectors and straw drift tubes placed downstream.
The combined tracking system has momentum resolution $\Delta p/p$ that varies
from 0.4% at 5$\mathrm{\,Ge\kern-1.00006ptV}$ to 0.6% at
100$\mathrm{\,Ge\kern-1.00006ptV}$, and impact parameter (IP) resolution of
20$\,\upmu\rm m$ for tracks with high transverse momentum ($p_{\rm T}$). (We
use mass and momentum units in which $c=1$.) Charged hadrons are identified
using two ring-imaging Cherenkov detectors. Photon, electron and hadron
candidates are identified by a calorimeter system consisting of scintillating-
pad and preshower detectors, an electromagnetic calorimeter and a hadronic
calorimeter. Muons are identified by a system composed of alternating layers
of iron and multiwire proportional chambers. The trigger [19] consists of a
hardware stage, based on information from the calorimeter and muon systems,
followed by a software stage which applies a full event reconstruction.
In the offline analysis ${J\mskip-3.0mu/\mskip-2.0mu\psi\mskip
2.0mu}\rightarrow\mu^{+}\mu^{-}$ candidates are selected with the following
criteria: $p_{\rm T}(\mu)>0.9$ GeV, $p_{\rm
T}({J\mskip-3.0mu/\mskip-2.0mu\psi\mskip 2.0mu})>1.5$ GeV, $\chi^{2}$ per
degree of freedom for the two muons to form a common vertex, $\chi^{2}_{\rm
vtx}(\mu^{+}\mu^{-})/\hbox{\rm ndf}<9$, and a mass consistent with the
${J\mskip-3.0mu/\mskip-2.0mu\psi\mskip 2.0mu}$ meson. The separation of the
${J\mskip-3.0mu/\mskip-2.0mu\psi\mskip 2.0mu}$ decay vertex from the nearest
primary vertex (PV) must be at least three standard deviations. Combinations
of $K^{+}\pi^{-}\pi^{+}$ candidates that are consistent with originating from
a common vertex with $\chi^{2}_{\rm vtx}(K^{+}\pi^{-}\pi^{+})/\hbox{\rm
ndf}<9$, with each charged hadron ($h$) separated from all PVs ($\chi^{2}_{\rm
IP}(h)>9$) and having $p_{\rm T}(h)>0.25$ GeV, are selected. The quantity
$\chi^{2}_{\rm IP}(h)$ is defined as the difference between the $\chi^{2}$ of
the PV reconstructed with and without the considered particle. Kaon and pion
candidates are required to satisfy $\ln[{\cal L}(K)/{\cal L}(\pi)]>0$ and
$<5$, respectively, where ${\cal L}$ is the particle identification likelihood
[20]. If both same-sign hadrons in this combination meet the kaon requirement,
only the particle with higher $p_{\rm T}$ is considered a kaon candidate. We
combine ${J\mskip-3.0mu/\mskip-2.0mu\psi\mskip 2.0mu}$ candidates with
$K^{+}\pi^{-}\pi^{+}$ candidates to form $B^{+}$ candidates, which must
satisfy $\chi^{2}_{\rm vtx}({J\mskip-3.0mu/\mskip-2.0mu\psi\mskip
2.0mu}K^{+}\pi^{-}\pi^{+})/\hbox{\rm ndf}<9$, $p_{\rm T}(B^{+})>2$ GeV and
have decay time greater than $0.25$ ps. The
${J\mskip-3.0mu/\mskip-2.0mu\psi\mskip 2.0mu}K^{+}\pi^{-}\pi^{+}$ mass is
calculated using the known ${J\mskip-3.0mu/\mskip-2.0mu\psi\mskip 2.0mu}$ mass
and the $B$ vertex as constraints.
Four discriminating variables ($x_{i}$) are used in a likelihood ratio to
improve the background suppression: the minimal $\chi^{2}_{\rm IP}(h)$,
$\chi^{2}_{\rm vtx}({J\mskip-3.0mu/\mskip-2.0mu\psi\mskip
2.0mu}K^{+}\pi^{+}\pi^{-})/\hbox{\rm ndf}$, $\chi^{2}_{\rm IP}(B^{+})$, and
the cosine of the largest opening angle between the $J/\psi$ and the charged-
hadron transverse momenta. The latter peaks at positive values for the signal
as the $B^{+}$ meson has a high transverse momentum. Background events in
which particles are combined from two different $B$ decays peak at negative
values, whilst those due to random combinations of particles are more
uniformly distributed. The four 1D signal probability density functions
(PDFs), ${\cal P}_{\rm sig}(x_{i})$, are obtained from a simulated sample of
$B^{+}\rightarrow\psi(2S)K^{+}$,
$\psi(2S)\rightarrow\pi^{+}\pi^{-}{J\mskip-3.0mu/\mskip-2.0mu\psi\mskip
2.0mu}$ decays, which are kinematically similar to the signal decays. The data
sample of $B^{+}\rightarrow\psi(2S)K^{+}$ events is used as a control sample
for ${\cal P}_{\rm sig}(x_{i})$ and for systematic studies in the angular
analysis. The background PDFs, ${\cal P}_{\rm bkg}(x_{i})$, are obtained from
the data in the $B^{+}$ mass sidebands (4.85–5.10 and 5.45–6.50 GeV). We
require $-2\sum_{i=1}^{4}\ln[{\cal P}_{\rm sig}(x_{i})/{\cal P}_{\rm
bkg}(x_{i})]<1.0$, which preserves about 94% of the $X(3872)$ signal events.
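A minimal sketch of such a likelihood-ratio selection is shown below; the histogram-based PDFs, toy variables, and binning are assumptions for illustration only, whereas in the analysis ${\cal P}_{\rm sig}$ and ${\cal P}_{\rm bkg}$ are taken from simulation and from the $B^{+}$ mass sidebands.

```python
# Sketch: likelihood-ratio selection from four 1D PDFs, each represented here
# by a normalized histogram built from toy samples. Variable definitions,
# distributions, and binnings are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def hist_pdf(sample, bins):
    """Return a function x -> estimated pdf(x) from a normalized histogram."""
    counts, edges = np.histogram(sample, bins=bins, density=True)
    def pdf(x):
        idx = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, len(counts) - 1)
        return np.maximum(counts[idx], 1e-12)   # avoid log(0)
    return pdf

# Toy stand-ins for the four discriminating variables x_i
sig_train = [rng.exponential(1.0, 10_000) for _ in range(4)]
bkg_train = [rng.exponential(3.0, 10_000) for _ in range(4)]
bins = np.linspace(0, 20, 41)
P_sig = [hist_pdf(s, bins) for s in sig_train]
P_bkg = [hist_pdf(b, bins) for b in bkg_train]

def dll(x):
    """-2 * sum_i ln[P_sig(x_i)/P_bkg(x_i)]; candidates with dll < 1.0 are kept."""
    return -2.0 * sum(np.log(P_sig[i](x[i]) / P_bkg[i](x[i])) for i in range(4))

candidate = [0.5, 1.2, 0.3, 2.0]
print(dll(candidate), dll(candidate) < 1.0)
```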
Figure 1: Distribution of $\Delta M$ for
$B^{+}\rightarrow{J\mskip-3.0mu/\mskip-2.0mu\psi\mskip
2.0mu}K^{+}\pi^{+}\pi^{-}$ candidates. The fits of the $\psi(2S)$ and
$X(3872)$ signals are displayed. The solid blue, dashed red, and dotted green
lines represent the total fit, signal component, and background component,
respectively.
About $38\,000$ candidates are selected in a $\pm 2\sigma$ mass range around
the $B^{+}$ peak in the $M({J\mskip-3.0mu/\mskip-2.0mu\psi\mskip
2.0mu}\pi^{+}\pi^{-}K^{+})$ distribution, with a signal purity of 89%. The
$\Delta M=M(\pi^{+}\pi^{-}{J\mskip-3.0mu/\mskip-2.0mu\psi\mskip
2.0mu})-M({J\mskip-3.0mu/\mskip-2.0mu\psi\mskip 2.0mu})$ distribution is shown
in Fig. 1. Fits to the $\psi(2S)$ and $X(3872)$ signals are shown in the
insets. A Crystal Ball function [21] with symmetric tails is used for the
signal shapes. The background is assumed to be linear. The $\psi(2S)$ fit is
performed in the 539.2–639.2 MeV range leaving all parameters free to vary. It
yields $5642\pm 76$ signal ($230\pm 21$ background) candidates with a $\Delta
M$ resolution of $\sigma_{\Delta M}=3.99\pm 0.05$ MeV, corresponding to a
signal purity of $99.2\%$ within a $\pm 2.5\sigma_{\Delta M}$ region. When
fitting in the 723–823 MeV range, the signal tail parameters are fixed to the
values obtained in the $\psi(2S)$ fit, which also describe well the simulated
$X(3872)$ signal distribution. The fit yields $313\pm 26$ $B^{+}\rightarrow
X(3872)K^{+}$ ($568\pm 31$ background) candidates with a resolution of $5.5\pm
0.5$ MeV, corresponding to a signal purity of $68\%$ within a $\pm
2.5\sigma_{\Delta M}$ region. The dominant source of background is from
$B^{+}\rightarrow{J\mskip-3.0mu/\mskip-2.0mu\psi\mskip 2.0mu}K_{1}(1270)^{+}$,
$K_{1}(1270)^{+}\rightarrow K^{+}\pi^{+}\pi^{-}$ decays as found by studying
the $K^{+}\pi^{+}\pi^{-}$ mass distribution.
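For illustration, a double-sided (symmetric-tail) Crystal Ball shape can be sketched as follows; the functional form is the standard Gaussian core with power-law tails, and the parameter values below are placeholders, not the fitted ones.

```python
# Sketch: unnormalized Crystal Ball shape with symmetric tails.
# mu and sigma are the peak position and resolution; alpha and n are the tail
# parameters that the analysis fixes from the psi(2S) fit. Values below are
# illustrative, not those obtained in the fits.
import numpy as np

def double_crystal_ball(x, mu, sigma, alpha, n):
    t = np.abs((x - mu) / sigma)
    A = (n / alpha) ** n * np.exp(-0.5 * alpha ** 2)
    B = n / alpha - alpha
    core = np.exp(-0.5 * t ** 2)          # Gaussian core, |t| <= alpha
    tail = A * (B + t) ** (-n)            # power-law tails, |t| > alpha
    return np.where(t <= alpha, core, tail)

x = np.linspace(723.0, 823.0, 501)        # Delta M fit range in MeV
shape = double_crystal_ball(x, mu=775.0, sigma=5.5, alpha=1.5, n=3.0)
```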
The angular correlations in the $B^{+}$ decay carry information about the
$X(3872)$ quantum numbers. To discriminate between the $1^{++}$ and $2^{-+}$
assignments we use the likelihood-ratio test, which in general provides the
most powerful test between two hypotheses [22]. The PDF for each $J^{PC}$
hypothesis, $J_{X}$, is defined in the 5D angular space $\Omega\equiv$
$(\cos\theta_{X},\cos\theta_{\pi\pi},\Delta\phi_{X,\pi\pi},\cos\theta_{{J\mskip-3.0mu/\mskip-2.0mu\psi\mskip
2.0mu}},\Delta\phi_{X,{J\mskip-3.0mu/\mskip-2.0mu\psi\mskip 2.0mu}})$ by the
normalized product of the expected decay matrix element (${\cal M}$) squared
and of the reconstruction efficiency ($\epsilon$), ${\cal
P}(\Omega|J_{X})=|{\cal M}(\Omega|J_{X})|^{2}\,\epsilon(\Omega)/I(J_{X})$,
where $I(J_{X})=\int|{\cal M}(\Omega|J_{X})|^{2}\,\epsilon(\Omega){\it
d}\Omega$. The efficiency is averaged over the $\pi^{+}\pi^{-}$ mass
($M(\pi\pi)$) using a simulation [23, 24, 25, 26, *Agostinelli:2002hh, 28]
that assumes the
$X(3872)\rightarrow\rho(770){J\mskip-3.0mu/\mskip-2.0mu\psi\mskip 2.0mu}$,
$\rho(770)\rightarrow\pi^{+}\pi^{-}$ decay [16, 29, 6]. The observed
$M(\pi\pi)$ distribution is in good agreement with this simulation. The
lineshape of the $\rho(770)$ resonance can change slightly depending on the
spin hypothesis. The effect on $\epsilon(\Omega)$ is found to be very small
and is neglected. We follow the approach adopted in Ref. [12] to predict the
matrix elements. The angular correlations are obtained using the helicity
formalism,
$\displaystyle|\,{\cal M}(\Omega|J_{X})\,|^{2}=\sum_{\Delta\lambda_{\mu}=-1,+1}\Big|\sum_{\lambda_{{J\mskip-3.0mu/\mskip-2.0mu\psi\mskip 2.0mu}},\lambda_{\pi\pi}=-1,0,+1}A_{\lambda_{{J\mskip-3.0mu/\mskip-2.0mu\psi\mskip 2.0mu}},\lambda_{\pi\pi}}\,D^{J_{X}}_{0\,,\,\lambda_{{J\mskip-3.0mu/\mskip-2.0mu\psi\mskip 2.0mu}}-\lambda_{\pi\pi}}(\phi_{X},\theta_{X},-\phi_{X})\,D^{1}_{\lambda_{\pi\pi}\,,\,0}(\phi_{\pi\pi},\theta_{\pi\pi},-\phi_{\pi\pi})\,D^{1}_{\lambda_{{J\mskip-3.0mu/\mskip-2.0mu\psi\mskip 2.0mu}}\,,\,\Delta\lambda_{\mu}}(\phi_{{J\mskip-3.0mu/\mskip-2.0mu\psi\mskip 2.0mu}},\theta_{{J\mskip-3.0mu/\mskip-2.0mu\psi\mskip 2.0mu}},-\phi_{{J\mskip-3.0mu/\mskip-2.0mu\psi\mskip 2.0mu}})\Big|^{2},$
where $\lambda$ are particle helicities and
$D^{J}_{\lambda_{1}\,,\,\lambda_{2}}$ are Wigner functions [13, 14, 15]. The
helicity couplings, $A_{\lambda_{{J\mskip-3.0mu/\mskip-2.0mu\psi\mskip
2.0mu}},\lambda_{\pi\pi}}$, are expressed in terms of the $LS$ couplings [30,
31], $B_{LS}$, where $L$ is the orbital angular momentum between the $\pi\pi$
system and the ${J\mskip-3.0mu/\mskip-2.0mu\psi\mskip 2.0mu}$ meson, and $S$
is the sum of their spins. Since the energy release in the
$X(3872)\rightarrow\rho(770){J\mskip-3.0mu/\mskip-2.0mu\psi\mskip 2.0mu}$
decay is small, the lowest value of $L$ is expected to dominate, especially
because the next-to-minimal value is not allowed by parity conservation. The
lowest value for the $1^{++}$ hypothesis is $L=0$, which implies $S=1$. With
only one $LS$ amplitude present, the angular distribution is completely
determined without free parameters. For the $2^{-+}$ hypothesis the lowest
value is $L=1$, which implies $S=1$ or $2$. As both $LS$ combinations are
possible, the $2^{-+}$ hypothesis implies two parameters, which are chosen to
be the real and imaginary parts of $\alpha\equiv B_{11}/(B_{11}+B_{12})$.
Since they are related to strong dynamics, they are difficult to predict
theoretically and are treated as nuisance parameters.
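The structure of the matrix-element sum can be illustrated for $J_{X}=1$, where all Wigner $D$ functions have $J=1$; in the sketch below the helicity couplings are external inputs with arbitrary illustrative values rather than being derived from the $LS$ couplings.

```python
# Sketch: numerical evaluation of the helicity-formalism sum for J_X = 1.
# The helicity couplings A[(lam_psi, lam_pipi)] are treated as inputs with
# illustrative values; in the analysis they follow from the B_LS couplings.
import numpy as np

def wigner_d1(m, mp, beta):
    """Wigner small-d matrix elements d^1_{m,mp}(beta)."""
    c, s = np.cos(beta), np.sin(beta)
    table = {(1, 1): (1 + c) / 2, (1, 0): -s / np.sqrt(2), (1, -1): (1 - c) / 2,
             (0, 1): s / np.sqrt(2), (0, 0): c, (0, -1): -s / np.sqrt(2),
             (-1, 1): (1 - c) / 2, (-1, 0): s / np.sqrt(2), (-1, -1): (1 + c) / 2}
    return table.get((m, mp), 0.0)

def wigner_D1(m, mp, alpha, beta, gamma):
    return np.exp(-1j * m * alpha) * wigner_d1(m, mp, beta) * np.exp(-1j * mp * gamma)

# Illustrative couplings; only |lam_psi - lam_pipi| <= 1 contributes for J_X = 1.
A = {(lp, lpp): 1.0 for lp in (-1, 0, 1) for lpp in (-1, 0, 1) if abs(lp - lpp) <= 1}

def matrix_element_sq(phi_X, th_X, phi_pipi, th_pipi, phi_psi, th_psi):
    total = 0.0
    for dlam_mu in (-1, +1):                       # muon helicity difference
        amp = 0.0 + 0.0j
        for (lp, lpp), a in A.items():
            amp += (a
                    * wigner_D1(0, lp - lpp, phi_X, th_X, -phi_X)
                    * wigner_D1(lpp, 0, phi_pipi, th_pipi, -phi_pipi)
                    * wigner_D1(lp, dlam_mu, phi_psi, th_psi, -phi_psi))
        total += abs(amp) ** 2
    return total

print(matrix_element_sq(0.3, 1.0, 0.7, 0.5, 1.2, 0.8))
```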
We define a test statistic $t=-2\ln[{\cal L}(2^{-+})/{\cal L}(1^{++})]$, where
the ${\cal L}(2^{-+})$ likelihood is maximized with respect to $\alpha$. The
efficiency $\epsilon(\Omega)$ is not determined on an event-by-event basis,
since it cancels in the likelihood ratio except for the normalization
integrals. A large sample of simulated events, with uniform angular
distributions, passed through a full simulation of the detection and the data
selection process, is used to carry out the integration,
$I(J_{X})\propto\sum_{i=1}^{N_{\rm MC}}|{\cal M}(\Omega_{i}|J_{X})|^{2}$,
where $N_{\rm MC}$ is the number of reconstructed simulated events. The
background in the data is subtracted in the log-likelihoods using the sPlot
technique [32] by assigning to each candidate in the fitted $\Delta M$ range
an event weight (sWeight), $w_{i}$, based on its $\Delta M$ value, $-2\ln{\cal
L}(J_{X})=-s_{w}\,2\sum_{i=1}^{N_{\rm data}}w_{i}\,\ln{\cal
P}(\Omega_{i}|J_{X})$. Here, $s_{w}$ is a constant scaling factor,
$s_{w}=\sum_{i=1}^{N_{\rm data}}w_{i}/\sum_{i=1}^{N_{\rm data}}{w_{i}}^{2}$,
which accounts for statistical fluctuations in the background subtraction.
Positive (negative) values of the test statistic for the data, $t_{\rm data}$,
favor the $1^{++}$ ($2^{-+}$) hypothesis. The analysis procedure has been
extensively tested on simulated samples for the $1^{++}$ and $2^{-+}$
hypotheses with different values of $\alpha$, generated using the EvtGen
package [25].
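The bookkeeping of the sWeighted likelihoods and of the test statistic can be sketched as follows; the per-candidate probabilities and sWeights are toy placeholders for the quantities defined above.

```python
# Sketch: sWeighted log-likelihoods and the test statistic
# t = -2 ln[L(2-+)/L(1++)]. P_1pp and P_2mp stand for the normalized
# |M|^2 * efficiency of each hypothesis evaluated per candidate; w are the
# sWeights from the Delta M fit. All arrays here are toy inputs.
import numpy as np

rng = np.random.default_rng(1)
N = 1000
w = rng.normal(0.7, 0.3, N)                 # toy sWeights (can be negative)
P_1pp = rng.uniform(0.5, 1.5, N)            # placeholder P(Omega_i | 1++)
P_2mp = rng.uniform(0.5, 1.5, N)            # placeholder P(Omega_i | 2-+, alpha-hat)

s_w = w.sum() / (w ** 2).sum()              # scale factor for background-subtraction fluctuations

def nll(P):
    """-2 ln L(J_X) with background subtraction via sWeights."""
    return -2.0 * s_w * np.sum(w * np.log(P))

t = nll(P_2mp) - nll(P_1pp)                 # positive values favor 1++, negative favor 2-+
print(s_w, t)
```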
The value of $\alpha$ that minimizes $-2\ln{\cal L}(J_{X}=2^{-+},\alpha)$ in
the data is $\hat{\alpha}=(0.671\pm 0.046,0.280\pm 0.046)$. This is compatible
with the value reported by Belle, $(0.64,0.27)$ [16]. The value of the test
statistic observed in the data is $t_{\rm data}=+99$, thus favoring the
$1^{++}$ hypothesis. Furthermore, $\hat{\alpha}$ is consistent with the value
of $\alpha$ obtained from fitting a large background-free sample of simulated
$1^{++}$ events, $(0.650\pm 0.011,0.294\pm 0.012)$. The value of $t_{\rm
data}$ is compared with the distribution of $t$ in the simulated experiments
to determine a $p$-value for the $2^{-+}$ hypothesis via the fraction of
simulated experiments yielding a value of $t>t_{\rm data}$. We simulate 2
million experiments with the value of $\alpha$ and the numbers of signal and
background events as observed in the data. The background is assumed to be
saturated by the $B^{+}\rightarrow{J\mskip-3.0mu/\mskip-2.0mu\psi\mskip
2.0mu}K_{1}(1270)^{+}$ decay, which provides a good description of its angular
correlations. None of the values of $t$ from the simulated experiments even
approach $t_{\rm data}$, indicating a $p$-value smaller than $1/(2\times
10^{6})$, which corresponds to a rejection of the $2^{-+}$ hypothesis with
greater than $5\sigma$ significance. As shown in Fig. 2, the distribution of
$t$ is reasonably well approximated by a Gaussian function. Based on the mean
and r.m.s. spread of the $t$ distribution for the $2^{-+}$ experiments, this
hypothesis is rejected with a significance of $8.4\sigma$. The deviations of
the $t$ distribution from the Gaussian function suggest this is a plausible
estimate. Using phase-space
$B^{+}\rightarrow{J\mskip-3.0mu/\mskip-2.0mu\psi\mskip
2.0mu}K^{+}\pi^{+}\pi^{-}$ decays as a model for the background events, we
obtain a consistent result. The value of $t_{\rm data}$ falls into the region
where the probability density for the $1^{++}$ simulated experiments is high.
Integrating the $1^{++}$ distribution from $-\infty$ to $t_{\rm data}$ gives
${\rm CL}~{}(1^{++})=34\%$. We also compare the binned distribution of single-
event log-likelihood-ratios with sWeights applied, $\ln[{\cal
P}(\Omega_{i}|2^{-+},\hat{\alpha})/{\cal P}(\Omega_{i}|1^{++})]$, between the
data and the simulations. The shape of this distribution in data is consistent
with the $1^{++}$ simulations and inconsistent with the $2^{-+}$ simulations,
as illustrated in Fig. 3.
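For illustration, the conversion of the simulated-experiment $t$ distribution into a $p$-value and a Gaussian significance can be sketched as follows; the toy mean and spread below are illustrative, not the values obtained in the analysis.

```python
# Sketch: p-value and Gaussian significance of the 2-+ rejection from the
# distribution of the test statistic t in simulated 2-+ experiments.
# t_toys is a toy stand-in for the simulated-experiment values.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
t_toys = rng.normal(loc=-80.0, scale=21.0, size=2_000_000)   # illustrative toy distribution
t_data = 99.0                                                # value observed in data

p_count = np.mean(t_toys > t_data)                 # counting p-value from the toys
z_gauss = (t_data - t_toys.mean()) / t_toys.std()  # significance in the Gaussian approximation
print(p_count, z_gauss, norm.sf(z_gauss))          # p-value, z, Gaussian tail probability
```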
Figure 2: Distribution of the test statistic $t$ for the simulated experiments
with $J^{PC}=2^{-+}$ and $\alpha=\hat{\alpha}$ (black circles on the left) and
with $J^{PC}=1^{++}$ (red triangles on the right). A Gaussian fit to the
$2^{-+}$ distribution is overlaid (blue solid line). The value of the test
statistic for the data, $t_{\rm data}$, is shown by the solid vertical line.
Figure 3: Distribution of $-\ln[{{\cal
P}(\Omega_{i}|2^{-+},\hat{\alpha})}/{{\cal P}(\Omega_{i}|1^{++})}]$ for the
data (points with error bars) compared to the distributions for the simulated
experiments with $J^{PC}=1^{++}$ (red solid histogram) and with
$J^{PC}=2^{-+}$, $\alpha=\hat{\alpha}$ (blue dashed histogram) after the
background subtraction using sWeights. The simulated distributions are
normalized to the number of signal candidates observed in the data. Bin
contents and uncertainties are divided by bin width because of unequal bin
sizes.
We vary the data selection criteria to probe for possible biases from the
background subtraction and the efficiency corrections. The nominal selection
does not bias the $M(\pi\pi)$ distribution. By requiring
$Q=M({J\mskip-3.0mu/\mskip-2.0mu\psi\mskip
2.0mu}\pi\pi)-M({J\mskip-3.0mu/\mskip-2.0mu\psi\mskip 2.0mu})-M(\pi\pi)<0.1$
GeV, we reduce the background level by a factor of four, while losing only
$21\%$ of the signal. The significance of the $2^{-+}$ rejection changes very
little, in agreement with the simulations. By tightening the requirements on
the $p_{\rm T}$ of $\pi$, $K$ and $\mu$ candidates, we decrease the signal
efficiency by about 50% with similar reduction in the background level. In all
cases, the significance of the $2^{-+}$ rejection is reduced by a factor
consistent with the simulations.
In the analysis we use simulations to calculate the $I(J_{X})$ integrals. In
an alternative approach to the efficiency estimates, we use the
$B^{+}\rightarrow\psi(2S)K^{+}$ events observed in the data weighted by the
inverse of $1^{--}$ matrix element squared. We obtain a value of $t_{\rm
data}$ that corresponds to $8.2\sigma$ rejection of the $2^{-+}$ hypothesis.
As an additional goodness-of-fit test for the $1^{++}$ hypothesis, we project
the data onto five 1D and ten 2D binned distributions in all five angles and
their combinations. They are all consistent with the distributions expected
for the $1^{++}$ hypothesis. Some of them are inconsistent with the
distributions expected for the ($2^{-+}$, $\hat{\alpha}$) hypothesis. The most
significant inconsistency is observed for the 2D projections onto
$\cos\theta_{X}$ vs. $\cos\theta_{\pi\pi}$. The separation between the
$1^{++}$ and $2^{-+}$ hypotheses increases when using correlations between
these two angles, as illustrated in Fig. 4.
In summary, we unambiguously establish that the values of total angular
momentum, parity and charge-conjugation eigenvalues of the $X(3872)$ state are
$1^{++}$. This is achieved through the first analysis of the full five-
dimensional angular correlations between final state particles in
$B^{+}\rightarrow X(3872)K^{+}$,
$X(3872)\rightarrow\pi^{+}\pi^{-}{J\mskip-3.0mu/\mskip-2.0mu\psi\mskip
2.0mu}$, ${J\mskip-3.0mu/\mskip-2.0mu\psi\mskip
2.0mu}\rightarrow\mu^{+}\mu^{-}$ decays using the likelihood-ratio test. The
$2^{-+}$ hypothesis is excluded with a significance of more than eight
Gaussian standard deviations. This result rules out the explanation of the
$X(3872)$ meson as a conventional $\eta_{c2}(1^{1}D_{2})$ state. Among the
remaining possibilities are the $\chi_{c1}(2^{3}P_{1})$ charmonium, disfavored
by the value of the $X(3872)$ mass [33], and unconventional explanations such
as a $D^{*0}\bar{D}^{0}$ molecule [7], tetraquark state [8] or charmonium-
molecule mixture [9].
Figure 4: Background-subtracted distribution of $\cos\theta_{X}$ for (top) all
candidates and for (bottom) candidates with $|\cos\theta_{\pi\pi}|>0.6$ for
the data (points with error bars) compared to the expected distributions for
the $J^{PC}=1^{++}$ (red solid histogram) and $J^{PC}=2^{-+}$ and
$\alpha=\hat{\alpha}$ hypotheses (blue dashed histogram). The simulated
distributions are normalized to the number of signal candidates observed in
the data across the full phase space.
## Acknowledgements
We express our gratitude to our colleagues in the CERN accelerator departments
for the excellent performance of the LHC. We thank the technical and
administrative staff at the LHCb institutes. We acknowledge support from CERN
and from the national agencies: CAPES, CNPq, FAPERJ and FINEP (Brazil); NSFC
(China); CNRS/IN2P3 and Region Auvergne (France); BMBF, DFG, HGF and MPG
(Germany); SFI (Ireland); INFN (Italy); FOM and NWO (The Netherlands); SCSR
(Poland); ANCS/IFA (Romania); MinES, Rosatom, RFBR and NRC “Kurchatov
Institute” (Russia); MinECo, XuntaGal and GENCAT (Spain); SNSF and SER
(Switzerland); NAS Ukraine (Ukraine); STFC (United Kingdom); NSF (USA). We
also acknowledge the support received from the ERC under FP7. The Tier1
computing centres are supported by IN2P3 (France), KIT and BMBF (Germany),
INFN (Italy), NWO and SURF (The Netherlands), PIC (Spain), GridPP (United
Kingdom). We are thankful for the computing resources put at our disposal by
Yandex LLC (Russia), as well as to the communities behind the multiple open
source software packages that we depend on.
## References
* [1] Belle collaboration, S.-K. Choi et al., Observation of a narrow charmonium-like state in exclusive $B^{\pm}\rightarrow K^{\pm}\pi^{+}\pi^{-}J/\psi$ decays, Phys. Rev. Lett. 91 (2003) 262001, arXiv:hep-ex/0309032
* [2] CDF collaboration, D. Acosta et al., Observation of the narrow state $X(3872)\rightarrow J/\psi\pi^{+}\pi^{-}$ in $p\bar{p}$ collisions at $\sqrt{s}=1.96$ TeV, Phys. Rev. Lett. 93 (2004) 072001, arXiv:hep-ex/0312021
|
arxiv-papers
| 2013-02-25T22:43:09 |
2024-09-04T02:49:42.114458
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "LHCb collaboration: R. Aaij, C. Abellan Beteta, B. Adeva, M. Adinolfi,\n C. Adrover, A. Affolder, Z. Ajaltouni, J. Albrecht, F. Alessio, M. Alexander,\n S. Ali, G. Alkhazov, P. Alvarez Cartelle, A.A. Alves Jr, S. Amato, S. Amerio,\n Y. Amhis, L. Anderlini, J. Anderson, R. Andreassen, R.B. Appleby, O. Aquines\n Gutierrez, F. Archilli, A. Artamonov, M. Artuso, E. Aslanides, G. Auriemma,\n S. Bachmann, J.J. Back, C. Baesso, V. Balagura, W. Baldini, R.J. Barlow, C.\n Barschel, S. Barsuk, W. Barter, Th. Bauer, A. Bay, J. Beddow, F. Bedeschi, I.\n Bediaga, S. Belogurov, K. Belous, I. Belyaev, E. Ben-Haim, M. Benayoun, G.\n Bencivenni, S. Benson, J. Benton, A. Berezhnoy, R. Bernet, M.-O. Bettler, M.\n van Beuzekom, A. Bien, S. Bifani, T. Bird, A. Bizzeti, P.M. Bj{\\o}rnstad, T.\n Blake, F. Blanc, J. Blouw, S. Blusk, V. Bocci, A. Bondar, N. Bondar, W.\n Bonivento, S. Borghi, A. Borgia, T.J.V. Bowcock, E. Bowen, C. Bozzi, T.\n Brambach, J. van den Brand, J. Bressieux, D. Brett, M. Britsch, T. Britton,\n N.H. Brook, H. Brown, I. Burducea, A. Bursche, G. Busetto, J. Buytaert, S.\n Cadeddu, O. Callot, M. Calvi, M. Calvo Gomez, A. Camboni, P. Campana, A.\n Carbone, G. Carboni, R. Cardinale, A. Cardini, H. Carranza-Mejia, L. Carson,\n K. Carvalho Akiba, G. Casse, M. Cattaneo, Ch. Cauet, M. Charles, Ph.\n Charpentier, P. Chen, N. Chiapolini, M. Chrzaszcz, K. Ciba, X. Cid Vidal, G.\n Ciezarek, P.E.L. Clarke, M. Clemencic, H.V. Cliff, J. Closier, C. Coca, V.\n Coco, J. Cogan, E. Cogneras, P. Collins, A. Comerma-Montells, A. Contu, A.\n Cook, M. Coombes, S. Coquereau, G. Corti, B. Couturier, G.A. Cowan, D. Craik,\n S. Cunliffe, R. Currie, C. D'Ambrosio, P. David, P.N.Y. David, I. De Bonis,\n K. De Bruyn, S. De Capua, M. De Cian, J.M. De Miranda, M. De Oyanguren\n Campos, L. De Paula, W. De Silva, P. De Simone, D. Decamp, M. Deckenhoff, L.\n Del Buono, D. Derkach, O. Deschamps, F. Dettori, A. Di Canto, H. Dijkstra, M.\n Dogaru, S. Donleavy, F. Dordei, A. Dosil Su\\'arez, D. Dossett, A. Dovbnya, F.\n Dupertuis, R. Dzhelyadin, A. Dziurda, A. Dzyuba, S. Easo, U. Egede, V.\n Egorychev, S. Eidelman, D. van Eijk, S. Eisenhardt, U. Eitschberger, R.\n Ekelhof, L. Eklund, I. El Rifai, Ch. Elsasser, D. Elsby, A. Falabella, C.\n F\\\"arber, G. Fardell, C. Farinelli, S. Farry, V. Fave, D. Ferguson, V.\n Fernandez Albor, F. Ferreira Rodrigues, M. Ferro-Luzzi, S. Filippov, C.\n Fitzpatrick, M. Fontana, F. Fontanelli, R. Forty, O. Francisco, M. Frank, C.\n Frei, M. Frosini, S. Furcas, E. Furfaro, A. Gallas Torreira, D. Galli, M.\n Gandelman, P. Gandini, Y. Gao, J. Garofoli, P. Garosi, J. Garra Tico, L.\n Garrido, C. Gaspar, R. Gauld, E. Gersabeck, M. Gersabeck, T. Gershon, Ph.\n Ghez, V. Gibson, V.V. Gligorov, C. G\\\"obel, D. Golubkov, A. Golutvin, A.\n Gomes, H. Gordon, M. Grabalosa G\\'andara, R. Graciani Diaz, L.A. Granado\n Cardoso, E. Graug\\'es, G. Graziani, A. Grecu, E. Greening, S. Gregson, O.\n Gr\\\"unberg, B. Gui, E. Gushchin, Yu. Guz, T. Gys, C. Hadjivasiliou, G.\n Haefeli, C. Haen, S.C. Haines, S. Hall, T. Hampson, S. Hansmann-Menzemer, N.\n Harnew, S.T. Harnew, J. Harrison, T. Hartmann, J. He, V. Heijne, K. Hennessy,\n P. Henrard, J.A. Hernando Morata, E. van Herwijnen, E. Hicks, D. Hill, M.\n Hoballah, C. Hombach, P. Hopchev, W. Hulsbergen, P. Hunt, T. Huse, N.\n Hussain, D. Hutchcroft, D. Hynds, V. Iakovenko, M. Idzik, P. Ilten, R.\n Jacobsson, A. Jaeger, E. Jans, P. Jaton, F. Jing, M. John, D. Johnson, C.R.\n Jones, B. Jost, M. Kaballo, S. Kandybei, M. Karacson, T.M. Karbach, I.R.\n Kenyon, U. 
Kerzel, T. Ketel, A. Keune, B. Khanji, O. Kochebina, I. Komarov,\n R.F. Koopman, P. Koppenburg, M. Korolev, A. Kozlinskiy, L. Kravchuk, K.\n Kreplin, M. Kreps, G. Krocker, P. Krokovny, F. Kruse, M. Kucharczyk, V.\n Kudryavtsev, T. Kvaratskheliya, V.N. La Thi, D. Lacarrere, G. Lafferty, A.\n Lai, D. Lambert, R.W. Lambert, E. Lanciotti, G. Lanfranchi, C. Langenbruch,\n T. Latham, C. Lazzeroni, R. Le Gac, J. van Leerdam, J.-P. Lees, R. Lef\\`evre,\n A. Leflat, J. Lefran\\c{c}ois, S. Leo, O. Leroy, B. Leverington, Y. Li, L. Li\n Gioi, M. Liles, R. Lindner, C. Linn, B. Liu, G. Liu, J. von Loeben, S. Lohn,\n J.H. Lopes, E. Lopez Asamar, N. Lopez-March, H. Lu, D. Lucchesi, J. Luisier,\n H. Luo, F. Machefert, I.V. Machikhiliyan, F. Maciuc, O. Maev, S. Malde, G.\n Manca, G. Mancinelli, U. Marconi, R. M\\\"arki, J. Marks, G. Martellotti, A.\n Martens, L. Martin, A. Mart\\'in S\\'anchez, M. Martinelli, D. Martinez Santos,\n D. Martins Tostes, A. Massafferri, R. Matev, Z. Mathe, C. Matteuzzi, E.\n Maurice, A. Mazurov, J. McCarthy, R. McNulty, A. Mcnab, B. Meadows, F. Meier,\n M. Meissner, M. Merk, D.A. Milanes, M.-N. Minard, J. Molina Rodriguez, S.\n Monteil, D. Moran, P. Morawski, M.J. Morello, R. Mountain, I. Mous, F.\n Muheim, K. M\\\"uller, R. Muresan, B. Muryn, B. Muster, P. Naik, T. Nakada, R.\n Nandakumar, I. Nasteva, M. Needham, N. Neufeld, A.D. Nguyen, T.D. Nguyen, C.\n Nguyen-Mau, M. Nicol, V. Niess, R. Niet, N. Nikitin, T. Nikodem, A.\n Nomerotski, A. Novoselov, A. Oblakowska-Mucha, V. Obraztsov, S. Oggero, S.\n Ogilvy, O. Okhrimenko, R. Oldeman, M. Orlandea, J.M. Otalora Goicochea, P.\n Owen, B.K. Pal, A. Palano, M. Palutan, J. Panman, A. Papanestis, M.\n Pappagallo, C. Parkes, C.J. Parkinson, G. Passaleva, G.D. Patel, M. Patel,\n G.N. Patrick, C. Patrignani, C. Pavel-Nicorescu, A. Pazos Alvarez, A.\n Pellegrino, G. Penso, M. Pepe Altarelli, S. Perazzini, D.L. Perego, E. Perez\n Trigo, A. P\\'erez-Calero Yzquierdo, P. Perret, M. Perrin-Terrin, G. Pessina,\n K. Petridis, A. Petrolini, A. Phan, E. Picatoste Olloqui, B. Pietrzyk, T.\n Pila\\v{r}, D. Pinci, S. Playfer, M. Plo Casasus, F. Polci, G. Polok, A.\n Poluektov, E. Polycarpo, D. Popov, B. Popovici, C. Potterat, A. Powell, J.\n Prisciandaro, V. Pugatch, A. Puig Navarro, G. Punzi, W. Qian, J.H.\n Rademacker, B. Rakotomiaramanana, M.S. Rangel, I. Raniuk, N. Rauschmayr, G.\n Raven, S. Redford, M.M. Reid, A.C. dos Reis, S. Ricciardi, A. Richards, K.\n Rinnert, V. Rives Molina, D.A. Roa Romero, P. Robbe, E. Rodrigues, P.\n Rodriguez Perez, S. Roiser, V. Romanovsky, A. Romero Vidal, J. Rouvinet, T.\n Ruf, F. Ruffini, H. Ruiz, P. Ruiz Valls, G. Sabatino, J.J. Saborido Silva, N.\n Sagidova, P. Sail, B. Saitta, C. Salzmann, B. Sanmartin Sedes, M. Sannino, R.\n Santacesaria, C. Santamarina Rios, E. Santovetti, M. Sapunov, A. Sarti, C.\n Satriano, A. Satta, M. Savrie, D. Savrina, P. Schaack, M. Schiller, H.\n Schindler, M. Schlupp, M. Schmelling, B. Schmidt, O. Schneider, A. Schopper,\n M.-H. Schune, R. Schwemmer, B. Sciascia, A. Sciubba, M. Seco, A. Semennikov,\n K. Senderowska, I. Sepp, N. Serra, J. Serrano, P. Seyfert, M. Shapkin, I.\n Shapoval, P. Shatalov, Y. Shcheglov, T. Shears, L. Shekhtman, O. Shevchenko,\n V. Shevchenko, A. Shires, R. Silva Coutinho, T. Skwarnicki, N.A. Smith, E.\n Smith, M. Smith, M.D. Sokoloff, F.J.P. Soler, F. Soomro, D. Souza, B. Souza\n De Paula, B. Spaan, A. Sparkes, P. Spradlin, F. Stagni, S. Stahl, O.\n Steinkamp, S. Stoica, S. Stone, B. Storaci, M. Straticiuc, U. Straumann, V.K.\n Subbiah, S. Swientek, V. 
Syropoulos, M. Szczekowski, P. Szczypka, T. Szumlak,\n S. T'Jampens, M. Teklishyn, E. Teodorescu, F. Teubert, C. Thomas, E. Thomas,\n J. van Tilburg, V. Tisserand, M. Tobin, S. Tolk, D. Tonelli, S.\n Topp-Joergensen, N. Torr, E. Tournefier, S. Tourneur, M.T. Tran, M. Tresch,\n A. Tsaregorodtsev, P. Tsopelas, N. Tuning, M. Ubeda Garcia, A. Ukleja, D.\n Urner, U. Uwer, V. Vagnoni, G. Valenti, R. Vazquez Gomez, P. Vazquez\n Regueiro, S. Vecchi, J.J. Velthuis, M. Veltri, G. Veneziano, M. Vesterinen,\n B. Viaud, D. Vieira, X. Vilasis-Cardona, A. Vollhardt, D. Volyanskyy, D.\n Voong, A. Vorobyev, V. Vorobyev, C. Vo\\ss, H. Voss, R. Waldi, R. Wallace, S.\n Wandernoth, J. Wang, D.R. Ward, N.K. Watson, A.D. Webber, D. Websdale, M.\n Whitehead, J. Wicht, J. Wiechczynski, D. Wiedner, L. Wiggers, G. Wilkinson,\n M.P. Williams, M. Williams, F.F. Wilson, J. Wishahi, M. Witek, S.A. Wotton,\n S. Wright, S. Wu, K. Wyllie, Y. Xie, F. Xing, Z. Xing, Z. Yang, R. Young, X.\n Yuan, O. Yushchenko, M. Zangoli, M. Zavertyaev, F. Zhang, L. Zhang, W.C.\n Zhang, Y. Zhang, A. Zhelezov, A. Zhokhov, L. Zhong, A. Zvyagin",
"submitter": "Tomasz Skwarnicki",
"url": "https://arxiv.org/abs/1302.6269"
}
|
1302.6354
|
EUROPEAN ORGANIZATION FOR NUCLEAR RESEARCH (CERN)
CERN-PH-EP-2013-024 LHCb-PAPER-2012-053 25 February 2013
Observations of
$\mathrm{B}^{0}_{\mathrm{s}}\rightarrow\uppsi{(2\mathrm{S})}\upeta$ and
$\mathrm{B}^{0}_{\mathrm{(s)}}\rightarrow\uppsi{(2\mathrm{S})}\uppi^{+}\uppi^{-}$
decays
The LHCb collaboration†
†Authors are listed on the following pages.
First observations of the
$\mathrm{B}^{0}_{\mathrm{s}}\rightarrow\uppsi{(2\mathrm{S})}\upeta$,
$\mathrm{B}^{0}\rightarrow\uppsi{(2\mathrm{S})}\uppi^{+}\uppi^{-}$ and
$\mathrm{B}^{0}_{\mathrm{s}}\rightarrow\uppsi{(2\mathrm{S})}\uppi^{+}\uppi^{-}$
decays are made using a dataset corresponding to an integrated luminosity of
1.0$\mbox{\,fb}^{-1}$ collected by the LHCb experiment in proton-proton
collisions at a centre-of-mass energy of
$\sqrt{s}=7\mathrm{\,Te\kern-1.00006ptV}$. The ratios of the branching
fractions of each of the $\uppsi{(2\mathrm{S})}$ modes with respect to the
corresponding ${\mathrm{J}\mskip-3.0mu/\mskip-2.0mu\uppsi\mskip 2.0mu}$ decays
are
$\dfrac{{\cal B}(\mathrm{B}^{0}_{\mathrm{s}}\rightarrow\uppsi{(2\mathrm{S})}\upeta)}{{\cal B}(\mathrm{B}^{0}_{\mathrm{s}}\rightarrow\mathrm{J}/\uppsi\,\upeta)}=0.83\pm 0.14\,\mathrm{(stat)}\pm 0.12\,\mathrm{(syst)}\pm 0.02\,({\cal B}),$
$\dfrac{{\cal B}(\mathrm{B}^{0}\rightarrow\uppsi{(2\mathrm{S})}\uppi^{+}\uppi^{-})}{{\cal B}(\mathrm{B}^{0}\rightarrow\mathrm{J}/\uppsi\,\uppi^{+}\uppi^{-})}=0.56\pm 0.07\,\mathrm{(stat)}\pm 0.05\,\mathrm{(syst)}\pm 0.01\,({\cal B}),$
$\dfrac{{\cal B}(\mathrm{B}^{0}_{\mathrm{s}}\rightarrow\uppsi{(2\mathrm{S})}\uppi^{+}\uppi^{-})}{{\cal B}(\mathrm{B}^{0}_{\mathrm{s}}\rightarrow\mathrm{J}/\uppsi\,\uppi^{+}\uppi^{-})}=0.34\pm 0.04\,\mathrm{(stat)}\pm 0.03\,\mathrm{(syst)}\pm 0.01\,({\cal B}),$
where the third uncertainty corresponds to the uncertainties of the dilepton
branching fractions of the ${\mathrm{J}\mskip-3.0mu/\mskip-2.0mu\uppsi\mskip
2.0mu}$ and $\uppsi{(2\mathrm{S})}$ meson decays.
Submitted to Nucl. Phys. B
© CERN on behalf of the LHCb collaboration, license CC-BY-3.0.
LHCb collaboration
R. Aaij40, C. Abellan Beteta35,n, B. Adeva36, M. Adinolfi45, C. Adrover6, A.
Affolder51, Z. Ajaltouni5, J. Albrecht9, F. Alessio37, M. Alexander50, S.
Ali40, G. Alkhazov29, P. Alvarez Cartelle36, A.A. Alves Jr24,37, S. Amato2, S.
Amerio21, Y. Amhis7, L. Anderlini17,f, J. Anderson39, R. Andreassen59, R.B.
Appleby53, O. Aquines Gutierrez10, F. Archilli18, A. Artamonov 34, M.
Artuso56, E. Aslanides6, G. Auriemma24,m, S. Bachmann11, J.J. Back47, C.
Baesso57, V. Balagura30, W. Baldini16, R.J. Barlow53, C. Barschel37, S.
Barsuk7, W. Barter46, Th. Bauer40, A. Bay38, J. Beddow50, F. Bedeschi22, I.
Bediaga1, S. Belogurov30, K. Belous34, I. Belyaev30, E. Ben-Haim8, M.
Benayoun8, G. Bencivenni18, S. Benson49, J. Benton45, A. Berezhnoy31, R.
Bernet39, M.-O. Bettler46, M. van Beuzekom40, A. Bien11, S. Bifani12, T.
Bird53, A. Bizzeti17,h, P.M. Bjørnstad53, T. Blake37, F. Blanc38, J. Blouw11,
S. Blusk56, V. Bocci24, A. Bondar33, N. Bondar29, W. Bonivento15, S. Borghi53,
A. Borgia56, T.J.V. Bowcock51, E. Bowen39, C. Bozzi16, T. Brambach9, J. van
den Brand41, J. Bressieux38, D. Brett53, M. Britsch10, T. Britton56, N.H.
Brook45, H. Brown51, I. Burducea28, A. Bursche39, G. Busetto21,q, J.
Buytaert37, S. Cadeddu15, O. Callot7, M. Calvi20,j, M. Calvo Gomez35,n, A.
Camboni35, P. Campana18,37, A. Carbone14,c, G. Carboni23,k, R. Cardinale19,i,
A. Cardini15, H. Carranza-Mejia49, L. Carson52, K. Carvalho Akiba2, G.
Casse51, M. Cattaneo37, Ch. Cauet9, M. Charles54, Ph. Charpentier37, P.
Chen3,38, N. Chiapolini39, M. Chrzaszcz 25, K. Ciba37, X. Cid Vidal36, G.
Ciezarek52, P.E.L. Clarke49, M. Clemencic37, H.V. Cliff46, J. Closier37, C.
Coca28, V. Coco40, J. Cogan6, E. Cogneras5, P. Collins37, A. Comerma-
Montells35, A. Contu15, A. Cook45, M. Coombes45, S. Coquereau8, G. Corti37, B.
Couturier37, G.A. Cowan38, D. Craik47, S. Cunliffe52, R. Currie49, C.
D’Ambrosio37, P. David8, P.N.Y. David40, I. De Bonis4, K. De Bruyn40, S. De
Capua53, M. De Cian39, J.M. De Miranda1, M. De Oyanguren Campos35,o, L. De
Paula2, W. De Silva59, P. De Simone18, D. Decamp4, M. Deckenhoff9, L. Del
Buono8, D. Derkach14, O. Deschamps5, F. Dettori41, A. Di Canto11, H.
Dijkstra37, M. Dogaru28, S. Donleavy51, F. Dordei11, A. Dosil Suárez36, D.
Dossett47, A. Dovbnya42, F. Dupertuis38, R. Dzhelyadin34, A. Dziurda25, A.
Dzyuba29, S. Easo48,37, U. Egede52, V. Egorychev30, S. Eidelman33, D. van
Eijk40, S. Eisenhardt49, U. Eitschberger9, R. Ekelhof9, L. Eklund50, I. El
Rifai5, Ch. Elsasser39, D. Elsby44, A. Falabella14,e, C. Färber11, G.
Fardell49, C. Farinelli40, S. Farry12, V. Fave38, D. Ferguson49, V. Fernandez
Albor36, F. Ferreira Rodrigues1, M. Ferro-Luzzi37, S. Filippov32, C.
Fitzpatrick37, M. Fontana10, F. Fontanelli19,i, R. Forty37, O. Francisco2, M.
Frank37, C. Frei37, M. Frosini17,f, S. Furcas20, E. Furfaro23, A. Gallas
Torreira36, D. Galli14,c, M. Gandelman2, P. Gandini54, Y. Gao3, J. Garofoli56,
P. Garosi53, J. Garra Tico46, L. Garrido35, C. Gaspar37, R. Gauld54, E.
Gersabeck11, M. Gersabeck53, T. Gershon47,37, Ph. Ghez4, V. Gibson46, V.V.
Gligorov37, C. Göbel57, D. Golubkov30, A. Golutvin52,30,37, A. Gomes2, H.
Gordon54, M. Grabalosa Gándara5, R. Graciani Diaz35, L.A. Granado Cardoso37,
E. Graugés35, G. Graziani17, A. Grecu28, E. Greening54, S. Gregson46, O.
Grünberg58, B. Gui56, E. Gushchin32, Yu. Guz34, T. Gys37, C. Hadjivasiliou56,
G. Haefeli38, C. Haen37, S.C. Haines46, S. Hall52, T. Hampson45, S. Hansmann-
Menzemer11, N. Harnew54, S.T. Harnew45, J. Harrison53, T. Hartmann58, J. He7,
V. Heijne40, K. Hennessy51, P. Henrard5, J.A. Hernando Morata36, E. van
Herwijnen37, E. Hicks51, D. Hill54, M. Hoballah5, C. Hombach53, P. Hopchev4,
W. Hulsbergen40, P. Hunt54, T. Huse51, N. Hussain54, D. Hutchcroft51, D.
Hynds50, V. Iakovenko43, M. Idzik26, P. Ilten12, R. Jacobsson37, A. Jaeger11,
E. Jans40, P. Jaton38, F. Jing3, M. John54, D. Johnson54, C.R. Jones46, B.
Jost37, M. Kaballo9, S. Kandybei42, M. Karacson37, T.M. Karbach37, I.R.
Kenyon44, U. Kerzel37, T. Ketel41, A. Keune38, B. Khanji20, O. Kochebina7, I.
Komarov38,31, R.F. Koopman41, P. Koppenburg40, M. Korolev31, A. Kozlinskiy40,
L. Kravchuk32, K. Kreplin11, M. Kreps47, G. Krocker11, P. Krokovny33, F.
Kruse9, M. Kucharczyk20,25,j, V. Kudryavtsev33, T. Kvaratskheliya30,37, V.N.
La Thi38, D. Lacarrere37, G. Lafferty53, A. Lai15, D. Lambert49, R.W.
Lambert41, E. Lanciotti37, G. Lanfranchi18,37, C. Langenbruch37, T. Latham47,
C. Lazzeroni44, R. Le Gac6, J. van Leerdam40, J.-P. Lees4, R. Lefèvre5, A.
Leflat31,37, J. Lefrançois7, S. Leo22, O. Leroy6, B.
Leverington11, Y. Li3, L. Li Gioi5, M. Liles51, R. Lindner37, C. Linn11, B.
Liu3, G. Liu37, J. von Loeben20, S. Lohn37, J.H. Lopes2, E. Lopez Asamar35, N.
Lopez-March38, H. Lu3, D. Lucchesi21,q, J. Luisier38, H. Luo49, F. Machefert7,
I.V. Machikhiliyan4,30, F. Maciuc28, O. Maev29,37, S. Malde54, G. Manca15,d,
G. Mancinelli6, U. Marconi14, R. Märki38, J. Marks11, G. Martellotti24, A.
Martens8, L. Martin54, A. Martín Sánchez7, M. Martinelli40, D. Martinez
Santos41, D. Martins Tostes2, A. Massafferri1, R. Matev37, Z. Mathe37, C.
Matteuzzi20, E. Maurice6, A. Mazurov16,32,37,e, J. McCarthy44, R. McNulty12,
A. Mcnab53, B. Meadows59,54, F. Meier9, M. Meissner11, M. Merk40, D.A.
Milanes8, M.-N. Minard4, J. Molina Rodriguez57, S. Monteil5, D. Moran53, P.
Morawski25, M.J. Morello22,s, R. Mountain56, I. Mous40, F. Muheim49, K.
Müller39, R. Muresan28, B. Muryn26, B. Muster38, P. Naik45, T. Nakada38, R.
Nandakumar48, I. Nasteva1, M. Needham49, N. Neufeld37, A.D. Nguyen38, T.D.
Nguyen38, C. Nguyen-Mau38,p, M. Nicol7, V. Niess5, R. Niet9, N. Nikitin31, T.
Nikodem11, A. Nomerotski54, A. Novoselov34, A. Oblakowska-Mucha26, V.
Obraztsov34, S. Oggero40, S. Ogilvy50, O. Okhrimenko43, R. Oldeman15,d,37, M.
Orlandea28, J.M. Otalora Goicochea2, P. Owen52, B.K. Pal56, A. Palano13,b, M.
Palutan18, J. Panman37, A. Papanestis48, M. Pappagallo50, C. Parkes53, C.J.
Parkinson52, G. Passaleva17, G.D. Patel51, M. Patel52, G.N. Patrick48, C.
Patrignani19,i, C. Pavel-Nicorescu28, A. Pazos Alvarez36, A. Pellegrino40, G.
Penso24,l, M. Pepe Altarelli37, S. Perazzini14,c, D.L. Perego20,j, E. Perez
Trigo36, A. Pérez-Calero Yzquierdo35, P. Perret5, M. Perrin-Terrin6, G.
Pessina20, K. Petridis52, A. Petrolini19,i, A. Phan56, E. Picatoste Olloqui35,
B. Pietrzyk4, T. Pilař47, D. Pinci24, S. Playfer49, M. Plo Casasus36, F.
Polci8, S. Polikarpov30, G. Polok25, A. Poluektov47,33, E. Polycarpo2, D.
Popov10, B. Popovici28, C. Potterat35, A. Powell54, J. Prisciandaro38, V.
Pugatch43, A. Puig Navarro38, G. Punzi22,r, W. Qian4, J.H. Rademacker45, B.
Rakotomiaramanana38, M.S. Rangel2, I. Raniuk42, N. Rauschmayr37, G. Raven41,
S. Redford54, M.M. Reid47, A.C. dos Reis1, S. Ricciardi48, A. Richards52, K.
Rinnert51, V. Rives Molina35, D.A. Roa Romero5, P. Robbe7, E. Rodrigues53, P.
Rodriguez Perez36, S. Roiser37, V. Romanovsky34, A. Romero Vidal36, J.
Rouvinet38, T. Ruf37, F. Ruffini22, H. Ruiz35, P. Ruiz Valls35,o, G.
Sabatino24,k, J.J. Saborido Silva36, N. Sagidova29, P. Sail50, B. Saitta15,d,
C. Salzmann39, B. Sanmartin Sedes36, M. Sannino19,i, R. Santacesaria24, C.
Santamarina Rios36, E. Santovetti23,k, M. Sapunov6, A. Sarti18,l, C.
Satriano24,m, A. Satta23, M. Savrie16,e, D. Savrina30,31, P. Schaack52, M.
Schiller41, H. Schindler37, M. Schlupp9, M. Schmelling10, B. Schmidt37, O.
Schneider38, A. Schopper37, M.-H. Schune7, R. Schwemmer37, B. Sciascia18, A.
Sciubba24, M. Seco36, A. Semennikov30, K. Senderowska26, I. Sepp52, N.
Serra39, J. Serrano6, P. Seyfert11, M. Shapkin34, I. Shapoval42,37, P.
Shatalov30, Y. Shcheglov29, T. Shears51,37, L. Shekhtman33, O. Shevchenko42,
V. Shevchenko30, A. Shires52, R. Silva Coutinho47, T. Skwarnicki56, N.A.
Smith51, E. Smith54,48, M. Smith53, M.D. Sokoloff59, F.J.P. Soler50, F.
Soomro18,37, D. Souza45, B. Souza De Paula2, B. Spaan9, A. Sparkes49, P.
Spradlin50, F. Stagni37, S. Stahl11, O. Steinkamp39, S. Stoica28, S. Stone56,
B. Storaci39, M. Straticiuc28, U. Straumann39, V.K. Subbiah37, S. Swientek9,
V. Syropoulos41, M. Szczekowski27, P. Szczypka38,37, T. Szumlak26, S.
T’Jampens4, M. Teklishyn7, E. Teodorescu28, F. Teubert37, C. Thomas54, E.
Thomas37, J. van Tilburg11, V. Tisserand4, M. Tobin39, S. Tolk41, D.
Tonelli37, S. Topp-Joergensen54, N. Torr54, E. Tournefier4,52, S. Tourneur38,
M.T. Tran38, M. Tresch39, A. Tsaregorodtsev6, P. Tsopelas40, N. Tuning40, M.
Ubeda Garcia37, A. Ukleja27, D. Urner53, U. Uwer11, V. Vagnoni14, G.
Valenti14, R. Vazquez Gomez35, P. Vazquez Regueiro36, S. Vecchi16, J.J.
Velthuis45, M. Veltri17,g, G. Veneziano38, M. Vesterinen37, B. Viaud7, D.
Vieira2, X. Vilasis-Cardona35,n, A. Vollhardt39, D. Volyanskyy10, D. Voong45,
A. Vorobyev29, V. Vorobyev33, C. Voß58, H. Voss10, R. Waldi58, R. Wallace12,
S. Wandernoth11, J. Wang56, D.R. Ward46, N.K. Watson44, A.D. Webber53, D.
Websdale52, M. Whitehead47, J. Wicht37, J. Wiechczynski25, D. Wiedner11, L.
Wiggers40, G. Wilkinson54, M.P. Williams47,48, M. Williams55, F.F. Wilson48,
J. Wishahi9, M. Witek25, S.A. Wotton46, S. Wright46, S. Wu3, K. Wyllie37, Y.
Xie49,37, F. Xing54, Z. Xing56, Z. Yang3, R. Young49, X. Yuan3, O.
Yushchenko34, M. Zangoli14, M. Zavertyaev10,a, F. Zhang3, L. Zhang56, W.C.
Zhang12, Y. Zhang3, A. Zhelezov11, A. Zhokhov30, L. Zhong3, A. Zvyagin37.
1Centro Brasileiro de Pesquisas Físicas (CBPF), Rio de Janeiro, Brazil
2Universidade Federal do Rio de Janeiro (UFRJ), Rio de Janeiro, Brazil
3Center for High Energy Physics, Tsinghua University, Beijing, China
4LAPP, Université de Savoie, CNRS/IN2P3, Annecy-Le-Vieux, France
5Clermont Université, Université Blaise Pascal, CNRS/IN2P3, LPC, Clermont-
Ferrand, France
6CPPM, Aix-Marseille Université, CNRS/IN2P3, Marseille, France
7LAL, Université Paris-Sud, CNRS/IN2P3, Orsay, France
8LPNHE, Université Pierre et Marie Curie, Université Paris Diderot,
CNRS/IN2P3, Paris, France
9Fakultät Physik, Technische Universität Dortmund, Dortmund, Germany
10Max-Planck-Institut für Kernphysik (MPIK), Heidelberg, Germany
11Physikalisches Institut, Ruprecht-Karls-Universität Heidelberg, Heidelberg,
Germany
12School of Physics, University College Dublin, Dublin, Ireland
13Sezione INFN di Bari, Bari, Italy
14Sezione INFN di Bologna, Bologna, Italy
15Sezione INFN di Cagliari, Cagliari, Italy
16Sezione INFN di Ferrara, Ferrara, Italy
17Sezione INFN di Firenze, Firenze, Italy
18Laboratori Nazionali dell’INFN di Frascati, Frascati, Italy
19Sezione INFN di Genova, Genova, Italy
20Sezione INFN di Milano Bicocca, Milano, Italy
21Sezione INFN di Padova, Padova, Italy
22Sezione INFN di Pisa, Pisa, Italy
23Sezione INFN di Roma Tor Vergata, Roma, Italy
24Sezione INFN di Roma La Sapienza, Roma, Italy
25Henryk Niewodniczanski Institute of Nuclear Physics Polish Academy of
Sciences, Kraków, Poland
26AGH University of Science and Technology, Kraków, Poland
27National Center for Nuclear Research (NCBJ), Warsaw, Poland
28Horia Hulubei National Institute of Physics and Nuclear Engineering,
Bucharest-Magurele, Romania
29Petersburg Nuclear Physics Institute (PNPI), Gatchina, Russia
30Institute of Theoretical and Experimental Physics (ITEP), Moscow, Russia
31Institute of Nuclear Physics, Moscow State University (SINP MSU), Moscow,
Russia
32Institute for Nuclear Research of the Russian Academy of Sciences (INR RAN),
Moscow, Russia
33Budker Institute of Nuclear Physics (SB RAS) and Novosibirsk State
University, Novosibirsk, Russia
34Institute for High Energy Physics (IHEP), Protvino, Russia
35Universitat de Barcelona, Barcelona, Spain
36Universidad de Santiago de Compostela, Santiago de Compostela, Spain
37European Organization for Nuclear Research (CERN), Geneva, Switzerland
38Ecole Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
39Physik-Institut, Universität Zürich, Zürich, Switzerland
40Nikhef National Institute for Subatomic Physics, Amsterdam, The Netherlands
41Nikhef National Institute for Subatomic Physics and VU University Amsterdam,
Amsterdam, The Netherlands
42NSC Kharkiv Institute of Physics and Technology (NSC KIPT), Kharkiv, Ukraine
43Institute for Nuclear Research of the National Academy of Sciences (KINR),
Kyiv, Ukraine
44University of Birmingham, Birmingham, United Kingdom
45H.H. Wills Physics Laboratory, University of Bristol, Bristol, United
Kingdom
46Cavendish Laboratory, University of Cambridge, Cambridge, United Kingdom
47Department of Physics, University of Warwick, Coventry, United Kingdom
48STFC Rutherford Appleton Laboratory, Didcot, United Kingdom
49School of Physics and Astronomy, University of Edinburgh, Edinburgh, United
Kingdom
50School of Physics and Astronomy, University of Glasgow, Glasgow, United
Kingdom
51Oliver Lodge Laboratory, University of Liverpool, Liverpool, United Kingdom
52Imperial College London, London, United Kingdom
53School of Physics and Astronomy, University of Manchester, Manchester,
United Kingdom
54Department of Physics, University of Oxford, Oxford, United Kingdom
55Massachusetts Institute of Technology, Cambridge, MA, United States
56Syracuse University, Syracuse, NY, United States
57Pontifícia Universidade Católica do Rio de Janeiro (PUC-Rio), Rio de
Janeiro, Brazil, associated to 2
58Institut für Physik, Universität Rostock, Rostock, Germany, associated to 11
59University of Cincinnati, Cincinnati, OH, United States, associated to 56
aP.N. Lebedev Physical Institute, Russian Academy of Science (LPI RAS),
Moscow, Russia
bUniversità di Bari, Bari, Italy
cUniversità di Bologna, Bologna, Italy
dUniversità di Cagliari, Cagliari, Italy
eUniversità di Ferrara, Ferrara, Italy
fUniversità di Firenze, Firenze, Italy
gUniversità di Urbino, Urbino, Italy
hUniversità di Modena e Reggio Emilia, Modena, Italy
iUniversità di Genova, Genova, Italy
jUniversità di Milano Bicocca, Milano, Italy
kUniversità di Roma Tor Vergata, Roma, Italy
lUniversità di Roma La Sapienza, Roma, Italy
mUniversità della Basilicata, Potenza, Italy
nLIFAELS, La Salle, Universitat Ramon Llull, Barcelona, Spain
oIFIC, Universitat de Valencia-CSIC, Valencia, Spain
pHanoi University of Science, Hanoi, Viet Nam
qUniversità di Padova, Padova, Italy
rUniversità di Pisa, Pisa, Italy
sScuola Normale Superiore, Pisa, Italy
## 1 Introduction
Decays of B mesons containing a charmonium resonance,
${\mathrm{J}\mskip-3.0mu/\mskip-2.0mu\uppsi\mskip 2.0mu}$ or
$\uppsi{(2\mathrm{S})}$, in the final state play a crucial role in the study
of $C\\!P$ violation and in the precise measurement of neutral B meson mixing
parameters.
The
$\mathrm{B}^{0}_{\mathrm{s}}\rightarrow{\mathrm{J}\mskip-3.0mu/\mskip-2.0mu\uppsi\mskip
2.0mu}\upeta$ decay was observed by the Belle collaboration and the branching
fraction was measured to be ${\cal
B}(\mathrm{B}^{0}_{\mathrm{s}}\rightarrow{\mathrm{J}\mskip-3.0mu/\mskip-2.0mu\uppsi\mskip
2.0mu}\upeta)=(5.10\pm 0.50\pm 0.25\,^{+1.14}_{-0.79})\times 10^{-4}$ [1],
where the first uncertainty is statistical, the second systematic and the
third due to the uncertainty in the number of produced
$\mathrm{B}^{0}_{\mathrm{s}}\kern
1.79993pt\overline{\kern-1.79993pt\mathrm{B}}{}^{0}_{\mathrm{s}}$ pairs. This
decay has also recently been reported by LHCb, including the decay
$\mathrm{B}^{0}_{\mathrm{s}}\rightarrow{\mathrm{J}\mskip-3.0mu/\mskip-2.0mu\uppsi\mskip
2.0mu}\upeta^{\prime}$ [2].
The
$\mathrm{B}^{0}_{\mathrm{(s)}}\rightarrow{\mathrm{J}\mskip-3.0mu/\mskip-2.0mu\uppsi\mskip
2.0mu}\uppi^{+}\uppi^{-}$ decays, where $\mathrm{B}^{0}_{\mathrm{(s)}}$
denotes a $\mathrm{B}^{0}$ or $\mathrm{B}^{0}_{\mathrm{s}}$ meson, have been
studied previously and the $\uppi^{+}\uppi^{-}$ final states are found to
comprise the decay products of the $\uprho^{0}(770)$ and
$\mathrm{f_{2}}(1270)$ mesons in case of $\mathrm{B}^{0}$ decays and of
$\mathrm{f_{0}}(980)$ and $\mathrm{f_{0}}(1370)$ mesons in case of
$\mathrm{B}^{0}_{\mathrm{s}}$ decays [3, 4, 5]. The
$\mathrm{B}^{0}_{\mathrm{s}}$ modes have been used to measure mixing-induced
$C\\!P$ violation [6, 7]. The decays
$\mathrm{B}^{0}_{\mathrm{s}}\rightarrow\uppsi{(2\mathrm{S})}\upeta$ and
$\mathrm{B}^{0}_{\mathrm{(s)}}\rightarrow\uppsi{(2\mathrm{S})}\uppi^{+}\uppi^{-}$
have not previously been studied.
The relative branching fractions of $\mathrm{B}^{0}$ and
$\mathrm{B}^{0}_{\mathrm{s}}$ mesons into final states containing
${\mathrm{J}\mskip-3.0mu/\mskip-2.0mu\uppsi\mskip 2.0mu}$ and
$\uppsi{(2\mathrm{S})}$ mesons have been studied by several experiments (CDF
[8, 9], D0 [10] and LHCb [11]). In this paper, measurements of the branching
fraction ratios of $\mathrm{B}^{0}_{\mathrm{(s)}}$ mesons decaying to
$\uppsi{(2\mathrm{S})}\mathrm{X^{0}}$ and
${\mathrm{J}\mskip-3.0mu/\mskip-2.0mu\uppsi\mskip 2.0mu}\mathrm{X^{0}}$ are
reported, where $\mathrm{X^{0}}$ denotes either an $\upeta$ meson or a
$\uppi^{+}\uppi^{-}$ system. Charge conjugate decays are implicitly included.
The analysis presented here is based on a data sample corresponding to an
integrated luminosity of 1.0$\mbox{\,fb}^{-1}$ collected with the LHCb
detector during $2011$ in $\mathrm{p}\mathrm{p}$ collisions at a centre-of-
mass energy of $\sqrt{s}=7\mathrm{\,Te\kern-1.00006ptV}$.
## 2 LHCb detector
The LHCb detector [12] is a single-arm forward spectrometer covering the
pseudorapidity range $2<\eta<5$, designed for the study of particles
containing $\mathrm{b}$ or $\mathrm{c}$ quarks. The detector includes a high
precision tracking system consisting of a silicon-strip vertex detector
surrounding the $\mathrm{p}\mathrm{p}$ interaction region, a large-area
silicon-strip detector located upstream of a dipole magnet with a bending
power of about $4{\rm\,Tm}$, and three stations of silicon-strip detectors and
straw drift tubes placed downstream. The combined tracking system has momentum
resolution $\Delta p/p$ that varies from 0.4% at
5${\mathrm{\,Ge\kern-1.00006ptV\\!/}c}$ to 0.6% at
100${\mathrm{\,Ge\kern-1.00006ptV\\!/}c}$, and impact parameter resolution of
20$\,\upmu\rm m$ for tracks with high transverse momentum ($p_{\rm T}$).
Charged hadrons are identified using two ring-imaging Cherenkov detectors.
Photon, electron and hadron candidates are identified by a calorimeter system
consisting of scintillating-pad and preshower detectors, an electromagnetic
calorimeter and a hadronic calorimeter. Muons are identified by a system
composed of alternating layers of iron and multiwire proportional chambers.
The trigger [13] consists of a hardware stage, based on information from the
calorimeter and muon systems, followed by a software stage where a full event
reconstruction is applied. Candidate events are first required to pass a
hardware trigger which selects muons with a transverse momentum,
$\mbox{$p_{\rm T}$}>1.48{\mathrm{\,Ge\kern-1.00006ptV\\!/}c}$. In the
subsequent software trigger, at least one of the final state particles is
required to have both $\mbox{$p_{\rm
T}$}>0.8{\mathrm{\,Ge\kern-1.00006ptV\\!/}c}$ and impact parameter
$>100\,\upmu\rm m$ with respect to all of the primary $\mathrm{p}\mathrm{p}$
interaction vertices (PVs) in the event. Finally, two or more of the final
state particles are required to form a vertex which is significantly displaced
from the PVs.
For the simulation, $\mathrm{p}\mathrm{p}$ collisions are generated using
Pythia 6.4 [14] with a specific LHCb configuration [15]. Decays of hadronic
particles are described by EvtGen [16] in which final state radiation is
generated using Photos [17]. The interaction of the generated particles with
the detector and its response are implemented using the Geant4 toolkit [18,
19] as described in Ref. [20].
## 3 Event selection
The decays $\mathrm{B}^{0}_{\mathrm{(s)}}\rightarrow\uppsi\upeta$ and
$\mathrm{B}^{0}_{\mathrm{(s)}}\rightarrow\uppsi\uppi^{+}\uppi^{-}$, where
$\uppsi$ denotes ${\mathrm{J}\mskip-3.0mu/\mskip-2.0mu\uppsi\mskip 2.0mu}$ or
$\uppsi{(2\mathrm{S})}$, are reconstructed using
$\uppsi\rightarrow\upmu^{+}\upmu^{-}$ and $\upeta\rightarrow\upgamma\upgamma$
decay modes. Pairs of oppositely-charged tracks identified as muons, each
having $\mbox{$p_{\rm T}$}>0.55{\mathrm{\,Ge\kern-1.00006ptV\\!/}c}$ and
originating from a common vertex, are combined to form
$\uppsi\rightarrow\upmu^{+}\upmu^{-}$ candidates. Track quality is ensured by
requiring the $\chi^{2}$ per number of degrees of freedom
($\chi^{2}/\mathrm{ndf}$) provided by the track fit to be less than 5. Well
identified muons are selected by requiring that the difference in logarithms
of the global likelihood of the muon hypothesis,
$\Delta\log\mathcal{L}_{\mu\mathrm{h}}$ [21], provided by the particle
identification detectors, with respect to the hadron hypothesis is larger than
zero. The fit of the common two-prong vertex is required to satisfy
$\chi^{2}/\mathrm{ndf}<20$. The vertex is deemed to be well separated from the
reconstructed primary vertex of the proton-proton interaction by requiring the
decay length significance to be larger than three. Finally, the invariant mass
of the dimuon combination is required to be between 3.020 and
3.135${\mathrm{\,Ge\kern-1.00006ptV\\!/}c^{2}}$ for
${\mathrm{J}\mskip-3.0mu/\mskip-2.0mu\uppsi\mskip 2.0mu}$ candidates and
between 3.597 and 3.730${\mathrm{\,Ge\kern-1.00006ptV\\!/}c^{2}}$ for
$\uppsi{(2\mathrm{S})}$ candidates. These correspond to $[-5\sigma;+3\sigma]$
intervals around the nominal masses to accommodate QED radiation.
The pions are required to have $\mbox{$p_{\rm
T}$}>0.25{\mathrm{\,Ge\kern-1.00006ptV\\!/}c}$ and an impact parameter
$\chi^{2}$, defined as the difference between the $\chi^{2}$ of the PV formed
with and without the considered track, larger than 9. When more than one PV is
reconstructed, the smallest value of impact parameter $\chi^{2}$ is chosen. In
addition, to suppress contamination from kaons, the difference between the
logarithms of likelihoods of the pion and kaon hypotheses,
$\Delta\log\mathcal{L}_{\uppi\mathrm{K}}$ [22], provided by the RICH
detectors, has to be larger than zero.
Photons are selected from neutral clusters in the electromagnetic calorimeter
with transverse energy in excess of $0.4\mathrm{\,Ge\kern-1.00006ptV}$. The
$\upeta\rightarrow\upgamma\upgamma$ candidates are reconstructed as diphoton
combinations with an invariant mass within $\pm
70{\mathrm{\,Me\kern-1.00006ptV\\!/}c^{2}}$ of the $\upeta$ mass [23]. To
suppress the large combinatorial background from the decays of neutral pions,
photons that form a $\uppi^{0}\rightarrow\upgamma\upgamma$ candidate with
invariant mass within $\pm 25{\mathrm{\,Me\kern-1.00006ptV\\!/}c^{2}}$ of the
$\uppi^{0}$ mass are not used to reconstruct
$\upeta\rightarrow\upgamma\upgamma$ candidates.
The $\mathrm{B}^{0}_{\mathrm{(s)}}$ candidates are formed from
$\uppsi\mathrm{X^{0}}$ combinations. In the $\uppsi\upeta$ case an additional
requirement $\mbox{$p_{\rm
T}$}(\upeta)>2.5{\mathrm{\,Ge\kern-1.00006ptV\\!/}c}$ is applied to reduce
combinatorial background. To improve the invariant mass resolution a kinematic
fit [24] is performed. In this fit, constraints are applied on the known
masses [23] of intermediate resonances, and it is also required that the
candidate’s momentum vector points to the associated primary vertex. The
$\chi^{2}/\mathrm{ndf}$ for this fit is required to be less than 5. Finally,
the decay time, $ct$, of the $\mathrm{B}^{0}_{\mathrm{(s)}}$ candidate,
calculated with respect to the primary vertex, is required to be in excess of
$150\,\upmu\rm m$.
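As an informal summary of these requirements, the selection can be sketched as a simple filter function; the candidate fields, threshold names and example values below are hypothetical illustrations and are not taken from the LHCb software.

```python
# Illustrative summary of the selection described above; the candidate
# attributes are hypothetical and do not correspond to actual LHCb code.

def passes_selection(cand, psi_mode="jpsi"):
    """Return True if a hypothetical psi X0 candidate passes the main cuts."""
    if min(cand["mu_pt"]) <= 0.55:                 # GeV/c, both muons
        return False
    if max(cand["track_chi2_ndf"]) >= 5:           # track-fit quality
        return False
    if min(cand["mu_dll"]) <= 0:                   # Delta log L (muon - hadron)
        return False
    if cand["dimuon_vtx_chi2_ndf"] >= 20:          # two-prong vertex fit
        return False
    if cand["dimuon_fd_significance"] <= 3:        # decay-length significance
        return False
    lo, hi = (3.020, 3.135) if psi_mode == "jpsi" else (3.597, 3.730)
    if not lo < cand["dimuon_mass"] < hi:          # GeV/c^2
        return False
    if cand["kin_fit_chi2_ndf"] >= 5:              # B-candidate kinematic fit
        return False
    if cand["ct_um"] <= 150:                       # decay time ct in micron
        return False
    return True

example = {"mu_pt": [1.2, 0.9], "track_chi2_ndf": [1.1, 1.4], "mu_dll": [4.0, 6.5],
           "dimuon_vtx_chi2_ndf": 2.3, "dimuon_fd_significance": 7.0,
           "dimuon_mass": 3.686, "kin_fit_chi2_ndf": 1.8, "ct_um": 420.0}
print(passes_selection(example, psi_mode="psi2s"))   # True
```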
## 4 Observation of the
$\mathrm{B}^{0}_{\mathrm{s}}\rightarrow\uppsi{(2\mathrm{S})}\upeta$ decay
The invariant mass distributions of the selected $\uppsi\upeta$ candidates are
shown in Fig. 1. The $\mathrm{B}^{0}_{\mathrm{s}}\rightarrow\uppsi\upeta$
signal yields are estimated by performing unbinned extended maximum likelihood
fits. The $\mathrm{B}^{0}_{\mathrm{s}}$ signal is modelled by a Gaussian
distribution and the background by an exponential function. In the
${\mathrm{J}\mskip-3.0mu/\mskip-2.0mu\uppsi\mskip 2.0mu}\upeta$ case a
possible contribution from the corresponding $\mathrm{B}^{0}$ decays is
included in the fit model as an additional Gaussian component. The resolutions
of the two Gaussian functions are set to be the same and the difference of
their central values is fixed to the known difference between the
$\mathrm{B}^{0}_{\mathrm{s}}$ and the $\mathrm{B}^{0}$ masses [23]. The
contribution from the decay
$\mathrm{B}^{0}\rightarrow\uppsi{(2\mathrm{S})}\upeta$ is not considered in
the baseline fit model. The mass resolution of the
$\mathrm{B}^{0}_{\mathrm{s}}\rightarrow\uppsi{(2\mathrm{S})}\upeta$ decay mode
is fixed to the value $\sigma^{\uppsi{(2\mathrm{S})}\upeta}_{\rm
DATA}=\sigma^{{\mathrm{J}\mskip-3.0mu/\mskip-2.0mu\uppsi\mskip
2.0mu}\upeta}_{\rm DATA}\times\sigma^{\uppsi{(2\mathrm{S})}\upeta}_{\rm
MC}/\sigma^{{\mathrm{J}\mskip-3.0mu/\mskip-2.0mu\uppsi\mskip
2.0mu}\upeta}_{\rm MC}$, where $\sigma_{\rm DATA}$ and $\sigma_{\rm MC}$ are
the widths of the corresponding channel in data and simulation, respectively.
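For illustration, a minimal sketch of such an unbinned extended maximum-likelihood fit (one Gaussian signal plus an exponential background) is shown below, using generic SciPy minimisation on a toy dataset; the numerical values and the fitting tool are assumptions made for the example and are not those used in the analysis.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm, expon

def extended_nll(params, m, m_lo, m_hi):
    """Extended negative log-likelihood for a Gaussian signal on an
    exponential background, restricted to the mass window [m_lo, m_hi]."""
    n_sig, n_bkg, mu, sigma, tau = params
    sig_pdf = norm.pdf(m, mu, sigma)
    # exponential background truncated to the fit window
    bkg_pdf = expon.pdf(m - m_lo, scale=tau) / expon.cdf(m_hi - m_lo, scale=tau)
    dens = np.clip(n_sig * sig_pdf + n_bkg * bkg_pdf, 1e-300, None)
    return (n_sig + n_bkg) - np.sum(np.log(dens))

# toy pseudo-dataset loosely inspired by Table 1 (values are illustrative only)
rng = np.random.default_rng(1)
m_lo, m_hi = 5.15, 5.60                                  # GeV/c^2
masses = np.concatenate([rng.normal(5.3734, 0.0266, 76),      # "signal"
                         m_lo + rng.exponential(0.3, 400)])   # "background"
masses = masses[(masses > m_lo) & (masses < m_hi)]

start = (80, 400, 5.37, 0.03, 0.3)
res = minimize(extended_nll, start, args=(masses, m_lo, m_hi),
               method="Nelder-Mead")
print("fitted signal and background yields:", res.x[:2])
```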
[Plot: panels (a) and (b); vertical axes in candidates per 20 MeV/$c^{2}$; horizontal axes $\mathrm{M}(\mathrm{J}/\uppsi\,\upeta)$ and $\mathrm{M}(\uppsi(2\mathrm{S})\upeta)$ in GeV/$c^{2}$ (LHCb).]
Figure 1: Mass distributions of (a)
$\mathrm{B}^{0}_{\mathrm{(s)}}\rightarrow{\mathrm{J}\mskip-3.0mu/\mskip-2.0mu\uppsi\mskip
2.0mu}\upeta$ and (b)
$\mathrm{B}^{0}_{\mathrm{(s)}}\rightarrow\uppsi{(2\mathrm{S})}\upeta$
candidates. The total fit function (solid black) and the combinatorial
background (dashed) are shown. The solid red lines show the signal
$\mathrm{B}^{0}_{\mathrm{s}}$ contribution and the red dot dashed line
corresponds to the $\mathrm{B}^{0}$ contribution.
The fit results are summarised in Table 1. In all cases the positions of the
signal peaks are consistent with the nominal $\mathrm{B}^{0}_{\mathrm{s}}$
mass [23] and the resolutions are in agreement with the expectations from
simulation. The measured yield of
$\mathrm{B}^{0}\rightarrow{\mathrm{J}\mskip-3.0mu/\mskip-2.0mu\uppsi\mskip
2.0mu}\upeta$ is $144\pm 41$ events (uncertainty is statistical only), which
is consistent with the expected value based on the measured branching fraction
of this decay [25]. The statistical significance in each fit is determined as
$\mathcal{S}=\sqrt{-2\ln{\frac{\mathcal{L}_{\mathrm{B}}}{\mathcal{L}_{\mathrm{S+B}}}}}$
, where ${\mathcal{L}_{\mathrm{S+B}}}$ and ${\mathcal{L}_{\mathrm{B}}}$ denote
the likelihood of the signal plus background hypothesis and the background
only hypothesis, respectively. Taking into account the systematic uncertainty
related to the fit function, which is discussed in detail in Sect. 6, the
significance of the
$\mathrm{B}^{0}_{\mathrm{s}}\rightarrow\uppsi{(2\mathrm{S})}\upeta$ signal is
$6.2\sigma$.
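In code, this definition amounts to a one-line conversion from the minimised negative log-likelihoods of the two hypotheses; the numerical values in the example are placeholders chosen only to give a significance near 6.2.

```python
import math

def significance(nll_bkg_only, nll_sig_plus_bkg):
    """S = sqrt(-2 ln(L_B / L_{S+B})), expressed with the minimised
    negative log-likelihoods of the two hypotheses."""
    return math.sqrt(2.0 * (nll_bkg_only - nll_sig_plus_bkg))

# hypothetical fit outputs, for illustration only
print(significance(nll_bkg_only=1234.5, nll_sig_plus_bkg=1215.3))  # ~6.2
```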
Table 1: Fitted values of signal events ($N_{\mathrm{B}}$), signal peak position ($\mathrm{M}_{\mathrm{B}}$) and resolution ($\sigma_{\mathrm{B}}$). The quoted uncertainties are statistical only.
Mode | $N_{\mathrm{B}}$ | $\mathrm{M}_{\mathrm{B}}$ $[\mathrm{MeV}/c^{2}]$ | $\sigma_{\mathrm{B}}$ $[\mathrm{MeV}/c^{2}]$
---|---|---|---
$\mathrm{B}^{0}_{\mathrm{s}}\rightarrow\mathrm{J}/\uppsi\,\upeta$ | $863\pm 52$ | $5370.9\pm 2.3$ | $33.7\pm 2.3$
$\mathrm{B}^{0}_{\mathrm{s}}\rightarrow\uppsi{(2\mathrm{S})}\upeta$ | $76\pm 12$ | $5373.4\pm 5.0$ | $26.6$ (fixed)
To demonstrate that the signal originates from
$\mathrm{B}^{0}_{\mathrm{s}}\rightarrow\uppsi{(2\mathrm{S})}\upeta$ decays the
sPlot technique [26] has been used to separate the signal and the background.
Using the $\upmu^{+}\upmu^{-}\upgamma\upgamma$ invariant mass distribution as
the discriminating variable, the distributions for the invariant masses of the
intermediate resonances $\upeta\rightarrow\upgamma\upgamma$ and
$\uppsi{(2\mathrm{S})}\rightarrow\upmu^{+}\upmu^{-}$ have been obtained. In
this procedure, the invariant mass window for each corresponding resonance is
released and the mass constraint is removed. The resulting invariant mass
distributions for $\upgamma\upgamma$ and $\upmu^{+}\upmu^{-}$ from
$\mathrm{B}^{0}_{\mathrm{s}}\rightarrow\uppsi{(2\mathrm{S})}\upeta$ candidates
are shown in Fig. 2. Clear signals are seen in both
$\upeta\rightarrow\upgamma\upgamma$ and
$\uppsi{(2\mathrm{S})}\rightarrow\upmu^{+}\upmu^{-}$ decays. The distributions
are described by the sum of a Gaussian function and a constant. The fit shows
that the constant is consistent with zero, as expected.
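A compact sketch of the sWeight computation of Ref. [26] is given below, assuming the fitted species yields and the normalised per-event PDF values from the discriminating-variable fit are already available; it is only an illustration of the method, not the implementation used in the analysis.

```python
import numpy as np

def sweights(yields, pdfs):
    """Per-event sWeights following Pivk & Le Diberder.

    yields : fitted yields N_k, shape (n_species,)
    pdfs   : normalised PDF values f_k(y_e), shape (n_species, n_events)
    returns: sWeights, shape (n_species, n_events)
    """
    yields = np.asarray(yields, dtype=float)
    pdfs = np.asarray(pdfs, dtype=float)
    denom = yields @ pdfs                       # sum_k N_k f_k(y_e), per event
    v_inv = (pdfs / denom) @ (pdfs / denom).T   # V^-1_nj = sum_e f_n f_j / denom^2
    v = np.linalg.inv(v_inv)
    return (v @ pdfs) / denom                   # w_n(e) = sum_j V_nj f_j(e) / denom

# toy usage with two species (signal, background) and five events
yields = [30.0, 70.0]
pdfs = np.array([[0.9, 0.7, 0.2, 0.1, 0.05],    # signal PDF values f_s(y_e)
                 [0.1, 0.3, 0.8, 0.9, 0.95]])   # background PDF values f_b(y_e)
w = sweights(yields, pdfs)
print(w.shape)   # (2, 5); w[0] would fill the background-subtracted histograms
```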
[Plot: panels (a) and (b); vertical axes in candidates per 10 MeV/$c^{2}$; horizontal axes $\mathrm{M}(\upgamma\upgamma)$ and $\mathrm{M}(\upmu^{+}\upmu^{-})$ in GeV/$c^{2}$ (LHCb).]
Figure 2: Background subtracted (a) $\upgamma\upgamma$ and (b)
$\upmu^{+}\upmu^{-}$ mass distributions in
$\mathrm{B}^{0}_{\mathrm{s}}\rightarrow\uppsi{(2\mathrm{S})}\upeta$ decays. In
both cases the blue line is the result of the fit described in the text.
## 5 Observation of the
$\mathrm{B}^{0}_{\mathrm{(s)}}\rightarrow\uppsi{(2\mathrm{S})}\uppi^{+}\uppi^{-}$
decays
The invariant mass distributions for the
$\mathrm{B}^{0}_{\mathrm{(s)}}\rightarrow\uppsi\uppi^{+}\uppi^{-}$ candidates
are shown in Fig. 3. The narrow signals correspond to the
$\mathrm{B}^{0}\rightarrow\uppsi\uppi^{+}\uppi^{-}$ and
$\mathrm{B}^{0}_{\mathrm{s}}\rightarrow\uppsi\uppi^{+}\uppi^{-}$ decays. The
peak at lower mass corresponds to a reflection from
$\mathrm{B}^{0}\rightarrow\uppsi\mathrm{K}^{*0}(\rightarrow\mathrm{K}^{+}\uppi^{-})$
decays where the kaon is misidentified as a pion. The contribution from
$\mathrm{B}^{0}_{\mathrm{s}}\rightarrow\uppsi\mathrm{K}^{*0}$ decays [27] is
negligible.
The invariant mass distributions are fitted with two Gaussian functions to
describe the two signals, an asymmetric Gaussian function with different width
for the two sides to represent the reflection from
$\mathrm{B}^{0}\rightarrow\uppsi\mathrm{K}^{*0}$ decays and an exponential
function for the background. The fit results are summarised in Table 2. The
statistical significances of the signals are found to be larger than 9
standard deviations.
Table 2: Fitted values of signal events ($N_{\mathrm{B}}$), signal peak position ($\mathrm{M}_{\mathrm{B}}$) and resolution ($\sigma_{\mathrm{B}}$). The quoted uncertainties are statistical only.
Mode | $N_{\mathrm{B}}$ | $\mathrm{M}_{\mathrm{B}}$ $[\mathrm{MeV}/c^{2}]$ | $\sigma_{\mathrm{B}}$ $[\mathrm{MeV}/c^{2}]$
---|---|---|---
$\mathrm{B}^{0}\rightarrow\mathrm{J}/\uppsi\,\uppi^{+}\uppi^{-}$ | $2801\pm 85$ | $5281.1\pm 0.3$ | $8.2\pm 0.3$
$\mathrm{B}^{0}_{\mathrm{s}}\rightarrow\mathrm{J}/\uppsi\,\uppi^{+}\uppi^{-}$ | $4096\pm 86$ | $5368.4\pm 0.2$ | $8.7\pm 0.2$
$\mathrm{B}^{0}\rightarrow\uppsi{(2\mathrm{S})}\uppi^{+}\uppi^{-}$ | $202\pm 23$ | $5280.3\pm 1.0$ | $8.4\pm 1.1$
$\mathrm{B}^{0}_{\mathrm{s}}\rightarrow\uppsi{(2\mathrm{S})}\uppi^{+}\uppi^{-}$ | $178\pm 22$ | $5366.3\pm 1.2$ | $9.1\pm 1.4$
[Plot: panels (a) and (b); vertical axes in candidates per 5 MeV/$c^{2}$ and per 10 MeV/$c^{2}$; horizontal axes $\mathrm{M}(\mathrm{J}/\uppsi\,\uppi^{+}\uppi^{-})$ and $\mathrm{M}(\uppsi(2\mathrm{S})\uppi^{+}\uppi^{-})$ in GeV/$c^{2}$ (LHCb).]
Figure 3: Mass distributions of (a)
$\mathrm{B}^{0}_{\mathrm{(s)}}\rightarrow{\mathrm{J}\mskip-3.0mu/\mskip-2.0mu\uppsi\mskip
2.0mu}\uppi^{+}\uppi^{-}$ and (b)
$\mathrm{B}^{0}_{\mathrm{(s)}}\rightarrow\uppsi{(2\mathrm{S})}\uppi^{+}\uppi^{-}$
candidates. The total fit function (solid black) and the combinatorial
background (dashed) are shown. The solid red lines show the signal
$\mathrm{B}^{0}_{\mathrm{s}}$ contribution and the red dot dashed lines
correspond to the $\mathrm{B}^{0}$ contributions. The reflections from
misidentified $\mathrm{B}^{0}\rightarrow\uppsi\mathrm{K}^{*0}$,
$\mathrm{K}^{*0}\rightarrow\mathrm{K}^{+}\uppi^{-}$ decays are shown with
dotted blue lines.
For the
$\mathrm{B}^{0}_{\mathrm{(s)}}\rightarrow{\mathrm{J}\mskip-3.0mu/\mskip-2.0mu\uppsi\mskip
2.0mu}\uppi^{+}\uppi^{-}$ decays, the $\uppi^{+}\uppi^{-}$ mass shapes have
been studied in detail using a partial wave analysis in Refs. [4, 5]. The main
contributions are
$\mathrm{B}^{0}\rightarrow{\mathrm{J}\mskip-3.0mu/\mskip-2.0mu\uppsi\mskip
2.0mu}\uprho^{0}(770)$ and
$\mathrm{B}^{0}_{\mathrm{s}}\rightarrow{\mathrm{J}\mskip-3.0mu/\mskip-2.0mu\uppsi\mskip
2.0mu}\mathrm{f_{0}(980)}$. However, due to the limited number of signal
events, the same method cannot be used for the
$\mathrm{B}^{0}_{\mathrm{(s)}}\rightarrow\uppsi{(2\mathrm{S})}\uppi^{+}\uppi^{-}$
decays. The sPlot technique is used in order to study the dipion mass
distribution in those decays. With the
$\uppsi{(2\mathrm{S})}\uppi^{+}\uppi^{-}$ invariant mass as the discriminating
variable, the $\uppi^{+}\uppi^{-}$ invariant mass spectra from
$\mathrm{B}^{0}_{\mathrm{(s)}}\rightarrow\uppsi{(2\mathrm{S})}\uppi^{+}\uppi^{-}$
decays are obtained (see Fig. 4).
[Plot: panels (a) and (b); vertical axes in candidates per 50 MeV/$c^{2}$; horizontal axes $\mathrm{M}(\uppi^{+}\uppi^{-})$ in GeV/$c^{2}$ (LHCb).]
Figure 4: Background subtracted $\uppi^{+}\uppi^{-}$ mass distribution in (a)
$\mathrm{B}^{0}\rightarrow\uppsi{(2\mathrm{S})}\uppi^{+}\uppi^{-}$ and (b)
$\mathrm{B}^{0}_{\mathrm{s}}\rightarrow\uppsi{(2\mathrm{S})}\uppi^{+}\uppi^{-}$
(black points). The red filled area shows the expected signal spectrum for the
$\uppsi{(2\mathrm{S})}$ channel derived from the measured spectrum of the
${\mathrm{J}\mskip-3.0mu/\mskip-2.0mu\uppsi\mskip 2.0mu}$ channel (the fit has
one parameter — the normalisation). The width of the band corresponds to the
uncertainties of the distribution from the
${\mathrm{J}\mskip-3.0mu/\mskip-2.0mu\uppsi\mskip 2.0mu}$ channel. In case of
$\mathrm{B}^{0}\rightarrow\uppsi{(2\mathrm{S})}\uppi^{+}\uppi^{-}$, the blue
vertical filled area shows the $\mathrm{K}^{0}_{\rm\scriptscriptstyle S}$
region that is excluded from the fit.
To check that the background subtracted $\uppi^{+}\uppi^{-}$ distributions
have similar shapes in both channels, the distribution obtained from the
$\uppsi{(2\mathrm{S})}\uppi^{+}\uppi^{-}$ decay is fitted with the
distribution obtained from the
${\mathrm{J}\mskip-3.0mu/\mskip-2.0mu\uppsi\mskip 2.0mu}\uppi^{+}\uppi^{-}$
channel, corrected by the ratio of phase-space factors and by the ratio of the
efficiencies which depends on the dipion invariant mass. The p-value for the
$\chi^{2}$ fit is 30% for $\mathrm{B}^{0}\rightarrow\uppsi\uppi^{+}\uppi^{-}$
and 7% for $\mathrm{B}^{0}_{\mathrm{s}}\rightarrow\uppsi\uppi^{+}\uppi^{-}$,
respectively. As seen in Fig. 4,
$\mathrm{B}^{0}\rightarrow\uppsi{(2\mathrm{S})}\uprho^{0}(770)$ and
$\mathrm{B}^{0}_{\mathrm{s}}\rightarrow\uppsi{(2\mathrm{S})}\mathrm{f_{0}(980)}$
decays are the main contributions to
$\mathrm{B}^{0}_{\mathrm{(s)}}\rightarrow\uppsi{(2\mathrm{S})}\uppi^{+}\uppi^{-}$
decays. Detailed amplitude analyses of the resonance structures in
$\mathrm{B}^{0}_{\mathrm{(s)}}\rightarrow\uppsi{(2\mathrm{S})}\uppi^{+}\uppi^{-}$
decays, similar to Refs. [4, 5], will be possible with a larger dataset. This
will allow the possible excess of events in the region
$\mathrm{M}(\uppi^{+}\uppi^{-})>1.4{\mathrm{\,Ge\kern-1.00006ptV\\!/}c^{2}}$
to be investigated.
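A rough sketch of this one-parameter comparison is shown below; the binned spectra, the simplified error treatment and the optional mask (used, for instance, to exclude the $\mathrm{K}^{0}_{\rm\scriptscriptstyle S}$ region) are illustrative assumptions rather than the procedure actually used.

```python
import numpy as np
from scipy.stats import chi2

def template_fit(n_obs, err_obs, template, err_tmpl, mask=None):
    """One-parameter (normalisation) chi^2 fit of a binned spectrum to a
    template, as a rough stand-in for the comparison described above."""
    n_obs, err_obs = np.asarray(n_obs, float), np.asarray(err_obs, float)
    template, err_tmpl = np.asarray(template, float), np.asarray(err_tmpl, float)
    if mask is None:
        mask = np.ones(n_obs.size, dtype=bool)      # e.g. exclude the K0S region
    o, eo, t, et = n_obs[mask], err_obs[mask], template[mask], err_tmpl[mask]
    scale = np.sum(o * t / eo**2) / np.sum(t**2 / eo**2)   # approximate best fit
    chisq = np.sum((o - scale * t) ** 2 / (eo**2 + (scale * et) ** 2))
    ndf = o.size - 1
    return scale, chisq, chi2.sf(chisq, ndf)

# toy spectra (arbitrary numbers, for illustration only)
obs, e_obs = [12, 25, 30, 18, 9], [4, 5, 6, 5, 3]
tmpl, e_tmp = [60, 130, 150, 90, 45], [8, 12, 13, 10, 7]
print(template_fit(obs, e_obs, tmpl, e_tmp))
```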
The narrow peak around $0.5{\mathrm{\,Ge\kern-1.00006ptV\\!/}c^{2}}$ in Fig.
4(a) is dominated by $\mathrm{K}^{0}_{\rm\scriptscriptstyle
S}\rightarrow\uppi^{+}\uppi^{-}$ from
$\mathrm{B}^{0}\rightarrow{\mathrm{J}\mskip-3.0mu/\mskip-2.0mu\uppsi\mskip
2.0mu}\mathrm{K}^{0}_{\rm\scriptscriptstyle S}$ decays. The contributions from
$\mathrm{K}^{0}_{\rm\scriptscriptstyle S}$ decays are taken into account by
the fit function described in Ref. [2]. The resulting yields are $129\pm 26$
in the ${\mathrm{J}\mskip-3.0mu/\mskip-2.0mu\uppsi\mskip 2.0mu}$ channel and
$11\pm 6$ in the $\uppsi{(2\mathrm{S})}$ channel. In the calculation of the
final ratio of branching fractions, the number of
$\mathrm{K}^{0}_{\rm\scriptscriptstyle S}$ events is subtracted from the
corresponding $\mathrm{B}^{0}\rightarrow\uppsi\uppi^{+}\uppi^{-}$ yields. The
yield from
$\mathrm{B}^{0}_{\mathrm{s}}\rightarrow\uppsi\mathrm{K}^{0}_{\rm\scriptscriptstyle
S}$ decays is negligible [28].
## 6 Efficiencies and systematic uncertainties
The ratios of branching fractions are calculated using the formula
$\dfrac{{\cal B}(\mathrm{B}\rightarrow\uppsi{(2\mathrm{S})}\mathrm{X^{0}})}{{\cal B}(\mathrm{B}\rightarrow\mathrm{J}/\uppsi\,\mathrm{X^{0}})}=\dfrac{N_{\uppsi{(2\mathrm{S})}\mathrm{X^{0}}}}{N_{\mathrm{J}/\uppsi\,\mathrm{X^{0}}}}\times\dfrac{\epsilon_{\mathrm{J}/\uppsi\,\mathrm{X^{0}}}}{\epsilon_{\uppsi{(2\mathrm{S})}\mathrm{X^{0}}}}\times\dfrac{{\cal B}(\mathrm{J}/\uppsi\rightarrow\upmu^{+}\upmu^{-})}{{\cal B}(\uppsi{(2\mathrm{S})}\rightarrow\upmu^{+}\upmu^{-})}\,,\qquad(1)$
where ${N}$ is the number of signal events, and $\mathrm{\epsilon}$ is the
product of the geometrical acceptance, the detection, reconstruction,
selection and trigger efficiencies. The efficiency ratios are estimated using
simulation for all six decay modes.
The efficiency ratios are $1.22\pm 0.01$, $1.03\pm 0.01$ and $1.02\pm 0.01$
for the $\mathrm{B}^{0}_{\mathrm{s}}\rightarrow\uppsi\upeta$,
$\mathrm{B}^{0}\rightarrow\uppsi\uppi^{+}\uppi^{-}$ and
$\mathrm{B}^{0}_{\mathrm{s}}\rightarrow\uppsi\uppi^{+}\uppi^{-}$ channels,
respectively (uncertainties are statistical only). Since the selection
criteria for the decays with ${\mathrm{J}\mskip-3.0mu/\mskip-2.0mu\uppsi\mskip
2.0mu}$ and $\uppsi{(2\mathrm{S})}$ are identical, the ratio of efficiencies
is expected to be close to unity. The deviation of the overall efficiency
ratio from unity in case of
$\mathrm{B}^{0}_{\mathrm{s}}\rightarrow\uppsi\upeta$ is due to the difference
between the $p_{\rm T}$ spectra of the selected
${\mathrm{J}\mskip-3.0mu/\mskip-2.0mu\uppsi\mskip 2.0mu}$ and
$\uppsi{(2\mathrm{S})}$ mesons, when the $\mbox{$p_{\rm
T}$}(\upeta)>2.5{\mathrm{\,Ge\kern-1.00006ptV\\!/}c}$ requirement is applied.
For the $\mathrm{B}^{0}_{\mathrm{(s)}}\rightarrow\uppsi\uppi^{+}\uppi^{-}$
channels this effect is small since no explicit $p_{\rm T}$ requirement is
applied on the dipion system.
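As an illustrative cross-check, not part of the published procedure, the central values and statistical uncertainties quoted in Sect. 7 can be approximately reproduced from Eq. (1) using the yields of Tables 1 and 2 (with the $\mathrm{K}^{0}_{\rm\scriptscriptstyle S}$ yields of Sect. 5 subtracted for the $\mathrm{B}^{0}$ dipion mode), the efficiency ratios above and the dilepton ratio $7.69$ used in Sect. 7; small differences from the published numbers are expected since the exact subtraction and uncertainty treatment are not reproduced here.

```python
import math

B_RATIO = 7.69  # B(J/psi -> mu mu) / B(psi(2S) -> mu mu), Ref. [23]

def ratio(n_psi2s, dn_psi2s, n_jpsi, dn_jpsi, eff_ratio):
    """Eq. (1): branching-fraction ratio and a naive statistical uncertainty."""
    r = n_psi2s / n_jpsi * eff_ratio * B_RATIO
    dr = r * math.hypot(dn_psi2s / n_psi2s, dn_jpsi / n_jpsi)
    return r, dr

# Bs -> psi eta: 76 +- 12 over 863 +- 52, efficiency ratio 1.22
print(ratio(76, 12, 863, 52, 1.22))                 # about (0.83, 0.14)
# B0 -> psi pi pi, K0S-subtracted: (202-11) over (2801-129), efficiency ratio 1.03
print(ratio(202 - 11, 23, 2801 - 129, 85, 1.03))    # about (0.57, 0.07)
# Bs -> psi pi pi: 178 +- 22 over 4096 +- 86, efficiency ratio 1.02
print(ratio(178, 22, 4096, 86, 1.02))               # about (0.34, 0.04)
```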
Most systematic uncertainties cancel in the ratio of branching fractions, in
particular, those related to the muon and $\uppsi$ reconstruction and
identification. Systematic uncertainties related to the fit model are
estimated using a number of alternative models for the description of the
invariant mass distributions. For the
$\mathrm{B}^{0}_{\mathrm{s}}\rightarrow\uppsi\upeta$ decays the tested
alternatives are a fit model including a $\mathrm{B}^{0}$ signal component
(with the ratio
$N(\mathrm{B}^{0}\rightarrow\uppsi\upeta$)/$N(\mathrm{B}^{0}_{\mathrm{s}}\rightarrow\uppsi\upeta$)
fixed from the ${\mathrm{J}\mskip-3.0mu/\mskip-2.0mu\uppsi\mskip 2.0mu}$
channel), a fit model with a linear function for the background description,
fits with signal widths fixed or not fixed to those obtained in simulation, a
fit with the difference between the fitted $\mathrm{B}^{0}$ and
$\mathrm{B}^{0}_{\mathrm{s}}$ masses allowed to vary within a $\pm 1\sigma$
interval around the nominal value [23], and a fit model with Student’s
t–distributions for the signals. For each alternative fit model the ratio of
event yields is calculated and the systematic uncertainty is then determined
as the maximum deviation of this ratio from the ratio obtained with the
baseline model. For
$\mathrm{B}^{0}_{\mathrm{(s)}}\rightarrow\uppsi\uppi^{+}\uppi^{-}$ decays the
tested alternatives include a fit with a first or second order polynomial for
the background description, a model with a symmetric Gaussian distribution for
the reflection and a model with the difference of the mean values of the two
Gaussian functions fixed to the known mass difference between the
$\mathrm{B}^{0}_{\mathrm{s}}$ and the $\mathrm{B}^{0}$ mesons [23]. The
maximum deviation observed in the ratio of yields in the
$\uppsi{(2\mathrm{S})}$ and ${\mathrm{J}\mskip-3.0mu/\mskip-2.0mu\uppsi\mskip
2.0mu}$ modes is taken as the systematic uncertainty. The obtained
uncertainties are $8.0\%$ for the
$\mathrm{B}^{0}_{\mathrm{s}}\rightarrow\uppsi\upeta$ channel, $1.0\%$ for the
$\mathrm{B}^{0}\rightarrow\uppsi\uppi^{+}\uppi^{-}$ channel and $1.6\%$ for
the $\mathrm{B}^{0}_{\mathrm{s}}\rightarrow\uppsi\uppi^{+}\uppi^{-}$ channel.
The selection efficiency for the dipion system has a dependence on the dipion
invariant mass. The ratios of efficiencies vary over the entire
$\uppi^{+}\uppi^{-}$ mass range by approximately 40% and 24% for
$\mathrm{B}^{0}\rightarrow\uppsi\uppi^{+}\uppi^{-}$ and
$\mathrm{B}^{0}_{\mathrm{s}}\rightarrow\uppsi\uppi^{+}\uppi^{-}$ channels,
respectively. The systematic uncertainties related to the different dependence
of the efficiency as a function of the dipion invariant mass for
${\mathrm{J}\mskip-3.0mu/\mskip-2.0mu\uppsi\mskip 2.0mu}$ and
$\uppsi{(2\mathrm{S})}$ channels are evaluated using the decay models from
Ref. [5] for $\mathrm{B}^{0}_{\mathrm{s}}$ and Refs. [2, 4] for
$\mathrm{B}^{0}$ decays. The systematic uncertainties on the branching
fraction ratios are 2% for both channels.
The most important source of uncertainty arises from potential disagreement
between data and simulation in the estimation of efficiencies. This source of
uncertainty is studied by varying the selection criteria in ranges
corresponding to an approximately $15\%$ change in the signal yields. The
agreement is estimated by comparing the efficiency-corrected ratios of yields
obtained with these variations. The resulting uncertainties are found to be 11.5% in
the $\mathrm{B}^{0}_{\mathrm{s}}\rightarrow\uppsi\upeta$ channel and 8% in the
$\mathrm{B}^{0}_{\mathrm{(s)}}\rightarrow\uppsi\uppi^{+}\uppi^{-}$ channel.
The geometrical acceptance is calculated separately for different magnet
polarities. The observed difference in the efficiency ratios is taken as an
estimate of the systematic uncertainty and is $1.1\%$ for the
$\mathrm{B}^{0}\rightarrow\uppsi\uppi^{+}\uppi^{-}$ channel and negligible for
the other channels.
The trigger is highly efficient in selecting $\mathrm{B}$ meson decays with
two muons in the final state. For this analysis the dimuon pair is required to
trigger the event. Differences in the trigger efficiency between data and
simulation are studied in the data using events that were triggered
independently of the dimuon pair [11]. Based on these studies, an uncertainty
of 1.1% is assigned. A summary of all systematic uncertainties is presented in
Table 3.
Table 3: Relative systematic uncertainties (in %) of the relative branching fractions.
Source | $\mathrm{B}^{0}_{\mathrm{s}}\rightarrow\uppsi\upeta$ | $\mathrm{B}^{0}\rightarrow\uppsi\uppi^{+}\uppi^{-}$ | $\mathrm{B}^{0}_{\mathrm{s}}\rightarrow\uppsi\uppi^{+}\uppi^{-}$
---|---|---|---
Fit model | 8.0 | 1.0 | 1.6
Mass dependence of efficiencies | — | 2.0 | 2.0
Efficiencies from simulation | 11.5 | 8.0 | 8.0
Acceptance | $<0.5$ | 1.1 | $<0.5$
Trigger | 1.1 | 1.1 | 1.1
Sum in quadrature | 14.1 | 8.5 | 8.5
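As a purely arithmetic check, the "sum in quadrature" row follows directly from the individual entries; in the short snippet below the entries quoted as "$<0.5$" are neglected.

```python
import math

def quad_sum(values):
    """Quadrature sum of individual relative uncertainties (in %)."""
    return math.sqrt(sum(v * v for v in values))

print(round(quad_sum([8.0, 11.5, 1.1]), 1))             # Bs -> psi eta   -> 14.1
print(round(quad_sum([1.0, 2.0, 8.0, 1.1, 1.1]), 1))    # B0 -> psi pipi  ->  8.5
print(round(quad_sum([1.6, 2.0, 8.0, 1.1]), 1))         # Bs -> psi pipi  ->  8.5
```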
## 7 Results
With data corresponding to an integrated luminosity of 1.0$\mbox{\,fb}^{-1}$,
collected in 2011 with the LHCb detector, the first observations of the
$\mathrm{B}^{0}_{\mathrm{s}}\rightarrow\uppsi{(2\mathrm{S})}\upeta$ and
$\mathrm{B}^{0}_{\mathrm{(s)}}\rightarrow\uppsi{(2\mathrm{S})}\uppi^{+}\uppi^{-}$
decays have been made. The relative rates of $\mathrm{B}^{0}_{\mathrm{(s)}}$
meson decays into final states containing
${\mathrm{J}\mskip-3.0mu/\mskip-2.0mu\uppsi\mskip 2.0mu}$ and
$\uppsi{(2\mathrm{S})}$ mesons are measured for those decay modes. Since the
dielectron branching fractions of $\uppsi$ mesons are measured more precisely
than those of the dimuon decay modes, invoking lepton universality, the ratio
$\dfrac{{\cal B}(\mathrm{J}/\uppsi\rightarrow\upmu^{+}\upmu^{-})}{{\cal B}(\uppsi{(2\mathrm{S})}\rightarrow\upmu^{+}\upmu^{-})}=\dfrac{{\cal B}(\mathrm{J}/\uppsi\rightarrow\mathrm{e}^{+}\mathrm{e}^{-})}{{\cal B}(\uppsi{(2\mathrm{S})}\rightarrow\mathrm{e}^{+}\mathrm{e}^{-})}=7.69\pm 0.19$ [23]
is used. The results are combined using Eq. (1), to give
$\dfrac{{\cal B}(\mathrm{B}^{0}_{\mathrm{s}}\rightarrow\uppsi{(2\mathrm{S})}\upeta)}{{\cal B}(\mathrm{B}^{0}_{\mathrm{s}}\rightarrow\mathrm{J}/\uppsi\,\upeta)}=0.83\pm 0.14\,\mathrm{(stat)}\pm 0.12\,\mathrm{(syst)}\pm 0.02\,({\cal B}),$
$\dfrac{{\cal B}(\mathrm{B}^{0}\rightarrow\uppsi{(2\mathrm{S})}\uppi^{+}\uppi^{-})}{{\cal B}(\mathrm{B}^{0}\rightarrow\mathrm{J}/\uppsi\,\uppi^{+}\uppi^{-})}=0.56\pm 0.07\,\mathrm{(stat)}\pm 0.05\,\mathrm{(syst)}\pm 0.01\,({\cal B}),$
$\dfrac{{\cal B}(\mathrm{B}^{0}_{\mathrm{s}}\rightarrow\uppsi{(2\mathrm{S})}\uppi^{+}\uppi^{-})}{{\cal B}(\mathrm{B}^{0}_{\mathrm{s}}\rightarrow\mathrm{J}/\uppsi\,\uppi^{+}\uppi^{-})}=0.34\pm 0.04\,\mathrm{(stat)}\pm 0.03\,\mathrm{(syst)}\pm 0.01\,({\cal B}),$
where the first uncertainty is statistical, the second systematic and the
third from the world average ratio [23] of the
${\mathrm{J}\mskip-3.0mu/\mskip-2.0mu\uppsi\mskip 2.0mu}$ and
$\uppsi{(2\mathrm{S})}$ branching fractions to dileptonic final states. The
branching fraction ratios measured here correspond to the time-integrated quantities. For the $\mathrm{B}^{0}\rightarrow{\mathrm{J}\mskip-3.0mu/\mskip-2.0mu\uppsi\mskip 2.0mu}(\uppsi{(2\mathrm{S})})\uppi^{+}\uppi^{-}$ channel the measured ratio excludes the $\mathrm{K}^{0}_{\rm\scriptscriptstyle S}\rightarrow\uppi^{+}\uppi^{-}$ contribution. The dominant contributions to
the
$\mathrm{B}^{0}_{\mathrm{(s)}}\rightarrow\uppsi{(2\mathrm{S})}\uppi^{+}\uppi^{-}$
decays are found to be from
$\mathrm{B}^{0}\rightarrow\uppsi{(2\mathrm{S})}\uprho^{0}(770)$ and
$\mathrm{B}^{0}_{\mathrm{s}}\rightarrow\uppsi{(2\mathrm{S})}\mathrm{f_{0}(980)}$
decays.
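For orientation, adding the quoted statistical, systematic and ${\cal B}$ uncertainties in quadrature (treating them as uncorrelated) gives approximate total uncertainties of $0.19$, $0.09$ and $0.05$ on the three ratios, respectively.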
These results are compatible with the measured range of relative branching
fractions of B decays to $\uppsi{(2\mathrm{S})}$ and
${\mathrm{J}\mskip-3.0mu/\mskip-2.0mu\uppsi\mskip 2.0mu}$ mesons. The
$\mathrm{B}^{0}_{\mathrm{s}}\rightarrow\uppsi{(2\mathrm{S})}\upeta$ and
$\mathrm{B}^{0}_{\mathrm{s}}\rightarrow\uppsi{(2\mathrm{S})}\uppi^{+}\uppi^{-}$
decays are particularly interesting since, with more data becoming available,
they can be used to measure $C\\!P$ violation in $\mathrm{B}^{0}_{\mathrm{s}}$
mixing.
## Acknowledgements
We express our gratitude to our colleagues in the CERN accelerator departments
for the excellent performance of the LHC. We thank the technical and
administrative staff at the LHCb institutes. We acknowledge support from CERN
and from the national agencies: CAPES, CNPq, FAPERJ and FINEP (Brazil); NSFC
(China); CNRS/IN2P3 and Region Auvergne (France); BMBF, DFG, HGF and MPG
(Germany); SFI (Ireland); INFN (Italy); FOM and NWO (The Netherlands); SCSR
(Poland); ANCS/IFA (Romania); MinES, Rosatom, RFBR and NRC “Kurchatov
Institute” (Russia); MinECo, XuntaGal and GENCAT (Spain); SNSF and SER
(Switzerland); NAS Ukraine (Ukraine); STFC (United Kingdom); NSF (USA). We
also acknowledge the support received from the ERC under FP7. The Tier1
computing centers are supported by IN2P3 (France), KIT and BMBF (Germany),
INFN (Italy), NWO and SURF (The Netherlands), PIC (Spain), GridPP (United
Kingdom). We are thankful for the computing resources put at our disposal by
Yandex LLC (Russia), as well as to the communities behind the multiple open
source software packages that we depend on.
## References
* [1] Belle collaboration, J. Li et al., First observation of $\mathrm{B}^{0}_{\mathrm{s}}\rightarrow{\mathrm{J}\mskip-3.0mu/\mskip-2.0mu\uppsi\mskip 2.0mu}\upeta$ and $\mathrm{B}^{0}_{\mathrm{s}}\rightarrow{\mathrm{J}\mskip-3.0mu/\mskip-2.0mu\uppsi\mskip 2.0mu}\upeta^{\prime}$, Phys. Rev. Lett. 108 (2012) 181808, arXiv:1202.0103.
* [2] LHCb collaboration, R. Aaij et al., Evidence for the decay $\mathrm{B}^{0}\rightarrow{\mathrm{J}\mskip-3.0mu/\mskip-2.0mu\uppsi\mskip 2.0mu}\upomega$ and measurement of the relative branching fractions of $\mathrm{B}^{0}_{\mathrm{s}}$ meson decays to ${\mathrm{J}\mskip-3.0mu/\mskip-2.0mu\uppsi\mskip 2.0mu}\upeta$ and ${\mathrm{J}\mskip-3.0mu/\mskip-2.0mu\uppsi\mskip 2.0mu}\upeta^{\prime}$, Nucl. Phys. B867 (2013) 547, arXiv:1210.2631.
* [3] LHCb collaboration, R. Aaij et al., First observation of $\mathrm{B}^{0}_{\mathrm{s}}\rightarrow{\mathrm{J}\mskip-3.0mu/\mskip-2.0mu\uppsi\mskip 2.0mu}\mathrm{f_{0}(980)}$ decays, Phys. Lett. B698 (2011) 115, arXiv:1102.0206.
* [4] LHCb collaboration, R. Aaij et al., Analysis of the resonant components in $\mathrm{B}^{0}\rightarrow{\mathrm{J}\mskip-3.0mu/\mskip-2.0mu\uppsi\mskip 2.0mu}\uppi^{+}\uppi^{-}$, arXiv:1301.5347, to appear in Phys. Rev. D.
* [5] LHCb collaboration, R. Aaij et al., Analysis of the resonant components in $\mathrm{B}^{0}_{\mathrm{s}}\rightarrow{\mathrm{J}\mskip-3.0mu/\mskip-2.0mu\uppsi\mskip 2.0mu}\uppi^{+}\uppi^{-}$, Phys. Rev. D86 (2012) 052006, arXiv:1204.5643.
* [6] LHCb collaboration, R. Aaij et al., Measurement of the $C\\!P$ violating phase $\upphi_{\mathrm{s}}$ in $\mathrm{B}^{0}_{\mathrm{s}}\rightarrow{\mathrm{J}\mskip-3.0mu/\mskip-2.0mu\uppsi\mskip 2.0mu}\mathrm{f_{0}(980)}$, Phys. Lett. B707 (2012) 497, arXiv:1112.3056.
* [7] LHCb collaboration, R. Aaij et al., Measurement of the $C\\!P$-violating phase $\upphi_{\mathrm{s}}$ in $\mathrm{B}^{0}_{\mathrm{s}}\rightarrow{\mathrm{J}\mskip-3.0mu/\mskip-2.0mu\uppsi\mskip 2.0mu}\uppi^{+}\uppi^{-}$ decays, Phys. Lett. B713 (2012) 378, arXiv:1204.5675.
* [8] CDF collaboration, F. Abe et al., Observation of $\mathrm{B}^{+}\rightarrow\uppsi{(2\mathrm{S})}\mathrm{K}^{+}$ and $\mathrm{B}^{0}\rightarrow\uppsi{(2\mathrm{S})}\mathrm{K^{*0}(892)}$ decays and measurements of $\mathrm{B}$ meson branching fractions into ${\mathrm{J}\mskip-3.0mu/\mskip-2.0mu\uppsi\mskip 2.0mu}$ and $\uppsi{(2\mathrm{S})}$ final states, Phys. Rev. D58 (1998) 072001, arXiv:hep-ex/9803013.
* [9] CDF collaboration, A. Abulencia et al., Observation of $\mathrm{B}^{0}_{\mathrm{s}}\rightarrow\uppsi{(2\mathrm{S})}\upphi$ and measurement of ratio of branching fractions ${\cal B}(\mathrm{B}^{0}_{\mathrm{s}}\rightarrow\uppsi{(2\mathrm{S})}\upphi)/{\cal B}(\mathrm{B}^{0}_{\mathrm{s}}\rightarrow{\mathrm{J}\mskip-3.0mu/\mskip-2.0mu\uppsi\mskip 2.0mu}\upphi)$, Phys. Rev. Lett. 96 (2006) 231801, arXiv:hep-ex/0602005.
* [10] D0 collaboration, V. Abazov et al., Relative rates of $\mathrm{B}$ meson decays into $\uppsi{(2\mathrm{S})}$ and ${\mathrm{J}\mskip-3.0mu/\mskip-2.0mu\uppsi\mskip 2.0mu}$ mesons, Phys. Rev. D79 (2009) 111102, arXiv:0805.2576.
* [11] LHCb collaboration, R. Aaij et al., Measurement of relative branching fractions of $\mathrm{B}$ decays to $\uppsi{(2\mathrm{S})}$ and ${\mathrm{J}\mskip-3.0mu/\mskip-2.0mu\uppsi\mskip 2.0mu}$ mesons, Eur. Phys. J. C72 (2012) 2118, arXiv:1205.0918.
* [12] LHCb collaboration, A. A. Alves Jr. et al., The LHCb detector at the LHC, JINST 3 (2008) S08005.
* [13] R. Aaij et al., The LHCb trigger and its performance, arXiv:1211.3055, submitted to JINST.
* [14] T. Sjöstrand, S. Mrenna, and P. Skands, PYTHIA 6.4 physics and manual, JHEP 05 (2006) 026, arXiv:hep-ph/0603175.
* [15] I. Belyaev et al., Handling of the generation of primary events in Gauss, the LHCb simulation framework, Nuclear Science Symposium Conference Record (NSS/MIC) IEEE (2010) 1155.
* [16] D. J. Lange, The EvtGen particle decay simulation package, Nucl. Instrum. Meth. A462 (2001) 152.
* [17] P. Golonka and Z. Was, PHOTOS Monte Carlo: a precision tool for QED corrections in $Z$ and $W$ decays, Eur. Phys. J. C45 (2006) 97, arXiv:hep-ph/0506026.
* [18] GEANT4 collaboration, S. Agostinelli et al., GEANT4: A simulation toolkit, Nucl. Instrum. Meth. A506 (2003) 250.
* [19] GEANT4 collaboration, J. Allison et al., GEANT4 developments and applications, IEEE Trans. Nucl. Sci. 53 (2006) 270.
* [20] M. Clemencic et al., The LHCb simulation application, Gauss: design, evolution and experience, J. of Phys. Conf. Ser. 331 (2011) 032023.
* [21] A. A. Alves et al., Performance of the LHCb muon system, arXiv:1211.1346, submitted to JINST.
* [22] M. Adinolfi et al., Performance of the LHCb RICH detector at the LHC, arXiv:1211.6759, submitted to Eur. Phys. J.
* [23] Particle Data Group, J. Beringer et al., Review of particle physics, Phys. Rev. D86 (2012) 010001.
* [24] W. D. Hulsbergen, Decay chain fitting with a Kalman filter, Nucl. Instrum. Meth. A552 (2005) 566, arXiv:physics/0503191.
* [25] Belle collaboration, M.-C. Chang et al., Observation of the decay $\mathrm{B}^{0}\rightarrow{\mathrm{J}\mskip-3.0mu/\mskip-2.0mu\uppsi\mskip 2.0mu}\upeta$, Phys. Rev. Lett. 98 (2007) 131803, arXiv:hep-ex/0609047.
* [26] M. Pivk and F. R. Le Diberder, sPlot: a statistical tool to unfold data distributions, Nucl. Instrum. Meth. A555 (2005) 356, arXiv:physics/0402083.
* [27] LHCb collaboration, R. Aaij et al., Measurement of the $\mathrm{B}^{0}_{\mathrm{s}}\rightarrow{\mathrm{J}\mskip-3.0mu/\mskip-2.0mu\uppsi\mskip 2.0mu}\kern 1.99997pt\overline{\kern-1.99997pt\mathrm{K}}{}^{*0}$ branching fraction and angular amplitudes, Phys. Rev. D86 (2012) 071102(R), arXiv:1208.0738.
* [28] LHCb collaboration, R. Aaij et al., Measurement of the $\mathrm{B}^{0}_{\mathrm{s}}\rightarrow{\mathrm{J}\mskip-3.0mu/\mskip-2.0mu\uppsi\mskip 2.0mu}\mathrm{K}^{0}_{\rm\scriptscriptstyle S}$ branching fraction, Phys. Lett. B713 (2012) 172, arXiv:1205.0934.
# On flat and Gorenstein flat dimensions of local cohomology modules
Majid Rahro Zargar and Hossein Zakeri
Faculty of Mathematical Sciences and Computer, Kharazmi University, 599 Taleghani Avenue, Tehran 15618, Iran.
[email protected] [email protected]
###### Abstract.
Let $(R,\mathfrak{m})$ be a commutative Noetherian local ring and let $M$ be a
relative Cohen-Macaulay $R$–module with respect to a proper ideal
$\mathfrak{a}$ of $R$ and set $n:=\mbox{ht}\,_{M}\mathfrak{a}$. We prove that
$\mbox{fd}\,_{R}M<\infty$ if and only if
$\mbox{fd}\,_{R}\mbox{H}^{n}_{\mathfrak{a}}(M)<\infty$, and that
$\mbox{fd}\,_{R}\mbox{H}^{n}_{\mathfrak{a}}(M)=\mbox{fd}\,_{R}M+n$. This
result provides some characterizations of Gorenstein local rings which, under some additional assumptions, have previously appeared in the literature. We also
prove that $\mbox{Gpd}\,_{R}M<\infty$ if and only if
$\mbox{Gfd}\,_{R}\mbox{H}^{n}_{\mathfrak{m}}(M)<\infty$ and that
$\mbox{Gfd}\,_{R}\mbox{H}^{n}_{\mathfrak{m}}(M)=\mbox{Gpd}\,_{R}M+n$ whenever
$M$ is Cohen-Macaulay with $\mbox{dim}\,M=n$. As an application of this result, we show that its $G_{C}$-projective version holds true, where $C$ is a semidualizing $R$–module.
###### Key words and phrases:
Flat dimension, Gorenstein injective dimension, Gorenstein flat dimension,
Local cohomology, Relative Cohen-Macaulay module, Semidualizing module.
###### 2000 Mathematics Subject Classification:
13D05, 13D45, 18G20
## 1\. introduction
Throughout this paper, $R$ is a commutative Noetherian ring, $\mathfrak{a}$ is
a proper ideal of $R$ and $M$ is an $R$-module. From section 3, we assume that
$R$ is local with maximal ideal $\mathfrak{m}$. In this case, $\hat{R}$
denotes the $\mathfrak{m}$–adic completion of $R$ and
$\mbox{E}(R/\mathfrak{m})$ denotes the injective hull of the residue field
$R/\mathfrak{m}$. For each non-negative integer $i$, let $\mbox{H}_{\mathfrak{a}}^{i}(M)$ denote the $i$-th local cohomology module of $M$ with respect to $\mathfrak{a}$ (see [3] for its definition and basic results). Also, we use $\mbox{id}\,_{R}(M)$, $\mbox{pd}\,_{R}(M)$ and $\mbox{fd}\,_{R}(M)$, respectively, to denote the usual injective, projective and flat dimensions of $M$. The notions of Gorenstein injective, Gorenstein projective and Gorenstein flat modules were introduced by Enochs and Jenda in [9]. Notice that the classes of Gorenstein injective, Gorenstein projective and Gorenstein flat modules include, respectively, the classes of injective, projective and flat modules. Recently, the authors proved, in [17,
Theorem 2.5], that if $M$ is a certain module over a local ring $R$, then
$\mbox{id}\,_{R}(M)$ and
$\mbox{id}\,_{R}(\mbox{H}_{\mathfrak{a}}^{\small{\mbox{ht}\,_{M}\mathfrak{a}}}(M))$
are simultaneously finite and the equality
$\mbox{id}\,_{R}(\mbox{H}_{\mathfrak{a}}^{\small{\mbox{ht}\,_{M}\mathfrak{a}}}(M))=\mbox{id}\,_{R}(M)-\mbox{ht}\,_{M}\mathfrak{a}$
holds. Also, a counterpart of this result was established in Gorenstein
homological algebra. Indeed, under the additional assumption that $R$ has a
dualizing complex, it was proved that $\mbox{Gid}\,_{R}M<\infty$ implies
$\mbox{Gid}\,_{R}\mbox{H}_{\mathfrak{a}}^{n}(M)<\infty$ and that the converse
holds whenever both $R$ and $M$ are Cohen-Macaulay. As an application of this
result, it was shown that the equality
$\mbox{Gid}\,_{R}\mbox{H}_{\mathfrak{m}}^{n}(M)=\mbox{Gid}\,_{R}M-n$ holds,
whenever $M$ is a Cohen-Macaulay module over the Cohen-Macaulay local ring
$(R,\mathfrak{m})$ and $\mbox{dim}\,M=n$.
The principal aim of this paper is to study, in like manner, the projective (resp. Gorenstein projective) dimension of certain $R$-modules in terms of the flat (resp. Gorenstein flat) dimension of their local cohomology modules.
In this paper we will use the concept of relative Cohen-Macaulay modules which
has been studied in [12] under the title of cohomologically complete
intersections and continued in [17] and [16]. The organization of this paper
is as follows. In section 3, we prove, in 3.1, that if $M$ is relative Cohen-
Macaulay with respect to $\mathfrak{a}$ and $\mbox{ht}\,_{M}{\mathfrak{a}}=n$,
then $\mbox{fd}\,_{R}M$ and $\mbox{fd}\,_{R}\mbox{H}_{\mathfrak{a}}^{n}(M)$
are simultaneously finite and
$\mbox{fd}\,_{R}\mbox{H}_{\mathfrak{a}}^{n}(M)=\mbox{fd}\,_{R}M+n$. Next, in
3.3, we prove that a $d$-dimensional finitely generated $R$–module $M$ with
finite projective dimension is Cohen-Macaulay if and only if
$\mbox{fd}\,_{R}\mbox{H}_{\mathfrak{m}}^{d}(M)=\mbox{pd}\,_{R}M+d$. Notice
that this result is a generalization of [9, Proposition 9.5.22]. In 3.5, 3.6
and 3.7, we also generalize some results of the author and H. Zakeri, which
have been proved in [16] and [17] under some additional assumptions. In 3.8,
we generalize the well-known result that if the local ring $R$ admits a non-
zero Cohen-Macaulay module of finite projective dimension, then $R$ is Cohen-
Macaulay. Indeed, it is shown that $R$ is Cohen-Macaulay if it admits a non-
zero Cohen-Macaulay $R$–module with finite $\mbox{C-pd}\,_{R}(M)$, where $C$
is a semidualizing $R$–module. In section 4, as a main result, a Gorenstein
projective version of 3.1 is demonstrated in a certain case. Indeed, it is
shown, in 4.2, that if $M$ is Cohen-Macaulay with $\mbox{dim}\,M=n$, then
$\mbox{Gpd}\,_{R}M<\infty$ if and only if
$\mbox{Gfd}\,_{R}\mbox{H}_{\mathfrak{m}}^{n}(M)<\infty$ and, moreover, the
equality $\mbox{Gfd}\,_{R}\mbox{H}_{\mathfrak{m}}^{n}(M)=\mbox{Gpd}\,_{R}M+n$
holds. Also, in 4.3, we provide a Gorenstein projective version of 3.3.
Finally, with the aid of the above result, it is proved that if $C$ is a
semidualizing $R$-module and $M$ is Cohen-Macaulay with $\mbox{dim}\,M=n$,
then the quantities $\mbox{$G_{C}$-pd}\,_{R}M$ and
$\mbox{$G_{C}$-fd}\,_{R}\mbox{H}_{\mathfrak{m}}^{n}(M)$ are simultaneously
finite and the equality
$\mbox{$G_{C}$-fd}\,_{R}\mbox{H}_{\mathfrak{m}}^{n}(M)=\mbox{$G_{C}$-pd}\,_{R}M+n$
holds.
## 2\. preliminaries
In this section we recall some definitions and facts which are needed
throughout this paper.
###### Definition 2.1.
_Following[20, Definition 2.1], let $\mathcal{X}$ be a class of $R$-modules
and let $M$ be an $R$-module. An $\mathcal{X}$-coresolution of $M$ is a
complex of $R$-modules in $\mathcal{X}$ of the form_
$X=0\longrightarrow
X_{0}\stackrel{{\scriptstyle\partial_{0}^{X}}}{{\longrightarrow}}X_{-1}\stackrel{{\scriptstyle\partial_{-1}^{X}}}{{\longrightarrow}}\cdots\stackrel{{\scriptstyle\partial_{n+1}^{X}}}{{\longrightarrow}}X_{n}\stackrel{{\scriptstyle\partial_{n}^{X}}}{{\longrightarrow}}X_{n-1}\stackrel{{\scriptstyle\partial_{n-1}^{X}}}{{\longrightarrow}}\cdots$
_such that $\mbox{H}_{0}(X)\cong M$ and $\mbox{H}_{n}(X)=0$ for all $n\leq-1$.
The $\mathcal{X}$-injective dimension of $M$ is the quantity_
${\mbox{$\mathcal{X}$-id}\,_{R}(M)}=\inf\\{\sup\\{-n\geq 0|X_{n}\neq
0\\}~{}|~{}X~{}\text{is an $\mathcal{X}$-coresolution of $M$~{}}\\}.$
_The modules of $\mathcal{X}$-injective dimension zero are precisely the non-
zero modules of $\mathcal{X}$ and also
$\mbox{$\mathcal{X}$-id}\,_{R}(0)=-\infty$._
_Dually, an $\mathcal{X}$-resolution of $M$ is a complex of $R$-modules in
$\mathcal{X}$ of the form_
$X=\cdots\longrightarrow
X_{n}\stackrel{{\scriptstyle\partial_{n}^{X}}}{{\longrightarrow}}X_{n-1}\stackrel{{\scriptstyle\partial_{n-1}^{X}}}{{\longrightarrow}}\cdots\stackrel{{\scriptstyle\partial_{2}^{X}}}{{\longrightarrow}}X_{1}\stackrel{{\scriptstyle\partial_{1}^{X}}}{{\longrightarrow}}X_{0}\longrightarrow
0$
_such that $\mbox{H}_{0}(X)\cong M$ and $\mbox{H}_{n}(X)=0$ for all $n\geq 1$.
The $\mathcal{X}$-projective dimension of $M$ is the quantity_
${\mbox{$\mathcal{X}$-pd}\,_{R}(M)}=\inf\\{\sup\\{n\geq 0|X_{n}\neq
0\\}~{}|~{}X~{}\text{is an $\mathcal{X}$-resolution of $M$~{}}\\}.$
_The modules of $\mathcal{X}$-projective dimension zero are precisely the non-
zero modules of $\mathcal{X}$ and also
$\mbox{$\mathcal{X}$-pd}\,_{R}(0)=-\infty$._
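For instance, taking $\mathcal{X}$ to be the class of projective (resp. flat) $R$-modules, the quantity $\mbox{$\mathcal{X}$-pd}\,_{R}(M)$ recovers the usual projective (resp. flat) dimension of $M$; similarly, taking $\mathcal{X}$ to be the class of injective $R$-modules, $\mbox{$\mathcal{X}$-id}\,_{R}(M)$ recovers the usual injective dimension of $M$.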
_The following notion of semidualizing modules goes back at least to
Vasconcelos[23], but was rediscovered by others. The reader is referred to
[19] for more details about semidualizing modules._
###### Definition 2.2.
_A finitely generated $R$-module $C$ is called semidualizing if the natural
homomorphism $R\rightarrow\mbox{Hom}\,_{R}(C,C)$ is an isomorphism and
$\mbox{Ext}\,_{R}^{i}(C,C)=0$ for all $i\geq 1$. An $R$-module $D$ is said to
be a dualizing $R$-module if it is semidualizing and has finite injective
dimension. For a semidualizing $R$-module $C$, we set_
$\begin{array}[]{rl}&\mathcal{I}_{C}(R)=\\{~{}\mbox{Hom}\,_{R}(C,I)|~{}~{}I~{}~{}\text{is
an injective $R$-module}\\},\\\
&\mathcal{P}_{C}(R)=\\{~{}C\otimes_{R}P|~{}~{}P~{}~{}\text{is a projective
$R$-module}\\},\\\
&\mathcal{F}_{C}(R)=\\{~{}C\otimes_{R}F|~{}~{}F~{}~{}\text{is a flat
$R$-module}\\}.\end{array}$
_The $R$–modules in $\mathcal{I}_{C}(R)$, $\mathcal{P}_{C}(R)$ and
$\mathcal{F}_{C}(R)$ are called $C$–injective, $C$–projective and $C$–flat,
respectively. For convenience the quantities
$\mathcal{I_{C}}(R)$-$\mbox{id}\,_{R}M$ and
$\mathcal{P_{C}}(R)$-$\mbox{pd}\,_{R}M$, which are defined as in 2.1, are
denoted by $\mbox{C-id}\,_{R}(M)$ and $\mbox{C-pd}\,_{R}(M)$ respectively.
Notice that when $C=R$ these notions recover the concepts of injective and
projective dimensions, respectively. _
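For instance, $C=R$ is always a semidualizing $R$-module, since $R\rightarrow\mbox{Hom}\,_{R}(R,R)$ is an isomorphism and $\mbox{Ext}\,_{R}^{i}(R,R)=0$ for all $i\geq 1$; and if $R$ is a Cohen-Macaulay local ring possessing a canonical module $\omega_{R}$, then $\omega_{R}$ is a dualizing $R$-module.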
Based on the work of E.E. Enochs and O.M.G. Jenda [9], the following notions
were introduced and studied by H. Holm and P. Jørgensen [14].
###### Definitions 2.3.
_Let $C$ be a semidualizing $R$-module. A complete
$\mathcal{I}_{C}\mathcal{I}$-coresolution is a complex $Y$ of $R$-modules such
that_
* (i)
$Y$ is exact and $\mbox{Hom}\,_{R}(I,Y)$ is exact for each
$I\in\mathcal{I}_{C}(R)$, and that
* (ii)
$Y_{i}\in\mathcal{I}_{C}(R)$ for all $i>0$ and $Y_{i}$ is injective for all
$i\leq 0$.
_An $R$-module $M$ is called $G_{C}$-injective if there exists a complete
$\mathcal{I}_{C}\mathcal{I}$-coresolution $Y$ such that
$M\cong\ker(\partial_{0}^{Y})$. In this case $Y$ is a complete
$\mathcal{I}_{C}\mathcal{I}$-coresolution of $M$. The class of
$G_{C}$-injective $R$-modules is denoted by $\mathcal{GI_{C}}(R)$._
_A complete $\mathcal{P}_{C}\mathcal{P}$-resolution is a complex $X$ of
$R$–modules such that_
* (i)
$X$ is exact and $\mbox{Hom}\,_{R}(X,P)$ is exact for each
$P\in\mathcal{P}_{C}(R)$, and that
* (ii)
$X_{i}\in\mathcal{P}_{C}(R)$ for all $i<0$ and $X_{i}$ is projective for all
$i\geq 0$.
_An $R$-module $M$ is called $G_{C}$-projective if there exists a complete
$\mathcal{P}_{C}\mathcal{P}$-resolution $X$ such that
$M\cong\mbox{Coker}\,(\partial_{1}^{X})$. In this case $X$ is a complete
$\mathcal{P}_{C}\mathcal{P}$-resolution of $M$. The class of
$G_{C}$-projective $R$-modules is denoted by $\mathcal{GP_{C}}(R)$._
_A complete $\mathcal{F}_{C}\mathcal{F}$-resolution is a complex $Z$ of
$R$–modules such that_
* (i)
$Z$ is exact and $Z\otimes_{R}I$ is exact for each $I\in\mathcal{I}_{C}(R)$,
and that
* (ii)
$Z_{i}\in\mathcal{F}_{C}(R)$ for all $i<0$ and $Z_{i}$ is flat for all $i\geq
0$.
_An $R$-module $M$ is called $G_{C}$-flat if there exists a complete
$\mathcal{F}_{C}\mathcal{F}$-resolution $Z$ such that
$M\cong\mbox{Coker}\,(\partial_{1}^{Z})$. In this case $Z$ is a complete
$\mathcal{F}_{C}\mathcal{F}$-resolution of $M$. The class of $G_{C}$-flat
$R$-modules is denoted by $\mathcal{GF_{C}}(R)$._
_For convenience the $\mathcal{GI_{C}}(R)$-$\mbox{id}\,_{R}M$,
$\mathcal{GP_{C}}(R)$-$\mbox{pd}\,_{R}M$ and
$\mathcal{GF_{C}}(R)$-$\mbox{pd}\,_{R}M$ of $M$ which are defined as in 2.1
are denoted by $\mbox{$G_{C}$-id}\,_{R}(M)$, $\mbox{$G_{C}$-pd}\,_{R}M$ and
$\mbox{$G_{C}$-fd}\,_{R}M$, respectively. Note that when $C=R$ these notions
are exactly the concepts of Gorenstein injective, Gorenstein projective and
Gorenstein flat dimensions which were introduced in [9]._
###### Definition 2.4.
_We say that a finitely generated $R$-module $M$ is relative Cohen-Macaulay
with respect to $\mathfrak{a}$ if there is precisely one non-vanishing local
cohomology module of $M$ with respect to $\mathfrak{a}$. Clearly this is the
case if and only if
$\mbox{grade}\,(\mathfrak{a},M)=\mbox{cd}\,(\mathfrak{a},M)$, where
$\mbox{cd}\,(\mathfrak{a},M)$ is the largest integer $i$ for which
$\mbox{H}_{\mathfrak{a}}^{i}(M)\neq 0$. Observe that the notion of relative
Cohen-Macaulay module is connected with the notion of cohomologically complete
intersection ideal which has been studied in [12]. _
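To illustrate this notion, let $R=k[[x,y]]$ be a formal power series ring over a field $k$ and let $\mathfrak{a}=(x)$. Since $x$ is a nonzerodivisor on $R$, we have $\mbox{H}_{\mathfrak{a}}^{0}(R)=0$; moreover $\mbox{H}_{\mathfrak{a}}^{1}(R)\cong R_{x}/R\neq 0$ and $\mbox{H}_{\mathfrak{a}}^{i}(R)=0$ for all $i\geq 2$, because $\mathfrak{a}$ is generated by a single element. Hence $R$ is relative Cohen-Macaulay with respect to $\mathfrak{a}$, with $\mbox{grade}\,(\mathfrak{a},R)=\mbox{cd}\,(\mathfrak{a},R)=1$.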
###### Remark 2.5.
_Let $M$ be a relative Cohen-Macaulay module with respect to $\mathfrak{a}$
and let $\mbox{cd}\,(\mathfrak{a},M)=n$. Then, in view of [3, theorems 6.1.4,
4.2.1, 4.3.2], it is easy to see that
$\mbox{Supp}\,\mbox{H}^{n}_{\mathfrak{a}}(M)=\mbox{Supp}\,({M}/{\mathfrak{a}M})$
and $\mbox{ht}\,_{M}\mathfrak{a}=\mbox{grade}\,(\mathfrak{a},M)$, where
$\mbox{ht}\,_{M}\mathfrak{a}=\inf\\{\
\mbox{dim}\,_{R_{\mathfrak{p}}}M_{\mathfrak{p}}|~{}\mathfrak{p}\in\mbox{Supp}\,(M/\mathfrak{a}M)~{}\\}$._
Next, we recall some elementary results about the trivial extension of a ring
by a module.
###### Definition and Facts 2.6.
_Let $C$ be an $R$-module. Then the direct sum $R\oplus C$ has the structure
of a commutative ring with respect to the multiplication defined by_
$(a,c)(a^{\prime},c^{\prime})=(aa^{\prime},ac^{\prime}+a^{\prime}c),$
_for all $(a,c),(a^{\prime},c^{\prime})\in R\oplus C$. This ring is called the trivial extension of $R$ by $C$ and is denoted by $R\ltimes C$. The following
properties of $R\ltimes C$ are needed in this paper._
* (i)
_There are natural ring homomorphisms $R\rightleftarrows R\ltimes C$ which
enable us to consider $R$-modules as $R\ltimes C$-modules, and vice versa._
* (ii)
_For any ideal $\mathfrak{a}$ of $R$, $\mathfrak{a}\oplus C$ is an ideal of
$R\ltimes C$._
* (iii)
$(R\ltimes C,\mathfrak{m}\oplus C)$ is a Noetherian local ring whenever
$(R,\mathfrak{m})$ is a Noetherian local ring and $C$ is a finitely generated
$R$-module. Also, in this case, $\mbox{dim}\,R=\mbox{dim}\,R\ltimes C$.
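For example, when $C=R$ the trivial extension $R\ltimes R$ is isomorphic to $R[X]/(X^{2})$ via $(a,c)\mapsto a+cX$, and the ideal $\mathfrak{a}\oplus R$ of (ii) corresponds to the ideal generated by $\mathfrak{a}$ and $X$ under this isomorphism.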
The classes defined next are collectively known as Foxby classes. The reader is referred to [19] for some basic results about these classes.
###### Definition 2.7.
_Let $C$ be a semidualizing $R$-module. The Bass class with respect to $C$ is
the class $\mathcal{B_{C}}(R)$ of $R$–modules $M$ such that _
* (i)
_$\mbox{Ext}\,_{R}^{i}(C,M)=0=\mbox{Tor}\,^{R}_{i}(C,\mbox{Hom}\,_{R}(C,M))$
for all $i\geq 1$_, and that
* (ii)
_the natural evaluation map $C\otimes_{R}\mbox{Hom}\,_{R}(C,M)\rightarrow M$
is an isomorphism_.
_Dually, the Auslander class with respect to $C$, denoted by
$\mathcal{A}_{C}(R)$, consists of all $R$-modules $M$ such that_
* (i)
_$\mbox{Tor}\,^{R}_{i}(C,M)=0=\mbox{Ext}\,_{R}^{i}(C,C\otimes_{R}M)$ for all
$i\geq 1$_, and that
* (ii)
_the natural map $M\rightarrow\mbox{Hom}\,_{R}(C,C\otimes_{R}M)$ is an
isomorphism_.
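For instance, one checks directly from 2.2 that $R\in\mathcal{A}_{C}(R)$ and $C\in\mathcal{B}_{C}(R)$ for every semidualizing $R$-module $C$; more generally, every $R$-module of finite flat dimension belongs to $\mathcal{A}_{C}(R)$ and every $R$-module of finite injective dimension belongs to $\mathcal{B}_{C}(R)$ (see [19]).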
## 3\. local cohomology and flat dimension
The following theorem, which is one of the main results of this section,
provides a comparison between the flat dimensions of a relative Cohen-Macaulay
module and its non-zero local cohomology module. Here we adopt the convention
that the flat dimension of the zero module is to be taken as $-\infty$.
###### Theorem 3.1.
Let $n$ be a non-negative integer such that $\emph{H}^{i}_{\mathfrak{a}}(M)=0$
for all $i\neq n$.
* (i)
If _$\mbox{fd}\,_{R}M <\infty$_, then
_$\mbox{fd}\,_{R}\mbox{H}^{n}_{\mathfrak{a}}(M) <\infty$._
* (ii)
The converse holds whenever $M$ is finitely generated.
Furthermore, if $M$ is non-zero and finitely generated, then
_$\mbox{fd}\,_{R}\mbox{H}^{n}_{\mathfrak{a}}(M)=\mbox{fd}\,_{R}M+n=\mbox{pd}\,_{R}M+n$_.
###### Proof.
(i): First notice that we may assume $\mbox{H}^{n}_{\mathfrak{a}}(M)\neq 0$.
Let $\mbox{fd}\,_{R}M=s$ and let $c$ be the arithmetic rank of $\mathfrak{a}$.
Then, there exists a sequence $x_{1},\ldots,x_{c}$ of elements of $R$ such
that $\sqrt{\mathfrak{a}}=\sqrt{(x_{1},\ldots,x_{c})}$. We notice that $n\leq c$. Let $C(R)^{\bullet}$ denote the $\check{C}$ech complex of $R$ with respect to $x_{1},\ldots,x_{c}$. Let $N$ be an arbitrary $R$-module and let
$F_{\bullet}$ be a free resolution for $N$. For the first quadrant bicomplex
$\mathcal{M}=\\{M_{p,q}=F_{p}\otimes_{R}M\otimes_{R}C_{c-q}\\}$ we denote the
total complex of $\mathcal{M}$ by $\mbox{Tot}\,(\mathcal{M})$. Now, with the
notation of [18], $\mbox{E}^{1}$ is the bigraded module whose $(p,q)$ term is $\mbox{H}^{{}^{\prime\prime}}_{q}(M_{p,*})$, the $q$-th homology of the $p$-th column. Since $F_{p}$ is flat, by assumption we have
${}^{I}E_{p,q}^{1}=\mbox{H}^{{}^{\prime\prime}}_{q}(M_{p,*})=\begin{cases}0&\text{if
$q\neq c-n$}\\\ F_{p}\otimes_{R}\mbox{H}^{n}_{\mathfrak{a}}(M)&\text{if
$q=c-n$},\end{cases}$
therefore
${}^{I}E_{p,q}^{2}=\mbox{H}^{{}^{\prime}}_{p}\mbox{H}^{{}^{\prime\prime}}_{q}(\mathcal{M})=\begin{cases}0&\text{if
$q\neq c-n$}\\\
\mbox{Tor}\,^{R}_{p}(N,\mbox{H}^{n}_{\mathfrak{a}}(M))&\text{if
$q=c-n$};\end{cases}$
and hence the spectral sequence collapses. Note that, in view of [18, Theorem
10.16] we have
${}^{I}E_{p,q}^{2}\underset{p}{\Longrightarrow}\mbox{H}_{p+q}(\mbox{Tot}\,(\mathcal{M}))$
for all $p,q$. Thus, for all $t=p+q$, there is the following filtration
${0}=\Phi^{-1}H_{t}\subseteq\Phi^{0}H_{t}\subseteq\ldots\subseteq\Phi^{t-1}H_{t}\subseteq\Phi^{t}H_{t}=H_{t}$
such that ${}^{I}E_{p,q}^{\infty}\cong\Phi^{p}H_{t}/\Phi^{p-1}H_{t}$.
Therefore, one can use the above filtration to see that
(3.1)
$\mbox{Tor}\,^{R}_{p}(N,\mbox{H}^{n}_{\mathfrak{a}}(M))\cong\mbox{H}_{p+c-n}(\mbox{Tot}\,(\mathcal{M}))$
for all $p$.
A similar argument, applied to the second iterated homology and using the fact that each $C_{c-q}$ is flat, yields
${}^{II}E_{p^{{}^{\prime}},q^{{}^{\prime}}}^{2}=\mbox{H}^{{}^{\prime\prime}}_{p^{{}^{\prime}}}\mbox{H}^{{}^{\prime}}_{q^{{}^{\prime}}}(\mathcal{M})=\begin{cases}0&\text{if
$q^{{}^{\prime}}>s$}\\\
\mbox{H}_{\mathfrak{a}}^{c-p^{{}^{\prime}}}(\mbox{Tor}\,^{R}_{q^{{}^{\prime}}}(N,M))&\text{if
$q^{{}^{\prime}}\leq s.$}\end{cases}$
Now, we claim that ${}^{II}E_{p^{{}^{\prime}},q^{{}^{\prime}}}^{\infty}=0$ for
all $p^{{}^{\prime}},q^{{}^{\prime}}$ such that
$p^{{}^{\prime}}+q^{{}^{\prime}}=p+c-n$ and that $p>s+n$. To this end, first
notice that, by [18, Theorem 10.16], we have
${}^{II}E_{p^{{}^{\prime}},q^{{}^{\prime}}}^{2}\underset{p_{{}^{\prime}}}{\Longrightarrow}\mbox{H}_{p^{{}^{\prime}}+q^{{}^{\prime}}}(\mbox{Tot}\,(\mathcal{M}))$.
If $q^{{}^{\prime}}>s$, there is nothing to prove. Let $q^{{}^{\prime}}\leq s$. Then $p^{{}^{\prime}}=p+c-n-q^{{}^{\prime}}\geq p+c-n-s>c$, so that $0>c-p^{{}^{\prime}}$ and hence ${}^{II}E_{p^{{}^{\prime}},q^{{}^{\prime}}}^{2}=0$; which in turn yields
${}^{II}E_{p^{{}^{\prime}},q^{{}^{\prime}}}^{\infty}=0$. Now, by using a
similar filtration as above, one can see that
$\mbox{H}_{p+c-n}(\mbox{Tot}\,(\mathcal{M}))=0$ for all $p>s+n$. Therefore
$\mbox{Tor}\,^{R}_{p}(N,\mbox{H}^{n}_{\mathfrak{a}}(M))=0$ for all $p>s+n$;
and hence $\mbox{fd}\,_{R}(\mbox{H}_{\mathfrak{a}}^{n}(M))\leq s+n$.
(ii): First, notice that $\mbox{Tor}\,^{R}_{i}(R/\mathfrak{m},M)$ is
$\mathfrak{a}$-torsion for all $i$. Therefore, by using the same arguments as
above, one can deduce that
${}^{II}E_{p^{{}^{\prime}},q^{{}^{\prime}}}^{2}=\mbox{H}^{{}^{\prime\prime}}_{p^{{}^{\prime}}}\mbox{H}^{{}^{\prime}}_{q^{{}^{\prime}}}(\mathcal{M})=\begin{cases}0&\text{if
$p^{{}^{\prime}}\neq c$}\\\
\mbox{Tor}\,^{R}_{q^{{}^{\prime}}}(R/\mathfrak{m},M)&\text{if
$p^{{}^{\prime}}=c$.}\end{cases}$
Thus, the spectral sequence collapses at the $c$-th column; and hence we get the
isomorphism
$\mbox{Tor}\,^{R}_{q^{{}^{\prime}}}(R/\mathfrak{m},M)\cong\mbox{H}_{q^{{}^{\prime}}+c}(\mbox{Tot}\,(\mathcal{M}))$
for all $q^{{}^{\prime}}$. It therefore follows, by the isomorphism (3.1),
that
(3.2)
$\mbox{Tor}\,_{p}^{R}(R/\mathfrak{m},\mbox{H}_{\mathfrak{a}}^{n}(M))\cong\mbox{Tor}\,_{p-n}^{R}(R/\mathfrak{m},M)$
for all $p$. Now, suppose that $M$ is non-zero and finitely generated and that $\mbox{fd}\,_{R}\mbox{H}^{n}_{\mathfrak{a}}(M)<\infty$. Then the isomorphism (3.2) shows that $\mbox{Tor}\,_{j}^{R}(R/\mathfrak{m},M)=0$ for all sufficiently large $j$; hence one can use [18, Corollary 8.54] to see that $\mbox{fd}\,_{R}M<\infty$. The final assertion is a consequence of (i), (ii) and the isomorphism (3.2). ∎
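As a simple illustration of 3.1, let $(R,\mathfrak{m})$ be a Gorenstein (for instance, regular) local ring of dimension $d$ and take $\mathfrak{a}=\mathfrak{m}$ and $M=R$. Then $\mbox{H}_{\mathfrak{m}}^{i}(R)=0$ for all $i\neq d$ and $\mbox{H}_{\mathfrak{m}}^{d}(R)\cong\mbox{E}(R/\mathfrak{m})$, so 3.1 yields
$\mbox{fd}\,_{R}\mbox{E}(R/\mathfrak{m})=\mbox{fd}\,_{R}\mbox{H}_{\mathfrak{m}}^{d}(R)=\mbox{fd}\,_{R}R+d=d.$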
###### Corollary 3.2.
Suppose that $R$ is relative Cohen-Macaulay with respect to $\mathfrak{a}$ and
that _$\mbox{ht}\,_{R}\mathfrak{a}=n$_. Then, for every non-zero faithfully
flat $R$–module $M$ we have
_$\mbox{fd}\,_{R}\mbox{H}_{\mathfrak{a}}^{n}(M)=n$._
###### Proof.
Let $M$ be a non-zero faithfully flat $R$–module. Since the functor
$\mbox{H}_{\mathfrak{a}}^{n}(-)$ is right exact, we have
$\mbox{H}_{\mathfrak{a}}^{n}(M)\cong\mbox{H}_{\mathfrak{a}}^{n}(R)\otimes_{R}M$;
and hence by assumption $\mbox{H}_{\mathfrak{a}}^{n}(M)\neq 0$ and
$\mathfrak{m}M\neq M$. By [18, Theorem 5.40], there is a directed index set
$I$ and a family of finitely generated free $R$–modules $\\{M_{i}\\}_{i\in I}$
such that $M=\underset{{i\in I}}{\varinjlim}M_{i}$. Notice that each $M_{i}$
is relative Cohen-Macaulay with respect to $\mathfrak{a}$ and that
$\mbox{ht}\,_{M_{i}}\mathfrak{a}=n$. Therefore,
$\mbox{H}_{\mathfrak{a}}^{j}(M)=\underset{i\in
I}{\varinjlim}\mbox{H}_{\mathfrak{a}}^{j}(M_{i})=0$ for all $j\neq n$; and
hence, in view of 3.1(i), we get
$\mbox{fd}\,_{R}\mbox{H}_{\mathfrak{a}}^{n}(M)\leq n$. Now, if
$\mbox{fd}\,_{R}\mbox{H}_{\mathfrak{a}}^{n}(M)<n$, then
$\mbox{Tor}\,^{R}_{n}(R/\mathfrak{m},\mbox{H}_{\mathfrak{a}}^{n}(M))=0$. But,
by the isomorphism (3.2), which is proved without finitely generated
assumption on $M$, we have
$\mbox{Tor}\,_{n}^{R}(R/\mathfrak{m},\mbox{H}_{\mathfrak{a}}^{n}(M))\cong
M/\mathfrak{m}M\neq 0$ which is a contradiction. ∎
The next proposition is a generalization of [9, Proposition 9.5.22].
###### Proposition 3.3.
Let $M$ be a $d$-dimensional finitely generated $R$–module of finite
projective dimension. Then the following statements are equivalent.
* (i)
_$M$ is Cohen-Macaulay._
* (ii)
_$\mbox{fd}\,_{R}\mbox{H}_{\mathfrak{m}}^{d}(M)=\mbox{pd}\,_{R}M+d$_.
* (iii)
_$\mbox{pd}\,_{R}\mbox{H}_{\mathfrak{m}}^{d}(M)=\mbox{pd}\,_{R}M+d$_.
Moreover, if one of the above statements holds, then $R$ is Cohen-Macaulay.
###### Proof.
We first notice that the Artinian $R$–module $\mbox{H}_{\mathfrak{m}}^{d}(M)$
has a natural $\hat{R}$–module structure and that
$\mbox{fd}\,_{R}\mbox{H}_{\mathfrak{m}}^{d}(M)=\mbox{fd}\,_{\hat{R}}(\mbox{H}_{\mathfrak{m}}^{d}(M))$.
Now, assume that $\mbox{fd}\,_{R}\mbox{H}_{\mathfrak{m}}^{d}(M)<\infty$. Then,
in view of [15, Proposition 6] and [11, Theorem 3.2.6], we see that
$\mbox{fd}\,_{R}\mbox{H}_{\mathfrak{m}}^{d}(M)\leq\mbox{pd}\,_{R}\mbox{H}_{\mathfrak{m}}^{d}(M)\leq\mbox{dim}\,R$.
Next, by [4, Theorem 3.1.17], [6, Theorem 4.16] and the New Intersection
Theorem, one can deduce that
$\mbox{fd}\,_{\hat{R}}\mbox{H}_{\mathfrak{m}}^{d}(M)=\mbox{id}\,_{\hat{R}}\mbox{Hom}\,_{\hat{R}}(\mbox{H}_{\mathfrak{m}}^{d}(M),\mbox{E}_{\hat{R}}(\hat{R}/\mathfrak{m}\hat{R}))=\mbox{depth}\,\hat{R}=\mbox{dim}\,R.$
It therefore follows that
$\mbox{fd}\,_{R}\mbox{H}_{\mathfrak{m}}^{d}(M)=\mbox{pd}\,_{R}\mbox{H}_{\mathfrak{m}}^{d}(M)=\mbox{dim}\,R$
and that $R$ is Cohen-Macaulay.
Now, the equivalence (ii)$\Leftrightarrow$(iii) follows immediately from the
above argument.
(ii)$\Rightarrow$(i): Since
$\mbox{fd}\,_{R}\mbox{H}_{\mathfrak{m}}^{d}(M)<\infty$, one can use the
conclusion of the above argument in conjunction with the Auslander-Buchsbaum
Theorem [4, Theorem 1.3.3] to see that $M$ is Cohen-Macaulay. Finally the
implication (i)$\Rightarrow$(ii) follows from 3.1.
∎
The following corollary, which is an immediate consequence of the previous
proposition, has been proved in [9, Proposition 9.5.22] under the extra
condition that the underlying ring admits a canonical module.
###### Corollary 3.4.
Suppose that _$\mbox{dim}\,R=d$_. Then the following statements are
equivalent.
* (i)
_$R$ is a Cohen-Macaulay ring._
* (ii)
_$\mbox{fd}\,_{R}\mbox{H}_{\mathfrak{m}}^{d}(R)=d$_.
* (iii)
_$\mbox{pd}\,_{R}\mbox{H}_{\mathfrak{m}}^{d}(R)=d$_.
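Conversely, the first paragraph of the proof of 3.3 (applied with $M=R$) shows that if $R$ is not Cohen-Macaulay, then $\mbox{fd}\,_{R}\mbox{H}_{\mathfrak{m}}^{d}(R)$ cannot be finite. For example, for the one-dimensional non-Cohen-Macaulay local ring $R=k[[x,y]]/(x^{2},xy)$ one has $\mbox{fd}\,_{R}\mbox{H}_{\mathfrak{m}}^{1}(R)=\infty$.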
The next proposition has been proved in [16, Proposition 3.3] under the extra
conditions that the underlying ring is Cohen-Macaulay and admits a dualizing
complex.
###### Proposition 3.5.
Let $C$ be a semidualizing $R$-module. Then the following statements are
equivalent.
* (i)
$C$ is a dualizing $R$-module.
* (ii)
_$\mbox{$G_{C}$-id}\,_{R}\mbox{H}_{\mathfrak{a}}^{n}(R) <\infty$ for all
ideals $\mathfrak{a}$ of $R$ such that $R$ is relative Cohen-Macaulay with
respect to $\mathfrak{a}$ and that $\mbox{ht}\,_{R}\mathfrak{a}=n.$ _
* (iii)
$\emph{\mbox{$G_{C}$-id}\,}_{R}\emph{\mbox{H}}^{n}_{\mathfrak{a}}(R)<\infty$
for some ideal $\mathfrak{a}$ of $R$ such that $R$ is relative Cohen-Macaulay
with respect to $\mathfrak{a}$ and that
$\emph{\mbox{ht}\,}_{R}\mathfrak{a}=n.$
###### Proof.
The implication (i)$\Rightarrow$(ii) follows from [16, Theorem 3.2(ii)] and
the implication (ii)$\Rightarrow$(iii) is clear.
(iii)$\Rightarrow$(i): Suppose that
$\mbox{$G_{C}$-id}\,_{R}\mbox{H}^{n}_{\mathfrak{a}}(R)<\infty$, where
$\mathfrak{a}$ is an ideal of $R$ such that $R$ is relative Cohen-Macaulay
with respect to $\mathfrak{a}$ and that $\mbox{ht}\,_{R}\mathfrak{a}=n$. Then,
in view of 3.1, $\mbox{fd}\,_{R}\mbox{H}_{\mathfrak{a}}^{n}(R)<\infty$. Hence,
one can use [15, Proposition 6] to see that
$\mbox{pd}\,_{R}\mbox{H}_{\mathfrak{a}}^{n}(R)<\infty$. Therefore, by [20,
Theorem 2.3], we have
$\mbox{$G_{C}$-id}\,_{R}\mbox{H}^{n}_{\mathfrak{a}}(R)=\mbox{C-id}\,_{R}\mbox{H}^{n}_{\mathfrak{a}}(R)$.
Hence, one can use [16, Theorem 3.2(ii)] to complete the proof. ∎
An immediate consequence of the previous proposition is the next corollary, which has been proved in [17, Corollary 3.10] with the additional assumptions that
$R$ is Cohen-Macaulay and admits a dualizing complex.
###### Corollary 3.6.
The following statements are equivalent.
* (i)
_$R$ is a Gorenstein ring._
* (ii)
_$\mbox{Gid}\,_{R}\mbox{H}_{\mathfrak{a}}^{n}(R) <\infty$ for all ideals
$\mathfrak{a}$ of $R$ such that $R$ is relative Cohen-Macaulay with respect to
$\mathfrak{a}$ and that $\mbox{ht}\,_{R}\mathfrak{a}=n.$ _
* (iii)
_$\mbox{Gid}\,_{R}\mbox{H}_{\mathfrak{a}}^{n}(R) <\infty$ for some ideal
$\mathfrak{a}$ of $R$ such that $R$ is relative Cohen-Macaulay with respect to
$\mathfrak{a}$ and that $\mbox{ht}\,_{R}\mathfrak{a}=n.$ _
It follows from the proof of [16, Theorem 3.2(i)] that if $n$ is a non-
negative integer and $M$ is an $R$–module (not necessarily finitely generated)
such that ${\mbox{H}}^{i}_{\mathfrak{a}}(M)=0$ for all $i\neq n$ and that
$\mbox{C-id}\,_{R}M$ is finite, then
$\mbox{C-id}\,_{R}{\mbox{H}}^{n}_{\mathfrak{a}}(M)$ is finite. This fact leads
us to the following proposition which recovers [16, Theorem 3.8].
###### Proposition 3.7.
Let $C$ be a semidualizing $R$-module. Consider the following statements.
* (i)
$R$ is Gorenstein.
* (ii)
_$\mbox{C-id}\,_{R}{\mbox{H}}^{n}_{\mathfrak{a}}(C) <\infty$_ for all ideals
$\mathfrak{a}$ of $R$ such that $R$ is relative Cohen-Macaulay with respect to
$\mathfrak{a}$ and that $\emph{\mbox{ht}\,}_{R}\mathfrak{a}=n$.
* (iii)
_$\mbox{C-id}\,_{R}{\mbox{H}}^{n}_{\mathfrak{a}}(C) <\infty$_ for some ideal
$\mathfrak{a}$ of $R$ such that $R$ is relative Cohen-Macaulay with respect to
$\mathfrak{a}$ and that $\emph{\mbox{ht}\,}_{R}\mathfrak{a}=n$.
_Then, the implications (i) $\Rightarrow$(ii)$\Rightarrow$(iii) hold true, and
the implication (iii)$\Rightarrow$(i) holds true whenever $R$ is Cohen-
Macaulay._
###### Proof.
First, notice that $R\cong C$ whenever $R$ is Gorenstein. Hence, the
implication (i)$\Rightarrow$(ii) follows from [17, Theorem 2.5(i)]. The
implication (ii)$\Rightarrow$(iii) is clear.
(iii)$\Rightarrow$(i): Let $\mathfrak{a}$ be an ideal of $R$ such that $R$ is
relative Cohen-Macaulay with respect to $\mathfrak{a}$ and that
$\mbox{ht}\,_{R}\mathfrak{a}=n$. Since
$\mbox{Supp}\,_{R}(C)=\mbox{Spec}\,(R)$, in view of [7, Theorem 2.2], we get
$\mbox{cd}\,(\mathfrak{a},R)=\mbox{cd}\,(\mathfrak{a},C)$. On the other hand,
by [19, Theorem 2.2.6(c)],
$\mbox{grade}\,(\mathfrak{a},R)=\mbox{grade}\,(\mathfrak{a},C)$. Hence, 2.4
implies that $C$ is relative Cohen-Macaulay with respect to $\mathfrak{a}$.
Also, by [3, Theorem 3.4.10], the local cohomology functor $\mbox{H}_{\mathfrak{a}}^{i}(-)$ commutes with direct limits, and any $R$-module can be viewed as a direct limit of its finitely generated submodules; hence $\mbox{H}_{\mathfrak{a}}^{i}(N)=0$ for every $R$-module $N$ and all $i>n$. It therefore follows, from the long exact sequence of local cohomology, that the functor $\mbox{H}_{\mathfrak{a}}^{n}(-)$ is right exact. Thus, in view of [3, Exercise
6.1.9], we have
$\mbox{H}_{\mathfrak{a}}^{n}(R)\otimes_{R}C\cong\mbox{H}_{\mathfrak{a}}^{n}(C)$.
Since
$\mbox{H}_{\mathfrak{m}}^{0}(\mbox{E}(R/\mathfrak{m}))=\mbox{E}(R/\mathfrak{m})$
and for any non-maximal prime ideal $\mathfrak{p}$ of $R$,
$\mbox{H}_{\mathfrak{m}}^{0}(\mbox{E}(R/\mathfrak{p}))=0$, we may apply [17,
Proposition 2.8] to see that
$\mbox{H}_{\mathfrak{m}}^{i}(\mbox{H}_{\mathfrak{a}}^{n}(C))=\mbox{H}_{\mathfrak{m}}^{n+i}(C)$
for all $i\geq 0$. Therefore, by considering the additional assumption that
$R$ is Cohen-Macaulay, one can deduce that
$\mbox{H}_{\mathfrak{m}}^{i}(\mbox{H}_{\mathfrak{a}}^{n}(C))=\begin{cases}0&\text{if
$i\neq\mbox{dim}\,R/\mathfrak{a}$}\\\ \mbox{H}^{d}_{\mathfrak{m}}(C)&\text{if
$i=\mbox{dim}\,R/\mathfrak{a},$}\end{cases}$
where $d=\mbox{dim}\,R$. Thus, by the assumption and [16, Theorem 3.2(i)], we
see that $\mbox{C-id}\,_{R}{\mbox{H}}^{d}_{\mathfrak{m}}(C)$ is finite. Now,
one can use [16, Theorem 3.8] to complete the proof. ∎
It is known that if a local ring admits a non-zero Cohen-Macaulay $R$–module
of finite projective dimension, then it is a Cohen-Macaulay ring. The
following theorem is a generalization of this result.
###### Theorem 3.8.
Let $C$ be a semidualizing $R$–module. If there exists a non-zero Cohen-
Macaulay $R$-module $M$ with finite _$\mbox{C-pd}\,_{R}M$_ , then $R$ is
Cohen-Macaulay.
###### Proof.
Let $M$ be a non-zero Cohen-Macaulay $R$–module of dimension $n$ such that
$\mbox{C-pd}\,_{R}M$ is finite. Notice that, in view of [22, Theorem 2.11(c)],
we have $\mbox{C-pd}\,_{R}M=\mbox{pd}\,_{R}\mbox{Hom}\,_{R}(C,M)$. Also, since
$C\otimes_{R}\hat{R}$ is a semidualizing $\hat{R}$–module and
$\mbox{Hom}\,_{\hat{R}}(\hat{C},\hat{M})\cong\mbox{Hom}\,_{R}(C,M)\otimes_{R}\hat{R}$,
we may assume that $R$ is complete. Now, by using [22, Corollary 2.9(a)], we
have $M\in\mathcal{B}_{C}(R)$. Therefore,
$\mbox{Tor}\,_{i}^{R}(C,\mbox{Hom}\,_{R}(C,M))=0$ for all $i>0$ and
$C\otimes_{R}\mbox{Hom}\,_{R}(C,M)\cong M$. Hence, one can use [1, Theorem
1.2] to obtain the following equalities
$\begin{array}[]{rl}\mbox{depth}\,_{R}M&=\mbox{depth}\,_{R}(C\otimes_{R}\mbox{Hom}\,_{R}(C,M))\\\
&=\mbox{depth}\,_{R}C-\mbox{depth}\,R+\mbox{depth}\,_{R}\mbox{Hom}\,_{R}(C,M)\\\
&=\mbox{depth}\,_{R}\mbox{Hom}\,_{R}(C,M),\end{array}$
where the last equality holds since $\mbox{depth}\,_{R}C=\mbox{depth}\,R$ (apply [19, Theorem 2.2.6(c)] with $\mathfrak{a}=\mathfrak{m}$).
On the other hand, since
$\mbox{Ass}\,_{R}(\mbox{Hom}\,_{R}(C,M))=\mbox{Ass}\,_{R}(M)$ and $M$ is
Cohen-Macaulay, we see that
$\mbox{dim}\,_{R}(M)=\mbox{dim}\,_{R}(\mbox{Hom}\,_{R}(C,M))$. Therefore,
$\mbox{Hom}\,_{R}(C,M)$ is Cohen-Macaulay. Hence, one can use 3.1 to see that
the injective dimension of the finitely generated $R$–module
$\mbox{Hom}\,_{R}(\mbox{H}_{\mathfrak{m}}^{n}(\mbox{Hom}\,_{R}(C,M)),\mbox{E}_{R}(R/\mathfrak{m}))$
is finite. Therefore, by the New Intersection Theorem, $R$ is Cohen-Macaulay.
∎
Applying Theorem 3.8 to the semidualizing $R$–module $C=R$, we immediately
obtain the following well-known result.
###### Corollary 3.9.
If $R$ admits a non-zero Cohen-Macaulay module of finite projective dimension,
then $R$ is Cohen-Macaulay.
###### Proposition 3.10.
Let $M$ be relative Cohen-Macaulay with respect to $\mathfrak{a}$ and set
_$\mbox{ht}\,_{M}\mathfrak{a}=n$_. Suppose that
_$\mbox{Gid}\,_{R}\mbox{H}_{\mathfrak{a}}^{n}(M)$_ and
_$\mbox{Gfd}\,_{R}\mbox{H}_{\mathfrak{a}}^{n}(M)$_ are finite. Then
_$\mbox{pd}\,_{R}M$_ is finite if and only if _$\mbox{id}\,_{R}M$_ is finite.
###### Proof.
Let $\mbox{pd}\,_{R}M<\infty$. Then, in view of 3.1, we have
$\mbox{fd}\,_{R}(\mbox{H}_{\mathfrak{a}}^{n}(M))<\infty$. Therefore, by the
assumption and [13, Theorem 2.6(i)],
$\mbox{id}\,_{R}\mbox{H}_{\mathfrak{a}}^{n}(M)<\infty$. Hence, in view of [17,
Theorem 2.5], $\mbox{id}\,_{R}M<\infty$. Now, to prove the reverse, suppose
that $\mbox{id}\,_{R}M<\infty$. Then, by [17, Theorem 2.5],
$\mbox{id}\,_{R}(\mbox{H}_{\mathfrak{a}}^{n}(M))<\infty$. Therefore, by the
assumption and [6, Proposition 4.21],
$\mbox{fd}\,_{R}(\mbox{H}_{\mathfrak{a}}^{n}(M))<\infty$. Hence, in view of
3.1, we have $\mbox{pd}\,_{R}M<\infty$. ∎
###### Remark 3.11.
_If $R$ is Gorenstein, then, by [6, theorems 3.14 and 4.11],
$\mbox{Gfd}\,_{R}M$ and $\mbox{Gid}\,_{R}M$ are finite for any $R$–module $M$.
Therefore 3.10 implies the well-known fact that, for a finitely generated
$R$–module $M$, $\mbox{pd}\,_{R}M$ is finite if and only if $\mbox{id}\,_{R}M$
is finite._
## 4\. local cohomology and gorenstein flat dimension
The starting point of this section is the next lemma which has been proved, in
[17, Lemma 3.7] and [17, Corollary 3.9], under the extra assumption that $R$
is Cohen-Macaulay.
###### Lemma 4.1.
Suppose that $M$ is a non-zero finitely generated $R$–module. Then the
following statements hold true.
* (i)
_Suppose that $x\in\mathfrak{m}$ is both $R$–regular and $M$–regular. Then
$\mbox{Gid}\,_{R}M<\infty$ if and only if
$\mbox{Gid}\,_{{R/xR}}(M/xM)<\infty$._
* (ii)
_Assume that $M$ is Cohen-Macaulay of dimension $n$. Then
$\mbox{Gid}\,_{R}M<\infty$ if and only if
$\mbox{Gid}\,_{R}\mbox{H}_{\mathfrak{m}}^{n}(M)<\infty$._
###### Proof.
First notice that, by [6, Theorem 3.24],
$\mbox{Gid}\,_{R}M=\mbox{Gid}\,_{\hat{R}}\hat{M}$. On the other hand, since
$\mbox{H}_{\mathfrak{m}}^{i}(M)$ is Artinian, in view of [21, Lemma 3.6], we
have
$\mbox{Gid}\,_{R}\mbox{H}_{\mathfrak{m}}^{n}(M)=\mbox{Gid}\,_{\hat{R}}\mbox{H}_{\mathfrak{m}}^{n}(M)=\mbox{Gid}\,_{\hat{R}}\mbox{H}_{\mathfrak{m}\hat{R}}^{n}(\hat{M})$.
Thus, we can assume that $R$ is complete; and hence it has a dualizing complex
$D$.
(i): Set $\overline{R}=R/xR$. We notice that
$\mbox{fd}\,_{R}\overline{R}<\infty$ and
${\mu}^{i+\small{\mbox{depth}\,R}}(\mathfrak{m},R)=\mu^{i+\small{\mbox{depth}\,\overline{R}}}(\overline{\mathfrak{m}},\overline{R})$
for all $i\in\mathbb{Z}$, where $\mu^{i}(\mathfrak{m},R)$ denotes the $i$-th Bass number of $R$ with respect to $\mathfrak{m}$. Hence, by using [2, 2.11],
we see that $D\otimes_{R}^{\mathbf{L}}\overline{R}$ is a dualizing complex for
$\overline{R}$. On the other hand, by the assumption, one can deduce that
$\mbox{Tor}\,^{R}_{i}(\overline{R},M)=0$ for all $i>0$. Therefore
$\overline{M}\simeq M\otimes_{R}^{\mathbf{L}}\overline{R}$, in derived
category $\mathcal{D}(R)$. Now, we can use [5, Theorem 5.5] to complete the
proof.
(ii): Let $M$ be Cohen-Macaulay with $\mbox{dim}\,M=n$. Then, the implication
($\Rightarrow$) follows from [17, Theorem 3.8(i)]. To prove the converse, we
proceed by induction on $n$. The case $n=0$ is obvious. Assume that $n>0$ and
that the result has been proved for $n-1$. Now, by using [17, Theorem
3.12(ii)] in conjunction with the assumption, one can choose an element $x$ in
$\mathfrak{m}$ which is both $R$-regular and $M$-regular. Next, applying local cohomology to the short exact sequence $0\rightarrow M\stackrel{x}{\rightarrow}M\rightarrow M/xM\rightarrow 0$, and using that $M$ is Cohen-Macaulay of dimension $n$, we obtain the induced exact sequence
$0\longrightarrow{\mbox{H}}^{n-1}_{\mathfrak{m}}(M/xM)\longrightarrow{\mbox{H}}^{n}_{\mathfrak{m}}(M)\stackrel{x}{\longrightarrow}{\mbox{H}}^{n}_{\mathfrak{m}}(M)\longrightarrow 0.$
This, together with [6, Proposition 3.9], shows that
$\mbox{Gid}\,_{R}({\mbox{H}}^{n-1}_{\mathfrak{m}}(M/xM))$ is finite. Hence, by
the inductive hypothesis, $\mbox{Gid}\,_{R}M/xM$ is finite. Therefore, in view
of [6, Theorem 7.6(b)], $\mbox{Gid}\,_{R/xR}M/xM<\infty$. It therefore follows
from part (i) that $\mbox{Gid}\,_{R}M$ is finite. Now the result follows by
induction.
∎
The following theorem, which is the main result of this section, provides a
comparison between the Gorenstein projective dimensions of a Cohen-Macaulay
module and its non-zero local cohomology module with support in
$\mathfrak{m}$.
###### Theorem 4.2.
Suppose that the $R$–module $M$ is Cohen-Macaulay of dimension $n$. Then
_$\mbox{Gpd}\,_{R}(M)$_ and
_$\mbox{Gfd}\,_{R}(\mbox{H}_{\mathfrak{m}}^{n}(M))$_ are simultaneously finite
and when they are finite, there is an equality
_$\mbox{Gfd}\,_{R}(\mbox{H}_{\mathfrak{m}}^{n}(M))=\mbox{Gpd}\,_{R}(M)+n.$_
###### Proof.
First notice that, in view of [6, Theorem 4.27] and [6, propositions 2.16 and
1.26], we have
$\mbox{Gfd}\,_{R}\mbox{H}_{\mathfrak{m}}^{n}(M)=\mbox{Gfd}\,_{\hat{R}}\mbox{H}_{\mathfrak{m}\hat{R}}^{n}(\hat{M})$
and $\mbox{Gpd}\,_{R}(M)=\mbox{Gpd}\,_{\hat{R}}(\hat{M})$. Therefore, without loss of generality, we can assume that $R$ is complete; and hence it is a
homomorphic image of a Gorenstein local ring $(S,\mathfrak{n})$ of dimension
$d$. Now, in view of the local duality theorem [3, Theorem 11.2.6], we have
(4.1)
$\mbox{H}_{\mathfrak{m}}^{n}(M)\cong\mbox{Hom}\,_{R}(\mbox{Ext}\,_{S}^{d-n}(M,S),\mbox{E}(R/\mathfrak{m})).$
Next, we notice that $M$ is a Cohen-Macaulay $S$–module of dimension $n$; and
hence, by [4, Theorem 3.3.10(c)(i)], the $S$–module
$\mbox{Ext}\,_{S}^{d-n}(M,S)$ is Cohen-Macaulay of dimension $n$; hence it
is a Cohen-Macaulay $R$–module. Therefore, again, we can use the local duality
theorem and [4, Theorem 3.3.10(c)(iii)] to obtain the following isomorphisms
(4.2)
$\begin{array}[]{rl}\mbox{H}_{\mathfrak{m}}^{n}(\mbox{Ext}\,_{S}^{d-n}(M,S))&\cong\mbox{Hom}\,_{R}(\mbox{Ext}\,_{S}^{d-n}(\mbox{Ext}\,_{S}^{d-n}(M,S),S),\mbox{E}(R/\mathfrak{m}))\\\
&\cong\mbox{Hom}\,_{R}(M,\mbox{E}(R/\mathfrak{m})).\end{array}$
Now, with the aid of the above arguments, there are equivalences
$\begin{array}[]{rl}\mbox{Gpd}\,_{R}(M)<\infty&\Leftrightarrow\mbox{Gid}\,_{R}(\mbox{Hom}\,_{R}(M,\mbox{E}(R/\mathfrak{m})))<\infty\Leftrightarrow\mbox{Gid}\,_{R}(\mbox{H}_{\mathfrak{m}}^{n}(\mbox{Ext}\,_{S}^{d-n}(M,S)))<\infty\\\
&\Leftrightarrow\mbox{Gid}\,_{R}(\mbox{Ext}\,_{S}^{d-n}(M,S))<\infty\Leftrightarrow\mbox{Gfd}\,_{R}(\mbox{H}_{\mathfrak{m}}^{n}(M))<\infty.\end{array}$
Notice that, in the above equivalences, the first step follows from [6,
Theorem 4.16 and Proposition 4.24], the second step is from (4.2), the third
step is clear by 4.1(ii), and the fourth step is an immediate consequence of
(4.1) and [6, Theorem 4.25].
For the final assertion, we notice that, since
$\mbox{H}_{\mathfrak{m}}^{n}(M)$ is Artinian and $R$ is complete, the
$R$–module
$\mbox{Hom}\,_{R}(\mbox{H}_{\mathfrak{m}}^{n}(M),\mbox{E}(R/\mathfrak{m}))$ is
finitely generated. Therefore, in view of [6, theorems 3.24 and 4.16], we get
the following equalities
$\begin{array}[]{rl}\mbox{Gfd}\,_{R}\mbox{H}_{\mathfrak{m}}^{n}(M)&=\mbox{Gid}\,_{R}\mbox{Hom}\,_{R}(\mbox{H}_{\mathfrak{m}}^{n}(M),\mbox{E}(R/\mathfrak{m}))\\\
&=\mbox{depth}\,R\\\ &=\mbox{Gpd}\,_{R}M+n,\end{array}$
where the last equality follows from [6, Proposition 4.24 and Theorem 1.25]. ∎
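For instance, if $R$ is a Gorenstein local ring of dimension $d$ and $M=R$, then $\mbox{Gpd}\,_{R}R=0$ and $\mbox{H}_{\mathfrak{m}}^{d}(R)\cong\mbox{E}(R/\mathfrak{m})$, so 4.2 gives $\mbox{Gfd}\,_{R}\mbox{E}(R/\mathfrak{m})=d$, in accordance with Corollary 4.4 below.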
The following proposition is a Gorenstein projective version of 3.3.
###### Proposition 4.3.
Assume that $R$ is Cohen-Macaulay and that $M$ is a $d$–dimensional finitely
generated $R$–module of finite Gorenstein projective dimension. Then the
following statements are equivalent.
* (i)
_$M$ is Cohen-Macaulay. _
* (ii)
_$\mbox{Gfd}\,_{R}\mbox{H}_{\mathfrak{m}}^{d}(M)=\mbox{Gpd}\,_{R}M+d$_.
* (iii)
_$\mbox{Gpd}\,_{R}\mbox{H}_{\mathfrak{m}}^{d}(M)=\mbox{Gpd}\,_{R}M+d$_.
###### Proof.
The implication (i)$\Rightarrow$(ii) follows from 4.2.
(ii)$\Rightarrow$(i),(iii): Since $R$ has finite Krull dimension and $\mbox{Gfd}\,_{R}\mbox{H}_{\mathfrak{m}}^{d}(M)$ is finite, we have the finiteness of $\mbox{Gpd}\,_{R}\mbox{H}_{\mathfrak{m}}^{d}(M)$ by [10, Theorem
3.4]. Hence, by [8, Corollary 2.4],
$\mbox{Gpd}\,_{R}\mbox{H}_{\mathfrak{m}}^{d}(M)\leq\mbox{dim}\,R$. Therefore,
in view of [6, Theorem 4.23], we get the following inequalities
$\mbox{Gpd}\,_{R}M+d=\mbox{Gfd}\,_{R}\mbox{H}_{\mathfrak{m}}^{d}(M)\leq\mbox{Gpd}\,_{R}\mbox{H}_{\mathfrak{m}}^{d}(M)\leq\mbox{dim}\,R.$
Now, one can use [6, Proposition 2.16 and Theorem 1.25] to see that
$\mbox{Gfd}\,_{R}\mbox{H}_{\mathfrak{m}}^{d}(M)=\mbox{Gpd}\,_{R}\mbox{H}_{\mathfrak{m}}^{d}(M)$
and that $\mbox{depth}\,M=\mbox{dim}\,M$. Thus, $M$ is Cohen-Macaulay and (iii) holds
true.
(iii)$\Rightarrow$(ii): First, we notice that, by [6, Theorem 4.27],
$\mbox{Gfd}\,_{R}\mbox{H}_{\mathfrak{m}}^{d}(M)=\mbox{Gfd}\,_{\hat{R}}\mbox{H}_{\mathfrak{m}\hat{R}}^{d}(\hat{M})$
and that, in view of [6, propositions 4.23 and 2.20], the following
inequalities hold:
$\begin{array}[]{rl}\mbox{Gfd}\,_{\hat{R}}\mbox{H}_{\mathfrak{m}\hat{R}}^{d}(\hat{M})&\leq\mbox{Gpd}\,_{\hat{R}}\mbox{H}_{\mathfrak{m}\hat{R}}^{d}(\hat{M})\\\
&\leq\mbox{Gpd}\,_{R}\mbox{H}_{\mathfrak{m}}^{d}({M}).\end{array}$
Now, since $\mbox{H}_{\mathfrak{m}}^{d}(M)$ is an Artinian $\hat{R}$–module,
one can use [6, Theorem 4.16] to see that the finitely generated
$\hat{R}$–module
$\mbox{Hom}\,_{{R}}(\mbox{H}_{\mathfrak{m}}^{d}(M),\mbox{E}(R/\mathfrak{m}))$
is of finite Gorenstein injective dimension. Therefore, by [6, Theorem 3.24]
and [6, Theorem 4.16],
$\mbox{Gfd}\,_{R}\mbox{H}_{\mathfrak{m}}^{d}(M)=\mbox{Gid}\,_{\hat{R}}\mbox{Hom}\,_{{R}}(\mbox{H}_{\mathfrak{m}}^{d}(M),\mbox{E}(R/\mathfrak{m}))=\mbox{dim}\,R$.
Hence, one can use [8, Corollary 2.4] and the above inequalities to complete the
proof.
∎
Next, we single out a special case of 4.3 which is a Gorenstein projective
version of Corollary 3.4. Notice that the proof of the following corollary is
similar to the proof of 4.3(ii)$\Rightarrow$(i).
###### Corollary 4.4.
Suppose that _$\mbox{dim}\,R=d$_. Then the following statements are
equivalent.
* (i)
_$R$ is Cohen-Macaulay._
* (ii)
_$\mbox{Gfd}\,_{R}\mbox{H}_{\mathfrak{m}}^{d}(R)=d$_.
The following proposition is a generalization of Theorem 4.2 in terms of
$G_{C}$–dimensions.
###### Proposition 4.5.
Let $C$ be a semidualizing $R$–module and let $M$ be a Cohen-Macaulay
$R$–module of dimension $n$. Then the following statements are equivalent.
* (i)
_$\mbox{$G_{C}$-pd}\,_{R}M <\infty$_.
* (ii)
_$\mbox{$G_{C}$-fd}\,_{R}\mbox{H}_{\mathfrak{m}}^{n}(M) <\infty$_.
Furthermore,
_$\mbox{$G_{C}$-fd}\,_{R}\mbox{H}_{\mathfrak{m}}^{n}(M)=\mbox{$G_{C}$-pd}\,_{R}M+n$_.
###### Proof.
First, we notice that, in view of [3, Theorem 4.2.1],
$\mbox{H}_{\mathfrak{m}}^{i}(M)\cong\mbox{H}_{\mathfrak{m}\oplus C}^{i}(M)$
for all $i$. Therefore, $M$ is a Cohen-Macaulay $R\ltimes C$–module with
dimension $n$. On the other hand, by using [14, Theorem 2.16], we have
$\mbox{$G_{C}$-pd}\,_{R}M=\mbox{Gpd}\,_{R\ltimes C}M$ and
$\mbox{$G_{C}$-fd}\,_{R}\mbox{H}_{\mathfrak{m}}^{n}(M)=\mbox{Gfd}\,_{R\ltimes
C}\mbox{H}_{\mathfrak{m}}^{n}(M)=\mbox{Gfd}\,_{R\ltimes
C}\mbox{H}_{\mathfrak{m}\oplus C}^{n}(M)$. Hence, by replacing $R$ with
$R\ltimes C$, one can use 4.2 to complete the proof.
∎
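For the reader's convenience, we recall the trivial extension (idealization) that appears in the proof above; this is the standard construction and is stated here only as background, not as a new assumption:
$R\ltimes C=R\oplus C,\qquad (r,c)\cdot(r^{\prime},c^{\prime})=(rr^{\prime},\,rc^{\prime}+r^{\prime}c),$
a commutative Noetherian local ring with maximal ideal $\mathfrak{m}\oplus C$; every $R$–module becomes an $R\ltimes C$–module by restriction of scalars along the projection $R\ltimes C\rightarrow R$.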
The following corollary is a consequence of the previous proposition and 4.3.
###### Corollary 4.6.
Let $R$ be Cohen-Macaulay, $C$ be a semidualizing $R$–module and let $M$ be a
$d$–dimensional finitely generated $R$–module of finite $G_{C}$–projective
dimension. Then the following statements are equivalent.
* (i)
_$M$ is Cohen-Macaulay._
* (ii)
_$\mbox{$G_{C}$-fd}\,_{R}\mbox{H}_{\mathfrak{m}}^{d}(M)=\mbox{$G_{C}$-pd}\,_{R}M+d$_.
* (iii)
_$\mbox{$G_{C}$-pd}\,_{R}\mbox{H}_{\mathfrak{m}}^{d}(M)=\mbox{$G_{C}$-pd}\,_{R}M+d$_.
###### Proof.
We notice that, by using [4, Exercise 1.2.26] and [19, Theorem 2.2.6], one can
deduce that $(R\ltimes C,\mathfrak{m}\oplus C)$ is a Cohen-Macaulay local
ring. Also, $M$ is a Cohen-Macaulay $R$–module if and only if $M$ is a
Cohen-Macaulay $R\ltimes C$–module. Therefore, the assertion follows from 4.3 and
4.5. ∎
## References
* [1] M. Auslander, _Modules over unramified regular local rings,_ Illinois J. Math. 5 (1961) 631–647.
* [2] L.L. Avramov and H-B. Foxby, _Ring homomorphisms and finite Gorenstein dimension,_ Proc. London Math. Soc. (3) 75 (1997) 241–270.
* [3] M.P. Brodmann and R.Y. Sharp, _Local cohomology: An algebraic introduction with geometric applications_ , Cambridge University Press, Cambridge, 1998.
* [4] W. Bruns and J. Herzog, _Cohen-Macaulay rings_, Cambridge University Press, Cambridge, 1993.
* [5] L.W. Christensen, A. Frankild and H. Holm, _On Gorenstein projective, injective and flat dimensions: a functorial description with applications_, J. Algebra 302 (2006) 231–279.
* [6] L.W. Christensen, H-B. Foxby and H. Holm, _Beyond totally reflexive modules and back_ , In: Noetherian and Non-Noetherian Perspectives, edited by M. Fontana, S-E. Kabbaj, B. Olberding and I. Swanson, Springer Science+Business Media, LLC, New York, 2011 101–143.
* [7] K. Divaani-Aazar, R. Naghipour and M. Tousi, _Cohomological dimension of certain algebraic varieties_ , Proc. Amer. Math. Soc. 130 (2002) 3537–3544.
* [8] E.E. Enochs, O.M.G. Jenda, Jinzhong Xu, _Foxby duality and Gorenstein injective and projective modules_ , Trans. Amer. Math. Soc. 348(8) (1996) 3223–3234.
* [9] E.E. Enochs and O.M.G. Jenda, _Relative homological algebra_, de Gruyter, Berlin, 2000.
* [10] M.A. Esmkhani, M. Tousi, _Gorenstein homological dimensions and Auslander categories_, J. Algebra 308(1) (2007) 321–329.
* [11] L. Gruson and M. Raynaud, _Critères de platitude et de projectivité. Techniques de "platification" d'un module_, Invent. Math. 13 (1971) 1–89.
* [12] M. Hellus and P. Schenzel, _On cohomologically complete intersections_, J. Algebra 320 (2008) 3733–3748.
* [13] H. Holm, _Rings with finite Gorenstein injective dimension_ , Proc. Amer. Math. Soc. 132 (2004) 1279–1283.
* [14] H. Holm and P. Jørgensen, _Semidualizing modules and related Gorenstein homological dimension_ , J. Pure Appl. Algebra. 205 (2006) 423–445.
* [15] C. U. Jensen, _On the Vanishing of $\underset{}{\varprojlim}^{(i)}$,_ J. Algebra 15 (1970) 151–166.
* [16] M. Rahro Zargar, _Local cohomology modules and Gorenstein injectivity with respect to a semidualizing module_, Arch. Math. (Basel) 100 (2013) 25–34.
* [17] M. Rahro Zargar and H. Zakeri, _On injective and Gorenstein injective dimensions of local cohomology modules_, to appear in Algebra Colloquium.
* [18] J.J. Rotman, _An introduction to homological algebra_ , Second ed., Springer, New York, 2009.
* [19] S. Sather-Wagstaff, _Semidualizing modules_, http://www.ndsu.edu/pubweb/ssatherw.
* [20] S. Sather-Wagstaff and S. Yassemi, _Modules of finite homological dimension with respect to a semidualizing module_, Arch. Math. (Basel) 93 (2009) 111–121.
* [21] R. Sazeedeh, _Gorenstein injective of the section functor,_ Forum Mathematicum, 22 (2010) 1117–1127.
* [22] R. Takahashi and D. White, _Homological aspects of semidualizing modules,_ Math. Scand. 106 (2010) 5–22.
* [23] W. V. Vasconcelos, _Divisor theory in module categories_ , North-Holland Math. Stud. 14, North-Holland Publishing Co., Amsterdam (1974).
|
arxiv-papers
| 2013-02-26T11:18:24 |
2024-09-04T02:49:42.133056
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Majid Rahro Zargar and Hossein Zakeri",
"submitter": "Majid Rahro Zargar",
"url": "https://arxiv.org/abs/1302.6395"
}
|
1302.6429
|
# Dynamical Evolution of Photons in Plasma Waves
Zhigang Bu1) Yuee Luo1) Hehe Li1) Wenbo Chen1) Peiyong Ji1)2)
[email protected] 1) Department of Physics, Shanghai University, Shanghai
200444, China
2) The Shanghai Key Lab of Astrophysics, Shanghai 200234, China
###### Abstract
From the viewpoint of the corpuscular model, electromagnetic radiation can be
regarded as a system composed of photons with different energies and momenta,
which provides a description of the interaction of electromagnetic waves with
plasmas different from that of the Maxwell wave theory. In this paper
the evolution behavior of a single photon and the collective effect of a
photon system in plasma waves are described uniformly in the framework of photon
dynamics. In a small-amplitude plasma wave the modulation of the photon dynamical
behavior by the plasma wave can be treated as a perturbation, and the photon
acceleration effect and photon Landau damping are investigated in the linear
theory. In a plasma wave with arbitrary amplitude the photon evolution
trajectories in phase space and coordinate space are obtained by solving the
dynamical equations, and the condition and possibility for photons to be trapped
in the given plasma wave are also discussed.
photon dynamics, plasma wave, photon Landau damping, trajectory equation
###### pacs:
52.25.Dg, 52.20.Dq
## I Introduction
The interaction of intense fields with plasmas is a topic of intense interest
in plasma physics and is involved in many areas, for example wakefield
excitation, laser-plasma acceleration, and fast-ignition fusion [1-7]. A
laser pulse can excite a plasma wave via the ponderomotive force when
propagating through a plasma, and the motions of electrons and photons in the
plasma are also modulated by this plasma wave. In the interaction process the
laser-plasma system can be described by the Maxwell-fluid theory, which has
achieved great success over the last two decades [8-16].
From the viewpoint of corpuscular theory, an electromagnetic field interacting
with a plasma can be treated as a nonequilibrium photon system governed by a
kinetic equation; this description is known as photon kinetic theory [17-19]. Based on
the photon kinetic theory the dynamical behavior of a single photon in a
background plasma can be extended to the evolution of the entire photon
system. As a special photon system, a laser pulse has many advantages over
other electromagnetic radiation, such as good directivity, high intensity, and
narrow spectral width. We can treat the laser pulse as an ordered photon
current described by the kinetic theory. The connection between the photon
system and a single photon is the photon number distribution in phase space,
which is essentially the statistical weight of photons with various modes in
the entire photon system. The evolution of electromagnetic radiation in a
plasma can be derived by taking a weighted average over all photon behaviors.
Since photon kinetic theory is an effective method for describing the
propagation of electromagnetic fields in plasmas, the study of single-photon
dynamics is worthwhile.
There are some similarities between photon and electron behaviors in plasma
waves. For instance, if the photon velocity matches the phase velocity of the
plasma wave, the photon can be accelerated effectively by the plasma wave
[20-23], and the photon system can exert collisionless Landau damping on the
plasma wave [24-26]; the same holds for electrons [27-31]. However, the
differences are obvious. Being electrically neutral, the photon is not
influenced by the electrostatic field of the plasma wave, whereas the electron
is. The propagation of photons in a plasma is governed by the refractive
index, i.e., by the electron number density of the plasma. The refractive
index of a plasma can be regarded as an effective potential field acting on
photons, and photons experience equivalent forces when moving in this
potential field, just as electrons are affected by electrostatic forces in
plasma waves. In this sense, the dynamical evolution of a single photon in a
plasma is determined by the electron number density of the plasma. In an
inhomogeneous plasma the equivalent forces acting on photons depend on their
positions, and photons with different modes are influenced by different forces
owing to dispersion.
The purpose of this paper is to investigate the propagation and evolution
behaviors of photons in a background plasma wave by means of the photon
dynamical theory. The photon dynamical equations are expressed in Hamiltonian
formulation based on the approximation of geometrical optics [22], and the
photon frequency is regarded as the Hamiltonian of the photon. Based on the
dynamical theory, we describe some phenomena in the interaction of photons
with a plasma wave in a unified framework. If the amplitude of a plasma wave
is small enough, the modulation of the photon dynamical behavior by the plasma
wave can be treated as a perturbation. In Sec. 2 the photon dynamical equations
are treated perturbatively, and the photon acceleration effect, along with the
collective collisionless damping originating from the laser pulse, is analyzed
to second order. In Sec. 3 the dynamical evolution of a
single photon in a plasma wave with arbitrary amplitude is investigated. The
photon evolution trajectory in phase space is derived by solving photon
dynamical equations. The condition and possibility for photons to be trapped in
a given plasma wave are also analysed, and the motion equations in coordinate
space for both trapped and untrapped photons are obtained analytically.
Finally, a discussion is given in Sec. 4.
## II Photon Behavior in Small-Amplitude Plasma Wave
Let us consider a one-dimensional model for simplicity, and suppose that the
electromagnetic wave propagates along the $x$ axis. In this paper we assume that
the intensity of the electromagnetic wave is not strong enough to change the
configuration of the plasma wave appreciably, so that the back-reaction of the
electromagnetic wave (or the photon system) on the plasma wave can be ignored.
When an electromagnetic wave propagates in a plasma wave, the disturbed
electron number density modulates the frequency $\omega$ and wave vector $k$
of the electromagnetic wave to produce frequency shifting, which can be
described by the Maxwell wave theory. Nevertheless, we will discuss this
problem in an alternative way. Considering the electromagnetic wave as a
photon gas, let us discuss the kinetic evolution of a single photon
in a plasma wave. If the amplitude of a plasma wave is small, $\delta n\ll
n_{0}$, where $\delta n$ is the disturbed electron number density and $n_{0}$
is the ambient plasma density, the photon kinetics can be expressed by
perturbation expansion:
$x=x_{0}+v_{0}t+x_{1}+x_{2}+\cdots\cdots,$ (1)
$k=k_{0}+k_{1}+k_{2}+\cdots\cdots,$ (2)
$\omega=\omega_{0}+\omega_{1}+\omega_{2}+\cdots\cdots,$ (3)
$v=v_{0}+v_{1}+v_{2}+\cdots\cdots,$ (4)
where $x$ and $v$ are the position and velocity of the photon, $\omega$ and
$k$ are regarded as the energy and momentum of the photon. The subscript 0
denotes the initial quantity, and 1 and 2 denote the first- and second-order
corrections respectively, and $v_{0}=c^{2}k_{0}/\omega_{0}$.
We assume that the configuration of the plasma wave can be expressed by a
function $f(x,t)=f(k_{pl}x-\omega_{pl}t)$, where $k_{pl}$ and $\omega_{pl}$
are the wave number and frequency of the plasma wave. The Hamiltonian of the
photon is proportional to its frequency $\omega$ as determined by the
dispersion relation of the electromagnetic wave in the plasma
$H=\hbar\omega=\hbar\sqrt{c^{2}k^{2}+\omega_{p}^{2}},$ (5)
where $\omega_{p}$ is the local plasma frequency determined by the disturbed
electron number density in the plasma wave as
$\omega_{p}^{2}=\omega_{p0}^{2}(1+f(x,t))$. The photon evolution in a plasma
wave is governed by the dynamical equations in Hamiltonian formulation
$v=\frac{dx}{dt}=\frac{\partial H}{\partial k},$ (6)
$\frac{dk}{dt}=-\frac{\partial H}{\partial x}.$ (7)
For simplicity, we suppose the plasma wave has a cosine shape,
$f=\kappa\cos(k_{pl}x-\omega_{pl}t)$, where $\kappa=\delta n/n_{0}$ is the
amplitude of the plasma wave and is a small quantity, $\kappa\ll 1$. Dynamical
equation (7) gives
$\frac{dk}{dt}=\frac{\omega_{p0}^{2}k_{pl}\kappa}{2\omega}\sin(k_{pl}x-\omega_{pl}t).$
(8)
Substituting Eqs.(1)-(3) into Eq.(8) the first- and second-order equations for
the photon momentum are derived as
$\frac{dk_{1}}{dt}=\frac{\omega_{p0}^{2}k_{pl}\kappa}{2\omega_{0}}\sin\left(k_{pl}x_{0}-\Omega
t\right),$ (9) $\displaystyle\frac{dk_{2}}{dt}=$
$\displaystyle\frac{\omega_{p0}^{2}k_{pl}\kappa}{2\omega_{0}}\left[k_{pl}x_{1}\cos\left(k_{pl}x_{0}-\Omega
t\right)-\frac{c^{2}k_{0}k_{1}}{\omega_{0}^{2}}\sin\left(k_{pl}x_{0}-\Omega
t\right)\right.$ (10)
$\displaystyle~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}\left.-\frac{\omega_{p0}^{2}\kappa}{2\omega_{0}^{2}}\cos\left(k_{pl}x_{0}-\Omega
t\right)\sin\left(k_{pl}x_{0}-\Omega t\right)\right],$
in which $\Omega=\omega_{pl}-k_{pl}v_{0}$. The integral of Eq.(9) gives a
first-order correction of the photon momentum in the small-amplitude plasma
wave
$k_{1}=\frac{\omega_{p0}^{2}k_{pl}\kappa}{2\omega_{0}\Omega}\left[\cos\left(k_{pl}x_{0}-\Omega
t\right)-\cos\left(k_{pl}x_{0}\right)\right].$ (11)
In plasma waves the photon frequency is modulated by the electron number
density, an effect known as photon acceleration. Using the dispersion relation
of an electromagnetic field in plasmas, we can expand the photon frequency to
second order as
$\displaystyle\omega$ $\displaystyle=$
$\displaystyle\omega_{0}\left\\{1+\frac{c^{2}k_{0}^{2}}{2\omega_{0}^{2}}\left(\frac{2k_{1}}{k_{0}}+\frac{\omega_{p0}^{2}\kappa}{c^{2}k_{0}^{2}}\cos\left(k_{pl}x_{0}-\Omega
t\right)\right)\right.$ (12)
$\displaystyle+\left.\frac{c^{2}k_{0}^{2}}{2\omega_{0}^{2}}\left[\left(\frac{1}{k_{0}^{2}}-\frac{c^{2}}{\omega_{0}^{2}}\right)k_{1}^{2}+\frac{2k_{2}}{k_{0}}-\frac{\omega_{p0}^{2}k_{pl}\kappa}{c^{2}k_{0}^{2}}x_{1}\sin\left(k_{pl}x_{0}-\Omega
t\right)\right.\right.$
$\displaystyle\left.\left.-\frac{\omega_{p0}^{4}\kappa^{2}}{4c^{2}k_{0}^{2}\omega_{0}^{2}}\cos^{2}\left(k_{pl}x_{0}-\Omega
t\right)-\frac{\omega_{p0}^{2}\kappa
k_{1}}{k_{0}\omega_{0}^{2}}\cos\left(k_{pl}x_{0}-\Omega
t\right)\right]\right\\},$
where $\omega_{0}$ and $k_{0}$ are the initial photon frequency and momentum,
respectively. The frequency shift of the photon in the plasma wave can be
analyzed using Eq.(12). Defining the frequency shift at time $t$ as
$\Delta\omega=\omega(t)-\omega(0)$, where $\omega(0)$ is the initial photon
frequency, and using the first-order equation for the photon momentum, Eq.(11),
we find
$\Delta\omega=-\frac{\omega_{p0}^{2}\omega_{pl}\kappa}{2\omega_{0}\Delta
vk_{pl}}\left[\cos\left(k_{pl}x_{0}+k_{pl}\Delta
vt\right)-\cos\left(k_{pl}x_{0}\right)\right]$ (13)
to linear order, where $\Delta v=v_{0}-v_{\phi}$ and
$v_{\phi}=\omega_{pl}/k_{pl}$ denotes the phase velocity of the plasma wave. If
the photon velocity $v_{0}$ is very close to the phase velocity of the plasma
wave, i.e., in the limit $\Delta v\rightarrow 0$, we get the maximum frequency
shift
$\lim_{\Delta v\rightarrow
0}\Delta\omega=\frac{k_{pl}t\omega_{p0}^{2}v_{\phi}\kappa}{2\omega_{0}}\sin\left(k_{pl}x_{0}\right),$
(14)
which is consistent with earlier publications [21].
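As a cross-check of the perturbative result, the dynamical equations (6) and (7) can also be integrated numerically and compared with the first-order shift of Eq.(13). The following Python sketch does this for the cosine plasma wave; all numerical parameter values are illustrative assumptions made for this example, not values taken from the text, and the two results should agree up to corrections of order $\kappa^{2}$.

```python
# Numerical sketch: integrate the photon Hamiltonian equations (6)-(7) for a
# cosine plasma wave and compare the frequency shift with Eq. (13).
# Units with c = 1; all parameter values are illustrative assumptions.
import numpy as np
from scipy.integrate import solve_ivp

c = 1.0
w_p0 = 0.3          # ambient plasma frequency
kappa = 0.01        # plasma-wave amplitude (kappa << 1)
k_pl = 0.25         # plasma-wave wave number
w_pl = 0.2          # plasma-wave frequency, so v_phi = w_pl/k_pl = 0.8 c

def omega(x, k, t):
    """Local photon frequency from the dispersion relation, Eq. (5)."""
    w_p2 = w_p0**2 * (1.0 + kappa * np.cos(k_pl * x - w_pl * t))
    return np.sqrt(c**2 * k**2 + w_p2)

def rhs(t, y):
    """Hamiltonian equations: dx/dt = dH/dk, dk/dt = -dH/dx (Eqs. (6)-(8))."""
    x, k = y
    w = omega(x, k, t)
    return [c**2 * k / w,
            w_p0**2 * k_pl * kappa * np.sin(k_pl * x - w_pl * t) / (2.0 * w)]

x0, k0 = 0.0, 1.0
w0 = np.sqrt(c**2 * k0**2 + w_p0**2)        # unperturbed initial frequency
v0 = c**2 * k0 / w0
t_end = 200.0
sol = solve_ivp(rhs, (0.0, t_end), [x0, k0], rtol=1e-10, atol=1e-12)

x_t, k_t = sol.y[0, -1], sol.y[1, -1]
dw_numeric = omega(x_t, k_t, t_end) - omega(x0, k0, 0.0)

dv = v0 - w_pl / k_pl                       # Delta v = v0 - v_phi
dw_linear = -(w_p0**2 * w_pl * kappa) / (2.0 * w0 * dv * k_pl) * (
    np.cos(k_pl * x0 + k_pl * dv * t_end) - np.cos(k_pl * x0))

print(dw_numeric, dw_linear)                # should agree to O(kappa^2)
```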
From the corpuscular point of view, electromagnetic radiation can be regarded
as a particle system composed of photons with various modes. In principle, the
evolution of the entire radiation field in a plasma can be derived by
statistically summing over all photon behaviors. We now discuss the energy
exchange between the electromagnetic wave and the plasma wave based on photon
dynamics; this is a collective effect of the photon system known as photon
Landau damping.
Taking the derivative of Eq.(12) with respect to time $t$, the average rate of
energy change of the photon in the plasma wave, to second order, is given by
$\displaystyle\left\langle\frac{d\omega}{dt}\right\rangle$ $\displaystyle=$
$\displaystyle\frac{c^{2}}{\omega_{0}}\left(1-\frac{c^{2}k_{0}^{2}}{\omega_{0}}\right)\langle
k_{1}\dot{k}_{1}\rangle+\frac{c^{2}k_{0}}{\omega_{0}}\langle\dot{k}_{2}\rangle$
(15)
$\displaystyle-\frac{\omega_{p0}^{2}k_{pl}\kappa}{2\omega_{0}}\left(\langle\dot{x}_{1}\sin\left(k_{pl}x_{0}-\Omega
t\right)\rangle-\Omega\langle x_{1}\cos\left(k_{pl}x_{0}-\Omega
t\right)\rangle\right)$
$\displaystyle-\frac{c^{2}k_{0}\omega_{p0}^{2}\kappa}{2\omega_{0}^{3}}\left(\langle\dot{k}_{1}\cos\left(k_{pl}x_{0}-\Omega
t\right)+\Omega\langle k_{1}\sin\left(k_{pl}x_{0}-\Omega
t\right)\rangle\right).$
Here the dot stands for the derivative with respect to time, and the bracket
$\langle\cdots\rangle$ denotes the average over the initial position $x_{0}$
within one wavelength of the plasma wave. The first-order correction to the
photon velocity can be obtained from the dynamical equation (6) as
$v_{1}=\dot{x}_{1}=\frac{c^{2}\omega_{p0}^{2}\kappa}{2\omega_{0}^{3}}\left(\frac{\omega_{p0}^{2}k_{pl}}{\omega_{0}\Omega}-k_{0}\right)\cos\left(k_{pl}x_{0}-\Omega
t\right)-\frac{c^{2}\omega_{p0}^{4}k_{pl}\kappa}{2\omega_{0}^{4}\Omega}\cos(k_{pl}x_{0}).$
(16)
The integral of Eq.(16) gives
$x_{1}=\frac{c^{2}\omega_{p0}^{2}\kappa}{2\omega_{0}^{3}\Omega}\left(k_{0}-\frac{\omega_{p0}^{2}k_{pl}}{\omega_{0}\Omega}\right)\left[\sin\left(k_{pl}x_{0}-\Omega
t\right)-\sin\left(k_{pl}x_{0}\right)\right]-\frac{c^{2}\omega_{p0}^{4}k_{pl}\kappa}{2\omega_{0}^{4}\Omega}t\cos\left(k_{pl}x_{0}\right).$
(17)
Substituting Eqs.(9)-(11), (16) and (17) into Eq.(15) we get
$\left\langle\frac{d\omega}{dt}\right\rangle=-\frac{c^{2}\omega_{p0}^{6}\omega_{pl}k_{pl}^{2}\kappa^{2}}{8\omega_{0}^{5}}\frac{\partial}{\partial\Omega}\left(\frac{1}{\Omega}\sin(\Omega
t)\right)-\frac{c^{2}\omega_{p0}^{4}k_{0}\omega_{pl}k_{pl}\kappa^{2}}{4\omega_{0}^{4}}\frac{1}{\Omega}\sin(\Omega
t).$ (18)
In order to extend the rate of energy change of a photon to the whole
electromagnetic field we need to sum over all photons in the field.
Assuming that the initial photon number distribution is $N(\omega_{0})$, the
average rate of energy change of the electromagnetic field is given by
$\overline{\langle\dot{\omega}\rangle}=\int\langle\dot{\omega}\rangle
N\left(\omega_{0}\right)d\omega_{0}.$ (19)
Eq.(18) indicates that $\langle\dot{\omega}\rangle$ is oscillatory and damps
rapidly, except at the resonance singularity $\Omega=0$. Thus the main
contribution to Eq.(19) comes from the neighborhood of the singularity, which
can be evaluated in the long-time limit and expressed in terms of a
delta function
$\lim_{t\rightarrow\infty}\frac{1}{\Omega}\sin\left(\Omega
t\right)=\pi\delta\left(\Omega\right)=\frac{\pi\omega_{\phi}^{2}k_{\phi}}{\omega_{p0}^{2}k_{pl}}\delta\left(\omega_{0}-\omega_{\phi}\right),$
(20)
where $\omega_{\phi}$ is the resonance frequency of the photon with the plasma
wave, defined by $v_{0}(\omega_{0}=\omega_{\phi})=v_{\phi}$. Since
$\partial/\partial\Omega=-\left(\omega_{0}^{2}k_{0}\right)/\left(\omega_{p0}^{2}k_{pl}\right)\partial/\partial\omega_{0}$,
we obtain
$\lim_{t\rightarrow\infty}\left\langle\frac{d\omega}{dt}\right\rangle=\frac{\pi
c\omega_{p0}^{2}\omega_{\phi}^{2}k_{\phi}\omega_{pl}\kappa^{2}}{8}\frac{\partial}{\partial\omega_{0}}\left[\frac{1}{\omega_{0}^{2}}\delta\left(\omega_{0}-\omega_{\phi}\right)\right].$
(21)
Inserting Eq.(21) into Eq.(19), the average rate of energy change of the
electromagnetic field in the plasma wave is
$\overline{\langle\dot{\omega}\rangle}=-\frac{\pi
c\omega_{p0}^{2}k_{\phi}\omega_{pl}\kappa^{2}}{8}\frac{\partial
N\left(\omega_{0}\right)}{\partial\omega_{0}}\Bigg{|}_{\omega_{0}=\omega_{\phi}}.$
(22)
Eq.(22) reveals a mechanism of energy exchange between the electromagnetic wave
and the plasma wave. The frequency $\omega_{\phi}$ can be regarded as the
resonance frequency because a photon with this frequency has a velocity
matching the phase velocity of the plasma wave. From Eq.(22) we find that the
energy exchange is determined by the derivative of the photon number
distribution near $\omega_{\phi}$: the steeper the slope of this distribution,
the more pronounced the energy exchange. This means that only photons with
frequencies close to the resonance frequency are involved in this energy
exchange with the plasma wave. From the preceding discussion it is clear that
the energy exchange is not related to collisions between photons and electrons.
Thus the mechanism of the energy exchange here is a collisionless resonance
effect between the photon system and the plasma wave, namely photon Landau
damping [24,25]. Photon Landau damping has some properties in common with
electron Landau damping: both are collective velocity-resonance phenomena with
plasma waves. However, the differences between them are obvious. In electron
Landau damping the energy exchange is achieved through the work done on the
electrons by the electrostatic field, whereas photons are electrically neutral
and are not influenced by the electrostatic force. In fact the energy exchange
in photon Landau damping is achieved through the photon frequency shift in the
plasma wave, which is related to the fluctuation of the electron number density.
The plasma wave is described by the Poisson equation
$\nabla^{2}\phi=\frac{en_{0}f}{\varepsilon_{0}}=-\frac{dE}{dx},$ (23)
where $\phi$ and $E$ are the scalar potential and strength of the longitudinal
electric field in plasma wave. The solution of Eq.(23) is
$E=\frac{en_{0}\kappa}{\varepsilon_{0}k_{pl}}\sin\left(k_{pl}x-\omega_{pl}t\right)=\hat{E}\sin\left(k_{pl}x-\omega_{pl}t\right).$
(24)
Then the amplitude of the plasma wave can be written as
$\kappa=(\varepsilon_{0}k_{pl}\hat{E})/(en_{0})$, and Eq.(22) becomes
$\overline{\langle\dot{\omega}\rangle}=-\frac{\pi
c\varepsilon_{0}k_{\phi}\omega_{pl}k_{pl}^{2}\hat{E}^{2}}{8mn_{0}}\frac{\partial
N\left(\omega_{0}\right)}{\partial\omega_{0}}\Bigg{|}_{\omega_{0}=\omega_{\phi}}.$
(25)
If we consider only photon Landau damping between the photon system and the
plasma wave and neglect other interactions, the damping rate $\gamma$ can be
obtained from energy conservation,
$\frac{1}{2}\varepsilon_{0}\hat{E}^{2}\gamma+\hbar\overline{\langle\dot{\omega}\rangle}=0.$
(26)
Substituting Eq.(25) into (26) the damping rate is obtained as
$\gamma=\frac{\pi c\hbar k_{\phi}\omega_{pl}k_{pl}^{2}}{4mn_{0}}\frac{\partial
N\left(\omega_{0}\right)}{\partial\omega_{0}}\Bigg{|}_{\omega_{0}=\omega_{\phi}}.$
(27)
We consider the electromagnetic wave to be a time-Gaussian laser pulse, whose
initial photon number distribution in frequency space is
$N\left(\omega_{0}\right)d\omega_{0}=N_{0}\frac{\tau\omega_{c}}{\sqrt{2\pi}\omega_{0}}\exp\left[-\frac{1}{2}\left(\omega_{0}-\omega_{c}\right)^{2}s^{2}\right]d\omega_{0},$
(28)
where $N_{0}$ is a normalization constant depending on the intensity of the
initial laser pulse, $\omega_{c}$ is the center frequency, and $s$ is the
duration time. Substituting Eq.(28) into Eq.(27), the damping rate of the
time-Gaussian laser pulse on the plasma wave is
$\gamma=\sqrt{\frac{\pi}{2}}\frac{\hbar\tau\omega_{pl}k_{pl}^{2}\omega_{c}N_{0}}{4mn_{0}}\frac{ck_{\phi}}{\omega_{\phi}}\left(\left(\omega_{c}-\omega_{\phi}\right)s^{2}-\frac{1}{\omega_{\phi}}\right)\exp\left[-\frac{1}{2}\left(\omega_{c}-\omega_{\phi}\right)^{2}s^{2}\right].$
(29)
In Ref. 26 the same damping rate was obtained using kinetic theory. If
the phase velocity of the plasma wave satisfies the condition
$v_{\phi}<v_{c}-(c\omega_{p}^{2})/(\omega_{c}^{4}s^{2})$, where
$v_{c}=c\sqrt{1-\omega_{p}^{2}/\omega_{c}^{2}}$, we have the damping rate
$\gamma>0$ for the time-Gaussian laser pulse.
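To illustrate the size and sign of this damping rate, Eq.(29) can be evaluated directly. The short Python sketch below does so for an assumed set of plasma and pulse parameters; every numerical value, including the normalisation $N_{0}$, is a placeholder chosen for illustration only and is not taken from the text.

```python
# Numerical sketch: evaluate the photon Landau damping rate of Eq. (29) for a
# time-Gaussian pulse.  All parameter values are illustrative assumptions.
import numpy as np

e, m_e, c = 1.602e-19, 9.109e-31, 2.998e8
eps0, hbar = 8.854e-12, 1.055e-34

n0 = 1.0e25                                  # ambient electron density [m^-3]
w_p0 = np.sqrt(n0 * e**2 / (eps0 * m_e))     # ambient plasma frequency

beta_phi = 0.995                             # plasma-wave phase velocity / c
v_phi = beta_phi * c
w_pl, k_pl = w_p0, w_p0 / v_phi              # plasma-wave frequency, wave number
w_phi = w_p0 / np.sqrt(1.0 - beta_phi**2)    # resonance frequency: v0(w_phi) = v_phi
k_phi = (w_phi / c) * np.sqrt(1.0 - (w_p0 / w_phi)**2)

# pulse parameters (assumed): centre frequency slightly above resonance
s = 50.0 / w_p0                              # duration time
tau = s                                      # duration factor appearing in Eq. (28)
w_c = w_phi + 0.5 / s
N0 = 1.0                                     # normalisation constant (arbitrary here)

gamma = (np.sqrt(np.pi / 2.0) * hbar * tau * w_pl * k_pl**2 * w_c * N0
         / (4.0 * m_e * n0)
         * (c * k_phi / w_phi)
         * ((w_c - w_phi) * s**2 - 1.0 / w_phi)
         * np.exp(-0.5 * (w_c - w_phi)**2 * s**2))

# sign criterion quoted after Eq. (29): gamma > 0 when
#   v_phi < v_c - c*w_p0**2/(w_c**4 * s**2), with v_c = c*sqrt(1 - w_p0**2/w_c**2)
v_c = c * np.sqrt(1.0 - (w_p0 / w_c)**2)
print(gamma > 0, v_phi < v_c - c * w_p0**2 / (w_c**4 * s**2))
```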
## III Photon Evolution in Plasma Wave with Arbitrary Amplitude
Now we discuss the case of a plasma wave with arbitrary amplitude. It is
convenient to perform a coordinate transformation from the laboratory frame to
a new frame: $\xi=x-v_{\phi}t$, $\tau=t$. In this new frame the shape of the
plasma wave has the simpler expression $f=f(k_{pl}\xi)$, and the photon
dynamical equations derived from Eqs.(6) and (7) are
$\frac{d\xi}{d\tau}=v-v_{\phi}=c\sqrt{1-\frac{\omega_{p}^{2}}{\omega^{2}}}-v_{\phi},$
(30)
$\frac{d\omega}{d\tau}=-\frac{\omega_{p0}^{2}v_{\phi}}{2\omega}\frac{\partial
f(k_{pl}\xi)}{\partial\xi}.$ (31)
In Eq.(31) we have eliminated the photon momentum $k$ by using the dispersion
relation of the electromagnetic wave in the plasma. Combining Eqs.(30) and (31),
the time $\tau$ can be eliminated, and the evolution equation of the photon in
$\xi-\omega$ space is
$\frac{dW}{df}=-\frac{\beta_{\phi}}{\sqrt{1-(1+f)/W}-\beta_{\phi}},$ (32)
where $\beta_{\phi}=v_{\phi}/c$ and $W=\omega^{2}/\omega_{p0}^{2}$. For
convenience we also refer to the $\xi-\omega$ space as phase space. The
solution of Eq.(32) is
$\sqrt{\alpha}\left(\sqrt{W}-\beta_{\phi}\sqrt{W-f-1}\right)=1,$ (33)
where $\alpha$ is a constant determined uniquely by the initial condition
$\alpha=\frac{1}{\left(\sqrt{W_{0}}-\beta_{\phi}\sqrt{W_{0}-f_{0}-1}\right)^{2}}.$
(34)
In Eq.(34) $W_{0}$ depends on the initial photon frequency and $f_{0}$ is
determined by the initial position of the photon injected into the plasma wave
as $f_{0}=f(k_{pl}\xi_{0})$. Eq.(33) indicates that
$\left(\sqrt{W}-\beta_{\phi}\sqrt{W-f-1}\right)$ is an invariant of the motion,
fixed by the initial conditions. From Eq.(33) the photon frequency is
obtained as
$\frac{\omega}{\omega_{p0}}=\frac{\gamma_{\phi}^{2}}{\sqrt{\alpha}}\left(1\pm\beta_{\phi}\sqrt{1-\frac{\alpha}{\gamma_{\phi}^{2}}(1+f)}\right),$
(35)
which describes the evolution trajectory of a photon in phase space.
In Eq.(35) the notation “$\pm$” denotes the two different branches of the
trajectory in phase space, which represent the evolutions of photons with two
different kinds of initial conditions. The sign “$+$” stands for the upper
branch and “$-$” denotes the lower branch. For simplicity we again suppose the
plasma wave has a cosine shape, $f(\xi)=\kappa\cos(k_{pl}\xi)$. The photon
evolution trajectory in phase space expressed by Eq.(35) is illustrated
in Fig.1; a similar result was obtained numerically in Ref.22.
Figure 1: The photon evolution trajectory in phase space. The dotted line
denotes the evolution trajectory of a trapped photon, which is confined to one
wavelength of the plasma wave and oscillates between the upper and lower
branches. The dashed line denotes the evolution trajectory of an untrapped
photon; such photons can propagate through different periods of the plasma
wave. The solid line is the separatrix between the trapped and untrapped
photon trajectories.
Depending on the initial conditions, the evolution trajectory can be divided
into two types. The first is displayed as the dotted line in Fig.1, where the
upper and lower branches intersect each other to form a closed trajectory.
Photons evolving on this trajectory are confined to one wavelength of the
plasma wave and oscillate between the upper and lower branches; such a photon
is said to be trapped by the plasma wave. The second type is displayed as the
dashed line, where the upper and lower branches remain separated. In this
situation the photon evolves on only one branch and propagates through
different periods of the plasma wave, so it is obviously not trapped by the
plasma wave. If the photon is on the upper branch of the trajectory at the
initial time, it will always evolve on the upper branch, with a velocity faster
than the phase velocity of the plasma wave. Conversely, the photon will evolve
on the lower branch and propagate backward with respect to the plasma wave if
its initial position is on the lower branch in phase space. In Fig.1 the solid
line denotes the separatrix between the trajectories of trapped and untrapped
photons.
Whether the photon can be trapped is determined by the initial conditions. We
have seen that a trapped photon has a closed evolution trajectory; that is, the
equation obtained by setting the factor under the square root in Eq.(35) to zero,
$1-\frac{\alpha}{\gamma_{\phi}^{2}}(1+\kappa\cos(k_{pl}\xi))=0$ (36)
is solvable for $\xi$. This yields the trapping condition for photons in the plasma wave:
$\frac{\gamma_{\phi}^{2}}{1+\kappa}\leq\alpha\leq\frac{\gamma_{\phi}^{2}}{1-\kappa}.$
(37)
All photons with initial conditions satisfying inequality (37) can be trapped
by the plasma wave. The trapped photons have velocities matching the phase
velocity of the plasma wave and can exchange energy effectively with it.
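The trapping criterion can be checked directly from the initial data: given $(\xi_{0},\omega_{0})$, one computes $\alpha$ from Eq.(34) and tests inequality (37). The short Python sketch below does this; all parameter values and trial frequencies are illustrative assumptions, not values taken from the figures.

```python
# Numerical sketch: compute alpha from Eq. (34) for a given initial condition
# and test the trapping condition (37).  Parameter values are illustrative.
import numpy as np

w_p0, c = 1.0, 1.0
kappa, k_pl = 0.3, 0.2          # plasma-wave amplitude and wave number
beta_phi = 0.9                  # phase velocity / c
gamma_phi2 = 1.0 / (1.0 - beta_phi**2)

def f(xi):
    return kappa * np.cos(k_pl * xi)

def alpha_of(xi0, w0):
    """Invariant alpha of Eq. (34); requires w0 above the local cutoff."""
    W0 = (w0 / w_p0)**2
    return 1.0 / (np.sqrt(W0) - beta_phi * np.sqrt(W0 - f(xi0) - 1.0))**2

def is_trapped(xi0, w0):
    """Trapping condition (37)."""
    a = alpha_of(xi0, w0)
    return gamma_phi2 / (1.0 + kappa) <= a <= gamma_phi2 / (1.0 - kappa)

xi0 = 0.5 * np.pi / k_pl        # initial position where f(xi0) = 0
for w0 in (1.3, 1.6, 2.5, 4.0): # trial initial frequencies (units of w_p0)
    print(w0, alpha_of(xi0, w0), is_trapped(xi0, w0))
```

With these assumed parameters, intermediate initial frequencies fall inside the window (37) and are flagged as trapped, while photons that are initially much slower or much faster than the wave are not.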
Let us discuss the motion equation of photons in coordinate space. The
integral of dynamical equation (30) gives
$\int_{\xi_{0}}^{\xi}\frac{d\xi^{\prime}}{\sqrt{1-\left(1+f(\xi^{\prime})\right)/W(\xi^{\prime})}-\beta_{\phi}}=c\tau.$
(38)
Assuming that the plasma wave has the cosine shape discussed above and
inserting Eqs.(33) and (35) into Eq.(38), the motion equation becomes
$c\tau=\beta_{\phi}\gamma_{\phi}^{2}(\xi-\xi_{0})\pm\frac{\gamma_{\phi}^{2}}{k_{pl}}\int_{k_{pl}\xi_{0}}^{k_{pl}\xi}\frac{dx}{\sqrt{\left(1-\frac{\alpha}{\gamma_{\phi}^{2}}-\frac{\alpha\kappa}{\gamma_{\phi}^{2}}\right)+\frac{2\alpha\kappa}{\gamma_{\phi}^{2}}\sin^{2}\frac{x}{2}}}.$
(39)
The sign “$+$” in Eq.(39) denotes the motion trajectory of the photon evolving
on the upper branch, and “$-$” represents the trajectory of the photon
evolving on the lower branch.
We analyze the motion equation of the trapped photon first. The maximum and
minimum frequencies of the trapped photon in the plasma wave, obtained from
Eq.(35), are
$\omega_{max}=\frac{\gamma_{\phi}^{2}\omega_{p0}}{\sqrt{\alpha}}\left(1+\beta_{\phi}\sqrt{1-\frac{\alpha}{\gamma_{\phi}^{2}}(1-\kappa)}\right),~{}~{}\omega_{min}=\frac{\gamma_{\phi}^{2}\omega_{p0}}{\sqrt{\alpha}}\left(1-\beta_{\phi}\sqrt{1-\frac{\alpha}{\gamma_{\phi}^{2}}(1-\kappa)}\right).$
(40)
The initial conditions of the trapped photon satisfy inequality (37), which
gives $1-\alpha/\gamma_{\phi}^{2}-\alpha\kappa/\gamma_{\phi}^{2}<0$, and then
the motion equation (39) simplifies to
$c\tau=\beta_{\phi}\gamma_{\phi}^{2}(\xi-\xi_{0})\pm\frac{i2\gamma_{\phi}^{2}}{k_{pl}\sqrt{\frac{\alpha\kappa}{\gamma_{\phi}^{2}}+\frac{\alpha}{\gamma_{\phi}^{2}}-1}}\int_{\frac{k_{pl}\xi_{0}}{2}}^{\frac{k_{pl}\xi}{2}}\frac{d\theta}{\sqrt{1-k^{2}\sin^{2}\theta}},$
(41)
with
$k^{2}=\frac{(2\alpha\kappa/\gamma_{\phi}^{2})}{(\alpha\kappa/\gamma_{\phi}^{2})+(\alpha/\gamma_{\phi}^{2})-1}>0.$
(42)
The integral in Eq.(41) can be expressed in terms of the incomplete elliptic
integral of the first kind $F(k_{pl}\xi/2,k)$. For simplicity we discuss only
the motion equation in the first period; the evolution in subsequent periods
can be worked out analogously. Assuming that the photon is on the upper branch
of the trajectory in phase space at the initial time, the motion trajectory
before the photon passes onto the lower branch is derived from Eq.(41) as
$c\tau=\beta_{\phi}\gamma_{\phi}^{2}(\xi-\xi_{0})-\frac{2\gamma_{\phi}^{2}}{k_{pl}\sqrt{\frac{\alpha\kappa}{\gamma_{\phi}^{2}}+\frac{\alpha}{\gamma_{\phi}^{2}}-1}}\mathrm{Im}\left[F(k_{pl}\xi/2,k)-F(k_{pl}\xi_{0}/2,k)\right].$
(43)
The photon then reaches the lower branch, and the motion equation becomes
$c\tau=\beta_{\phi}\gamma_{\phi}^{2}(\xi-\xi_{0})-\frac{2\gamma_{\phi}^{2}}{k_{pl}\sqrt{\frac{\alpha\kappa}{\gamma_{\phi}^{2}}+\frac{\alpha}{\gamma_{\phi}^{2}}-1}}\mathrm{Im}\left[2F(k_{pl}\xi_{m}/2,k)-F(k_{pl}\xi_{0}/2,k)-F(k_{pl}\xi/2,k)\right],$
(44)
where $\xi_{m}$, determined by
$1-\frac{\alpha}{\gamma_{\phi}^{2}}\left(1+\kappa\cos(k_{pl}\xi_{m})\right)=0$
(45)
is the maximum position the photon can reach. The photon finally returns to the
upper branch and completes one full cycle. If the photon is on the lower branch
of the trajectory in phase space at the initial time, the motion equation can
be analyzed by a similar method, and the motion trajectory of such a photon is
illustrated in Fig.2.
Figure 2: The motion trajectory of a trapped photon in coordinate space. At the
initial time the photon is on the lower branch of the trajectory in phase
space. This kind of photon is confined to a region shorter than one wavelength
of the plasma wave and oscillates between the upper and lower branches, so the
motion trajectory in coordinate space is also oscillatory, as displayed in the
figure.
It is seen that the trajectory is oscillatory and the photon is confined to a
region shorter than one wavelength of the plasma wave. The velocity of the
photon stays close to the phase velocity of the plasma wave.
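The closed-form expressions above can be cross-checked by integrating the wave-frame dynamical equations (30) and (31) directly. The following Python sketch (with illustrative, assumed parameters rather than the values behind the figures) integrates a photon whose invariant $\alpha$ lies inside the window (37) and verifies that its excursion in $\xi$ stays below one plasma wavelength, as expected for a trapped photon.

```python
# Numerical sketch: integrate Eqs. (30)-(31) for a photon that satisfies the
# trapping condition (37) and check that it stays within one wavelength.
# Parameter values are illustrative assumptions.
import numpy as np
from scipy.integrate import solve_ivp

c, w_p0 = 1.0, 1.0
kappa, k_pl = 0.3, 0.2
beta_phi = 0.9
v_phi = beta_phi * c

def f(xi):
    return kappa * np.cos(k_pl * xi)

def dfdxi(xi):
    return -kappa * k_pl * np.sin(k_pl * xi)

def rhs(tau, y):
    xi, w = y
    w_p2 = w_p0**2 * (1.0 + f(xi))
    v = c * np.sqrt(max(1.0 - w_p2 / w**2, 0.0))       # photon velocity
    return [v - v_phi,                                  # Eq. (30)
            -w_p0**2 * v_phi * dfdxi(xi) / (2.0 * w)]   # Eq. (31)

xi0 = 0.5 * np.pi / k_pl     # initial position with f(xi0) = 0
w0 = 1.6 * w_p0              # initial frequency: alpha falls inside window (37)

tau_grid = np.linspace(0.0, 3000.0, 20000)
sol = solve_ivp(rhs, (tau_grid[0], tau_grid[-1]), [xi0, w0],
                t_eval=tau_grid, rtol=1e-9, atol=1e-12)

excursion = sol.y[0].max() - sol.y[0].min()
wavelength = 2.0 * np.pi / k_pl
print(excursion, wavelength, excursion < wavelength)    # trapped: True
```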
The situation of the untrapped photon is somewhat simpler. If the photon is on
the upper branch of the trajectory in phase space at the beginning, it always
propagates on the upper branch, and the maximum and minimum frequencies of the
photon in the plasma wave are
$\omega_{max}=\frac{\gamma_{\phi}^{2}\omega_{p0}}{\sqrt{\alpha}}\left(1+\beta_{\phi}\sqrt{1-\frac{\alpha}{\gamma_{\phi}^{2}}(1-\kappa)}\right),~{}~{}\omega_{min}=\frac{\gamma_{\phi}^{2}\omega_{p0}}{\sqrt{\alpha}}\left(1+\beta_{\phi}\sqrt{1-\frac{\alpha}{\gamma_{\phi}^{2}}(1+\kappa)}\right).$
(46)
Inequality (37) is not satisfied now and we have
$1-\alpha/\gamma_{\phi}^{2}-\alpha\kappa/\gamma_{\phi}^{2}>0$, so the motion
equation (39) gives
$c\tau=\beta_{\phi}\gamma_{\phi}^{2}(\xi-\xi_{0})+\frac{2\gamma_{\phi}^{2}}{k_{pl}\sqrt{1-\frac{\alpha}{\gamma_{\phi}^{2}}-\frac{\alpha\kappa}{\gamma_{\phi}^{2}}}}\mathrm{Re}\left[F(k_{pl}\xi/2,ik)-F(k_{pl}\xi_{0}/2,ik)\right],$
(47)
with
$k^{2}=\frac{(2\alpha\kappa/\gamma_{\phi}^{2})}{1-(\alpha/\gamma_{\phi}^{2})-(\alpha\kappa/\gamma_{\phi}^{2})}>0.$
(48)
Eq.(47) is the motion equation of the untrapped photon with the initial
position on the upper branch. In Fig.3 the motion trajectory of this type of
photon in coordinate space is displayed.
Figure 3: The motion trajectory of an untrapped photon with the initial
position on the upper branch in phase space. In this situation the photon
always propagates with a velocity faster than the phase velocity of the plasma
wave, so it travels forward with respect to the plasma wave.
The photon velocity is always faster than the phase velocity of the plasma
wave, so the photon propagates forward with respect to the plasma wave and can
enter different periods of the plasma wave. If the photon is on the lower
branch of the trajectory in phase space at the initial time, it will always
propagate on the lower branch, and the maximum and minimum frequencies of the
photon in the plasma wave, obtained from Eq.(35), are
$\omega_{max}=\frac{\gamma_{\phi}^{2}\omega_{p0}}{\sqrt{\alpha}}\left(1-\beta_{\phi}\sqrt{1-\frac{\alpha}{\gamma_{\phi}^{2}}(1+\kappa)}\right),~{}~{}\omega_{min}=\frac{\gamma_{\phi}^{2}\omega_{p0}}{\sqrt{\alpha}}\left(1-\beta_{\phi}\sqrt{1-\frac{\alpha}{\gamma_{\phi}^{2}}(1-\kappa)}\right).$
(49)
The motion equation of this kind of photon is given by
$c\tau=\beta_{\phi}\gamma_{\phi}^{2}(\xi-\xi_{0})-\frac{2\gamma_{\phi}^{2}}{k_{pl}\sqrt{1-\frac{\alpha}{\gamma_{\phi}^{2}}-\frac{\alpha\kappa}{\gamma_{\phi}^{2}}}}\mathrm{Re}\left[F(k_{pl}\xi/2,ik)-F(k_{pl}\xi_{0}/2,ik)\right].$
(50)
We illustrate the motion trajectory in Fig.4. The velocity of this type of
photon is always slower than the phase velocity of the plasma wave, and the
photon moves backward with respect to the plasma wave.
Figure 4: The motion trajectory of an untrapped photon with the initial
position on the lower branch in phase space. This kind of photon always moves
with a velocity slower than the phase velocity of the plasma wave, so it
travels backward with respect to the plasma wave.
In Fig.5 we show the motion trajectory of a photon evolving on the separatrix
over one complete cycle. The photon is restricted to exactly one wavelength of
the plasma wave and oscillates between two neighboring peaks of the plasma
wave.
Figure 5: The motion trajectory of the photon evolving on the separatrix in
one complete cycle.
Finally, let us discuss a simple case in the limit $v_{\phi}\rightarrow c$.
The evolution equation of a photon in phase space, Eq.(33), simplifies to
$W=\frac{1}{4\alpha}\left[\alpha(1+f)+1\right]^{2}$ (51)
in this situation, which gives
$\omega=\frac{\omega_{p0}}{2\sqrt{\alpha}}\left(\alpha\kappa\cos(k_{pl}\xi)+\alpha+1\right).$
(52)
In the underdense regime $\omega\gg\omega_{p0}$, i.e., $W\gg 1$, both
$1/\alpha$ and $(1+f)/W$ are small quantities, and the motion equation (38) can
be expanded in series and evaluated to first order, which gives
$\alpha\kappa\sin(k_{pl}\xi)+(\alpha+2)k_{pl}\xi+2c\tau
k_{pl}=\alpha\kappa\sin(k_{pl}\xi_{0})+(\alpha+2)k_{pl}\xi_{0}.$ (53)
Clearly, in this limit the photon cannot be trapped by the plasma wave, because
its velocity is always slower than $c$, and it propagates backward with respect
to the plasma wave.
## IV Conclusion and Discussion
In this paper the dynamical behavior of photons and the collective effects of a
photon system in a plasma wave have been described in the unified framework of
photon dynamical theory. We treat the plasma wave as a special background for
photons, and the refractive index of the plasma is equivalent to an external
potential field which determines the dynamical evolution of photons. The
dynamical equations of photons are constructed in Hamiltonian formulation
within the approximation of geometrical optics. We solve the dynamical
equations and analyze the motion of photons in the plasma wave, including the
photon acceleration and trapping effects, the evolution trajectory in phase
space, and the photon Landau damping of the plasma wave. In fact the photon trapping effect
in a plasma wave is related to photon acceleration: trapping occurs only when
the photon velocity is close to the phase velocity of the plasma wave, in
analogy with electron trapping in a plasma wave. Whether a photon can be
accelerated to a velocity that matches the phase velocity of a given plasma
wave is determined by the initial conditions. The condition and possibility for
photon trapping in a given plasma wave are analysed in detail in this paper. In
a small-amplitude plasma
wave, the evolution of a single photon is extended to the entire
electromagnetic field to calculate the collisionless Landau damping originating
from the electromagnetic field. When the plasma wave has a relativistic phase
velocity, for example a plasma wave driven by a laser pulse, some photons of
the electromagnetic field can propagate with velocities very close to the phase
velocity of the plasma wave and be trapped by it, and consequently a velocity
resonance with the plasma wave occurs. Energy can be exchanged effectively
between the electromagnetic field and the plasma wave in this resonance
process. If the electromagnetic field is a laser pulse, then owing to its
narrow spectral width most photons in the pulse may resonate with the plasma
wave and produce the photon Landau damping effect.
In principle, if the dynamical evolution of photons in plasma waves is known,
the evolution properties of the laser pulse can be obtained by statistically
summing over all photon behaviors, where the key point is the weight factor of
photons with different modes in the entire pulse, i.e., the photon number
distribution. In a small-amplitude plasma wave the modulation of the photon
motion by the plasma wave is so weak that the photon number distribution can be
considered an invariant, so we can use the initial pulse distribution as the
statistical weight and keep it unchanged. However, in a large-amplitude plasma
wave the photon motion is significantly modulated and the photon number
distribution evolves according to the kinetic equation. Consequently, solving
the photon kinetic equation in a large-amplitude plasma wave, so as to obtain
the time-dependent photon number distribution in phase space, is also an
important problem [32]. Photon dynamics plays an important role in studies of
the interaction of an electromagnetic field with a plasma wave based on the
corpuscular theory of light. By combining the kinetic theory of the photon
system with single-photon dynamics, the evolution of an electromagnetic field
in a given plasma wave can be well described by a method different from the
Maxwell wave theory.
###### Acknowledgements.
This work was partly supported by the Shanghai Leading Academic Discipline
Project (Project No. S30105), and Shanghai Research Foundation (Grant No.
07dz22020).
## References
* (1) S. V. Bulanov, V. I. Kirsanov, and A. S. Sakharov, JETP Lett. 50, 198 (1989).
* (2) P. Sprangle, E. Esarey, and A. Ting, Phys. Rev. Lett. 64, 2011 (1990).
* (3) Alexander Pukhov, Rep. Prog. Phys., 66, 47 (2003).
* (4) P. Sprangle, B. Hafizi, J. R. Peñano, R. F. Hubbard, A. Ting, C.I. Moore, D. F. Gordon, A. Zigler, D. Kaganovich, and T. M. Antonsen, Jr., Phys. Rev. E 63, 056405 (2001).
* (5) E. Esarey, C. B. Schroeder, and W. P. Leemans, Rev. Mod. Phys. 81, 1229 (2009).
* (6) M. Tabak, J. Hammer, M. E. Glinsky, W. L. Kruer, S. C. Wilks, J. Woodworth, E. M. Campbell, M. D. Perry, and R. J. Mason, Phys. Plasmas, 1, 1626 (1994).
* (7) M. Roth, T. E. Cowan, M. H. Key, S. P. Hatchett, C. Brown, W. Fountain, J. Johnson, D. M. Pennington, R. A. Snavely, S. C. Wilks, K. Yasuike, H. Ruhl, F. Pegoraro, S. V. Bulanov, E. M. Campbell, M. D. Perry, and H. Powell, Phys. Rev. Lett. 86, 436 (2001).
* (8) S. V. Bulanov, I. N. Inovenkov, V.I.Kirsanov, N. M. Naumova, and A. S. Sakharov, Phys. Fluids B 4, 1935 (1992).
* (9) J. Borhanian, S. Sobhanian, I. Kourakis, and A. Esfandyari-Kalejahi, Phys. Plasmas 15, 093108 (2008).
* (10) B. A. Shadwick, C. B. Schroeder, and E. Esarey, Phys. Plasmas 16, 056704 (2009).
* (11) G. A. Mourou, T. Tajima, and S. V. Bulanov, Rev. Mod. Phys. 78, 309 (2006).
* (12) C. Ren, B. J. Duda, R. G. Hemker, W. B. Mori, T. Katsouleas, Phys. Rev. E 63, 026411 (2001).
* (13) D. F. Gordon, and B. Hafizi, Phys. Rev. Lett. 90, 215001 (2003).
* (14) C. B. Schroeder, C. Benedetti, E. Esarey, and W. P. Leemans, Phys. Rev. Lett. 106, 135002 (2011).
* (15) C. B. Schroeder, C. Benedetti, E. Esarey, J. van Tilborg, and W. P. Leemans, Phys. Plasmas 18, 083103 (2011).
* (16) Y. I. Salamin, S. X. Hu, K. Z. Hatsagortsyan, and C. H. Keitel, Physics Reports 427, 41 (2006).
* (17) L. Oliveira e Silva, and J. T. Mendonça, Phys. Rev. E 57, 3423 (1998).
* (18) J. T. Mendonça, Phys. Scr. 74, C61 (2006).
* (19) A. J. W. Reitsma, R. M. G. M. Trines, R. Bingham, R. A. Cairns, J. T. Mendonça, and D. A. Jaroszynski, Phys. Plasmas 13, 113104 (2006).
* (20) S. C. Wilks, J. M. Dawson, W. B. Mori, T. Katsouleas, and M. E. Jones, Phys. Rev. Lett. 62, 2600 (1989).
* (21) E. Esarey, A. Ting, and P. Sprangle, Phys. Rev. A 42, 3526 (1990).
* (22) J. T. Mendonça, and L. Oliveira e Silva, Phys. Rev. E 49, 3520 (1994).
* (23) P. Ji, Phys. Rev. E 64, 036501 (2001).
* (24) R. Bingham, J. T. Mendonça, and J. M. Dawson, Phys. Rev. Lett. 78, 247 (1997).
* (25) J. T. Mendonça, and A. Serbeto, Phys. Plasmas 13, 102109 (2006).
* (26) Z. Bu, and P. Ji, Phys. Plasmas 19, 012112 (2012).
* (27) William L. Kruer, The Physics of Laser Plasma Interactions (Westview Press, 2001), P. 99.
* (28) T. Tajima, and J. M. Dawson, Phys. Rev. Lett. 43, 267 (1979).
* (29) L. D. Landau, J. Phys. 10, 25 (1946).
* (30) J. Dawson, Phys. Fluids 4, 869 (1961).
* (31) Thomas O’Neil, Phys. Fluids 8, 2255 (1965).
* (32) Z. Bu, and P. Ji, Phys. Plasmas 19, 113114 (2012).
|
arxiv-papers
| 2013-02-26T13:28:49 |
2024-09-04T02:49:42.142342
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Zhigang Bu, Yuee Luo, Hehe Li, Wenbo Chen, and Peiyong Ji",
"submitter": "Zhigang Bu",
"url": "https://arxiv.org/abs/1302.6429"
}
|
1302.6446
|
EUROPEAN ORGANIZATION FOR NUCLEAR RESEARCH (CERN)
CERN-PH-EP-2013-025 LHCb-PAPER-2012-056 February 26, 2013
Search for the decay $B_{s}^{0}\rightarrow D^{*\mp}\pi^{\pm}$
The LHCb collaboration†††Authors are listed on the following pages.
A search for the decay $B_{s}^{0}\rightarrow D^{*\mp}\pi^{\pm}$ is presented
using a data sample corresponding to an integrated luminosity of $1.0\ {\rm
fb}^{-1}$ of $pp$ collisions collected by LHCb. This decay is expected to be
mediated by a $W$-exchange diagram, with little contribution from rescattering
processes, and therefore a measurement of the branching fraction will help to
understand the mechanism behind related decays such as
$B_{s}^{0}\rightarrow\pi^{+}\pi^{-}$ and $B_{s}^{0}\rightarrow D\kern
1.99997pt\overline{\kern-1.99997ptD}{}$. Systematic uncertainties are
minimised by using $B^{0}\rightarrow D^{*\mp}\pi^{\pm}$ as a normalisation
channel. We find no evidence for a signal, and set an upper limit on the
branching fraction of ${\cal B}(B_{s}^{0}\rightarrow
D^{*\mp}\pi^{\pm})<6.1\,\left(7.8\right)\times 10^{-6}$ at 90 % (95 %)
confidence level.
Submitted to Phys. Rev. D (R)
© CERN on behalf of the LHCb collaboration, license CC-BY-3.0.
LHCb collaboration
R. Aaij40, C. Abellan Beteta35,n, B. Adeva36, M. Adinolfi45, C. Adrover6, A.
Affolder51, Z. Ajaltouni5, J. Albrecht9, F. Alessio37, M. Alexander50, S.
Ali40, G. Alkhazov29, P. Alvarez Cartelle36, A.A. Alves Jr24,37, S. Amato2, S.
Amerio21, Y. Amhis7, L. Anderlini17,f, J. Anderson39, R. Andreassen59, R.B.
Appleby53, O. Aquines Gutierrez10, F. Archilli18, A. Artamonov 34, M.
Artuso56, E. Aslanides6, G. Auriemma24,m, S. Bachmann11, J.J. Back47, C.
Baesso57, V. Balagura30, W. Baldini16, R.J. Barlow53, C. Barschel37, S.
Barsuk7, W. Barter46, Th. Bauer40, A. Bay38, J. Beddow50, F. Bedeschi22, I.
Bediaga1, S. Belogurov30, K. Belous34, I. Belyaev30, E. Ben-Haim8, M.
Benayoun8, G. Bencivenni18, S. Benson49, J. Benton45, A. Berezhnoy31, R.
Bernet39, M.-O. Bettler46, M. van Beuzekom40, A. Bien11, S. Bifani12, T.
Bird53, A. Bizzeti17,h, P.M. Bjørnstad53, T. Blake37, F. Blanc38, J. Blouw11,
S. Blusk56, V. Bocci24, A. Bondar33, N. Bondar29, W. Bonivento15, S. Borghi53,
A. Borgia56, T.J.V. Bowcock51, E. Bowen39, C. Bozzi16, T. Brambach9, J. van
den Brand41, J. Bressieux38, D. Brett53, M. Britsch10, T. Britton56, N.H.
Brook45, H. Brown51, I. Burducea28, A. Bursche39, G. Busetto21,q, J.
Buytaert37, S. Cadeddu15, O. Callot7, M. Calvi20,j, M. Calvo Gomez35,n, A.
Camboni35, P. Campana18,37, A. Carbone14,c, G. Carboni23,k, R. Cardinale19,i,
A. Cardini15, H. Carranza-Mejia49, L. Carson52, K. Carvalho Akiba2, G.
Casse51, M. Cattaneo37, Ch. Cauet9, M. Charles54, Ph. Charpentier37, P.
Chen3,38, N. Chiapolini39, M. Chrzaszcz 25, K. Ciba37, X. Cid Vidal36, G.
Ciezarek52, P.E.L. Clarke49, M. Clemencic37, H.V. Cliff46, J. Closier37, C.
Coca28, V. Coco40, J. Cogan6, E. Cogneras5, P. Collins37, A. Comerma-
Montells35, A. Contu15, A. Cook45, M. Coombes45, S. Coquereau8, G. Corti37, B.
Couturier37, G.A. Cowan38, D. Craik47, S. Cunliffe52, R. Currie49, C.
D’Ambrosio37, P. David8, P.N.Y. David40, I. De Bonis4, K. De Bruyn40, S. De
Capua53, M. De Cian39, J.M. De Miranda1, M. De Oyanguren Campos35,o, L. De
Paula2, W. De Silva59, P. De Simone18, D. Decamp4, M. Deckenhoff9, L. Del
Buono8, D. Derkach14, O. Deschamps5, F. Dettori41, A. Di Canto11, H.
Dijkstra37, M. Dogaru28, S. Donleavy51, F. Dordei11, A. Dosil Suárez36, D.
Dossett47, A. Dovbnya42, F. Dupertuis38, R. Dzhelyadin34, A. Dziurda25, A.
Dzyuba29, S. Easo48,37, U. Egede52, V. Egorychev30, S. Eidelman33, D. van
Eijk40, S. Eisenhardt49, U. Eitschberger9, R. Ekelhof9, L. Eklund50, I. El
Rifai5, Ch. Elsasser39, D. Elsby44, A. Falabella14,e, C. Färber11, G.
Fardell49, C. Farinelli40, S. Farry12, V. Fave38, D. Ferguson49, V. Fernandez
Albor36, F. Ferreira Rodrigues1, M. Ferro-Luzzi37, S. Filippov32, C.
Fitzpatrick37, M. Fontana10, F. Fontanelli19,i, R. Forty37, O. Francisco2, M.
Frank37, C. Frei37, M. Frosini17,f, S. Furcas20, E. Furfaro23, A. Gallas
Torreira36, D. Galli14,c, M. Gandelman2, P. Gandini54, Y. Gao3, J. Garofoli56,
P. Garosi53, J. Garra Tico46, L. Garrido35, C. Gaspar37, R. Gauld54, E.
Gersabeck11, M. Gersabeck53, T. Gershon47,37, Ph. Ghez4, V. Gibson46, V.V.
Gligorov37, C. Göbel57, D. Golubkov30, A. Golutvin52,30,37, A. Gomes2, H.
Gordon54, M. Grabalosa Gándara5, R. Graciani Diaz35, L.A. Granado Cardoso37,
E. Graugés35, G. Graziani17, A. Grecu28, E. Greening54, S. Gregson46, O.
Grünberg58, B. Gui56, E. Gushchin32, Yu. Guz34, T. Gys37, C. Hadjivasiliou56,
G. Haefeli38, C. Haen37, S.C. Haines46, S. Hall52, T. Hampson45, S. Hansmann-
Menzemer11, N. Harnew54, S.T. Harnew45, J. Harrison53, T. Hartmann58, J. He7,
V. Heijne40, K. Hennessy51, P. Henrard5, J.A. Hernando Morata36, E. van
Herwijnen37, E. Hicks51, D. Hill54, M. Hoballah5, C. Hombach53, P. Hopchev4,
W. Hulsbergen40, P. Hunt54, T. Huse51, N. Hussain54, D. Hutchcroft51, D.
Hynds50, V. Iakovenko43, M. Idzik26, P. Ilten12, R. Jacobsson37, A. Jaeger11,
E. Jans40, P. Jaton38, F. Jing3, M. John54, D. Johnson54, C.R. Jones46, B.
Jost37, M. Kaballo9, S. Kandybei42, M. Karacson37, T.M. Karbach37, I.R.
Kenyon44, U. Kerzel37, T. Ketel41, A. Keune38, B. Khanji20, O. Kochebina7, I.
Komarov38,31, R.F. Koopman41, P. Koppenburg40, M. Korolev31, A. Kozlinskiy40,
L. Kravchuk32, K. Kreplin11, M. Kreps47, G. Krocker11, P. Krokovny33, F.
Kruse9, M. Kucharczyk20,25,j, V. Kudryavtsev33, T. Kvaratskheliya30,37, V.N.
La Thi38, D. Lacarrere37, G. Lafferty53, A. Lai15, D. Lambert49, R.W.
Lambert41, E. Lanciotti37, G. Lanfranchi18,37, C. Langenbruch37, T. Latham47,
C. Lazzeroni44, R. Le Gac6, J. van Leerdam40, J.-P. Lees4, R. Lefèvre5, A.
Leflat31,37, J. Lefrançois7, S. Leo22, O. Leroy6, B. Leverington11, Y. Li3, L.
Li Gioi5, M. Liles51, R. Lindner37, C. Linn11, B. Liu3, G. Liu37, J. von
Loeben20, S. Lohn37, J.H. Lopes2, E. Lopez Asamar35, N. Lopez-March38, H. Lu3,
D. Lucchesi21,q, J. Luisier38, H. Luo49, F. Machefert7, I.V.
Machikhiliyan4,30, F. Maciuc28, O. Maev29,37, S. Malde54, G. Manca15,d, G.
Mancinelli6, U. Marconi14, R. Märki38, J. Marks11, G. Martellotti24, A.
Martens8, L. Martin54, A. Martín Sánchez7, M. Martinelli40, D. Martinez
Santos41, D. Martins Tostes2, A. Massafferri1, R. Matev37, Z. Mathe37, C.
Matteuzzi20, E. Maurice6, A. Mazurov16,32,37,e, J. McCarthy44, R. McNulty12,
A. Mcnab53, B. Meadows59,54, F. Meier9, M. Meissner11, M. Merk40, D.A.
Milanes8, M.-N. Minard4, J. Molina Rodriguez57, S. Monteil5, D. Moran53, P.
Morawski25, M.J. Morello22,s, R. Mountain56, I. Mous40, F. Muheim49, K.
Müller39, R. Muresan28, B. Muryn26, B. Muster38, P. Naik45, T. Nakada38, R.
Nandakumar48, I. Nasteva1, M. Needham49, N. Neufeld37, A.D. Nguyen38, T.D.
Nguyen38, C. Nguyen-Mau38,p, M. Nicol7, V. Niess5, R. Niet9, N. Nikitin31, T.
Nikodem11, A. Nomerotski54, A. Novoselov34, A. Oblakowska-Mucha26, V.
Obraztsov34, S. Oggero40, S. Ogilvy50, O. Okhrimenko43, R. Oldeman15,d,37, M.
Orlandea28, J.M. Otalora Goicochea2, P. Owen52, B.K. Pal56, A. Palano13,b, M.
Palutan18, J. Panman37, A. Papanestis48, M. Pappagallo50, C. Parkes53, C.J.
Parkinson52, G. Passaleva17, G.D. Patel51, M. Patel52, G.N. Patrick48, C.
Patrignani19,i, C. Pavel-Nicorescu28, A. Pazos Alvarez36, A. Pellegrino40, G.
Penso24,l, M. Pepe Altarelli37, S. Perazzini14,c, D.L. Perego20,j, E. Perez
Trigo36, A. Pérez-Calero Yzquierdo35, P. Perret5, M. Perrin-Terrin6, G.
Pessina20, K. Petridis52, A. Petrolini19,i, A. Phan56, E. Picatoste Olloqui35,
B. Pietrzyk4, T. Pilař47, D. Pinci24, S. Playfer49, M. Plo Casasus36, F.
Polci8, G. Polok25, A. Poluektov47,33, E. Polycarpo2, D. Popov10, B.
Popovici28, C. Potterat35, A. Powell54, J. Prisciandaro38, V. Pugatch43, A.
Puig Navarro38, G. Punzi22,r, W. Qian4, J.H. Rademacker45, B.
Rakotomiaramanana38, M.S. Rangel2, I. Raniuk42, N. Rauschmayr37, G. Raven41,
S. Redford54, M.M. Reid47, A.C. dos Reis1, S. Ricciardi48, A. Richards52, K.
Rinnert51, V. Rives Molina35, D.A. Roa Romero5, P. Robbe7, E. Rodrigues53, P.
Rodriguez Perez36, S. Roiser37, V. Romanovsky34, A. Romero Vidal36, J.
Rouvinet38, T. Ruf37, F. Ruffini22, H. Ruiz35, P. Ruiz Valls35,o, G.
Sabatino24,k, J.J. Saborido Silva36, N. Sagidova29, P. Sail50, B. Saitta15,d,
C. Salzmann39, B. Sanmartin Sedes36, M. Sannino19,i, R. Santacesaria24, C.
Santamarina Rios36, E. Santovetti23,k, M. Sapunov6, A. Sarti18,l, C.
Satriano24,m, A. Satta23, M. Savrie16,e, D. Savrina30,31, P. Schaack52, M.
Schiller41, H. Schindler37, M. Schlupp9, M. Schmelling10, B. Schmidt37, O.
Schneider38, A. Schopper37, M.-H. Schune7, R. Schwemmer37, B. Sciascia18, A.
Sciubba24, M. Seco36, A. Semennikov30, K. Senderowska26, I. Sepp52, N.
Serra39, J. Serrano6, P. Seyfert11, M. Shapkin34, I. Shapoval42,37, P.
Shatalov30, Y. Shcheglov29, T. Shears51,37, L. Shekhtman33, O. Shevchenko42,
V. Shevchenko30, A. Shires52, R. Silva Coutinho47, T. Skwarnicki56, N.A.
Smith51, E. Smith54,48, M. Smith53, M.D. Sokoloff59, F.J.P. Soler50, F.
Soomro18,37, D. Souza45, B. Souza De Paula2, B. Spaan9, A. Sparkes49, P.
Spradlin50, F. Stagni37, S. Stahl11, O. Steinkamp39, S. Stoica28, S. Stone56,
B. Storaci39, M. Straticiuc28, U. Straumann39, V.K. Subbiah37, S. Swientek9,
V. Syropoulos41, M. Szczekowski27, P. Szczypka38,37, T. Szumlak26, S.
T’Jampens4, M. Teklishyn7, E. Teodorescu28, F. Teubert37, C. Thomas54, E.
Thomas37, J. van Tilburg11, V. Tisserand4, M. Tobin39, S. Tolk41, D.
Tonelli37, S. Topp-Joergensen54, N. Torr54, E. Tournefier4,52, S. Tourneur38,
M.T. Tran38, M. Tresch39, A. Tsaregorodtsev6, P. Tsopelas40, N. Tuning40, M.
Ubeda Garcia37, A. Ukleja27, D. Urner53, U. Uwer11, V. Vagnoni14, G.
Valenti14, R. Vazquez Gomez35, P. Vazquez Regueiro36, S. Vecchi16, J.J.
Velthuis45, M. Veltri17,g, G. Veneziano38, M. Vesterinen37, B. Viaud7, D.
Vieira2, X. Vilasis-Cardona35,n, A. Vollhardt39, D. Volyanskyy10, D. Voong45,
A. Vorobyev29, V. Vorobyev33, C. Voß58, H. Voss10, R. Waldi58, R. Wallace12,
S. Wandernoth11, J. Wang56, D.R. Ward46, N.K. Watson44, A.D. Webber53, D.
Websdale52, M. Whitehead47, J. Wicht37, J. Wiechczynski25, D. Wiedner11, L.
Wiggers40, G. Wilkinson54, M.P. Williams47,48, M. Williams55, F.F. Wilson48,
J. Wishahi9, M. Witek25, S.A. Wotton46, S. Wright46, S. Wu3, K. Wyllie37, Y.
Xie49,37, F. Xing54, Z. Xing56, Z. Yang3, R. Young49, X. Yuan3, O.
Yushchenko34, M. Zangoli14, M. Zavertyaev10,a, F. Zhang3, L. Zhang56, W.C.
Zhang12, Y. Zhang3, A. Zhelezov11, A. Zhokhov30, L. Zhong3, A. Zvyagin37.
1Centro Brasileiro de Pesquisas Físicas (CBPF), Rio de Janeiro, Brazil
2Universidade Federal do Rio de Janeiro (UFRJ), Rio de Janeiro, Brazil
3Center for High Energy Physics, Tsinghua University, Beijing, China
4LAPP, Université de Savoie, CNRS/IN2P3, Annecy-Le-Vieux, France
5Clermont Université, Université Blaise Pascal, CNRS/IN2P3, LPC, Clermont-
Ferrand, France
6CPPM, Aix-Marseille Université, CNRS/IN2P3, Marseille, France
7LAL, Université Paris-Sud, CNRS/IN2P3, Orsay, France
8LPNHE, Université Pierre et Marie Curie, Université Paris Diderot,
CNRS/IN2P3, Paris, France
9Fakultät Physik, Technische Universität Dortmund, Dortmund, Germany
10Max-Planck-Institut für Kernphysik (MPIK), Heidelberg, Germany
11Physikalisches Institut, Ruprecht-Karls-Universität Heidelberg, Heidelberg,
Germany
12School of Physics, University College Dublin, Dublin, Ireland
13Sezione INFN di Bari, Bari, Italy
14Sezione INFN di Bologna, Bologna, Italy
15Sezione INFN di Cagliari, Cagliari, Italy
16Sezione INFN di Ferrara, Ferrara, Italy
17Sezione INFN di Firenze, Firenze, Italy
18Laboratori Nazionali dell’INFN di Frascati, Frascati, Italy
19Sezione INFN di Genova, Genova, Italy
20Sezione INFN di Milano Bicocca, Milano, Italy
21Sezione INFN di Padova, Padova, Italy
22Sezione INFN di Pisa, Pisa, Italy
23Sezione INFN di Roma Tor Vergata, Roma, Italy
24Sezione INFN di Roma La Sapienza, Roma, Italy
25Henryk Niewodniczanski Institute of Nuclear Physics Polish Academy of
Sciences, Kraków, Poland
26AGH University of Science and Technology, Kraków, Poland
27National Center for Nuclear Research (NCBJ), Warsaw, Poland
28Horia Hulubei National Institute of Physics and Nuclear Engineering,
Bucharest-Magurele, Romania
29Petersburg Nuclear Physics Institute (PNPI), Gatchina, Russia
30Institute of Theoretical and Experimental Physics (ITEP), Moscow, Russia
31Institute of Nuclear Physics, Moscow State University (SINP MSU), Moscow,
Russia
32Institute for Nuclear Research of the Russian Academy of Sciences (INR RAN),
Moscow, Russia
33Budker Institute of Nuclear Physics (SB RAS) and Novosibirsk State
University, Novosibirsk, Russia
34Institute for High Energy Physics (IHEP), Protvino, Russia
35Universitat de Barcelona, Barcelona, Spain
36Universidad de Santiago de Compostela, Santiago de Compostela, Spain
37European Organization for Nuclear Research (CERN), Geneva, Switzerland
38Ecole Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
39Physik-Institut, Universität Zürich, Zürich, Switzerland
40Nikhef National Institute for Subatomic Physics, Amsterdam, The Netherlands
41Nikhef National Institute for Subatomic Physics and VU University Amsterdam,
Amsterdam, The Netherlands
42NSC Kharkiv Institute of Physics and Technology (NSC KIPT), Kharkiv, Ukraine
43Institute for Nuclear Research of the National Academy of Sciences (KINR),
Kyiv, Ukraine
44University of Birmingham, Birmingham, United Kingdom
45H.H. Wills Physics Laboratory, University of Bristol, Bristol, United
Kingdom
46Cavendish Laboratory, University of Cambridge, Cambridge, United Kingdom
47Department of Physics, University of Warwick, Coventry, United Kingdom
48STFC Rutherford Appleton Laboratory, Didcot, United Kingdom
49School of Physics and Astronomy, University of Edinburgh, Edinburgh, United
Kingdom
50School of Physics and Astronomy, University of Glasgow, Glasgow, United
Kingdom
51Oliver Lodge Laboratory, University of Liverpool, Liverpool, United Kingdom
52Imperial College London, London, United Kingdom
53School of Physics and Astronomy, University of Manchester, Manchester,
United Kingdom
54Department of Physics, University of Oxford, Oxford, United Kingdom
55Massachusetts Institute of Technology, Cambridge, MA, United States
56Syracuse University, Syracuse, NY, United States
57Pontifícia Universidade Católica do Rio de Janeiro (PUC-Rio), Rio de
Janeiro, Brazil, associated to 2
58Institut für Physik, Universität Rostock, Rostock, Germany, associated to 11
59University of Cincinnati, Cincinnati, OH, United States, associated to 56
aP.N. Lebedev Physical Institute, Russian Academy of Science (LPI RAS),
Moscow, Russia
bUniversità di Bari, Bari, Italy
cUniversità di Bologna, Bologna, Italy
dUniversità di Cagliari, Cagliari, Italy
eUniversità di Ferrara, Ferrara, Italy
fUniversità di Firenze, Firenze, Italy
gUniversità di Urbino, Urbino, Italy
hUniversità di Modena e Reggio Emilia, Modena, Italy
iUniversità di Genova, Genova, Italy
jUniversità di Milano Bicocca, Milano, Italy
kUniversità di Roma Tor Vergata, Roma, Italy
lUniversità di Roma La Sapienza, Roma, Italy
mUniversità della Basilicata, Potenza, Italy
nLIFAELS, La Salle, Universitat Ramon Llull, Barcelona, Spain
oIFIC, Universitat de Valencia-CSIC, Valencia, Spain
pHanoi University of Science, Hanoi, Viet Nam
qUniversità di Padova, Padova, Italy
rUniversità di Pisa, Pisa, Italy
sScuola Normale Superiore, Pisa, Italy
Decays of $B^{0}_{s}$ mesons to final states such as $D^{+}D^{-}$, $D^{0}\overline{D}{}^{0}$ [1] and $\pi^{+}\pi^{-}$ [2] have
been recently observed by LHCb. Such decays can proceed, at short distance, by
two types of amplitudes, referred to as weak exchange and penguin
annihilation. Example diagrams are shown in Fig. 1(a) and (b). There is also a
potential long distance contribution from rescattering. For example, the
$D^{+}D^{-}$ final state can be obtained from a $b\rightarrow c\bar{c}s$ decay
to $D^{+}_{s}D^{-}_{s}$ followed by the $s\bar{s}$ pair rearranging to
$d\bar{d}$. Understanding rescattering effects in hadronic $B$ meson decays is
important in order to interpret various $C\\!P$-violating observables.
Figure 1: Decay diagrams for (a) $B^{0}_{s}\rightarrow D^{(*)+}D^{(*)-}$ via
weak exchange, (b) $B^{0}_{s}\rightarrow D^{(*)+}D^{(*)-}$ via penguin
annihilation, (c) $B^{0}_{s}\rightarrow D^{(*)-}\pi^{+}$ via weak exchange.
A measurement of the branching fraction of the decay $B^{0}_{s}\rightarrow
D^{*-}\pi^{+}$ can be used to disentangle the contributions from different
decay diagrams and from rescattering [3, 4]. This decay has only weak exchange
contributions, as shown in Fig. 1(c). (The suppressed diagram for
$B^{0}_{s}\rightarrow D^{*+}\pi^{-}$ is not shown.) Moreover, rescattering
contributions to the $B^{0}_{s}\rightarrow D^{(*)\mp}\pi^{\pm}$ decay are
expected to be small [5]. Therefore, if the observed branching fraction for
the decay $B^{0}_{s}\rightarrow\pi^{+}\pi^{-}$ is explained by rescattering, a
low value of ${\cal B}(B^{0}_{s}\rightarrow D^{*-}\pi^{+})=(1.2\pm 0.2)\times
10^{-6}$ is predicted [5]. However, if short-distance amplitudes are the
dominant effect in $B^{0}_{s}\rightarrow\pi^{+}\pi^{-}$ and related decays,
${\cal B}(B^{0}_{s}\rightarrow D^{*-}\pi^{+})$ could be much larger. The
measured $B^{0}_{s}\rightarrow D\overline{D}{}$ [1] and $B^{+}\rightarrow D^{+}_{s}\phi$ [6] rates are at the upper end of the
expected range in the rescattering-based model, but further measurements are
needed to establish whether long-distance processes are dominant in these
hadronic $B$ decays.
In this paper, the result of a search for the decay $B^{0}_{s}\rightarrow
D^{*\mp}\pi^{\pm}$ is presented. No previous measurements of this decay have
been made. The inclusion of charge conjugated processes is implied throughout
the paper. Since the flavour of the $B^{0}_{s}$ meson at production is not
tagged, the $D^{*-}\pi^{+}$ and $D^{*+}\pi^{-}$ final states are combined. The
analysis is based on a data sample corresponding to an integrated luminosity
of $1.0\mbox{\,fb}^{-1}$ of LHC $pp$ collision data, at a centre-of-mass
energy of $7\mathrm{\,TeV}$, collected with the LHCb detector
during 2011. In high energy $pp$ collisions all $b$ hadron species are
produced, so the $B^{0}\rightarrow D^{*\mp}\pi^{\pm}$ decay, with branching
fraction ${\cal B}(B^{0}\rightarrow D^{*-}\pi^{+})=(2.76\pm 0.13)\times
10^{-3}$ [7, 8], is both a potentially serious background channel as well as
the ideal normalisation mode for the measurement of the $B^{0}_{s}$ branching
fraction.
The LHCb detector [9] is a single-arm forward spectrometer covering the
pseudorapidity range $2<\eta<5$, designed for the study of particles
containing $b$ or $c$ quarks. The detector includes a high precision tracking
system consisting of a silicon-strip vertex detector surrounding the $pp$
interaction region, a large-area silicon-strip detector located upstream of a
dipole magnet with a bending power of about $4{\rm\,Tm}$, and three stations
of silicon-strip detectors and straw drift tubes placed downstream. The
combined tracking system has momentum resolution $\Delta p/p$ that varies from
0.4 % at 5${\mathrm{\,GeV\!/}c}$ to 0.6 % at 100${\mathrm{\,GeV\!/}c}$, and impact parameter (IP)
resolution of 20$\,\upmu\rm m$ for tracks with high transverse momentum
($p_{\rm T}$). Charged hadrons are identified using two ring-imaging Cherenkov
detectors. Photon, electron and hadron candidates are identified by a
calorimeter system consisting of scintillating-pad and preshower detectors, an
electromagnetic calorimeter and a hadronic calorimeter. Muons are identified
by a system composed of alternating layers of iron and multiwire proportional
chambers.
The trigger [10] consists of a hardware stage, based on information from the
calorimeter and muon systems, followed by a software stage which applies a
full event reconstruction. In this analysis, signal candidates are accepted if
one of the final state particles created a cluster in the calorimeter with
sufficient transverse energy to fire the hardware trigger. Events that are
triggered at the hardware level by another particle in the $pp\rightarrow
b\bar{b}X$ event are also retained. The software trigger requires
characteristic signatures of $b$-hadron decays: at least one track, with
$\mbox{$p_{\rm T}$}>1.7\,{\mathrm{\,GeV\!/}c}$ and
$\chi^{2}_{\rm IP}$ with respect to any primary interaction vertex (PV)
greater than 16, that subsequently forms a two-, three- or four-track
secondary vertex with a high sum of the $p_{\rm T}$ of the tracks and
significant displacement from the PV. The $\chi^{2}_{\rm IP}$ is the
difference between the $\chi^{2}$ of the PV reconstruction with and without
the considered track. In the offline analysis, the software trigger decision
is required to be due to the candidate signal decay.
Candidates that are consistent with the decay chain $B^{0}_{(s)}\rightarrow D^{*\mp}\pi^{\pm}$, $D^{*-}\rightarrow\overline{D}{}^{0}\pi^{-}$, $\overline{D}{}^{0}\rightarrow K^{+}\pi^{-}$ are selected. The $\overline{D}{}^{0}$ and $D^{*-}$ candidate invariant masses are required to satisfy $1814<m_{K^{+}\pi^{-}}<1914{\mathrm{\,MeV\!/}c^{2}}$ and $2008.78<m_{\overline{D}{}^{0}\pi^{-}}<2011.78{\mathrm{\,MeV\!/}c^{2}}$, respectively, where a $D^{0}$ mass constraint is applied in the evaluation of $m_{\overline{D}{}^{0}\pi^{-}}$. The bachelor
pion, from the $B^{0}_{(s)}$ decay, is required to be consistent with the pion
mass hypothesis, based on particle identification (PID) information from the
RICH detectors [11]. All other selection criteria were tuned on the
$B^{0}\rightarrow D^{*\mp}\pi^{\pm}$ control channel in a similar manner to
that used in another recent LHCb publication [12]. The large yield in the
normalisation sample allows the selection to be based on data, though the
efficiencies are determined using Monte Carlo (MC) simulated events in which
$pp$ collisions are generated using Pythia 6.4 [13] with a specific LHCb
configuration [14]. Decays of hadronic particles are described by EvtGen [15].
The interaction of the generated particles with the detector and its response
are implemented using the Geant4 toolkit [16, 17] as
described in Ref. [18].
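To make the mass requirements above concrete, the following minimal Python sketch computes $m_{K^{+}\pi^{-}}$ and $m_{\overline{D}{}^{0}\pi^{-}}$ for a single candidate, using a crude energy-rescaling stand-in for the kinematic-fit $D^{0}$ mass constraint. The four-vectors are invented placeholder values, not LHCb data, and the helper names are illustrative only.

```python
import numpy as np

M_D0 = 1864.84  # MeV/c^2, nominal D0 mass used for the constraint (approximate PDG value)

def inv_mass(*particles):
    """Invariant mass of a set of (E, px, py, pz) four-vectors (MeV)."""
    E, px, py, pz = (sum(p[i] for p in particles) for i in range(4))
    return np.sqrt(max(E**2 - px**2 - py**2 - pz**2, 0.0))

def constrain_mass(p, mass):
    """Replace the energy of four-vector p so its invariant mass equals `mass`
    (a crude stand-in for the kinematic-fit D0 mass constraint)."""
    _, px, py, pz = p
    return (np.sqrt(mass**2 + px**2 + py**2 + pz**2), px, py, pz)

# Purely illustrative candidate: a D0bar decaying to K+ pi- back to back in its
# rest frame, plus a soft bachelor-side slow pion (all values invented, MeV)
kaon      = (992.5,  861.0, 0.0,  0.0)
pion_d0   = (872.2, -861.0, 0.0,  0.0)
pion_slow = (144.9,    0.0, 0.0, 39.0)

m_kpi = inv_mass(kaon, pion_d0)                      # compare to the 1814-1914 MeV window
d0bar = constrain_mass(tuple(a + b for a, b in zip(kaon, pion_d0)), M_D0)
m_d0pi = inv_mass(d0bar, pion_slow)                  # compare to the 2008.78-2011.78 MeV window
print(f"m(K pi) = {m_kpi:.1f} MeV/c2,  m(D0bar pi) = {m_d0pi:.1f} MeV/c2")
```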
The selection requirements include criteria on the quality of the tracks
forming the signal candidate, their $p$, $p_{\rm T}$ and inconsistency with
the hypothesis of originating from the PV ($\chi^{2}_{\rm IP}$). Requirements
are also placed on the corresponding variables for candidate composite
particles ($\overline{D}{}^{0}$, $B^{0}_{(s)}$)
together with restrictions of the decay fit ($\chi^{2}_{\rm vertex}$), the
flight distance ($\chi^{2}_{\rm flight}$), and the cosine of the angle between
the momentum vector and the line joining the PV to the $B^{0}_{(s)}$ vertex
($\cos\theta_{\rm dir}$) [19].
Further discrimination between signal and background categories is achieved by
calculating weights for the remaining $B^{0}$ candidates [20]. The weights are
based on a simplified fit to the $B$ candidate invariant mass distribution,
where the $B^{0}_{s}$ region is neither examined nor included in the fit. The
weights are used to train a neural network [21] in order to maximise the
separation between categories. To retain sufficient background events for the
network training, the requirement on $m_{\overline{D}{}^{0}\pi^{-}}$ is not applied. A total of
fifteen variables are used as input to the network. They include the
$\chi^{2}_{\rm IP}$ of the four candidate tracks, the $\chi^{2}_{\rm IP}$,
$\chi^{2}_{\rm vertex}$, $\chi^{2}_{\rm flight}$ and $\cos\theta_{\rm dir}$ of
the $\overline{D}{}^{0}$ and $B^{0}_{(s)}$
candidates, and the $B^{0}_{(s)}$ candidate $p_{\rm T}$. The $p_{\rm T}$
asymmetry and track multiplicity in a cone with half-angle of 1.5 units in the
plane of pseudorapidity and azimuthal angle (measured in radians) [22] around
the $B^{0}_{(s)}$ candidate flight direction are also used. The input
quantities to the neural network depend only weakly on the kinematics of the
$B^{0}_{(s)}$ decay. A requirement on the network output is imposed that
reduces the combinatorial background by an order of magnitude while retaining
about 75 % of the signal. Potential biases from this data-driven method are
investigated by training the neural network with different fractions of the
data sample. The same results are obtained using a neural network trained on
30, 40, 50, 60 and 70 % of the total data sample.
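The cone variables described above can be illustrated with a short sketch. The asymmetry definition below, $(p_{\rm T}^{B}-\sum p_{\rm T}^{\rm cone})/(p_{\rm T}^{B}+\sum p_{\rm T}^{\rm cone})$, is one common convention and is assumed here; the precise definition used in the analysis is that of Ref. [22], and the toy tracks are random placeholders.

```python
import numpy as np

def cone_isolation(b_eta, b_phi, b_pt, trk_eta, trk_phi, trk_pt, half_angle=1.5):
    """p_T asymmetry and track multiplicity in a cone around the B flight direction."""
    dphi = np.angle(np.exp(1j * (trk_phi - b_phi)))      # wrap azimuthal difference to (-pi, pi]
    deta = trk_eta - b_eta
    in_cone = np.hypot(deta, dphi) < half_angle           # cone of half-angle 1.5 in eta-phi
    cone_pt = trk_pt[in_cone].sum()
    asym = (b_pt - cone_pt) / (b_pt + cone_pt)             # assumed asymmetry convention
    return asym, int(in_cone.sum())

# Toy event: one B candidate plus a handful of other tracks (placeholder values)
rng = np.random.default_rng(1)
asym, mult = cone_isolation(
    b_eta=3.2, b_phi=0.5, b_pt=8000.0,                    # MeV
    trk_eta=rng.uniform(2.0, 5.0, 20),
    trk_phi=rng.uniform(-np.pi, np.pi, 20),
    trk_pt=rng.exponential(700.0, 20),
)
print(f"cone pT asymmetry = {asym:.2f}, cone multiplicity = {mult}")
```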
After all selection requirements are applied, approximately 50 000 candidates
are selected in the invariant mass range
$5150<m_{D^{*-}\pi^{+}}<5600{\mathrm{\,MeV\!/}c^{2}}$. About 1
% of events with at least one candidate also contain a second candidate. Such
multiple candidates are retained and treated the same as other candidates.
In addition to combinatorial background, candidates may be formed from
misidentified or partially reconstructed $B^{0}_{(s)}$ decays. Contributions
from partially reconstructed decays are reduced by requiring the invariant
mass of the $B^{0}_{(s)}$ candidate to be above
$5150{\mathrm{\,MeV\!/}c^{2}}$. The contribution from
$B^{0}_{(s)}$ decays to identical final states but without intermediate
charmed mesons is negligible due to the requirement on the $D^{*-}$ candidate
invariant mass. A small but significant number of background events are
expected from $B^{0}\rightarrow D^{*-}K^{+}$ decays with the $K^{+}$
misidentified as a pion. The branching fractions of $\overline{B}{}^{0}_{s}\rightarrow D^{*-}K^{+}$ and $\Lambda^{0}_{b}\rightarrow D^{*-}p$ are expected to be small due to CKM suppression, so that these potential backgrounds are negligible.
Since the $B^{0}$ decay mode is several orders of magnitude more abundant than
the $B^{0}_{s}$ decay, it is critical to understand precisely the shape of the
$B^{0}$ signal peak. The dependence of the width of the peak on different
kinematic variables of the $B^{0}$ decay was investigated. The strongest
correlation was found to be with the angle between the momenta of the $D^{*-}$
candidate and the bachelor $\pi^{+}$ in the lab frame, denoted as $\theta_{\rm
bach}$. Simulated pseudo-experiments were used to find an optimal number of
$\theta_{\rm bach}$ bins to be used in a simultaneous fit. The outcome is that
five bins are used, with ranges 0–0.046, 0.046–0.067, 0.067–0.092, 0.092–0.128
and 0.128–0.4 $\rm\,rad$, chosen to have approximately equal numbers of
$B^{0}$ decays in each. The peak width in the highest bin is approximately 60
% of that in the lowest bin. The pseudo-experiments show that the simultaneous
fit in bins of $\theta_{\rm bach}$ is approximately 20 % more sensitive to a
potential $B^{0}_{s}$ signal than the fit without binning.
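A minimal sketch of how bin edges with approximately equal $B^{0}$ yields can be chosen is given below; the $\theta_{\rm bach}$ distribution is a toy stand-in, so the resulting edges only illustrate the quantile-based procedure and do not reproduce the values quoted above.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy stand-in for the theta_bach distribution of B0 -> D*-pi+ candidates (radians)
theta_bach = rng.gamma(shape=2.0, scale=0.04, size=30_000)
theta_bach = theta_bach[theta_bach < 0.4]

# Five bins with approximately equal numbers of B0 decays per bin
edges = np.quantile(theta_bach, np.linspace(0.0, 1.0, 6))
edges[0], edges[-1] = 0.0, 0.4
counts, _ = np.histogram(theta_bach, bins=edges)
print("bin edges [rad]:", np.round(edges, 3))
print("candidates per bin:", counts)
```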
The signal yields are obtained from a maximum likelihood fit to the
$D^{*-}\pi^{+}$ invariant mass distribution in the range
$5150$–$5600{\mathrm{\,MeV\!/}c^{2}}$. The fit is performed
simultaneously in the five $\theta_{\rm bach}$ bins. The fit includes double
Gaussian shapes, where the two Gaussian functions share a common mean, for
$B^{0}$ and $B^{0}_{s}$ signals, together with an exponential component for
the partially reconstructed background, a linear component for the
combinatorial background and a non-parametric function, derived from
simulation, for $B^{0}\rightarrow D^{*-}K^{+}$ decays. The probability density
function (PDF) for the $B^{0}\rightarrow D^{*-}K^{+}$ background is shifted by
the mass difference between data and simulation for each bin of $\theta_{\rm
bach}$.
The parameters of the double Gaussian shapes are constrained to be identical
for $B^{0}$ and $B^{0}_{s}$ signals, with an offset in their mean values fixed
to the known $B^{0}$–$B^{0}_{s}$ mass difference [8]. Additionally, the
relative normalisation of the two Gaussian functions and the ratio of their
widths are constrained within uncertainties to the value obtained in
simulation. A total of thirty-three parameters are allowed to vary in the fit:
the ratio of yields $N(B^{0}_{s})/N(B^{0})$, the linear slope of the
combinatorial background and the exponential parameter of the partially
reconstructed background, plus separate parameters in each of the $\theta_{\rm
bach}$ bins to describe the peak position and core Gaussian width of the
signal PDF, and the yields of the $B^{0}$ peak, the combinatorial background,
the partially reconstructed background, and the background from
$B^{0}\rightarrow D^{*-}K^{+}$.
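The structure of this fit can be sketched for a single $\theta_{\rm bach}$ bin as follows. This is not the analysis code: it fixes the width ratio and core fraction of the double Gaussian to placeholder values rather than constraining them to simulation, omits the non-parametric $B^{0}\rightarrow D^{*-}K^{+}$ template and the simultaneous binning, and fits toy data generated on the spot.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

M_LO, M_HI = 5150.0, 5600.0          # MeV/c^2, fit range
DM_BS_BD = 87.2                      # MeV/c^2, approximate B0s-B0 mass difference

def double_gauss(m, mean, sigma, sigma_ratio=1.8, frac_core=0.8):
    """Two Gaussians sharing a common mean (placeholder shape parameters)."""
    return (frac_core * norm.pdf(m, mean, sigma)
            + (1.0 - frac_core) * norm.pdf(m, mean, sigma_ratio * sigma))

def expo(m, slope):
    """Exponential shape normalised on [M_LO, M_HI]."""
    return slope * np.exp(slope * m) / (np.exp(slope * M_HI) - np.exp(slope * M_LO))

def linear(m, grad):
    """Linear shape normalised on [M_LO, M_HI]."""
    width = M_HI - M_LO
    return (1.0 + grad * (m - M_LO)) / (width + 0.5 * grad * width**2)

def nll(params, m):
    n_bd, n_bs, n_part, n_comb, mean, sigma, slope, grad = params
    if min(n_bd, n_bs, n_part, n_comb, sigma) < 0:
        return np.inf                                       # keep yields and width physical
    pdf = (n_bd * double_gauss(m, mean, sigma)
           + n_bs * double_gauss(m, mean + DM_BS_BD, sigma) # B0s mean offset fixed
           + n_part * expo(m, slope)                         # partially reconstructed
           + n_comb * linear(m, grad))                       # combinatorial
    n_tot = n_bd + n_bs + n_part + n_comb
    return n_tot - np.sum(np.log(np.clip(pdf, 1e-300, None)))   # extended likelihood

# Toy data standing in for one theta_bach bin
rng = np.random.default_rng(2)
m_data = np.concatenate([
    rng.normal(5280.0, 18.0, 6000),            # B0 signal
    M_LO + rng.exponential(120.0, 1500),       # partially reconstructed background
    rng.uniform(M_LO, M_HI, 1200),             # combinatorial background
])
m_data = m_data[(m_data > M_LO) & (m_data < M_HI)]

start = (6000.0, 10.0, 1500.0, 1200.0, 5280.0, 18.0, -0.008, 0.0)
res = minimize(nll, start, args=(m_data,), method="Nelder-Mead",
               options={"maxiter": 20000, "fatol": 1e-3})
n_bd, n_bs = res.x[:2]
print(f"N(B0) = {n_bd:.0f}, N(B0s) = {n_bs:.0f}, ratio = {n_bs / max(n_bd, 1.0):.1e}")
```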
The results of the fit are shown in Fig. 2. The total number of
$B^{0}\rightarrow D^{*\mp}\pi^{\pm}$ decays is found to be $30\,000\pm 400$,
and the ratio of yields is determined to be $N(B^{0}_{s})/N(B^{0})=(1.4\pm
3.5)\times 10^{-4}$, where the uncertainty is statistical only. The number of
$B^{0}\rightarrow D^{*-}K^{+}$ decays found is $1\,200\pm 200$, with a
correlation of $7\,\%$ to the ratio of signal yields.
Figure 2: Simultaneous fit to the full data sample in five bins of
$\theta_{\rm bach}$: (a) 0–0.046, (b) 0.046–0.067, (c) 0.067–0.092, (d)
0.092–0.128 and (e) 0.128–0.4 $\rm\,rad$. Note the $y$-axis scale is
logarithmic and is the same for each bin. Data points are shown in black, the
full PDF as a solid blue line and the component PDFs as: (red dot-dashed)
partially reconstructed background, (magenta dashed) combinatorial background,
(blue dashed) $B^{0}$ signal, (black dot-dashed) $B^{0}_{s}$ signal and (green
3 dot-dashed) $B^{0}\rightarrow D^{*-}K^{+}$ background.
The ratio of yields is converted to a branching fraction following
${\cal B}(B^{0}_{s}\rightarrow
D^{*\mp}\pi^{\pm})=\frac{N(B^{0}_{s})}{N(B^{0})}\times\frac{\epsilon(B^{0})}{\epsilon(B^{0}_{s})}\times\frac{f_{d}}{f_{s}}\times{\cal
B}(B^{0}\rightarrow D^{*-}\pi^{+})\,,$ (1)
where $\epsilon(B^{0})$ and $\epsilon(B^{0}_{s})$ are the efficiencies for the
$B^{0}$ and $B^{0}_{s}$ decay modes respectively, while $f_{d}$ ($f_{s}$) is
the probability that a $b$ quark produced in the acceptance results in a
$B^{0}$ ($B^{0}_{s}$) meson. Their ratio has been determined to be
$f_{s}/f_{d}=0.256\pm 0.020$ [23].
The total efficiencies are $(0.165\pm 0.002)\,\%$ and $(0.162\pm 0.002)\,\%$
for the $B^{0}$ and $B^{0}_{s}$ decay modes respectively, including
contributions from detector acceptance, selection criteria, PID and trigger
effects. The ratio is consistent with unity, as expected. The PID efficiency
is measured using a control sample of $D^{*-}\rightarrow\overline{D}{}^{0}\pi^{-},\,\overline{D}{}^{0}\rightarrow K^{+}\pi^{-}$ decays to
obtain background-subtracted efficiency tables for kaons and pions as
functions of their $p$ and $p_{\rm T}$ [2]. The kinematic properties of the
tracks in signal decays are obtained from simulation, allowing the PID
efficiency for each event to be obtained from the tables. Note that this
calibration sample is dominated by promptly produced $D^{*}$ mesons. The
remaining contributions to the total efficiency are determined from
simulation, and validated using data.
Systematic uncertainties on ${\cal B}(B^{0}_{s}\rightarrow D^{*\mp}\pi^{\pm})$
are assigned from the following sources; they are given in units of $10^{-6}$ and summarised in Table 1.
Table 1: Systematic uncertainties on ${\cal B}(B^{0}_{s}\rightarrow D^{*\mp}\pi^{\pm})$.
Source | Uncertainty $(10^{-6})$
---|---
Efficiency | 0.02
Fit model | 1.44
Fit bias | 0.12
Multiple candidates | 0.22
$f_{s}/f_{d}$ | 0.12
${\cal B}(B^{0}\rightarrow D^{*-}\pi^{+})$ | 0.08
Total | 1.47
Event selection efficiencies for both modes are found to be consistent in
simulation to within $2\,\%$, yielding a systematic uncertainty of $0.02$. The
fit model is varied by replacing the double Gaussian signal shapes with double
Crystal Ball [24] functions (with both upper and lower tails), changing the
linear combinatorial background shape to quadratic and including a possible
contribution from $\overline{B}{}^{0}_{s}\rightarrow D^{*-}K^{+}$. The
non-parametric function for the $B^{0}\rightarrow D^{*-}K^{+}$ background was
scaled in each bin to account for the change in the width of the $B^{0}$
signal. Combined in quadrature these sources contribute $1.44$ to the
systematic uncertainty. Possible biases in the determination of the fit
parameters are investigated by simulated pseudo-experiments, leading to an
uncertainty of $0.12$. Events with multiple candidates are investigated by
performing a fit having chosen one candidate at random. This fit is performed
100 times, with different seeds, and the spread of the results, $0.22$, is
taken as the systematic uncertainty. The uncertainty on the quantity
$f_{s}/f_{d}$ contributes $0.12$, while that on ${\cal B}(B^{0}\rightarrow
D^{*-}\pi^{+})$ gives $0.08$. Combining all sources in quadrature, the total
absolute systematic uncertainty is $1.47\times 10^{-6}$, and the $B^{0}_{s}$
branching fraction is determined to be ${\cal B}(B^{0}_{s}\rightarrow
D^{*\mp}\pi^{\pm})=(1.5\pm 3.8\pm 1.5)\times 10^{-6}$, where the first
uncertainty is statistical and the second is systematic.
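As a simple numerical cross-check of Eq. (1) and of the quadrature sum in Table 1, using only the central values quoted above (the uncertainties on the inputs are ignored here):

```python
import numpy as np

# Central values quoted in the text
ratio_yields = 1.4e-4                 # N(B0s)/N(B0)
eff_bd, eff_bs = 0.00165, 0.00162     # total efficiencies for B0 and B0s modes
fs_over_fd = 0.256
bf_bd = 2.76e-3                       # B(B0 -> D*- pi+)

bf_bs = ratio_yields * (eff_bd / eff_bs) * (1.0 / fs_over_fd) * bf_bd
print(f"B(B0s -> D*-+ pi+-) central value ~ {bf_bs:.2e}")   # ~1.5e-6

# Quadrature combination of the systematic uncertainties in Table 1 (units of 1e-6)
syst = [0.02, 1.44, 0.12, 0.22, 0.12, 0.08]
print(f"total systematic ~ {np.sqrt(np.sum(np.square(syst))):.2f} x 1e-6")   # ~1.47
```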
A number of cross-checks are performed to test the stability of the result.
Candidates are divided, based upon the hardware trigger decision, into three groups: events in which a particle from the signal decay created a large enough cluster in the calorimeter to fire the trigger, events that were triggered independently of the signal decay, and events that were triggered by both the signal decay and the rest of the event. The neural network and PID requirements are tightened and loosened. The non-parametric PDF used to describe the background from $B^{0}\rightarrow D^{*-}K^{+}$ decays is smoothed to eliminate potential statistical fluctuations. All cross-checks give consistent results.
Since no significant signal is observed, upper limits are set, at both 90 %
and 95 % confidence level (CL), using a Bayesian approach. The statistical
likelihood curve from the fit is convolved with a Gaussian function of width
given by the systematic uncertainty, and the upper limits are taken as the
values containing 90 % (95 %) of the integral of the likelihood in the
physical region. The obtained limits are
${\cal B}(B^{0}_{s}\rightarrow D^{*\mp}\pi^{\pm})<6.1\ (7.8)\times 10^{-6}\
{\rm at}\ 90\,\%\ (95\,\%)\ {\rm CL}\,.$
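The limit-setting procedure can be sketched generically: convolve the statistical likelihood curve with a Gaussian of width equal to the systematic uncertainty and integrate over the physical region. In the sketch below a Gaussian stand-in replaces the actual likelihood curve obtained from the fit, so the printed numbers only illustrate the method and are not expected to reproduce the published limits.

```python
import numpy as np

bf = np.linspace(-20e-6, 40e-6, 4001)                 # branching fraction grid
stat_width, syst_width = 3.8e-6, 1.5e-6

# Stand-in for the statistical likelihood curve from the fit (assumed Gaussian here)
like_stat = np.exp(-0.5 * ((bf - 1.5e-6) / stat_width) ** 2)

# Convolve with a Gaussian of width equal to the systematic uncertainty
step = bf[1] - bf[0]
half = int(6 * syst_width / step)                     # kernel out to +-6 sigma
offsets = np.arange(-half, half + 1) * step
kernel = np.exp(-0.5 * (offsets / syst_width) ** 2)
like = np.convolve(like_stat, kernel, mode="same")

# Restrict to the physical region and take the 90% / 95% quantiles of the integral
physical = bf >= 0
cdf = np.cumsum(like[physical])
cdf /= cdf[-1]
for cl in (0.90, 0.95):
    limit = bf[physical][np.searchsorted(cdf, cl)]
    print(f"upper limit at {cl:.0%} CL ~ {limit:.2e}")
```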
In summary, the decay $B^{0}_{s}\rightarrow D^{*\mp}\pi^{\pm}$ is searched for
in a data sample corresponding to an integrated luminosity of $1.0\mbox{\,fb}^{-1}$ collected with the LHCb
detector during 2011. No significant signal is observed and upper limits on
the branching fraction are set. The absence of a detectable signal indicates
that rescattering effects may make significant contributions to other hadronic
decays, such as $B^{0}_{s}\rightarrow\pi^{+}\pi^{-}$ and $B^{0}_{s}\rightarrow
D\overline{D}{}$, as recently suggested [5].
## Acknowledgements
We express our gratitude to our colleagues in the CERN accelerator departments
for the excellent performance of the LHC. We thank the technical and
administrative staff at the LHCb institutes. We acknowledge support from CERN
and from the national agencies: CAPES, CNPq, FAPERJ and FINEP (Brazil); NSFC
(China); CNRS/IN2P3 and Region Auvergne (France); BMBF, DFG, HGF and MPG
(Germany); SFI (Ireland); INFN (Italy); FOM and NWO (The Netherlands); SCSR
(Poland); ANCS/IFA (Romania); MinES, Rosatom, RFBR and NRC “Kurchatov
Institute” (Russia); MinECo, XuntaGal and GENCAT (Spain); SNSF and SER
(Switzerland); NAS Ukraine (Ukraine); STFC (United Kingdom); NSF (USA). We
also acknowledge the support received from the ERC under FP7. The Tier1
computing centres are supported by IN2P3 (France), KIT and BMBF (Germany),
INFN (Italy), NWO and SURF (The Netherlands), PIC (Spain), GridPP (United
Kingdom). We are thankful for the computing resources put at our disposal by
Yandex LLC (Russia), as well as to the communities behind the multiple open
source software packages that we depend on.
## References
* [1] LHCb collaboration, R. Aaij et al., First observation of $B^{0}\rightarrow D^{+}D^{-}$, $D_{s}^{+}D^{-}$ and $D^{0}\bar{D}^{0}$ decays, arXiv:1302.5854, submitted to Phys. Rev. D
* [2] LHCb collaboration, R. Aaij et al., Measurement of $b$-hadron branching fractions for two-body decays into charmless charged hadrons, JHEP 10 (2012) 037, arXiv:1206.2794
* [3] M. Gronau, O. F. Hernandez, D. London, and J. L. Rosner, Broken SU(3) symmetry in two-body $B$ decays, Phys. Rev. D52 (1995) 6356, arXiv:hep-ph/9504326
* [4] R. Fleischer, New strategies to obtain insights into $C\\!P$ violation through $B_{s}\rightarrow D_{s}^{\pm}K^{\mp}$, $D_{s}^{*\pm}K^{\mp}$, … and $B_{d}\rightarrow D^{\pm}\pi^{\mp}$, $D^{*\pm}\pi^{\mp}$, … decays, Nucl. Phys. B671 (2003) 459, arXiv:hep-ph/0304027
* [5] M. Gronau, D. London, and J. L. Rosner, Rescattering contributions to rare $B$-meson decays, Phys. Rev. D87 (2013) 036008, arXiv:1211.5785
* [6] LHCb collaboration, R. Aaij et al., First evidence for the annihilation decay mode $B^{+}\rightarrow D_{s}^{+}\phi$, JHEP 02 (2013) 043, arXiv:1210.1089
* [7] BaBar collaboration, B. Aubert et al., Branching fraction measurement of $B^{0}\rightarrow D^{(*)+}\pi^{-}$, $B^{-}\rightarrow D^{(*)0}\pi^{-}$ and isospin analysis of $\bar{B}\rightarrow D^{(*)}\pi$ decays, Phys. Rev. D75 (2007) 031101, arXiv:hep-ex/0610027
* [8] Particle Data Group, J. Beringer et al., Review of particle physics, Phys. Rev. D86 (2012) 010001
* [9] LHCb collaboration, A. A. Alves Jr. et al., The LHCb detector at the LHC, JINST 3 (2008) S08005
* [10] R. Aaij et al., The LHCb trigger and its performance, arXiv:1211.3055, submitted to JINST
* [11] M. Adinolfi et al., Performance of the LHCb RICH detector at the LHC, arXiv:1211.6759, submitted to Eur. Phys. J. C
* [12] LHCb collaboration, R. Aaij et al., Observation of the decay $B^{0}\rightarrow\bar{D}^{0}K^{+}K^{-}$ and evidence of $B^{0}_{s}\rightarrow\bar{D}^{0}K^{+}K^{-}$, Phys. Rev. Lett. 109 (2012) 131801, arXiv:1207.5991
* [13] T. Sjöstrand, S. Mrenna, and P. Skands, PYTHIA 6.4 physics and manual, JHEP 05 (2006) 026, arXiv:hep-ph/0603175
* [14] I. Belyaev et al., Handling of the generation of primary events in Gauss, the LHCb simulation framework, Nuclear Science Symposium Conference Record (NSS/MIC) IEEE (2010) 1155
* [15] D. J. Lange, The EvtGen particle decay simulation package, Nucl. Instrum. Meth. A462 (2001) 152
* [16] GEANT4 collaboration, J. Allison et al., Geant4 developments and applications, IEEE Trans. Nucl. Sci. 53 (2006) 270
* [17] GEANT4 collaboration, S. Agostinelli et al., GEANT4: a simulation toolkit, Nucl. Instrum. Meth. A506 (2003) 250
* [18] M. Clemencic et al., The LHCb simulation application, Gauss: design, evolution and experience, J. of Phys. : Conf. Ser. 331 (2011) 032023
* [19] LHCb collaboration, R. Aaij et al., First observation of the decay $\bar{B}^{0}_{s}\rightarrow D^{0}K^{*0}$ and a measurement of the ratio of branching fractions $\frac{{\cal B}(\bar{B}^{0}_{s}\rightarrow D^{0}K^{*0})}{{\cal B}(\bar{B}^{0}\rightarrow D^{0}\rho^{0})}$, Phys. Lett. B706 (2011) 32, arXiv:1110.3676
* [20] M. Pivk and F. R. Le Diberder, sPlot: a statistical tool to unfold data distributions, Nucl. Instrum. Meth. A555 (2005) 356, arXiv:physics/0402083
* [21] M. Feindt and U. Kerzel, The NeuroBayes neural network package, Nucl. Instrum. Meth. A559 (2006) 190
* [22] LHCb collaboration, R. Aaij et al., Observation of $C\\!P$ violation in $B^{\pm}\rightarrow DK^{\pm}$ decays, Phys. Lett. B712 (2012) 203, arXiv:1203.3662
* [23] LHCb collaboration, R. Aaij et al., Measurement of the ratio of fragmentation functions $f_{s}/f_{d}$ and the dependence on $B$ meson kinematics, arXiv:1301.5286, submitted to JHEP
* [24] T. Skwarnicki, A study of the radiative cascade transitions between the Upsilon-prime and Upsilon resonances, PhD thesis, Institute of Nuclear Physics, Krakow, 1986, DESY-F31-86-02
1302.6492
|
# Electromagnetic momentum and the energy–momentum tensor in a linear medium
with magnetic and dielectric properties
Michael E. Crenshaw US Army Aviation and Missile Research, Development, and
Engineering Center, Redstone Arsenal, AL 35898, USA
###### Abstract
The well-defined total energy and total momentum in a thermodynamically closed
system with complete equations of motion are used to construct the total
energy–momentum tensor for a stationary simple linear material with both
magnetic and dielectric properties illuminated by a quasimonochromatic pulse
of light through a gradient-index antireflection coating. The perplexing
issues surrounding the Abraham and Minkowski momenta are bypassed by working
entirely with conservation principles, the total energy, and the total
momentum. We derive electromagnetic continuity equations and equations of
motion for the macroscopic fields based on the material four-divergence of the
traceless, symmetric total energy–momentum tensor. The energy–momentum
formalism is consistent with the derivation of equations of motion for
macroscopic fields from a Lagrangian.
## I Introduction
The resolution of the Abraham–Minkowski momentum controversy for the
electromagnetic momentum and the energy–momentum tensor in a linear medium is
multifaceted, complex, and nuanced. It is not sufficient to simply derive an
electromagnetic momentum or energy–momentum tensor from the macroscopic
Maxwell equations. That much is apparent from any examination of the
scientific literature. After a century of study BIPfei ; BIMilBoy ; BIKemp ;
BIBaxL ; BIBarL , almost all aspects of the Abraham–Minkowski momentum
controversy have been carefully scrutinized, yet there is scarce mention in
the literature of conservation of linear momentum. None of the well-known
forms of continuum electrodynamic momentum, Minkowski BIMin , Abraham BIAbr ,
Einstein–Laub BIEinLau , Peierls BIPei , etc., are conserved and each one is
generally regarded as the momentum of some unspecified or arbitrary portion of
the whole system BIPfei ; BIMilBoy ; BIKemp ; BIBaxL ; BIBarL ; BIPei ; BIKran
; BIPenHau ; BIMikura ; BIGord ; BIBPRL . A number of composite momentums have
been constructed from the Abraham, Minkowski, or other momentum for the field
subsystem with a momentum for the material subsystem BIPfei ; BIMilBoy ;
BIKemp ; BIBaxL ; BIBarL ; BIPei ; BIKran ; BIPenHau ; BIMikura ; BIGord ;
BIBPRL . With the exception of the Gordon BIGord ; BICB ; BICB2 ; BISPIE
momentum in a dielectric, conservation of the composite momentum in a
thermodynamically closed system has not been explicitly demonstrated.
It is important to work with the conserved forms of energy and momentum in
order to avoid the ambiguous definitions of field and material momentums in
unclosed physical systems with incomplete equations of motion BIKran ; BICB ;
BICB2 ; BISPIE . In this article, we carefully define a thermodynamically
closed continuum electrodynamic system with complete equations of motion
containing a stationary simple linear medium illuminated by a plane
quasimonochromatic electromagnetic pulse through a gradient-index
antireflection coating and identify the conserved total energy and the
conserved total momentum, thereby extending prior work BICB ; BICB2 ; BISPIE
on dielectrics to a medium with both magnetic and dielectric properties.
Assuming the validity of the macroscopic Maxwell–Heaviside equations, the
total momentum in a simple linear medium is found using the law of
conservation of linear momentum. We then populate the total energy–momentum
tensor with the densities of the conserved energy and momentum using the
uniqueness property that certain elements of the tensor correspond to elements
of the four-momentum. We prove that the uniqueness property directly
contradicts the property that the four-divergence of the energy–momentum
tensor generates continuity equations BISPIE . We retain the uniqueness
property and recast the four-divergence of the energy–momentum tensor in terms
of a material four-divergence operator whose time-like coordinate depends on
the refractive index BICB ; BICB2 ; BISPIE ; BIFinn . Once the elements and
properties of the energy–momentum tensor are defined, we derive the continuity
equations for the total energy. The total energy continuity equation is a
mixed second-order differential equation that we write as first-order
equations of motion for the macroscopic fields. These equations are
mathematically equivalent to the macroscopic Maxwell–Heaviside equations in
that they can be transformed into one another using vector identities. It is
not that simple, though, because the momentum continuity equation that is
derived from the equations of motion for the fields is not consistent with
an energy–momentum formalism. We propose a resolution to this contradiction by
a reformulation of continuum electrodynamics in which the permittivity and
permeability are eliminated in favor of a linear refractive index that is
defined by the electric and magnetic properties of the material.
## II The Total Energy and the Total Momentum
A physical theory is an abstract mathematical description of some portion of a
real-world physical process. The theoretical description is useful to the
extent that there are correspondences between a subset of the theory and a
subset of the behavior of the real-world system BIRind . Real materials are
really complicated, and we must carefully define the material properties and boundary conditions in order to ensure that the mathematical description
contains the most important characteristics while excluding the less
significant details. We define a simple linear material as an isotropic and
homogeneous medium that has a linear magnetic and a linear dielectric response
to an electromagnetic field that is tuned sufficiently far from any material
resonances that absorption and dispersion are negligible. Electrostrictive
effects and magnetostrictive effects are also taken as being negligible
BIMikura . Clearly, we are not dealing with any real material, but the
idealization of a simple linear material is necessary in order to develop the
basic concepts in a clear, concise, convincing, and incontrovertible manner.
A rectangular prism of the simple linear material in free space is illuminated
by a quasimonochromatic pulse of radiation at normal incidence in the plane-
wave limit. This system is thermodynamically closed with a conserved total
energy and a conserved total momentum but we also require a complete set of
equations of motion in order to obtain well-defined quantities for the total
energy and total momentum. There are various formulations of the
electrodynamics of uniformly moving media, but a complete treatment of
continuum electrodynamics in moving media is far from settled BIPenHau . We
also have to recognize that the material is accelerating due to a radiation
surface pressure that can be attributed to the partial reflection of the
incident radiation. In this work, the prism of material is covered with a thin
gradient-index antireflection coating, not an interference antireflection
coating, that makes reflections and the acceleration from radiation surface
pressure negligible allowing the material to be regarded as stationary in the
laboratory frame of reference. The assumption of a rare vapor, by Gordon
BIGord for example, accomplishes the same purpose of making reflections
negligible, but then requires justification to apply the results to a material
with a non-perturbative index of refraction. There is nothing unusual,
mysterious, nefarious, or confusing about the conditions described above
because the use of a gradient-index antireflection coating on a homogeneous
linear material is required, and has always been required, for a rigorous
application of Maxwell’s equations to solids. Still, Maxwell’s equations are
almost always applied to moving and accelerating materials without adequate
justification. That is not to say that the use of the macroscopic Maxwell
equations cannot be justified in most cases, but it is especially important to
justify the use of the macroscopic Maxwell equations when invoking
conservation properties.
Classical continuum electrodynamics is founded on the macroscopic Maxwell
equations, so that is where we start our work. The common macroscopic
Maxwell–Heaviside equations
$\nabla\times{\bf E}=-\frac{\mu}{c}\frac{\partial{\bf H}}{\partial t}$ (1)
$\nabla\times{\bf H}=\frac{\varepsilon}{c}\frac{\partial{\bf E}}{\partial t}$
(2) $\nabla\cdot{\bf B}=0$ (3) $\nabla\cdot\varepsilon{\bf E}=0$ (4)
are the complete equations of motion for the macroscopic fields in a
stationary simple linear medium. Here, ${\bf E}$ is the electric field, ${\bf
B}$ is the magnetic field, ${\bf H}={\bf B}/\mu$ is the auxiliary magnetic
field, $\varepsilon$ is the electric permittivity, $\mu$ is the magnetic
permeability, and $c$ is the speed of light in the vacuum. The medium is
explicitly required to be stationary because the macroscopic Maxwell–Heaviside
equations, Eqs. (1)–(4), are not complete for moving or accelerating
materials. The electric and magnetic fields can be defined in terms of the
vector potential in the usual manner as
${\bf E}=-\frac{1}{c}\frac{\partial{\bf A}}{\partial t}$ (5) ${\bf
B}=\nabla\times{\bf A}\,.$ (6)
Then, the propagation of an electromagnetic field through free-space and into
a simple linear medium is described by the wave equation,
$\nabla\times(\nabla\times{\bf A})+\frac{n^{2}}{c^{2}}\frac{\partial^{2}{\bf
A}}{\partial t^{2}}=\frac{\nabla\mu}{\mu}\times(\nabla\times{\bf A})\,,$ (7)
assuming the validity of the macroscopic Maxwell–Heaviside equations, Eqs.
(1)–(4). Here, $n({\bf r})=\sqrt{\mu\varepsilon}$ is the spatially slowly
varying linear refractive index. The spatial variation of the index and
permeability is limited to a narrow transition region in which these
quantities change gradually from the vacuum values to the nominal material
properties. We write the vector potential in terms of a slowly varying
envelope function and a carrier wave as
${\bf A}({\bf r},t)=\frac{1}{2}\left({\bf\tilde{A}}({\bf
r},t)e^{-i(\omega_{d}t-{\bf k}_{d}\cdot{\bf r})}+{\bf\tilde{A}}^{*}({\bf
r},t)e^{i(\omega_{d}t-{\bf k}_{d}\cdot{\bf r})}\right)\,,$ (8)
where $\tilde{A}$ is a slowly varying function of ${\bf r}$ and $t$, ${\bf
k}_{d}=(n\omega_{d}/c){\bf e}_{\bf k}$ is the wave vector that is associated
with the center frequency of the field $\omega_{d}$, and ${\bf e}_{\bf k}$ is
a unit vector in the direction of propagation.
Figure 1 shows a one-dimensional representation of the slowly varying
amplitude of the plane incident field
$\tilde{A}_{i}(z)=({\bf\tilde{A}}(z,t_{0})\cdot{\bf\tilde{A}}^{*}(z,t_{0}))^{1/2}$
about to enter the simple linear medium with index $n=1.386$ and permeability
$\mu=1.2$ through a gradient-index antireflection coating.
Figure 1: Incident field (amplitude arbitrary) just before it enters the
linear medium.
The gradient that has been applied to the index has also been applied to the
permeability as a matter of convenience. Figure 2 presents a time-domain
numerical solution of the wave equation at a later time $t_{1}$ when the
refracted field
$\tilde{A}_{t}(z)=({\bf\tilde{A}}(z,t_{1})\cdot{\bf\tilde{A}}^{*}(z,t_{1}))^{1/2}$
is entirely inside the medium.
Figure 2: Refracted field entirely within the linear medium.
The pulse has not propagated as far as it would have propagated in the vacuum
due to the reduced speed of light $c/n$ in the material. In addition, the
spatial extent of the refracted pulse in the medium is $w_{t}=w_{i}/n$ in
terms of the width $w_{i}$ of the incident pulse due to the reduced speed of
light. As shown in Fig. 2, the amplitude of the refracted field is
${\tilde{A}}_{t}=\sqrt{{\mu}/{n}}{\tilde{A}}_{i}$, that is, the amplitude of
the incident field scaled by $\sqrt{\mu/n}$. Using these relations we can
construct the temporally invariant quantity
$U_{total}=\int_{\sigma}\frac{n}{2}\left(\sqrt{\frac{n}{\mu}}\frac{\omega_{d}}{c}\tilde{A}\right)\left(\sqrt{\frac{n}{\mu}}\frac{\omega_{d}}{c}\tilde{A}\right)^{*}dv$
(9)
and note that $U_{total}$ is conserved when the integration is performed over
a region $V$ that contains all of the electromagnetic field that is present in
the system. In that case, the region of integration $V$ is extended to all
space, $\sigma$. The pulse that is used in Fig. 1 has a generally rectangular
shape in order to facilitate a graphical interpretation of pulse width and
integration under the field envelope. It can be shown by additional numerical
solutions of the wave equation that the relations described above are quite
general in terms of the permeability and the refractive index, as well as the
shape and amplitude of a quasimonochromatic field.
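The numerical behaviour described above can be reproduced qualitatively with a short one-dimensional finite-difference sketch of Eq. (7). This is not the author's code: the tanh-shaped gradient-index profile, the grid, the super-Gaussian envelope, and the unit system ($c=1$) are assumptions made for illustration. It checks the amplitude scaling $\sqrt{\mu/n}$, the width scaling $1/n$, and the conservation of the field energy as the pulse crosses the antireflection coating.

```python
import numpy as np
from scipy.signal import hilbert

# ---- grid and material profile (arbitrary units, c = 1) -------------------
c, dz, dt = 1.0, 0.1, 0.05
z = np.arange(0.0, 400.0, dz)
n_med, mu_med = 1.386, 1.2                       # values used in Figs. 1 and 2
ramp = 0.5 * (1 + np.tanh((z - 200.0) / 8.0))    # gradient-index transition region
n = 1.0 + (n_med - 1.0) * ramp
mu = 1.0 + (mu_med - 1.0) * ramp
dmu_dz = np.gradient(mu, dz)

# ---- initial quasimonochromatic pulse in vacuum ----------------------------
k0 = 2 * np.pi / 4.0                             # carrier wave number (omega_d = c k0)
def pulse(x, center=100.0, width=60.0):
    env = np.exp(-((x - center) / (0.5 * width)) ** 8)   # smoothed rectangular envelope
    return env * np.cos(k0 * x)

A_prev, A = pulse(z + c * dt), pulse(z)          # rightward-moving pulse at t=-dt and t=0

def energy(A, A_prev):
    E = -(A - A_prev) / (c * dt)                 # E = -(1/c) dA/dt
    B = np.gradient(A, dz)                       # B = dA/dz for a transverse A in 1D
    return np.sum((n**2 * E**2 + B**2) / (2 * mu)) * dz

U0 = energy(A, A_prev)
env0 = np.abs(hilbert(pulse(z)))
w_i = dz * np.count_nonzero(env0 > 0.5 * env0.max())

# ---- leapfrog integration of Eq. (7) reduced to one dimension --------------
for _ in range(4400):
    d2A = (np.roll(A, -1) - 2 * A + np.roll(A, 1)) / dz**2
    dA = (np.roll(A, -1) - np.roll(A, 1)) / (2 * dz)
    A_next = 2 * A - A_prev + (c * dt)**2 / n**2 * (d2A - dmu_dz / mu * dA)
    A_next[0] = A_next[-1] = 0.0
    A_prev, A = A, A_next

U1 = energy(A, A_prev)
env1 = np.abs(hilbert(A))
w_t = dz * np.count_nonzero(env1 > 0.5 * env1.max())
print(f"field energy before/after: {U0:.3f} / {U1:.3f}")
print(f"peak amplitude in medium: {env1.max():.3f} "
      f"(expected sqrt(mu/n) = {np.sqrt(mu_med / n_med):.3f})")
print(f"pulse width ratio w_t/w_i = {w_t / w_i:.2f} (expected 1/n = {1 / n_med:.2f})")
```

With these assumed settings the two printed energy values agree at the percent level, and the measured amplitude and width ratios track $\sqrt{\mu/n}$ and $1/n$, which is the numerical content of Figs. 1 and 2.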
For a stationary linear medium, the total energy is the electromagnetic
energy. Although a material may possess many forms of energy, we intend total
energy to mean all of the energy that impacts the dynamics or electrodynamics
of the model system with the specified conditions. In particular, kinetic
energy is excluded by the requirement that the material remain stationary.
We assume that the electromagnetic energy density for a stationary simple
linear medium is
$\rho_{e}=\frac{1}{2}\frac{1}{\mu}\left(n^{2}{\bf E}^{2}+{\bf B}^{2}\right)\,,$ (10)
as inferred from the Poynting theorem. Again extending the region of
integration $V$ that contains all fields present to integration over all space
$\sigma$, the electromagnetic energy,
$U_{e}=\int_{\sigma}\rho_{e}dv=\int_{\sigma}\frac{1}{2}\frac{1}{\mu}\left(n^{2}{\bf
E}^{2}+{\bf B}^{2}\right)dv\,,$ (11)
is the total energy of the closed system containing a stationary simple linear
medium illuminated by a quasimonochromatic pulse of light. Using Eqs. (5) and
(6) to eliminate the electric and magnetic fields in Eq. (11), we obtain the
energy formula
$U_{e}=\int_{\sigma}\frac{1}{2}\frac{1}{\mu}\left(\left(\frac{n}{c}\frac{\partial{\bf
A}}{\partial t}\right)^{2}+(\nabla\times{\bf A})^{2}\right)dv\,.$ (12)
Employing the expression for the vector potential in terms of slowly varying
envelope functions and carrier waves, Eq. (8), results in
$U_{e}=\int_{\sigma}\frac{\omega_{d}^{2}n^{2}}{4\mu c^{2}}\left(\tilde{A}\tilde{A}^{*}-\tilde{A}^{2}e^{-2i(\omega_{d}t-k_{d}z)}+c.c.\right)dv\,.$ (13)
Applying a time average allows the double frequency terms to be neglected and
shows the equality of Eqs. (9) and (12),
$U_{e}=\int_{\sigma}\frac{\omega_{d}^{2}n^{2}}{2\mu
c^{2}}|\tilde{A}|^{2}dv=U_{total}\,.$ (14)
We have theoretical confirmation that our numerical $U_{total}$, Eq. (9), is
the total energy of the system and that it is conserved. More importantly, we
have an interpretation of the continuum electrodynamic energy formula, Eq.
(11), in term of what happens to the shape and amplitude of the
electromagnetic field as it propagates into the medium.
In the first half of the last century, there was an ongoing theoretical and
experimental effort devoted to determining the correct description of the
momentum of light propagating through a linear medium BIPfei ; BIMilBoy ;
BIKemp ; BIBaxL ; BIBarL . At various times, the issue was resolved in favor
of the Abraham momentum
${\bf G}_{A}=\int_{\sigma}\frac{{\bf E}\times{\bf H}}{c}dv$ (15)
and at other times the momentum was found to be the Minkowski momentum
${\bf G}_{M}=\int_{\sigma}\frac{{\bf D}\times{\bf B}}{c}dv\,.$ (16)
Since Penfield and Haus BIPenHau showed, in 1967, that neither the Abraham
momentum nor the Minkowski momentum is the total momentum, the resolution of
the Abraham–Minkowski controversy has been that the momentum for an arbitrary
or unspecified field subsystem must be supplemented by an appropriate material
momentum to obtain the total momentum BIPfei ; BIMilBoy ; BIKemp ; BIBaxL ;
BIBarL ; BIPei ; BIKran ; BIPenHau ; BIMikura ; BIGord ; BIBPRL . Then, it
must be demonstrated that the total momentum, so constructed, is actually
conserved in a thermodynamically closed system, as that is usually not the
case. The existence of a continuity equation, also known as a conservation
law, is not sufficient to prove conservation, as evidenced by the non-
conservation of Poynting’s vector ${\bf S}=c{\bf E}\times{\bf H}$.
Given the arbitrariness of the field momentum and the model dependence of the
material momentum that go into a construction of the total momentum, the only
convincing way to determine the total momentum is by the application of
conservation principles. Then conservation of the total energy is all that is
needed to prove that the total momentum BIGord ; BICB ; BICB2 ; BISPIE
${\bf G}_{total}=\int_{\sigma}\frac{1}{\mu}\frac{n{\bf E}\times{\bf
B}}{c}dv=\int_{\sigma}\frac{n{\bf E}\times{\bf H}}{c}dv$ (17)
is conserved in our closed system. In the limit of slowly varying plane waves,
we have the conserved vector quantity
${\bf G}_{total}=\int_{\sigma}\frac{\omega_{d}^{2}n^{2}}{2\mu
c^{3}}|\tilde{A}|^{2}dv{\bf e}_{\bf k}=\frac{U_{total}}{c}{\bf e}_{\bf k}$
(18)
that is equal to the conserved total energy, Eq. (14), divided by a constant
factor of $c$ and multiplied by a unit vector ${\bf e}_{\bf k}$. In addition
to being conserved, the total momentum, Eq. (17), is unique. There can be no
other non-trivial conserved macroscopic momentum in terms of the vector
product of electric and magnetic fields in our model system because these
would contain different combinations of the material properties. The momentum
formula, Eq. (17), was originally derived for a dielectric in 1973 by Gordon
BIGord who combined the Abraham momentum for the field with a material
momentum that was obtained from integrating the Lorentz force on the
microscopic constituents of matter. At the time, Gordon’s momentum was
unremarkable among a large group of momentum formulas that had been derived by
various methods. In fact, Gordon viewed it as a portion of the Minkowski
momentum, along with a pseudomomentum. What is extraordinary about the Gordon
momentum is that it is uniquely conserved in a thermodynamically closed
continuum electrodynamic system containing a stationary simple linear medium.
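As a concrete check of this uniqueness claim, the plane-wave bookkeeping can be
reproduced in a few lines. The sketch below (Python; the unit amplitude,
cross-section, pulse length, the choice $\mu=1$, and the scalings $L\to L/n$,
${\bf B}=n{\bf E}$ with the amplitude fixed by conservation of the energy of
Eq. (11), are assumptions of the sketch rather than statements taken from the
text) evaluates the Abraham, Minkowski, and Gordon integrals of Eqs. (15)–(17)
for a pulse that has entered a medium of index $n$ through an ideal
antireflection coating. Only the Gordon value is independent of $n$.

```python
import numpy as np

def pulse_bookkeeping(n, E0=1.0, L0=1.0, A=1.0, mu=1.0, c=1.0):
    """Rescale a quasimonochromatic pulse after it enters an index-n medium through
    an ideal antireflection coating and return the energy and momentum integrals.
    Assumed scalings: pulse length L0 -> L0/n, B = n*E, and the amplitude E fixed
    by conservation of the energy of Eq. (11)."""
    L = L0 / n                       # pulse is spatially compressed by n
    E = E0 / np.sqrt(n)              # fixed by U_e = const with rho_e = (n^2 E^2 + B^2)/(2 mu)
    B = n * E
    V = A * L
    U      = (n**2 * E**2 + B**2) / (2 * mu) * V     # energy, Eq. (11)
    G_abr  = E * B / (mu * c) * V                    # Abraham,   Eq. (15), with H = B/mu
    G_mink = (n**2 * E) * B / c * V                  # Minkowski, Eq. (16), with D = n^2 E (mu = 1 assumed)
    G_gord = n * E * B / (mu * c) * V                # Gordon,    Eq. (17)
    return U, G_abr, G_mink, G_gord

U0, GA0, GM0, GG0 = pulse_bookkeeping(n=1.0)         # vacuum reference values
for n in (1.5, 2.0, 3.0):
    U, GA, GM, GG = pulse_bookkeeping(n)
    print(f"n={n}:  U/U0={U/U0:.3f}  Abraham/ref={GA/GA0:.3f}  "
          f"Minkowski/ref={GM/GM0:.3f}  Gordon/ref={GG/GG0:.3f}")
```

Running the loop shows the Abraham ratio falling as $1/n$ and the Minkowski
ratio growing as $n$, while the energy and the Gordon momentum stay at their
vacuum values, which is the conservation statement of Eqs. (14) and (18).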
## III Total Energy–Momentum Tensor
A tensor is a very special mathematical object and a total energy–momentum
tensor is subject to even more stringent conditions. Any four continuity
equations can be combined using linear algebra to form a matrix differential
equation. Hence, the proliferation of matrices that have been purported to be
the energy–momentum tensor for a system containing an electromagnetic pulse
and a linear medium. In this section, we construct the unique total
energy–momentum tensor and the tensor continuity equation for a stationary
simple linear medium using the conservation, symmetry, trace, and divergence
conditions that must be satisfied.
For an unimpeded (force-free) flow, conservation of the components of the
total four-momentum $(U,{\bf G})$ BILL
$U=\int_{\sigma}T^{00}dv$ (19) $G^{i}=\frac{1}{c}\int_{\sigma}T^{i0}dv$ (20)
uniquely determines the first column of the total energy–momentum tensor. We
use the convention that Roman indices from the middle of the alphabet, like
$i$, run from 1 to 3 and Greek indices belong to $(0,1,2,3)$. As in Section
II, the region of integration has been extended to all-space, $\sigma$.
Conservation of angular momentum in a closed system imposes the diagonal
symmetry condition BILL
$T^{\alpha\beta}=T^{\beta\alpha}$ (21)
and uniquely determines the first row of the total energy–momentum tensor
based on the uniqueness of the first column. Applying the conservation of the
total energy and total momentum, we can construct the total energy–momentum
tensor
$T^{\alpha\beta}=$ $\left[\begin{matrix}(n^{2}{\bf E}^{2}+{\bf
B}^{2})/(2\mu)&(n{\bf E}\times{\bf H})_{x}&(n{\bf E}\times{\bf H})_{y}&(n{\bf
E}\times{\bf H})_{z}\cr(n{\bf E}\times{\bf
H})_{x}&W_{11}&W_{12}&W_{13}\cr(n{\bf E}\times{\bf
H})_{y}&W_{21}&W_{22}&W_{23}\cr(n{\bf E}\times{\bf
H})_{z}&W_{31}&W_{32}&W_{33}\cr\end{matrix}\right]$ (22)
where the elements of the Maxwell stress-tensor $W$ are yet to be specified.
Our approach here is very different from the usual technique of constructing
the energy–momentum tensor from the electromagnetic continuity equations. The
energy continuity equation is known to be given by Poynting’s theorem
$\frac{1}{c}\frac{\partial\rho_{e}}{\partial t}+\nabla\cdot\left({\bf
E}\times{\bf H}\right)=0\,.$ (23)
However, if we use Poynting’s theorem and the tensor energy continuity law
$\partial_{\beta}T^{0\beta}=0$ (24)
to populate the first row of the energy–momentum tensor, then Eq. (20) becomes
$G^{i}=\frac{1}{c}\int_{\sigma}({\bf E}\times{\bf H})_{i}dv$ (25)
by symmetry, Eq. (21). This result is contraindicated because
${\bf G}=\frac{1}{c}\int_{\sigma}\frac{\omega_{d}^{2}n}{2\mu
c^{2}}|\tilde{A}|^{2}{\bf e}_{\bf k}dv=\frac{U_{total}}{nc}{\bf e}_{\bf k}$
(26)
is not temporally invariant as the pulse travels from the vacuum into the
medium. The practice has been to ignore the condition that the components of
the four-momentum, Eqs. (19) and (20), be conserved and use the components of
the Poynting vector, along with the electromagnetic energy density, to
populate the first row of the energy–momentum tensor using the tensor energy
continuity law, Eq. (24).
We regard the conservation properties of the total energy–momentum tensor as
fundamental and these conservation properties conclusively establish Eq. (22)
as the form of the energy–momentum tensor. Applying the tensor continuity
equation, Eq. (24), to the energy–momentum tensor, Eq. (22), we obtain an
energy continuity equation
$\frac{1}{c}\frac{\partial\rho_{e}}{\partial t}+\nabla\cdot\left(n{\bf
E}\times{\bf H}\right)=0$ (27)
that is inconsistent with the Poynting theorem, Eq. (23). In the limit of
slowly varying plane waves, we obtain an obvious contradiction
$\frac{n^{2}\omega_{d}^{2}}{2\mu
c^{3}}\left({\bf\tilde{A}}\frac{\partial{\bf\tilde{A}}^{*}}{\partial
t}+c.c.\right)-\frac{n^{3}\omega_{d}^{2}}{2\mu
c^{3}}\left({\bf\tilde{A}}\frac{\partial{\bf\tilde{A}}^{*}}{\partial
t}+c.c.\right)=0$ (28)
from the energy continuity equation, Eq. (27).
We have identified the essential contradiction of the Abraham–Minkowski
controversy: Absent the uniqueness that accompanies conservation of total
energy and total momentum, the components of the first row and column of the
energy–momentum tensor are essentially arbitrary thereby rendering the
energy–momentum tensor meaningless. Yet, the energy continuity equation that
is obtained from the four-divergence of an energy–momentum tensor that is
populated with the densities of the conserved energy and momentum quantities
is demonstrably false. We are at an impasse with the existing theory that
requires two contradictory conditions to be satisfied BISPIE .
The resolution of this contradiction cannot be found within the formal system
of continuum electrodynamics. Based on the reduced speed of light in the
medium, $c/n$, we make the $ansatz$ that the continuity equations are
generated by
$\bar{\partial}_{\beta}T^{\alpha\beta}=f_{\alpha}\,,$ (29)
where $\bar{\partial}_{\beta}$ is the material four-divergence operator BICB ;
BICB2 ; BISPIE ; BIFinn
$\bar{\partial}_{\beta}=\left(\frac{n}{c}\frac{\partial}{\partial
t},\frac{\partial}{\partial x},\frac{\partial}{\partial
y},\frac{\partial}{\partial z}\right)\,.$ (30)
The continuity equation, Eq. (29), is generalized in a form that allows a
source/sink $f_{\alpha}$ of energy and the components of momentum BICB2 .
Reference BIxxx provides a solid theoretical justification for the ansatz of
Eqs. (29) and (30). Applying the tensor continuity law, Eq. (29), to the
energy–momentum tensor, Eq. (22), we obtain the energy continuity equation
$\frac{n}{c}\frac{\partial\rho_{e}}{\partial t}+\nabla\cdot\left(n{\bf
E}\times{\bf H}\right)=f_{0}\,.$ (31)
Now, let us re-consider the Poynting theorem. We multiply Poynting’s theorem,
Eq. (23), by $n$ and use a vector identity to commute the index of refraction
with the divergence operator to obtain
$\frac{n}{c}\frac{\partial\rho_{e}}{\partial t}+\nabla\cdot\left(n{\bf
E}\times{\bf H}\right)=\frac{\nabla n}{n}\cdot(n{\bf E}\times{\bf H})\,.$ (32)
The two energy continuity equations, Eqs. (31) and (32), are equal if we
identify the source/sink of energy as
$f_{0}=\frac{\nabla n}{n}\cdot(n{\bf E}\times{\bf H})\,.$ (33)
Then an inhomogeneity in the index of refraction, $\nabla n\neq 0$, is
associated with work done by the field on the material or work done on the
field by the interaction with the material BICB2 . Obviously, the gradient of
the index of refraction in our anti-reflection coating must be sufficiently
small that the work done is perturbative.
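The step of commuting the index of refraction with the divergence operator is
just the product rule $\nabla\cdot(n{\bf E}\times{\bf H})=n\nabla\cdot({\bf
E}\times{\bf H})+\nabla n\cdot({\bf E}\times{\bf H})$. A minimal symbolic check
of this identity is sketched below (Python with SymPy; the generic component
functions are placeholders introduced only for the check and are not fields
defined in the text).

```python
import sympy as sp
from sympy.vector import CoordSys3D, divergence, gradient

R = CoordSys3D('R')
x, y, z = R.x, R.y, R.z

# Generic index profile and generic field components (placeholders for this check).
n = sp.Function('n')(x, y, z)
Ex, Ey, Ez = [sp.Function(s)(x, y, z) for s in ('Ex', 'Ey', 'Ez')]
Hx, Hy, Hz = [sp.Function(s)(x, y, z) for s in ('Hx', 'Hy', 'Hz')]
E = Ex * R.i + Ey * R.j + Ez * R.k
H = Hx * R.i + Hy * R.j + Hz * R.k
S = E.cross(H)

# Identity used to move n inside the divergence:
#   div(n S) = n div(S) + grad(n) . S
lhs = divergence(n * S)
rhs = n * divergence(S) + gradient(n).dot(S)
print(sp.simplify(lhs - rhs))    # prints 0
```

With this identity, multiplying the Poynting theorem, Eq. (23), by $n$ and
moving $n$ inside the divergence reproduces Eq. (32) with the source term of
Eq. (33).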
The Poynting theorem found in Eq. (32) is a mixed second-order
vector differential equation. It can be separated into two mixed first-order
vector differential equations for the macroscopic electric and magnetic fields
$\frac{n}{c}\frac{\partial{\bf B}}{\partial t}=-\nabla\times n{\bf
E}+\frac{\nabla n}{n}\times n{\bf E}$ (34) $\frac{n}{c}\frac{\partial n{\bf
E}}{\partial t}=\mu\nabla\times{\bf H}.$ (35)
We provide confirmation of this result in the Appendix, deriving the equations
of motion for the macroscopic fields from a Lagrangian.
The remaining task is to show that the material four-divergence of the
energy–momentum tensor is a faithful representation of the electromagnetic
continuity equations. We multiply Eq. (34) by ${\bf H}$ and multiply Eq. (35)
by $n{\bf E}$. The resulting equations are summed to produce the energy
continuity equation
$\frac{n}{c}\frac{\partial\rho_{e}}{\partial t}+\nabla\cdot\left(n{\bf
E}\times{\bf H}\right)=\frac{\nabla n}{n}\cdot(n{\bf E}\times{\bf H})\,.$ (36)
To obtain the total momentum continuity equation, we substitute Eqs. (34) and
(35) into the material timelike derivative of the total momentum density
$\frac{n}{c}\frac{\partial{\bf g}_{total}}{\partial
t}=\frac{1}{\mu}\frac{n}{c}\frac{\partial n{\bf E}}{\partial t}\times{\bf
B}+n{\bf E}\times\frac{1}{\mu}\frac{n}{c}\frac{\partial{\bf B}}{\partial t}\,$
(37)
where the momentum density ${\bf g}_{total}$ is the integrand of Eq. (17). The
momentum continuity equation
$\frac{n}{c}\frac{\partial{\bf g}_{total}}{\partial t}=$
$\frac{1}{c\mu}\left[(\nabla\times n{\bf E})\times n{\bf
E}+\mu^{2}(\nabla\times{\bf H})\times{\bf H}+\frac{\nabla n}{n}(n{\bf
E})^{2}\right]$ (38)
is not expressible using the tensor continuity equation, Eq. (29), without
additional transformations.
There is a scientific record of algebraic transformations of the energy and
momentum continuity equations. Abraham BIAbr was apparently the first to
pursue this approach, deriving a variant form of the continuity equation for
the Minkowski momentum in which the time-derivative of the Minkowski momentum
is split into the temporal derivative of the Abraham momentum and a fictitious
Abraham force. Kinsler, Favaro, and McCall BIKins discuss various
transformations of Poynting’s theorem and the resulting differences in the way
various physical processes are expressed. Frias and Smolyakov BIFrias did the
same for transformations of the momentum continuity equation. It appears to
have gone unrecognized, however, that the different expression of physical
processes in the electromagnetic continuity equations has to carry over into
the Maxwell equations of motion for the macroscopic fields, Eqs. (34) and
(35). As a consequence, the energy and momentum continuity equations cannot be
separately transformed. We could, for example, commute the magnetic
permeability with the curl operator in the Maxwell–Ampère Law, Eq. (35) in
order to allow the momentum continuity law, Eq. (38) to be expressed in the
form of a continuity equation. However, the resulting energy continuity law
would run afoul of the uniqueness condition on the elements of the first row
of the energy–momentum tensor.
The energy–momentum tensor stands at the center of theoretical continuum
electrodynamics. The total energy–momentum tensor, Eq. (22), was constructed
using conservation principles and it is inarguably correct as long as the
formula for the energy, Eq. (11), is correct. However, there is a
contradiction between the continuity equations of energy and momentum for a
linear medium with a full or partial magnetic response. Again, we are
confronted with a situation of contradiction that we cannot derive our way out
of and we must justify a correction based on an analysis of the physical
model.
## IV The Linear Index of Refraction
Consider a rectangular volume of space $V=Ad$, where $A$ is the cross-
sectional area perpendicular to the direction of an incident electromagnetic
pulse of radiation in the plane-wave limit. The volume is separated into
halves by a barrier at $d/2$ and half of the volume is filled with a vapor
with refractive index $n_{1}$. The other half of the volume is filled with a
vapor with refractive index $n_{2}$. We define a material variable
$\xi_{i}=n_{i}-1$. Then the time that it takes a light pulse to traverse the
volume is $(\xi_{1}+\xi_{2})d/(2c)$ greater than the time it takes light to
travel the same distance in a vacuum. Note that reflections do not affect the
time-of-flight of the pulse, only the amount of light that is transmitted. The
time-of-flight of the light pulse does not change when the barrier is removed.
The atoms of each vapor are noninteracting and there is no physical process
that would affect the time-of-flight of the light pulse as the two species of
atoms diffuse into each other. Generalizing to an arbitrary number of
components, the linear refractive index of a mixture of vapors obeys a
superposition principle
$n=1+\sum_{i=1}^{q}\xi_{i}$ (39)
for $q$ components. If we take $\xi_{0}=1$ for the vacuum, then
$n=\sum_{i=0}^{q}\xi_{i}\,.$ (40)
As long as the atoms are noninteracting, the superposition principle holds
whether the vapors are composed of atoms that behave as electric dipoles,
magnetic dipoles, or as a mixture of both. Near-dipole–dipole interactions,
and other interactions between atoms, require a more detailed and model-
dependent treatment that is outside the scope of the current work.
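To make the time-of-flight argument explicit, the short sketch below (Python;
the column length, the values of $\xi_{1}$ and $\xi_{2}$, and $c=1$ are
illustrative assumptions) computes the excess delay through the two-compartment
column and shows that it depends only on the sum $\xi_{1}+\xi_{2}$, i.e. only on
the superposed index of Eq. (39), whether the barrier is in place or removed.

```python
def excess_delay(xis, d, c=1.0):
    """Extra traversal time, relative to vacuum, of a pulse crossing a column of
    length d in which each listed material variable xi_i = n_i - 1 fills an
    equal share of the column."""
    share = d / len(xis)
    return sum(xi * share for xi in xis) / c

d = 1.0
xi1, xi2 = 0.3, 0.5                                   # illustrative values of n_i - 1
separated = excess_delay([xi1, xi2], d)               # barrier in place: two half-columns
mixed = excess_delay([(xi1 + xi2) / 2] * 2, d)        # barrier removed: each vapor dilutes over the full column
print(separated, mixed)                               # both equal (xi1 + xi2) d / (2 c)
```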
Now that the linear refractive index is truly linear, we can regard the
permittivity and permeability as anachronisms. The refractive index carries
the material effects and we can eliminate the permeability and permittivity
from the equations of motion for the macroscopic fields, Eqs. (34) and (35),
such that
$\frac{n}{c}\frac{\partial{\bf B}}{\partial
t}=\nabla\times{\bf\Pi}-\frac{\nabla n}{n}\times{\bf\Pi}$ (41)
$\frac{n}{c}\frac{\partial{\bf\Pi}}{\partial t}=-\nabla\times{\bf B}$ (42)
where ${\bf\Pi}=-n{\bf E}$. The equations of motion for the macroscopic fields,
Eqs. (41) and (42), can be combined in the usual manner to write an energy
continuity equation
$\frac{n}{c}\frac{\partial}{\partial
t}\left[\frac{1}{2}\left({\bf\Pi}^{2}+{\bf
B}^{2}\right)\right]+\nabla\cdot\left({\bf
B}\times{\bf\Pi}\right)=\frac{\nabla n}{n}\cdot({\bf B}\times{\bf\Pi})$ (43)
in terms of a total energy density
$\rho_{total}=\frac{1}{2}\left({\bf\Pi}^{2}+{\bf B}^{2}\right)\,,$ (44)
a total momentum density
${\bf g}_{total}=\frac{{\bf B}\times{\bf\Pi}}{c}\,,$ (45)
and a power density
$p_{total}=\frac{\nabla n}{n}\cdot({\bf B}\times{\bf\Pi})\,.$ (46)
Substituting Eqs. (41) and (42) into the material timelike derivative of the
total momentum density, Eq. (45), the momentum continuity equation becomes
$\frac{n}{c}\frac{\partial{\bf g}_{total}}{\partial
t}+\frac{1}{c}\nabla\cdot{\bf W}=-\frac{\nabla n}{nc}(n{\bf E})^{2}$ (47)
where the Maxwell stress-tensor $W$ is
$W_{ij}=-\Pi_{i}\Pi_{j}-B_{i}B_{j}+\frac{1}{2}({\bf\Pi}^{2}+{\bf
B}^{2})\delta_{ij}\,.$ (48)
We can construct the total energy–momentum tensor
$T^{\alpha\beta}=\left[\begin{matrix}({\bf\Pi}^{2}+{\bf B}^{2})/2&({\bf
B}\times{\bf\Pi})_{x}&({\bf B}\times{\bf\Pi})_{y}&({\bf
B}\times{\bf\Pi})_{z}\cr({\bf
B}\times{\bf\Pi})_{x}&W_{11}&W_{12}&W_{13}\cr({\bf
B}\times{\bf\Pi})_{y}&W_{21}&W_{22}&W_{23}\cr({\bf
B}\times{\bf\Pi})_{z}&W_{31}&W_{32}&W_{33}\cr\end{matrix}\right]$ (49)
from the homogeneous part of the new electromagnetic continuity equations,
Eqs. (43) and (47). The total energy–momentum tensor, Eq. (49), is entirely
electromagnetic in character BIyyy and there is no need for a supplemental
dust energy–momentum tensor for the movement of the material BIPfei . The
Maxwell–Ampère Law can be written in terms of the vector potential as a wave
equation
$\nabla\times(\nabla\times{\bf A})+\frac{n^{2}}{c^{2}}\frac{\partial^{2}{\bf
A}}{\partial t^{2}}=0\,,$ (50)
without the inhomogeneous part found in Eq. (7). We also restate the
electromagnetic, or total, energy, Eq. (11),
$U_{e}=\int_{\sigma}\rho_{e}dv=\int_{\sigma}\frac{1}{2}\left(n^{2}{\bf
E}^{2}+{\bf B}^{2}\right)dv\,,$ (51)
and the total, or Gordon, momentum, Eq. (17),
${\bf G}_{total}=\int_{\sigma}\frac{n{\bf E}\times{\bf B}}{c}dv\,.$ (52)
The $\\{{\bf E},{\bf D},{\bf B},{\bf H}\\}$ paradigm of classical
continuum electrodynamics is altogether broken. That much was obvious from the
conservation of the Gordon form of momentum in Refs. BIGord and BICB , also
in Eq. (17). The Abraham and Minkowski momenta are expressible solely in
terms of the $\\{{\bf E},{\bf D},{\bf B},{\bf H}\\}$ fields, while the
Gordon momentum is not. This is likely a contributing factor to the longevity
of the Abraham–Minkowski controversy. The Gordon momentum and the macroscopic
equations of motion depend on the fields ${\bf\Pi}$ and ${\bf B}$.
Consequently, continuum electrodynamics can be reformulated in terms of a
single pair of fields $\\{{\bf\Pi},{\bf B}\\}$ and a single field tensor BIxxx
$F^{\alpha\beta}=\left[\begin{matrix}0&\Pi_{x}&\Pi_{y}&\Pi_{z}\cr-\Pi_{x}&0&-B_{z}&B_{y}\cr-\Pi_{y}&B_{z}&0&-B_{x}\cr-\Pi_{z}&-B_{y}&B_{x}&0\cr\end{matrix}\right]\,.$
(53)
The reduction to a single pair of fields and a single field tensor is an
exquisite simplification of continuum electrodynamics.
The changes to the theoretical treatment of electromagnetic fields in linear
media that have been presented here are stunning, and it is customary to
propose experiments that would validate such unconventional theoretical results.
However, the existing experimental record related to the Abraham–Minkowski
controversy is an indication of serious technical and conceptual difficulties
in the measurement of continuum electrodynamic phenomena related to
macroscopic fields inside materials, fields that cannot be measured directly.
On the other hand, the dilation of time $t^{\prime}=t/n$ in a linear medium is
relatively easy to measure. An open optical cavity, or free-space etalon, is a
clock that ticks once for every round-trip of a light pulse and that ticks
more slowly when it is immersed in a dielectric fluid.
## V Conclusion
The Abraham–Minkowski momentum controversy has been treated as an ignorable
curiosity of classical continuum electrodynamics for a very long time.
However, we should never close our minds to opportunities to gain a more
complete understanding of the physical world. The Abraham–Minkowski
controversy is readily resolved by conservation of the total momentum of a
system consisting of a quasimonochromatic pulse of radiation passing through a
stationary simple linear medium with a gradient-index antireflection coating.
Unlike previous resolutions of the Abraham–Minkowski momentum dilemma, this
one is well-defined because it is based on exact conservation laws in a
thermodynamically closed system with complete equations of motion.
## Appendix A Equations of Motion for Macroscopic Fields
In Ref. BIxxx , we considered a simple linear medium in which the speed of
light is $c/n$. The generalization of the Lagrange equations for fields in a
linear medium is
$\frac{d}{d\bar{x}_{0}}\frac{\partial{\cal L}}{\partial(\partial
A_{j}/\partial\bar{x}_{0})}=\frac{\partial{\cal L}}{\partial
A_{j}}-\sum_{i}\partial_{i}\frac{\partial{\cal
L}}{\partial(\partial_{i}A_{j})},$ (54)
where $\bar{x}_{0}=ct/n$ is the time-like coordinate in the dielectric and
$x_{1}$, $x_{2}$, and $x_{3}$ correspond to the respective $x$, $y$ and $z$
coordinates. The conjugate momentum field
$\Pi_{j}=\frac{\partial{\cal L}}{\partial(\partial
A_{j}/\partial\bar{x}_{0})}$ (55)
is used to construct the Hamiltonian density
${\cal H}=\sum_{j}\Pi_{j}\frac{\partial A_{j}}{\partial\bar{x}_{0}}-{\cal L}$
(56)
from which Hamilton’s equations of motion
$\frac{\partial A_{j}}{\partial\bar{x}_{0}}=\frac{\partial{\cal
H}}{\partial\Pi_{j}}$ (57)
$\frac{\partial\Pi_{j}}{\partial\bar{x}_{0}}=-\frac{\partial{\cal H}}{\partial
A_{j}}+\sum_{i}\partial_{i}\frac{\partial{\cal
H}}{\partial(\partial_{i}A_{j})}$ (58)
are derived.
We take the Lagrangian density of the electromagnetic field in the medium to
be
${\cal L}=\frac{1}{2}\left(\frac{1}{\mu}\left(\frac{\partial{\bf
A}}{\partial\bar{x}_{0}}\right)^{2}-\frac{(\nabla\times{\bf
A})^{2}}{\mu}\right)$ (59)
in the absence of charges. Applying Eq. (55), the momentum field
${\bf\Pi}=\frac{1}{\mu}\frac{\partial{\bf A}}{\partial\bar{x}_{0}}$ (60)
is used to construct the Hamiltonian density
${\cal H}=\frac{1}{2}\left(\mu{\bf\Pi}^{2}+\frac{(\nabla\times{\bf
A})^{2}}{\mu}\right).$ (61)
Hamilton’s equations of motion in the linear medium
$\frac{\partial{\bf A}}{\partial\bar{x}_{0}}=\mu{\bf\Pi}$ (62)
$\frac{\partial{\bf\Pi}}{\partial\bar{x}_{0}}=-\nabla\times\frac{\nabla\times{\bf
A}}{\mu}$ (63)
are obtained from the Hamiltonian density, Eq. (61), using Eqs. (57) and (58).
Next, we use Eqs. (5) and (6) to eliminate the vector potential in favor of
the macroscopic electric and magnetic fields. From Eq. (62), we have
${\bf\Pi}=-n{\bf E}/\mu$. Making this substitution for ${\bf\Pi}$, we have
equations of motion for the macroscopic fields
$\frac{n}{c}\frac{\partial{\bf B}}{\partial t}=-\nabla\times n{\bf
E}+\frac{\nabla n}{n}\times n{\bf E}$ (64) $\mu\frac{n}{c}\frac{\partial n{\bf
E}}{\partial t}=\mu\nabla\times{\bf H}=\nabla\times{\bf
B}-\frac{\nabla\mu}{\mu}\times{\bf B}.$ (65)
The equations of motion for the macroscopic fields, Eqs. (64) and (65),
confirm the results, Eqs. (34) and (35), that were obtained from the
energy–momentum tensor through the material four-divergence and continuity
equations.
## References
* (1) R. N. C. Pfeifer, T. A. Nieminen, N. R. Heckenberg, and H. Rubinsztein-Dunlop, Rev. Mod. Phys. 79, 1197–1216 (2007).
* (2) P. W. Milonni and R. W. Boyd, Adv. Opt. Photon. 2, 519–553 (2010).
* (3) B. A. Kemp, J. Appl. Phys. 109, 111101 (2011).
* (4) C. Baxter and R. Loudon, J. Mod. Opt. 57, 830–842 (2010).
* (5) S. M. Barnett and R. Loudon, Phil. Trans. R. Soc. A 368, 927–939 (2010).
* (6) H. Minkowski, Nachr. Ges. Wiss. Göttingen 53 (1908).
* (7) M. Abraham, Rend. Circ. Mat. Palermo 28, 1 (1909).
* (8) A. Einstein and J. Laub, Annalen der Physik 331, 532–540 (1908).
* (9) R. Peierls, Proc. R. Soc. Lond. A 347, 475-491 (1976).
* (10) M. Kranys, Int. J. Engng. Sci. 20, 1193-1213 (1982).
* (11) P. Penfield, Jr. and H. A. Haus, Electrodynamics of Moving Media (MIT Press, 1967).
* (12) Z. Mikura, Phys. Rev. A 13, 2265–2275 (1976).
* (13) S. M. Barnett, Phys. Rev. Lett. 104, 070401 (2010).
* (14) J. P. Gordon, Phys. Rev. A 8, 14-21 (1973).
* (15) M. E. Crenshaw and T. B. Bahder, Opt. Commun. 284, 2460-2465 (2011).
* (16) M. E. Crenshaw and T. B. Bahder, Opt. Commun. 285, 5180–5183 (2012).
* (17) M. E. Crenshaw, Proc. SPIE 8458, Optical Trapping and Optical Micromanipulation IX, 845804 (2012).
* (18) F. Ravndal, arxiv:0804.4013v3 (2008).
* (19) M. E. Crenshaw, arxiv:1303.1412 (2013).
* (20) W. Rindler, Introduction to Special Relativity (Oxford, New York, 1982).
* (21) L. D. Landau and E. M. Lifshitz, Electrodynamics of Continuous Media (Addison-Wesley, Reading, 1960).
* (22) P. Kinsler, A. Favaro, and M. W. McCall, European J. Phys., 30, 983 (2009).
* (23) W. Frias and A. I. Smolyakov, Phys. Rev. E 85, 046606 (2012).
* (24) M. E. Crenshaw, arxiv:1303.4980 (2013).
|
arxiv-papers
| 2013-02-26T16:54:57 |
2024-09-04T02:49:42.157091
|
{
"license": "Public Domain",
"authors": "Michael E. Crenshaw",
"submitter": "Michael Crenshaw",
"url": "https://arxiv.org/abs/1302.6492"
}
|
1302.6541
|
# Ultra-short, off-resonant, strong excitation of two-level systems
Pankaj K. Jha1,111Email: [email protected], Hichem Eleuch2,3, Fabio
Grazioso3 1Department of Physics and Astronomy, Texas A&M University, College
Station, Texas 77843, USA
2Ecole Polytechnique, C.P. 6079, Succ. Center Ville, Montréal(QC), H3C 3A7,
Canada
3DIRO, Université de Montréal, H3T 1J4, Montréal, Canada
###### Abstract
We present a model describing the use of ultra-short strong pulses to populate
the excited level of a two-level quantum system. In particular, we study an
off-resonance excitation with a few-cycle pulse which presents a _smooth
phase jump_ , i.e. a change of the pulse’s phase which is not step-like, but
happens over a finite time interval. A numerical solution is given for the
time-dependent probability amplitude of the excited level. The enhancement of
the excited level’s population is optimized with respect to the shape of the
phase transient, and to other parameters of the excitation pulse.
###### pacs:
42.65.Re 32.80.Qk 42.50.-p 03.65.-w
## I Introduction
Ultra-strong pulses with intensities of the order of $10^{15}$W/cm2, and
duration of the order of attoseconds, with just few optical cycles, are
feasible with present day technology (see e.g. H1 ; H2 ; Goulielmakis-04 ;
Corkum-07 ; Tsubouchi-08 ). This technological development has been motivated
by the large number of possible applications, several of which rely on
coherent population transfer techniques. A partial list of such applications
is: stimulated Raman adiabatic passage (STIRAP) Garcia-05 ; Zhang-11 ;
Grigoryan-12 ; Nakamura-13 , adiabatic rapid passage (ARP) Jiang-13 , Raman
chirped adiabatic passage (RCAP) Chang-01 ; Yang-07 , temporal coherent
control (TCC) Brabec-00 ; Li-13 , coherent population trapping Harris-97 ;
Issler-10 , optical control of chemical reactions Yang-10 ; Cerullo-12 ,
electromagnetically induced transparency (EIT) Harris-97 ; el1 ; el2 ; el3 ;
Abdumalikov-10 , efficient generation of XUV radiation H4 ; TLA2 ; TLA3 ;
PKJha12 , coherent Raman umklapp scattering CRU , breakdown of dipole blockade
obtained by driving atoms with phase-jump pulses V3 . Moreover, recently two
schemes for efficient and fast coherent population transfer have been
presented Kumar-12 , which use chirped and non-chirped few-cycles laser
pulses. Another recent application Xiang-12 presents high-order harmonic
generation obtained with laser pulses with a $\pi$-phase jump. Finally, the
field of quantum information processing benefits from these results, since
many qubit realizations rely on precise quantum levels manipulation Steane-98
; Lee-04 ; Campbell-10 ; Kim-11 .
Figure 1: The three functional shapes of the smooth phase jumps used: the red
is a dropping hyperbolic tangent: $\phi(t)=(\pi/2)[1-\tanh(5\alpha t)]$, the
blue is a rising hyperbolic tangent: $\phi(t)=(\pi/2)[1+\tanh(5\alpha t)]$,
and the black one is a hyperbolic secant:
$\phi(t)=(\pi/2)\operatorname{sech}(\alpha t)$. In all the simulations the
numerical normalized value is $\alpha=0.265$
The presence of few optical cycles in the pulse gives a constant phase
difference between the carrier wave and the pulse shaped envelope H3 , in
contrast with many-cycle pulses V1 ; V2 ; V3 . Moreover, optimizing the pulse
parameters has been shown to enhance the excited-state population R1 or to
optimize coherence in two-level systems (TLSs) R2 . In previous works we have already
presented an analytical solution for the dynamics of a TLS excited with pulses
of arbitrary shape and polarization H5 ; H51 . Since in the model we present
the change rate of levels’ populations within a single optical cycle is not
negligible, the rotating-wave approximation can’t be used. In other words in
the present model we can’t neglect the contribution of the counter-rotating
terms in the Hamiltonian H5 ; H51 .
In a previous work PrevPRA we have presented a similar model, representing
the interaction of a TLS with few-cycle pulses, where at time $t=t_{0}$ the
phase of the carrier wave jumps by an amount $\phi$, this jump being sharp and
step-like. In that work, numerical analysis of the analytic model led to
an enhancement by a factor of $10^{6}-10^{8}$ in the population transfer, with
the optimal phase jump of $\phi=\pi$ and the optimal time coincident with the
peak of the envelope. In the present work we improve that model, considering a
_smooth phase change_ , i.e. not step-like but happening over a finite
interval of time. This new model more closely describes a realistic
experimental scenario.
The pulse is characterized by: Rabi frequency $\Omega_{0}$, pulse width
$\tau$, carrier frequency $\nu$, phase jump amplitude $\phi$, phase jump time
$t_{0}$ and phase jump duration $\Delta t$. Moreover, we consider two
_qualitative parameters_ : the phase jump shape, and the pulse envelope shape.
We present an analytical solution for the time evolution of the excited
state’s population, together with a numerical simulation. In the numerical
simulation we use three functional shapes for the smooth phase jump: rising
hyperbolic tangent, dropping hyperbolic tangent, and a hyperbolic-secant peak
(see figure 4), whereas for the envelope a Gaussian shape has been used.
Numerically optimizing the pulse parameters we have obtained enhancements of
the population transfer of the order of $10^{4}$.
Figure 2: Example of an excitation field with few oscillations and two
different phases, thick solid (cosine) and dashed (sine) lines, encompassed by a
pulsed envelope (thick dashed line). Here the field is written in the form
E$(t)=\mathcal{E}(t)\cos(\nu t+\phi_{0})$ where $\phi_{0}=0$ for cosine and
$\phi_{0}=-\pi/2$ for sine pulse.
### I.1 NUMERICAL SIMULATION AND DISCUSSION
Let $|a\rangle$ and $|b\rangle$ be the states of a two-level atom (TLA), with
energy difference $\hbar\omega$, and atomic dipole moment $\wp$. If we let
this system interact with a classical field $E(t)={\cal E}(t)\mbox{cos}\nu t$,
the equations of motion for the relative wavefunctions are BK1 :
$\displaystyle\dot{C}_{a}$ $\displaystyle=i\frac{\wp{\cal
E}(t)}{\hbar}\mbox{cos}(\nu t)e^{i\omega t}C_{b},$ (1a)
$\displaystyle\dot{C}_{b}$ $\displaystyle=i\frac{\wp^{*}{\cal
E}(t)}{\hbar}\mbox{cos}(\nu t)e^{-i\omega t}C_{a},$ (1b)
Figure 3: Population left on the upper level $|a\rangle$ (b,d,f) as a function
of the ratio of the carrier frequency ($\nu$) of the excitation pulse to the
atomic transition frequency ($\omega_{c}$), in the long time limit $t\gg\tau$,
for the corresponding phase jump function (a,c,e). Here the dashed line is the
numerical simulation of Eq.(1) and the solid line is the approximate solution
given by Eq.(6). For the excitation pulse we have used the form
$\Omega_{0}(t)=Ae^{-\alpha^{2}t^{2}}e^{i\phi(t)}$, and the phase functions
have the following forms: (a) $\phi(t)=(\pi/2)\operatorname{sech}[\alpha t]$,
(c) $\phi(t)=(\pi/2)(\operatorname{sech}[10\alpha t]+(1+\tanh[\alpha t]))$, (e)
$\phi(t)=(\pi/2)(\operatorname{sech}[\alpha t]+(1-\tanh[10\alpha t]))$. For the
numerical simulations we chose $A=0.035\omega$, $\alpha=0.265\gamma$ and
$\gamma=1.25\omega$, where $\omega=(2\pi)$ 80 GHz.
Figure 4: In this figure we present the results of the numerical analysis.
Each of the three rows of plots refers to a different functional shape of the
smooth phase jump (phase change function). For each row we have a plot of the
smooth phase jump, a plot of the excited state’s population as function of
time, and a plot of the excited state’s population left after the pulse is
gone as a function of the normalized excitation frequency. Similarly to figure
3, the functional shape of the excitation pulse is
$\Omega_{0}(t)=Ae^{-\alpha^{2}t^{2}}e^{i\phi(t)}$. Moreover, for the plots of
the excited state’s population as function of time we have used the numerical
value of $\nu/\omega=0.75$. Phase change of the form (a)
$\phi(t)=(\pi/2)[1+\tanh(\alpha_{1}t)]$, with three different values of
$\alpha_{1}$: red (steeper) $\alpha_{1}=5\alpha$; blue (in-between)
$\alpha_{1}=\alpha$; black (smoother) $\alpha_{1}=0.5\alpha$. (b)
Corresponding behavior of the excited level population $|C_{a}(t)|$. (c)
Asymptotic value of the excited state population, as a function of the
“resonance ratio” (excitation’s frequency divided by transition’s frequency)
for this form of the phase change. (d) Phase change of the form
$\phi(t)=(\pi/2)[1-\tanh(\alpha_{1}t)]$, with (as in (a)) three different
values of $\alpha_{1}$: red (steeper) $\alpha_{1}=5\alpha$; blue (in-between)
$\alpha_{1}=\alpha$; black (smoother) $\alpha_{1}=0.5\alpha$. (e)
Corresponding behaviour of the excited level population $|C_{a}(t)|$. (f)
Asymptotic excited population for this form of the phase change. (g) Phase
change of the form: $\phi(t)=(\pi/2)\operatorname{sech}^{2}(\alpha_{1}t)$,
$\alpha_{1}$: red (larger) $\alpha_{1}=\alpha$; blue (in-between)
$\alpha_{1}=10\alpha$; black (narrower) $\alpha_{1}=20\alpha$. (h)
Corresponding behaviour of the excited level population $|C_{a}(t)|$. (i)
Asymptotic excited population for this form of the phase change. The value
used for $\alpha$ is $\alpha=0.265$. For numerical simulations we chose
$A=0.04375\omega$, $\alpha=0.265\gamma$ and $\gamma=1.25\omega$ where
$\omega=(2\pi)$ 80 GHz.
where $\Delta=\omega-\nu$ is the detuning from resonance. Similarly to PrevPRA
, defining $f(t)=C_{a}(t)/C_{b}(t)$ and $\Omega(t)=\wp{\cal E}(t)/\hbar$, we
have the following Riccati equation:
$\dot{f}+i\Omega^{*}(t)\text{cos}(\nu t)e^{-i\omega
t}f^{2}-i\Omega(t)\text{cos}(\nu t)e^{i\omega t}=0.$ (2)
The approximate solution for Eq. (2), in terms of the tip angle $\theta$ is
given as in H5
$\begin{split}f(t)&=i\int_{-\infty}^{t}dt^{\prime}\left\\{\left[\frac{d\theta(t^{\prime})}{dt^{\prime}}-\theta^{2}(t^{\prime})\frac{d\theta^{*}(t^{\prime})}{dt^{\prime}}\right]\right.\\\
&\left.\times\exp\left[2\int_{t^{\prime}}^{t}\theta(t^{\prime\prime})\dot{\theta}^{*}(t^{\prime\prime})dt^{\prime\prime}\right]\right\\},\end{split}$
(3)
where the tip angle $\theta(t)$ has been defined as
$\theta(t)=\int_{-\infty}^{t}\Omega(t^{\prime})\text{cos}(\nu
t^{\prime})e^{i\omega t^{\prime}}dt^{\prime}$ (4)
from which we have $|C_{a}(t)|=|f(t)|/\sqrt{1+|f(t)|^{2}}$. What is of
interest is the asymptotic behavior of $|C_{a}(\infty)|$. In PrevPRA , good
agreement is shown between the analytical solution and a numerical simulation. To introduce
the phase jump, we can write the Rabi frequency as
$\Omega(t)=\Omega_{0}(t)\cos\nu t\ e^{i\omega t}e^{i\phi(t)}$ (5)
and then, using the same method as in PrevPRA , we can obtain an approximated
analytic solution for the Riccati equation (2):
$f(t)=i\int_{-\infty}^{t}dt^{\prime}\Phi(t^{\prime})\text{exp}\left[2\int_{t^{\prime}}^{t}\zeta(t^{\prime\prime})dt^{\prime\prime}\right],$
(6)
The approximate analytical solution is in good agreement with the numerical
simulation obtained by directly solving the coupled differential Eq.(1). From
Fig.(3) we see that even for complex phase function the agreement is good. For
the sake of completeness, we have added an appendix in which we show the
strength of this approach beyond standard TLS. Indeed the Riccati equation
approach gives a closed compact form from which both the temporal and steady-
state behavior of the two- and three-level system can be obtained.
An interesting observation is that it is possible to rewrite the Rabi
frequency in (5) as
$\Omega(t)=\Omega_{0}(t)\cos\nu te^{i[\omega t+\phi(t)]}$ (7)
and then define $\tilde{\omega}(t)=\omega+\phi(t)/t$ and interpret this as a
modulation of the atomic frequency, instead of a modulation of the excitation.
Experimentally this can be realized in several ways, e.g. using modulated
Zeeman or Stark effect.
Now we move to discuss our numerical simulation of the dynamics of the two-
level atom interacting with an ultra-short, off-resonant pulse with a gradually
changing phase $\phi(t)$. We have performed a numerical solution of the Riccati equation,
using different types of phase change (smooth phase jump) functions. The
result of this numerical analysis is shown in figure 4. The goal of this study
is to find the best phase change which allows for the best coupling (most
efficient energy exchange) of the excitation pulse with the excited state.
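As an illustration of this procedure, here is a minimal sketch of a direct
numerical integration of Eqs. (1a)–(1b) (Python/SciPy, in dimensionless units
with $\omega=1$). The envelope and phase parameters follow the values quoted in
the caption of figure 4; the choice of the steeper value $\alpha_{1}=5\alpha$,
the integration window, and the solver tolerances are assumptions of the
sketch.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Dimensionless units: frequencies in units of the atomic transition frequency
# omega, time in units of 1/omega (an assumption of this sketch).
omega  = 1.0
gamma  = 1.25 * omega
alpha  = 0.265 * gamma           # Gaussian-envelope parameter (caption of figure 4)
A      = 0.04375 * omega         # peak Rabi frequency (caption of figure 4)
nu     = 0.75 * omega            # carrier frequency, nu/omega = 0.75
alpha1 = 5.0 * alpha             # steepness of the smooth phase jump (assumed: red curve)

def phi(t):
    # dropping hyperbolic-tangent phase jump, as in figure 4(d)
    return (np.pi / 2.0) * (1.0 - np.tanh(alpha1 * t))

def rabi(t):
    # Omega_0(t) = A exp(-alpha^2 t^2) exp(i phi(t))
    return A * np.exp(-(alpha * t) ** 2) * np.exp(1j * phi(t))

def rhs(t, y):
    # Eqs. (1a)-(1b), kept without the rotating-wave approximation
    Ca, Cb = y
    drive = rabi(t) * np.cos(nu * t)
    dCa = 1j * drive * np.exp(1j * omega * t) * Cb
    dCb = 1j * np.conj(drive) * np.exp(-1j * omega * t) * Ca
    return [dCa, dCb]

y0 = [0.0 + 0.0j, 1.0 + 0.0j]     # atom initially in the ground state |b>
sol = solve_ivp(rhs, (-40.0, 40.0), y0, rtol=1e-9, atol=1e-12)

Ca, Cb = sol.y[0, -1], sol.y[1, -1]
print(f"|C_a| left after the pulse : {abs(Ca):.4e}")
print(f"|C_a|^2 + |C_b|^2          : {abs(Ca)**2 + abs(Cb)**2:.6f}")
```

Scanning `nu` and the three phase shapes with this kind of loop is how the
frequency-dependence plots of figure 4 can be reproduced.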
In figure 4 we present the results of the numerical analysis. Each of the
three rows of plots refers to a different functional shape of the smooth
phase jump (phase change function). For each row we have a plot of the smooth
phase jump, a plot of the excited state’s population as function of time, and
a plot of the excited state’s population left after the application of the
pulse as a function of the normalized excitation frequency. Similarly to
figure 3, the functional form of the excitation pulse is
$\Omega_{0}(t)=Ae^{-\alpha^{2}t^{2}}e^{i\phi(t)}$. Moreover, for the plots
of the excited state’s population as a function of time we have used the
numerical value of $\nu/\omega=0.75$.
For this numerical simulation we have considered the following three phase
functions (a)(b)(c): $\phi(t)=(\pi/2)[1+\tanh(\alpha t)]$, (d)(e)(f):
$\phi(t)=(\pi/2)[1-\tanh(\alpha t)]$ and (g)(h)(i)
$\phi(t)=(\pi/2)\operatorname{sech}^{2}(\alpha t)$. We can see how the phase
change duration $\Delta t$, i.e. the steepness of the $\phi(t)$ function, does
not have a unique effect on the excited population, but depends on the general
shape of the phase change. In particular, it is worth noting that for
ascending and descending phase changes built on the $\tanh(t)$ function the
effect of the steepness is opposite. We can observe a global behaviour which
relates the characterizing parameters of the phase change with the amplitude
of the population of the excited state. Qualitatively, for the ascending
hyperbolic tangent we observe that increasing the slope increases the
population. On the other hand, for the descending hyperbolic tangent the effect
of this parameter is reversed: decreasing the slope of the phase change leads to a
decrease of the population. We remark that these behaviors are only global,
and are reversed for some small ranges of frequencies. As an example, in plot
4.(f), for low ranges of laser frequencies, by decreasing the slope we
increase the population, which is opposite of the behavior observed for higher
frequencies. For the peaked shape ($\operatorname{sech}(t)$), no general
behaviors are observed. However, for the intermediate range of frequencies a
link can be observed between an increase of the pulse width and an increase of
the population.
## II conclusion
To conclude, here we report our analytical and numerical results to show the
effect of smooth phase jump on the dynamics (both transient and steady-state).
We observed that the temporal profile of the phase jump function $\phi(t)$ has
a profound effect on the excited state population of $|a\rangle$. The two-level
system considered here can be the Zeeman sublevels, and the ultra-short (few- to
multi-cycle) pulse would be in the radio-frequency regime which has been reported
in h11 ; JhaCEP1 ; JhaCEP2 . For an optimized phase function (within the set of
parameters considered here), we were able to observe an enhancement of
$10^{4}$ in the population transfer. Such enhancement is seen in Fig. 4(f)
(blue curve) which takes the value of $|C_{\infty}|=0.002251$ at
$\nu/\omega=0.5$ and phase function is $\phi(t)=(\pi/2)[1-\tanh(\alpha t)]$
where $\alpha=0.265$. When $\alpha_{1}=5\alpha=1.325$ the excited state
population is enhanced by a factor of $\sim 2\times 10^{2}$. Similar enhancement
is also observed for the phase function $\phi(t)=(\pi/2)[1+\tanh(\alpha t)]$
at frequency $\nu/\omega\sim 0.9$ (see Fig. 4c). We can not only enhance the
excitation but, for the same phase function and another choice of the parameter
$\alpha$, also suppress it. For example, at near-resonant excitation
(see Fig. 4(f)), the excited state population can be suppressed by $\sim
15$-fold when $\alpha_{1}=1.325$ (red curve) in comparison to
$\alpha_{1}=0.5\alpha$ (black curve). Such control over excited state dynamics
using a smooth phase jump as an external parameter can be useful in microwave-
controlled Raman Jha13 ; JhaAPL12 and EIT with superstructures JhaAPB , to name a
few, while recent proposals on the coherence-enhanced
spaser Dorfman13 and propagating surface plasmon polaritons JhaAPL13 are also
viable areas in which to explore phase effects. The approximate analytical solutions are
in excellent agreement for both the delta-function phase jump PrevPRA and the smooth phase jump
considered here. We also extended this approach beyond the two-level atom to a
three-level atom in the lambda configuration.
## III Acknowledgement
We acknowledge fruitful discussions with Yuri Rostovtsev. P. K. Jha
acknowledges Herman F. Heep and Minnie Belle Heep Texas A&M University Endowed
Fund held and administered by the Texas A&M Foundation and Robert A. Welch
Foundation for financial support.
## Appendix A Analytical solution for three-level atom
The motivation to add an appendix on the approximate analytical solution for a
three-level system in the lambda configuration is to illustrate the strength of the
method used to find the solution for the two-level atom with and without phase
jumps. For the sake of simplicity we will consider a constant phase $\phi$. Let
us consider a three-level atom (ThLA) in the $\Lambda$ configuration [see Fig. 5
inset]. The transition $a\leftrightarrow c$ is driven by the field
$\Omega_{2}$, while the field $\Omega_{1}$ couples the $a\leftrightarrow b$
transition. For the time scale considered in this problem, we have neglected
any decays (radiative and non-radiative). The equation of motion for the
probability amplitudes for the states $|a\rangle$, $|b\rangle$ and $|c\rangle$
of the ThLA can be written as
$\dot{C}_{a}(t)=i\tilde{\Omega}_{1}(t)C_{b}(t)+i\tilde{\Omega}_{2}(t)C_{c}(t)$
(8) $\dot{C}_{b}(t)=i\tilde{\Omega}_{1}^{\ast}(t)C_{a}(t)$ (9)
$\dot{C}_{c}(t)=i\tilde{\Omega}_{2}^{\ast}(t)C_{a}(t)$ (10)
where $\tilde{\Omega}_{j}(t)$ is defined as the effective Rabi frequencies
$\tilde{\Omega}_{j}(t)=\Omega_{j}(t)\cos(\nu_{j}t)e^{i\omega_{j}t};\qquad
j=1,2$ (11)
To solve for $C_{a}(t)$ and $C_{c}(t)$ let us define
$f(t)=\frac{C_{a}(t)}{C_{b}(t)},\,\,\,g(t)=\frac{C_{c}(t)}{C_{b}(t)}$ (12)
In terms of $f(t)$ and $g(t)$, Eqs.(8, 9, 10) reduce to
$\dot{f}(t)+i\tilde{\Omega}_{1}^{\ast}f^{2}(t)=i\tilde{\Omega}_{1}+i\tilde{\Omega}_{2}g(t)$
(13)
$\dot{g}(t)+i\tilde{\Omega}_{1}^{\ast}f(t)g(t)=i\tilde{\Omega}_{2}^{\ast}f(t)$
(14)
Figure 5: Numerical (red dotted line) and analytical (blue solid line)
solutions of the amplitude of the state $|a\rangle$ after long time as a
function of $\nu/\omega$ for the laser pulse envelopes
$\Omega_{1}(t)=\Omega_{2}(t)=\Omega_{0}\operatorname{sech}(\alpha t)$. For the
numerical simulation we chose
$\Omega_{0}=0.04\omega,\alpha=0.075\omega,\omega_{ab}=\omega_{ac}=\omega=1$.
In order to solve these equations we extended the method developed in H5 ; H51 .
By neglecting the non-linear term $f^{2}(t)$ and the term $\propto g(t)$ in
Eq.(13) we can solve for $f_{1}(t)$ as
$f_{1}(t)=i\int_{-\infty}^{t}\tilde{\Omega}_{1}dt^{\prime}$ (15)
Similarly by neglecting the term $\propto g(t)$ in Eq.(14) we can solve for
$g_{1}(t)$ as
$g_{1}(t)=-\int_{-\infty}^{t}\tilde{\Omega}_{2}^{\ast}(t^{\prime})\theta_{1}(t^{\prime})dt^{\prime}$
(16)
where the tip angle $\theta_{1}(t)$ is defined as
$\theta_{1}(t)=\int_{-\infty}^{t}\tilde{\Omega}_{1}(t^{\prime})dt^{\prime}$
(17)
Next let us write the non-linear term in Eq.(13) as
$f^{2}(t)=\left[f(t)-f_{1}(t)\right]^{2}+2f(t)f_{1}(t)-f_{1}^{2}(t)$ (18)
Then Eq.(13) can be written as
$\begin{split}\dot{f}(t)+i\tilde{\Omega}_{1}^{\ast}(t)\\{[f(t)-f_{1}(t)]^{2}+2f(t)f_{1}(t)-f_{1}^{2}(t)\\}\\\
=i\tilde{\Omega}_{1}(t)+i\tilde{\Omega}_{2}(t)g(t)\end{split}$ (19)
Let us assume that $g(t)\approx g_{1}(t)$ and neglect
$[f(t)-f_{1}(t)]^{2}$ H5 ; in this case we can write Eq.(19) in terms of the tip
angles $\theta_{1}(t)$ and $\theta_{2}(t)$
$\dot{f}(t)+i\dot{\theta}_{1}^{\ast}(t)\left\\{2f(t)f_{1}(t)-f_{1}^{2}(t)\right\\}=i\dot{\theta}_{1}(t)+i\dot{\theta}_{2}(t)g_{1}(t)$
(20)
where
$\theta_{2}(t)=\int_{-\infty}^{t}\tilde{\Omega}_{2}(t^{\prime})dt^{\prime},$
(21)
The analytical solution of the equation Eq.(20) is then:
$f(t)=e^{-a(t)}\int_{t_{0}}^{t}b(t^{\prime})e^{a(t^{\prime})}dt^{\prime}$ (22)
where
$a(t)=2i\int_{t_{0}}^{t}\dot{\theta}_{1}^{\ast}(t^{\prime})f_{1}(t^{\prime})dt^{\prime}$ (23)
and
$b(t)=i\dot{\theta}_{1}(t)+i\dot{\theta}_{2}(t)g_{1}(t)+i\dot{\theta}_{1}^{\ast}(t)f_{1}^{2}(t)$
(24)
For $g(t)$ the solution can be obtained from Eq.(14), where we use $f(t)\approx
f_{1}(t)$:
$\dot{g}(t)+i\dot{\theta}_{1}^{\ast}(t)f_{1}(t)g(t)=i\dot{\theta}_{2}^{\ast}(t)f_{1}(t)$
(25)
which gives us
$g(t)=e^{-c(t)}\int_{t_{0}}^{t}D(t^{\prime})e^{c(t^{\prime})}dt^{\prime}$ (26)
where
$c(t)=i\int_{t_{0}}^{t}\dot{\theta}_{1}^{\ast}(t^{\prime})f_{1}(t^{\prime})dt^{\prime}$ (27)
and
$D(t)=i\dot{\theta}_{2}^{\ast}(t)f_{1}(t)$ (28)
In Fig. 5 we have plotted the numerical (red dotted line) and analytical (blue
solid line) solutions of the amplitude of the state $|a\rangle$ after long
time as a function of $\nu/\omega_{c}$ for the laser pulse envelopes
$\Omega_{1}(t)=\Omega_{2}(t)=\Omega_{0}\operatorname{sech}(\alpha t)$ with
$\Omega_{0}=0.04\omega,\alpha=0.075\omega,\omega_{ab}=\omega_{ac}=\omega=1$. We
see that the approximate analytical solution matches the numerics well
under the parameters considered here. Extensions of this methodology to the
Schrödinger equation H6 ; H7 and the position-dependent-mass Schrödinger (PDMSE)
equation can be found in JhaJMO11 ; Hichem3D .
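For completeness, a minimal numerical sketch of Eqs. (8)–(10) with the pulse
envelopes and parameters of Fig. 5 is given below (Python/SciPy, in
dimensionless units with $\omega=1$ and the constant phase set to $\phi=0$).
Fig. 5 scans $\nu/\omega$; the single sample value used here, the integration
window, and the tolerances are assumptions of the sketch.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Dimensionless units with omega_ab = omega_ac = omega = 1 (parameters of Fig. 5).
omega  = 1.0
Omega0 = 0.04 * omega
alpha  = 0.075 * omega
nu     = 0.8 * omega             # one assumed sample point of the nu/omega scan

def eff_rabi(t):
    # effective Rabi frequency of Eq. (11) with Omega_1 = Omega_2 = Omega_0 sech(alpha t)
    return (Omega0 / np.cosh(alpha * t)) * np.cos(nu * t) * np.exp(1j * omega * t)

def rhs(t, y):
    Ca, Cb, Cc = y
    Om = eff_rabi(t)
    return [1j * Om * Cb + 1j * Om * Cc,    # Eq. (8)
            1j * np.conj(Om) * Ca,          # Eq. (9)
            1j * np.conj(Om) * Ca]          # Eq. (10)

y0 = [0j, 1.0 + 0j, 0j]                     # population initially in |b>
sol = solve_ivp(rhs, (-200.0, 200.0), y0, rtol=1e-9, atol=1e-12)
print(f"|C_a| at long time: {abs(sol.y[0, -1]):.4e}")
```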
## References
* (1) M. Wegener, Extreme Nonlinear Optics: An Introduction (Springer, Berlin, 2005).
* (2) T. Brabec and F. Krausz, Rev. Mod. Phys. 72, 545 (2000).
* (3) E. Goulielmakis, M. Uiberacker, R. Kienberger, A. Baltuska, V. Yakovlev, A. Scrinzi, T. Westerwalbesloh, U. Kleineberg, U. Heinzmann, M. Drescher, et al., Science 305, 1267 (2004).
* (4) P. B. Corkum and F. Krausz, Nat Phys 3, 381 (2007).
* (5) M. Tsubouchi, A. Khramov, and T. Momose, Phys. Rev. A 77, 023405 (2008).
* (6) R. Garcia-Fernandez, A. Ekers, L. P. Yatsenko, N. V. Vitanov, and K. Bergmann, Phys. Rev. Lett. 95, 043001 (2005).
* (7) B. Zhang, J.-H. Wu, X.-Z. Yan, L. Wang, X.-J. Zhang, and J.-Y. Gao, Opt. Express 19, 12000 (2011).
* (8) H. Eleuch, S. Guerin, and H. R. Jauslin, Phys. Rev. A. 85, 013830 (2012).
* (9) S. Nakamura, H. Goto, and K. Ichimura, Optics Communications 293, 160 (2013), ISSN 0030-4018.
* (10) L.-J. Jiang, X.-Z. Zhang, G.-R. Jia, Z. Yong-Hui, and X. Li-Hua, Chinese Physics B 22, 023101 (2012).
* (11) B. Y. Chang, I. R. Solá, V. S. Malinovsky, and J. Santamaría,Phys. Rev. A 64, 033420 (2001).
* (12) G. Dridi, S. Guerin, V. Hakobyan, H. R. Jauslin, and H. Eleuch, Phys. Rev. A. 80, 043408 (2009).
* (13) T. Brabec and F. Krausz, Rev. Mod. Phys. 72, 545 (2000).
* (14) Y. Li, Y. Zhang, C. Li, and X. Zhan, Optics Communications 287, 150 (2013).
* (15) S. E. Harris, Physics Today 50, 36 (1997).
* (16) M. Issler, E. M. Kessler, G. Giedke, S. Yelin, I. Cirac, M. D. Lukin, and A. Imamoglu, Phys. Rev. Lett. 105, 267202 (2010).
* (17) X. Yang, Z. Zhang, X. Yan, and C. Li, Phys. Rev. A 81, 035801 (2010).
* (18) G. Cerullo and C. Vozzi, Physics 5, 138 (2012).
* (19) H. Eleuch, and R. Bennaceur, Journal of Optics A: Pure and Applied Optics 5, 528 (2003).
* (20) N. Boutabba, H. Eleuch and H. Bouchriha, Synthetic Metals 159, 1239 (2009).
* (21) H. Eleuch, D. Elser, and R. Bennaceur, Laser Phys. Lett. 1, 391 (2004).
* (22) A. A. Abdumalikov, O. Astafiev, A. M. Zagoskin, Y. A. Pashkin, Y. Nakamura, and J. S. Tsai, Phys. Rev. Lett. 104, 193601 (2010).
* (23) E. A. Sete, A. A. Svidzinsky, Y. V. Rostovtsev, H. Eleuch, P. K. Jha, S. Suckewer, and M. O. Scully, IEEE J. Sel. Top. Quantum Electron. 18, 541 (2012).
* (24) P. K. Jha and Y. V. Rostovtsev, Phys. Rev. A 81, 033827 (2010)
* (25) P. K. Jha and Y. V. Rostovtsev, Phys. Rev. A 82, 015801 (2010).
* (26) P. K. Jha, A. A. Svidzinsky, and M. O. Scully, Laser Phys. Lett. 9, 368 (2012).
* (27) L. Yuan, A. A. Lanin, P. K. Jha, A. J. Traverso, D. V. Voronine, K. E. Dorfman, A. B. Fedotov, G. R. Welch, A. V. Sokolov, A. M. Zheltikov, and M. O. Scully, Laser Phys. Lett. 8, 736 (2011).
* (28) J. Qian, Y. Qian, M. Ke, X.-L. Feng, C. H. Oh, and Y.Wang, Phys. Rev. A 80, 053413 (2009).
* (29) P. Kumar and A. K. Sarma, Phys. Rev. A 85, 043417 (2012).
* (30) Y. Xiang, Y. Niu, H. Feng, Y. Qi, and S. Gong, Opt. Express 20, 19289 (2012).
* (31) Sh. Barzanjeh, and H. Eleuch, Physica E. 42, 2091 (2010).
* (32) K. F. Lee, D. M. Villeneuve, P. B. Corkum, and E. A. Shapiro, Phys. Rev. Lett. 93, 233601 (2004).
* (33) W. C. Campbell, J. Mizrahi, Q. Quraishi, C. Senko, D. Hayes, D. Hucul, D. N. Matsukevich, P. Maunz, and C. Monroe, Phys. Rev. Lett. 105, 090502 (2010).
* (34) D. Kim, S. G. Carter, A. Greilich, A. S. Bracker, and D.Gammon, Nat. Phys. 7, 223 (2011).
* (35) A. Baltuska, T. Udem, M. Uiberacker, M. Hentschel, E. Goulielmakis, C. Gohle, R. Holzwarth, V. Yakovlev, A. Scrinzi, T. Hansch, et al., Nature 421, 611 (2003).
* (36) N. V. Vitanov, N. J. Phys. 9, 58 (2007).
* (37) B. T. Torosov and N. V. Vitanov, Phys. Rev. A 76, 053404 (2007).
* (38) N. Dudovich, D. Oron, and Y. Silberberg, Phys. Rev. Lett. 88, 123004 (2002).
* (39) S. Malinovskaya, Optics Comm. 282, 3527 (2009).
* (40) Y. V. Rostovtsev, H. Eleuch, A. Svidzinsky, H. Li, V. Sautenkov, and M. Scully, Phys. Rev. A 79, 063833 (2009).
* (41) Y. V. Rostovtsev, and H. Eleuch, J. Mod. Opt. 57, 1882 (2010).
* (42) P. K. Jha, H. Eleuch, and Y. V. Rostovtsev, Phys. Rev. A 82, 045805 (2010).
* (43) M. O. Scully and M. S. Zubairy, Quantum Optics (Cambridge University Press, Cambridge, England, 1997).
* (44) H. Li, V. A. Sautenkov, Y. V. Rostovtsev, M. M. Kash, P. M. Anisimov, G. R.Welch, and M. Scully, Phys. Rev. Lett. 104, 103001 (2010).
* (45) P. K. Jha, H. Li, V. A. Sautenkov, Y. V. Rostovtsev, and M. O. Scully, Opt. Commun. 284, 2538 (2011).
* (46) P. K. Jha, Y. V. Rostovtsev, H. Li, V. A. Sautenkov, and M. O. Scully, Phys. Rev. A 83, 033404 (2011).
* (47) P. K. Jha, S. Das and T. N. Dey arXiv:1210.2356
* (48) P. K. Jha, K. E. Dorfman, Z. Yi, L. Yuan, Y. V. Rostovtsev, V. A. Sautenkov, G. R. Welch, A. M. Zheltikov, and M. O. Scully, Appl. Phys. Lett. 101, 091107 (2012).
* (49) P. K. Jha and C. H. R. Ooi arXiv:1205.5262.
* (50) K. E. Dorfman, P. K. Jha, D. V. Voronine, P. Genevet, F. Capasso and M. O. Scully arXiv:1212.523.
* (51) P. K. Jha, X. Yin and X. Zhang arXiv:1302.0570.
* (52) H. Eleuch, Y. V. Rostovtsev, M. O. Scully, EPL 89, 50004 (2010).
* (53) H. Eleuch and Y. V. Rostovtsev, J. Mod. Opt. 57, 1877 (2010).
* (54) P. K. Jha, H. Eleuch, and Y. V. Rostovtsev, J. Mod. Opt. 58, 652 (2011).
* (55) H. Eleuch, P. K. Jha, and Y. V. Rostovtsev, Math. Sci. Lett. 1, 1 (2012).
|
arxiv-papers
| 2013-02-26T19:08:55 |
2024-09-04T02:49:42.163694
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Pankaj K. Jha, Hichem Eleuch, Fabio Grazioso",
"submitter": "Fabio Grazioso",
"url": "https://arxiv.org/abs/1302.6541"
}
|
1302.6728
|
# Estimations of the low dimensional homology of Lie algebras with large
abelian ideals
Peyman Niroomand Department of Pure Mathematics
Damghan University, Damghan, Iran p$\\[email protected] and Francesco G.
Russo DIEETCAM
Universitá degli Studi di Palermo
Viale Delle Scienze, Edificio 8, 90128, Palermo, Italy
[email protected]
###### Abstract.
A Lie algebra $L$ of dimension $n\geq 1$ may be classified by looking for
restrictions on the size of its second integral homology Lie algebra
$H_{2}(L,\mathbb{Z})$, denoted by $M(L)$ and often called Schur multiplier of
$L$. In case $L$ is nilpotent, we proved that $\mathrm{dim}\
M(L)\leq\frac{1}{2}(n+m-2)(n-m-1)+1$, where $\mathrm{dim}\ L^{2}=m\geq 1$, and
worked on this bound under various perspectives. In the present paper, we
estimate the previous bound for $\mathrm{dim}\ M(L)$ with respect to other
inequalities of the same nature. Finally, we provide new upper bounds for the
Schur multipliers of pairs and triples of nilpotent Lie algebras, by means of
certain exact sequences due to Ganea and Stallings in their original form.
###### Key words and phrases:
Heisenberg algebras, Schur multiplier, nilpotent Lie algebras.
###### 2010 Mathematics Subject Classification:
17B30; 17B60; 17B99
## 1\. Previous contributions and statement of the results
The classification of finite dimensional nilpotent Lie algebras has attracted
the attention of several authors both in topology and in algebra, as we can note
from [2, 14]. The second integral homology Lie algebra $H_{2}(L,\mathbb{Z})$
of a nilpotent Lie algebra $L$ of dimension $\mathrm{dim}\ L=n$ is again a
finite dimensional Lie algebra and its dimension may be connected with that of
$L$ under many points of view. It is customary to call $H_{2}(L,\mathbb{Z})$
the Schur multiplier of $L$ and denote it with $M(L)$, in analogy with the
case of groups due to Schur (see [16, 25]). The study of the numerical
conditions between $\mathrm{dim}\ L$ and $\mathrm{dim}\ M(L)$ is the subject of
many investigations.
A classic contribution of Batten and others [6] shows that
(1.1) $\mathrm{dim}\ M(L)\leq\frac{1}{2}n(n-1).$
Denoting $\mathrm{dim}\ L^{2}=m\geq 1$, Yankosky [27] sharpened (1.1) by
(1.2) $\mathrm{dim}\ M(L)\leq\frac{1}{2}(m-m^{2}+2mn-2n)$
in which the role of $m$ is significant. We contributed in [18, 19, 20, 21,
23] under various aspects and showed that
(1.3) $\mathrm{dim}\ M(L)\leq\frac{1}{2}(n+m-2)(n-m-1)+1,$
is better than (1.1) and (1.2). The crucial step was to prove that (1.3) is
better than (1.2) except when small values of $m$ and $n$ occur (see [19, Corollary
3.4]), and this happens most of the time.
Another inequality of the same nature of (1.2) and (1.3) is given by
(1.4) $\mathrm{dim}\ M(L)\leq\mathrm{dim}\ M(L/L^{2})+\mathrm{dim}\
L^{2}(\mathrm{dim}\ L/Z(L)-1).$
and can be found in [3, Corollary 3.3], where it is shown that it is better
than (1.1).
We will concentrate on (1.1), (1.3) and (1.4) in the present paper, but inform
the reader that there are other inequalities in [3, 7, 8, 10, 11, 18, 19, 20,
21, 23] which may be treated in a similar way, once an appropriate study is
done.
Most of the above bounds are in fact inspired by analogies with the case of
groups (see [6, 15, 22]), even if careful distinctions should be made between
the two contexts. Here we make qualitative comparisons among (1.2), (1.3) and
(1.4), because it is not easy to compare all of them and a specific study is
necessary.
On the other hand, the importance of (1.2), (1.3) and (1.4) is due to the
invariants which appear. In order to understand this point, we should recall
that the idea of classifying nilpotent Lie algebras of finite dimension by
restrictions on their Schur multipliers goes back to [6, 15] and continued in
[3, 7, 8, 10, 11] under different perspectives. These authors proved
inequalities on $\mathrm{dim}\ M(L)$, involving invariants related with the
presentation of $L$. We know that $L\simeq F/R$ by the short exact sequence
$0\rightarrow R\rightarrowtail F\twoheadrightarrow L\rightarrow 0$, where $F$
is a free Lie algebra on $n$ generators and $R$ an ideal of $F$, and Witt’s
Formula, which can be found in [3, 8, 14, 25], shows
(1.5) $\mathrm{dim}\ F^{d}/F^{d+1}=\frac{1}{d}\sum_{r|d}\mu(r)\
n^{\frac{d}{r}}\equiv l_{n}(d),$
where
$\mu(r)=\left\\{\begin{array}[]{lcl}1,&\mathrm{if}\ r=1,\\\ 0,&\mathrm{if}\ r\
\mathrm{is}\ \mathrm{divisible}\ \mathrm{by}\ \mathrm{a}\ \mathrm{square},\\\
(-1)^{s},&\mathrm{if}\ r=p_{1}\ldots p_{s}\ \mathrm{for}\ \mathrm{distinct}\
\mathrm{primes}\ p_{1},\ldots,p_{s}\end{array}\right.$
is the celebrated Möbius function. Now [3, Theorem 2.5] and similar results of
[3, 7, 8, 10, 11] provide inequalities of the same nature of (1.2) and (1.3)
but based on (1.5) and the main problem is to give an explicit expression for
$l_{n}(d)$. For instance, if $c$ denotes the nilpotency class of $L$, then
[8, Theorem 4.1] shows
(1.6) $\mathrm{dim}\ M(L)\leq\sum_{j=1}^{c}l_{n}(j+1)=\sum_{j=1}^{c}\
\left(\frac{1}{j+1}\sum_{i|j+1}\mu(i)\ n^{\frac{j+1}{i}}\right)$
and [8, Examples 4.3, 4.4] provide explicit values for $\mu(m)$ in order to
evaluate numerically (1.6) and then to compare with (1.1). It is in fact hard
to describe the behaviour of the Möbius function from a general point of view,
and so (1.5) is not very helpful in practice when we do not evaluate the
coefficients $\mu(m)$. We note briefly that [16, Theorem 3.2.5] is the
corresponding version of (1.6) for groups and the same problems happen also
here.
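For concreteness, the self-contained sketch below (Python) evaluates $l_{n}(d)$
of (1.5) directly from the definition of the Möbius function and sums the
right-hand side of (1.6) for a given number of generators $n$ and nilpotency
class $c$; the sample values of $n$ and $c$ are purely illustrative.

```python
def mobius(r):
    """Mobius function mu(r), computed by trial factorization (adequate for small r)."""
    if r == 1:
        return 1
    count, m, p = 0, r, 2
    while p * p <= m:
        if m % p == 0:
            m //= p
            if m % p == 0:
                return 0          # r is divisible by a square
            count += 1
        else:
            p += 1
    if m > 1:
        count += 1                # one remaining prime factor
    return -1 if count % 2 else 1

def l(n, d):
    """Witt's Formula (1.5): dim F^d / F^{d+1} for a free Lie algebra on n generators."""
    total = sum(mobius(r) * n ** (d // r) for r in range(1, d + 1) if d % r == 0)
    return total // d             # the sum is always divisible by d

def bound_1_6(n, c):
    """Right-hand side of (1.6): sum_{j=1}^{c} l_n(j+1)."""
    return sum(l(n, j + 1) for j in range(1, c + 1))

print([l(3, d) for d in range(1, 6)])      # [3, 3, 8, 18, 48]
print(bound_1_6(3, 2), bound_1_6(3, 3))    # 11, 29
```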
Now we may understand the importance of being as concrete as possible in
the study of the upper bounds for $\mathrm{dim}\ M(L)$. We should also recall
that $A(n)$ denotes the abelian Lie algebra of dimension $n$ and the main
results of [3, 6, 7, 8, 10, 11, 15, 18, 19, 20, 23] illustrate that many
inequalities on $\mathrm{dim}\ M(L)$ become equalities if and only if $L$
splits into the direct sum of $A(n)$ and a Heisenberg algebra $H(m)$ (here $m\geq
1$ is a given integer). For the convenience of the reader, we recall that a finite
dimensional Lie algebra $L$ is called $Heisenberg$ provided that $L^{2}=Z(L)$
and $\mathrm{dim}\ L^{2}=1$. Such algebras are odd dimensional with basis
$v_{1},\ldots,v_{2m},v$ and the only non-zero multiplication between basis
elements is $[v_{2i-1},v_{2i}]=-[v_{2i},v_{2i-1}]=v$ for $i=1,\ldots,m$.
Unfortunately, theorems of splitting of the aforementioned papers hold only
for small values of $m$ and $n$ (see for instance [18, Theorem 3.1] or [19,
Theorems 2.2, 3.1, 3.5, 3.6, 4.2]) and we are very far from controlling the
general cases. In fact, Chao [9] and Seeley [26] proved that there exist
uncountably many non–isomorphic nilpotent Lie algebras of finite dimension,
beginning already from dimension 10 and this illustrates the complexity of the
problem. At this point, we may state the first main result.
###### Theorem 1.1.
Let $L$ be a nilpotent Lie algebra of $\mathrm{dim}\ L=n$, $\mathrm{dim}\
L^{2}=m$ and $\mathrm{dim}\ Z(L)=d$. If $L$ is nonabelian, then (1.3) is
better than (1.4) for all $n\geq 3$, $d\geq 1$ and
$m\leq\Big{\lfloor}\frac{n-2}{d+1}\Big{\rfloor}$.
We may be more specific in the nilpotent case and use certain exact sequences
due to Ganea and Stallings [16, Theorem 2.5.6] which have been adapted
recently in [1, 4, 5, 12, 24] to Lie algebras. Some notions of homological
algebra should be recalled, in order to formulate the next result. The Schur
multiplier of the pair $(L,N)$, where $L$ is a Lie algebra with ideal $N$, is
the abelian Lie algebra $M(L,N)$ which appears in the following natural exact
sequence of Mayer–Vietoris type
(1.7) $H_{3}(L)\longrightarrow H_{3}(L/N)\longrightarrow M(L,N)\longrightarrow M(L)\longrightarrow M(L/N)\longrightarrow\frac{N}{[L,N]}\longrightarrow\frac{L}{L^{2}}\longrightarrow\frac{L}{L^{2}+N}\longrightarrow 0,$
where the third homology (with integral coefficients) $H_{3}(L)$ of $L$ and
$H_{3}(L/N)$ of $L/N$ are involved. We also recall that $\Phi(L)$ denotes the
Frattini subalgebra of $L$, that is, the intersection of all maximal
subalgebras of $L$ (see [17, 24]). It is easy to see that $\Phi(L)$ is an
ideal of $L$, when $L$ is finite dimensional and nilpotent.
###### Theorem 1.2.
Let $L$ be a nilpotent Lie algebra and $N$ an ideal of $L$ with $\mathrm{dim}\ N=n$ and $\mathrm{dim}\ L/N=u$. Then
$\mathrm{dim}\ M(L,N)\ +\ \mathrm{dim}\ [L,N]\leq\frac{1}{2}n(2u+n-1).$
Furthermore, if $\mathrm{dim}\ L/(N+\Phi(L))=s$ and $\mathrm{dim}\
N/N\cap\Phi(L)=t$, then
$\frac{1}{2}t(2s+t-1)\leq\mathrm{dim}\ M(L,N)\ +\ \mathrm{dim}\ [L,N].$
When $N=L$ in Theorem 1.2, we get $u=0$ and find again (1.1) so that Theorem
1.2 is a generalization of (1.1). On the other hand, Theorem 1.2 improves most
of the bounds in [10, Theorem B], where $L$ is assumed to be factorized.
The last main theorem describes a condition of complementation. We may
introduce $M(L,N)$ functorially from a wider perspective (originally, this is
due to Ellis in [13] for groups). Let $B(L)$ be a classifying space such that
* (i)
The topological space $B(L)$ is a connected CW–complex;
* (ii)
There exists a functor $\pi_{n}$ from the category of topological spaces to
that of Lie algebras such that $\pi_{1}(B(L))\simeq L$;
* (iii)
The Lie algebras $\pi_{n}(B(L))$ are trivial for all $n\geq 2$.
Since the homology Lie algebras $H_{n}(B(L))$ (with integral coefficients)
depend only on $L$, we have $H_{n}(L)=H_{n}(B(L))$ for all $n\geq 0$. For each
ideal $I$ of $L$ we may construct functorially a space $B(L,I)$ as follows.
The quotient homomorphism $L\twoheadrightarrow L/I$ induces a map
$f:B(L)\rightarrow B(L/I)$. Let $M(f)$ denote the mapping cylinder of this
map. Note that $B(L)$ is a subspace of $M(f)$, and that $M(f)$ is homotopy
equivalent to $B(L/I)$. We take $B(L,I)$ to be the mapping cone of the cofibration
$B(L)\rightarrow M(f)$. The cofibration sequence
$B(L)\longrightarrow M(f)\longrightarrow B(L,I)$
yields a natural long exact homology sequence of Mayer–Vietoris type
$\ldots\longrightarrow H_{n+1}(L/I)\longrightarrow
H_{n+1}B(L,I)\longrightarrow H_{n}(L)\longrightarrow
H_{n}(L/I)\longrightarrow\ldots\ \ \forall n\geq 0.$
It is straightforward to see that
$H_{1}(B(L,I))=0,\quad H_{2}(B(L,I))\simeq I/[L,I]\quad\mathrm{and}\quad M(L,I)=H_{3}(B(L,I)).$
These complications of homological nature have a positive consequence: we may
treat the topic with more generality. By a triple we mean a Lie algebra $L$
with two ideals $I$ and $J$, and by a homomorphism of triples
$(L,I,J)\rightarrow(L^{\prime},I^{\prime},J^{\prime})$ we mean a homomorphism
of Lie algebras $L\rightarrow L^{\prime}$ that sends $I$ into $I^{\prime}$ and
$J$ into $J^{\prime}$. The Schur multiplier of the triple $(L,I,J)$ will be
the functorial abelian Lie algebra $M(L,I,J)$ defined by the natural exact
sequence
(1.8) $H_{3}(L,I)\longrightarrow
H_{3}\left(\frac{L}{I},\frac{I+J}{I}\right)\longrightarrow
M(L,I,J)\longrightarrow M(L,J)\longrightarrow
M\left(\frac{L}{I},\frac{I+J}{I}\right)\longrightarrow$
$\longrightarrow\frac{I\cap J}{[L,I\cap
J]+[I,J]}\longrightarrow\frac{J}{[L,J]}\longrightarrow\frac{I+J}{I+[L,J]}\longrightarrow
0,$
where $H_{3}(L,I)=H_{4}(B(L,I))$ and $M(L,I,J)=H_{4}(B(L,I,J))$ is defined in
terms of the mapping cone $B(L,I,J)$ of the canonical cofibration
$B(L,I)\twoheadrightarrow B(L/J,(I+J)/J)$. Our last result is the following.
###### Theorem 1.3.
Let $L$ be a finite dimensional Lie algebra with two ideals $I$ and $J$ of $L$
such that $L=I+J$ and $I\cap J=0$. Then
$\mathrm{dim}\ M(L,I,J)=\mathrm{dim}\ M(L,J)\ -\ \mathrm{dim}\ M(J).$
Moreover, if $K\subseteq J\cap Z(L)$, then
$\mathrm{dim}\ M(L,I,J)+\mathrm{dim}\ M(J)+\mathrm{dim}\ K\cap[L,J]$
$\leq\mathrm{dim}\ M(L/K,J/K)+\mathrm{dim}\ M(K)+\mathrm{dim}\
\frac{L}{L^{2}+K}\cdot\mathrm{dim}\ K.$
## 2\. Proofs of the results
The following property deals with the low-dimensional homology of direct sums
and is a crucial instrument in the proofs of our main results.
###### Lemma 2.1 (See [25], Theorem 11.31, Künneth Formula).
Two finite dimensional Lie algebras $H$ and $K$ satisfy the condition
$M(H\oplus K)=M(H)\oplus M(K)\oplus(H/H^{2}\otimes K/K^{2}).$
In particular,
$\mathrm{dim}\ M(H\oplus K)=\mathrm{dim}\ M(H)\ +\ \mathrm{dim}\ M(K)\ +\
\mathrm{dim}\ H/H^{2}\otimes K/K^{2}.$
The dimension of the Schur multiplier of an abelian Lie algebra is classical.
###### Lemma 2.2 (See [6], Lemma 3).
$L\simeq A(n)$ if and only if $\mathrm{dim}\ M(L)=\frac{1}{2}n(n-1)$.
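As a quick consistency check of Lemmas 2.1 and 2.2, applying the Künneth Formula to the decomposition $A(n)\simeq A(n-m)\oplus A(m)$ gives
$\mathrm{dim}\ M(A(n))=\frac{1}{2}(n-m)(n-m-1)+\frac{1}{2}m(m-1)+(n-m)m=\frac{1}{2}n(n-1),$
in agreement with Lemma 2.2.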
Now we may specify (1.4).
###### Lemma 2.3.
If a nilpotent Lie algebra $L$ of $\mathrm{dim}\ L=n$ has $\mathrm{dim}\
L^{2}=m$ and $\mathrm{dim}\ Z(L)=d$, then (1.4) becomes
$\mathrm{dim}\ M(L)\leq\frac{1}{2}(n-m)(n-m-1)+m(n-d-1).$
###### Proof.
This is an application of Lemma 2.2, noting that $\mathrm{dim}\
L/L^{2}=\mathrm{dim}\ L-\mathrm{dim}\ L^{2}=n-m$, so that $L/L^{2}\simeq A(n-m)$, and that
$\mathrm{dim}\ L/Z(L)=\mathrm{dim}\ L-\mathrm{dim}\ Z(L)=n-d$. $\Box$
###### Proof of Theorem 1.1.
From Lemma 2.3, (1.4) becomes
$\mathrm{dim}\ M(L)\leq\frac{1}{2}(n-m)(n-m-1)+m(n-d-1)$
$=\frac{1}{2}(n^{2}-nm-n-nm+m^{2}+m)+mn-dm-m$
$=\frac{1}{2}(n^{2}+m^{2}+m-n)-dm-m$
$=\frac{1}{2}(n^{2}+m^{2})+\frac{1}{2}m-m-dm-\frac{1}{2}n$
$=\frac{1}{2}(n^{2}+m^{2})-\frac{1}{2}m-dm-\frac{1}{2}n$
$=\frac{1}{2}(n^{2}+m^{2})-\left(d+\frac{1}{2}\right)m-\frac{1}{2}n.$
On the other hand, (1.3) becomes
$\mathrm{dim}\ M(L)\leq\frac{1}{2}(n+m-2)(n-m-1)+1$
$=\frac{1}{2}(n^{2}-nm-n+nm-m^{2}-m-2n+2m+2)+1$
$=\frac{1}{2}(n^{2}-m^{2})+\frac{1}{2}m-\frac{3}{2}n+2.$
Of course, the first terms satisfy
$\frac{1}{2}(n^{2}-m^{2})\leq\frac{1}{2}(n^{2}+m^{2})$ for all $m,n\geq 1$,
but the remaining terms satisfy
$\frac{1}{2}m-\frac{3}{2}n+2\leq-\left(d+\frac{1}{2}\right)m-\frac{1}{2}n\Leftrightarrow
0\leq-(d+1)m+n-2$ $\Leftrightarrow 0\geq(d+1)m-n+2\Leftrightarrow
m\leq\Big{\lfloor}\frac{n-2}{d+1}\Big{\rfloor}.$
It follows that (1.3) is better than (1.4) for these values of $m$. $\Box$
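The comparison in Theorem 1.1 is also easy to verify numerically; the following sketch (with some hypothetical sample values of $n$, $m$ and $d$ chosen by us) evaluates both bounds.

```python
def bound_1_3(n, m):
    # Right-hand side of (1.3), as expanded in the proof of Theorem 1.1.
    return (n + m - 2) * (n - m - 1) // 2 + 1

def bound_1_4(n, m, d):
    # Right-hand side of (1.4), rewritten as in Lemma 2.3.
    return (n - m) * (n - m - 1) // 2 + m * (n - d - 1)

for n, m, d in [(8, 2, 1), (8, 3, 1), (10, 2, 3)]:       # hypothetical sample values
    assert m <= (n - 2) // (d + 1)                        # hypothesis of Theorem 1.1
    print(n, m, d, bound_1_3(n, m), bound_1_4(n, m, d))   # (1.3) is the smaller bound
```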
In order to prove Theorem 1.2, we should note that the Schur multiplier of a
pair $(L,N)$ induces the following exact sequence
(2.1) $\longrightarrow M(L,C)\longrightarrow M(L,N)\longrightarrow
M(L/C,N/C)\longrightarrow 0,$
where $C\subseteq Z(N)$. Moreover, it is not hard to check that there is a
natural epimorphism
(2.2) $C\otimes\frac{L}{L^{2}+C}\longrightarrow M(L,C).$
Now we may prove Theorem 1.2.
###### Proof of Theorem 1.2.
We begin by proving the lower bound. We claim that
$({\dagger})\qquad\mathrm{dim}\
M\left(\frac{L}{\Phi(L)},\frac{N}{N\cap\Phi(L)}\right)\leq\mathrm{dim}\
M(L,N)+\mathrm{dim}\ [L,N].$
Note from [17, Corollary 2, p.420] that $\Phi(L)=L^{2}$ is always true for
nilpotent Lie algebras. Then $L/\Phi(L)$ and $N/N\cap\Phi(L)\simeq
N+\Phi(L)/\Phi(L)\subseteq L/\Phi(L)$ are abelian. In our situation,
$M\left(\frac{L}{\Phi(L)},\frac{N}{N\cap\Phi(L)}\right)\simeq\frac{L}{\Phi(L)}\wedge\frac{N}{N\cap\Phi(L)}\simeq\frac{L}{L^{2}}\wedge\frac{N}{N\cap
L^{2}}.$
Now we should recall from [19, 20, 21, 24] that this is a classical situation
in which it is possible to consider certain compatible actions by conjugation
of $L$ on $N$ (and vice versa), which allow us to construct the nonabelian
tensor product $L\otimes N$ of $L$ and $N$. This construction has several
properties that are useful for our purposes. For instance, one can see that $M(L,N)\simeq
L\wedge N=L\otimes N/L\square N,$ where $L\square N=\langle x\otimes x\ |\
x\in L\cap N\rangle$, and that the map
$\kappa^{\prime}:x\wedge y\in L\wedge N\longmapsto\kappa^{\prime}(x\wedge
y)=[x,y]\in[L,N]$
is an epimorphism of Lie algebras with $\ker\kappa^{\prime}=M(L,N)$ such that
$\mathrm{dim}\ L\wedge N=\mathrm{dim}\ M(L,N)+\mathrm{dim}\ [L,N].$
On the other hand,
$x\wedge y\in L\wedge N\longmapsto (x+L^{2})\wedge (y+(N\cap
L^{2}))\in\frac{L}{L^{2}}\wedge\frac{N}{N\cap L^{2}}$
is also an epimorphism of Lie algebras and it implies
$\mathrm{dim}\ L\wedge N\geq\mathrm{dim}\ \frac{L}{L^{2}}\wedge\frac{N}{N\cap
L^{2}}$
so that
$\mathrm{dim}\
M\left(\frac{L}{\Phi(L)},\frac{N}{N\cap\Phi(L)}\right)=\mathrm{dim}\
\frac{L}{L^{2}}\wedge\frac{N}{N\cap L^{2}}\leq\mathrm{dim}\ L\wedge N.$
The claim $({\dagger})$ follows. Consequently, it will be enough to prove
$({\dagger}{\dagger})\qquad\mathrm{dim}\
M\left(\frac{L}{\Phi(L)},\frac{N}{N\cap\Phi(L)}\right)=\frac{1}{2}t(2s+t-1)$
in order to conclude
$\frac{1}{2}t(2s+t-1)\leq\mathrm{dim}\ M(L,N)\ +\ \mathrm{dim}\ [L,N].$
Since $N/N\cap\Phi(L)\simeq A(t)$ is a direct factor of the abelian Lie
algebra $L/\Phi(L)\simeq A(s+t)\simeq A(s)\oplus A(t)$, Lemma 2.2 implies
$\mathrm{dim}\ M\left(\frac{L}{\Phi(L)}\right)=\frac{1}{2}(s+t)(s+t-1)\
\mathrm{and}\ \mathrm{dim}\
M\left(\frac{L}{N+\Phi(L)}\right)=\frac{1}{2}s(s-1).$
On the other hand, we have (see for instance [4, p.174]) that
$\mathrm{dim}\
M\left(\frac{L}{\Phi(L)},\frac{N}{N\cap\Phi(L)}\right)=\mathrm{dim}\
M\left(\frac{L}{\Phi(L)}\right)-\mathrm{dim}\
M\left(\frac{L}{N+\Phi(L)}\right)$
$=\frac{1}{2}(s+t)(s+t-1)-\frac{1}{2}s(s-1)=\frac{1}{2}t(2s+t-1),$
and so $({\dagger}{\dagger})$ is proved.
Now we prove the upper bound
$\mathrm{dim}\ M(L,N)\leq\frac{1}{2}n(2u+n-1).$
We proceed by induction on $n=\mathrm{dim}\ N$. Of course, the above inequality is true when
$n=0$. Suppose that it holds whenever $N$ is of dimension strictly less than
$n$, and suppose that $\mathrm{dim}\ N=n$. Let $C$ be a one-dimensional Lie
algebra contained in the center of $N$. We are in the situation described by
(2.1) and (2.2). Then
$\mathrm{dim}\ M(L,N)\leq\mathrm{dim}\ M(L,C)+\mathrm{dim}\
M\left(\frac{L}{C},\frac{N}{C}\right)$ $\leq\mathrm{dim}\
C\otimes\frac{L/C}{(L/C)^{2}}\ +\ \mathrm{dim}\
M\left(\frac{L}{C},\frac{N}{C}\right)\leq\mathrm{dim}\
\frac{L/C}{(L/C)^{2}}+\mathrm{dim}\ M\left(\frac{L}{C},\frac{N}{C}\right)$
$=\mathrm{dim}\ \frac{L}{C}-\mathrm{dim}\
\left(\frac{L}{C}\right)^{2}+\mathrm{dim}\
M\left(\frac{L}{C},\frac{N}{C}\right)\leq(u+n-1)+\frac{1}{2}(n-1)(2u+n-2)$
$=\frac{1}{2}n(2u+n-1),$
as wished. $\Box$
From [8, Equation 2.2] and [1, 3, 10, 11], we have a good description of the
2–dimensional homology of quotients and subalgebras. In fact it is possible
to carry over to Lie algebras the celebrated sequences of Ganea and Stallings
in [16, Theorem 2.5.6], studied for groups almost 30 years ago (see [1, 4, 5,
24] for more details). We should also recall that a finite dimensional Lie
algebra $L$ is capable if $L\simeq E/Z(E)$ for a suitable finite dimensional
Lie algebra $E$ (see [1, 11, 23, 24]). The smallest central subalgebra of a
finite dimensional Lie algebra $L$ whose quotient is capable is the epicenter
of $L$, denoted by $Z^{*}(L)$ (see [1, 24]). Notice that a finite dimensional
Lie algebra is capable if and only if it has trivial epicenter, so that this
ideal measures how far $L$ is from being expressible as a central quotient.
###### Lemma 2.4 (See [1], Proposition 4.1 and Theorem 4.4).
Let $L$ be a finite dimensional Lie algebra and $Z$ a central ideal of $L$.
Then the following sequences are exact:
* (i)
$Z\otimes L/L^{2}\rightarrow
M(L){\overset{\beta}{\longrightarrow}}M(L/Z){\overset{\gamma}{\longrightarrow}}L^{2}\cap
Z\rightarrow 0$,
* (ii)
$M(L){\overset{\beta}{\longrightarrow}}M(L/Z)\rightarrow Z\rightarrow
L/L^{2}\rightarrow L/(Z+L^{2})\rightarrow 0$,
where $\beta$ and $\gamma$ are induced by natural embeddings. Moreover, the
following conditions are equivalent:
* (j)
$M(L)\simeq M(L/Z)/(L^{2}\cap Z)$,
* (jj)
The map $\beta:M(L)\rightarrow M(L/Z)$ is a monomorphism,
* (jjj)
$Z\subseteq Z^{*}(L)$.
The next corollary is an immediate consequence of the previous lemma.
###### Corollary 2.5 (See [1], Corollaries 4.2 and 4.5).
With the notations of Lemma 2.4,
$\mathrm{dim}\ M(L/Z)\leq\mathrm{dim}\ M(L)+\mathrm{dim}\ L^{2}\cap
Z\leq\mathrm{dim}\ M(L/Z)+\mathrm{dim}\ Z\cdot\mathrm{dim}\ L/L^{2}$
and, if $Z\subseteq Z^{*}(L)$, then
$\mathrm{dim}\ M(L)+\mathrm{dim}\ L^{2}\cap Z=\mathrm{dim}\ M(L/Z).$
For the convenience of the reader, given an ideal $I$ of a finite dimensional Lie
algebra $L$ and a subalgebra $J$ of $L$, we recall that $I$ is said to be a
complement of $J$ in $L$ if $L=I+J$ and $I\cap J=0$. The generalization of
Corollary 2.5 to the pair $(L,N)$ is illustrated by Corollary 2.6, in which
the assumption that the ideal $N$ possesses a complement in $L$ implies (see
for instance [4, p.174]) the isomorphism
(2.3) $M(L)\simeq M(L,N)\oplus M(L/N).$
###### Corollary 2.6 (See [4], Theorems 2.3 and 2.8).
Let $N$ be an ideal possessing a complement in a finite dimensional Lie algebra $L$,
and let $K$ be an ideal of $L$. If $K\subseteq N\cap Z(L)$, then
$\mathrm{dim}\ M(L,N)+\mathrm{dim}\ K\cap[L,N]$ $\leq\mathrm{dim}\
M(L/K,N/K)+\mathrm{dim}\ M(K)+\mathrm{dim}\
\frac{L}{L^{2}+K}\cdot\mathrm{dim}\ K.$
Moreover, if $K\subseteq N\cap Z^{*}(L)$, then
$\mathrm{dim}\ M(L,N)+\mathrm{dim}\ K\cap[L,N]=\mathrm{dim}\ M(L/K,N/K).$
The proof of Corollary 2.6 is based on the exactness of the sequence (1.7)
when the ideal $N$ possesses a complement in the finite dimensional Lie algebra $L$,
and we are going to generalize this strategy in our last result.
###### Proof of Theorem 1.3.
Since $I,J$ are ideals of $L$ and $J$ is a complement of $I$ in $L$, the exact
sequence (1.8) induces the short exact sequence
$\ 0\rightarrow M(L,I,J)\rightarrowtail M(L,J)\twoheadrightarrow
M(L/I,L/I)\rightarrow 0,$
that is, $M(L,J)$ splits over $M(L,I,J)$ by $M(L/I,L/I)=M(L/I)=M(J)$. Then
$\mathrm{dim}\ M(L,J)=\mathrm{dim}\ M(L,I,J)+\mathrm{dim}\ M(J).$
We may apply Corollary 2.6 to the term $\mathrm{dim}\ M(L,J)$, getting
$\mathrm{dim}\ M(L,I,J)=\mathrm{dim}\ M(L,J)-\mathrm{dim}\ M(J)$
$\leq\Big{(}\mathrm{dim}\ M(L/K,J/K)\ +\ \mathrm{dim}\ M(K)\ +\ \mathrm{dim}\
\frac{L}{L^{2}+K}\otimes K-\mathrm{dim}\ K\cap[L,J]\Big{)}-\mathrm{dim}\ M(J)$
$=\mathrm{dim}\ M(L/K,J/K)\ -\mathrm{dim}\ M(J)\ +\ \mathrm{dim}\ M(K)$ $+\
\mathrm{dim}\ \frac{L}{L^{2}+K}\otimes K-\mathrm{dim}\ K\cap[L,J],$
as claimed. $\Box$
A special case is the following.
###### Corollary 2.7.
With the notations of Theorem 1.3, if $K\subseteq J\cap Z^{*}(L)$, then
$\mathrm{dim}\ M(L,I,J)=\mathrm{dim}\ M(L/K,J/K)-\mathrm{dim}\
K\cap[L,J]-\mathrm{dim}\ M(J).$
## References
* [1] V. Alamian, H. Mohammadzadeh and A.R. Salemkar, Some properties of the Schur multiplier and covers of Lie algebras, Comm. Algebra 36 (2008), 697–707.
* [2] J.M. Ancochea–Bermudez and O.R. Campoamor–Stursberg, On Lie algebras whose nilradical is $(n-p)$–filiform, Comm. Algebra 29 (2001), 427–450.
* [3] M. Araskhan, B. Edalatzadeh and A.R. Salemkar, Some inequalities for the dimension of the $c$-nilpotent multiplier of Lie algebras, J. Algebra 322 (2009), 1575–1585.
* [4] M. Araskhan and M.R. Rismanchian, Some inequalities for the dimension of the Schur multiplier of a pair of (nilpotent) Lie algebras, J. Algebra 352 (2012), 173–179.
* [5] M. Araskhan and M.R. Rismanchian, Some properties on the Schur multiplier of a pair of Lie algebras, J. Algebra Appl. (2012), to appear.
* [6] P. Batten, K. Moneyhun and E. Stitzinger, On characterizing nilpotent Lie algebras by their multipliers, Comm. Algebra 24 (1996), 4319–4330.
* [7] L. Bosko, On Schur multipliers of Lie algebras and groups of maximal class, Int. J. Algebra Comput. 20 (2010), 807–821.
* [8] L. Bosko and E. Stitzinger, Schur multipliers of nilpotent Lie algebras, preprint, 2011, available online at http://arxiv.org/abs/1103.1812.
* [9] C.Y. Chao, Uncountably many non–isomorphic nilpotent Lie algebras, Proc. Amer. Math. Soc. 13 (1962), 903–906.
* [10] B. Edalatzadeh, F. Saeedi and A.R. Salemkar, The commutator subalgebra and Schur multiplier of a pair of nilpotent Lie algebras, J. Lie Theory 21 (2011), 491–498.
* [11] B. Edalatzadeh, H. Mohammadzadeh, A.R. Salemkar and M. Tavallaee, On the non-abelian tensor product of Lie algebras, Linear Multilinear Algebra 58 (2010), 333–341.
* [12] B. Edalatzadeh, H. Mohammadzadeh and A.R. Salemkar, On covers of perfect Lie algebras, Algebra Colloq. 18 (2011), 419–427.
* [13] G. Ellis, The Schur multiplier of a pair of groups, Appl. Categ. Structures 6 (1998), 355–371.
* [14] M. Goze and Yu. Khakimyanov, Nilpotent Lie Algebras, Kluwer, Dordrecht, 1996.
* [15] P. Hardy and E. Stitzinger, On characterizing nilpotent Lie algebras by their multipliers $t(L)=3,4,5,6$, Comm. Algebra 26 (1998), 3527–3539.
* [16] G. Karpilovsky, The Schur Multiplier, Clarendon Press, Oxford, 1987.
* [17] E.I. Marshall, The Frattini subalgebra of a Lie algebra, J. London. Math. Soc. 42 (1967), 416–422.
* [18] P. Niroomand and F.G. Russo, A note on the Schur multiplier of a nilpotent Lie algebra, Comm. Algebra 39 (2011), 1293–1297.
* [19] P. Niroomand and F.G. Russo, A restriction on the Schur multiplier of a nilpotent Lie algebra, Elec. J. Linear Algebra 22 (2011), 1–9.
* [20] P. Niroomand, On dimension of the Schur multiplier of nilpotent Lie Algebras, Cent. Eur. J. Math. 9 (2011), 57–64.
* [21] P. Niroomand, On the tensor square of nonabelian nilpotent finite dimensional Lie algebras. Linear Multilinear Algebra 59 (2011), 831–836.
* [22] P. Niroomand and F.G. Russo, An improvement of a bound of Green, J. Algebra Appl. (2012), to appear.
* [23] P. Niroomand, Some properties on the tensor square of Lie algebras, J. Algebra Appl. (2012), to appear.
* [24] P. Niroomand and M. Parvizi, Capability of nilpotent Lie algebras with small derived subalgebras, preprint, available online at http://arxiv.org/abs/1002.4280.
* [25] J. Rotman, An Introduction to Homological Algebra, Academic Press, San Diego, 1979.
* [26] C. Seeley, Some nilpotent Lie algebras of even dimension, Bull. Austral. Math. Soc. 45 (1992), 71–77.
* [27] B. Yankosky, On the multiplier of a Lie algebra, J. Lie Theory 13 (2003), 1–6.
# Measuring the Size of Large No-Limit Poker Games
Michael Johanson
###### Abstract
In the field of computational game theory, games are often compared in terms
of their size. This can be measured in several ways, including the number of
unique game states, the number of decision points, and the total number of
legal actions over all decision points. These numbers are either known or
estimated for a wide range of classic games such as chess and checkers. In the
stochastic and imperfect information game of poker, these sizes are easily
computed in “limit” games which restrict the players’ available actions, but
until now had only been estimated for the more complicated “no-limit”
variants. In this paper, we describe a simple algorithm for quickly computing
the size of two-player no-limit poker games, provide an implementation of this
algorithm, and present for the first time precise counts of the number of game
states, information sets, actions and terminal nodes in the no-limit poker
games played in the Annual Computer Poker Competition.
## 1 Introduction
Over the last decade, Texas hold’em poker has become a challenge problem and
common testbed for researchers studying artificial intelligence and
computational game theory. Poker has proved popular for this task because it
is a canonical example of a game with imperfect information and stochastic
outcomes. Since 2006, the Annual Computer Poker Competition (ACPC) [12, 2] has
served as a venue for researchers to play their poker agents against each
other, revealing which artificial intelligence techniques are effective in
practice. The competition has driven research in the field of computational
game theory, resulting in algorithms capable of finding close approximations
to optimal strategies in ever larger games.
The size of a game is a simple heuristic that can be used to describe its
complexity and compare it to other games, and a game’s size can be measured in
several ways. The most commonly used measurement is to count the number of
game states in a game: the number of possible sequences of actions by the
players or by chance, as viewed by a third party that observes all of the
players’ actions. In the poker setting, this would include all of the ways
that the players’ private and public cards can be dealt and all of the possible
betting sequences. This number allows us to compare a game against other games
such as chess or backgammon, which have $10^{47}$ and $10^{20}$ distinct game
states respectively (not including transpositions)[10].
In imperfect information games, an alternate measure is to count the number of
decision points, which are more formally called
information sets. When a player cannot observe some of the actions or chance
events in a game, such as in poker when the opponent’s private cards are
unknown, many game states will appear identical to the player. Each such set
of indistinguishable game states forms one information set, and an agent’s
strategy or policy for a game must necessarily depend on its information set
and not on the game state: it cannot choose to base its actions on information
it does not know. State-of-the-art algorithms for approximating optimal
strategies in imperfect information games, such as Counterfactual Regret
Minimization (CFR)[11], converge at a rate that depends on the total number of
information sets.
An additional measure related to the number of information sets is the number
of legal actions summed across each of the information sets, which we will
refer to as the number of infoset-actions. This measure has practical
implications on the memory required to store or compute a strategy. An agent’s
strategy can be represented as a behavioral strategy by storing a probability
of taking each legal action at each information set. Approximating an optimal
strategy using a standard CFR implementation requires two double-precision
floating point variables per infoset-action: one to store the accumulated
regret, and the other to store the average strategy. (Some recent CFR
variants, such as CFR-BR [6], or Oskari Tammelin’s PureCFR, which uses integers
instead of double-precision floats, may require less memory.)
In some poker variants it is simple to compute the number of game states and
information sets in the game, and counting the number of infoset-actions is
not much harder. For example, in limit poker games such as heads-up limit
Texas hold’em, the number of information sets can be easily calculated with
a single closed-form expression, as we will describe further in Section 2.
This calculation is straightforward because the possible betting actions and
information sets within one round are independent of the betting history on
previous rounds, and so an expression to calculate the number of game states
can be stated for each round as the product of the possible chance events, the
number of betting sequences to reach the round, and the number of information
sets within the round. In the ACPC’s heads-up limit Texas hold’em events, this
can be performed by hand to measure the size of the game at $3.162\times
10^{17}$ game states and $3.194\times 10^{14}$ information sets. In practice,
researchers use a lossless state-space abstraction technique that merges
states with isomorphic cards, leading to a strategically equivalent but
smaller game with $1.380\times 10^{13}$ information sets and $3.589\times
10^{13}$ infoset-actions.
In no-limit poker variants, however, measuring the size of the game has until
now been computationally challenging. In these games, the players are provided
with a fixed amount of money (a stack size) at the start of each game, and may
make any number of betting actions of almost any size during any round until
they have committed their entire stack. This means that the possible betting
sequences cannot be neatly decomposed by round as is possible in limit poker
games. Since 2007, the ACPC has played three different no-limit poker games,
each of which was (correctly) presumed to be far larger than the limit Texas
hold’em variants. The variant played in 2007 and 2008, $1-$2 no-limit Texas
hold’em with $1000 (500-blind) stacks, was previously estimated by Gilpin et
al. to have $10^{71}$ game states [5]. However, the exact size of this game,
or of the 2009 and 2010-Present games, has not previously been computed.
In this technical report, we will present for the first time an algorithm that
can be used to count the number of game states, information sets, and infoset-
actions in these large two-player no-limit poker games. The algorithm is
simple to implement, and source code will be provided along with this
technical report. In Section 2, for context we will briefly describe how the
size of heads-up limit poker games are computed. In Section 3 we describe the
new algorithm, which uses dynamic programming to avoid traversing the game
tree. In Section 4 we will use our implementation to compute for the first
time the exact counts of the game states, information sets, and infoset-
actions in the 2007, 2008-2009 and 2010-Present ACPC heads-up no-limit poker
games. Finally, we will briefly discuss the ongoing challenges for action
abstraction research in this domain, and propose a new no-limit game as a
convenient research testbed for future work.
## 2 Measuring heads-up limit games
Over the last decade, heads-up limit Texas hold’em has become a common testbed
for researchers studying computational game theory in imperfect information
games, with significant efforts towards approximating optimal strategies for
the game [3, 11, 6, 4]. In the first paper on approximating a Nash equilibrium
strategy for the game, Billings et al. presented a figure illustrating the
branching factor of the game [3, Figure 1]. In this section, we will describe
how the size of the game (in game states, information sets, and infoset-
actions) can be precisely computed, to give context to our discussion of no-
limit poker.
The heads-up limit Texas hold’em game played in the ACPC is a two player game
with four rounds and at most four bets per round. In the first round, the
players’ small blind and big blind (antes required to start the game)
count as a bet, and at most three additional bets are allowed. The public and
private cards are dealt out as normal for Texas hold’em games. The ACPC
uses Doyle’s game convention, in which each player’s stack is reset at the
start of each game, and their total winnings are accumulated over all of the
games. In the limit poker events, each player’s stack is set to be
sufficiently large that the maximum number of bets can be made on each round,
making the stack size irrelevant for computing the size of the game.
To start our discussion of the size of the game, we present Table 1 which
lists the number of possible ways to deal the private and public cards on each
round. The Total Two-Player column describes the number of ways to deal the
private and public cards to both players on each round:
$\binom{52}{2}\times\binom{50}{2}$ on the first round,
$\binom{52}{2}\times\binom{50}{2}\times\binom{48}{3}$ on the second round, and
so on. The Total One-Player column describes the number of ways to deal the
cards from one player’s point of view, when the opponent’s cards are unknown:
$\binom{52}{2}$ on the first round, $\binom{52}{2}\times\binom{50}{3}$ on the
second round, and so on. Finally, the Canonical One-Player column lists the
number of canonical card combinations from one player’s point of view, after
losslessly merging isomorphic card combinations that are strategically
identical.
Round | Total Two-Player | Total One-Player | Canonical One-Player
---|---|---|---
Preflop | 1,624,350 | 1,326 | 169
Flop | 28,094,757,600 | 25,989,600 | 1,286,792
Turn | 1,264,264,092,000 | 1,221,511,200 | 55,190,538
River | 55,627,620,048,000 | 56,189,515,200 | 2,428,287,420
Table 1: Possible public and private card combinations in Texas hold’em poker
games.
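As one illustration of the canonical counts, the 169 canonical preflop combinations in Table 1 correspond to the 13 pairs plus the $\binom{13}{2}$ suited and $\binom{13}{2}$ offsuit rank combinations; a one-line check (ours):

```python
from math import comb

pairs, suited, offsuit = 13, comb(13, 2), comb(13, 2)
print(pairs + suited + offsuit)   # 169 canonical preflop hands, as in Table 1
```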
Next, we note that in poker games, the betting actions available to the
players are independent of the cards that they have been dealt. This means
that the possible action sequences on each round can be enumerated on their
own, and then multiplied by the number of card combinations to find the number
of game states. Further, since the players start with a large enough stack
that the maximum number of bets can be made on each round, this means that the
possible betting sequences within one round are independent of the actions
made by the players on earlier rounds. In Table 2, we present the decision
points, terminal nodes, and action sequences that continue to the next round
in heads-up limit Texas hold’em. In the Decision Points column, “-” represents
the first decision in the round, and “c” and “r” respectively represent the
check/call and bet/raise actions by the players that lead to a decision. The
Terminal column lists the betting sequences that end the game in the current
round, and the Continuing column lists the betting sequences that continue to
the next round. Note that we do not allow players to fold when not facing a
bet, as this is dominated by checking or calling.
Round | Sequences | Actions | Continuing | Terminal
---|---|---|---|---
Table 2: Betting sequences in limit hold’em poker games. [The per-round entries of this table were lost in this copy.]
The figures in Tables 1 and 2 can be multiplied together to compute the number
of game states, information sets, and infoset-actions. This is done one round
at a time, by taking the number of betting sequences and multiplying it by the
branching factor due to the chance events. If we multiply by the number of
two-player chance events we obtain the number of game states, while
multiplying by the number of one-player chance events results in the number of
information sets. An example of this calculation is shown in Equation 1, in
which we calculate the total number of information sets, $|\mathcal{I}|$, in
heads-up limit Texas hold’em poker.
$\displaystyle|\mathcal{I}|$ $\displaystyle=\binom{52}{2}\times 8$
$\displaystyle+\binom{52}{2}\binom{50}{3}\times 7\times 10$
$\displaystyle+\binom{52}{2}\binom{50}{3}\binom{47}{1}\times 7\times 9\times
10$ $\displaystyle+\binom{52}{2}\binom{50}{3}\binom{47}{1}\binom{46}{1}\times
7\times 9\times 9\times 10$ $\displaystyle=319,365,922,522,608$ (1)
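As a sanity check, the sum in Equation 1 can be reproduced with a few lines of Python; the per-round betting factors 8, $7\times 10$, $7\times 9\times 10$ and $7\times 9\times 9\times 10$ are read off Equation 1 itself:

```python
from math import comb

# One-player card deals per round (the "Total One-Player" column of Table 1).
deals = [comb(52, 2),
         comb(52, 2) * comb(50, 3),
         comb(52, 2) * comb(50, 3) * 47,
         comb(52, 2) * comb(50, 3) * 47 * 46]

# Betting sequences reaching a player decision on each round, as in Equation 1.
betting = [8, 7 * 10, 7 * 9 * 10, 7 * 9 * 9 * 10]

print(sum(d * b for d, b in zip(deals, betting)))   # 319365922522608
```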
Similar calculations can be performed to compute the number of game states or
the number of infoset-actions, which are presented in Table 3. Of particular
interest are the total number of canonical information sets and canonical
infoset-actions, as these figures describe the complexity in time and memory
of computing an optimal strategy for the game using CFR. In theory, CFR’s
convergence bound is linear in the number of canonical information sets [11,
Theorem 4]. In practice, a standard CFR implementation requires two double-
precision floating point variable per infoset-action: one to accumulate
regret, and the other to accumulate the average strategy.
| Round | Sequences | Sequence-Actions | Continuing | Terminal
---|---|---|---|---|---
Preflop | 8 | 21 | 7 | 7
Flop | 70 | 182 | 63 | 56
Turn | 630 | 1638 | 567 | 504
River | 5670 | 14742 | 0 | 9639
Total | 6378 | 16583 | | 10206
Canonical information sets (after merging card-isomorphic states)
| Round | Infosets | Infoset-Actions | Continuing | Terminal
Preflop | 1352 | 3549 | 1183 | 1183
Flop | 9.008e7 | 2.342e8 | 8.107e7 | 7.206e7
Turn | 3.477e10 | 9.040e10 | 3.129e10 | 2.781e10
River | 1.377e13 | 3.580e13 | 0 | 2.341e13
Total | 1.380e13 | 3.589e13 | | 2.343e13
Unabstracted information sets
| Round | Infosets | Infoset-Actions | Continuing | Terminal
Preflop | 10608 | 27846 | 9282 | 9282
Flop | 1.819e9 | 4.730e9 | 1.637e9 | 1.455e9
Turn | 7.696e11 | 2.001e12 | 6.926e11 | 6.156e11
River | 3.186e14 | 8.283e14 | 0 | 5.416e14
Total | 3.194e14 | 8.304e14 | | 5.422e14
| Round | States | State-Actions | Continuing | Terminal
Preflop | 1.299e7 | 3.411e7 | 1.137e7 | 1.137e7
Flop | 1.967e12 | 5.113e12 | 1.770e12 | 1.573e12
Turn | 7.965e14 | 2.071e15 | 7.168e14 | 6.372e14
River | 3.154e17 | 8.201e17 | 0 | 5.362e17
Total | 3.162e17 | 8.221e17 | | 5.368e17
Table 3: Game size figures for heads-up limit Texas hold’em.
The game’s size of $3.589\times 10^{13}$ canonical infoset-actions means that
33 terabytes of disk (using one byte per infoset-action) would be required to
store a behavioral strategy, and CFR would require 523 terabytes of RAM (two
8-byte doubles per infoset-action) to solve the game precisely. While this
makes the exact, lossless computation intractable with conventional hardware,
it is at least conceivable that such a computation will be possible in time
with hardware advances. Additionally, the size of the game is sufficiently
small that unabstracted best response computations have recently become
possible [8], and significant progress is being made towards closely
approximating an optimal strategy while using state-space abstraction
techniques [6].
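The storage estimates quoted above follow directly from the canonical infoset-action count; a quick sketch of the arithmetic (assuming binary terabytes of $2^{40}$ bytes, which is how the figures above appear to have been computed):

```python
canonical_infoset_actions = 3.589e13
print(canonical_infoset_actions / 2**40)        # ~32.6 TB of disk at one byte per infoset-action
print(canonical_infoset_actions * 16 / 2**40)   # ~522 TB of RAM at two 8-byte doubles each
```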
## 3 Measuring large no-limit games
We now turn to the problem of measuring the size of large two-player no-limit
poker games. Unlike the limit poker game discussed in Section 2, no-limit
poker presents additional challenges that prevent us from using a single,
simple expression as in Equation 1. The difficulty is that the possible
betting sequences available in each round depend on the betting sequence taken
in earlier rounds; furthermore, there can be an enormous number of betting
sequences leading to the start of the final round, precluding the approach of
simply enumerating them.
The heads-up no-limit poker games played in the ACPC are parameterized by two
variables: the stack size that each player has at the start of the game, and
the value of the big blind, with the small blind being set equal to half of a
big blind. Each of these variables is measured in dollars, and the stack size
is typically a multiple of the big blind. Unlike in limit Texas hold’em, where
each player can only fold, call, or raise a predetermined amount at each
decision, no-limit poker allows for a large number of actions. Each player may
fold, call, or bet any whole dollar amount in a range from a min-bet to all of
their remaining chips. The size of a min-bet is context-dependent: if a bet
has not yet been placed in the current round then a min-bet is defined as
equal to the big blind; otherwise, it is equal to the size of the previous bet
after calling any outstanding bet. This means that bets cannot decrease in
size during a round. One exception is that a player is always allowed to bet
all of their remaining chips, even if this is smaller than a min-bet. Once the
players have each bet all of their chips (i.e., they are all-in), their only
legal actions are to call for the remaining rounds until the game is over.
When we present the size of no-limit games, we do not include these trivial
information sets or their forced actions.
At any decision point, the actions available to the players depend on the
betting history in the game so far: not only on the actions taken in the
current round, as in limit poker, but on the actions in earlier rounds, as
these earlier actions determine the remaining money that the players can use
to bet with. Walking the betting tree of large no-limit games is intractable,
as the games are simply far too large. However, there is still structure to
the betting that can be exploited for the purposes of counting the possible
states in the game without explicitly walking the tree. We highlight two
critical properties that make this computation possible. First, a player’s
legal actions at any decision depend on only three factors: the amount of
money they have remaining, the size of the bet that they are facing, and if a
check is legal (i.e., if it is the first action in a round). Within one
betting round, any two decision points that are identical in these three
factors will have the same legal actions and the same betting subtrees for the
remainder of the game, regardless of other aspects of their history. Second,
each of these factors only increases or decreases during a round. A player’s
stack size only decreases as they make bets or call an opponent’s bets. The
bet being faced is zero at the start of a round (or if the opponent has
checked), and can only remain the same or increase during a round. Finally, a
check is only allowed as the first action of a round.
These observations mean that we do not have to walk the entire game in order
to count the decision points. Instead of considering each betting history
independently, we will instead consider the relatively small number of
possible configurations of round, stack-size, bet-faced, and check-allowed,
and do so one round at a time, starting from the start of the game. We will
incrementally compute the number of action histories that reach each of these
configurations by using dynamic programming. This involves a base case and an
inductive step. The base case is simple: there is one way to reach the start
of the game, at which the first player has a full stack minus a small blind,
is facing a bet equal to the big blind minus the small blind, and a check is
allowed. Next is the inductive step: if we know that there are $n$ action
sequences that reach a given configuration, then for each legal action at that
configuration, we can add another $n$ ways to reach the subsequent
configurations. Due to the second property, that each of the round, stack-
size, bet-faced and check-allowed factors only increase or decrease, we can
update the configurations in a particular order such that applying the
inductive step to a configuration only increases the number of ways to reach
configurations that we have not yet examined. For each round in increasing
order, we visit all configurations where checks are allowed first, followed by
those where a call ends the round. Within each of these sets, we update
configurations in order from largest stacks remaining to smallest. Within each
subset, we update configurations in order from smallest bets faced to largest.
Since all actions taken from a configuration only update the number of ways to
reach configurations later in the ordering, only a single traversal is
required in order to update all configurations.
When updating each configuration, we can increment counters for each round
that track the number of action sequences that lead to a decision by a player
and the total number of infoset-actions. After traversing the set of
configurations over all of the rounds, the resulting values can be multiplied
by the branching factor due to the chance events presented earlier in
Table 1 to find the size of each round. Adding these values across each round
produces the overall size of the game in terms of game states, information
sets, infoset-actions, and canonical information sets and canonical infoset-
actions.
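To make the bookkeeping concrete, the following Python sketch (ours, not the released implementation) applies the same configuration-based dynamic programming to a deliberately simplified setting: a single post-flop-style betting round in which both players start with equal stacks, integer bets from a min-bet up to all-in are allowed, and there is no multi-round carry-over, blind structure or card branching. The function and variable names are ours, and Python’s arbitrary-precision integers stand in for the GMP counters discussed below.

```python
from collections import defaultdict

def count_one_round(stack, big_blind):
    # Count decision points, action slots and terminal betting sequences for a
    # single betting round of a toy heads-up no-limit game.  A configuration is
    # (chips behind of the player to act, bet faced); since both players start
    # the round with `stack` chips behind, the opponent's chips behind are
    # always chips_behind - bet_faced, so a two-dimensional table suffices.
    decisions = actions = terminals = 0

    def raises(r, b):
        # Legal (re)bet sizes when facing a bet of b with r chips behind.
        opp = r - b                               # opponent's chips behind
        if opp <= 0:
            return range(0)                       # opponent is all-in: no raising
        min_bet = max(b, big_blind)
        return range(min(min_bet, opp), opp + 1)  # all-in is always allowed

    ways = defaultdict(int)          # action histories reaching each configuration
    # First action of the round: a check passes the action on instead of ending it.
    decisions += 1
    actions += 1 + len(raises(stack, 0))          # check + every legal bet size
    ways[(stack, 0)] += 1                         # opponent to act after a check
    for x in raises(stack, 0):
        ways[(stack, x)] += 1                     # opponent to act facing a bet of x

    # Sweep the remaining configurations in the order used in this report: larger
    # stacks before smaller ones, then smaller bets faced before larger ones, so
    # every transition points strictly forward and a single pass suffices.
    for r in range(stack, -1, -1):
        for b in range(0, r + 1):
            w = ways[(r, b)]
            if w == 0:
                continue
            if b == 0:
                acts = 1 + len(raises(r, 0))      # check (ends the round) + bets
                terminals += w
                for x in raises(r, 0):
                    ways[(r, x)] += w
            else:
                acts = 2 + len(raises(r, b))      # fold + call (both terminal) + raises
                terminals += 2 * w
                for x in raises(r, b):
                    ways[(r - b, x)] += w         # the opponent now acts
            decisions += w
            actions += w * acts
    return decisions, actions, terminals

print(count_one_round(stack=2, big_blind=1))      # (8, 20, 13), checkable by hand
print(count_one_round(stack=200, big_blind=2))
```

For the real ACPC games one must additionally track, for each stack size, the number of action sequences that continue to the next round, and multiply the per-round totals by the chance branching factors of Table 1, exactly as described above.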
In practice, this algorithm is straightforward to implement and has reasonable
memory and time requirements. The main memory cost is that of allocating one
variable to each configuration of stack-size and bet-faced, which can simply
be done using a two-dimensional array. This array can be reused on each round
if we also allocate a one-dimensional array indexed by stack size to track the
possible ways to reach the next round. The type of each of these variables
should be chosen with caution, as for nontrivial no-limit poker games, they
will quickly surpass the maximum value of a 64-bit unsigned integer. Double-
precision floating point variables may be used, but of course result in
floating point inaccuracy and cannot provide a precise count. Instead, an
arbitrary precision integer library can be used so that each variable stores a
precise integer count. In our results and in the implementation accompanying
this technical report, we used the GNU Multiple Precision Arithmetic Library
(GMP) [1] for this purpose.
The final consideration of the algorithm is its space and time complexity. As
described above, we need only to store a single variable for each of a
relatively small number of configurations. To compute the size of the largest
ACPC no-limit game, played from 2010 to the present, approximately 400 million
variables were required (20000 possible stack sizes times 20000 possible bets
faced). Using double-precision floating point variables requires less than 3
gigabytes of RAM; using the GMP library’s mpz_t variables requires six
gigabytes at startup, and additional memory during the computation as some
variables increase and have to allocate more memory. In terms of time, only a
single traversal of the configurations is required, which is essentially four
nested for() loops over the rounds, stack sizes, bets faced, and (to update
each configuration) the legal actions. Measuring the size of the 2007-2008 and
2009 ACPC no-limit games, described below, took 47 seconds and 32 seconds
respectively. Measuring the significantly larger 2010-Present ACPC game took
nearly two days.
We have released an open source (BSD-licensed) implementation of the algorithm
to accompany this technical report. It can be found online at either of the
following locations:
* •
http://webdocs.cs.ualberta.ca/~johanson/publications/poker/2013-techreport-nl-
size/2013-techreport-nl-size.html
* •
http://webdocs.cs.ualberta.ca/~games/poker/count_nl_infosets.html
## 4 Sizes of no-limit poker games
Having described the algorithm used to measure the size of the games, we can
now present our main result: the size of the three no-limit games played in
the ACPC since 2007, in terms of game states, information sets, infoset-
actions, and canonical information sets and canonical infoset-actions. We will
briefly describe each game and its size, and also present the amount of memory
required to store a behavioral strategy and to compute an optimal strategy
using CFR. For each game, we will present a table listing the count for each
round in scientific notation, and the overall sizes as precise integers; if
exact counts of intermediate variables are required, the accompanying
implementation outputs precise values.
Note that in the tables below, the ‘Sequences’, ‘Infosets’ and ‘States’
columns show the total number of nontrivial situations, where the player has
more than one legal action. Namely, it does not count the forced moves after
the players are both all-in and must check and call for the remainder of the
game as the public cards are dealt. Likewise, the ‘Actions’ columns do not
include these forced actions.
### 4.1 2007-2008: $1-$2 with $1000 (500-blind) stacks
In 2007, the ACPC introduced its first no-limit poker game, which used a small
blind and big blind of $1 and $2 respectively and $1000 (500-blind) stacks.
This was intentionally chosen to be a large, “deep-stack” game, as humans
typically consider 100-blind stacks to be a normal size. Gilpin et al. had
previously estimated this game to have $10^{71}$ game states, quite close to
its actual size of $7.16\times 10^{75}$ game states. Note that the first round
alone, without considering any card information, has more action sequences
than the full four-round game of heads-up limit Texas hold’em has game states.
| Round | Sequences | Actions | Continuing | Terminal
---|---|---|---|---|---
Preflop | 8.54665e31 | 2.564e32 | 8.54665e31 | 8.54665e31
Flop | 4.66162e44 | 1.39849e45 | 4.66162e44 | 4.66162e44
Turn | 1.61489e54 | 4.84467e54 | 1.61489e54 | 1.61489e54
River | 1.28702e62 | 3.86106e62 | 0 | 2.57404e62
Total | 1.28702e62 | 3.86106e62 | | 2.57404e62
Canonical information sets
| Round | Infosets | Actions | Continuing | Terminal
Preflop | 1.44438e34 | 4.33315e34 | 1.44438e34 | 1.44438e34
Flop | 5.99853e50 | 1.79956e51 | 5.99853e50 | 5.99853e50
Turn | 8.91266e61 | 2.6738e62 | 8.91266e61 | 8.91266e61
River | 3.12525e71 | 9.37575e71 | 0 | 6.2505e71
Total | 3.12525e71 | 9.37575e71 | | 6.2505e71
Unabstracted information sets
| Round | Infosets | Actions | Continuing | Terminal
Preflop | 1.13329e35 | 3.39986e35 | 1.13329e35 | 1.13329e35
Flop | 1.21154e52 | 3.63461e52 | 1.21154e52 | 1.21154e52
Turn | 1.97261e63 | 5.91782e63 | 1.97261e63 | 1.97261e63
River | 7.2317e72 | 2.16951e73 | 0 | 1.44634e73
Total | 7.2317e72 | 2.16951e73 | | 1.44634e73
| Round | States | Actions | Continuing | Terminal
Preflop | 1.38828e38 | 4.16483e38 | 1.38828e38 | 1.38828e38
Flop | 1.30967e55 | 3.92901e55 | 1.30967e55 | 1.30967e55
Turn | 2.04165e66 | 6.12494e66 | 2.04165e66 | 2.04165e66
River | 7.15938e75 | 2.14781e76 | 0 | 1.43188e76
Total | 7.15938e75 | 2.14781e76 | | 1.43188e76
Table 4: Information Set and Game State counts for the 2007-2008 ACPC no-limit
game, $1-$2 No-Limit Texas Hold’em with $1000 (500-blind) stacks.
Precise counts:
* •
Game states: 7 159 379 256 300 503 000 014 733 539 416 250 494 206 634 292 391
071 646 899 171 132 778 113 414 200
* •
Information Sets: 7 231 696 218 395 692 677 395 045 408 177 846 358 424 267
196 938 605 536 692 771 479 904 913 016
* •
Canonical Infoset-Actions: 937 575 457 443 070 937 268 150 407 671 117 224 976
700 640 913 137 221 641 272 121 424 098 561
Solving this game using a standard CFR implementation (2 double-precision
floats per canonical infoset-action) would require 12 408 707 859 239 112 772
721 938 772 275 407 031 368 328 229 870 ($1.241\times 10^{49}$) yottabytes of
RAM.
### 4.2 2009: $1-$2 with $400 (200-blind) stacks
In 2009, the ACPC switched its no-limit game to a game with a smaller stack
size. This had two effects. First, it was closer to what humans would consider
a deep-stack no-limit game. Second, reducing the stack size resulted in a
significantly smaller game which required slightly less action abstraction.
| Round | Sequences | Actions | Continuing | Terminal
---|---|---|---|---|---
Preflop | 2.23569e19 | 6.70708e19 | 2.23569e19 | 2.23569e19
Flop | 9.91129e26 | 2.97339e27 | 9.91129e26 | 9.91129e26
Turn | 4.9179e32 | 1.47537e33 | 4.9179e32 | 4.91789e32
River | 2.47216e37 | 7.41638e37 | 0 | 4.94427e37
Total | 2.47221e37 | 7.41652e37 | | 4.94432e37
Canonical information sets
| Round | Infosets | Actions | Continuing | Terminal
Preflop | 3.77832e21 | 1.1335e22 | 3.77832e21 | 3.77832e21
Flop | 1.27538e33 | 3.82613e33 | 1.27538e33 | 1.27538e33
Turn | 2.71422e40 | 8.14264e40 | 2.71422e40 | 2.71421e40
River | 6.00311e46 | 1.80091e47 | 0 | 1.20061e47
Total | 6.00311e46 | 1.80091e47 | | 1.20061e47
Unabstracted information sets
| Round | Infosets | Actions | Continuing | Terminal
Preflop | 2.96453e22 | 8.89359e22 | 2.96453e22 | 2.96453e22
Flop | 2.5759e34 | 7.72771e34 | 2.5759e34 | 2.5759e34
Turn | 6.00727e41 | 1.80218e42 | 6.00727e41 | 6.00726e41
River | 1.38909e48 | 4.16723e48 | 0 | 2.77816e48
Total | 1.38909e48 | 4.16723e48 | | 2.77816e48
| Round | States | Actions | Continuing | Terminal
Preflop | 3.63155e25 | 1.08946e26 | 3.63155e25 | 3.63155e25
Flop | 2.78455e37 | 8.35366e37 | 2.78455e37 | 2.78455e37
Turn | 6.21753e44 | 1.86526e45 | 6.21753e44 | 6.21751e44
River | 1.3752e51 | 4.12555e51 | 0 | 2.75038e51
Total | 1.3752e51 | 4.12555e51 | | 2.75038e51
Table 5: Information Set and Game State counts for the 2009 ACPC no-limit
game, $1-$2 No-Limit Texas Hold’em with $400 (200-blind) stacks.
Precise counts:
* •
Game states: 1 375 203 442 350 500 983 963 565 602 824 903 351 778 252 845 259
200
* •
Information Sets: 1 389 094 358 906 842 392 181 537 788 403 345 780 331 801
813 952
* •
Canonical Infoset-Actions: 180 091 019 297 791 288 982 204 479 657 796 281 550
065 385 037
Solving this game using a standard CFR implementation (2 double-precision
floats per canonical infoset-action) would require 2 383 484 794 528 738 021
376 773 ($2.383\times 10^{24}$) yottabytes of RAM.
### 4.3 2010-Present: $50-$100 with $20000 (200-blind) stacks
Finally, we move to the large game currently played in the ACPC. In 2010, the
ACPC competitors chose to “inflate” the game by increasing the size of the
blinds and the stack, while keeping the ratio between the blinds and the stack
the same. Since players can bet any dollar integer amount between a min-bet
and their remaining stack, this dramatically increased the size of the game:
instead of having at most 500 or 200 betting options, they now had up to
20000. The resulting game is by far the largest no-limit variant of the three.
| Round | Sequences | Actions | Continuing | Terminal
---|---|---|---|---|---
Preflop | 2.05342e95 | 6.16026e95 | 2.05342e95 | 2.05342e95
Flop | 1.01693e121 | 3.05079e121 | 1.01693e121 | 1.01693e121
Turn | 1.12027e138 | 3.36081e138 | 1.12027e138 | 1.12027e138
River | 1.13459e151 | 3.40376e151 | 0 | 2.26917e151
Total | 1.13459e151 | 3.40376e151 | | 2.26917e151
Canonical information sets
| Round | Infosets | Actions | Continuing | Terminal
Preflop | 3.47028e97 | 1.04108e98 | 3.47028e97 | 3.47028e97
Flop | 1.30858e127 | 3.92574e127 | 1.30858e127 | 1.30858e127
Turn | 6.18283e145 | 1.85485e146 | 6.18283e145 | 6.18283e145
River | 2.7551e160 | 8.26531e160 | 0 | 5.51021e160
Total | 2.7551e160 | 8.26531e160 | | 5.51021e160
Unabstracted information sets
| Round | Infosets | Actions | Continuing | Terminal
Preflop | 2.72284e98 | 8.16851e98 | 2.72284e98 | 2.72284e98
Flop | 2.64296e128 | 7.92889e128 | 2.64296e128 | 2.64296e128
Turn | 1.36842e147 | 4.10527e147 | 1.36842e147 | 1.36842e147
River | 6.37519e161 | 1.91256e162 | 0 | 1.27504e162
Total | 6.37519e161 | 1.91256e162 | | 1.27504e162
| Round | States | Actions | Continuing | Terminal
Preflop | 3.33547e101 | 1.00064e102 | 3.33547e101 | 3.33547e101
Flop | 2.85704e131 | 8.57113e131 | 2.85704e131 | 2.85704e131
Turn | 1.41632e150 | 4.24895e150 | 1.41632e150 | 1.41632e150
River | 6.31144e164 | 1.89343e165 | 0 | 1.26229e165
Total | 6.31144e164 | 1.89343e165 | | 1.26229e165
Table 6: Information Set and Game State counts for 2010-Present ACPC no-limit
game, $50-$100 No-Limit Texas Hold’em with $20000 (200-blind) stacks.
Precise counts:
* •
Game states: 631 143 875 439 997 536 762 421 500 982 349 491 523 134 755 009
560 867 161 754 754 138 543 071 866 492 234 040 692 467 854 187 671 526 019
435 023 155 654 264 055 463 548 134 458 792 123 919 483 147 215 176 128 484
600
* •
Information Sets: 637 519 066 101 007 550 690 301 496 238 244 324 920 475 418
719 042 634 144 396 116 764 136 550 474 559 674 075 887 513 367 166 011 522
983 983 431 697 050 644 965 107 911 879 207 553 424 525 286 198 175 080 441
144
* •
Canonical Infoset-Actions: 82 653 117 189 901 827 068 203 416 669 319 641 326
155 549 963 289 335 994 852 924 537 125 934 134 924 844 970 514 122 385 645
557 438 192 782 454 335 992 412 716 935 898 684 703 899 327 697 523 295 834
972 572 001
Solving this game using a standard CFR implementation (2 double-precision
floats per canonical infoset-action) would require 1 093 904 897 704 962 796
073 602 182 381 684 993 342 477 620 192 821 835 370 553 460 959 511 144 423
474 321 165 844 409 860 820 294 170 754 032 777 335 927 196 407 795 204 128
259 033 ($1.094\times 10^{138}$) yottabytes of RAM.
## 5 Discussion
While heads-up limit is sufficiently small that the suboptimality of
strategies can now be evaluated conveniently [8] and close approximations to
an optimal strategy are becoming possible [6], the situation in the no-limit
ACPC events appears bleak. Even the smallest of the three no-limit variants is
far larger than heads-up limit. This is simply a reality of the domain: the
game is intrinsically far more complex, and presents additional challenges for
state-space abstraction research. In particular, the no-limit games emphasize
the critical importance of research into action abstraction and translation
techniques, in which the game is simplified by merging clusters of similar
betting actions together. In practice, there is likely to be little benefit to
an agent’s ability to differentiate a $101 bet from a $99 bet out of a $20,000
stack, as opposed to simply using a $100 bet for both cases.
In order to make meaningful and measurable progress on abstraction and
translation techniques, it would be useful to have an analogue to our ability
in heads-up limit to evaluate a computer agent’s suboptimality in the
unabstracted game. Specifically, we would like to find or create a no-limit
game which has three properties:
* •
Unabstracted best response computations are tractable and convenient, so that
the worst-case performance of strategies with abstracted betting (and possibly
unabstracted cards) can be evaluated. This allows us to evaluate our
abstraction and translation techniques in isolation from other factors.
* •
Unabstracted equilibrium computations are tractable and convenient. This would
allow us to compute an optimal strategy for the game, and measure its in-game
performance against agents that use betting abstraction.
* •
Strategic elements similar to that of no-limit Texas hold’em. As much as
possible, we would prefer our game to have similar card elements and betting
structure to the game played in the competition. This means that when
possible, we would prefer a game with multiple rounds, a full-sized (or at
least large) deck, 5-card poker hands, and stack sizes large enough that
simple jam/fold techniques are not effective [9]. Agents that abstract the
actions in a straightforward way (such as fold-call-pot-allin, for example)
will ideally be demonstrated to be highly exploitable, so that an improvement
can be distinguished with additional research on action abstraction
techniques.
The first property is a strict requirement: for the game to be useful, we need
to be able to precisely evaluate agents in the full, unabstracted game. The
second property would be very convenient: if unabstracted equilibria can be
closely approximated, then it allows for the meaningful in-game performance
comparisons that we will be forced to use in the full-scale no-limit Texas
hold’em domain. We will likely have to be flexible on the final property. It
likely will not be possible to find a four-round game with a full deck and
large stack sizes that remains both tractable and interesting; instead, we
will have to simplify the game in some way. As motivation, we can consider the
[2-1], [2-4], and [3-1] parameterized limit hold’em games recently proposed by
Johanson et al. [7], in which the number of rounds and maximum number of bets
per round, respectively, are varied to produce smaller games. In the no-limit
domain, the equivalent parameterization is a [r-$s] game, where r is the
number of rounds and $s is the stack size.
## 6 [2-$20] $1-$2 no-limit royal hold’em: a testbed game for future abstraction research
As a final contribution of this technical report, we would like to propose one
such small no-limit game that may have the properties that we desire from a
new common research testbed game: [2-$20] $1-$2 no-limit royal hold’em. Royal
hold’em is a variant of Texas hold’em played with a 20-card deck containing
only the Ten through Ace of each of four suits. [2-$20] refers to a 2-round
game, with a $20 stack. As in Texas hold’em, preflop begins with each player
receiving two private cards, and the flop begins with three public cards. The
size of this game is presented below in Table 7.
| Round | Sequences | Actions | Continuing | Terminal
---|---|---|---|---|---
Preflop | 1188 | 3561 | 1187 | 1187
Flop | 19996 | 57616 | 0 | 38807
Total | 21184 | 61177 | | 39994
Canonical information sets
| Round | Infosets | Actions | Continuing | Terminal
Preflop | 29700 | 89025 | 29675 | 29675
Flop | 1.55169e08 | 4.471e08 | 0 | 3.01142e08
Total | 1.55199e08 | 4.47189e08 | | 3.01172e08
Unabstracted information sets
| Round | Infosets | Actions | Continuing | Terminal
Preflop | 225720 | 676590 | 225530 | 225530
Flop | 3.10018e09 | 8.93278e09 | 0 | 6.01664e09
Total | 3.10041e09 | 8.93346e09 | | 6.01686e09
| Round | States | Actions | Continuing | Terminal
Preflop | 3.45352e07 | 1.03518e08 | 3.45061e07 | 3.45061e07
Flop | 3.25519e11 | 9.37942e11 | 0 | 6.31747e11
Total | 3.25553e11 | 9.37942e11 | | 6.31781e11
Table 7: Information Set and Game State counts for [2-$20] $1-$2 no-limit
royal hold’em.
This game is small enough that CFR would only require 7 gigabytes of RAM,
making it tractable on consumer-grade computers, and a common testbed domain
that can be shared by all ACPC competitors. While it is tempting to consider
larger games that would require 256 gigabytes of RAM to solve, this would make
the game intractable to all but the largest academic research groups competing
in the ACPC. The number of game states in this game is significantly smaller
than that of heads-up limit Texas hold’em, and so real game best response
computations should be no slower and likely will be considerably faster. It
remains to be shown whether or not this game is sufficiently “interesting”, by
which we mean that simple jam-fold strategies and heavily abstracted agents
would ideally be both exploitable by a best response and lose to an
unabstracted equilibrium. If simple strategies are effective in the game, then
more complex games involving a larger stack size may have to be considered,
balanced against the exponentially growing memory requirement.
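As a rough sanity check on that memory figure, the following sketch (not taken from the report; it assumes CFR stores two double-precision values per infoset action, a cumulative regret and a cumulative average-strategy weight) estimates the table size from an infoset-action count such as those in Table 7:

```python
# Back-of-the-envelope estimate of CFR memory use from an infoset-action count.
# Assumption (not from the report): two 8-byte doubles are stored per infoset
# action -- a cumulative regret and a cumulative average-strategy weight.

BYTES_PER_DOUBLE = 8
VALUES_PER_ACTION = 2  # regret + average-strategy weight


def cfr_table_gb(infoset_actions: float) -> float:
    """Estimated CFR storage in (decimal) gigabytes."""
    return infoset_actions * VALUES_PER_ACTION * BYTES_PER_DOUBLE / 1e9


if __name__ == "__main__":
    # 4.47189e08 infoset actions, taken from the first infoset table above.
    print(f"{cfr_table_gb(4.47189e8):.1f} GB")  # roughly 7 GB, of the order quoted above
```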
## 7 Conclusion
Heads-up no-limit Texas hold’em poker has become a significant research domain
since the introduction of a no-limit poker event in the Annual Computer Poker
Competition in 2007. However, even the simple measurement of the size of the
game in terms of game states, information sets, and actions has proved
difficult, and previously could only be estimated. In this technical report,
we presented an algorithm that can efficiently and exactly compute the size of
the ACPC no-limit poker games without requiring exhaustive game tree
traversals. We presented the size of the three no-limit poker variants played
in the ACPC since 2007, and discussed the need for a small testbed domain that
would help motivate state-space abstraction research into these very large
domains.
## References
* [1] The GNU Multiple Precision Arithmetic Library. http://gmplib.org/.
* [2] N. Bard. The Annual Computer Poker Competition webpage. http://www.computerpokercompetition.org/, 2010.
* [3] D. Billings, N. Burch, A. Davidson, R. Holte, J. Schaeffer, T. Schauenberg, and D. Szafron. Approximating game-theoretic optimal strategies for full-scale poker. In International Joint Conference on Artificial Intelligence, pages 661–668, 2003.
* [4] A. Gilpin, S. Hoda, J. Peña, and T. Sandholm. Gradient-based algorithms for finding Nash equilibria in extensive form games. In Proceedings of the Eighteenth International Conference on Game Theory, 2007.
* [5] A. Gilpin, T. Sandholm, and T. B. Sørensen. A heads-up no-limit Texas hold’em poker player: Discretized betting models and automatically generated equilibrium-finding programs. In Proceedings of the Seventh International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2008), 2008.
* [6] M. Johanson, N. Bard, N. Burch, and M. Bowling. Finding optimal abstract strategies in extensive-form games. In AAAI, 2012.
* [7] M. Johanson, N. Bard, M. Lanctot, R. Gibson, and M. Bowling. Efficient Nash equilibrium approximation through Monte Carlo counterfactual regret minimization. In Eleventh International Conference on Autonomous Agents and Multiagent Systems (AAMAS). International Foundation for Autonomous Agents and Multiagent Systems, 2012. To appear.
* [8] M. Johanson, K. Waugh, M. Bowling, and M. Zinkevich. Accelerating best response calculation in large extensive games. In Proceedings of the Twenty-Second International Joint Conference on Artificial Intelligence (IJCAI), pages 258–265. AAAI Press, 2011.
* [9] P. B. Miltersen and T. B. Sørensen. A near-optimal strategy for a heads-up no-limit Texas hold’em poker tournament. In Proceedings of the Sixth International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2007), 2007.
* [10] Wikipedia. Game complexity — Wikipedia, the free encyclopedia, 2013. [Online; accessed 19-February-2013].
* [11] M. Zinkevich, M. Johanson, M. Bowling, and C. Piccione. Regret minimization in games with incomplete information. In Advances in Neural Information Processing Systems 20 (NIPS), 2008.
* [12] M. Zinkevich and M. Littman. The AAAI computer poker competition. Journal of the International Computer Games Association, 29, 2006. News item.
# A bigroupoid’s topology
David Michael Roberts [email protected] School of Mathematical
Sciences
University of Adelaide
SA, 5005
Australia
###### Abstract.
The fundamental bigroupoid of a space captures its homotopy 2-type. When the
space is semilocally 2-connected, we can lift the construction to a bigroupoid
internal to the category of spaces, such that the invariants of the
topological bigroupoid corresponding to the path components and first two
homotopy groups are discrete. In fact more is true, in that the topologised
fundamental bigroupoid is locally trivial in a way analogous to the case of
topological groupoids.
This article is based on material from the author’s 2010 PhD thesis. The author
is supported by the Australian Research Council (grant number DP120100106).
## 1\. Introduction
One of the standard examples of a non-trivial topological groupoid is the
fundamental groupoid $\Pi_{1}(X)$ of a space $X$ which is locally well-
behaved. In particular, the existence of this topology (which has to be
compatible with composition among other things) is equivalent to the existence
of a universal covering space.
Now there are higher analogues of the fundamental groupoid of a space, and
indeed the celebrated _Homotopy Hypothesis_ is that spaces and higher
groupoids amount to the same thing. The easiest higher groupoid associated to
a space is the fundamental bigroupoid $\Pi_{2}(X)$, which captures not only
the path-components, fundamental group and the second homotopy group, but also
the first Postnikov invariant (hence the action of $\pi_{1}(X)$ on
$\pi_{2}(X)$).
It is natural then to consider putting a topology on higher homotopy groupoids
in a way analogous to the case of $\Pi_{1}(X)$. Clearly some assumptions about
the local properties of the space are necessary, and indeed we find that a
2-dimensional analogue of semilocal simple connectedness is sufficient. This
condition is also necessary if one asks that $\pi_{i}(\Pi_{2}(X))$, $i=0,1,2$,
are discrete.
Extending this result further up the ladder of higher groupoids needs to take
a different approach, because weak 3-groupoids are quite complicated and after
that the explicit, algebraic definitions are no longer useful. One could
consider however other models for higher groupoids, such as operadic
definitions of weak $n$-groupoids. The analogue of the results in this paper
would be that under suitable local connectivity assumptions, the algebras for
the operads involved in the definitions would be _topological_ , i.e. in the
category of spaces rather than in the category of sets.
The paper essentially falls into two parts. The first is a review of the
topology of mapping spaces, with a particular focus on constructing bases for
the topology which are sensitive to local connectivity properties. This allows us to
translate topological properties from a space to its path and loop spaces. The
rough statement is that local connectivity goes down by 1. The second half
applies the calculations in the first half to describe the topology on
$\Pi_{2}(X)$, and show it is a topological bigroupoid. An appendix reviews the
definition of a bigroupoid.
## 2\. Mapping space topology
First, we recast some facts about the compact-open topology on the path space
$X^{I}$ into a slightly different form. Recall the definition of an open
neighbourhood basis for a topology.
###### Definition 1.
Let $S$ be a set and for each $s\in S$ let
$\\{N_{s}(\lambda)\\}_{\lambda\in\Lambda_{s}}$ be a collection of subsets of
$S$. The collection $\left\\{\\{N_{s}(\lambda)\\}_{\lambda\in\Lambda_{s}}|s\in
S\right\\}$ is said to be a _basis of open neighbourhoods_ , or _open
neighbourhood basis_ , for a topology on $S$ if
1. (1)
For all $\lambda\in\Lambda_{s}$, $s\in N_{s}(\lambda)$
2. (2)
For all pairs $\lambda,\mu\in\Lambda_{s}$, there is a $\nu\in\Lambda_{s}$ such
that $N_{s}(\nu)\subset N_{s}(\lambda)\cap N_{s}(\mu)$
3. (3)
For all $\lambda\in\Lambda_{s}$ and all $s^{\prime}\in N_{s}(\lambda)$,
$N_{s}(\lambda)=N_{s^{\prime}}(\lambda^{\prime})$ for some
$\lambda^{\prime}\in\Lambda_{s^{\prime}}$.
The sets $N_{s}(\lambda)$ are called _basic open neighbourhoods_.
There is then a topology $\mathcal{T}$ on $S$ where the open sets are defined
to be those sets that contain a basic open neighbourhood of each of their
points. In this case, we can talk about an open neighbourhood basis for the
topological space $(S,\mathcal{T})$.
###### Example 2.
Consider the space $\mathbb{R}^{n}$ (with the usual topology). The sets
$(v,C)$ where $C\ni v$ is a convex open subset of $\mathbb{R}^{n}$, form an
open neighbourhood basis.
A non-example of an open neighbourhood basis for $\mathbb{R}^{n}$ (perhaps to
the detriment of the nomenclature) is the collection of open neighbourhoods
$(v,B(v,r))$ with $B(v,r)$ an open ball of radius $r$ _centred_ at $v$: condition (3) of definition 1 fails, since a ball centred at $v$ is not centred at any other of its points.
It is sometimes very useful to know when a subset of the basic open
neighbourhoods also forms an open neighbourhood basis. First note that the
sets $\\{N_{s}(\lambda)\\}_{\lambda\in\Lambda_{s}}$ are partially ordered by
inclusions. The following lemma is an easy exercise.
###### Lemma 3.
If $\left\\{\\{N_{s}(\lambda)\\}_{\lambda\in\Lambda_{s}}|s\in S\right\\}$ is
an open neighbourhood basis for a topology $\mathcal{T}$, and
$\\{N_{s}(\lambda_{\mu})\\}\subset\\{N_{s}(\lambda)\\}_{\lambda\in\Lambda_{s}}$
is a cofinal subset for each $s\in S$
($\lambda_{\mu}\in\Lambda^{\prime}_{s}\subset\Lambda_{s}$) such that
$\left\\{\\{N_{s}(\lambda_{\mu})\\}_{\lambda_{\mu}\in\Lambda^{\prime}_{s}}|s\in
S\right\\}$ is an open neighbourhood basis for a topology
$\mathcal{T}^{\prime}$, then $\mathcal{T}=\mathcal{T}^{\prime}$.
The open neighbourhood basis
$\left\\{\\{N_{s}(\lambda_{\mu})\\}_{\lambda_{\mu}\in\Lambda^{\prime}_{s}}|s\in
S\right\\}$ is said to be _finer_ than the open neighbourhood basis
$\left\\{\\{N_{s}(\lambda)\\}_{\lambda\in\Lambda_{s}}|s\in S\right\\}$.
###### Example 4.
Following on from the previous example, there is a finer basis consisting of
the basic open neighbourhoods $(v,B)$ where $B\ni v$ is an open ball in
$\mathbb{R}^{n}$. The reader is encouraged to follow the simple exercise of
verifying the conditions of the definition and lemma for these two examples,
as they are a simple analogue of the flow of ideas in the next few definitions
and lemmata.
We also will need the definition of a topological groupoid, as a means of
concisely specifying properties of certain open covers.
###### Definition 5.
A _topological groupoid_ is a groupoid such that the sets of objects and
arrows are topological spaces and the source, target, unit, multiplication and
inversion maps are continuous. Functors between topological groupoids are
always assumed to be continuous.
We will only use two examples, both arising from a common construction. Recall
first that a space gives a topological groupoid with arrow space equal to the
object space and all maps (source, target etc.) the identity. Any map of
spaces gives a functor between the associated topological groupoids.
###### Example 6.
Let $X$ be a space and let $U=\coprod_{\alpha}U_{\alpha}$ be some collection
of open sets of $X$. There is an obvious map $j\colon U\to X$. There is a
groupoid $\check{C}(U)$ called the _Čech groupoid_ with object space $U$ and
arrow space $U\times_{X}U$. Source and target are projection on the two
factors, the unit map is the diagonal and multiplication
$U\times_{X}U\times_{X}U\to U\times_{X}U$ is projection on first and last
factors.
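Spelled out, using the identification $U\times_{X}U\cong\coprod_{\alpha,\beta}U_{\alpha}\cap U_{\beta}$ (this is only a restatement of the definition just given, with arrows written as pairs), the structure maps are
$\operatorname{Obj}\check{C}(U)=\coprod_{\alpha}U_{\alpha},\qquad\operatorname{Mor}\check{C}(U)=U\times_{X}U\cong\coprod_{\alpha,\beta}U_{\alpha}\cap U_{\beta},$
$\big((x,\alpha),(x,\beta)\big)\cdot\big((x,\beta),(x,\delta)\big)=\big((x,\alpha),(x,\delta)\big),\qquad 1_{(x,\alpha)}=\big((x,\alpha),(x,\alpha)\big).$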
Recall that a _partition_ of the unit interval $I=[0,1]$ is a finite, strictly
increasing list of elements $\\{t_{1},\ldots,t_{n}\\}$ of $I$.
###### Example 7.
A partition $\\{t_{1},\ldots,t_{n}\\}$ defines a _closed_ cover
$[0,t_{1}]\coprod\ldots\coprod[t_{n},1]\to I.$
Analogous to the Čech groupoid, we define a _partition groupoid_
$\mathfrak{p}$ with object space $[0,t_{1}]\coprod\ldots\coprod[t_{n},1]$ and
arrow space the fibred product of this space with itself (over $I$).
We point out that there are canonical functors $j\colon\check{C}(U)\to X$ and
$\mathfrak{p}\to I$. A functor $\mathfrak{p}\to\check{C}(U)$ consists of a
sequence of $n+1$ paths $[t_{i},t_{i+1}]\to U_{i}$ in open sets $U_{i}$
appearing in $U$, such that the endpoint of the $i^{th}$ path coincides with
the starting point of the $(i+1)^{st}$ path in the intersection $U_{i}\cap
U_{i+1}\subset X$. Lastly, we call a functor
$\mathfrak{p}^{\prime}\to\mathfrak{p}$ commuting with the maps to $I$ a
_refinement_.
Now let $\gamma\colon I\to X$ be a path, $\mathfrak{p}$ a partition groupoid
given by $\\{t_{1},\ldots,t_{n}\\}$ and $U=\coprod_{i=0}^{n}U_{i}$, a finite
collection of open sets of $X$ such that the indicated lift (a functor) exists
$\begin{array}[]{ccc}\mathfrak{p}&\stackrel{\widetilde{\gamma}}{\longrightarrow}&\check{C}(U)\\\ \downarrow&&\downarrow\\\ I&\stackrel{\gamma}{\longrightarrow}&X\end{array}$
with $\widetilde{\gamma}([t_{i},t_{i+1}])\subset U_{i}$ (as usual we let
$t_{0}=0$ and $t_{n+1}=1$ to ensure this makes sense). If this lift exists, we say
$\gamma[\mathfrak{p}]$ _lifts through_ $\check{C}(U)$, where $\gamma[\mathfrak{p}]$
denotes the composite $\mathfrak{p}\to I\stackrel{\gamma}{\longrightarrow}X$.
###### Lemma 8.
Given a set $N_{\gamma}(\mathfrak{p},U)\subset X^{I}$ as described above, and
any other path $\eta\in N_{\gamma}(\mathfrak{p},U)$, we have the equality
$N_{\gamma}(\mathfrak{p},U)=N_{\eta}(\mathfrak{p},U).$
Proof. If $\eta^{\prime}\in N_{\eta}(\mathfrak{p},U)$, then
$\eta^{\prime}[\mathfrak{p}]$ lifts through $\check{C}(U)$. But this is
precisely the definition of elements in $N_{\gamma}(\mathfrak{p},U)$. By
symmetry we see that these two basic open neighbourhoods are equal. $\Box$
In the following sequence of definitions of open neighbourhood bases we shall
prove after each one that the sets do indeed form an open neighbourhood basis.
###### Definition 9.
If $X$ is a space, the _compact-open topology_ on the set $C(I,X)$ of paths in
$X$ has as basic open neighbourhoods the sets
$N_{\gamma}(\mathfrak{p},U)=\\{\eta\colon I\to X\ |\ \eta[\mathfrak{p}]\text{
lifts through }\check{C}(U)\\}$
where $U$ is some finite collection of open sets such that
$\gamma[\mathfrak{p}]$ lifts through $\check{C}(U)$. The set of paths with
this topology will be denoted $X^{I}$.
Proof. (That these sets form an open neighbourhood basis)
The conditions (1) and (3) from definition 1 are manifest, the latter using
lemma 8. For the condition (2), let $N_{\gamma}(\mathfrak{p},U)$ and
$N_{\gamma}(\mathfrak{q},U^{\prime})$ be basic open neighbourhoods. Consider,
for fixed $\gamma\in C(I,X)$, the assignment
$(\mathfrak{p},U)\stackrel{{\scriptstyle\nu}}{{\mapsto}}N_{\gamma}(\mathfrak{p},U).$
If $\mathfrak{p}$ and $U$ don’t satisfy the conditions in the definition of
$N_{\gamma}(\mathfrak{p},U)$, then put $\nu(\mathfrak{p},U)=\varnothing$, the
empty subset of $C(I,X)$. This gives us a map
$\nu\colon\\{(\mathfrak{p},U)\\}\to\mathcal{P}C(I,X)$
to the power set of $C(I,X)$, which we claim is not injective (away from
$\varnothing$, where it is obviously not injective).
Let $\mathfrak{p}$ be given by $\\{t_{1},\ldots,t_{n}\\}$, and for a
refinement $\mathfrak{p}^{\prime}\to\mathfrak{p}$ let $m_{i}$ be the number of
regions of $\underline{\mathfrak{p}}^{\prime}$ that are mapped to
$[t_{i},t_{i+1}]\subset\underline{\mathfrak{p}}$. Then given
$U=\coprod_{i=0}^{n}U_{i}$ such that $\nu(\mathfrak{p},U)$ is not empty,
define
$U_{\underline{m}}=\coprod_{i=0}^{n}\coprod_{j=1}^{m_{i}}U_{i},$
whereupon the path $\gamma[\mathfrak{p}^{\prime}]$ lifts through
$\check{C}(U_{\underline{m}})$. In fact we have the equality
$N_{\gamma}(\mathfrak{p}^{\prime},U_{\underline{m}})=N_{\gamma}(\mathfrak{p},U),$
as a simple pasting argument shows, and hence $\nu$ is not injective. Thus if
we are given a common refinement $\mathfrak{pq}$ and sets
$N_{\gamma}(\mathfrak{p},U)$, $N_{\gamma}(\mathfrak{q},U^{\prime})$ we can
find $U_{\underline{m}}$ and $U^{\prime}_{\underline{l}}$ such that
$N_{\gamma}(\mathfrak{p},U)=N_{\gamma}(\mathfrak{pq},U_{\underline{m}})\qquad\mbox{and}\qquad
N_{\gamma}(\mathfrak{q},U^{\prime})=N_{\gamma}(\mathfrak{pq},U^{\prime}_{\underline{l}}).$
In this case the number of open sets making up $U_{\underline{m}}$ and
$U^{\prime}_{\underline{l}}$ is the same, so they can be paired off as
$(U_{\underline{m}})_{i}\cap(U^{\prime}_{\underline{l}})_{i}$, unlike the open
sets comprising $U$ and $U^{\prime}$.
Then, considering $N_{\gamma}(\mathfrak{p},U)\cap
N_{\gamma}(\mathfrak{q},U^{\prime})=N_{\gamma}(\mathfrak{pq},U_{\underline{m}})\cap
N_{\gamma}(\mathfrak{pq},U^{\prime}_{\underline{l}})$, define
$V_{i}=(U_{\underline{m}})_{i}\cap(U^{\prime}_{\underline{l}})_{i}$ for all
$i$, and $V=\coprod_{i}V_{i}$. There are obvious functors
$\check{C}(V)\to\check{C}(U_{\underline{m}})$ and
$\check{C}(V)\to\check{C}(U^{\prime}_{\underline{l}})$.
Since $\gamma[\mathfrak{pq}]$ lifts through both
$\check{C}(U_{\underline{m}})$ and $\check{C}(U^{\prime}_{\underline{l}})$, it
can be seen to lift through $\check{C}(V)$. We can thus consider the set
$N_{\gamma}(\mathfrak{pq},V)$. Any path $\eta$ in $X$ such that
$\eta[\mathfrak{pq}]$ lifts through $\check{C}(V)$ also lifts through
$\check{C}(U_{\underline{m}})$ and $\check{C}(U^{\prime}_{\underline{l}})$, so
$\eta\in N_{\gamma}(\mathfrak{pq},U_{\underline{m}})\cap
N_{\gamma}(\mathfrak{pq},U^{\prime}_{\underline{l}})$. Thus
$N_{\gamma}(\mathfrak{pq},V)\subset N_{\gamma}(\mathfrak{pq},U_{\underline{m}})\cap
N_{\gamma}(\mathfrak{pq},U^{\prime}_{\underline{l}})=N_{\gamma}(\mathfrak{p},U)\cap
N_{\gamma}(\mathfrak{q},U^{\prime})$
as needed. $\Box$
###### Remark 10.
Ordinarily, the compact-open topology on a mapping space is defined using a
subbasis, but $I$ is compact, and the given basic open neighbourhoods are
cofinal in those given by finite intersections of subbasic neighbourhoods, and
so define the same topology.
When the finite collection $U$ of open sets is replaced by a finite collection
of _basic_ open neighbourhoods we find that this still defines an open
neighbourhood basis for the compact-open topology.
###### Lemma 11.
The sets
$N_{\gamma}(\mathfrak{p},W)=\\{\eta\colon I\to X\ |\ \eta[\mathfrak{p}]\text{
{\rm lifts through} }\check{C}(W)\\},$
where $W$ is a finite collection of _basic_ open neighbourhoods of $X$ such
that $\gamma[\mathfrak{p}]$ lifts through $\check{C}(W)$, is a basis of open
neighbourhoods for $X^{I}$.
Proof. The proofs that this is indeed an open neighbourhood basis and that it
is a basis for the topology of $X^{I}$ will proceed in tandem. Clearly basic open
neighbourhoods of this sort are also basic open neighbourhoods of the sort
given in definition 9. As with the treatment of the first basis for compact-
open topology, conditions (1) and (3) in definition 1 are easily seen to hold,
again using lemma 8. To show that condition (2) holds, we define the set
$N_{\gamma}(\mathfrak{p},V)\subset N_{\gamma}(\mathfrak{p},W)\cap
N_{\gamma}(\mathfrak{q},W^{\prime})$ as in the previous proof. This is a basic
open neighbourhood for the compact-open topology as in definition 9. Now if we
show that any such basic open neighbourhood contains a basic open
neighbourhood $N_{\gamma}(\mathfrak{p},W^{\prime\prime})$ as defined in the
lemma, we have both shown that sets of this form comprise an open
neighbourhood basis, and that they are cofinal in basic open neighbourhoods of
the form $N_{\gamma}(\mathfrak{p},U)$.
Consider then a basic open neighbourhood $N_{\gamma}(\mathfrak{p},U)$ as in
definition 9. The open sets $U_{i}$ in the collection $U$ are a union of basic
open neighbourhoods, $U_{i}=\bigcup_{\alpha\in J_{i}}W_{i}^{\alpha}$. Pull the
cover
$\coprod_{i=0}^{n}\coprod_{\alpha\in J_{i}}W_{i}^{\alpha}\to X$
back along $\gamma$ and choose a finite subcover
$\coprod_{i=0}^{n}\coprod_{\alpha=1}^{k_{i}}\gamma^{*}W_{i}^{\alpha}$. Denote
by $W=\coprod_{i,\alpha}W_{i}^{\alpha}$ the corresponding collection of
$k_{0}+k_{1}+\ldots+k_{n}$ basic open neighbourhoods of $X$. This clearly
covers the image of $\gamma$. Choose a refinement
$\mathfrak{p}^{\prime}\to\mathfrak{p}$ such that
$\gamma[\mathfrak{p}^{\prime}]$ lifts through $\check{C}(W)$.
If $\eta\in N_{\gamma}(\mathfrak{p}^{\prime},W)$,
$\eta[\mathfrak{p}^{\prime}]$ lifts through $\check{C}(W)$ and hence through
$\check{C}(U)$. To show that $\eta\in N_{\gamma}(\mathfrak{p},U)$ we just need
to show that $\eta[\mathfrak{p}^{\prime}]\to\check{C}(U)$ factors through
$\mathfrak{p}$:
$\begin{array}[]{ccc}\mathfrak{p}^{\prime}&\longrightarrow&\check{C}(W)\\\ \downarrow&\searrow&\downarrow{\scriptstyle(\ast)}\\\ \mathfrak{p}&&\check{C}(U)\\\ \downarrow&&\downarrow\\\ I&\stackrel{\eta}{\longrightarrow}&X\end{array}$
Let $(t_{i}^{-},t_{i}^{+})$ be an arrow in $\mathfrak{p}^{\prime}$ which maps
to an identity arrow in $\mathfrak{p}$. We need to show that
$(t_{i}^{-},t_{i}^{+})$ is mapped to an identity arrow in $\check{C}(U)$,
which would imply the diagonal arrow in the above diagram factors through
$\mathfrak{p}$.
Let $\check{C}(W)_{i}\to U_{i}$ be the pullback of the map $(\ast)$ along
$\mbox{disc}(U_{i})\to\check{C}(U)$. If $[t_{i},t_{i+1}]$ is a region of
$\mathfrak{p}$ and
$\mathfrak{p}^{\prime}(i)=[t_{i},t_{i+1}]\times_{\mathfrak{p}}\mathfrak{p}^{\prime}$,
then $\mathfrak{p}^{\prime}(i)\to\check{C}(U)$ lands in $\mbox{disc}(U_{i})$
and so descends to $[t_{i},t_{i+1}]$. Repeating this argument for each $i$
gives the required result. We then apply lemma 3 and so the sets
$N_{\gamma}(\mathfrak{p},W)$ form an open neighbourhood basis for the compact-
open topology. $\Box$
We shall define special open sets $N^{*}_{\gamma}(\mathfrak{p},W)$ which are
just basic open neighbourhoods $N_{\gamma}(\mathfrak{p},W)$ where
$W=\coprod_{i=0}^{2n}W_{i}$ such that $W_{2i+1}\subset W_{2i}\cap W_{2i+2}$
for $i=0,\ldots,n-1$.
###### Lemma 12.
For every basic open neighbourhood $N_{\gamma}(\mathfrak{p},W)$ there is an
open neighbourhood $N^{*}_{\gamma}(\mathfrak{p}^{\prime},W^{*})\subset
N_{\gamma}(\mathfrak{p},W)$.
Proof. If $W=\coprod_{i=0}^{n}W_{i}$ and $\mathfrak{p}$ is given by
$\\{t_{1},\ldots,t_{n}\\}$, define $W^{*}_{2i}=W_{i}$ for each $i=0,\ldots,n$,
and for $i=0,\ldots,n-1$ choose a basic open neighbourhood $W^{*}_{2i+1}\subset W_{i}\cap
W_{i+1}=W^{*}_{2i}\cap W^{*}_{2i+2}$ of $\gamma(t_{i+1})$. Let
$W^{*}:=\coprod_{i=0}^{2n}W^{*}_{i}$. Then for $i=1,\ldots,n$, choose an
$\varepsilon_{i}>0$ such that $\gamma([t_{i},t_{i}+\varepsilon_{i}])\subset
W^{*}_{2i-1}$ and $t_{i}+\varepsilon_{i}<t_{i+1}$.
Let $\mathfrak{p}^{\prime}$ be given by
$\\{t_{1},t_{1}+\varepsilon_{1},\ldots,t_{n},t_{n}+\varepsilon_{n}\\}$. Then
$\gamma[\mathfrak{p}^{\prime}]$ lifts through $\check{C}(W^{*})$, so we can
consider the basic open neighbourhood
$N^{*}_{\gamma}(\mathfrak{p}^{\prime},W^{*})$. Applying the argument from the
end of the proof of lemma 11 we can see that any element $\eta$ of
$N^{*}_{\gamma}(\mathfrak{p}^{\prime},W^{*})$ is such that
$\eta[\mathfrak{p}^{\prime}]$, which lifts through $\check{C}(W^{*})$ and
hence $\check{C}(W)$, descends to a functor
$\eta[\mathfrak{p}]\to\check{C}(W)$, and so is an element of
$N_{\gamma}(\mathfrak{p},W)$. $\Box$
Given a pair of basic open neighbourhoods $W_{i},W_{i+1}$ as per the
definition of $N^{*}_{\gamma}(\mathfrak{p},W)$, we know that either $W_{i}\cap
W_{i+1}=W_{i}$ or $W_{i}\cap W_{i+1}=W_{i+1}$. Thus each intersection
$W_{i}\cap W_{i+1}$ for $i=0,\ldots,2n-1$ is a basic open neighbourhood.
###### Proposition 13.
The open sets $N^{*}_{\gamma}(\mathfrak{p},W)$ form an open neighbourhood
basis for the compact-open topology on $X^{I}$.
Proof. As in the previous two proofs, the sets
$N^{*}_{\gamma}(\mathfrak{p},W)$ easily satisfy conditions (1) and (3) of
definition 1. The intersection $N^{*}_{\gamma}(\mathfrak{p},W)\cap
N^{*}_{\gamma}(\mathfrak{p}^{\prime},W^{\prime})$ contains an open set of the
form $N_{\gamma}(\mathfrak{p},U)$, and by lemma 11 it contains an open set
$N_{\gamma}(\mathfrak{p},W^{\prime\prime})$. Using lemma 12, there is a subset
of $N_{\gamma}(\mathfrak{p},W^{\prime\prime})$ of the form
$N^{*}_{\gamma}(\mathfrak{p},W^{\prime\prime*})$. Thus we see that the given
open sets satisfy condition (2) of definition 1, and are cofinal in the
basic open neighbourhoods from lemma 11. Hence they form an open neighbourhood
basis for $X^{I}$. $\Box$
In light of this result we can use any of these open neighbourhood bases when
dealing with the compact-open topology. We can then transfer the topological
properties of $X$ described in terms of basic open neighbourhoods to the
topological properties of $X^{I}$, and various subspaces, described in terms
of basic open neighbourhoods.
###### Definition 14.
Let $n$ be a positive integer. A space $X$ is called _semilocally
$n$-connected_ if it has a basis of $(n-1)$-connected open neighbourhoods
$N_{\lambda}$ such that $\pi_{n}(N_{\lambda})\to\pi_{n}(X)$ is the trivial map
(for any choice of basepoint). We say a space is _semilocally 0-connected_ if
for any basic neighbourhood $N_{\lambda}$ and any two points $x,y\in
N_{\lambda}$, there is a path from $x$ to $y$ in $X$.
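For later reference, the two cases used below unwind as follows (this is only a restatement of the definition):
$n=1$: a basis of path-connected open sets $N_{\lambda}$ with $\pi_{1}(N_{\lambda})\to\pi_{1}(X)$ trivial;
$n=2$: a basis of $1$-connected open sets $N_{\lambda}$ with $\pi_{2}(N_{\lambda})\to\pi_{2}(X)$ trivial.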
Let $P_{x_{0},x_{1}}X$ be the fibre of $(ev_{0},ev_{1})\colon X^{I}\to X\times
X$ over $(x_{0},x_{1})$. Notice that the based loop space $\Omega_{x}X$ at a
point $x$ is $P_{x,x}X$. We shall denote by $P_{x}X$ the space of paths based
at $x$, i.e. the fibre of $ev_{0}\colon X^{I}\to X$ at $x$. The space of free
loops $LX=X^{S^{1}}$ (given the compact-open topology) can be identified with
the inverse image $(ev_{0},ev_{1})^{-1}(X)$ of the diagonal $X\hookrightarrow
X\times X$. If there is no confusion, we will usually denote the based loop
space simply by $\Omega X$.
The following theorem is more general than we need, but is of independent
interest. Although a more general theorem is stated in [Wad55], the proof is
only implied from the slightly weaker case that _is_ proved in _loc. cit._ ,
namely when $X$ is locally $n$-connected. That proof is intended for an
analogous result for the local properties of the mapping space $X^{P}$ for $P$
any finite polyhedron, and for various subspaces thereof. As a result, the
proof has to deal with the fact $P$ is not one-dimensional, and so is
necessarily quite complicated.
###### Theorem 15.
If a space $X$ is semilocally $n$-connected, $n\geq 1$, the spaces $X^{I}$,
$P_{x}X$, $P_{x,y}X$ and $\Omega_{x}X=P_{x,x}X$ are all semilocally
$(n-1)$-connected.
Proof. First of all, assume that $X$ is semilocally 1-connected, let
$\gamma\in X^{I}$ and $N^{*}_{\gamma}(\mathfrak{p},W)$ be a basic neighbourhood
where $\mathfrak{p}$ is given by $\\{t_{1},\ldots,t_{m}\\}$. Temporarily
define $t_{0}:=0$ and $t_{m+1}:=1$. Then given two points
$\gamma_{0},\gamma_{1}\in N^{*}_{\gamma}(\mathfrak{p},W)$, we know that for
each $i=1,\ldots,m$, $\gamma_{0}(t_{i}),\gamma_{1}(t_{i})\in W_{i-1}\cap
$W_{i}$, which is a basic open neighbourhood of $X$. Let $\eta_{i}$ be a path
in $W_{i-1}\cap W_{i}$ from $\gamma_{0}(t_{i})$ to $\gamma_{1}(t_{i})$ for
$i=1,\ldots,m$, let $\eta_{0}$ be a path from $\gamma_{0}(0)$ to
$\gamma_{1}(0)$ in $W_{0}$, and let $\eta_{m+1}$ be a path from $\gamma_{0}(1)$ to
$\gamma_{1}(1)$ in $W_{m}$. The sequence of paths
(1)
$\overline{\gamma_{1}\big{|}_{[t_{i},t_{i+1}]}}\cdot\eta_{i+1}\cdot\gamma_{0}\big{|}_{[t_{i},t_{i+1}]}\cdot\overline{\eta_{i}}$
then defines a loop in $W_{i}$ for $i=0,\ldots,m$. As $X$ is semilocally
1-connected, there is a surface in $X$ of which this loop is the boundary.
These surfaces patch together to form a free homotopy in $X$ between the
_paths_ $\gamma_{0}$ and $\gamma_{1}$. By adjointness, this defines a path in
$X^{I}$ between the _points_ $\gamma_{0}$ and $\gamma_{1}$. Thus $X^{I}$ is
semilocally 0-connected.
If we consider the subspace $P_{x}X$ (resp. $P_{x,y}X$), then we take the path
$\eta_{0}$ (resp. the paths $\eta_{0}$ and $\eta_{m+1}$) to be constant. This
implies that the path in $X^{I}$ defined in the previous paragraph lands in
$P_{x}X$ (resp. $P_{x,y}X$), and so those subspaces are likewise semilocally
0-connected.
Now assume that $X$ is semilocally $n$-connected with $n\geq 2$ and that
$N^{*}_{\gamma}(\mathfrak{p},W)$ is a basic open neighbourhood of the point
$\gamma$. Consider the $k$-sphere $S^{k}$ $(k\geq 0)$ to be pointed by the
‘north pole’ $N$. Let $f\colon S^{k}\to X^{I}$ be a map in
$N^{*}_{\gamma}(\mathfrak{p},W)$ such that at $f(N)=\gamma$. By adjointness,
this determines a map $\widetilde{f}\colon S^{k}\times I\to X$ such that the
restriction $\widetilde{f}\big{|}_{S^{k}\times[t_{i},t_{i+1}]}$ factors
through $W_{i}$ for $i=0,\ldots,m$. Note that if we further restrict this map
to $\widetilde{f}\big{|}_{S^{k}\times\\{t_{i}\\}}$ then for $i=1,\ldots,m$ it
factors through $W_{i-1}\cap W_{i}$, which is a basic open neighbourhood by
the assumption on $W$. We also have maps
$\widetilde{f}\big{|}_{S^{k}\times\\{0\\}}$, landing in $W_{0}$, and
$\widetilde{f}\big{|}_{S^{k}\times\\{1\\}}$, landing in $W_{m}$. The
assumption on $X$ implies that the basic open neighbourhoods are
$(n-1)$-connected, so that for $k=0,\ldots,n-1$ there are maps
$\eta_{i}\colon B^{k+1}\to W_{i-1}\cap W_{i}$ for $i=1,\ldots,m$, together with
$\eta_{0}\colon B^{k+1}\to W_{0}$ and $\eta_{m+1}\colon B^{k+1}\to W_{m}$,
filling these spheres.
Now for $k=0,\ldots,n-1$ and $i=0,\ldots,m$ the maps $\eta_{i}$ define,
together with the cylinders
$\widetilde{f}\big{|}_{S^{k}\times[t_{i},t_{i+1}]}$, maps $\xi_{i}$ from a
(space homeomorphic to a) $(k+1)$-sphere to $W_{i}$. As $W_{i}$ is
$(n-1)$-connected, for $k=0,\ldots,n-2$ and $i=0,\ldots,m$ there is a map
$\upsilon_{i}\colon B^{k+1}\times[t_{i},t_{i+1}]\to W_{i}$ filling the sphere.
The $m+1$ maps $\upsilon_{i}$ paste together to form a homotopy $B^{k+1}\times
I\to X$ and a map $B^{k+1}\to N^{*}_{\gamma}(\mathfrak{p},W)$ filling the map
from the sphere we started with. Thus the basic open neighbourhood
$N^{*}_{\gamma}(\mathfrak{p},W)$ is $(n-2)$-connected.
If we now take $k=n-1$, then we can find maps $\upsilon_{i}\colon
B^{n}\times[t_{i},t_{i+1}]\to X$ filling the sphere. These paste together to
give a homotopy $B^{n}\times I\to X$ and so a map $B^{n}\to X^{I}$ filling the
sphere $S^{n-1}\to N^{*}_{\gamma}(\mathfrak{p},W)\hookrightarrow X^{I}$. This
implies that the map
$\pi_{n-1}(N^{*}_{\gamma}(\mathfrak{p},W),\gamma)\to\pi_{n-1}(X^{I})$ is
trivial, and so $X^{I}$ is semilocally $(n-1)$-connected.
If we again consider the subspaces $P_{x}X$ and $P_{x,y}X$, we can choose the
maps $\eta_{0}$ and $\eta_{m+1}$ to be constant (where appropriate) and so this
ensures the maps $B^{k+1}\to X^{I}$ constructed above factor through the
relevant subspace. $\Box$
As a corollary we get a much simpler proof of another special case of the
theorem from [Wad55], namely for the mapping space $X^{S^{m}}$, or more
specifically the subspace of based maps.
###### Corollary 16.
If $X$ is semilocally $n$-connected and $m\leq n$, the space
$(X,x)^{(S^{m},N)}$ of pointed maps is semilocally $(n-m)$-connected.
Proof. This is an easy induction on $m$ using theorem 15, using
$X^{S^{m}}=\Omega^{m}X$, the $m$-fold based loop space. $\Box$
We can also discuss the local homotopical properties of the space $LX$, as
long as we make one further refinement to the neighbourhood basis it inherits
from $X^{I}$. Let $N^{o}_{\gamma}(\mathfrak{p},W)$ denote an open
neighbourhood $N_{\gamma}(\mathfrak{p},W)$ of $LX$ with
$W=\coprod_{i=0}^{2n+1}$ where for $i=1,\ldots,n-1$, we have $W_{2i+1}\subset
W_{2i}\cap W_{2i+2}$ and $W_{2n+1}\subset W_{2n}\cap W_{0}$. The proofs of the
following lemma and proposition are almost identical to that of lemma 12 and
proposition 13, so we omit them.
###### Lemma 17.
For every basic open neighbourhood $N_{\gamma}(\mathfrak{p},W)$ of $LX$, there
is a basic open neighbourhood of the form
$N^{o}_{\gamma}(\mathfrak{p}^{\prime},W^{\prime})$ contained in
$N_{\gamma}(\mathfrak{p},W)$.
###### Proposition 18.
The sets $N^{o}_{\gamma}(\mathfrak{p},W)$ form an open neighbourhood basis for
the compact-open topology on $LX$.
We then have the following analogue of theorem 15.
###### Theorem 19.
If the space $X$ is semilocally $n$-connected, the space $LX$ is semilocally
$(n-1)$-connected.
Proof. Assume that the point $\gamma\in LX$ has a basic neighbourhood
$N^{o}_{\gamma}(\mathfrak{p},W)$ where $W=\coprod_{i=0}^{m}W_{i}$. The proof
proceeds along the same lines as that of theorem 15, except we let
$\eta_{0}=\eta_{m+1}\colon B^{k+1}\to W_{m}\cap W_{0}$. This is enough to
ensure that the rest of the proof goes through and that for $k=0,\ldots,n-2$
we have maps $B^{k+1}\to N^{o}_{\gamma}(\mathfrak{p},W)$ expressing the
$k$-connectedness of $N^{o}_{\gamma}(\mathfrak{p},W)$, and maps $B^{n}\to LX$
that give us the result that
$\pi_{n-1}(N^{o}_{\gamma}(\mathfrak{p},W))\to\pi_{n-1}(LX)$ is the trivial
homomorphism. $\Box$
###### Corollary 20.
If $X$ is semilocally 2-connected, the components of $LX$ and $\Omega X$ admit
1-connected covering spaces.
In particular, the space $L_{0}X$ of null-homotopic free loops – loops that
bound a disk in $X$ – admits a covering space $(L_{0}X)^{(1)}$ with
1-connected components, which is connected if $X$ is connected.
We finish this section by showing that several maps involving path spaces,
induced by operations on paths, are indeed continuous.
###### Lemma 21.
The concatenation map $\cdot\colon X^{I}\times_{ev_{0},X,ev_{1}}X^{I}\to
X^{I}$ is continuous.
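For the rescalings in the proof it may help to record the concatenation convention in force (the right-hand path is traversed first, as stated explicitly for $\Pi_{1}(X)$ in the next section); a standard formula realising it, when $\gamma_{1}(1)=\gamma_{2}(0)$, is
$(\gamma_{2}\cdot\gamma_{1})(t)=\gamma_{1}(2t)\ \text{ for }0\leq t\leq\tfrac{1}{2},\qquad(\gamma_{2}\cdot\gamma_{1})(t)=\gamma_{2}(2t-1)\ \text{ for }\tfrac{1}{2}\leq t\leq 1.$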
Proof. Let $\gamma_{1},\gamma_{2}\in X^{I}$ be paths in $X$, and
$N:=N_{\gamma_{2}\cdot\gamma_{1}}(\mathfrak{p},W)$ a basic open neighbourhood
as given by lemma 11. We can assume that $\mathfrak{p}$ is given by
$\\{t_{1},\ldots,t_{n},1/2,t^{\prime}_{1},\ldots,t^{\prime}_{m}\\},$
else we can refine $\mathfrak{p}$ and alter $W$ so that it takes this form, without
changing $N$ (as specified in the proof following definition 9). The
collection $W$ of basic open neighbourhoods then looks like
$W=\coprod_{i=0}^{n}W^{1}_{i}\coprod_{j=0}^{m}W^{2}_{j}=:W^{1}\coprod W^{2}.$
Define the refinement $\mathfrak{p}^{\prime}\to\mathfrak{p}$ by adding an
additional two points $\frac{1}{2}-\epsilon,\ \frac{1}{2}+\epsilon$ to the
specification of $\mathfrak{p}$, where $\epsilon$ is small enough that the
image of $[\frac{1}{2}-\epsilon,\frac{1}{2}+\epsilon]$ under
$\gamma_{2}\cdot\gamma_{1}$ lands in a basic open neighbourhood $W_{n+1}\subset
W^{1}_{n}\cap W^{2}_{0}$. Then defining
$W^{\prime}=\coprod_{i=0}^{n}W^{1}_{i}\coprod W_{n+1}\coprod
W_{n+1}\coprod_{j=0}^{m}W^{2}_{j},$
we see that $(\gamma_{2}\cdot\gamma_{1})[\mathfrak{p}^{\prime}]$ lifts through
$\check{C}(W^{\prime})$. There is then a subset
$N^{\prime}:=N_{\gamma_{2}\cdot\gamma_{1}}(\mathfrak{p}^{\prime},W^{\prime})\subset
N_{\gamma_{2}\cdot\gamma_{1}}(\mathfrak{p},W).$
We now set $N_{1}=N_{\gamma_{1}}(\mathfrak{p}_{1},W^{1}\coprod W_{n+1})$,
$N_{2}=N_{\gamma_{2}}(\mathfrak{p}_{2},W_{n+1}\coprod W^{2})$ where
$\mathfrak{p}_{1}$ is given by $\\{2t_{1},\ldots,2t_{n},1-2\epsilon\\}$ and
$\mathfrak{p}_{2}$ is given by
$\\{2\epsilon,2t^{\prime}_{1}-1,\ldots,2t^{\prime}_{m}-1\\}$. Thus
$\mathfrak{p}^{\prime}$ is the concatenation
$\mathfrak{p}_{1}\vee_{t}\mathfrak{p}_{2}$.
The fibred product $N_{2}\times_{X}N_{1}$ consists of pairs of paths
$\eta_{1},\eta_{2}$ such that $\eta_{1}[\mathfrak{p}_{1}]$ and $\eta_{2}[\mathfrak{p}_{2}]$ lift through
$\check{C}(W^{1}\coprod W_{n+1})$ and $\check{C}(W_{n+1}\coprod W^{2})$ respectively, whose
endpoints match; in particular,
$\eta_{1}(1)=\eta_{2}(0)=(\eta_{2}\cdot\eta_{1})(\frac{1}{2})\in W_{n+1}\subset W^{1}_{n}\cap
W^{2}_{0}$. Thus $(\eta_{2}\cdot\eta_{1})[\mathfrak{p}]$ lifts through $\check{C}(W)$, and
so the image of $N_{2}\times_{X}N_{1}$ under concatenation is contained in
$N$, so concatenation is continuous. $\Box$
###### Lemma 22.
The map $X\to X^{I}$ sending a point $x$ to the constant path $\underline{x}$
is continuous.
Proof. Let $N=N_{\underline{x}}(\mathfrak{p},W)$ be a basic open
neighbourhood. The collection $W$ is a finite set of basic neighbourhoods of
$x$, so take the intersection $W_{0}\cap\ldots\cap W_{n}$ and let $W^{\prime}$ be
a basic open neighbourhood contained in that intersection. For all
$x^{\prime}\in W^{\prime}$, $\underline{x^{\prime}}[\mathfrak{p}]$ clearly
lifts through $W$, so the image of $W^{\prime}$ under $X\to X^{I}$ is
contained in $N$. $\Box$
The following easy lemma is left as a final exercise for the reader.
###### Lemma 23.
The ‘reverse’ map $X^{I}\to X^{I}$ sending a path $\gamma$ to the same path
traversed in the opposite direction is continuous.
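Explicitly, the reverse of a path is $\overline{\gamma}(t)=\gamma(1-t)$; one way to do the exercise is to check that this map carries $N_{\gamma}(\mathfrak{p},W)$ onto $N_{\overline{\gamma}}(\overline{\mathfrak{p}},\overline{W})$, where $\overline{\mathfrak{p}}$ is the reflected partition $\\{1-t_{n},\ldots,1-t_{1}\\}$ and $\overline{W}$ lists the same basic open neighbourhoods in reverse order.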
## 3\. The topological fundamental bigroupoid of a space
One can put a topology on the fundamental groupoid of a space $X$ if it is
semilocally 1-connected. In this section we shall generalise this to the
fundamental bigroupoid defined in [Ste00, HKK01]. It requires local conditions
on the free loop space $LX$, which as we saw in the previous section, can be
phrased in terms of the topology of $X$. We shall also describe the conditions
algebraically using the fundamental bigroupoid.
We shall first treat the case of the fundamental groupoid: though it is
long known, we shall need it again in the second part of this section. Assume
the space $X$ is semilocally 1-connected. Since the set of objects of
$\Pi_{1}(X)$ is just the set underlying $X$, we just give it the topology from
$X$. Now recall that the set $\Pi_{1}(X)_{1}$ is the set of paths $C(I,X)$ in
$X$ quotiented by the equivalence relation ‘homotopic rel endpoints’. Let
$x,y$ be points in $X$, and without loss of generality we can assume they are
in the same path-component. Let $W_{x}$ and $W_{y}$ be basic open
neighbourhoods of $x$ and $y$ respectively. Notice that they are path-connected
by assumption, and the homomorphisms $\pi_{1}(W_{x},x)\to\pi_{1}(X,x)$,
$\pi_{1}(W_{y},y)\to\pi_{1}(X,y)$ are trivial.
For $[\gamma]$ a homotopy class of paths from $x$ to $y$, we now describe an
open neighbourhood basis for $\Pi_{1}(X)_{1}$. Define the sets
$N_{[\gamma]}(W_{x},W_{y})=\\{[\eta_{x}\cdot\gamma\cdot\eta_{y}]\in\Pi_{1}(X)_{1}|\eta_{?}\colon
I\to W_{?},\ ?=x,y,\ \eta_{x}(1)=\gamma(0),\ \eta_{y}(0)=\gamma(1)\\},$
where the operation $-\cdot-$ is the usual concatenation of paths, with the
first path on the right and the second on the left. Note that these are
homotopy classes in $X$, as opposed to taking homotopies of paths of the form
$\eta_{x}\cdot\gamma\cdot\eta_{y}$.
###### Proposition 24.
The sets $N_{[\gamma]}(W_{x},W_{y})$ form an open neighbourhood basis for
$\Pi_{1}(X)_{1}$.
Proof. We have $[\gamma]\in N_{[\gamma]}(W_{x},W_{y})$ by definition, so condition
(1) from definition 1 holds. If $[\omega]\in N_{[\gamma]}(W_{x},W_{y})$, then
for all $[\omega^{\prime}]\in N_{[\gamma]}(W_{x},W_{y})$ we can write
$\displaystyle[\omega^{\prime}]=$
$\displaystyle[\eta^{\prime}_{x}\cdot\gamma\cdot\eta^{\prime}_{y}]$
$\displaystyle=$
$\displaystyle[\eta^{\prime}_{x}\cdot\overline{\eta_{x}}\cdot\eta_{x}\cdot\gamma\cdot\eta_{y}\cdot\overline{\eta_{y}}\cdot\eta^{\prime}_{y}]$
$\displaystyle=$
$\displaystyle[\eta^{\prime}_{x}\cdot\omega\cdot\eta^{\prime}_{y}]$
where $[\omega]=[\eta_{x}\cdot\gamma\cdot\eta_{y}]$. Thus
$N_{[\gamma]}(W_{x},W_{y})\subset N_{[\omega]}(W_{x},W_{y})$. Since
$[\gamma]=[\overline{\eta_{x}}\cdot\omega\cdot\overline{\eta_{y}}]\in
N_{[\omega]}(W_{x},W_{y})$ we can use symmetry to show that
$N_{[\omega]}(W_{x},W_{y})\subset N_{[\gamma]}(W_{x},W_{y})$, and condition
(3) in definition 1 is satisfied.
To show that condition (2) is satisfied, let $N_{[\gamma]}(W_{x},W_{y})$,
$N_{[\gamma]}(W^{\prime}_{x},W^{\prime}_{y})$ be a pair of putative basic
neighbourhoods of $[\gamma]$. Let $W^{\prime\prime}_{x}\subset W_{x}\cap
W^{\prime}_{x}$ and $W^{\prime\prime}_{y}\subset W_{y}\cap W^{\prime}_{y}$ be
basic open neighbourhoods of $x$ and $y$. The set
$N_{[\gamma]}(W^{\prime\prime}_{x},W^{\prime\prime}_{y})$ is then contained in
$N_{[\gamma]}(W_{x},W_{y})\cap N_{[\gamma]}(W^{\prime}_{x},W^{\prime}_{y})$.
$\Box$
Although we now have topologies on the sets $\Pi_{1}(X)_{0}$ and
$\Pi_{1}(X)_{1}$, we do not know that they form a topological groupoid –
composition and other structure maps need to be checked for continuity.
###### Proposition 25.
With the topology as described above, $\Pi_{1}(X)$ is a topological groupoid
for $X$ a semilocally 1-connected space.
Proof. We need to check the continuity of four maps, namely
$(s,t)\colon\Pi_{1}(X)_{1}\to X\times X,\qquad\overline{(-)}\colon\Pi_{1}(X)_{1}\to\Pi_{1}(X)_{1},$
$e\colon X\to\Pi_{1}(X)_{1},\qquad m\colon\Pi_{1}(X)_{1}\times_{X}\Pi_{1}(X)_{1}\to\Pi_{1}(X)_{1}.$
We shall use the following criterion to check for continuity:
* •
A map $f\colon X\to Y$ between topological spaces is continuous if and only if
for every $x\in X$ and basic open neighbourhood $N_{Y}$ of $f(x)$, there is a
basic open neighbourhood $N_{X}$ of $x$ such that $N_{X}\subset f^{-1}(N_{Y})$
(equivalently, $f(N_{X})\subset N_{Y}$).
Let $[\gamma]$ be a point in $\Pi_{1}(X)_{1}$, and set
$(x,y)=(s[\gamma],t[\gamma])$. The inverse image
$(s,t)^{-1}(W_{x}\times W_{y})$
contains the basic open neighbourhood $N_{[\gamma]}(W_{x},W_{y})$, so $(s,t)$
is continuous.
Given the basic open neighbourhood $N_{[\gamma]}(W_{x},W_{y})$, it is simple
to check that
$\overline{\left(N_{[\gamma]}(W_{x},W_{y})\right)}=N_{[\overline{\gamma}]}(W_{y},W_{x}),$
so $\overline{(-)}$ is continuous.
For $x\in X$, consider the basic open neighbourhood
$N_{[\mathrm{id}_{x}]}(W_{x},W^{\prime}_{x})$. The inverse image
$e^{-1}(N_{[\mathrm{id}_{x}]}(W_{x},W^{\prime}_{x}))$ is the intersection
$W_{x}\cap W^{\prime}_{x}$. There is a basic open neighbourhood
$W^{\prime\prime}_{x}\subset W_{x}\cap W^{\prime}_{x}$, so $e$ is continuous.
It now only remains to show that multiplication in $\Pi_{1}(X)$ is continuous.
For composable arrows $[\gamma_{1}]\colon x\to y$ and $[\gamma_{2}]\colon y\to
z$, let $N_{[\gamma_{2}\cdot\gamma_{1}]}(W_{x},W_{z})$ be a basic open
neighbourhood. If $W_{y}$ is a basic open neighbourhood of $y$ the set
$N_{[\gamma_{2}]}(W_{y},W_{z})\times_{X}N_{[\gamma_{1}]}(W_{x},W_{y})$ is a
basic open neighbourhood of $([\gamma_{2}],[\gamma_{1}])$ in
$\Pi_{1}(X)_{1}\times_{X}\Pi_{1}(X)_{1}$. Arrows in the image of this set
under $m$ look like
$[\lambda_{1}\cdot\gamma_{2}\cdot\lambda_{0}\cdot\eta_{1}\cdot\gamma_{1}\cdot\eta_{0}],$
where $\lambda_{1}$ is a path in $W_{z}$, $\lambda_{0}$ and $\eta_{1}$ are
paths in $W_{y}$ with $\lambda_{0}(0)=\eta_{1}(1)$ and
$\lambda_{0}(1)=\eta_{1}(0)=y$, and $\eta_{0}$ is a path in $W_{x}$. Now the
composite $\lambda_{0}\cdot\eta_{1}$ is a loop in $W_{y}$ at $y$. The arrow
$[\lambda_{0}\cdot\eta_{1}]$ is equal to $id_{y}$ in $\Pi_{1}(X)$ by the
assumption that $X$ is semilocally 1-connected. Thus
$[\lambda_{1}\cdot\gamma_{2}\cdot\lambda_{0}\cdot\eta_{1}\cdot\gamma_{1}\cdot\eta_{0}]=[\lambda_{1}\cdot\gamma_{2}\cdot\gamma_{1}\cdot\eta_{0}]$
and we have an inclusion
$m\left(N_{[\gamma_{2}]}(W_{y},W_{z})\times_{X}N_{[\gamma_{1}]}(W_{x},W_{y})\right)\subset
N_{[\gamma_{2}\cdot\gamma_{1}]}(W_{x},W_{z}).$
This implies multiplication is continuous. $\Box$
Topological groupoids have a notion of equivalence which is weaker than the
usual internal equivalence in the sense of having a pair of functors forming
an equivalence. We will not go into this, but will point out that sometimes a
topological groupoid can be weakly equivalent to a topological groupoid
equipped with the _discrete_ topology. We will give a definition which can be
shown to be equivalent to the usual definition.
###### Definition 26.
A _weakly discrete groupoid_ $X$ is a topological groupoid such that each hom-
space $X(x,y)=(s,t)^{-1}(x,y)$ is discrete, and which is _locally trivial_ : for each
object $p\in X_{0}$ there is an open neighbourhood $U_{p}\ni p$ in $X_{0}$ and
a lift
$\begin{array}[]{ccc}&&X_{1}\\\ &\nearrow&\downarrow{\scriptstyle(s,t)}\\\ \\{p\\}\times U_{p}&\longrightarrow&X_{0}\times X_{0}\end{array}$
as indicated.
###### Proposition 27.
For a semilocally 1-connected space $X$, the topological groupoid $\Pi_{1}(X)$
is weakly discrete.
Proof. We will first show that $\Pi_{1}(X)_{1}\to X\times X$ is a covering space.
Being a covering space implies that the hom-spaces are discrete, so it then only
remains to show local triviality, which follows from the fact that path components
of $X$ are open. Let $X=\coprod_{\alpha}X_{\alpha}$ with each $X_{\alpha}$ a
connected (path-)component. Clearly the fibres over $X_{\alpha}\times
X_{\beta}$ for $\alpha\neq\beta$ are empty, so we can just consider the
restriction of $\Pi_{1}(X)_{1}$ to each $X_{\alpha}\times X_{\alpha}$, from
which it follows we can assume $X$ connected. It is also immediate that the
image of $(s,t)$ is open.
Let $(x,y)\in X^{2}$ and $W_{x}\times W_{y}$ be a basic open neighbourhood of
$(x,y)$; this means that $W_{x},W_{y}$ are path-connected and the inclusion
maps induce zero maps on fundamental groups. Let $N_{[\gamma]}(W_{x},W_{y})$
be a basic neighbourhood. The restriction
$(s,t)\big{|}_{N_{[\gamma]}(W_{x},W_{y})}$ maps surjectively onto $W_{x}\times
W_{y}$, using the path-connectedness of $W_{x}$ and $W_{y}$. Consider now the
surjective map $s\big{|}_{N_{[\gamma]}(W_{x},W_{y})}\colon
N_{[\gamma]}(W_{x},W_{y})\to W_{x}$. Assume there are two paths
$\eta_{1},\eta_{2}$ in $N_{[\gamma]}(W_{x},W_{y})$ with source $x^{\prime}\in
W_{x}$ and target $y^{\prime}\in W_{y}$. We know that
$[\eta_{1}]=[\omega_{1}\cdot\gamma]$ and $[\eta_{2}]=[\omega_{2}\cdot\gamma]$
and so $\overline{\omega_{2}}\cdot\omega_{1}$ is a loop in $W_{y}$ based at
$y$. By the assumption on $X$, this loop is null-homotopic in $X$, or in other
words, $[\omega_{1}]=[\omega_{2}]$ in $\Pi_{1}(X)_{1}$, so
$[\eta_{1}]=[\eta_{2}]$. Using a similar argument for $W_{x}$, we get the
result that $(s,t)\big{|}_{N_{[\gamma]}(W_{x},W_{y})}$ is a bijection. It is
easily seen that $(s,t)$ maps basic open neighbourhoods to basic open
neighbourhoods, and so is an open map, hence a homeomorphism onto $W_{x}\times W_{y}$. The sets
$N_{[\gamma]}(W_{x},W_{y})$, $N_{[\gamma^{\prime}]}(W_{x},W_{y})$ are disjoint
for $[\gamma]\neq[\gamma^{\prime}]$, by arguments from the proof of
proposition 24. Since every arrow $x^{\prime}\to y^{\prime}$ in $\Pi_{1}(X)$
for $x^{\prime}\in W_{x}$ and $y^{\prime}\in W_{y}$ lies in some
$N_{[\gamma]}(W_{x},W_{y})$, we have an isomorphism
$\Pi_{1}(X)_{1}\times_{X^{2}}(W_{x}\times W_{y})\simeq(W_{x}\times
W_{y})\times\Pi_{1}(X)(x,y)$
and so $\Pi_{1}(X)_{1}\to X\times X$ is a covering space. $\Box$
To assist in further proofs of continuity, we give a small lemma.
###### Lemma 28.
For a semilocally 1-connected space $X$, the map $[-]\colon
X^{I}\to\Pi_{1}(X)_{1}$ is continuous.
Proof. Let $N_{[\gamma]}(W_{x},W_{y})$ be a basic open neighbourhood. The
inverse image
$[-]^{-1}N_{[\gamma]}(W_{x},W_{y})$
consists of points in the open set $U_{x,y}:=(ev_{0},ev_{1})^{-1}(W_{x}\times
W_{y})\subset X^{I}$ that are connected by a path in $U_{x,y}$ to a point of
the form $\eta_{x}\cdot(\gamma\cdot\eta_{y})$. Note that every such point is
connected by a path in $U_{x,y}$ to the point $\gamma$ – this can be seen by
constructing a free homotopy connecting the path
$\eta_{x}\cdot(\gamma\cdot\eta_{y})$ to the path $\gamma$. Now $X^{I}$ is
semilocally 0-connected by theorem 15, so we can choose a basic open neighbourhood
$N^{*}_{\gamma}(\mathfrak{p},W)$ with $W=\coprod_{i=0}^{n}W_{i}$ such that
$W_{0}=W_{x}$ and $W_{n}=W_{y}$. Every point $\eta$ in this neighbourhood is
connected by a path $\Gamma_{\eta}$ in $X^{I}$ to $\gamma$. Moreover, we can
choose this path, as in the proof of theorem 15, to be such that
$ev_{0}\circ\Gamma_{\eta}(t)\in W_{x}$ and $ev_{1}\circ\Gamma_{\eta}(t)\in
W_{y}$ for all $t\in I$. Thus the neighbourhood
$N^{*}_{\gamma}(\mathfrak{p},W)$ is a subset of $[-]^{-1}N_{[\gamma]}(W_{x},W_{y})$, and $[-]$ is
continuous. $\Box$
Now if we are given a homotopy $Y\times I\to X$, that is a map $Y\to X^{I}$,
we get a map $Y\to\Pi_{1}(X)_{1}$ by composition with $[-]$.
To describe the topological fundamental bigroupoid of a space, we first need
to define a topological bigroupoid. The definition of bigroupoid is recalled
in the appendix. The full diagrammatic definition of an _internal_ bicategory
appears in the original article on bicategories [Bén67]. Since we are only
interested in _topological_ bigroupoids—bigroupoids in Top, a concrete
category—we can refer to elements of objects with impunity. This means that
the pointwise coherence diagrams in the appendix are still valid, and we do
not need to display three-dimensional commuting diagrams of internal natural
transformations.
###### Definition 29.
A _topological bigroupoid_ $B$ is a topological groupoid $\underline{B}_{1}$
equipped with a functor
$(S,T)\colon\underline{B}_{1}\to\mbox{disc}(B_{0}\times B_{0})$ for a space
$B_{0}$ together with
* •
functors
$\displaystyle C\colon$
$\displaystyle\underline{B}_{1}\times_{S,\mbox{disc}(B_{0}),T}\underline{B}_{1}\to\underline{B}_{1}$
$\displaystyle I\colon$ $\displaystyle\mbox{disc}(B_{0})\to\underline{B}_{1}$
over $\mbox{disc}(B_{0}\times B_{0})$ and a functor
$\overline{(-)}\colon\underline{B}_{1}\to\underline{B}_{1}$
covering the swap map $\mbox{disc}(B_{0}\times
B_{0})\to\mbox{disc}(B_{0}\times B_{0})$.
* •
continuous maps
(2)
$\left\\{\begin{array}[]{l}a\colon\operatorname{Obj}(\underline{B}_{1})\times_{S,B_{0},T}\operatorname{Obj}(\underline{B}_{1})\times_{S,B_{0},T}\operatorname{Obj}(\underline{B}_{1})\to\operatorname{Mor}(\underline{B}_{1})\\\
r\colon\operatorname{Obj}(\underline{B}_{1})\to\operatorname{Mor}(\underline{B}_{1})\\\
l\colon\operatorname{Obj}(\underline{B}_{1})\to\operatorname{Mor}(\underline{B}_{1})\\\
e\colon\operatorname{Obj}(\underline{B}_{1})\to\operatorname{Mor}(\underline{B}_{1})\\\
i\colon\operatorname{Obj}(\underline{B}_{1})\to\operatorname{Mor}(\underline{B}_{1})\end{array}\right.$
which are the component maps of natural isomorphisms: $a$ expresses the associativity of the horizontal composition $C$; $r$ and $l$ are the unit constraints relating $C\circ(\mathrm{id}\times I)$ and $C\circ(I\times\mathrm{id})$ to the identity on $\underline{B}_{1}$; and $e$ and $i$ relate $C\circ(\overline{(-)},\mathrm{id})$ and $C\circ(\mathrm{id},\overline{(-)})$ to $I\circ S$ and $I\circ T$ respectively.
These are required to satisfy the usual coherence diagrams as given in
definitions 42 and 43 in the appendix.
The full definition of the fundamental bigroupoid $\Pi_{2}(X)$ can be found in
[Ste00, HKK01], but it can be described in rough detail as follows: the
objects are points of the space $X$, the arrows are paths $I\to X$ (_not_
homotopy classes) and the 2-arrows are homotopy classes of homotopies between
paths. The horizontal composition of 2-arrows is by pasting such that source
and target paths are concatenated, and vertical composition is pasting of
homotopies. Horizontal composition, left and right units and inverses are only
coherent rather than strict. We will describe a topological bigroupoid
$\Pi_{2}^{T}(X)$ lifting $\Pi_{2}(X)$.
Since the 1-arrows of $\Pi_{2}^{T}(X)$ are paths in $X$, we can let the
topology on $\Pi_{2}^{T}(X)_{1}$ be the compact-open topology from the
previous section. We also let the topology on the objects of $\Pi_{2}^{T}(X)$
be that of $X$. The object components of the functors $S,T$, which are
evaluation at $0$ and $1$ respectively, are clearly continuous, as is the map
$X\to X^{I}$ sending a point to a constant path. All we need to have a
candidate for being a topological fundamental bigroupoid is a topology on the
set of 2-arrows.
Recall that the composition of 2-tracks $[f],[g]$ along a path (vertical
composition) is denoted by $[f+g]$, and the concatenation (horizontal
composition) is denoted $[f\cdot g]$. The inverse of a 2-track $[f]$ for this
composition is written $-[f]=[-f]$. If $[f]$ is a 2-track with representative
$f\colon I^{2}\to X$, let $\ulcorner f\\!\urcorner\colon I\to X^{I}$ be the
corresponding path.
###### Lemma 30.
Let $[h]\in\Pi_{2}^{T}(X)_{2}$ be a 2-track, $U_{0},U_{1}$ basic open
neighbourhoods of $s_{0}[h],t_{0}[h]\in X$ respectively, and
$V_{0}=N_{s_{1}[h]}(\mathfrak{p}_{0},\coprod_{i=0}^{n_{0}}W_{i}^{(0)})\quad\text{{\rm
and }}\quad
V_{1}=N_{t_{1}[h]}(\mathfrak{p}_{1},\coprod_{i=0}^{n_{1}}W_{i}^{(1)})$
basic open neighbourhoods in $X^{I}$. Also assume that
$U_{0}\subset W_{0}^{(0)}\cap W_{0}^{(1)},\qquad U_{1}\subset
W_{n_{0}}^{(0)}\cap W_{n_{1}}^{(1)}.$
Then the sets
$\displaystyle\langle[h],U_{0},U_{1},V_{0},V_{1}\rangle:=\\{[f]\in\Pi_{2}^{T}(X)_{2}\
|\ $ $\displaystyle\exists\beta_{\epsilon}\colon I\to U_{\epsilon}\
(\epsilon=0,1)\ \text{{\rm and }}\ulcorner\lambda_{\epsilon}\\!\urcorner\colon
I\to V_{\epsilon}\ (\epsilon=0,1)$ $\displaystyle\text{{\rm such that
}}[f]=[\lambda_{1}+(\mathrm{id}_{\beta_{1}}\cdot(h\cdot\mathrm{id}_{\beta_{0}}))+\lambda_{0}]\\},$
form an open neighbourhood basis for $\Pi_{2}^{T}(X)_{2}$.
Proof. Algebraically the elements of the basic open neighbourhoods
$\langle[h],U_{0},U_{1},V_{0},V_{1}\rangle$ look like diagrams
[Diagram: paths $\beta_{0}\colon x_{0}\to s_{0}[h]$ in $U_{0}$ and $\beta_{1}\colon t_{0}[h]\to x_{1}$ in $U_{1}$ whisker the 2-track $[h]$ between $s_{1}[h]$ and $t_{1}[h]$, and the 2-tracks $\lambda_{0}$ and $\lambda_{1}$ are composed vertically on either side,]
with some hidden bracketing on the whiskering of $h$ by $\beta_{0},\beta_{1}$.
It is immediate from the definition that
$[h]\in\langle[h],U_{0},U_{1},V_{0},V_{1}\rangle$. To see that for
$[f]\in\langle[h],U_{0},U_{1},V_{0},V_{1}\rangle$, we have
$[h]\in\langle[f],U_{0},U_{1},V_{0},V_{1}\rangle$, we can use the fact that
$\Pi_{2}^{T}(X)$ is a bigroupoid, and compose/concatenate with the
(weak) inverse of everything in sight. We do not display all the structure
morphisms (associator etc.), relying on coherence for bicategories. If we have
[Pasting diagram: the 2-arrow $[f]\colon g_{0}\Rightarrow g_{1}$ equals the pasting of $\lambda_{0}$, $[h]$ and $\lambda_{1}$ along $\beta_{0}$, the parallel paths $s_{1}[h],t_{1}[h]$, and $\beta_{1}$]
then
[Pasting diagram: pasting $\lambda_{0}^{-1}$ and $\lambda_{1}^{-1}$ onto $[f]$ recovers the whiskering of $[h]$ by $\beta_{0}$ and $\beta_{1}$]
and so
[Pasting diagram: whiskering further with $\overline{\beta_{0}}$, $\overline{\beta_{1}}$ and cancelling via the structure 2-arrows $i$, $i^{-1}$, $e$, $e^{-1}$ expresses $[h]$ in terms of $[f]$, $\lambda_{0}^{-1}$ and $\lambda_{1}^{-1}$, exhibiting $[h]$ as an element of $\langle[f],U_{0},U_{1},V_{0},V_{1}\rangle$]
We thus only need to show that the intersection
(3)
$\langle[h],U_{0},U_{1},V_{0},V_{1}\rangle\cap\langle[h],U^{\prime}_{0},U^{\prime}_{1},V^{\prime}_{0},V^{\prime}_{1}\rangle$
contains a basic open neighbourhood. Choose basic open neighbourhoods
$\displaystyle
V^{\prime\prime}_{0}:=N_{s_{1}[h]}(\mathfrak{p}_{0},\coprod_{i=0}^{n_{0}}W_{i}^{(0)})\subset
V_{0}\cap V^{\prime}_{0},$ $\displaystyle
V^{\prime\prime}_{1}:=N_{t_{1}[h]}(\mathfrak{p}_{1},\coprod_{i=0}^{n_{1}}W_{i}^{(1)})\subset
V_{1}\cap V^{\prime}_{1}$
of the points $s_{1}[h],t_{1}[h]$ respectively and basic open neighbourhoods
$\displaystyle U^{\prime\prime}_{0}\subset U_{0}\cap U^{\prime}_{0}\cap
W_{0}^{(0)}\cap W_{0}^{(1)},$ $\displaystyle U^{\prime\prime}_{1}\subset
U_{1}\cap U^{\prime}_{1}\cap W_{n_{0}}^{(0)}\cap W_{n_{1}}^{(1)}$
of the points $s_{0}[h],t_{0}[h]$ respectively. The four basic open
neighbourhoods satisfy the conditions necessary to make the set
$\langle[h],U^{\prime\prime}_{0},U^{\prime\prime}_{1},V^{\prime\prime}_{0},V^{\prime\prime}_{1}\rangle$
a basic open neighbourhood. By inspection this is contained in (3) as
required. $\Box$
Now recall that the map $(s_{1},t_{1})\colon B_{2}\to B_{1}\times B_{1}$ for
$B$ a bigroupoid factors through $B_{1}\times_{B_{0}\times B_{0}}B_{1}$. In
the case of $\Pi_{2}^{T}(X)$, this gives a function
$\Pi_{2}^{T}(X)_{2}\to X^{I}\times_{X\times X}X^{I}\simeq LX.$
of the underlying sets. If $L_{0}X$ denotes the (path) component of the null-
homotopic loops, then clearly $\operatorname{im}(s_{1},t_{1})=L_{0}X$, which is open
and closed in $LX$ by our assumptions on $X$.
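One way to make this identification concrete (our gloss; the precise convention is an assumption rather than something fixed in the text) is to send a pair of paths with common endpoints to the loop obtained by traversing the first and then the reverse of the second, $(\gamma_{1},\gamma_{2})\mapsto\overline{\gamma_{2}}\cdot\gamma_{1}$. Under this reading a 2-track $[h]$ maps to the loop $\overline{t_{1}[h]}\cdot s_{1}[h]$, which bounds a disc (namely any representative of $[h]$) and is therefore null-homotopic; conversely, a loop $\overline{\gamma_{2}}\cdot\gamma_{1}$ is null-homotopic exactly when $\gamma_{1}$ and $\gamma_{2}$ are homotopic rel endpoints, i.e. when a 2-track from $\gamma_{1}$ to $\gamma_{2}$ exists. This is one way to see that the image of $(s_{1},t_{1})$ is precisely $L_{0}X$.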
We also introduce the notation $\mathfrak{p}_{1}\vee\mathfrak{p}_{2}$ for
partition groupoids $\mathfrak{p}_{i}$, which is meant to indicate the join
and rescaling, covering the same operation on intervals.
###### Lemma 31.
With the topology from lemma 30, $(s_{1},t_{1})\colon\Pi_{2}^{T}(X)_{2}\to
L_{0}X$ is a covering space when $X$ is semilocally 2-connected.
Proof. Recall that when $X$ is semilocally 2-connected, $LX$ is semilocally
1-connected, with path-connected basic open neighbourhoods. Let $\omega$ be a
point in $L_{0}X$, corresponding to the paths $\gamma_{1},\gamma_{2}\colon
I\to X$ from $x$ to $y$. Let $N:=N^{o}_{\omega}(\mathfrak{p},W)$ be a basic
open neighbourhood in $L_{0}X$ where
$W=W_{0}\coprod_{i=1}^{n}W_{i}\coprod
W_{n+1}\coprod_{j=n+2}^{k}W_{j}=W_{0}\coprod W^{1}\coprod W_{n+1}\coprod
W^{2},$
and without loss of generality
$\mathfrak{p}=\mathfrak{p}_{1}\vee\mathfrak{p}_{2}$, such that
$N_{\gamma_{1}}(\mathfrak{p}_{1},W^{1})$ and
$N_{\overline{\gamma_{2}}}(\mathfrak{p}_{2},W^{2})$ are basic open
neighbourhoods. Consider now the pullback
[Diagram: the pullback square with top row $N\times_{L_{0}X}\Pi_{2}^{T}(X)_{2}\to\Pi_{2}^{T}(X)_{2}$ and bottom row $N\to L_{0}X$]
which we want to show is a product
$N\times\Pi_{2}^{T}(X)(\gamma_{1},\gamma_{2})$. For
$[h]\in\Pi_{2}(X)(\gamma_{1},\gamma_{2})=(s_{1},t_{1})^{-1}(\omega)$, define
the following basic open neighbourhood:
$\langle[h]\rangle:=\langle[h],W_{0},W_{n+1},W^{1},W^{2}\rangle$
By definition, the neighbourhoods $N_{\gamma_{1}}(\mathfrak{p}_{1},W^{1})$ and
$N_{\overline{\gamma_{2}}}(\mathfrak{p}_{2},W^{2})$ are path-connected, so the
map $\langle[h]\rangle\to N$ is surjective. Using the same arguments as in the
proof of proposition 27, it is also injective and open, hence an isomorphism.
We also know that if $[h]\neq[h^{\prime}]$, the neighbourhood
$\langle[h]\rangle$ is disjoint from $\langle[h^{\prime}]\rangle$, because if
they shared a common point, they would be equal (see the proof of lemma 30).
Every 2-track in $N\times_{L_{0}X}\Pi_{2}(X)_{2}\to N$ lies in some
$\langle[h]\rangle$, so there is an isomorphism
$N\times_{L_{0}X}\Pi_{2}^{T}(X)_{2}\simeq
N\times\Pi_{2}^{T}(X)(\gamma_{1},\gamma_{2})$
and $\Pi_{2}^{T}(X)_{2}\to L_{0}X$ is a covering space. $\Box$
###### Remark 32.
Since $L_{0}X$ is open and closed in $LX$, we know that $\Pi_{2}^{T}(X)_{2}\to
LX$ is a covering space where the fibres over the complement $LX-L_{0}X$ are
empty.
This lemma implies that the two composite maps
$s_{1},t_{1}\colon\Pi_{2}^{T}(X)_{2}\to LX\rightrightarrows X^{I}$ are
continuous. In fact these are the source and target maps for a topological groupoid:
###### Lemma 33.
The 2-tracks and paths in a space, with the topologies as above, form a
topological groupoid
$\underline{\Pi_{2}^{T}(X)}_{1}:=(\Pi_{2}^{T}(X)_{2}\rightrightarrows X^{I})$.
Proof. We have already seen that the source and target maps are continuous; we
only need to show that the unit map $\mathrm{id}_{(-)}$, composition $+$ and
inversion $-(-)$ are continuous. For the unit map, let $\gamma\in X^{I}$, and
$\langle\mathrm{id}_{\gamma}\rangle:=\langle\mathrm{id}_{\gamma},U_{0},U_{1},V_{0},V_{1}\rangle$
a basic open neighbourhood. Define
$C:=\mathrm{id}_{(-)}^{-1}(\langle\mathrm{id}_{\gamma}\rangle)$ and consider
the image of $C$ under $\mathrm{id}_{(-)}$:
$\displaystyle\mathrm{id}_{(-)}(C)$
$\displaystyle=\\{\eta\in\langle\mathrm{id}_{\gamma}\rangle|\eta=[\lambda_{1}+(\mathrm{id}_{\beta_{1}}\cdot(\mathrm{id}_{\gamma}\cdot\mathrm{id}_{\beta_{0}}))+\lambda_{0}]=\mathrm{id}_{\chi}\\}$
$\displaystyle=\\{\eta\in\langle\mathrm{id}_{\gamma}\rangle|\eta=[\lambda_{1}+\mathrm{id}_{\beta_{1}\cdot(\gamma\cdot\beta_{0})}+\lambda_{0}]=[\lambda_{1}+\lambda_{0}]=\mathrm{id}_{\chi}\\}.$
Then
$s_{1}(\lambda_{1})=t_{1}(\lambda_{0})=\beta_{1}\cdot(\gamma\cdot\beta_{0})$,
$t_{1}(\lambda_{1})=s_{1}(\lambda_{0})=\chi$ and
$\lambda_{0}=-\lambda_{1}=:\lambda$. As $\lambda_{0}$ is a path in $V_{0}$ and
$\lambda_{1}$ a path in $V_{1}$, we see that $\lambda$ is a path in $V_{0}\cap
V_{1}$ which implies $\chi\in V_{0}\cap V_{1}$. If we choose a basic
neighbourhood $V_{2}\subset V_{0}\cap V_{1}\subset X^{I}$ of $\gamma$, then
$\mathrm{id}_{(-)}(V_{2})\subset\langle\mathrm{id}_{\gamma}\rangle$, and so
the unit map is continuous.
We now need to show the map
$+\colon\Pi_{2}^{T}(X)_{2}\times_{X^{I}}\Pi_{2}^{T}(X)_{2}\to\Pi_{2}^{T}(X)_{2}$
is continuous. Let $[h_{1}],[h_{2}]$ be a pair of composable arrows, and let
$\langle[h_{2}+h_{1}]\rangle:=\langle[h_{2}+h_{1}],U_{0},U_{1},V_{0},V_{2}\rangle$
be a basic open neighbourhood. Choose a basic open neighbourhood
$V_{1}=N_{\gamma}(\mathfrak{p},W)$ of $\gamma=s_{1}[h_{2}]=t_{1}[h_{1}]$ in
$X^{I}$ such that the open neighbourhoods $U_{0}$ and $U_{1}$ are the first
and last basic open neighbourhoods in the collection $W$. Consider the image
$\mathcal{I}:=+(\langle[h_{2}],U_{0},U_{1},V_{1},V_{2}\rangle\times_{X^{I}}\langle[h_{1}],U_{0},U_{1},V_{0},V_{1}\rangle).$
The following diagram is a schematic of what an element in the image looks
like:
The thick lines are identified, and the circles are the basic opens
$U_{0},U_{1}\subset X$. Topologically this is a disk with a cylinder $I\times
S^{1}$ glued to it along some $I\times\\{\theta\\}$. For this 2-track to be an
element of our original neighbourhood $\langle[h_{2}+h_{1}]\rangle$ we need to
show that the surface that goes ‘under’ the cylinder is homotopic (rel
boundary) to the one that goes ‘over’ the cylinder, i.e. that there is a
filler for the cylinder. Then a generic 2-track $[f_{2}+f_{1}]\in\mathcal{I}$
is equal to one of the form
$[\lambda_{1}+(\mathrm{id}_{\beta_{1}}\cdot((h_{2}+h_{1})\cdot\mathrm{id}_{\beta_{0}}))+\lambda_{0}]\in\langle[h_{2}+h_{1}]\rangle$
which schematically looks like
The trapezoidal regions in the first picture correspond to paths in $V_{1}$,
which under the identification of the marked edges paste to form a loop in
$V_{1}$. As $X^{I}$ is semilocally 1-connected, there is a filler for this
loop in $X^{I}$. This implies that there is the homotopy we require, and so
$+$ is continuous.
It is clear from the definition of the basic open neighbourhoods of
$\Pi_{2}^{T}(X)_{2}$ that
$-\left(\langle[h],U_{0},U_{1},V_{0},V_{1}\rangle\right)=\langle[-h],U_{0},U_{1},V_{1},V_{0}\rangle$
and so $-(-)$ is manifestly continuous. $\Box$
The maps $ev_{0},ev_{1}\colon X^{I}\to X$ give us a functor
$\underline{\Pi_{2}^{T}(X)}_{1}\to\mbox{disc}(X\times X)$ of topological
groupoids. We now have all the ingredients for a topological bigroupoid, but
first a lemma about pasting open neighbourhoods of paths with matching
endpoints.
Let $\gamma_{1},\gamma_{2}\in X^{I}$ be paths such that
$\gamma_{1}(1)=\gamma_{2}(0)$ and let
$N_{1}:=N_{\gamma_{1}}(\mathfrak{p}_{1},W^{1})$,
$N_{2}:=N_{\gamma_{2}}(\mathfrak{p}_{2},W^{2})$ be basic open neighbourhoods.
For an open set $U\subset W^{1}_{n}\cap W^{2}_{m}$ (these being the last open
sets in their respective collections), define subsets of $X^{I}$,
$M_{1}:=\\{\eta\in N_{1}|\eta(1)\in U\\},\quad M_{2}:=\\{\eta\in
N_{2}|\eta(0)\in U\\}.$
We define the pullback $M_{1}\times_{X}M_{2}$ as a subset of
$X^{I}\times_{X}X^{I}$ where this latter pullback is by the maps
$ev_{0},ev_{1}$. The proof of the following lemma should be obvious.
###### Lemma 34.
The image of the set $M_{1}\times_{X}M_{2}$ under concatenation of paths is
the basic open neighbourhood
$N_{\gamma_{2}\cdot\gamma_{1}}(\mathfrak{p}_{1}\vee\mathfrak{p}_{2},W^{1}\amalg
U\amalg W^{2}).$
We shall denote the image of $M_{1}\times_{X}M_{2}$ as in the lemma by
$N_{1}\\#_{U}N_{2}$.
###### Proposition 35.
$\Pi_{2}^{T}(X)$ is a topological bigroupoid.
Proof. We need to show that the identity assigning functor
$\mbox{disc}(X)\to\underline{\Pi_{2}^{T}(X)}_{1},$
the concatenation and reverse functors,
$\displaystyle(-)\cdot(-)$
$\displaystyle\colon\underline{\Pi_{2}^{T}(X)}_{1}\times_{\mbox{disc}(X)}\underline{\Pi_{2}^{T}(X)}_{1}\to\underline{\Pi_{2}^{T}(X)}_{1},$
$\displaystyle\overline{(-)}$
$\displaystyle\colon\underline{\Pi_{2}^{T}(X)}_{1}\to\underline{\Pi_{2}^{T}(X)}_{1},$
and the structure maps in (2) are continuous. The first follows from lemma 22,
and the continuity of the object components (by the 'object components' we mean
the maps into the object space $X^{I}$ of $\underline{\Pi_{2}^{T}(X)}_{1}$; likewise,
'arrow components' refer to the arrow space of this groupoid, corresponding to the
2-arrow space of the bigroupoid) of the second two is given by lemmas 21 and 23. On the
arrow space, the reverse functor clearly sends basic open neighbourhoods to
basic open neighbourhoods,
$\overline{\langle[h],U_{0},U_{1},V_{0},V_{1}\rangle}=\langle\overline{[h]},U_{1},U_{0},V_{0},V_{1}\rangle,$
and so is continuous.
Let $\langle[h_{2}\cdot h_{1}],U_{0},U_{1},V_{0},V_{1}\rangle$ be a basic open
neighbourhood in $\Pi_{2}^{T}(X)_{2}$, where we have the basic open
neighbourhoods
$V_{0}=N_{s_{1}[h_{2}\cdot h_{1}]}(\mathfrak{p}_{0},W^{0}),\quad
V_{1}=N_{t_{1}[h_{2}\cdot h_{1}]}(\mathfrak{p}_{1},W^{1})$
in $X^{I}$ where
$W^{0}=\coprod_{i=0}^{n}W_{i}^{0},\qquad
W^{1}=\coprod_{j=0}^{m}W_{j}^{1},\quad n,m\geq 3.$
We can assume that
$\mathfrak{p}_{0}=\mathfrak{q}^{0}_{1}\vee\mathfrak{q}^{0}_{2}$ and
$\mathfrak{p}_{1}=\mathfrak{q}^{1}_{1}\vee\mathfrak{q}^{1}_{2}$. Let the
partition groupoids be given by the following data
$\displaystyle\mathfrak{q}^{0}_{1}$
$\displaystyle\colon\\{t_{1},\ldots,t_{k}\\},$
$\displaystyle\mathfrak{q}^{0}_{2}$
$\displaystyle\colon\\{t_{k+2},\ldots,t_{n}\\},$
$\displaystyle\mathfrak{q}^{1}_{1}$
$\displaystyle\colon\\{t^{\prime}_{1},\ldots,t^{\prime}_{l}\\},$
$\displaystyle\mathfrak{q}^{1}_{2}$
$\displaystyle\colon\\{t^{\prime}_{l+2},\ldots,t^{\prime}_{m}\\}.$
We now define the neighbourhoods
$\displaystyle V^{0}_{1}$
$\displaystyle:=N_{s_{1}[h_{1}]}(\mathfrak{q}^{0}_{1},\coprod_{i=0}^{k}W^{0}_{i}),$
$\displaystyle V^{0}_{2}$
$\displaystyle:=N_{s_{1}[h_{2}]}(\mathfrak{q}^{0}_{2},\coprod_{i=k+2}^{n}W^{0}_{i}),$
$\displaystyle V^{1}_{1}$
$\displaystyle:=N_{t_{1}[h_{1}]}(\mathfrak{q}^{1}_{1},\coprod_{j=0}^{l}W^{1}_{j}),$
$\displaystyle V^{1}_{2}$
$\displaystyle:=N_{t_{1}[h_{2}]}(\mathfrak{q}^{1}_{2},\coprod_{j=l+2}^{m}W^{1}_{j}).$
Consider the image of the fibred product
$\langle[h_{1}],U_{0},U_{1},V^{0}_{1},V^{1}_{1}\rangle\times_{X}\langle[h_{2}],U_{1},U_{2},V^{0}_{2},V^{1}_{2}\rangle$
under concatenation, any element of which looks like
where the two points marked are identified, so the line between them is a
circle. Since the open set $U_{1}\subset X$ is 1-connected, there is a filler
for this circle, and there is a homotopy between this surface and one of the
form
Also, by lemma 34, the surfaces $\lambda_{2}^{0}\cdot\lambda_{1}^{0}$,
$\lambda_{2}^{1}\cdot\lambda_{1}^{1}$ are elements of
$V^{0}_{1}\\#_{U_{1}}V^{0}_{2}$ and $V^{1}_{1}\\#_{U_{1}}V^{1}_{2}$
respectively. Then the image of the open set
$\langle[h_{1}],U_{0},U_{1},V^{0}_{1},V^{1}_{1}\rangle\times_{X}\langle[h_{2}],U_{1},U_{2},V^{0}_{2},V^{1}_{2}\rangle$
under concatenation is contained in $\langle[h_{2}\cdot
h_{1}],U_{0},U_{1},V_{0},V_{1}\rangle$.
The assiduous reader will have already noticed that the following relations
hold for the (component maps of) the structure morphisms of $\Pi_{2}^{T}(X)$:
$l=r\circ\overline{(-)},\qquad e=-(i\circ\overline{(-)}).$
This means that we only need to check the continuity of $a$ and two of the
other four structure maps.
For the associator $a\colon
X^{I}\times_{X}X^{I}\times_{X}X^{I}\to\Pi_{2}^{T}(X)_{2}$, we take a basic
open neighbourhood
$\langle a_{\gamma_{1}\gamma_{2}\gamma_{3}}\rangle:=\langle
a_{\gamma_{1}\gamma_{2}\gamma_{3}},U_{0},U_{1},V^{0},V^{1}\rangle$
and by continuity of concatenation of paths choose a basic open neighbourhood
$N$ of $(\gamma_{1},\gamma_{2},\gamma_{3})$ in
$X^{I}\times_{X}X^{I}\times_{X}X^{I}$ whose image under the composite
$X^{I}\times_{X}X^{I}\times_{X}X^{I}\xrightarrow{a}\Pi_{2}^{T}(X)_{2}\xrightarrow{(s_{1},t_{1})}X^{I}\times_{X\times
X}X^{I}$
is contained in $V^{0}\times_{X\times X}V^{1}$. Also let $U\subset
X^{I}\times_{X}X^{I}\times_{X}X^{I}$ be a basic open neighbourhood whose image
under
$X^{I}\times_{X}X^{I}\times_{X}X^{I}\xrightarrow{a}\Pi_{2}^{T}(X)_{2}\xrightarrow{(s_{1},t_{1})}X^{I}\times_{X\times
X}X^{I}\xrightarrow{(s_{0},t_{0})}X_{0}\times X_{0}$
is contained within $U_{0}\times U_{1}$. Then if $N^{\prime}\subset N\cap U$
is a basic open neighbourhood of $(\gamma_{1},\gamma_{2},\gamma_{3})$, its
image under $a$ is contained in $\langle
a_{\gamma_{1}\gamma_{2}\gamma_{3}}\rangle$, so $a$ is continuous.
The continuity of the other structure maps is proved similarly, and left as an
exercise for the reader. $\Box$
It is expected that for a reasonable definition of a weak equivalence of
bicategories internal to Top, the canonical 2-functor
$\Pi_{2}(X)\to\Pi_{2}^{T}(X)$, where $\Pi_{2}(X)$ is equipped with the
discrete topology, is such a weak equivalence. In any case, we can define
strict 2-functors between topological bigroupoids, and these are the only such
morphisms we shall need here.
###### Definition 36.
A _strict 2-functor_ $F\colon B\to B^{\prime}$ between topological bigroupoids
$B,B^{\prime}$ consists of a continuous map $F_{0}\colon B_{0}\to
B^{\prime}_{0}$ and a functor
$\underline{F}_{1}\colon\underline{B}_{1}\to\underline{B^{\prime}}_{1}$
commuting with $(S,T)$ and the various structure maps from definition 29.
We define the category of topological bigroupoids and continuous strict
2-functors and denote it by $Bigpd(\textbf{{Top}})$. Let
$\textbf{{Top}}_{sl2c}$ denote the full subcategory (of Top) of semilocally
2-connected spaces.
###### Theorem 37.
There is a functor
$\Pi_{2}^{T}\colon\textbf{{Top}}_{sl2c}\to Bigpd(\textbf{{Top}}),$
given on objects by the construction described above, which lifts the
fundamental bigroupoid functor $\Pi_{2}$ of Stevenson and Hardie-Kamps-
Kieboom.
Proof. We only need to check that the strict 2-functor
$f_{*}\colon\Pi_{2}^{T}(X)\to\Pi_{2}^{T}(Y)$ induced by a map $f\colon X\to Y$
is continuous. Recall from [HKK01] that this strict 2-functor is given by $f$
on objects and post-composition with $f$ on 1- and 2-arrows. We then just need
to check that this is continuous on 2-arrows, as it is obvious that it is
continuous on objects and 1-arrows.
Let $\langle[f\circ h]\rangle:=\langle[f\circ
h],U_{0}^{Y},U_{1}^{Y},V_{0},V_{1}\rangle$ be a basic open neighbourhood in
$\Pi_{2}^{T}(Y)_{2}$, and choose basic open neighbourhoods $W_{\epsilon}\subset
f^{-1}(V_{\epsilon})$ in $X^{I}$ for $\epsilon=0,1$. If
$W_{0}=\coprod_{i=0}^{n}W_{i}^{0}$ and $W_{1}=\coprod_{i=0}^{m}W_{i}^{1}$,
then choose basic open neighbourhoods
$U_{0}^{X}\subset f^{-1}(U_{0}^{Y})\cap W_{0}^{0}\cap W_{0}^{1},\quad
U_{1}^{X}\subset f^{-1}(U_{1}^{Y})\cap W_{n}^{0}\cap W_{m}^{1}$
in $X$. It is then clear that
$f_{*}(\langle[h],U_{0}^{X},U_{1}^{X},W_{0},W_{1}\rangle)\subset\langle[f\circ
h]\rangle$, and so $f_{*}$ is a continuous 2-functor. $\Box$
Now there is a notion of local triviality of topological bigroupoids analogous
to that of ordinary topological groupoids. This requires a subsidiary
definition:
###### Definition 38.
Let $p\colon X\to M$ be a functor between topological groupoids such that $M$
is a topological space. An _anasection_ is a pair $(V,\sigma)$ where $j\colon
V\to M$ is an open cover of $M$ and $\sigma\colon\check{C}(V)\to X$ is a
functor such that $j=p\circ\sigma$.
We can picture $(V,\sigma)$ as being an $X$-valued Čech cocycle on $M$
satisfying a particular property. Note also that an ordinary section of $p$
(which is essentially just a section of the object component of $p$) is also
an anasection.
###### Definition 39.
Let $B$ be a topological bigroupoid such that $X=B_{0}$ is locally path-
connected. We say $B$ is _locally trivial_ if the following conditions hold:
* (I)
The image of $(s_{1},t_{1})\colon B_{2}\to B_{1}\times_{B_{0}}B_{1}$ is open
and closed, and $B_{2}\to\operatorname{im}(s_{1},t_{1})$ admits local
sections.
* (II)
For every $b,b^{\prime}\in B_{0}$ there is an open neighbourhood $U$ of
$b^{\prime}$ such that for all $g\in S^{-1}(b)_{0}$ there is an anasection
$(V,\sigma)$ such that there is an arrow $g\xrightarrow{\simeq}\sigma(v)$ in
$S^{-1}(b)$ for some $v\in V$.
If $B$ satisfies just condition (II) it will be called a _submersive
bigroupoid_.222Compare with the definition of a topological submersion: a map
$p\colon M\to N$ of spaces such that for every $m\in M$ there is a local section
$s\colon U\to M$ of $p$ with $m=s(u)$ for some $u\in U$.
In fact, composing a local section with the restriction of the inversion
functor $\underline{B}_{1}\to\underline{B}_{1}$, we get local sections of
target fibre $T^{-1}(b)_{0}\to B_{0}$. Given a pair of local sections, one of
the source fibre and one of the target fibre, they determine a map to the
fibred product
$\underline{B}_{1}\times_{\mbox{disc}(B_{0})}\underline{B}_{1}$, which can be
composed with the horizontal composition functor to give a local section of
$(S,T)\colon\operatorname{Obj}(\underline{B}_{1})\to B_{0}\times B_{0}$.
We will not actually use this definition as it stands, because we are only
interested in locally trivial bigroupoids that satisfy a stronger version of
condition (I):
###### Definition 40.
A topological bigroupoid $B$ will be called _locally weakly discrete_ if
* (I′)
The map $(s_{1},t_{1})\colon B_{2}\to B_{1}\times_{B_{0}}B_{1}$ is a covering
space.
Note that condition (I′) implies condition (I) from definition 39.
This nomenclature is consistent with the usage of the word ‘locally’ in the
theory of bicategories, in that condition (I′) implies that the groupoid
$\underline{B}(a,b)=(S,T)^{-1}(a,b)$ is locally trivial with discrete hom-
spaces, and hence weakly discrete. However, if we merely assume the fibres of
$\underline{B}_{1}\to\mbox{disc}(B_{0}\times B_{0})$ are weakly discrete, it
does not follow that we have a locally weakly discrete bigroupoid as defined
above.
Recall that a space is locally contractible if it has a neighbourhood basis of
contractible open sets. We shall call a space _semilocally contractible_ if it
has a neighbourhood basis such that the inclusion maps are null-homotopic.
###### Proposition 41.
If $X$ is semilocally contractible, $\Pi_{2}^{T}(X)$ is locally trivial and
locally weakly discrete.
Proof. We already know that $\Pi_{2}^{T}(X)$ is locally weakly discrete, by
lemma 31, hence we only need to show it is submersive.
Let $x_{0}$ be any point in $X$ and let $\gamma\in P_{x_{0}}X$. Let $U$ be a
neighbourhood of $x_{1}:=\gamma(1)$ such that $U\hookrightarrow X$ is null-
homotopic. Then the map $P_{x_{1}}X\to X$ admits a local section $s\colon U\to
P_{x_{1}}X$, which we claim can be chosen such that $\lambda:=s(x_{1})$, which
is a loop in $X$, is null-homotopic. If this is not the case, compose the
section with the map $P_{x_{1}}X\to P_{x_{1}}X$ given by preconcatenation with
$\overline{\lambda}$, then the new section sends $x_{1}$ to
$\lambda\cdot\overline{\lambda}$, which is null-homotopic. We then compose the
section $s$ with the map $P_{x_{1}}X\to P_{x_{0}}X$ which is preconcatenation
with $\gamma$ to get a section $s^{\prime}$. Since
$s^{\prime}(x_{1})=\lambda\cdot\gamma$, which is homotopic to $\gamma$ rel
endpoints, we have an anasection
$\mbox{disc}(U)\xleftarrow{=}\mbox{disc}(U)\xrightarrow{s^{\prime}}S^{-1}(x_{0}),$
such that $\gamma$ is isomorphic to an object in the image of $s^{\prime}$.
Thus $\Pi_{2}^{T}(X)$ is a submersive bigroupoid. $\Box$
## Appendix A Definition of a bigroupoid
The following is adapted from [Lei98] and [HKK01], themselves distillations of
the original source [Bén67]. We shall only be interested in _small_
bicategories and groupoids, that is, those where the data forms a set.
###### Definition 42.
A _bicategory_ $\mathbf{B}$ is given by the following data:
* •
A set $\mathbf{B}_{0}$ called the _0-cells_ or _objects_ of $\mathbf{B}$,
* •
A small category $\mathbf{B}_{1}$ with a functor
$(S,T)\colon\mathbf{B}_{1}\to\mbox{disc}(\mathbf{B}_{0})\times\mbox{disc}(\mathbf{B}_{0}).$
The fibre of $(S,T)$ at $(A,B)\in\mathbf{B}_{0}\times\mathbf{B}_{0}$ is
denoted $\mathbf{B}(A,B)$ and is called a hom-category. The objects
$f,g,\ldots$ of $\mathbf{B}_{1}$ are called _1-cells_ , or _1-arrows_ , and
the arrows $\alpha,\beta,\ldots$ of $\mathbf{B}_{1}$ are called _2-cells_ , or
_2-arrows_. The functors $S,T$ are the _source_ and _target_ functors. The
composition in $\mathbf{B}_{1}$ will be denoted
$(\alpha,\beta)\mapsto\alpha\cdot\beta$ where the target of $\alpha$ is the
source of $\beta$. This is also called _vertical composition_.
* •
Functors
$c_{ABC}\colon\mathbf{B}(B,C)\times\mathbf{B}(A,B)\to\mathbf{B}(A,C),$
for each $A,B,C\in\mathbf{B}_{0}$, called _horizontal composition_ , and an
element
$I_{A}\colon*\to\mathbf{B}(A,A).$
for each $A\in\mathbf{B}_{0}$, picking out a 1-cell $A\to A$ called the _weak
unit_ of $A$. Horizontal composition is denoted $(w,v)\mapsto w\circ v$ where
$T(v)=S(w)$, and $v,w$ are either 1-cells or 2-cells.
* •
Natural isomorphisms
[Diagram: a natural isomorphism $a_{ABCD}$ between the two composite functors $\mathbf{B}(C,D)\times\mathbf{B}(B,C)\times\mathbf{B}(A,B)\to\mathbf{B}(A,D)$ built from $c_{BCD}\times\mathrm{id}$, $\mathrm{id}\times c_{ABC}$, $c_{ACD}$ and $c_{ABD}$,]
given by invertible 2-cells
$a_{hgf}\colon(h\circ g)\circ f\Rightarrow h\circ(g\circ f),$
for composable $f,g,h\in\operatorname{Obj}\mathbf{B}_{1}$ and natural
isomorphisms
[Diagram: natural isomorphisms $r_{AB}$ and $l_{AB}$ comparing the composites $c_{AAB}\circ(\mathrm{id}\times I_{A})$ and $c_{ABB}\circ(I_{B}\times\mathrm{id})$ with the canonical identifications $\mathbf{B}(A,B)\times\ast\simeq\mathbf{B}(A,B)\simeq\ast\times\mathbf{B}(A,B)$,]
given by invertible 2-cells
$r_{f}\colon f\circ I_{A}\Rightarrow f,\qquad l_{f}\colon I_{B}\circ
f\Rightarrow f.$
where $A=S(f)$ and $B=T(f)$.
The following diagrams are required to commute:
[Diagrams: the pentagon axiom, relating the composites of $a_{(k\circ h)gf}$, $a_{kh(g\circ f)}$, $a_{khg}\circ\mathrm{id}_{f}$, $a_{k(h\circ g)f}$ and $\mathrm{id}_{k}\circ a_{hgf}$ between $((k\circ h)\circ g)\circ f$ and $k\circ(h\circ(g\circ f))$, and the triangle axiom, relating $a_{gIf}$, $r_{g}\circ\mathrm{id}_{f}$ and $\mathrm{id}_{g}\circ l_{f}$ between $(g\circ I)\circ f$, $g\circ(I\circ f)$ and $g\circ f$.]
If the 2-cells $a,l,r$ are all identity 2-cells, then the bicategory is called
a 2-category, or _strict_ 2-category for emphasis.
###### Definition 43.
A _bigroupoid_ is a bicategory $\mathbf{B}$ such that $\mathbf{B}_{1}$ is a
groupoid, together with the following additional data for each $A,B\in\mathbf{B}_{0}$:
* •
Functors
$\overline{(-)}\colon\mathbf{B}(A,B)\to\mathbf{B}(B,A)$
* •
Natural isomorphisms
[Diagram: natural isomorphisms $e_{AB}$ and $i_{AB}$ comparing the composites $c_{ABA}\circ(\overline{(-)},\mathrm{id})$ and $c_{BAB}\circ(\mathrm{id},\overline{(-)})$ with the constant functors at the weak units $I_{A}$ and $I_{B}$.]
The following diagram is required to commute
[Diagram: coherence axiom relating $i_{f}\circ\mathrm{id}_{f}$, the associator $a_{f\overline{f}f}$, $\mathrm{id}_{f}\circ e_{f}$ and the unit isomorphisms $l_{f}$ and $r_{f}^{-1}$, as 2-cells connecting $I\circ f$, $(f\circ\overline{f})\circ f$, $f\circ(\overline{f}\circ f)$, $f$ and $f\circ I$.]
It is a consequence of the other axioms that the following diagram commutes:
[Diagram: the corresponding diagram for $\overline{f}$, built from $\mathrm{id}_{\overline{f}}\circ i_{f}$, $a_{\overline{f}f\overline{f}}$, $e_{f}\circ\mathrm{id}_{\overline{f}}$ and the unit isomorphisms, connecting $\overline{f}\circ I$, $\overline{f}\circ(f\circ\overline{f})$, $(\overline{f}\circ f)\circ\overline{f}$, $\overline{f}$ and $I\circ\overline{f}$.]
## References
* [Bén67] J. Bénabou, _Introduction to bicategories_ , Proceedings of the midwest category seminar, Springer Lecture Notes, vol. 47, Springer-Verlag, 1967.
* [HKK01] K.A. Hardie, K.H. Kamps, and R.W. Kieboom, _A homotopy bigroupoid of a topological space_ , Appl. Categ. Structures 9 (2001), 311–327.
* [Lei98] T. Leinster, _Basic bicategories_ , 1998, [arXiv:math.CT/9810017].
* [Ste00] D. Stevenson, _The geometry of bundle gerbes_ , Ph.D. thesis, Adelaide University, Department of Pure Mathematics, 2000.
* [Wad55] H. Wada, _Local connectivity of mapping spaces_ , Duke Math. J. 22 (1955), 419–425.
|
arxiv-papers
| 2013-02-27T22:35:50 |
2024-09-04T02:49:42.191695
|
{
"license": "Creative Commons Zero - Public Domain - https://creativecommons.org/publicdomain/zero/1.0/",
"authors": "David Michael Roberts",
"submitter": "David Roberts",
"url": "https://arxiv.org/abs/1302.7019"
}
|
1302.7066
|
# Integral mean estimates for the polar derivative of a polynomial
N. A. Rather and Suhail Gulzar Department of Mathematics
University of Kashmir
Srinagar, Hazratbal 190006
India [email protected] [email protected]
###### Abstract.
Let $P(z)$ be a polynomial of degree $n$ having all its zeros in $|z|\leq k$ where
$k\leq 1.$ Then it was proved by Dewan et al. [6] that for every real or
complex number $\alpha$ with $|\alpha|\geq k$ and each $r\geq 0,$
$n(|\alpha|-k)\left\\{\int\limits_{0}^{2\pi}\left|P\left(e^{i\theta}\right)\right|^{r}d\theta\right\\}^{\frac{1}{r}}\leq\left\\{\int\limits_{0}^{2\pi}\left|1+ke^{i\theta}\right|^{r}d\theta\right\\}^{\frac{1}{r}}\underset{|z|=1}{Max}|D_{\alpha}P(z)|.$
In this paper, we shall present a refinement and generalization of the above
result and also extend it to the class of polynomials
$P(z)=a_{n}z^{n}+\sum_{\nu=\mu}^{n}a_{n-\nu}z^{n-\nu},$ $1\leq\mu\leq n,$
having all its zeros in $|z|\leq k$ where $k\leq 1,$ and thereby obtain certain
generalizations of the above and many other known results.
###### Key words and phrases:
Polynomials; Polar derivatives; Integral mean estimates; Bernstein's
inequality.
###### 2000 Mathematics Subject Classification:
30A10, 30C10, 30E10, 30C15
## 1\. Introduction and statement of results
Let $P(z)$ be a polynomial of degree $n$. It was shown by Turán [12] that if
$P(z)$ has all its zeros in $|z|\leq 1,$ then
$n\underset{\left|z\right|=1}{Max}\left|P(z)\right|\leq
2\underset{\left|z\right|=1}{Max}\left|P^{\prime}(z)\right|.$ (1.1)
Inequality (1.1) is best possible, with equality holding for $P(z)=\alpha
z^{n}+\beta$ where $|\alpha|=|\beta|.$ The above inequality (1.1) of Turán
[12] was generalized by Malik [10], who proved that if $P(z)$ is a polynomial
of degree $n$ having all its zeros in $|z|\leq k,$ where $k\leq 1$, then
$\underset{\left|z\right|=1}{Max}\left|P^{\prime}(z)\right|\geq\frac{n}{1+k}\underset{\left|z\right|=1}{Max}\left|P(z)\right|.$
(1.2)
whereas for $k\geq 1,$ Govil [7] showed that
$\underset{\left|z\right|=1}{Max}\left|P^{\prime}(z)\right|\geq\frac{n}{1+k^{n}}\underset{\left|z\right|=1}{Max}\left|P(z)\right|,$
(1.3)
Both the above inequalities (1.2) and (1.3) are best possible, with equality
in (1.2) holding for $P(z)=(z+k)^{n}$, where $k\leq 1,$ while in (1.3)
equality holds for the polynomial $P(z)=\alpha z^{n}+\beta k^{n}$ where
$|\alpha|=|\beta|.$
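For a quick sanity check (a routine computation, not carried out in the source), the extremal polynomial for (1.1) can be verified directly: if $P(z)=\alpha z^{n}+\beta$ with $|\alpha|=|\beta|,$ then all zeros of $P$ lie on $|z|=1,$ and
$\underset{\left|z\right|=1}{Max}\left|P(z)\right|=|\alpha|+|\beta|=2|\alpha|,\qquad\underset{\left|z\right|=1}{Max}\left|P^{\prime}(z)\right|=n|\alpha|,$
so both sides of (1.1) equal $2n|\alpha|.$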
As a refinement of (1.2), Aziz and Shah [4] proved that if $P(z)$ is a polynomial
of degree $n$ having all its zeros in $|z|\leq k,$ where $k\leq 1$, then
$\displaystyle\underset{\left|z\right|=1}{Max}\left|P^{\prime}(z)\right|\geq\frac{n}{1+k}\left\\{\underset{\left|z\right|=1}{Max}\left|P(z)\right|+\dfrac{1}{k^{n-1}}\underset{|z|=k}{Min}|P(z)|\right\\}.$
(1.4)
Let $D_{\alpha}P(z)$ denote the polar derivative of the polynomial $P(z)$ of
degree $n$ with respect to the point $\alpha;$ then
$D_{\alpha}P(z)=nP(z)+(\alpha-z)P^{\prime}(z).$
The polynomial $D_{\alpha}P(z)$ is a polynomial of degree at most $n-1$ and it
generalizes the ordinary derivative in the sense that
$\underset{\alpha\rightarrow\infty}{Lim}\left[\dfrac{D_{\alpha}P(z)}{\alpha}\right]=P^{\prime}(z).$
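The limit is immediate from the definition; we record the one-line computation for completeness:
$\dfrac{D_{\alpha}P(z)}{\alpha}=\dfrac{nP(z)}{\alpha}+\left(1-\dfrac{z}{\alpha}\right)P^{\prime}(z)\longrightarrow P^{\prime}(z)\,\,\,\,\textnormal{as}\,\,\,\,|\alpha|\rightarrow\infty.$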
Aziz and Rather [2] extended (1.2) to the polar derivative of a polynomial and
proved that if all the zeros of $P(z)$ lie in $|z|\leq k$ where $k\leq 1$ then
for every real or complex number $\alpha$ with $|\alpha|\geq k,$
$\underset{\left|z\right|=1}{Max}\left|D_{\alpha}P(z)\right|\geq
n\left(\dfrac{|\alpha|-k}{1+k}\right)\underset{\left|z\right|=1}{Max}\left|P(z)\right|.$
(1.5)
For the class of polynomials
$P(z)=a_{n}z^{n}+\sum_{\nu=\mu}^{n}a_{n-\nu}z^{n-\nu},$ $1\leq\mu\leq n,$ of
degree $n$ having all its zeros in $|z|\leq k$ where $k\leq 1,$ Aziz and
Rather [3] proved that if $\alpha$ is a real or complex number with
$|\alpha|\geq k^{\mu}$ then
$\underset{\left|z\right|=1}{Max}\left|D_{\alpha}P(z)\right|\geq
n\left(\dfrac{|\alpha|-k^{\mu}}{1+k^{\mu}}\right)\underset{\left|z\right|=1}{Max}\left|P(z)\right|.$
(1.6)
Malik [11] obtained a generalization of (1.1) in the sense that the left-hand
side of (1.1) is replaced by a factor involving the integral mean of $|P(z)|$
on $|z|=1.$ In fact he proved that if $P(z)$ has all its zeros in $|z|\leq 1,$
then for each $q>0,$
$n\left\\{\int\limits_{0}^{2\pi}\left|P\left(e^{i\theta}\right)\right|^{q}d\theta\right\\}^{1/q}\leq\left\\{\int\limits_{0}^{2\pi}\left|1+e^{i\theta}\right|^{q}d\theta\right\\}^{1/q}\underset{|z|=1}{Max}|P^{\prime}(z)|.$
(1.7)
If we let $q$ tend to infinity in (1.7), we get (1.1).
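This is the standard fact that integral means converge to the maximum modulus: for any continuous $g$ on $|z|=1,$
$\left\\{\int\limits_{0}^{2\pi}\left|g\left(e^{i\theta}\right)\right|^{q}d\theta\right\\}^{1/q}=(2\pi)^{1/q}\left\\{\dfrac{1}{2\pi}\int\limits_{0}^{2\pi}\left|g\left(e^{i\theta}\right)\right|^{q}d\theta\right\\}^{1/q}\longrightarrow\underset{|z|=1}{Max}|g(z)|\,\,\,\,\textnormal{as}\,\,\,\,q\rightarrow\infty,$
and $\underset{|z|=1}{Max}|1+z|=2,$ so (1.7) reduces to (1.1).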
The corresponding generalization of (1.3), which is an extension of (1.7), was
obtained by Aziz [1] by proving that if $P(z)$ is a polynomial of degree $n$
having all its zeros in $|z|\leq k,$ where $k\geq 1,$ then for each $q\geq 1$
$n\left\\{\int\limits_{0}^{2\pi}\left|P\left(e^{i\theta}\right)\right|^{q}d\theta\right\\}^{1/q}\leq\left\\{\int\limits_{0}^{2\pi}\left|1+k^{n}e^{i\theta}\right|^{q}d\theta\right\\}^{1/q}\underset{|z|=1}{Max}|P^{\prime}(z)|.$
(1.8)
The result is best possible and equality in (1.8) holds for the polynomial
$P(z)=\alpha z^{n}+\beta k^{n}$ where $|\alpha|=|\beta|.$
As a generalization of inequality (1.5), Dewan et al. [6] obtained an $L^{p}$
inequality for the polar derivative of a polynomial and proved the following:
###### Theorem 1.1.
If $P(z)$ is a polynomial of degree $n$ having all its zeros in $|z|\leq k,$
where $k\leq 1,$ then for every real or complex number $\alpha$ with
$|\alpha|\geq k$ and for each $r>0,$
$n(|\alpha|-k)\left\\{\int\limits_{0}^{2\pi}\left|P\left(e^{i\theta}\right)\right|^{r}d\theta\right\\}^{\frac{1}{r}}\leq\left\\{\int\limits_{0}^{2\pi}\left|1+ke^{i\theta}\right|^{r}d\theta\right\\}^{\frac{1}{r}}\underset{|z|=1}{Max}|D_{\alpha}P(z)|.$
(1.9)
In this paper, we consider the class of polynomials
$P(z)=a_{n}z^{n}+\sum_{j=\mu}^{n}a_{n-j}z^{n-j},$ $1\leq\mu\leq n,$ having all
its zeros in $|z|\leq k$ where $k\leq 1$ and establish some improvements and
generalizations of inequalities (1.1),(1.2),(1.5),(1.8) and (1.9).
In this direction, we first present the following interesting result, which
yields (1.9) as a special case.
###### Theorem 1.2.
If $P(z)$ is a polynomial of degree $n$ having all its zeros in $|z|\leq k,$
where $k\leq 1,$ then for every real or complex $\alpha,$ $\beta$ with
$|\alpha|\geq k,$ $|\beta|\leq 1$ and for each $r>0,$ $p>1,$ $q>1$ with
$p^{-1}+q^{-1}=1,$ we have
$n(|\alpha|-k)\left\\{\int\limits_{0}^{2\pi}\left|P(e^{i\theta})+\beta\dfrac{m}{k^{n-1}}\right|^{r}d\theta\right\\}^{\frac{1}{r}}\leq\left\\{\int\limits_{0}^{2\pi}|1+ke^{i\theta}|^{pr}d\theta\right\\}^{\frac{1}{pr}}\left\\{\int\limits_{0}^{2\pi}|D_{\alpha}P(e^{i\theta})|^{qr}d\theta\right\\}^{\frac{1}{qr}}$
(1.10)
where $m={Min}_{|z|=k}|P(z)|.$
If we take $\beta=0,$ we get the following result.
###### Corollary 1.3.
If $P(z)$ is a polynomial of degree $n$ having all its zeros in $|z|\leq k,$
where $k\leq 1,$ then for every real or complex $\alpha,$ with $|\alpha|\geq
k$ and for each $r>0,$ $p>1,$ $q>1$ with $p^{-1}+q^{-1}=1,$ we have
$n(|\alpha|-k)\left\\{\int\limits_{0}^{2\pi}\left|P(e^{i\theta})\right|^{r}d\theta\right\\}^{\frac{1}{r}}\leq\left\\{\int\limits_{0}^{2\pi}|1+ke^{i\theta}|^{pr}d\theta\right\\}^{\frac{1}{pr}}\left\\{\int\limits_{0}^{2\pi}|D_{\alpha}P(e^{i\theta})|^{qr}d\theta\right\\}^{\frac{1}{qr}}.$
(1.11)
###### Remark 1.4.
Theorem 1.1 follows from (1.11) by letting $q\rightarrow\infty$ (so that
$p\rightarrow 1$) in Corollary 1.3. If we divide both sides of inequality
(1.11) by $|\alpha|$ and make $\alpha\rightarrow\infty,$ we get (1.5).
Dividing the two sides of (1.10) by $|\alpha|$ and letting
$|\alpha|\rightarrow\infty$, we get the following result.
###### Corollary 1.5.
If $P(z)$ is a polynomial of degree $n$ having all its zeros in $|z|\leq k,$
where $k\leq 1,$ then for every real or complex $\beta$ with $|\beta|\leq 1$
and for each $r>0,$ $p>1,$ $q>1$ with $p^{-1}+q^{-1}=1,$ we have
$n\left\\{\int\limits_{0}^{2\pi}\left|P(e^{i\theta})+\beta\dfrac{m}{k^{n-1}}\right|^{r}d\theta\right\\}^{\frac{1}{r}}\leq\left\\{\int\limits_{0}^{2\pi}|1+ke^{i\theta}|^{pr}d\theta\right\\}^{\frac{1}{pr}}\left\\{\int\limits_{0}^{2\pi}|P^{\prime}(e^{i\theta})|^{qr}d\theta\right\\}^{\frac{1}{qr}}$
(1.12)
where $m={Min}_{|z|=k}|P(z)|.$
If we let $q\rightarrow\infty$ in (1.12), we get the following corollary.
###### Corollary 1.6.
If $P(z)$ is a polynomial of degree $n$ having all its zeros in $|z|\leq k,$
where $k\leq 1,$ then for every real or complex $\beta$ with $|\beta|\leq 1$
and for each $r>0,$ we have
$n\left\\{\int\limits_{0}^{2\pi}\left|P(e^{i\theta})+\beta\dfrac{m}{k^{n-1}}\right|^{r}d\theta\right\\}^{\frac{1}{r}}\leq\left\\{\int\limits_{0}^{2\pi}|1+ke^{i\theta}|^{r}d\theta\right\\}^{\frac{1}{r}}\underset{|z|=1}{Max}|P^{\prime}(z)|,$
(1.13)
where $m={Min}_{|z|=k}|P(z)|.$
###### Remark 1.7.
If we let $r\rightarrow\infty$ in (1.13) and choose the argument of $\beta$
suitably with $|\beta|=1,$ we obtain (1.4).
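In slightly more detail (a sketch of the limiting argument, not given in the source): letting $r\rightarrow\infty$ in (1.13) gives
$n\,\underset{|z|=1}{Max}\left|P(z)+\beta\dfrac{m}{k^{n-1}}\right|\leq(1+k)\underset{|z|=1}{Max}|P^{\prime}(z)|,$
and choosing $z_{0}$ with $|P(z_{0})|=\underset{|z|=1}{Max}|P(z)|$ and $\beta$ with $|\beta|=1$ so that $\beta\dfrac{m}{k^{n-1}}$ has the same argument as $P(z_{0})$ makes the left-hand side at least $n\left\\{\underset{|z|=1}{Max}|P(z)|+\dfrac{m}{k^{n-1}}\right\\},$ which yields (1.4).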
Next, we extend (1.9) to the class of polynomials
$P(z)=a_{n}z^{n}+\sum_{\nu=\mu}^{n}a_{n-\nu}z^{n-\nu},$ $1\leq\mu\leq n,$
having all its zeros in $|z|\leq k,$ $k\leq 1$ and thereby obtain the
following result.
###### Theorem 1.8.
If $P(z)=a_{n}z^{n}+\sum_{\nu=\mu}^{n}a_{n-\nu}z^{n-\nu},$ $1\leq\mu\leq n,$
is a polynomial of degree $n$ having all its zeros in $|z|\leq k$ where $k\leq
1,$ then for every real or complex $\alpha$ with $|\alpha|\geq k^{\mu}$ and
for each $r>0,$ $p>1,$ $q>1$ with $p^{-1}+q^{-1}=1,$ we have
$n(|\alpha|-k^{\mu})\left\\{\int\limits_{0}^{2\pi}|P(e^{i\theta})|^{r}d\theta\right\\}^{\frac{1}{r}}\leq\left\\{\int\limits_{0}^{2\pi}|1+k^{\mu}e^{i\theta}|^{pr}d\theta\right\\}^{\frac{1}{pr}}\left\\{\int\limits_{0}^{2\pi}|D_{\alpha}P(e^{i\theta})|^{qr}d\theta\right\\}^{\frac{1}{qr}}.$
(1.14)
###### Remark 1.9.
Letting $r\rightarrow\infty$ and $p\rightarrow\infty$ (so that $q\rightarrow
1$) in (1.14), we get inequality (1.6).
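In more detail (again a sketch): as $r\rightarrow\infty$ each integral mean in (1.14) tends to the corresponding maximum on $|z|=1,$ and $\underset{|z|=1}{Max}|1+k^{\mu}z|=1+k^{\mu},$ so (1.14) becomes
$n(|\alpha|-k^{\mu})\underset{|z|=1}{Max}|P(z)|\leq(1+k^{\mu})\underset{|z|=1}{Max}|D_{\alpha}P(z)|,$
which is (1.6).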
If we divide both sides of (1.14) by $|\alpha|$ and make
$\alpha\rightarrow\infty,$ we get the following result.
###### Corollary 1.10.
If $P(z)=a_{n}z^{n}+\sum_{\nu=\mu}^{n}a_{n-\nu}z^{n-\nu},$ $1\leq\mu\leq n,$
is a polynomial of degree $n$ having all its zeros in $|z|\leq k$ where $k\leq
1,$ then for each $r>0,$ $p>1,$ $q>1$ with $p^{-1}+q^{-1}=1,$ we have
$n\left\\{\int\limits_{0}^{2\pi}|P(e^{i\theta})|^{r}d\theta\right\\}^{\frac{1}{r}}\leq\left\\{\int\limits_{0}^{2\pi}|1+k^{\mu}e^{i\theta}|^{pr}d\theta\right\\}^{\frac{1}{pr}}\left\\{\int\limits_{0}^{2\pi}|P^{\prime}(e^{i\theta})|^{qr}d\theta\right\\}^{\frac{1}{qr}}.$
(1.15)
Letting $q\rightarrow\infty$ (so that $p\rightarrow 1$) in (1.14), we get the
following result:
###### Corollary 1.11.
If $P(z)=a_{n}z^{n}+\sum_{\nu=\mu}^{n}a_{n-\nu}z^{n-\nu},$ $1\leq\mu\leq n,$
is a polynomial of degree $n$ having all its zeros in
$|z|\leq k,$ where $k\leq 1,$ then for every real or complex number $\alpha$
with $|\alpha|\geq k^{\mu}$ and for each $r>0,$
$n(|\alpha|-k^{\mu})\left\\{\int\limits_{0}^{2\pi}\left|P\left(e^{i\theta}\right)\right|^{r}d\theta\right\\}^{\frac{1}{r}}\leq\left\\{\int\limits_{0}^{2\pi}\left|1+k^{\mu}e^{i\theta}\right|^{r}d\theta\right\\}^{\frac{1}{r}}\underset{|z|=1}{Max}|D_{\alpha}P(z)|.$
(1.16)
As a generalization of Theorem 1.8, we present the following result:
###### Theorem 1.12.
If $P(z)=a_{n}z^{n}+\sum_{\nu=\mu}^{n}a_{n-\nu}z^{n-\nu}$ where $1\leq\mu\leq
n,$ is a polynomial of degree $n$ having all its zeros in $|z|\leq k$ where
$k\leq 1,$ then for every real or complex $\alpha,$ $\beta$ with $|\alpha|\geq k^{\mu},$
$|\beta|\leq 1$ and for each $r>0,$ $p>1,$ $q>1$ with $p^{-1}+q^{-1}=1,$ we have
$\displaystyle
n(|\alpha|-k^{\mu})\left\\{\int\limits_{0}^{2\pi}|P(e^{i\theta})+\beta
m|^{r}d\theta\right\\}^{\frac{1}{r}}\leq\left\\{\int\limits_{0}^{2\pi}|1+k^{\mu}e^{i\theta}|^{pr}d\theta\right\\}^{\frac{1}{pr}}\left\\{\int\limits_{0}^{2\pi}|D_{\alpha}P(e^{i\theta})|^{qr}d\theta\right\\}^{\frac{1}{qr}}$
(1.17)
where $m={Min}_{|z|=k}|P(z)|.$
If we divide both sides of (1.17) by $|\alpha|$ and make $\alpha\rightarrow\infty,$ we
get the following result:
###### Corollary 1.13.
If $P(z)=a_{n}z^{n}+\sum_{\nu=\mu}^{n}a_{n-\nu}z^{n-\nu},$ $1\leq\mu\leq n,$
is a polynomial of degree $n$ having all its zeros in $|z|\leq k$ where $k\leq
1,$ then for every real or complex $\beta$ with $|\beta|\leq 1$ and for each $r>0,$ $p>1,$ $q>1$ with $p^{-1}+q^{-1}=1,$ we have
$n\left\\{\int\limits_{0}^{2\pi}|P(e^{i\theta})+\beta
m|^{r}d\theta\right\\}^{\frac{1}{r}}\leq\left\\{\int\limits_{0}^{2\pi}|1+k^{\mu}e^{i\theta}|^{pr}d\theta\right\\}^{\frac{1}{pr}}\left\\{\int\limits_{0}^{2\pi}|P^{\prime}(e^{i\theta})|^{qr}d\theta\right\\}^{\frac{1}{qr}}$
(1.18)
where $m={Min}_{|z|=k}|P(z)|.$
Letting $q\rightarrow\infty$ (so that $p\rightarrow 1$) in (1.17), we get the
following result:
###### Corollary 1.14.
If $P(z)=a_{n}z^{n}+\sum_{\nu=\mu}^{n}a_{n-\nu}z^{n-\nu}$ where $1\leq\mu\leq
n,$ is a polynomial of degree $n$ having all its zeros in $|z|\leq k,$ where
$k\leq 1,$ then for every real or complex number $\alpha$ with $|\alpha|\geq
k^{\mu},$ every real or complex $\beta$ with $|\beta|\leq 1,$ and for each $r>0,$
$n(|\alpha|-k^{\mu})\left\\{\int\limits_{0}^{2\pi}\left|P\left(e^{i\theta}\right)+\beta
m\right|^{r}d\theta\right\\}^{\frac{1}{r}}\leq\left\\{\int\limits_{0}^{2\pi}\left|1+k^{\mu}e^{i\theta}\right|^{r}d\theta\right\\}^{\frac{1}{r}}\underset{|z|=1}{Max}|D_{\alpha}P(z)|$
(1.19)
where $m={Min}_{|z|=k}|P(z)|.$
## 2\. Lemmas
For the proofs of the theorems, we need the following Lemmas:
###### Lemma 2.1.
If $P(z)$ is a polynomial of degree at most $n$ having all its zeros in
$|z|\leq k,$ $k\leq 1,$ then for $|z|=1,$
$|Q^{\prime}(z)|+\dfrac{nm}{k^{n-1}}\leq k|P^{\prime}(z)|,$ (2.1)
where $Q(z)=z^{n}\overline{P(1/\overline{z})}$ and $m={Min}_{|z|=k}|P(z)|.$
The above Lemma is due to Govil and McTume [8].
###### Lemma 2.2.
Let $P(z)=a_{0}+\sum_{\nu=\mu}^{n}a_{\nu}z^{\nu},$ $1\leq\mu\leq n,$ be a
polynomial of degree $n$ which does not vanish for $|z|<k,$ where $k\geq 1.$
Then for $|z|=1,$
$k^{\mu}|P^{\prime}(z)|\leq|Q^{\prime}(z)|,$ (2.2)
where $Q(z)=z^{n}\overline{P(1/\overline{z})}.$
The above Lemma is due to Chan and Malik [5]. By applying Lemma 2.2 to the
polynomial $z^{n}\overline{P(1/\overline{z})},$ one can easily deduce:
###### Lemma 2.3.
Let $P(z)=a_{n}z^{n}+\sum_{\nu=\mu}^{n}a_{n-\nu}z^{n-\nu},$ $1\leq\mu\leq n,$
be a polynomial of degree $n$ having all its zeros in $|z|\leq k,$ where
$k\leq 1.$ Then for $|z|=1,$
$k^{\mu}|P^{\prime}(z)|\geq|Q^{\prime}(z)|,$ (2.3)
where $Q(z)=z^{n}\overline{P(1/\overline{z})}.$
## 3\. Proof of Theorems
###### Proof of Theorem 1.2.
Let $Q(z)=z^{n}\overline{P(1/\overline{z})}$ then
$P(z)=z^{n}\overline{Q(1/\overline{z})}$ and it can be easily verified that
for $|z|=1,$
$|Q^{\prime}(z)|=|nP(z)-zP^{\prime}(z)|\,\,\,\,\textnormal{and}\,\,\,\,|P^{\prime}(z)|=|nQ(z)-zQ^{\prime}(z)|.$
(3.1)
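For completeness, we sketch the verification of (3.1), which is a routine coefficient computation. Writing $P(z)=\sum_{j=0}^{n}a_{j}z^{j},$ we have $Q(z)=\sum_{j=0}^{n}\overline{a_{j}}z^{n-j},$ so that
$nP(z)-zP^{\prime}(z)=\sum_{j=0}^{n}(n-j)a_{j}z^{j}\,\,\,\,\textnormal{and}\,\,\,\,Q^{\prime}(z)=\sum_{j=0}^{n}(n-j)\overline{a_{j}}z^{n-j-1}.$
For $|z|=1$ we have $z^{-j}=\overline{z^{j}},$ hence
$|Q^{\prime}(z)|=\left|\sum_{j=0}^{n}(n-j)\overline{a_{j}z^{j}}\right|=|nP(z)-zP^{\prime}(z)|,$
and the second identity in (3.1) follows by interchanging the roles of $P$ and $Q.$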
By Lemma 2.1, we have for every $\beta$ with $|\beta|\leq 1$ and $|z|=1,$
$\displaystyle\left|Q^{\prime}(z)+\bar{\beta}\dfrac{nmz^{n-1}}{k^{n-1}}\right|\leq|Q^{\prime}(z)|+\dfrac{nm}{k^{n-1}}\leq
k|P^{\prime}(z)|.$ (3.2)
Using (3.1) in (3.2), for $|z|=1$ we have
$\left|Q^{\prime}(z)+\bar{\beta}\dfrac{nmz^{n-1}}{k^{n-1}}\right|\leq
k|nQ(z)-zQ^{\prime}(z)|.$ (3.3)
By Lemma 2.3 with $\mu=1,$ for every real or complex number $\alpha$ with
$|\alpha|\geq k$ and $|z|=1,$ we have
$\displaystyle|D_{\alpha}P(z)|$
$\displaystyle\geq|\alpha||P^{\prime}(z)|-|Q^{\prime}(z)|$
$\displaystyle\geq(|\alpha|-k)|P^{\prime}(z)|.$ (3.4)
Since $P(z)$ has all its zeros in $|z|\leq k\leq 1,$ it follows by Gauss-Lucas
Theorem that all the zeros of $P^{\prime}(z)$ also lie in $|z|\leq k\leq 1.$
This implies that the polynomial
$z^{n-1}\overline{P^{\prime}(1/\overline{z})}\equiv nQ(z)-zQ^{\prime}(z)$
does not vanish in $|z|<1.$ Therefore, it follows from (3.3) that the function
$w(z)=\dfrac{z\left(Q^{\prime}(z)+\bar{\beta}\dfrac{nmz^{n-1}}{k^{n-1}}\right)}{k\left(nQ(z)-zQ^{\prime}(z)\right)}$
is analytic for $|z|\leq 1$ and $|w(z)|\leq 1$ for $|z|=1.$ Furthermore,
$w(0)=0.$ Thus the function $1+kw(z)$ is subordinate to the function $1+kz$
for $|z|\leq 1.$ Hence by a well known property of subordination [9], we have
$\int\limits_{0}^{2\pi}\left|1+kw(e^{i\theta})\right|^{r}d\theta\leq\int\limits_{0}^{2\pi}\left|1+ke^{i\theta}\right|^{r}d\theta,\,\,\,r>0.$
(3.5)
Now
$1+kw(z)=\dfrac{n\left(Q(z)+\bar{\beta}\dfrac{mz^{n}}{k^{n-1}}\right)}{nQ(z)-zQ^{\prime}(z)},$
and
$|P^{\prime}(z)|=|z^{n-1}\overline{P^{\prime}(1/\overline{z})}|=|nQ(z)-zQ^{\prime}(z)|,\,\,\,\textrm{for}\,\,\,|z|=1,$
therefore for $|z|=1,$
$n\left|Q(z)+\bar{\beta}\dfrac{mz^{n}}{k^{n-1}}\right|=|1+kw(z)||nQ(z)-zQ^{\prime}(z)|=|1+kw(z)||P^{\prime}(z)|.$
equivalently,
$\displaystyle
n\left|z^{n}\overline{P(1/\overline{z})}+\bar{\beta}\dfrac{mz^{n}}{k^{n-1}}\right|=|1+kw(z)||P^{\prime}(z)|.$
This implies
$n\left|P(z)+\beta\dfrac{m}{k^{n-1}}\right|=|1+kw(z)||P^{\prime}(z)|\,\,\,\,\textnormal{for}\,\,\,\,|z|=1.$
(3.6)
From (3.4) and (3.6), we deduce that for $r>0,$
$n^{r}(|\alpha|-k)^{r}\int\limits_{0}^{2\pi}\left|P(e^{i\theta})+\beta\dfrac{m}{k^{n-1}}\right|^{r}d\theta\leq\int\limits_{0}^{2\pi}|1+kw(e^{i\theta})|^{r}|D_{\alpha}P(e^{i\theta})|^{r}d\theta.$
This gives with the help of Hölder’s inequality and using (3.5), for $p>1,$
$q>1$ with $p^{-1}+q^{-1}=1,$
$n^{r}(|\alpha|-k)^{r}\int\limits_{0}^{2\pi}\left|P(e^{i\theta})+\beta\dfrac{m}{k^{n-1}}\right|^{r}d\theta\leq\left(\int\limits_{0}^{2\pi}|1+ke^{i\theta}|^{pr}d\theta\right)^{1/p}\left(\int\limits_{0}^{2\pi}|D_{\alpha}P(e^{i\theta})|^{qr}d\theta\right)^{1/q},$
equivalently,
$n(|\alpha|-k)\left\\{\int\limits_{0}^{2\pi}\left|P(e^{i\theta})+\beta\dfrac{m}{k^{n-1}}\right|^{r}d\theta\right\\}^{\frac{1}{r}}\leq\left\\{\int\limits_{0}^{2\pi}|1+ke^{i\theta}|^{pr}d\theta\right\\}^{\frac{1}{pr}}\left\\{\int\limits_{0}^{2\pi}|D_{\alpha}P(e^{i\theta})|^{qr}d\theta\right\\}^{\frac{1}{qr}}$
which proves the desired result. ∎
###### Proof of Theorem 1.8.
Since $P(z)$ has all its zeros in $|z|\leq k,$ therefore, by using Lemma 2.3
we have for $|z|=1,$
$|Q^{\prime}(z)|\leq k^{\mu}|nQ(z)-zQ^{\prime}(z)|.$ (3.7)
Now for every real or complex number $\alpha$ with $|\alpha|\geq k^{\mu},$ we
have
$\displaystyle|D_{\alpha}P(z)|$
$\displaystyle=|nP(z)+(\alpha-z)P^{\prime}(z)|$
$\displaystyle\geq|\alpha||P^{\prime}(z)|-|nP(z)-zP^{\prime}(z)|,$
by using (3.1) and Lemma 2.3, for $|z|=1,$ we get
$\displaystyle|D_{\alpha}P(z)|$
$\displaystyle\geq|\alpha||P^{\prime}(z)|-|Q^{\prime}(z)|$
$\displaystyle\geq(|\alpha|-k^{\mu})|P^{\prime}(z)|.$ (3.8)
Since $P(z)$ has all its zeros in $|z|\leq k\leq 1,$ it follows by Gauss-Lucas
Theorem that all the zeros of $P^{\prime}(z)$ also lie in $|z|\leq k\leq 1.$
This implies that the polynomial
$z^{n-1}\overline{P^{\prime}(1/\overline{z})}\equiv nQ(z)-zQ^{\prime}(z)$
does not vanish in $|z|<1.$ Therefore, it follows from (3.7) that the function
$w(z)=\dfrac{zQ^{\prime}(z)}{k^{\mu}\left(nQ(z)-zQ^{\prime}(z)\right)}$
is analytic for $|z|\leq 1$ and $|w(z)|\leq 1$ for $|z|=1.$ Furthermore,
$w(0)=0.$ Thus the function $1+k^{\mu}w(z)$ is subordinate to the function
$1+k^{\mu}z$ for $|z|\leq 1.$ Hence by a well known property of subordination
[9], we have
$\int\limits_{0}^{2\pi}\left|1+k^{\mu}w(e^{i\theta})\right|^{r}d\theta\leq\int\limits_{0}^{2\pi}\left|1+k^{\mu}e^{i\theta}\right|^{r}d\theta,\,\,\,r>0.$
(3.9)
Now
$1+k^{\mu}w(z)=\dfrac{nQ(z)}{nQ(z)-zQ^{\prime}(z)},$
and
$|P^{\prime}(z)|=|z^{n-1}\overline{P^{\prime}(1/\overline{z})}|=|nQ(z)-zQ^{\prime}(z)|,\,\,\,\textrm{for}\,\,\,|z|=1,$
therefore, for $|z|=1,$
$n|Q(z)|=|1+k^{\mu}w(z)||nQ(z)-zQ^{\prime}(z)|=|1+k^{\mu}w(z)||P^{\prime}(z)|.$
(3.10)
From (3.8) and (3.10), we deduce that for $r>0,$
$n^{r}(|\alpha|-k^{\mu})^{r}\int\limits_{0}^{2\pi}|P(e^{i\theta})|^{r}d\theta\leq\int\limits_{0}^{2\pi}|1+k^{\mu}w(e^{i\theta})|^{r}|D_{\alpha}P(e^{i\theta})|^{r}d\theta.$
This gives with the help of Hölder’s inequality and (3.9), for $p>1,$ $q>1$
with $p^{-1}+q^{-1}=1,$
$n^{r}(|\alpha|-k^{\mu})^{r}\int\limits_{0}^{2\pi}|P(e^{i\theta})|^{r}d\theta\leq\left(\int\limits_{0}^{2\pi}|1+k^{\mu}e^{i\theta}|^{pr}d\theta\right)^{1/p}\left(\int\limits_{0}^{2\pi}|D_{\alpha}P(e^{i\theta})|^{qr}d\theta\right)^{1/q},$
equivalently,
$n(|\alpha|-k^{\mu})\left\\{\int\limits_{0}^{2\pi}|P(e^{i\theta})|^{r}d\theta\right\\}^{\frac{1}{r}}\leq\left\\{\int\limits_{0}^{2\pi}|1+k^{\mu}e^{i\theta}|^{pr}d\theta\right\\}^{\frac{1}{pr}}\left\\{\int\limits_{0}^{2\pi}|D_{\alpha}P(e^{i\theta})|^{qr}d\theta\right\\}^{\frac{1}{qr}}$
which proves the desired result.
∎
###### Proof of Theorem 1.12.
Let $m=Min_{|z|=k}|P(z)|,$ so that $m\leq|P(z)|$ for $|z|=k.$ If $P(z)$ has a
zero on $|z|=k$ then $m=0$ and the result follows from Theorem 1.8. Henceforth we
suppose that all the zeros of $P(z)$ lie in $|z|<k.$ Therefore for every
$\beta$ with $|\beta|<1,$ we have $|m\beta|<|P(z)|$ for $|z|=k.$ Since $P(z)$
has all its zeros in $|z|<k\leq 1,$ it follows by Rouché’s theorem that all
the zeros of $F(z)=P(z)+\beta m$ lie in $|z|<k\leq 1.$ If
$G(z)=z^{n}\overline{F(1/\overline{z})}=Q(z)+\bar{\beta}mz^{n},$ then by
applying Lemma 2.3 to polynomial $F(z)=P(z)+\beta m,$ we have for $|z|=1,$
$|G^{\prime}(z)|\leq k^{\mu}|F^{\prime}(z)|.$
This gives
$|Q^{\prime}(z)+nm\bar{\beta}z^{n-1}|\leq k^{\mu}|P^{\prime}(z)|.$ (3.11)
Using (3.1) in (3.11), for $|z|=1$ we have
$|Q^{\prime}(z)+nm\bar{\beta}z^{n-1}|\leq k^{\mu}|nQ(z)-zQ^{\prime}(z)|$
(3.12)
Since $P(z)$ has all its zeros in $|z|<k\leq 1,$ it follows by Gauss-Lucas
Theorem that all the zeros of $P^{\prime}(z)$ also lie in $|z|<k\leq 1.$ This
implies that the polynomial
$z^{n-1}\overline{P^{\prime}(1/\overline{z})}\equiv nQ(z)-zQ^{\prime}(z)$
does not vanish in $|z|<1.$ Therefore, it follows from (3.12) that the
function
$w(z)=\dfrac{z(Q^{\prime}(z)+nm\bar{\beta}z^{n-1})}{k^{\mu}\left(nQ(z)-zQ^{\prime}(z)\right)}$
is analytic for $|z|\leq 1$ and $|w(z)|\leq 1$ for $|z|=1.$ Furthermore,
$w(0)=0.$ Thus the function $1+k^{\mu}w(z)$ is subordinate to the function
$1+k^{\mu}z$ for $|z|\leq 1.$ Hence by a well known property of subordination
[9], we have
$\int\limits_{0}^{2\pi}\left|1+k^{\mu}w(e^{i\theta})\right|^{r}d\theta\leq\int\limits_{0}^{2\pi}\left|1+k^{\mu}e^{i\theta}\right|^{r}d\theta,\,\,\,r>0.$
(3.13)
Now
$1+k^{\mu}w(z)=\dfrac{n(Q(z)+m\bar{\beta}z^{n})}{nQ(z)-zQ^{\prime}(z)},$
and
$|P^{\prime}(z)|=|z^{n-1}\overline{P^{\prime}(1/\overline{z})}|=|nQ(z)-zQ^{\prime}(z)|,\,\,\,\textrm{for}\,\,\,|z|=1,$
therefore, for $|z|=1,$
$n|Q(z)+m\bar{\beta}z^{n}|=|1+k^{\mu}w(z)||nQ(z)-zQ^{\prime}(z)|=|1+k^{\mu}w(z)||P^{\prime}(z)|.$
This implies
$n|G(z)|=|1+k^{\mu}w(z)||nQ(z)-zQ^{\prime}(z)|=|1+k^{\mu}w(z)||P^{\prime}(z)|.$
(3.14)
Since $|F(z)|=|G(z)|$ for $|z|=1,$ therefore, from (3.14) we get
$n|P(z)+\beta
m|=|1+k^{\mu}w(z)||P^{\prime}(z)|\,\,\,\,\textnormal{for}\,\,\,\,\,|z|=1.$
(3.15)
From (3.8) and (3.15), we deduce that for $r>0,$
$n^{r}(|\alpha|-k^{\mu})^{r}\int\limits_{0}^{2\pi}|P(e^{i\theta})+\beta
m|^{r}d\theta\leq\int\limits_{0}^{2\pi}|1+k^{\mu}w(e^{i\theta})|^{r}|D_{\alpha}P(e^{i\theta})|^{r}d\theta.$
Using Hölder’s inequality in conjunction with (3.13), this gives, for $p>1,$
$q>1$ with $p^{-1}+q^{-1}=1,$
$n^{r}(|\alpha|-k^{\mu})^{r}\int\limits_{0}^{2\pi}|P(e^{i\theta})+\beta
m|^{r}d\theta\leq\left(\int\limits_{0}^{2\pi}|1+k^{\mu}e^{i\theta}|^{pr}d\theta\right)^{1/p}\left(\int\limits_{0}^{2\pi}|D_{\alpha}P(e^{i\theta})|^{qr}d\theta\right)^{1/q},$
equivalently,
$n(|\alpha|-k^{\mu})\left\\{\int\limits_{0}^{2\pi}|P(e^{i\theta})+\beta
m|^{r}d\theta\right\\}^{\frac{1}{r}}\leq\left\\{\int\limits_{0}^{2\pi}|1+k^{\mu}e^{i\theta}|^{pr}d\theta\right\\}^{\frac{1}{pr}}\left\\{\int\limits_{0}^{2\pi}|D_{\alpha}P(e^{i\theta})|^{qr}d\theta\right\\}^{\frac{1}{qr}}$
which proves the desired result. ∎
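For illustration only, the inequality just established can be checked numerically on a concrete example. The sketch below assumes the standard polar-derivative definition $D_{\alpha}P(z)=nP(z)+(\alpha-z)P^{\prime}(z)$ and takes $\mu=1$; the value of $\mu$ and the hypotheses on $P$ are those fixed earlier in the paper, and the polynomial, $\alpha$, $\beta$, $r$, $p$, $q$ below are merely admissible sample choices.

```python
import numpy as np

# All zeros strictly inside |z| < k, as in the hypothesis (mu = 1 assumed here).
k, mu = 0.8, 1
zeros = np.array([0.3, -0.2 + 0.4j, 0.5j])
n = len(zeros)
P = np.poly(zeros)                       # coefficients of a monic degree-n polynomial
dP = np.polyder(P)

theta = np.linspace(0.0, 2*np.pi, 20001)
z = np.exp(1j*theta)
m = np.min(np.abs(np.polyval(P, k*np.exp(1j*theta))))   # m = min_{|z|=k} |P(z)|

alpha, beta = 2.0, 0.7                   # |alpha| >= k^mu and |beta| < 1
r, p, q = 1.5, 2.0, 2.0                  # r > 0 and 1/p + 1/q = 1
DaP = n*np.polyval(P, z) + (alpha - z)*np.polyval(dP, z)  # polar derivative (assumed definition)

integral = lambda f: np.trapz(f, theta)
lhs = n*(abs(alpha) - k**mu) * integral(np.abs(np.polyval(P, z) + beta*m)**r)**(1/r)
rhs = (integral(np.abs(1 + k**mu*z)**(p*r))**(1/(p*r))
       * integral(np.abs(DaP)**(q*r))**(1/(q*r)))
print(lhs, rhs, lhs <= rhs)              # the stated inequality should hold
```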
## References
* [1] A. Aziz, Integral mean estimates for polynomials with restricted zeros, J. Approx. Theory 55 (1988), 232-239.
* [2] A. Aziz and N. A. Rather, A refinement of a theorem of Paul Turán concerning polynomials, Math. Inequal. Appl. 1 (1998), 231-238.
* [3] A. Aziz and N. A. Rather, Inequalities for the polar derivative of a polynomial with restricted zeros, Math. Balkanica 17 (2003), 15-28.
* [4] A. Aziz and W. M. Shah, An integral mean estimate for polynomials, Indian J. Pure Appl. Math. 28 (1997), 1413-1419.
* [5] T. N. Chan and M. A. Malik, On the Erdös-Lax theorem, Proc. Indian Acad. Sci. 92 (1983), 191-193.
* [6] K. K. Dewan et al., Some inequalities for the polar derivative of a polynomial, Southeast Asian Bull. Math. 34 (2010), 69-77.
* [7] N. K. Govil, On the derivative of a polynomial, Proc. Amer. Math. Soc. 41 (1973), 543-546.
* [8] N. K. Govil and G. N. McTume, Some generalizations involving the polar derivative for an inequality of Paul Turán, Acta Math. Hungar. 104 (2004), 115-126.
* [9] E. Hille, Analytic Function Theory, Vol. II, Ginn and Company, New York, Toronto, 1962.
* [10] M. A. Malik, On the derivative of a polynomial, J. Lond. Math. Soc. (2) 1 (1969), 57-60.
* [11] M. A. Malik, An integral mean estimate for polynomials, Proc. Amer. Math. Soc. 91 (1984), 281-284.
* [12] P. Turán, Über die Ableitung von Polynomen, Compositio Math. 7 (1939), 89-95 (German).
# Determining the neutron star surface magnetic field strength of two Z
sources
Guoqiang Ding$^{1}$, Chunping Huang$^{1,2}$, Yanan Wang$^{1,2}$
$^{1}$Xinjiang Astronomical Observatory, Chinese Academy of Sciences, 150 Science 1-Street, Urumqi, Xinjiang 830011, China; email: [email protected]
$^{2}$University of Chinese Academy of Sciences, China
(2012)
###### Abstract
From the extreme positions of the disk motion, we infer the neutron star (NS)
surface magnetic field strength ($B_{0}$) of the Z sources GX 17+2 and Cyg X-2. The
inferred $B_{0}$ of GX 17+2 and Cyg X-2 are $\sim$(1–5)$\times 10^{8}\ {\rm G}$ and
$\sim$(1–3)$\times 10^{8}\ {\rm G}$, respectively, which are not lower than those of
millisecond X-ray pulsars or atoll sources. It is likely that the NS magnetic axis
of Z sources is parallel to the axis of rotation, which could explain the lack of
pulsations in these sources.
###### keywords:
Compact object, Neutron star, Accretion disk
Volume 290, Feeding compact objects: Accretion on all scales, eds. C.M. Zhang, T. Belloni, M. Méndez & S.N. Zhang
## 1 Introduction
The neutron star (NS) surface magnetic field strength ($B_{0}$) could be a
critical parameter responsible for the behaviors of NS X-ray binaries (NSXB).
However, it is difficult to measure $B_{0}$ directly. Nevertheless, it is
feasible to estimate $B_{0}$ from observable phenomena, such as quasi-periodic
oscillations (QPOs) (e.g. Focke 1996) or the “propeller” effect (e.g. Zhang et al.
1998), or to infer it from theoretical models (Zhang & Kojima 2006). Previous
studies suggest that $B_{0}$ is larger in Z sources than in atoll sources (Focke
1996; Zhang et al. 1998; Chen et al. 2006; Ding et al. 2011) or millisecond X-ray
pulsars (Cackett et al. 2009; Di Salvo & Burderi 2003). Unlike in millisecond X-ray
pulsars or atoll sources, however, pulsations have never been observed in Z
sources. Why?
## 2 Method and Data Analysis
As discussed by Zhang et al. (1998), during the regular accretion state the
inner disk radius ($R_{\rm in}$) of an NSXB, which is equal to the
magnetospheric radius ($R_{\rm m}$) (Lamb et al. 1973), will vary between the
radius of the innermost stable circular orbit (ISCO) and the corotation radius
($R_{\rm c}$). Therefore, assuming $R_{\rm in}=R_{\rm c}$, we obtain the upper
limit of $B_{0}$, because more magnetic pressure is needed to push the accretion
disk farther out, which gives
$B_{\rm 0,\ max}=1.5\times 10^{12}\left(\frac{\dot{M}_{\rm m}}{\dot{M}_{\rm
Edd}}\right)^{1/2}\left(\frac{M_{\rm ns}}{1.4\mbox{
}M_{\odot}}\right)^{5/6}\left(\frac{R_{\rm ns}}{10^{6}\mbox{ }{\rm
cm}}\right)^{-3}f^{-7/6}{\xi_{1}}^{-7/4}\mbox{ }{\rm G},$ (1)
where $f$ is the NS spin frequency and $\xi_{1}$ is a coefficient in the range
of 0.87–0.95 (Wang 1995). Similarly, letting $R_{\rm in}=R_{\rm ISCO}$, we obtain
the lower limit of $B_{0}$, because less magnetic pressure is needed to balance
the gas pressure from the disk, which gives
$B_{\rm 0,\ min}=2.8\times 10^{8}\left(\frac{\dot{M}_{\rm m}}{\dot{M}_{\rm
Edd}}\right)^{1/2}\left(\frac{M_{\rm ns}}{1.4\mbox{
}M_{\odot}}\right)^{2}\left(\frac{R_{\rm ns}}{10^{6}\mbox{ }{\rm
cm}}\right)^{-3}{\xi_{1}}^{-7/4}\mbox{ }{\rm G}.$ (2)
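For a quick numerical illustration of Eqs. (1) and (2), one may evaluate them directly; the accretion rate, NS mass, radius and $\xi_{1}$ used below are placeholder values, not the fitted numbers obtained from the spectral analysis described next.

```python
def b0_limits(mdot_over_edd, f_spin_hz, m_ns_msun=1.4, r_ns_cm=1.0e6, xi1=0.9):
    """Return (B0_min, B0_max) in Gauss from Eqs. (2) and (1)."""
    b_max = (1.5e12 * mdot_over_edd**0.5 * (m_ns_msun/1.4)**(5.0/6.0)
             * (r_ns_cm/1.0e6)**(-3) * f_spin_hz**(-7.0/6.0) * xi1**(-7.0/4.0))
    b_min = (2.8e8 * mdot_over_edd**0.5 * (m_ns_msun/1.4)**2
             * (r_ns_cm/1.0e6)**(-3) * xi1**(-7.0/4.0))
    return b_min, b_max

# Illustrative call: Eddington-rate accretion and the f_max = 584 Hz quoted for GX 17+2
print(b0_limits(1.0, 584.0))
```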
Using HEASOFT 6.11 and FTOOLS V6.11, we analyze the RXTE observations made
during Oct. 3–12, 1999 (a total of $\sim$297.6 ks) for GX 17+2 and those made during
Jul. 2–7, 1998 (a total of $\sim$42 ks) for Cyg X-2. We divide the track on the
hardness-intensity diagrams (HIDs) into several regions and then produce the
spectrum of each region. XSPEC version 12.7 is used to fit the spectra with
spectral models. From the spectral fitting parameters, we calculate mass accretion
rates. Then, with the known NS mass and radius, making use of the above equations,
we calculate the limits of $B_{0}$.
## 3 Result and Discussion
With the mass accretion rates and $f_{\rm max}=584\ {\rm Hz}$ for GX 17+2 and
$f_{\rm max}=658\ {\rm Hz}$ for Cyg X-2 (Yin et al. 2007), we infer the limits of
$B_{0}$ and get $(1\leq B_{0}\leq 5)\times 10^{8}\ {\rm G}$ and $(1\leq B_{0}\leq
3)\times 10^{8}\ {\rm G}$ for GX 17+2 and Cyg X-2, respectively. These values are
higher than the reported $B_{\rm 0}\sim(0.3-1)\times 10^{8}$ G of the atoll source
Aql X-1 (Zhang et al. 1998), compatible with $B_{\rm 0}\sim(1-5)\times 10^{8}$ G of
the accreting millisecond X-ray pulsar SAX J1808.4–3658 (Di Salvo & Burderi 2003),
but lower than $B_{\rm 0}\sim(1-3)\times 10^{9}$ G of the first transient Z source
XTE J1701-462 (Ding et al. 2011) or $B_{\rm 0}\sim(1-8)\times 10^{9}$ G of Cir X-1
(Ding et al. 2006).
Since the $B_{0}$ of Z sources is not lower than that of millisecond X-ray
pulsars or atoll sources, pulsations should have been observed in the former, as in
the latter. However, pulsations have not been detected in Z sources. Why? Pringle &
Rees (1972) proposed that the NS pulsation emission would depend on the shape of
the emission cone, the orientations of the magnetic and rotation axes, and the line
of sight; furthermore, Lamb et al. (1973) suggested that a detectable pulsation
requires the NS magnetic axis not to coincide in direction with the axis of
rotation. Therefore, it is likely that the NS magnetic axis of Z sources is
parallel to the axis of rotation, or that the rotation axis is aligned with the
line of sight, either of which could result in the lack of pulsations in these
sources.
## 4 Acknowledgements
This work is supported by the National Basic Research Program of China (973
Program 2009CB824800) and the Natural Science Foundation of China under grant
no. 11143013.
## References
* [Cackett et al. (2009)] Cackett, E.M., Altamirano, D., Patruno, A., Miller, J.M., Reynolds, M., Linares, M., & Wijnands, R. 2009, ApJ, 649, L21
* [Chen et al. (2006)] Chen, X., Zhang, S.N., & Ding, G.Q. 2006, ApJ, 650, 299
* [Ding et al. (2006)] Ding, G.Q., Zhang, S.N., Li, T. P., & Qu, J. L. 2006, ApJ, 645, 576
* [Ding et al. (2011)] Ding, G.Q., Zhang, S.N., Wang, N., Qu, J.L., & Yan, S.P. 2011, AJ, 142, 34
* [Di Salvo & Burderi (2003)] Di Salvo, T., & Burderi, L. 2003, A&A, 397, 723
* [Focke (1996)] Focke, W.B. 1996, ApJ, 470, L127
* [Lamb et al. (1973)] Lamb, F.K., Pethick, C.J., & Pines, D. 1973, ApJ, 184, 271
* [Pringle & Rees (1972)] Pringle, J.E., & Rees, M.J. 1972, A&A, 21, 1
* [Wang (1995)] Wang, Y.-M. 1995, ApJ, 449, L153
* [Yin et al. (2007)] Yin, H.X., Zhang, C.M., Zhao, Y. H., Lei, Y. J., Qu, J. L., Song, L. M., & Zhang, F. 2007, A&A, 471, 381
* [Zhang & Kojima (2006)] Zhang, C.M., & Kojima 2006, MNRAS, 366, 137
* [Zhang et al. (1998)] Zhang, S.N., Yu, W., & Zhang, W. 1998, ApJ, 494, L71
# Using cascading Bloom filters to improve the memory usage for de Bruijn
graphs
Gregory Kucherov, Université Paris-Est & CNRS, Laboratoire d’Informatique
Gaspard Monge, Marne-la-Vallée, France, [email protected]
Kamil Salikhov, Lomonosov Moscow State University, Moscow, Russia,
[email protected]
###### Abstract
De Bruijn graphs are widely used in bioinformatics for processing next-
generation sequencing data. Due to the very large size of NGS datasets, it is
essential to represent de Bruijn graphs compactly, and several approaches to
this problem have been proposed recently. In this note, we show how to reduce
the memory required by the algorithm of [2] that represents de Bruijn graphs
using Bloom filters. Our method requires 25% to 42% less memory than the method
of [2], with only an insignificant increase in pre-processing and query times.
## 1 Introduction
Modern Next-Generation Sequencing (NGS) technologies generate huge volumes of
short nucleotide sequences (_reads_) drawn from the DNA sample under study.
The length of a read varies from 35 to about 400 basepairs (letters) and the
number of reads may be hundreds of millions, thus the total volume of data may
reach tens or even hundreds of Gb.
Many computational tools dealing with NGS data, especially those devoted to
genome assembly, are based on the concept of _de Bruijn graph_ , see e.g. [5].
The nodes of the de Bruijn graph are all distinct $k$-mers occurring in the
reads, and two $k$-mers are linked by an edge if they occur at successive
positions in a read (note that this is actually a subgraph of the de Bruijn
graph under its classical combinatorial definition; however, we still call it a
de Bruijn graph to follow the terminology common in the bioinformatics
literature). In practice, the value of $k$ is between $20$ and $50$.
The idea of using de Bruijn graphs for genome assembly goes back to the “pre-
NGS era” [6]. Note, however, that de novo assembly is not the only application
of those graphs [4].
Due to the very large size of NGS datasets, it is essential to represent de
Bruijn graphs compactly. Recently, several papers have been published that
propose different approaches to compressing de Bruijn graphs [3, 7, 2, 1].
In this note, we focus on the method based on Bloom filters proposed in [2].
Bloom filters provide a very space-efficient representation of a subset of a
given set (in our case, a subset of $k$-mers), at the price of allowing one-
sided errors, namely false positives. The method of [2] is based on the
following idea: if all queried nodes ($k$-mers) are only those which are
reachable from some node known to belong to the graph, then only a fraction of
all false positives can actually occur. Storing these false positives
explicitly leads to an exact (false positive free) and space-efficient
representation of the de Bruijn graph.
In this note we show how to improve this scheme by improving the
representation of the set of false positives. We achieve this by iteratively
applying a Bloom filter to represent the set of false positives, then the set
of “false false positives” etc. We show analytically that this cascade of
Bloom filters allows for a considerable further economy of memory, improving
the method of [2]. Depending on the value of $k$, our method requires 25% to
42% less memory with respect to the method of [2]. Moreover, with our method,
the memory grows very little with the growth of $k$. Finally, the pre-
processing and query times increase only insignificantly compared to the
original method of [2].
## 2 Cascading Bloom filter
Let $T_{0}$ be the set of $k$-mers of the de Bruijn graph that we want to
store. The method of [2] stores $T_{0}$ via a bitmap $B_{1}$ using a Bloom
filter, together with the set $T_{1}$ of critical false positives. $T_{1}$
consists of those $k$-mers which are reachable from $T_{0}$ by a graph edge
and which are stored in $B_{1}$ "by mistake", i.e. which belong to $B_{1}$ but
are not in $T_{0}$. (By a slight abuse of language, we say that “an element
belongs to $B_{j}$” if it is accepted by the corresponding Bloom filter.)
$B_{1}$ and $T_{1}$ are sufficient to represent the graph provided that the
only queried $k$-mers are those which are adjacent to $k$-mers of $T_{0}$.
The idea we introduce in this note is to use this structure recursively and
represent the set $T_{1}$ by a new bitmap $B_{2}$ and a new set $T_{2}$, then
represent $T_{2}$ by $B_{3}$ and $T_{3}$, and so on. More formally, starting
from $B_{1}$ and $T_{1}$ defined as above, we define a series of bitmaps
$B_{1},B_{2},\ldots$ and a series of sets $T_{1},T_{2},\ldots$ as follows.
$B_{2}$ will store the set $T_{1}$ of critical false positives using a Bloom
filter, and the set $T_{2}$ will contain “true nodes” from $T_{0}$ that are
stored in $B_{2}$ “by mistake” (we call them false$_{2}$ positives). $B_{3}$ and
$T_{3}$, and, generally, $B_{i}$ and $T_{i}$ are defined similarly: $B_{i}$
stores $k$-mers of $T_{i-1}$ using a Bloom filter, and $T_{i}$ contains
$k$-mers stored in $B_{i}$ "by mistake", i.e. those $k$-mers that do not
belong to $T_{i-1}$ but belong to $T_{i-2}$ (we call them false$_{i}$ positives).
Observe that $T_{0}\cap T_{1}=\emptyset$, $T_{0}\supseteq T_{2}\supseteq
T_{4}\ldots$ and $T_{1}\supseteq T_{3}\supseteq T_{5}\ldots$.
The following lemma shows that the construction is correct, that is, it allows
one to verify whether or not a given $k$-mer belongs to the set $T_{0}$.
###### Lemma 1
Given an element ($k$-mer) $K$, consider the smallest $i$ such that $K\not\in
B_{i+1}$ (if $K\not\in B_{1}$, we define $i=0$). Then, if $i$ is odd, then
$K\in T_{0}$, and if $i$ is even (including zero), then $K\not\in T_{0}$.
Proof: Observe that $K\not\in B_{j+1}$ implies $K\not\in T_{j}$ by the basic
property of Bloom filters. We first check the Lemma for $i=0,1$.
For $i=0$, we have $K\not\in B_{1}$, and then $K\not\in T_{0}$.
For $i=1$, we have $K\in B_{1}$ but $K\not\in B_{2}$. The latter implies that
$K\not\in T_{1}$, and then $K$ must be a false$_{2}$ positive, that is $K\in
T_{0}$. Note that here we use the fact that the only queried $k$-mers $K$ are
either nodes of $T_{0}$ or their neighbors in the graph (see [2]), and
therefore if $K\in B_{1}$ and $K\not\in T_{0}$ then $K\in T_{1}$.
For the general case $i\geq 2$, we show by induction that $K\in T_{i-1}$.
Indeed, $K\in B_{1}\cap\ldots\cap B_{i}$ implies $K\in T_{i-1}\cup T_{i}$
(which, again, is easily seen by induction), and $K\not\in B_{i+1}$ implies
$K\not\in T_{i}$.
Since $T_{i-1}\subseteq T_{0}$ for odd $i$, and $T_{i-1}\subseteq T_{1}$ for
even $i$ (for $T_{0}\cap T_{1}=\emptyset$), the lemma follows. $\Box$
Naturally, the lemma provides an algorithm to check if a given $k$-mer $K$
belongs to the graph: it suffices to check successively if it belongs to
$B_{1},B_{2},\ldots$ until we encounter the first $B_{i+1}$ which does not
contain $K$. Then the answer will simply depend on whether $i$ is even or odd.
In our reasoning so far, we assumed an infinite number of bitmaps $B_{i}$. Of
course, in practice we cannot store infinitely many (and even simply many)
bitmaps. Therefore we “truncate” the construction at some step $t$ and store a
finite set of bitmaps $B_{1},B_{2},\ldots,B_{t}$ together with an explicit
representation of $T_{t}$. The procedure of Lemma 1 is extended in the obvious
way: if for all $1\leq i\leq t$, $K\in B_{i}$, then the answer is determined
by directly checking $K\in T_{t}$.
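For concreteness, the truncated query procedure can be sketched as follows; the toy Bloom filter below is only a stand-in for any standard implementation and is not the code of [2].

```python
import hashlib

class Bloom:
    def __init__(self, size, num_hashes=3):
        self.size = max(size, 1)
        self.num_hashes = num_hashes
        self.bits = bytearray((self.size + 7) // 8)
    def _positions(self, item):
        for i in range(self.num_hashes):
            h = hashlib.sha1(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.size
    def add(self, item):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)
    def __contains__(self, item):
        return all((self.bits[p // 8] >> (p % 8)) & 1 for p in self._positions(item))

def query(kmer, filters, T_t):
    """filters = [B_1, ..., B_t]; T_t is stored explicitly (e.g. as a set)."""
    for j, B in enumerate(filters):          # filters[j] plays the role of B_{j+1}
        if kmer not in B:                    # smallest i with K not in B_{i+1} is i = j
            return j % 2 == 1                # Lemma 1: odd i  =>  K in T_0
    # K passed all t filters: decide by checking T_t (the parity of t fixes the meaning,
    # since T_t is a subset of T_0 for even t and of T_1 for odd t)
    t = len(filters)
    return (kmer in T_t) if t % 2 == 0 else (kmer not in T_t)
```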
## 3 Memory and time usage
First, we estimate the memory needed by our data structure, under the
assumption of an infinite number of bitmaps. Let $N$ be the number of “true
positives”, i.e. nodes of $T_{0}$. As it was shown in [2], if $N$ is the
number of nodes we want to store through a bitmap $B_{1}$ of size $rN$, then
the expected number of critical false positive nodes (set $T_{1}$) will be
$8Nc^{r}$, where $c=0.6185$. Then, to store these $8Nc^{r}$ critical false
positive nodes, we use a bitmap $B_{2}$ of size $8rNc^{r}$. Bitmap $B_{3}$ is
used for storing nodes of $T_{0}$ which are stored in $B_{2}$ “by mistake”
(set $T_{2}$). We estimate the number of these nodes as the fraction $c^{r}$
(false positive rate of filter $B_{2}$) of $N$ (size of $T_{0}$), that is
$Nc^{r}$. Similarly, the number of nodes we need to put to $B_{4}$ is
$8Nc^{r}$ multiplied by $c^{r}$, i.e. $8Nc^{2r}$. Continuing in this
way, we obtain that the memory needed for the whole structure is
$rN+8rNc^{r}+rNc^{r}+8rNc^{2r}+rNc^{2r}+...$ bits. By dividing this expression
by $N$ to obtain the number of bits per one $k$-mer, we get
$r+8rc^{r}+rc^{r}+8rc^{2r}+...=r(1+c^{r}+c^{2r}+...)+8rc^{r}(1+c^{r}+c^{2r}+...)=(1+8c^{r})\frac{r}{1-c^{r}}$.
A simple calculation shows that the minimum of this expression is achieved
when $r=6.299$, and then the minimum memory used per $k$-mer is $9.18801$
bits.
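This optimum is easy to verify numerically, for instance with a simple grid search:

```python
import numpy as np

c = 0.6185
r = np.linspace(1.0, 20.0, 200001)
bits = (1 + 8*c**r) * r / (1 - c**r)     # bits per k-mer for the infinite cascade
i = np.argmin(bits)
print(r[i], bits[i])                     # approximately 6.30 and 9.19
```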
As mentioned earlier, in practice we store only a finite number of bitmaps
$B_{1},\ldots,B_{t}$ together with an explicit representation (hash table) of
$T_{t}$. In this case, the memory taken by the bitmaps is the truncated sum
$rN+8rNc^{r}+rNc^{r}+\ldots$, and a hash table storing $T_{t}$ takes either
$2k\cdot Nc^{\lceil\frac{t}{2}\rceil r}$ or $2k\cdot
8Nc^{\lceil\frac{t}{2}\rceil r}$ bits, depending on whether $t$ is even or
odd. The latter follows from the observation that we need to store
$Nc^{\lceil\frac{t}{2}\rceil r}$ (or $8Nc^{\lceil\frac{t}{2}\rceil r}$)
$k$-mers, and every $k$-mer takes $2k$ bits of memory. Consequently, we have
to adjust the optimal value of $r$ minimizing the total space, and re-estimate
the resulting space spent on one $k$-mer.
Table 1 shows the optimal $r$ and the space per $k$-mer for $t=4$ and several
values of $k$. It demonstrates that even this small $t$ leads to considerable
memory savings. It appears that the space per $k$-mer is very close to the
“optimal” space ($9.18801$ bits) obtained for the infinite number of filters.
Table 1 reveals another advantage of our improvement: the number of bits per
stored $k$-mer remains almost constant for different values of $k$.
$k$ | optimal $r$ | bits per $k$-mer
---|---|---
16 | 6.447053 | 9.237855
32 | 6.609087 | 9.298095
64 | 6.848718 | 9.397210
128 | 7.171139 | 9.548099
Table 1: Optimal value of $r$ (bitmap size of a Bloom filter per number of
stored elements) and the resulting space per $k$-mer for $t=4$.
We now compare the memory usage of our method with that of the original method
of [2]. The data structure of [2] has been estimated to take
$(1.44\log_{2}(\frac{16k}{2.08})+2.08)$ bits per $k$-mer. With this formula,
comparative estimates of the space consumption per $k$-mer of the different
methods are shown in Table 2. Observe that with the method of [2], doubling
the value of $k$ results in a memory increase of $1.44$ bits per $k$-mer, whereas
in our method the increase is very small, as we already mentioned earlier.
$k$ | “Optimal” (infinite) | Cascading Bloom Filter | Data structure
---|---|---|---
| Cascading Bloom Filter | with $t$ = 4 | from [2]
16 | 9.18801 | 9.237855 | 12.0785
32 | 9.18801 | 9.298095 | 13.5185
64 | 9.18801 | 9.397210 | 14.9585
128 | 9.18801 | 9.548099 | 16.3985
Table 2: Comparison of space consumption per $k$-mer for the “optimal”
(infinite) cascading Bloom filter, finite ($t=4$) cascading Bloom filter, and
the Bloom filter from [2], for different values of $k$.
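The last column of Table 2 can be reproduced directly from the formula of [2] quoted above:

```python
import math
for k in (16, 32, 64, 128):
    print(k, 1.44*math.log2(16*k/2.08) + 2.08)   # 12.08, 13.52, 14.96, 16.40 bits per k-mer
```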
Let us now estimate the preprocessing and query times of our scheme. If the value
of $t$ is small (such as $t=4$, as in Table 1), the preprocessing time grows only
insignificantly in comparison to the original method of [2]. To construct each
$B_{i}$, we need to store $T_{i-2}$ (possibly on disk, if we want to save on
the internal memory used by the algorithm) in order to compute those $k$-mers
which are stored in $B_{i-1}$ “by mistake”. Since the size of $B_{i}$
decreases exponentially, the time spent to construct the whole
structure is linear in the size of $T_{0}$.
In the case of small $t$, the query time grows insignificantly as well, as a
query may have to go through up to $t$ Bloom filters instead of just one. The
above-mentioned exponential decrease of the sizes of the $B_{i}$ implies that the
average query time remains almost the same.
## References
* [1] Alexander Bowe, Taku Onodera, Kunihiko Sadakane, and Tetsuo Shibuya. Succinct de Bruijn graphs. In Benjamin J. Raphael and Jijun Tang, editors, Algorithms in Bioinformatics - 12th International Workshop, WABI 2012, Ljubljana, Slovenia, September 10-12, 2012. Proceedings, volume 7534 of Lecture Notes in Computer Science, pages 225–235. Springer, 2012.
* [2] Rayan Chikhi and Guillaume Rizk. Space-efficient and exact de Bruijn graph representation based on a Bloom filter. In Benjamin J. Raphael and Jijun Tang, editors, Algorithms in Bioinformatics - 12th International Workshop, WABI 2012, Ljubljana, Slovenia, September 10-12, 2012. Proceedings, volume 7534 of Lecture Notes in Computer Science, pages 236–248. Springer, 2012.
* [3] Thomas C. Conway and Andrew J. Bromage. Succinct data structures for assembling large genomes. Bioinformatics, 27(4):479–486, 2011.
* [4] Z. Iqbal, M. Caccamo, I. Turner, P. Flicek, and G. McVean. De novo assembly and genotyping of variants using colored de Bruijn graphs. Nat. Genet., 44(2):226–232, Feb 2012.
* [5] J. R. Miller, S. Koren, and G. Sutton. Assembly algorithms for next-generation sequencing data. Genomics, 95(6):315–327, Jun 2010.
* [6] P. A. Pevzner, H. Tang, and M. S. Waterman. An Eulerian path approach to DNA fragment assembly. Proc. Natl. Acad. Sci. U.S.A., 98(17):9748–9753, Aug 2001.
* [7] Chengxi Ye, Zhanshan Ma, Charles Cannon, Mihai Pop, and Douglas Yu. Exploiting sparseness in de novo genome assembly. BMC Bioinformatics, 13(Suppl 6):S1, 2012.
# Gauged baryon and lepton numbers in supersymmetry with a $125\;{\rm GeV}$
Higgs
Tai-Fu Feng$^{a}$ (email: [email protected]), Shu-Min Zhao$^{a}$ (email: [email protected]), Hai-Bin Zhang$^{a,b}$, Yin-Jie Zhang$^{a}$, Yu-Li Yan$^{a}$
$^{a}$Department of Physics, Hebei University, Baoding, 071002, China
$^{b}$Department of Physics, Dalian University of Technology, Dalian, 116024, China
###### Abstract
Assuming that the Yukawa couplings between the Higgs and the exotic quarks cannot
be ignored, we analyze the signals of the decay channels
$h\rightarrow\gamma\gamma$ and $h\rightarrow VV^{*}\;(V=Z,\;W)$ with the Higgs
mass around $125\;{\rm GeV}$ in a supersymmetric extension of the standard
model where baryon and lepton numbers are local gauge symmetries. Under
some assumptions on the relevant parameter space, we can account for the
experimental data on the Higgs from ATLAS and CMS naturally.
Supersymmetry, Baryon and Lepton numbers, Higgs
###### pacs:
14.80.Cp, 12.15Hh
## I Introduction
The main purpose of the Large Hadron Collider (LHC) is to understand the
origin of electroweak symmetry breaking and to search for the neutral Higgs
predicted by the standard model (SM) and its various extensions. Recently,
ATLAS and CMS have reported significant excesses of events which are probably
related to a neutral Higgs with mass $m_{h_{0}}\sim 124-126\;{\rm GeV}$ [CMS, ATLAS].
This implies that the Higgs mechanism to break electroweak symmetry possibly has a
solid experimental cornerstone.
As the simplest softly broken supersymmetric theory, the minimal supersymmetric
extension of the standard model (MSSM) [MSSM] has drawn the attention of
physicists for a long time. Furthermore, broken baryon number (B) can naturally
explain the origin of the matter-antimatter asymmetry in the Universe. Since
heavy Majorana neutrinos contained in the seesaw mechanism can induce the tiny
neutrino masses [seesaw] needed to explain the neutrino oscillation experiments,
lepton number (L) is also expected to be broken. Ignoring the Yukawa couplings
between the Higgs doublets and the exotic quarks, the authors of [BL_h, BL_h1]
investigated the predictions for the mass and decays of the lightest CP-even
Higgs in a minimal local gauged B and L supersymmetric extension of the SM, named
the BLMSSM. Since the new quarks are vector-like, their masses can be above
$500\;{\rm GeV}$ without assuming large couplings to the Higgs doublets in this
model. Therefore, there are no Landau poles for the Yukawa couplings here.
Additionally, Ref. [BL] also examines two extensions of the SM where $B$ and $L$
are spontaneously broken gauge symmetries around the ${\rm TeV}$ scale. Assuming
that the Yukawa couplings between the Higgs and the exotic quarks cannot be
ignored, we investigate the lightest CP-even Higgs decay channels
$h\rightarrow\gamma\gamma$ and $h\rightarrow VV^{*}\;(V=Z,\;W)$ in the BLMSSM.
Our presentation is organized as follows. In section II, we briefly summarize
the main ingredients of the BLMSSM, then present the mass squared matrices for
neutral scalar sectors and the mass matrices for exotic quarks, respectively.
We discuss the corrections on the mass squared matrix of CP-even Higgs from
exotic fields in section III, and present the decay widths for
$h^{0}\rightarrow\gamma\gamma,\;VV^{*}\;(V=Z,\;W)$ in section IV. The
numerical analyses are given in section V, and our conclusions are summarized
in section VI.
## II A supersymmetric extension of the SM where B and L are local gauge
symmetries
When B and L are local gauge symmetries, one can enlarge the local gauge group
of the SM to $SU(3)_{{}_{C}}\otimes SU(2)_{{}_{L}}\otimes U(1)_{{}_{Y}}\otimes
U(1)_{{}_{B}}\otimes U(1)_{{}_{L}}$. In the supersymmetric extension of the SM
proposed in Refs. [BL_h, BL_h1], the exotic superfields include the new quarks
$\hat{Q}_{{}_{4}}\sim(3,\;2,\;1/6,\;B_{{}_{4}},\;0)$,
$\hat{U}_{{}_{4}}^{c}\sim(\bar{3},\;1,\;-2/3,\;-B_{{}_{4}},\;0)$,
$\hat{D}_{{}_{4}}^{c}\sim(\bar{3},\;1,\;1/3,\;-B_{{}_{4}},\;0)$,
$\hat{Q}_{{}_{5}}^{c}\sim(\bar{3},\;2,\;-1/6,\;-(1+B_{{}_{4}}),\;0)$,
$\hat{U}_{{}_{5}}\sim(3,\;1,\;2/3,\;1+B_{{}_{4}},\;0)$,
$\hat{D}_{{}_{5}}\sim(3,\;1,\;-1/3,\;1+B_{{}_{4}},\;0)$, and the new leptons
$\hat{L}_{{}_{4}}\sim(1,\;2,\;-1/2,\;0,\;L_{{}_{4}})$,
$\hat{E}_{{}_{4}}^{c}\sim(1,\;1,\;1,\;0,\;-L_{{}_{4}})$,
$\hat{N}_{{}_{4}}^{c}\sim(1,\;1,\;0,\;0,\;-L_{{}_{4}})$,
$\hat{L}_{{}_{5}}^{c}\sim(1,\;2,\;1/2,\;0,\;-(3+L_{{}_{4}}))$,
$\hat{E}_{{}_{5}}\sim(1,\;1,\;-1,\;0,\;3+L_{{}_{4}})$,
$\hat{N}_{{}_{5}}\sim(1,\;1,\;0,\;0,\;3+L_{{}_{4}})$ to cancel the $B$ and $L$
anomalies. The ’brand new’ Higgs superfields
$\hat{\Phi}_{{}_{B}}\sim(1,\;1,\;0,\;1,\;0)$ and
$\hat{\varphi}_{{}_{B}}\sim(1,\;1,\;0,\;-1,\;0)$ acquire nonzero vacuum
expectation values (VEVs) to break baryon number spontaneously. Meanwhile, the
nonzero VEVs of $\Phi_{{}_{B}}$ and $\varphi_{{}_{B}}$ also induce large
masses for the exotic quarks. In addition, the superfields
$\hat{\Phi}_{{}_{L}}\sim(1,\;1,\;0,\;0,\;-2)$ and
$\hat{\varphi}_{{}_{L}}\sim(1,\;1,\;0,\;0,\;2)$ acquire nonzero VEVs to break
lepton number spontaneously. In order to avoid stable exotic
quarks, the model also includes the superfields
$\hat{X}\sim(1,\;1,\;0,\;2/3+B_{{}_{4}},\;0)$ and
$\hat{X}^{\prime}\sim(1,\;1,\;0,\;-(2/3+B_{{}_{4}}),\;0)$; the
lightest of them can be a dark matter candidate. The superpotential of the model
is written as
$\displaystyle{\cal W}_{{}_{BLMSSM}}={\cal W}_{{}_{MSSM}}+{\cal
W}_{{}_{B}}+{\cal W}_{{}_{L}}+{\cal W}_{{}_{X}}\;,$ (1)
where ${\cal W}_{{}_{MSSM}}$ is superpotential of the MSSM, and
$\displaystyle{\cal
W}_{{}_{B}}=\lambda_{{}_{Q}}\hat{Q}_{{}_{4}}\hat{Q}_{{}_{5}}^{c}\hat{\Phi}_{{}_{B}}+\lambda_{{}_{U}}\hat{U}_{{}_{4}}^{c}\hat{U}_{{}_{5}}\hat{\varphi}_{{}_{B}}+\lambda_{{}_{D}}\hat{D}_{{}_{4}}^{c}\hat{D}_{{}_{5}}\hat{\varphi}_{{}_{B}}+\mu_{{}_{B}}\hat{\Phi}_{{}_{B}}\hat{\varphi}_{{}_{B}}$
$\displaystyle\hskip
34.14322pt+Y_{{}_{u_{4}}}\hat{Q}_{{}_{4}}\hat{H}_{{}_{u}}\hat{U}_{{}_{4}}^{c}+Y_{{}_{d_{4}}}\hat{Q}_{{}_{4}}\hat{H}_{{}_{d}}\hat{D}_{{}_{4}}^{c}+Y_{{}_{u_{5}}}\hat{Q}_{{}_{5}}^{c}\hat{H}_{{}_{d}}\hat{U}_{{}_{5}}+Y_{{}_{d_{5}}}\hat{Q}_{{}_{5}}^{c}\hat{H}_{{}_{u}}\hat{D}_{{}_{5}}\;,$
$\displaystyle{\cal
W}_{{}_{L}}=Y_{{}_{e_{4}}}\hat{L}_{{}_{4}}\hat{H}_{{}_{d}}\hat{E}_{{}_{4}}^{c}+Y_{{}_{\nu_{4}}}\hat{L}_{{}_{4}}\hat{H}_{{}_{u}}\hat{N}_{{}_{4}}^{c}+Y_{{}_{e_{5}}}\hat{L}_{{}_{5}}^{c}\hat{H}_{{}_{u}}\hat{E}_{{}_{5}}+Y_{{}_{\nu_{5}}}\hat{L}_{{}_{5}}^{c}\hat{H}_{{}_{d}}\hat{N}_{{}_{5}}$
$\displaystyle\hskip
34.14322pt+Y_{{}_{\nu}}\hat{L}\hat{H}_{{}_{u}}\hat{N}^{c}+\lambda_{{}_{N^{c}}}\hat{N}^{c}\hat{N}^{c}\hat{\varphi}_{{}_{L}}+\mu_{{}_{L}}\hat{\Phi}_{{}_{L}}\hat{\varphi}_{{}_{L}}\;,$
$\displaystyle{\cal
W}_{{}_{X}}=\lambda_{1}\hat{Q}\hat{Q}_{{}_{5}}^{c}\hat{X}+\lambda_{2}\hat{U}^{c}\hat{U}_{{}_{5}}\hat{X}^{\prime}+\lambda_{3}\hat{D}^{c}\hat{D}_{{}_{5}}\hat{X}^{\prime}+\mu_{{}_{X}}\hat{X}\hat{X}^{\prime}\;.$
(2)
In the superpotential above, the exotic quarks obtain ${\rm TeV}$ scale masses
after $\Phi_{{}_{B}},\;\varphi_{{}_{B}}$ acquire nonzero VEVs, and the nonzero
VEV of $\varphi_{{}_{L}}$ implements the seesaw mechanism for the tiny
neutrino masses. Correspondingly, the soft breaking terms are generally given
as
$\displaystyle{\cal L}_{{}_{soft}}={\cal
L}_{{}_{soft}}^{MSSM}-(m_{{}_{\tilde{N}^{c}}}^{2})_{{}_{IJ}}\tilde{N}_{I}^{c*}\tilde{N}_{J}^{c}-m_{{}_{\tilde{Q}_{4}}}^{2}\tilde{Q}_{{}_{4}}^{\dagger}\tilde{Q}_{{}_{4}}-m_{{}_{\tilde{U}_{4}}}^{2}\tilde{U}_{{}_{4}}^{c*}\tilde{U}_{{}_{4}}^{c}-m_{{}_{\tilde{D}_{4}}}^{2}\tilde{D}_{{}_{4}}^{c*}\tilde{D}_{{}_{4}}^{c}$
$\displaystyle\hskip 36.98866pt-
m_{{}_{\tilde{Q}_{5}}}^{2}\tilde{Q}_{{}_{5}}^{c\dagger}\tilde{Q}_{{}_{5}}^{c}-m_{{}_{\tilde{U}_{5}}}^{2}\tilde{U}_{{}_{5}}^{*}\tilde{U}_{{}_{5}}-m_{{}_{\tilde{D}_{5}}}^{2}\tilde{D}_{{}_{5}}^{*}\tilde{D}_{{}_{5}}-m_{{}_{\tilde{L}_{4}}}^{2}\tilde{L}_{{}_{4}}^{\dagger}\tilde{L}_{{}_{4}}-m_{{}_{\tilde{\nu}_{4}}}^{2}\tilde{\nu}_{{}_{4}}^{c*}\tilde{\nu}_{{}_{4}}^{c}$
$\displaystyle\hskip 36.98866pt-
m_{{}_{\tilde{E}_{4}}}^{2}\tilde{e}_{{}_{4}}^{c*}\tilde{e}_{{}_{4}}^{c}-m_{{}_{\tilde{L}_{5}}}^{2}\tilde{L}_{{}_{5}}^{c\dagger}\tilde{L}_{{}_{5}}^{c}-m_{{}_{\tilde{\nu}_{5}}}^{2}\tilde{\nu}_{{}_{5}}^{*}\tilde{\nu}_{{}_{5}}-m_{{}_{\tilde{E}_{5}}}^{2}\tilde{e}_{{}_{5}}^{*}\tilde{e}_{{}_{5}}-m_{{}_{\Phi_{{}_{B}}}}^{2}\Phi_{{}_{B}}^{*}\Phi_{{}_{B}}$
$\displaystyle\hskip 36.98866pt-
m_{{}_{\varphi_{{}_{B}}}}^{2}\varphi_{{}_{B}}^{*}\varphi_{{}_{B}}-m_{{}_{\Phi_{{}_{L}}}}^{2}\Phi_{{}_{L}}^{*}\Phi_{{}_{L}}-m_{{}_{\varphi_{{}_{L}}}}^{2}\varphi_{{}_{L}}^{*}\varphi_{{}_{L}}-\Big{(}m_{{}_{B}}\lambda_{{}_{B}}\lambda_{{}_{B}}+m_{{}_{L}}\lambda_{{}_{L}}\lambda_{{}_{L}}+h.c.\Big{)}$
$\displaystyle\hskip
36.98866pt+\Big{\\{}A_{{}_{u_{4}}}Y_{{}_{u_{4}}}\tilde{Q}_{{}_{4}}H_{{}_{u}}\tilde{U}_{{}_{4}}^{c}+A_{{}_{d_{4}}}Y_{{}_{d_{4}}}\tilde{Q}_{{}_{4}}H_{{}_{d}}\tilde{D}_{{}_{4}}^{c}+A_{{}_{u_{5}}}Y_{{}_{u_{5}}}\tilde{Q}_{{}_{5}}^{c}H_{{}_{d}}\tilde{U}_{{}_{5}}+A_{{}_{d_{5}}}Y_{{}_{d_{5}}}\tilde{Q}_{{}_{5}}^{c}H_{{}_{u}}\tilde{D}_{{}_{5}}$
$\displaystyle\hskip
36.98866pt+A_{{}_{BQ}}\lambda_{{}_{Q}}\tilde{Q}_{{}_{4}}\tilde{Q}_{{}_{5}}^{c}\Phi_{{}_{B}}+A_{{}_{BU}}\lambda_{{}_{U}}\tilde{U}_{{}_{4}}^{c}\tilde{U}_{{}_{5}}\varphi_{{}_{B}}+A_{{}_{BD}}\lambda_{{}_{D}}\tilde{D}_{{}_{4}}^{c}\tilde{D}_{{}_{5}}\varphi_{{}_{B}}+B_{{}_{B}}\mu_{{}_{B}}\Phi_{{}_{B}}\varphi_{{}_{B}}+h.c.\Big{\\}}$
$\displaystyle\hskip
36.98866pt+\Big{\\{}A_{{}_{e_{4}}}Y_{{}_{e_{4}}}\tilde{L}_{{}_{4}}H_{{}_{d}}\tilde{E}_{{}_{4}}^{c}+A_{{}_{N_{4}}}Y_{{}_{N_{4}}}\tilde{L}_{{}_{4}}H_{{}_{u}}\tilde{N}_{{}_{4}}^{c}+A_{{}_{e_{5}}}Y_{{}_{e_{5}}}\tilde{L}_{{}_{5}}^{c}H_{{}_{u}}\tilde{E}_{{}_{5}}+A_{{}_{N_{5}}}Y_{{}_{\nu_{5}}}\tilde{L}_{{}_{5}}^{c}H_{{}_{d}}\tilde{N}_{{}_{5}}$
$\displaystyle\hskip
36.98866pt+A_{{}_{N}}Y_{{}_{N}}\tilde{L}H_{{}_{u}}\tilde{N}^{c}+A_{{}_{N^{c}}}\lambda_{{}_{N^{c}}}\tilde{N}^{c}\tilde{N}^{c}\varphi_{{}_{L}}+B_{{}_{L}}\mu_{{}_{L}}\Phi_{{}_{L}}\varphi_{{}_{L}}+h.c.\Big{\\}}$
$\displaystyle\hskip
36.98866pt+\Big{\\{}A_{1}\lambda_{1}\tilde{Q}\tilde{Q}_{{}_{5}}^{c}X+A_{2}\lambda_{2}\tilde{U}^{c}\tilde{U}_{{}_{5}}X^{\prime}+A_{3}\lambda_{3}\tilde{D}^{c}\tilde{D}_{{}_{5}}X^{\prime}+B_{{}_{X}}\mu_{{}_{X}}XX^{\prime}+h.c.\Big{\\}}\;,$
(3)
where ${\cal L}_{{}_{soft}}^{MSSM}$ is soft breaking terms of the MSSM,
$\lambda_{B},\;\lambda_{L}$ are gauginos of $U(1)_{{}_{B}}$ and
$U(1)_{{}_{L}}$, respectively. After the $SU(2)_{L}$ doublets
$H_{{}_{u}},\;H_{{}_{d}}$ and $SU(2)_{L}$ singlets
$\Phi_{{}_{B}},\;\varphi_{{}_{B}},\;\Phi_{{}_{L}},\;\varphi_{{}_{L}}$ acquire
the nonzero VEVs
$\upsilon_{{}_{u}},\;\upsilon_{{}_{d}},\;\upsilon_{{}_{B}},\;\overline{\upsilon}_{{}_{B}}$,
and $\upsilon_{{}_{L}},\;\overline{\upsilon}_{{}_{L}}$,
$\displaystyle H_{{}_{u}}=\left(\begin{array}[]{c}H_{{}_{u}}^{+}\\\
{1\over\sqrt{2}}\Big{(}\upsilon_{{}_{u}}+H_{{}_{u}}^{0}+iP_{{}_{u}}^{0}\Big{)}\end{array}\right)\;,$
(6) $\displaystyle
H_{{}_{d}}=\left(\begin{array}[]{c}{1\over\sqrt{2}}\Big{(}\upsilon_{{}_{d}}+H_{{}_{d}}^{0}+iP_{{}_{d}}^{0}\Big{)}\\\
H_{{}_{d}}^{-}\end{array}\right)\;,$ (9)
$\displaystyle\Phi_{{}_{B}}={1\over\sqrt{2}}\Big{(}\upsilon_{{}_{B}}+\Phi_{{}_{B}}^{0}+iP_{{}_{B}}^{0}\Big{)}\;,$
$\displaystyle\varphi_{{}_{B}}={1\over\sqrt{2}}\Big{(}\overline{\upsilon}_{{}_{B}}+\varphi_{{}_{B}}^{0}+i\overline{P}_{{}_{B}}^{0}\Big{)}\;,$
$\displaystyle\Phi_{{}_{L}}={1\over\sqrt{2}}\Big{(}\upsilon_{{}_{L}}+\Phi_{{}_{L}}^{0}+iP_{{}_{L}}^{0}\Big{)}\;,$
$\displaystyle\varphi_{{}_{L}}={1\over\sqrt{2}}\Big{(}\overline{\upsilon}_{{}_{L}}+\varphi_{{}_{L}}^{0}+i\overline{P}_{{}_{L}}^{0}\Big{)}\;,$
(10)
the local gauge symmetry $SU(2)_{{}_{L}}\otimes U(1)_{{}_{Y}}\otimes
U(1)_{{}_{B}}\otimes U(1)_{{}_{L}}$ is broken down to the electromagnetic
symmetry $U(1)_{{}_{e}}$, where
$\displaystyle G^{\pm}=\cos\beta H_{{}_{d}}^{\pm}+\sin\beta H_{{}_{u}}^{\pm}$
(11)
denotes the charged Goldstone boson, and
$\displaystyle G^{0}=\cos\beta P_{{}_{d}}^{0}+\sin\beta P_{{}_{u}}^{0}\;,$
$\displaystyle
G_{{}_{B}}^{0}=\cos\beta_{{}_{B}}P_{{}_{B}}^{0}+\sin\beta_{{}_{B}}\overline{P}_{{}_{B}}^{0}\;,$
$\displaystyle
G_{{}_{L}}^{0}=\cos\beta_{{}_{L}}P_{{}_{L}}^{0}+\sin\beta_{{}_{L}}\overline{P}_{{}_{L}}^{0}$
(12)
denote the neutral Goldstone bosons, respectively. Here
$\tan\beta=\upsilon_{{}_{u}}/\upsilon_{{}_{d}},\;\tan\beta_{{}_{B}}=\overline{\upsilon}_{{}_{B}}/\upsilon_{{}_{B}}$,
and $\tan\beta_{{}_{L}}=\overline{\upsilon}_{{}_{L}}/\upsilon_{{}_{L}}$.
Correspondingly, the physical neutral pseudoscalar fields are
$\displaystyle A^{0}=-\sin\beta P_{{}_{d}}^{0}+\cos\beta P_{{}_{u}}^{0}\;,$
$\displaystyle
A_{{}_{B}}^{0}=-\sin\beta_{{}_{B}}P_{{}_{B}}^{0}+\cos\beta_{{}_{B}}\overline{P}_{{}_{B}}^{0}\;,$
$\displaystyle
A_{{}_{L}}^{0}=-\sin\beta_{{}_{L}}P_{{}_{L}}^{0}+\cos\beta_{{}_{L}}\overline{P}_{{}_{L}}^{0}\;.$
(13)
At tree level, the masses for those particles are respectively formulated as
$\displaystyle m_{{}_{A^{0}}}^{2}={B\mu\over\cos\beta\sin\beta}\;,$
$\displaystyle
m_{{}_{A_{{}_{B}}^{0}}}^{2}={B_{{}_{B}}\mu_{{}_{B}}\over\cos\beta_{{}_{B}}\sin\beta_{{}_{B}}}\;,$
$\displaystyle
m_{{}_{A_{{}_{L}}^{0}}}^{2}={B_{{}_{L}}\mu_{{}_{L}}\over\cos\beta_{{}_{L}}\sin\beta_{{}_{L}}}\;.$
(14)
Meanwhile the charged Higgs is
$\displaystyle H^{\pm}=-\sin\beta H_{{}_{d}}^{\pm}+\cos\beta H_{{}_{u}}^{\pm}$
(15)
with the tree level mass squared
$\displaystyle m_{{}_{H^{\pm}}}^{2}=m_{{}_{A^{0}}}^{2}+m_{{}_{\rm W}}^{2}\;.$
(16)
In the two Higgs doublet sector, the mass squared matrix of neutral CP-even
Higgs is diagonalized by the rotation
$\displaystyle\left(\begin{array}[]{l}H^{0}\\\
h^{0}\end{array}\right)=\left(\begin{array}[]{cc}\cos\alpha&\sin\alpha\\\
-\sin\alpha&\cos\alpha\end{array}\right)\left(\begin{array}[]{l}H_{{}_{d}}^{0}\\\
H_{{}_{u}}^{0}\end{array}\right)\;,$ (23)
where $h^{0}$ is the lightest neutral CP-even Higgs.
In the basis $(\Phi_{{}_{B}}^{0},\;\varphi_{{}_{B}}^{0})$, the mass squared
matrix is
$\displaystyle{\cal
M}_{{}_{EB}}^{2}=\left(\begin{array}[]{ll}m_{{}_{Z_{B}}}^{2}\cos^{2}\beta_{{}_{B}}+m_{{}_{A_{{}_{B}}^{0}}}^{2}\sin^{2}\beta_{{}_{B}},&(m_{{}_{Z_{B}}}^{2}+m_{{}_{A_{{}_{B}}^{0}}}^{2})\cos\beta_{{}_{B}}\sin\beta_{{}_{B}}\\\
(m_{{}_{Z_{B}}}^{2}+m_{{}_{A_{{}_{B}}^{0}}}^{2})\cos\beta_{{}_{B}}\sin\beta_{{}_{B}},&m_{{}_{Z_{B}}}^{2}\sin^{2}\beta_{{}_{B}}+m_{{}_{A_{{}_{B}}^{0}}}^{2}\cos^{2}\beta_{{}_{B}}\end{array}\right)\;,$
(26)
where
$m_{{}_{Z_{B}}}^{2}=g_{{}_{B}}^{2}(\upsilon_{{}_{B}}^{2}+\overline{\upsilon}_{{}_{B}}^{2})$
is the mass squared of the neutral $U(1)_{{}_{B}}$ gauge boson $Z_{{}_{B}}$.
Defining the mixing angle $\alpha_{{}_{B}}$ through
$\displaystyle\tan
2\alpha_{{}_{B}}={m_{{}_{Z_{B}}}^{2}+m_{{}_{A_{{}_{B}}^{0}}}^{2}\over
m_{{}_{Z_{B}}}^{2}-m_{{}_{A_{{}_{B}}^{0}}}^{2}}\tan 2\beta_{{}_{B}}\;,$ (27)
we obtain two mass eigenstates as
$\displaystyle\left(\begin{array}[]{l}H_{{}_{B}}^{0}\\\
h_{{}_{B}}^{0}\end{array}\right)=\left(\begin{array}[]{cc}\cos\alpha_{{}_{B}}&\sin\alpha_{{}_{B}}\\\
-\sin\alpha_{{}_{B}}&\cos\alpha_{{}_{B}}\end{array}\right)\left(\begin{array}[]{l}\Phi_{{}_{B}}^{0}\\\
\varphi_{{}_{B}}^{0}\end{array}\right)\;.$ (34)
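As a quick numerical cross-check (with placeholder values for $m_{{}_{Z_{B}}}^{2}$, $m_{{}_{A_{{}_{B}}^{0}}}^{2}$ and $\beta_{{}_{B}}$), one can verify that the angle defined in Eq. (27) indeed diagonalizes the matrix (26):

```python
import numpy as np

m_ZB2, m_AB2, beta_B = 1.2e6, 8.0e5, 0.6         # placeholder values (GeV^2, GeV^2, rad)
c, s = np.cos(beta_B), np.sin(beta_B)
M2 = np.array([[m_ZB2*c**2 + m_AB2*s**2, (m_ZB2 + m_AB2)*c*s],
               [(m_ZB2 + m_AB2)*c*s,     m_ZB2*s**2 + m_AB2*c**2]])

alpha_B = 0.5*np.arctan((m_ZB2 + m_AB2)/(m_ZB2 - m_AB2)*np.tan(2*beta_B))
R = np.array([[ np.cos(alpha_B), np.sin(alpha_B)],
              [-np.sin(alpha_B), np.cos(alpha_B)]])
print(R @ M2 @ R.T)    # off-diagonal entries vanish up to rounding
```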
Similarly, the mass squared matrix for
$(\Phi_{{}_{L}}^{0},\;\varphi_{{}_{L}}^{0})$ is written as
$\displaystyle{\cal
M}_{{}_{EL}}^{2}=\left(\begin{array}[]{ll}m_{{}_{Z_{L}}}^{2}\cos^{2}\beta_{{}_{L}}+m_{{}_{A_{{}_{L}}^{0}}}^{2}\sin^{2}\beta_{{}_{L}},&(m_{{}_{Z_{L}}}^{2}+m_{{}_{A_{{}_{L}}^{0}}}^{2})\cos\beta_{{}_{L}}\sin\beta_{{}_{L}}\\\
(m_{{}_{Z_{L}}}^{2}+m_{{}_{A_{{}_{L}}^{0}}}^{2})\cos\beta_{{}_{L}}\sin\beta_{{}_{L}},&m_{{}_{Z_{L}}}^{2}\sin^{2}\beta_{{}_{L}}+m_{{}_{A_{{}_{L}}^{0}}}^{2}\cos^{2}\beta_{{}_{L}}\end{array}\right)\;,$
(37)
with
$m_{{}_{Z_{L}}}^{2}=4g_{{}_{L}}^{2}(\upsilon_{{}_{L}}^{2}+\overline{\upsilon}_{{}_{L}}^{2})$
denoting the mass squared of the neutral $U(1)_{{}_{L}}$ gauge boson $Z_{{}_{L}}$.
In four-component Dirac spinor notation, the mass matrix for the exotic quarks
with charge $2/3$ is
$\displaystyle-{\cal
L}_{{}_{t^{\prime}}}^{mass}=\left(\begin{array}[]{ll}\bar{t}_{{}_{4R}}^{\prime},&\bar{t}_{{}_{5R}}^{\prime}\end{array}\right)\left(\begin{array}[]{ll}{1\over\sqrt{2}}\lambda_{{}_{Q}}\upsilon_{{}_{B}},&-{1\over\sqrt{2}}Y_{{}_{u_{5}}}\upsilon_{{}_{d}}\\\
-{1\over\sqrt{2}}Y_{{}_{u_{4}}}\upsilon_{{}_{u}},&{1\over\sqrt{2}}\lambda_{{}_{u}}\overline{\upsilon}_{{}_{B}}\end{array}\right)\left(\begin{array}[]{l}t_{{}_{4L}}^{\prime}\\\
t_{{}_{5L}}^{\prime}\end{array}\right)+h.c.$ (43)
Using the unitary transformations
$\displaystyle\left(\begin{array}[]{l}t_{{}_{4L}}^{\prime}\\\
t_{{}_{5L}}^{\prime}\end{array}\right)=U_{{}_{t^{\prime}}}^{\dagger}\cdot\left(\begin{array}[]{l}t_{{}_{4L}}\\\
t_{{}_{5L}}\end{array}\right)\;,\;\;\left(\begin{array}[]{l}t_{{}_{4R}}^{\prime}\\\
t_{{}_{5R}}^{\prime}\end{array}\right)=W_{{}_{t^{\prime}}}^{\dagger}\cdot\left(\begin{array}[]{l}t_{{}_{4R}}\\\
t_{{}_{5R}}\end{array}\right)\;,$ (52)
we diagonalize the mass matrix for the vector-like quarks with charge $2/3$:
$\displaystyle
W_{{}_{t^{\prime}}}^{\dagger}\cdot\left(\begin{array}[]{ll}{1\over\sqrt{2}}\lambda_{{}_{Q}}\upsilon_{{}_{B}},&-{1\over\sqrt{2}}Y_{{}_{u_{5}}}\upsilon_{{}_{d}}\\\
-{1\over\sqrt{2}}Y_{{}_{u_{4}}}\upsilon_{{}_{u}},&{1\over\sqrt{2}}\lambda_{{}_{u}}\overline{\upsilon}_{{}_{B}}\end{array}\right)\cdot
U_{{}_{t^{\prime}}}={\it diag}\Big{(}m_{{}_{t_{4}}},\;m_{{}_{t_{5}}}\Big{)}$
(55)
Similarly, we can write the mass matrix for the exotic quarks with charge
$-1/3$ as
$\displaystyle-{\cal
L}_{{}_{b^{\prime}}}^{mass}=\left(\begin{array}[]{ll}\bar{b}_{{}_{4R}},&\bar{b}_{{}_{5R}}\end{array}\right)\left(\begin{array}[]{ll}-{1\over\sqrt{2}}\lambda_{{}_{Q}}\upsilon_{{}_{B}},&-{1\over\sqrt{2}}Y_{{}_{d_{5}}}\upsilon_{{}_{u}}\\\
-{1\over\sqrt{2}}Y_{{}_{d_{4}}}\upsilon_{{}_{d}},&{1\over\sqrt{2}}\lambda_{{}_{d}}\overline{\upsilon}_{{}_{B}}\end{array}\right)\left(\begin{array}[]{l}b_{{}_{4L}}\\\
b_{{}_{5L}}\end{array}\right)+h.c.$ (61)
Adopting the unitary transformations
$\displaystyle\left(\begin{array}[]{l}b_{{}_{4L}}^{\prime}\\\
b_{{}_{5L}}^{\prime}\end{array}\right)=U_{{}_{b^{\prime}}}^{\dagger}\cdot\left(\begin{array}[]{l}b_{{}_{4L}}\\\
b_{{}_{5L}}\end{array}\right)\;,\;\;\left(\begin{array}[]{l}b_{{}_{4R}}^{\prime}\\\
b_{{}_{5R}}^{\prime}\end{array}\right)=W_{{}_{b^{\prime}}}^{\dagger}\cdot\left(\begin{array}[]{l}b_{{}_{4R}}\\\
b_{{}_{5R}}\end{array}\right)\;,$ (70)
one can diagonalize the mass matrix for the vector-like quarks with charge $-1/3$ as
$\displaystyle
W_{{}_{b^{\prime}}}^{\dagger}\cdot\left(\begin{array}[]{ll}-{1\over\sqrt{2}}\lambda_{{}_{Q}}\upsilon_{{}_{B}},&-{1\over\sqrt{2}}Y_{{}_{d_{5}}}\upsilon_{{}_{u}}\\\
-{1\over\sqrt{2}}Y_{{}_{d_{4}}}\upsilon_{{}_{d}},&{1\over\sqrt{2}}\lambda_{{}_{d}}\overline{\upsilon}_{{}_{B}}\end{array}\right)\cdot
U_{{}_{b^{\prime}}}={\it
diag}\Big{(}m_{{}_{b_{4}}},\;m_{{}_{b_{5}}}\Big{)}\;.$ (73)
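Numerically, the bi-unitary transformations in Eqs. (55) and (73) can be obtained from a singular value decomposition of the corresponding mass matrix; the parameter values in the sketch below are placeholders chosen only to illustrate the procedure for the charge-$2/3$ sector.

```python
import numpy as np

lam_Q, lam_u, Y_u4, Y_u5 = 0.5, 0.4, 0.3, 0.2            # placeholder couplings
v_B, vbar_B, v_u, v_d = 3000.0, 3000.0, 220.0, 110.0      # placeholder VEVs in GeV

M = np.array([[ lam_Q*v_B/np.sqrt(2), -Y_u5*v_d/np.sqrt(2)],
              [-Y_u4*v_u/np.sqrt(2),  lam_u*vbar_B/np.sqrt(2)]])

A, s, Bh = np.linalg.svd(M)       # M = A @ diag(s) @ Bh
W, U = A, Bh.conj().T             # then W^dagger @ M @ U = diag(s), as in Eq. (55)
print(np.round(W.conj().T @ M @ U, 6))   # diagonal: the exotic quark masses m_{t_4}, m_{t_5}
```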
Assuming CP conservation in the exotic quark sector, we then derive the
flavor-conserving couplings between the lightest neutral CP-even Higgs and the
exotic quarks with charge $2/3$:
$\displaystyle{\cal
L}_{{}_{Ht^{\prime}t^{\prime}}}={1\over\sqrt{2}}\sum\limits_{i=1}^{2}\Big{[}Y_{{}_{u_{4}}}(W_{{}_{t}}^{T})_{{}_{i2}}(U_{{}_{t}})_{{}_{1i}}\cos\alpha+Y_{{}_{u_{5}}}(W_{{}_{t}}^{T})_{{}_{i1}}(U_{{}_{t}})_{{}_{2i}}\sin\alpha\Big{]}h^{0}\overline{t}_{{}_{i+3}}t_{{}_{i+3}}\;,$
(74)
where $T$ denotes matrix transposition. In a similar
way, the flavor-conserving couplings between the lightest neutral CP-even
Higgs and the exotic quarks with charge $-1/3$ are written as
$\displaystyle{\cal
L}_{{}_{Hb^{\prime}b^{\prime}}}={1\over\sqrt{2}}\sum\limits_{i=1}^{2}\Big{[}Y_{{}_{d_{4}}}(W_{{}_{b}}^{T})_{{}_{i2}}(U_{{}_{b}})_{{}_{1i}}\sin\alpha-
Y_{{}_{d_{5}}}(W_{{}_{b}}^{T})_{{}_{i1}}(U_{{}_{b}})_{{}_{2i}}\cos\alpha\Big{]}h^{0}\overline{b}_{{}_{i+3}}b_{{}_{i+3}}\;.$
(75)
Using the superpotential in Eq.(1) and the soft breaking terms, we write the
mass squared matrices for exotic scalar quarks as
$\displaystyle-{\cal
L}_{{}_{\widetilde{EQ}}}^{mass}=\tilde{t}^{\prime\dagger}\cdot{\cal
M}_{\tilde{t}^{\prime}}^{2}\cdot\tilde{t}^{\prime}+\tilde{b}^{\prime\dagger}\cdot{\cal
M}_{\tilde{b}^{\prime}}^{2}\cdot\tilde{b}^{\prime}$ (76)
with $\tilde{t}^{\prime
T}=(\tilde{Q}_{{}_{4}}^{1},\;\tilde{U}_{{}_{4}}^{c*},\;\tilde{Q}_{{}_{5}}^{2c*},\;\tilde{U}_{{}_{5}})$,
$\tilde{b}^{\prime
T}=(\tilde{Q}_{{}_{4}}^{2},\;\tilde{D}_{{}_{4}}^{c*},\;\tilde{Q}_{{}_{5}}^{1c*},\;\tilde{D}_{{}_{5}}^{*})$.
The concrete expressions for $4\times 4$ mass squared matrices ${\cal
M}_{\tilde{t}^{\prime}}^{2},\;{\cal M}_{\tilde{b}^{\prime}}^{2}$ are given in
appendix B, and the couplings between the lightest neutral CP-even Higgs and
exotic scalar quarks are collected in appendix C.
## III The lightest CP-even Higgs mass
It has been known for quite some time that radiative corrections substantially
modify the tree-level mass squared matrix of the neutral Higgs in the MSSM,
where the main effect originates from one-loop diagrams involving the top
quark and its scalar partners $\tilde{t}_{1,2}$ [Haber1]. In order to obtain the
masses of the neutral CP-even Higgs reliably, we should include the
radiative corrections from the exotic fermions and their corresponding
supersymmetric partners in the BLMSSM. Then, the mass squared matrix for the
neutral CP-even Higgs in the basis $(H_{d}^{0},\;H_{u}^{0})$ is written as
$\displaystyle{\cal
M}^{2}_{even}=\left(\begin{array}[]{ll}M_{11}^{2}+\Delta_{11}&M_{12}^{2}+\Delta_{12}\\\
M_{12}^{2}+\Delta_{12}&M_{22}^{2}+\Delta_{22}\end{array}\right)\;,$ (79)
where
$\displaystyle M_{11}^{2}=m_{{}_{\rm
Z}}^{2}\cos^{2}\beta+m_{{}_{A^{0}}}^{2}\sin^{2}\beta\;,$ $\displaystyle
M_{12}^{2}=-(m_{{}_{\rm Z}}^{2}+m_{{}_{A^{0}}}^{2})\sin\beta\cos\beta\;,$
$\displaystyle M_{22}^{2}=m_{{}_{\rm
Z}}^{2}\sin^{2}\beta+m_{{}_{A^{0}}}^{2}\cos^{2}\beta\;,$ (80)
and $m_{{}_{A^{0}}}$ denotes the pseudoscalar Higgs mass at tree level. The
radiative corrections originate from the MSSM sector, the exotic fermions and the
corresponding scalar fermions in this model:
$\displaystyle\Delta_{11}=\Delta_{11}^{MSSM}+\Delta_{11}^{B}+\Delta_{11}^{L}\;,$
$\displaystyle\Delta_{12}=\Delta_{12}^{MSSM}+\Delta_{12}^{B}+\Delta_{12}^{L}\;,$
$\displaystyle\Delta_{22}=\Delta_{22}^{MSSM}+\Delta_{22}^{B}+\Delta_{22}^{L}\;.$
(81)
Here the concrete expressions for $\Delta_{11}^{MSSM}$, $\Delta_{12}^{MSSM}$,
$\Delta_{22}^{MSSM}$ at the two-loop level can be found in the literature
[2loop-HiggsM], and the one-loop radiative corrections from the exotic quark
fields are formulated as [1loop-HiggsM]
$\displaystyle\Delta_{11}^{B}={3G_{{}_{F}}Y_{{}_{u_{4}}}^{4}\upsilon^{4}\over
4\sqrt{2}\pi^{2}\sin^{2}\beta}\cdot{\mu^{2}(A_{{}_{u_{4}}}-\mu\cot\beta)^{2}\over(m_{{}_{\tilde{t}_{1}^{\prime}}}^{2}-m_{{}_{\tilde{t}_{2}^{\prime}}}^{2})^{2}}g(m_{{}_{\tilde{t}_{1}^{\prime}}},m_{{}_{\tilde{t}_{2}^{\prime}}})$
$\displaystyle\hskip
34.14322pt+{3G_{{}_{F}}Y_{{}_{u_{5}}}^{4}\upsilon^{4}\over
4\sqrt{2}\pi^{2}\cos^{2}\beta}\Big{\\{}\ln{m_{{}_{\tilde{t}_{3}^{\prime}}}m_{{}_{\tilde{t}_{4}^{\prime}}}\over
m_{{}_{t_{5}}}^{2}}+{A_{{}_{u_{5}}}(A_{{}_{u_{5}}}-\mu\tan\beta)\over
m_{{}_{\tilde{t}_{3}^{\prime}}}^{2}-m_{{}_{\tilde{t}_{4}^{\prime}}}^{2}}\ln{m_{{}_{\tilde{t}_{3}^{\prime}}}^{2}\over
m_{{}_{\tilde{t}_{4}^{\prime}}}^{2}}$ $\displaystyle\hskip
34.14322pt+{A_{{}_{u_{5}}}^{2}(A_{{}_{u_{5}}}-\mu\tan\beta)^{2}\over(m_{{}_{\tilde{t}_{3}^{\prime}}}^{2}-m_{{}_{\tilde{t}_{4}^{\prime}}}^{2})^{2}}g(m_{{}_{\tilde{t}_{3}^{\prime}}},m_{{}_{\tilde{t}_{4}^{\prime}}})\Big{\\}}$
$\displaystyle\hskip
34.14322pt+{3G_{{}_{F}}Y_{{}_{d_{4}}}^{4}\upsilon^{4}\over
4\sqrt{2}\pi^{2}\cos^{2}\beta}\Big{\\{}\ln{m_{{}_{\tilde{b}_{1}^{\prime}}}m_{{}_{\tilde{b}_{2}^{\prime}}}\over
m_{{}_{b_{4}}}^{2}}+{A_{{}_{d_{4}}}(A_{{}_{d_{4}}}-\mu\tan\beta)\over
m_{{}_{\tilde{b}_{1}^{\prime}}}^{2}-m_{{}_{\tilde{b}_{2}^{\prime}}}^{2}}\ln{m_{{}_{\tilde{b}_{1}^{\prime}}}^{2}\over
m_{{}_{\tilde{b}_{2}^{\prime}}}^{2}}$ $\displaystyle\hskip
34.14322pt+{A_{{}_{d_{4}}}^{2}(A_{{}_{d_{4}}}-\mu\tan\beta)^{2}\over(m_{{}_{\tilde{b}_{1}^{\prime}}}^{2}-m_{{}_{\tilde{b}_{2}^{\prime}}}^{2})^{2}}g(m_{{}_{\tilde{b}_{1}^{\prime}}},m_{{}_{\tilde{b}_{2}^{\prime}}})\Big{\\}}$
$\displaystyle\hskip
34.14322pt+{3G_{{}_{F}}Y_{{}_{d_{5}}}^{4}\upsilon^{4}\over
4\sqrt{2}\pi^{2}\sin^{2}\beta}\cdot{\mu^{2}(A_{{}_{d_{5}}}-\mu\cot\beta)^{2}\over(m_{{}_{\tilde{b}_{3}^{\prime}}}^{2}-m_{{}_{\tilde{b}_{4}^{\prime}}}^{2})^{2}}g(m_{{}_{\tilde{b}_{3}^{\prime}}},m_{{}_{\tilde{b}_{4}^{\prime}}})\;,$
$\displaystyle\Delta_{12}^{B}={3G_{{}_{F}}Y_{{}_{u_{4}}}^{4}\upsilon^{4}\over
8\sqrt{2}\pi^{2}\sin^{2}\beta}\cdot{\mu(-A_{{}_{u_{4}}}+\mu\cot\beta)\over
m_{{}_{\tilde{t}_{1}^{\prime}}}^{2}-m_{{}_{\tilde{t}_{2}^{\prime}}}^{2}}\Big{\\{}\ln{m_{{}_{\tilde{t}_{1}^{\prime}}}\over
m_{{}_{\tilde{t}_{2}^{\prime}}}}+{A_{{}_{u_{4}}}(A_{{}_{u_{4}}}-\mu\cot\beta)\over
m_{{}_{\tilde{t}_{1}^{\prime}}}^{2}-m_{{}_{\tilde{t}_{2}^{\prime}}}^{2}}g(m_{{}_{\tilde{t}_{1}^{\prime}}},m_{{}_{\tilde{t}_{2}^{\prime}}})\Big{\\}}$
$\displaystyle\hskip
34.14322pt+{3G_{{}_{F}}Y_{{}_{u_{5}}}^{4}\upsilon^{4}\over
8\sqrt{2}\pi^{2}\cos^{2}\beta}\cdot{\mu(-A_{{}_{u_{5}}}+\mu\tan\beta)\over
m_{{}_{\tilde{t}_{3}^{\prime}}}^{2}-m_{{}_{\tilde{t}_{4}^{\prime}}}^{2}}\Big{\\{}\ln{m_{{}_{\tilde{t}_{3}^{\prime}}}\over
m_{{}_{\tilde{t}_{4}^{\prime}}}}+{A_{{}_{u_{5}}}(A_{{}_{u_{5}}}-\mu\tan\beta)\over
m_{{}_{\tilde{t}_{3}^{\prime}}}^{2}-m_{{}_{\tilde{t}_{4}^{\prime}}}^{2}}g(m_{{}_{\tilde{t}_{3}^{\prime}}},m_{{}_{\tilde{t}_{4}^{\prime}}})\Big{\\}}$
$\displaystyle\hskip
34.14322pt+{3G_{{}_{F}}Y_{{}_{d_{4}}}^{4}\upsilon^{4}\over
8\sqrt{2}\pi^{2}\cos^{2}\beta}\cdot{\mu(-A_{{}_{d_{4}}}+\mu\tan\beta)\over
m_{{}_{\tilde{d}_{1}^{\prime}}}^{2}-m_{{}_{\tilde{d}_{2}^{\prime}}}^{2}}\Big{\\{}\ln{m_{{}_{\tilde{d}_{1}^{\prime}}}\over
m_{{}_{\tilde{d}_{2}^{\prime}}}}+{A_{{}_{d_{4}}}(A_{{}_{d_{4}}}-\mu\tan\beta)\over
m_{{}_{\tilde{d}_{1}^{\prime}}}^{2}-m_{{}_{\tilde{d}_{2}^{\prime}}}^{2}}g(m_{{}_{\tilde{d}_{1}^{\prime}}},m_{{}_{\tilde{d}_{2}^{\prime}}})\Big{\\}}$
$\displaystyle\hskip
34.14322pt+{3G_{{}_{F}}Y_{{}_{d_{5}}}^{4}\upsilon^{4}\over
8\sqrt{2}\pi^{2}\sin^{2}\beta}\cdot{\mu(-A_{{}_{d_{5}}}+\mu\cot\beta)\over
m_{{}_{\tilde{b}_{3}^{\prime}}}^{2}-m_{{}_{\tilde{b}_{4}^{\prime}}}^{2}}\Big{\\{}\ln{m_{{}_{\tilde{b}_{3}^{\prime}}}\over
m_{{}_{\tilde{b}_{4}^{\prime}}}}+{A_{{}_{d_{5}}}(A_{{}_{d_{5}}}-\mu\cot\beta)\over
m_{{}_{\tilde{b}_{3}^{\prime}}}^{2}-m_{{}_{\tilde{b}_{4}^{\prime}}}^{2}}g(m_{{}_{\tilde{b}_{3}^{\prime}}},m_{{}_{\tilde{b}_{4}^{\prime}}})\Big{\\}}\;,$
$\displaystyle\Delta_{22}^{B}={3G_{{}_{F}}Y_{{}_{u_{4}}}^{4}\upsilon^{4}\over
4\sqrt{2}\pi^{2}\sin^{2}\beta}\Big{\\{}\ln{m_{{}_{\tilde{t}_{1}^{\prime}}}m_{{}_{\tilde{t}_{2}^{\prime}}}\over
m_{{}_{t_{4}}}^{2}}+{A_{{}_{u_{4}}}(A_{{}_{u_{4}}}-\mu\cot\beta)\over
m_{{}_{\tilde{t}_{1}^{\prime}}}^{2}-m_{{}_{\tilde{t}_{2}^{\prime}}}^{2}}\ln{m_{{}_{\tilde{t}_{1}^{\prime}}}^{2}\over
m_{{}_{\tilde{t}_{2}^{\prime}}}^{2}}$ $\displaystyle\hskip
34.14322pt+{A_{{}_{u_{4}}}^{2}(A_{{}_{u_{4}}}-\mu\cot\beta)^{2}\over(m_{{}_{\tilde{t}_{1}^{\prime}}}^{2}-m_{{}_{\tilde{t}_{2}^{\prime}}}^{2})^{2}}g(m_{{}_{\tilde{t}_{1}^{\prime}}},m_{{}_{\tilde{t}_{2}^{\prime}}})\Big{\\}}$
$\displaystyle\hskip
34.14322pt+{3G_{{}_{F}}Y_{{}_{u_{5}}}^{4}\upsilon^{4}\over
4\sqrt{2}\pi^{2}\cos^{2}\beta}\cdot{\mu^{2}(A_{{}_{u_{5}}}-\mu\tan\beta)^{2}\over(m_{{}_{\tilde{t}_{3}^{\prime}}}^{2}-m_{{}_{\tilde{t}_{4}^{\prime}}}^{2})^{2}}g(m_{{}_{\tilde{t}_{3}^{\prime}}},m_{{}_{\tilde{t}_{4}^{\prime}}})$
$\displaystyle\hskip
34.14322pt+{3G_{{}_{F}}Y_{{}_{d_{4}}}^{4}\upsilon^{4}\over
4\sqrt{2}\pi^{2}\cos^{2}\beta}\cdot{\mu^{2}(A_{{}_{d_{4}}}-\mu\tan\beta)^{2}\over(m_{{}_{\tilde{b}_{1}^{\prime}}}^{2}-m_{{}_{\tilde{b}_{2}^{\prime}}}^{2})^{2}}g(m_{{}_{\tilde{b}_{1}^{\prime}}},m_{{}_{\tilde{b}_{2}^{\prime}}})$
$\displaystyle\hskip
34.14322pt+{3G_{{}_{F}}Y_{{}_{d_{5}}}^{4}\upsilon^{4}\over
4\sqrt{2}\pi^{2}\sin^{2}\beta}\Big{\\{}\ln{m_{{}_{\tilde{b}_{3}^{\prime}}}m_{{}_{\tilde{b}_{4}^{\prime}}}\over
m_{{}_{b_{5}}}^{2}}+{A_{{}_{d_{5}}}(A_{{}_{d_{5}}}-\mu\cot\beta)\over
m_{{}_{\tilde{b}_{3}^{\prime}}}^{2}-m_{{}_{\tilde{b}_{4}^{\prime}}}^{2}}\ln{m_{{}_{\tilde{b}_{3}^{\prime}}}^{2}\over
m_{{}_{\tilde{b}_{4}^{\prime}}}^{2}}$ $\displaystyle\hskip
34.14322pt+{A_{{}_{d_{5}}}^{2}(A_{{}_{d_{5}}}-\mu\cot\beta)^{2}\over(m_{{}_{\tilde{b}_{3}^{\prime}}}^{2}-m_{{}_{\tilde{b}_{4}^{\prime}}}^{2})^{2}}g(m_{{}_{\tilde{b}_{3}^{\prime}}},m_{{}_{\tilde{b}_{4}^{\prime}}})\Big{\\}}\;,$
(82)
here $\upsilon=\sqrt{\upsilon_{{}_{u}}^{2}+\upsilon_{{}_{d}}^{2}}\simeq
246\;{\rm GeV}$ and
$\displaystyle g(x,y)=1-{x^{2}+y^{2}\over x^{2}-y^{2}}\ln{x\over y}\;.$ (83)
To derive the results presented in Eq.(82), we adopt the assumption
$|\lambda_{{}_{Q}}\upsilon_{{}_{B}}|,\;|\lambda_{{}_{u}}\overline{\upsilon}_{{}_{B}}|,\;|\lambda_{{}_{d}}\overline{\upsilon}_{{}_{B}}|\gg|Y_{{}_{u_{4}}}\upsilon|,\;|Y_{{}_{u_{5}}}\upsilon|,\;|Y_{{}_{d_{4}}}\upsilon|,\;|Y_{{}_{d_{5}}}\upsilon|$
in our calculation. Similarly, one can obtain the one-loop radiative
corrections from exotic lepton fields presented in appendix D.
The most stringent constraint on the parameter space of the BLMSSM is that the
mass squared matrix in Eq.(79) should produce an eigenvalue around $(125\;{\rm
GeV})^{2}$ as the mass squared of the lightest neutral CP-even Higgs. The current
combination of the ATLAS and CMS data gives [CMS, ATLAS, CMS-ATLAS]:
$\displaystyle m_{{}_{h^{0}}}=125.9\pm 2.1\;{\rm GeV}\;,$ (84)
which constrains the parameter space of the BLMSSM strongly.
## IV $gg\rightarrow h^{0}$ and
$h^{0}\rightarrow\gamma\gamma,\;ZZ^{*},\;WW^{*}$
The Higgs is produced chiefly through gluon fusion at the LHC. In the SM,
the leading order (LO) contribution originates from the one-loop diagram which
involves virtual top quarks. The cross section for this process is known to
the next-to-next-to-leading order (NNLO) [NNLO], which enhances the LO result
by 80-100%. Furthermore, any new particle which couples strongly to the
Higgs can significantly modify this cross section. In the supersymmetric
extension of the SM considered here, the LO decay width for the process
$h^{0}\rightarrow gg$ is given by (see Ref. [Gamma1] and references therein)
$\displaystyle\Gamma_{{}_{NP}}(h^{0}\rightarrow
gg)={G_{{}_{F}}\alpha_{s}^{2}m_{{}_{h^{0}}}^{3}\over
64\sqrt{2}\pi^{3}}\Big{|}\sum\limits_{q}g_{{}_{h^{0}qq}}A_{1/2}(x_{q})+\sum\limits_{\tilde{q}}g_{{}_{h^{0}\tilde{q}\tilde{q}}}{m_{{}_{\rm
Z}}^{2}\over m_{{}_{\tilde{q}}}^{2}}A_{0}(x_{{}_{\tilde{q}}})\Big{|}^{2}\;,$
(85)
with $x_{a}=m_{{}_{h^{0}}}^{2}/(4m_{a}^{2})$. In addition,
$q=t,\;b,\;t_{{}_{4}},\;t_{{}_{5}},\;b_{{}_{4}},\;b_{{}_{5}}$ and
$\tilde{q}=\tilde{t}_{{}_{1,2}},\;\tilde{b}_{{}_{1,2}},\;\tilde{\cal
U}_{i},\;\tilde{\cal D}_{i}\;(i=1,\;2,\;3,\;4)$. The concrete expressions for
$g_{{}_{h^{0}tt}},\;g_{{}_{h^{0}bb}},\;g_{{}_{h^{0}\tilde{t}_{i}\tilde{t}_{i}}},\;g_{{}_{h^{0}\tilde{b}_{i}\tilde{b}_{i}}},\;(i=1,\;2)$
can be found in the literature [BL_h1], and
$\displaystyle g_{{}_{h^{0}t_{4}t_{4}}}=-{\sqrt{2}m_{{}_{\rm W}}s_{{}_{\rm
W}}\over
em_{{}_{t_{4}}}}\Big{[}Y_{{}_{u_{4}}}(W_{{}_{t}}^{T})_{{}_{12}}(U_{{}_{t}})_{{}_{11}}\cos\alpha+Y_{{}_{u_{5}}}(W_{{}_{t}}^{T})_{{}_{11}}(U_{{}_{t}})_{{}_{21}}\sin\alpha\Big{]}\;,$
$\displaystyle g_{{}_{h^{0}t_{5}t_{5}}}=-{\sqrt{2}m_{{}_{\rm W}}s_{{}_{\rm
W}}\over
em_{{}_{t_{5}}}}\Big{[}Y_{{}_{u_{4}}}(W_{{}_{t}}^{T})_{{}_{22}}(U_{{}_{t}})_{{}_{12}}\cos\alpha+Y_{{}_{u_{5}}}(W_{{}_{t}}^{T})_{{}_{21}}(U_{{}_{t}})_{{}_{22}}\sin\alpha\Big{]}\;,$
$\displaystyle g_{{}_{h^{0}b_{4}b_{4}}}=-{\sqrt{2}m_{{}_{\rm W}}s_{{}_{\rm
W}}\over
em_{{}_{b_{4}}}}\Big{[}Y_{{}_{d_{4}}}(W_{{}_{b}}^{T})_{{}_{12}}(U_{{}_{b}})_{{}_{11}}\sin\alpha-
Y_{{}_{d_{5}}}(W_{{}_{b}}^{T})_{{}_{11}}(U_{{}_{b}})_{{}_{21}}\cos\alpha\Big{]}\;,$
$\displaystyle g_{{}_{h^{0}b_{5}b_{5}}}=-{\sqrt{2}m_{{}_{\rm W}}s_{{}_{\rm
W}}\over
em_{{}_{b_{5}}}}\Big{[}Y_{{}_{d_{4}}}(W_{{}_{b}}^{T})_{{}_{22}}(U_{{}_{b}})_{{}_{12}}\sin\alpha-
Y_{{}_{d_{5}}}(W_{{}_{b}}^{T})_{{}_{21}}(U_{{}_{b}})_{{}_{22}}\cos\alpha\Big{]}\;,$
$\displaystyle g_{{}_{h^{0}\tilde{\cal U}_{i}\tilde{\cal U}_{i}}}=-{m_{{}_{\rm
W}}^{2}s_{{}_{\rm W}}\over em_{{}_{\tilde{\cal
U}_{i}}}^{2}}\Big{[}\xi_{{}_{uii}}^{S}\cos\alpha-\xi_{{}_{dii}}^{S}\sin\alpha\Big{]}\;,\;\;(i=1,\;2,\;3,\;4)\;,$
$\displaystyle g_{{}_{h^{0}\tilde{\cal D}_{i}\tilde{\cal D}_{i}}}=-{m_{{}_{\rm
W}}^{2}s_{{}_{\rm W}}\over em_{{}_{\tilde{\cal
D}_{i}}}^{2}}\Big{[}\eta_{{}_{uii}}^{S}\cos\alpha-\eta_{{}_{dii}}^{S}\sin\alpha\Big{]}\;,\;\;(i=1,\;2,\;3,\;4)\;.$
(86)
Here, we adopt the abbreviation $s_{{}_{\rm W}}=\sin\theta_{{}_{\rm W}}$ with
$\theta_{{}_{\rm W}}$ denoting the Weinberg angle. Furthermore, $e$ is the
electromagnetic coupling constant, and the concrete expressions of
$\xi_{{}_{uii}}^{S},\;\xi_{{}_{dii}}^{S},\;\eta_{{}_{uii}}^{S},\;\eta_{{}_{dii}}^{S}$
can be found in appendix C. The form factors $A_{1/2},\;A_{0}$ in Eq.(85) are
defined as
$\displaystyle A_{1/2}(x)=2\Big{[}x+(x-1)g(x)\Big{]}/x^{2}\;,$ $\displaystyle
A_{0}(x)=-(x-g(x))/x^{2}\;,$ (87)
with
$\displaystyle g(x)=\left\\{\begin{array}[]{l}\arcsin^{2}\sqrt{x},\;x\leq 1\\\
-{1\over 4}\Big{[}\ln{1+\sqrt{1-1/x}\over
1-\sqrt{1-1/x}}-i\pi\Big{]}^{2},\;x>1\;.\end{array}\right.$ (90)
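As an illustration (not part of the original analysis), the loop functions of Eqs. (87) and (90) and the width of Eq. (85) can be transcribed directly into Python. The couplings $g_{{}_{h^{0}qq}}$, $g_{{}_{h^{0}\tilde{q}\tilde{q}}}$ are model inputs (Eq. (86) and Ref.BL_h1) and are simply passed in as lists here; numerical defaults are only for orientation.

```python
import numpy as np

def g_loop(x):
    """Eq. (90); returns a complex number (an imaginary part appears for x > 1)."""
    if x <= 1.0:
        return complex(np.arcsin(np.sqrt(x))**2)
    r = np.sqrt(1.0 - 1.0/x)
    return -0.25*(np.log((1.0 + r)/(1.0 - r)) - 1j*np.pi)**2

def A_half(x):
    """Spin-1/2 form factor of Eq. (87); tends to 4/3 for a heavy particle (x -> 0)."""
    return 2.0*(x + (x - 1.0)*g_loop(x))/x**2

def A_zero(x):
    """Spin-0 form factor of Eq. (87); tends to 1/3 for x -> 0."""
    return -(x - g_loop(x))/x**2

def gamma_h_gg(mh, quarks, squarks, GF=1.166e-5, alpha_s=0.118, mZ=91.19):
    """Eq. (85).  quarks: list of (g_hqq, m_q); squarks: list of (g_hsqsq, m_sq).
    All couplings are model inputs, not computed here; GF in GeV^-2, masses in GeV."""
    amp  = sum(gq*A_half(mh**2/(4.0*mq**2)) for gq, mq in quarks)
    amp += sum(gs*(mZ**2/ms**2)*A_zero(mh**2/(4.0*ms**2)) for gs, ms in squarks)
    return GF*alpha_s**2*mh**3/(64.0*np.sqrt(2.0)*np.pi**3)*abs(amp)**2
```

The heavy-particle limits $A_{1/2}\rightarrow 4/3$ and $A_{0}\rightarrow 1/3$ provide a quick numerical check of the transcription.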
The Higgs decay to diphoton also proceeds through loop diagrams; in the SM, the
LO contributions are derived from the one-loop diagrams containing the virtual
charged gauge boson $W^{\pm}$ or virtual top quarks. In the BLMSSM,
the exotic fermions $t_{{}_{4,5}},\;b_{{}_{4,5}},\;e_{{}_{4,5}}$ together with
their supersymmetric partners contribute corrections to the Higgs-to-diphoton
decay width at LO, and the corresponding expression is written as
$\displaystyle\Gamma_{{}_{NP}}(h^{0}\rightarrow\gamma\gamma)={G_{{}_{F}}\alpha^{2}m_{{}_{h^{0}}}^{3}\over
128\sqrt{2}\pi^{3}}\Big{|}\sum\limits_{f}N_{c}Q_{{}_{f}}^{2}g_{{}_{h^{0}ff}}A_{1/2}(x_{f})+g_{{}_{h^{0}WW}}A_{1}(x_{{}_{\rm
W}})$ $\displaystyle\hskip 91.04872pt+g_{{}_{h^{0}H^{+}H^{-}}}{m_{{}_{\rm
W}}^{2}\over
m_{{}_{H^{\pm}}}^{2}}A_{0}(x_{{}_{H^{\pm}}})+\sum\limits_{i=1}^{2}g_{{}_{h^{0}\chi_{i}^{+}\chi_{i}^{-}}}{m_{{}_{\rm
W}}\over m_{{}_{\chi_{i}}}}A_{1/2}(x_{{}_{\chi_{i}}})$ $\displaystyle\hskip
91.04872pt+\sum\limits_{\tilde{f}}N_{c}Q_{{}_{f}}^{2}g_{{}_{h^{0}\tilde{f}\tilde{f}}}{m_{{}_{\rm
Z}}^{2}\over m_{{}_{\tilde{f}}}^{2}}A_{0}(x_{{}_{\tilde{f}}})\Big{|}^{2}\;,$
(91)
where $g_{{}_{h^{0}WW}}=\sin(\beta-\alpha)$, and the concrete expression for the
loop function $A_{1}$ is
$\displaystyle A_{1}(x)=-\Big{[}2x^{2}+3x+3(2x-1)g(x)\Big{]}/x^{2}\;.$ (92)
The concrete expressions for
$g_{{}_{h^{0}\chi_{i}^{+}\chi_{i}^{-}}},\;g_{{}_{h^{0}H^{+}H^{-}}}$ and the
couplings between the lightest neutral CP-even Higgs and exotic
leptons/sleptons can also be found in the literature BL_h1 .
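The diphoton width of Eq. (91) can be assembled in the same way. The sketch below continues the previous listing (numpy, g_loop, A_half and A_zero are assumed from there); A_one implements Eq. (92), and every coupling in the argument lists is a model input, so the structure rather than the values is the point.

```python
def A_one(x):
    """Spin-1 form factor of Eq. (92); tends to -7 for x -> 0."""
    return -(2.0*x**2 + 3.0*x + 3.0*(2.0*x - 1.0)*g_loop(x))/x**2

def gamma_h_aa(mh, fermions, g_hWW, charged_higgs, charginos, sfermions,
               GF=1.166e-5, alpha=1.0/128.0, mW=80.4, mZ=91.19):
    """Eq. (91).  fermions: (N_c, Q_f, g_hff, m_f); charged_higgs: (g_hH+H-, m_H+);
    charginos: (g_hchichi, m_chi); sfermions: (N_c, Q_f, g_hsfsf, m_sf)."""
    x = lambda m: mh**2/(4.0*m**2)
    amp  = sum(Nc*Q**2*gf*A_half(x(mf)) for Nc, Q, gf, mf in fermions)
    amp += g_hWW*A_one(x(mW))
    amp += sum(gH*(mW**2/mH**2)*A_zero(x(mH)) for gH, mH in charged_higgs)
    amp += sum(gc*(mW/mc)*A_half(x(mc)) for gc, mc in charginos)
    amp += sum(Nc*Q**2*gs*(mZ**2/ms**2)*A_zero(x(ms)) for Nc, Q, gs, ms in sfermions)
    return GF*alpha**2*mh**3/(128.0*np.sqrt(2.0)*np.pi**3)*abs(amp)**2
```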
With a mass around $125\;{\rm GeV}$, the lightest neutral CP-even Higgs can also
decay through the modes $h^{0}\rightarrow ZZ^{*},\;h^{0}\rightarrow WW^{*}$,
where $Z^{*}/W^{*}$ denotes the off-shell neutral/charged electroweak gauge
boson. Summing over all channels available to the $W^{*}$ or $Z^{*}$, one can
write the widths as Keung1 ; HtoVV-SUSY
$\displaystyle\Gamma(h^{0}\rightarrow WW^{*})={3e^{4}m_{{}_{h^{0}}}\over
512\pi^{3}s_{{}_{\rm W}}^{4}}|g_{{}_{h^{0}WW}}|^{2}F({m_{{}_{\rm W}}\over
m_{{}_{h^{0}}}}),\;$ $\displaystyle\Gamma(h^{0}\rightarrow
ZZ^{*})={e^{4}m_{{}_{h^{0}}}\over 2048\pi^{3}s_{{}_{\rm W}}^{4}c_{{}_{\rm
W}}^{4}}|g_{{}_{h^{0}ZZ}}|^{2}\Big{(}7-{40\over 3}s_{{}_{\rm W}}^{2}+{160\over
9}s_{{}_{\rm W}}^{4}\Big{)}F({m_{{}_{\rm Z}}\over m_{{}_{h^{0}}}}),\;$ (93)
with $g_{{}_{h^{0}ZZ}}=g_{{}_{h^{0}WW}}$ and the abbreviation $c_{{}_{\rm
W}}=\cos\theta_{{}_{\rm W}}$. The form factor $F(x)$ is given as
$\displaystyle F(x)=-(1-x^{2})\Big{(}{47\over 2}x^{2}-{13\over 2}+{1\over
x^{2}}\Big{)}-3(1-6x^{2}+4x^{4})\ln x$ $\displaystyle\hskip
42.67912pt+{3(1-8x^{2}+20x^{4})\over\sqrt{4x^{2}-1}}\cos^{-1}\Big{(}{3x^{2}-1\over
2x^{3}}\Big{)}\;.$ (94)
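For the tree-level widths of Eq. (93) only the function $F(x)$ of Eq. (94) and a few SM inputs are needed. A direct transcription follows (our own sketch; the SM defaults are for illustration only, with $e^{2}=4\pi\alpha$):

```python
import numpy as np

def F(x):
    """Eq. (94); x = m_V/m_h, valid for 1/2 < x < 1 (one gauge boson off shell)."""
    return (-(1.0 - x**2)*(47.0/2.0*x**2 - 13.0/2.0 + 1.0/x**2)
            - 3.0*(1.0 - 6.0*x**2 + 4.0*x**4)*np.log(x)
            + 3.0*(1.0 - 8.0*x**2 + 20.0*x**4)/np.sqrt(4.0*x**2 - 1.0)
            * np.arccos((3.0*x**2 - 1.0)/(2.0*x**3)))

def gamma_h_WW(mh, g_hWW, alpha=1.0/128.0, sw2=0.23, mW=80.4):
    """First relation of Eq. (93)."""
    e2 = 4.0*np.pi*alpha
    return 3.0*e2**2*mh/(512.0*np.pi**3*sw2**2)*abs(g_hWW)**2*F(mW/mh)

def gamma_h_ZZ(mh, g_hZZ, alpha=1.0/128.0, sw2=0.23, mZ=91.19):
    """Second relation of Eq. (93)."""
    e2, cw2 = 4.0*np.pi*alpha, 1.0 - sw2
    return (e2**2*mh/(2048.0*np.pi**3*sw2**2*cw2**2)*abs(g_hZZ)**2
            * (7.0 - 40.0/3.0*sw2 + 160.0/9.0*sw2**2)*F(mZ/mh))
```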
Besides the Higgs discovery, the ATLAS and CMS experiments have both observed
an excess in Higgs production and decay into the diphoton channel, which is a
factor of $1.4\sim 2$ larger than the SM expectation. The observed signals
for the diphoton and $ZZ^{*},\;WW^{*}$ channels are quantified by the ratios
$\displaystyle R_{\gamma\gamma}={\Gamma_{{}_{NP}}(h_{0}\rightarrow
gg)\Gamma_{{}_{NP}}(h_{0}\rightarrow\gamma\gamma)\over\Gamma_{{}_{SM}}(h_{0}\rightarrow
gg)\Gamma_{{}_{SM}}(h_{0}\rightarrow\gamma\gamma)}\;,$ $\displaystyle
R_{VV^{*}}={\Gamma_{{}_{NP}}(h_{0}\rightarrow
gg)\Gamma_{{}_{NP}}(h_{0}\rightarrow
VV^{*})\over\Gamma_{{}_{SM}}(h_{0}\rightarrow
gg)\Gamma_{{}_{SM}}(h_{0}\rightarrow VV^{*})}\;,\;\;(V=Z,\;W)\;.$ (95)
The current values of the ratios are CMS ; ATLAS ; CMS-ATLAS :
$\displaystyle{\rm ATLAS+CMS}:\;\;R_{\gamma\gamma}=1.77\pm 0.33\;,$
$\displaystyle{\rm ATLAS+CMS}:\;\;R_{VV^{*}}=0.94\pm 0.40\;,(V=Z,\;W)\;.$ (96)
Note that the combinations of the ATLAS and CMS results are taken from
Ref.CMS-ATLAS .
## V Numerical analyses
Figure 1: For
$Y_{{}_{u_{5}}}=0.7Y_{b},\;Y_{{}_{d_{5}}}=0.13Y_{t},\;m_{{}_{\tilde{Q}_{4}}}=790\;{\rm GeV}$
and $A_{t}=-1\;{\rm TeV}$: (a) $R_{\gamma\gamma}$ (solid line) and
$R_{VV^{*}}$ (dashed line) versus the parameter $\mu$; (b) $m_{{}_{A^{0}}}$
(solid line) and $m_{{}_{H^{0}}}$ (dashed line) versus the parameter $\mu$.
As mentioned above, the most stringent constraint on the parameter space is
that the $2\times 2$ mass squared matrix in Eq.(79) should yield a lightest
eigenstate with mass $m_{{}_{h_{0}}}\simeq 125.9\;{\rm GeV}$. In order to
satisfy this condition, we require the tree-level mass of the CP-odd Higgs
$m_{{}_{A^{0}}}$ to satisfy
$\displaystyle m_{{}_{A^{0}}}^{2}={m_{{}_{h_{0}}}^{2}(m_{{}_{\rm
z}}^{2}-m_{{}_{h_{0}}}^{2}+\Delta_{{}_{11}}+\Delta_{{}_{22}})-m_{{}_{\rm
z}}^{2}\Delta_{{}_{A}}+\Delta_{{}_{12}}^{2}-\Delta_{{}_{11}}\Delta_{{}_{22}}\over-
m_{{}_{h_{0}}}^{2}+m_{{}_{\rm z}}^{2}\cos^{2}2\beta+\Delta_{{}_{B}}}\;,$ (97)
where
$\displaystyle\Delta_{{}_{A}}=\sin^{2}\beta\Delta_{{}_{11}}+\cos^{2}\beta\Delta_{{}_{22}}+\sin
2\beta\Delta_{{}_{12}}\;,$
$\displaystyle\Delta_{{}_{B}}=\cos^{2}\beta\Delta_{{}_{11}}+\sin^{2}\beta\Delta_{{}_{22}}+\sin
2\beta\Delta_{{}_{12}}\;.$ (98)
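Eq. (97) simply inverts the lightest-eigenvalue condition of the CP-even mass matrix for the tree-level $m_{{}_{A^{0}}}^{2}$. A direct transcription, where Delta is the $2\times 2$ array of radiative corrections $\Delta_{{}_{ij}}$ (a sketch, assuming GeV units throughout):

```python
import numpy as np

def mA0_squared_tree(mh0, mZ, tanb, Delta):
    """Eq. (97), with Delta_A and Delta_B built as in Eq. (98)."""
    beta = np.arctan(tanb)
    s2, c2, s2b = np.sin(beta)**2, np.cos(beta)**2, np.sin(2.0*beta)
    D11, D12, D22 = Delta[0, 0], Delta[0, 1], Delta[1, 1]
    DA = s2*D11 + c2*D22 + s2b*D12          # Eq. (98)
    DB = c2*D11 + s2*D22 + s2b*D12          # Eq. (98)
    num = mh0**2*(mZ**2 - mh0**2 + D11 + D22) - mZ**2*DA + D12**2 - D11*D22
    den = -mh0**2 + mZ**2*np.cos(2.0*beta)**2 + DB
    return num/den
```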
Figure 2: For
$Y_{{}_{u_{5}}}=0.7Y_{b},\;Y_{{}_{d_{5}}}=0.13Y_{t},\;m_{{}_{\tilde{Q}_{4}}}=790\;{\rm GeV}$
and $\mu=-800\;{\rm GeV}$: (a) $R_{\gamma\gamma}$ (solid line) and
$R_{VV^{*}}$ (dashed line) versus the parameter $A_{t}$; (b) $m_{{}_{A^{0}}}$
(solid line) and $m_{{}_{H^{0}}}$ (dashed line) versus the parameter $A_{t}$.
Scanning the parameter space, we find that the predictions for
$R_{\gamma\gamma}$, $R_{VV^{*}}$ and the masses of the heaviest CP-even Higgs
and the CP-odd Higgs depend acutely on $\tan\beta$ once
$m_{{}_{h^{0}}}=125.9\;{\rm GeV}$ is imposed. In our numerical analysis, we
adopt the following ansatz for the relevant parameter space
$\displaystyle B_{4}=L_{4}={3\over 2}\;,$ $\displaystyle
m_{{}_{\tilde{Q}_{3}}}=m_{{}_{\tilde{U}_{3}}}=m_{{}_{\tilde{D}_{3}}}=1\;{\rm
TeV}\;,$ $\displaystyle
m_{{}_{\tilde{U}_{4}}}=m_{{}_{\tilde{D}_{4}}}=m_{{}_{\tilde{Q}_{5}}}=m_{{}_{\tilde{U}_{5}}}=m_{{}_{\tilde{D}_{5}}}=1\;{\rm
TeV}\;,$ $\displaystyle
m_{{}_{\tilde{L}_{4}}}=m_{{}_{\tilde{\nu}_{4}}}=m_{{}_{\tilde{E}_{4}}}=m_{{}_{\tilde{L}_{5}}}=m_{{}_{\tilde{\nu}_{5}}}=m_{{}_{\tilde{E}_{5}}}=1\;{\rm
TeV}\;,$ $\displaystyle m_{{}_{Z_{B}}}=m_{{}_{Z_{L}}}=1\;{\rm TeV}\;,$
$\displaystyle
A_{{}_{\nu_{4}}}=A_{{}_{e_{4}}}=A_{{}_{\nu_{5}}}=A_{{}_{e_{5}}}=A_{{}_{d_{4}}}=A_{{}_{u_{5}}}=A_{{}_{d_{5}}}=550\;{\rm
GeV}\;,$
$\displaystyle\upsilon_{{}_{B_{t}}}=\sqrt{\upsilon_{{}_{B}}^{2}+\overline{\upsilon}_{{}_{B}}^{2}}=3\;{\rm
TeV}\;,\;\;\upsilon_{{}_{L_{t}}}=\sqrt{\upsilon_{{}_{L}}^{2}+\overline{\upsilon}_{{}_{L}}^{2}}=3\;{\rm
TeV}\;,$ $\displaystyle
A_{{}_{BQ}}=A_{{}_{BU}}=A_{{}_{BD}}=-A_{{}_{b}}=1\;{\rm TeV}\;,$
$\displaystyle
Y_{{}_{u_{4}}}=0.76\;Y_{t}\;,\;\;Y_{{}_{d_{4}}}=0.7\;Y_{b}\;,\;\;\lambda_{{}_{Q}}=\lambda_{{}_{u}}=\lambda_{{}_{d}}=0.5$
$\displaystyle m_{2}=750\;{\rm GeV}\;,\;\;\;\mu_{{}_{B}}=500\;{\rm GeV}\;,$
$\displaystyle\tan\beta=\tan\beta_{{}_{B}}=\tan\beta_{{}_{L}}=2\;,$ (99)
to reduce the number of free parameters in the model considered here.
Furthermore, we choose the masses for exotic leptons from Ref.BL_h1 :
$\displaystyle m_{{}_{\nu_{4}}}=m_{{}_{\nu_{5}}}=90\;{\rm
GeV}\;,\;\;m_{{}_{e_{4}}}=m_{{}_{e_{5}}}=100\;{\rm GeV}\;.$ (100)
Figure 3: For
$Y_{{}_{u_{5}}}=0.7Y_{b},\;Y_{{}_{d_{5}}}=0.13Y_{t},\;A_{t}=-1\;{\rm TeV}$ and
$\mu=-800\;{\rm GeV}$: (a) $R_{\gamma\gamma}$ (solid line) and $R_{VV^{*}}$
(dashed line) versus the parameter $m_{{}_{\tilde{Q}_{4}}}$; (b) $m_{{}_{A^{0}}}$
(solid line) and $m_{{}_{H^{0}}}$ (dashed line) versus the parameter
$m_{{}_{\tilde{Q}_{4}}}$.
For the relevant parameters in the SM, we choose PDG
$\displaystyle\alpha_{s}(m_{{}_{\rm Z}})=0.118\;,\;\;\alpha(m_{{}_{\rm
Z}})=1/128\;,\;\;s_{{}_{\rm W}}^{2}(m_{{}_{\rm Z}})=0.23\;,$ $\displaystyle
m_{t}=174.2\;{\rm GeV}\;,\;\;m_{b}=4.2\;{\rm GeV}\;,\;\;m_{{}_{\rm
W}}=80.4\;{\rm GeV}\;.$ (101)
Figure 4: For $Y_{{}_{u_{5}}}=0.7Y_{b},\;A_{t}=-1\;{\rm TeV}$ and
$\mu=-790\;{\rm GeV}$: (a) $R_{\gamma\gamma}$ (solid line) and $R_{VV^{*}}$
(dashed line) versus the ratio $Y_{{}_{d_{5}}}/Y_{t}$; (b) $m_{{}_{A^{0}}}$
(solid line) and $m_{{}_{H^{0}}}$ (dashed line) versus the ratio
$Y_{{}_{d_{5}}}/Y_{t}$.
Considering that the CMS collaboration has excluded an SM Higgs with mass in
the range $127.5\;{\rm GeV}-600\;{\rm GeV}$, we require the predicted masses of
the heaviest CP-even Higgs and the CP-odd Higgs to satisfy
$m_{{}_{A^{0}}}\geq 600\;{\rm GeV}$ and $m_{{}_{H^{0}}}\geq 600\;{\rm GeV}$,
respectively. Choosing
$Y_{{}_{u_{5}}}=Y_{{}_{d_{4}}}=0.7Y_{b},\;Y_{{}_{d_{5}}}=0.13Y_{t},\;m_{{}_{\tilde{Q}_{4}}}=790\;{\rm
GeV}$ and $A_{t}=-1\;{\rm TeV}$, we plot $R_{\gamma\gamma}$ (solid line) and
$R_{VV^{*}}$ (dashed line) as functions of the parameter $\mu$ in Fig.1(a), and
$m_{{}_{A^{0}}}$ (solid line) and $m_{{}_{H^{0}}}$ (dashed line) as functions
of $\mu$ in Fig.1(b). Under our assumptions on the relevant parameter space of
the BLMSSM, we find that the predictions for $R_{\gamma\gamma}$, $R_{VV^{*}}$,
$m_{{}_{A^{0}}}$ and $m_{{}_{H^{0}}}$ depend acutely on $\mu$. For
$-900\;{\rm GeV}\leq\mu\leq-800\;{\rm GeV}$, the theoretical predictions for
$R_{\gamma\gamma}$ and $R_{VV^{*}}$ both agree with the experimental data in
Eq.(96), while the masses of the heaviest CP-even Higgs and the CP-odd Higgs
simultaneously satisfy $m_{{}_{A^{0}}}\sim m_{{}_{H^{0}}}\geq 700\;{\rm GeV}$.
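The $\mu$ scan behind Fig. 1 can be organized as a simple filter loop. In the sketch below, compute_higgs_observables is a hypothetical callable standing in for the full model computation (diagonalizing Eq. (79) with the corrections above and evaluating the widths of Sec. IV); the acceptance criteria are those stated in the text, namely Eq. (96) within the quoted uncertainties and $m_{{}_{A^{0}}},\;m_{{}_{H^{0}}}\geq 600\;{\rm GeV}$. The experimental constants are the ones defined in the earlier sketch.

```python
def scan_parameter(values, fixed, compute_higgs_observables, name="mu"):
    """Keep parameter points compatible with Eq. (96) and the heavy-Higgs bound."""
    accepted = []
    for v in values:
        obs = compute_higgs_observables(**{name: v}, **fixed)   # hypothetical model call
        ok = (abs(obs['R_aa'] - R_AA_EXP) <= R_AA_ERR
              and abs(obs['R_VV'] - R_VV_EXP) <= R_VV_ERR
              and obs['m_A0'] >= 600.0 and obs['m_H0'] >= 600.0)   # GeV
        if ok:
            accepted.append((v, obs))
    return accepted
```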
The parameter $A_{t}$ might also be expected to affect the predictions for
$R_{\gamma\gamma}$ and $R_{VV^{*}}$ strongly. Taking
$Y_{{}_{u_{5}}}=Y_{{}_{d_{4}}}=0.7Y_{b},\;Y_{{}_{d_{5}}}=0.13Y_{t},\;m_{{}_{\tilde{Q}_{4}}}=790\;{\rm
GeV}$ and $\mu=-800\;{\rm GeV}$, we depict $R_{\gamma\gamma}$ (solid line) and
$R_{VV^{*}}$ (dashed line) as functions of the parameter $A_{t}$ in Fig.2(a),
and $m_{{}_{A^{0}}}$ (solid line) and $m_{{}_{H^{0}}}$ (dashed line) as
functions of $A_{t}$ in Fig.2(b). Under our assumptions on the parameter space,
the dependence of $R_{\gamma\gamma}$ and $R_{VV^{*}}$ on $A_{t}$ turns out to
be very mild, whereas the predictions for $m_{{}_{A^{0}}}$ and $m_{{}_{H^{0}}}$
depend strongly on $A_{t}$. When $A_{t}\geq-1\;{\rm TeV}$, the predictions for
$R_{\gamma\gamma}$ and $R_{VV^{*}}$ both agree with the experimental data in
Eq.(96), and the masses of the heaviest CP-even Higgs and the CP-odd Higgs
satisfy $m_{{}_{A^{0}}}\sim m_{{}_{H^{0}}}\geq 700\;{\rm GeV}$ at the same
time.
Besides the parameters already present in the MSSM, the “brand new” parameters
of the BLMSSM also strongly affect the predictions for $R_{\gamma\gamma}$,
$R_{VV^{*}}$ and $m_{{}_{A^{0}}},\;m_{{}_{H^{0}}}$ once
$m_{{}_{h^{0}}}=125.9\;{\rm GeV}$ is imposed. In Fig.3, we investigate
$R_{\gamma\gamma}$, $R_{VV^{*}}$ and $m_{{}_{A^{0}}},\;m_{{}_{H^{0}}}$ versus
the soft mass of the fourth-generation left-handed scalar quarks,
$m_{{}_{\tilde{Q}_{4}}}$. The solid line in Fig.3(a) shows $R_{\gamma\gamma}$
and the dashed line shows $R_{VV^{*}}$, while the solid line in Fig.3(b) shows
$m_{{}_{A^{0}}}$ and the dashed line shows $m_{{}_{H^{0}}}$, each as a function
of $m_{{}_{\tilde{Q}_{4}}}$. The predictions for $R_{\gamma\gamma}$,
$R_{VV^{*}}$, $m_{{}_{A^{0}}}$ and $m_{{}_{H^{0}}}$ all decrease steeply with
increasing $m_{{}_{\tilde{Q}_{4}}}$. When
$m_{{}_{\tilde{Q}_{4}}}\geq 800\;{\rm GeV}$, the prediction for
$R_{\gamma\gamma}$ already lies outside the experimental range in Eq.(96). In
Fig.4, we investigate the predictions for $R_{\gamma\gamma}$, $R_{VV^{*}}$ and
$m_{{}_{A^{0}}},\;m_{{}_{H^{0}}}$ versus the Yukawa coupling of the
fifth-generation down-type quark, $Y_{{}_{d_{5}}}$. The solid line in Fig.4(a)
shows $R_{\gamma\gamma}$ and the dashed line shows $R_{VV^{*}}$, while the
solid line in Fig.4(b) shows $m_{{}_{A^{0}}}$ and the dashed line shows
$m_{{}_{H^{0}}}$, each as a function of $Y_{{}_{d_{5}}}$. The predictions for
$R_{\gamma\gamma}$ and $R_{VV^{*}}$ rise slowly with an increasing ratio
$Y_{{}_{d_{5}}}/Y_{t}$. When $Y_{{}_{d_{5}}}/Y_{t}\geq 0.5$, the predictions
for $R_{\gamma\gamma},\;R_{VV^{*}}$ exceed the experimental range in Eq.(96),
and the values of $m_{{}_{A^{0}}},\;m_{{}_{H^{0}}}$ fall below
$600\;{\rm GeV}$.
## VI Summary
In the framework of the BLMSSM, we attempt to account for the experimental data
on the Higgs recently reported by ATLAS and CMS. Assuming that the Yukawa
couplings between the Higgs doublets and the exotic quarks satisfy
$Y_{{}_{u_{4}}},\;Y_{{}_{d_{5}}}<Y_{{}_{t}}$ as well as
$Y_{{}_{d_{4}}},\;Y_{{}_{u_{5}}}<Y_{{}_{b}}$, we find that the theoretical
predictions for $R_{\gamma\gamma},\;R_{VV^{*}}$ fit the experimental data in
Eq.(96) very well when $m_{{}_{h_{0}}}=125.9\;{\rm GeV}$. Furthermore, the
predicted values of $m_{{}_{A^{0}}},\;m_{{}_{H^{0}}}$ simultaneously exceed
$600\;{\rm GeV}$ in some regions of the BLMSSM parameter space.
###### Acknowledgements.
The work has been supported by the National Natural Science Foundation of
China (NNSFC) with Grant No. 11275036, No. 11047002 and Natural Science Fund
of Hebei University with Grant No. 2011JQ05, No. 2012-242.
## Appendix A The couplings between neutral Higgs and exotic quarks
In the mass basis, the couplings between the neutral Higgs bosons and the
charge $2/3$ exotic quarks are written as
$\displaystyle{\cal
L}_{{}_{Ht^{\prime}t^{\prime}}}={1\over\sqrt{2}}\sum\limits_{i,j=1}^{2}\Big{\\{}\Big{[}Y_{{}_{u_{4}}}(W_{{}_{t}}^{\dagger})_{{}_{i2}}(U_{{}_{t}})_{{}_{1j}}\cos\alpha+Y_{{}_{u_{5}}}(W_{{}_{t}}^{\dagger})_{{}_{i1}}(U_{{}_{t}})_{{}_{2j}}\sin\alpha\Big{]}h^{0}\overline{t}_{{}_{i+3}}P_{{}_{L}}t_{{}_{j+3}}$
$\displaystyle\hskip
51.21504pt+\Big{[}Y_{{}_{u_{4}}}(U_{{}_{t}}^{\dagger})_{{}_{i1}}(W_{{}_{t}})_{{}_{2j}}\cos\alpha+Y_{{}_{u_{5}}}(U_{{}_{t}}^{\dagger})_{{}_{i2}}(W_{{}_{t}})_{{}_{1j}}\sin\alpha\Big{]}h^{0}\overline{t}_{{}_{i+3}}P_{{}_{R}}t_{{}_{j+3}}$
$\displaystyle\hskip
51.21504pt+\Big{[}Y_{{}_{u_{4}}}(W_{{}_{t}}^{\dagger})_{{}_{i2}}(U_{{}_{t}})_{{}_{1j}}\sin\alpha-
Y_{{}_{u_{5}}}(W_{{}_{t}}^{\dagger})_{{}_{i1}}(U_{{}_{t}})_{{}_{2j}}\cos\alpha\Big{]}H^{0}\overline{t}_{{}_{i+3}}P_{{}_{L}}t_{{}_{j+3}}$
$\displaystyle\hskip
51.21504pt+\Big{[}Y_{{}_{u_{4}}}(U_{{}_{t}}^{\dagger})_{{}_{i1}}(W_{{}_{t}})_{{}_{2j}}\sin\alpha-
Y_{{}_{u_{5}}}(U_{{}_{t}}^{\dagger})_{{}_{i2}}(W_{{}_{t}})_{{}_{1j}}\cos\alpha\Big{]}H^{0}\overline{t}_{{}_{i+3}}P_{{}_{R}}t_{{}_{j+3}}\Big{\\}}$
$\displaystyle\hskip
51.21504pt+{i\over\sqrt{2}}\sum\limits_{i,j=1}^{2}\Big{\\{}\Big{[}Y_{{}_{u_{4}}}(W_{{}_{t}}^{\dagger})_{{}_{i2}}(U_{{}_{t}})_{{}_{1j}}\cos\beta+Y_{{}_{u_{5}}}(W_{{}_{t}}^{\dagger})_{{}_{i1}}(U_{{}_{t}})_{{}_{2j}}\sin\beta\Big{]}A^{0}\overline{t}_{{}_{i+3}}P_{{}_{L}}t_{{}_{j+3}}$
$\displaystyle\hskip
51.21504pt-\Big{[}Y_{{}_{u_{4}}}(U_{{}_{t}}^{\dagger})_{{}_{i1}}(W_{{}_{t}})_{{}_{2j}}\cos\beta+Y_{{}_{u_{5}}}(U_{{}_{t}}^{\dagger})_{{}_{i2}}(W_{{}_{t}})_{{}_{1j}}\sin\beta\Big{]}A^{0}\overline{t}_{{}_{i+3}}P_{{}_{R}}t_{{}_{j+3}}$
$\displaystyle\hskip
51.21504pt+\Big{[}Y_{{}_{u_{4}}}(W_{{}_{t}}^{\dagger})_{{}_{i2}}(U_{{}_{t}})_{{}_{1j}}\sin\beta-
Y_{{}_{u_{5}}}(W_{{}_{t}}^{\dagger})_{{}_{i1}}(U_{{}_{t}})_{{}_{2j}}\cos\beta\Big{]}G^{0}\overline{t}_{{}_{i+3}}P_{{}_{L}}t_{{}_{j+3}}$
$\displaystyle\hskip
51.21504pt-\Big{[}Y_{{}_{u_{4}}}(U_{{}_{t}}^{\dagger})_{{}_{i1}}(W_{{}_{t}})_{{}_{2j}}\sin\beta-
Y_{{}_{u_{5}}}(U_{{}_{t}}^{\dagger})_{{}_{i2}}(W_{{}_{t}})_{{}_{1j}}\cos\beta\Big{]}G^{0}\overline{t}_{{}_{i+3}}P_{{}_{R}}t_{{}_{j+3}}\Big{\\}}$
$\displaystyle\hskip
51.21504pt-{1\over\sqrt{2}}\sum\limits_{i,j=1}^{2}\Big{\\{}\Big{[}\lambda_{{}_{u}}(W_{{}_{t}}^{\dagger})_{{}_{i2}}(U_{{}_{t}})_{{}_{2j}}\cos\alpha_{{}_{B}}-\lambda_{{}_{Q}}(W_{{}_{t}}^{\dagger})_{{}_{i1}}(U_{{}_{t}})_{{}_{1j}}\sin\alpha_{{}_{B}}\Big{]}h_{{}_{B}}^{0}\overline{t}_{{}_{i+3}}P_{{}_{L}}t_{{}_{j+3}}$
$\displaystyle\hskip
51.21504pt+\Big{[}\lambda_{{}_{u}}(U_{{}_{t}}^{\dagger})_{{}_{i2}}(W_{{}_{t}})_{{}_{2j}}\cos\alpha_{{}_{B}}-\lambda_{{}_{Q}}(U_{{}_{t}}^{\dagger})_{{}_{i2}}(W_{{}_{t}})_{{}_{2j}}\sin\alpha_{{}_{B}}\Big{]}h_{{}_{B}}^{0}\overline{t}_{{}_{i+3}}P_{{}_{R}}t_{{}_{j+3}}$
$\displaystyle\hskip
51.21504pt+\Big{[}\lambda_{{}_{u}}(W_{{}_{t}}^{\dagger})_{{}_{i2}}(U_{{}_{t}})_{{}_{2j}}\sin\alpha_{{}_{B}}+\lambda_{{}_{Q}}(W_{{}_{t}}^{\dagger})_{{}_{i1}}(U_{{}_{t}})_{{}_{1j}}\cos\alpha_{{}_{B}}\Big{]}H_{{}_{B}}^{0}\overline{t}_{{}_{i+3}}P_{{}_{L}}t_{{}_{j+3}}$
$\displaystyle\hskip
51.21504pt+\Big{[}\lambda_{{}_{u}}(U_{{}_{t}}^{\dagger})_{{}_{i2}}(W_{{}_{t}})_{{}_{2j}}\sin\alpha_{{}_{B}}+\lambda_{{}_{Q}}(U_{{}_{t}}^{\dagger})_{{}_{i2}}(W_{{}_{t}})_{{}_{2j}}\cos\alpha_{{}_{B}}\Big{]}H_{{}_{B}}^{0}\overline{t}_{{}_{i+3}}P_{{}_{R}}t_{{}_{j+3}}\Big{\\}}$
$\displaystyle\hskip
51.21504pt-{i\over\sqrt{2}}\sum\limits_{i,j=1}^{2}\Big{\\{}\Big{[}\lambda_{{}_{u}}(W_{{}_{t}}^{\dagger})_{{}_{i2}}(U_{{}_{t}})_{{}_{2j}}\cos\beta_{{}_{B}}-\lambda_{{}_{Q}}(W_{{}_{t}}^{\dagger})_{{}_{i1}}(U_{{}_{t}})_{{}_{1j}}\sin\beta_{{}_{B}}\Big{]}A_{{}_{B}}^{0}\overline{t}_{{}_{i+3}}P_{{}_{L}}t_{{}_{j+3}}$
$\displaystyle\hskip
51.21504pt-\Big{[}\lambda_{{}_{u}}(U_{{}_{t}}^{\dagger})_{{}_{i2}}(W_{{}_{t}})_{{}_{2j}}\cos\beta_{{}_{B}}-\lambda_{{}_{Q}}(U_{{}_{t}}^{\dagger})_{{}_{i2}}(W_{{}_{t}})_{{}_{2j}}\sin\beta_{{}_{B}}\Big{]}A_{{}_{B}}^{0}\overline{t}_{{}_{i+3}}P_{{}_{R}}t_{{}_{j+3}}$
$\displaystyle\hskip
51.21504pt+\Big{[}\lambda_{{}_{u}}(W_{{}_{t}}^{\dagger})_{{}_{i2}}(U_{{}_{t}})_{{}_{2j}}\sin\beta_{{}_{B}}+\lambda_{{}_{Q}}(W_{{}_{t}}^{\dagger})_{{}_{i1}}(U_{{}_{t}})_{{}_{1j}}\cos\beta_{{}_{B}}\Big{]}G_{{}_{B}}^{0}\overline{t}_{{}_{i+3}}P_{{}_{L}}t_{{}_{j+3}}$
$\displaystyle\hskip
51.21504pt-\Big{[}\lambda_{{}_{u}}(U_{{}_{t}}^{\dagger})_{{}_{i2}}(W_{{}_{t}})_{{}_{2j}}\sin\beta_{{}_{B}}+\lambda_{{}_{Q}}(U_{{}_{t}}^{\dagger})_{{}_{i2}}(W_{{}_{t}})_{{}_{2j}}\cos\beta_{{}_{B}}\Big{]}G_{{}_{B}}^{0}\overline{t}_{{}_{i+3}}P_{{}_{R}}t_{{}_{j+3}}\Big{\\}}
(102)
Similarly, the couplings between the neutral Higgs bosons and the charge $-1/3$
exotic quarks are written as
$\displaystyle{\cal
L}_{{}_{Hb^{\prime}b^{\prime}}}={1\over\sqrt{2}}\sum\limits_{i,j=1}^{2}\Big{\\{}\Big{[}Y_{{}_{d_{4}}}(W_{{}_{b}}^{\dagger})_{{}_{i2}}(U_{{}_{b}})_{{}_{1j}}\sin\alpha-
Y_{{}_{d_{5}}}(W_{{}_{b}}^{\dagger})_{{}_{i1}}(U_{{}_{b}})_{{}_{2j}}\cos\alpha\Big{]}h^{0}\overline{b}_{{}_{i+3}}P_{{}_{L}}b_{{}_{j+3}}$
$\displaystyle\hskip
51.21504pt+\Big{[}Y_{{}_{d_{4}}}(U_{{}_{b}}^{\dagger})_{{}_{i1}}(W_{{}_{b}})_{{}_{2j}}\sin\alpha-
Y_{{}_{d_{5}}}(U_{{}_{b}}^{\dagger})_{{}_{i2}}(W_{{}_{b}})_{{}_{1j}}\cos\alpha\Big{]}h^{0}\overline{b}_{{}_{i+3}}P_{{}_{R}}b_{{}_{j+3}}$
$\displaystyle\hskip
51.21504pt-\Big{[}Y_{{}_{d_{4}}}(W_{{}_{b}}^{\dagger})_{{}_{i2}}(U_{{}_{b}})_{{}_{1j}}\cos\alpha+Y_{{}_{d_{5}}}(W_{{}_{b}}^{\dagger})_{{}_{i1}}(U_{{}_{b}})_{{}_{2j}}\sin\alpha\Big{]}H^{0}\overline{b}_{{}_{i+3}}P_{{}_{L}}b_{{}_{j+3}}$
$\displaystyle\hskip
51.21504pt-\Big{[}Y_{{}_{d_{4}}}(U_{{}_{b}}^{\dagger})_{{}_{i1}}(W_{{}_{b}})_{{}_{2j}}\cos\alpha+Y_{{}_{d_{5}}}(U_{{}_{b}}^{\dagger})_{{}_{i2}}(W_{{}_{b}})_{{}_{1j}}\sin\alpha\Big{]}H^{0}\overline{b}_{{}_{i+3}}P_{{}_{R}}b_{{}_{j+3}}$
$\displaystyle\hskip
51.21504pt+{i\over\sqrt{2}}\sum\limits_{i,j=1}^{2}\Big{\\{}\Big{[}Y_{{}_{d_{4}}}(W_{{}_{b}}^{\dagger})_{{}_{i2}}(U_{{}_{b}})_{{}_{1j}}\sin\beta-
Y_{{}_{d_{5}}}(W_{{}_{b}}^{\dagger})_{{}_{i1}}(U_{{}_{b}})_{{}_{2j}}\cos\beta\Big{]}A^{0}\overline{b}_{{}_{i+3}}P_{{}_{L}}b_{{}_{j+3}}$
$\displaystyle\hskip
51.21504pt-\Big{[}Y_{{}_{d_{4}}}(U_{{}_{b}}^{\dagger})_{{}_{i1}}(W_{{}_{b}})_{{}_{2j}}\sin\beta-
Y_{{}_{d_{5}}}(U_{{}_{b}}^{\dagger})_{{}_{i2}}(W_{{}_{b}})_{{}_{1j}}\cos\beta\Big{]}A^{0}\overline{b}_{{}_{i+3}}P_{{}_{R}}b_{{}_{j+3}}$
$\displaystyle\hskip
51.21504pt-\Big{[}Y_{{}_{d_{4}}}(W_{{}_{b}}^{\dagger})_{{}_{i2}}(U_{{}_{b}})_{{}_{1j}}\cos\beta+Y_{{}_{d_{5}}}(W_{{}_{b}}^{\dagger})_{{}_{i1}}(U_{{}_{b}})_{{}_{2j}}\sin\beta\Big{]}G^{0}\overline{b}_{{}_{i+3}}P_{{}_{L}}b_{{}_{j+3}}$
$\displaystyle\hskip
51.21504pt+\Big{[}Y_{{}_{d_{4}}}(U_{{}_{b}}^{\dagger})_{{}_{i1}}(W_{{}_{b}})_{{}_{2j}}\cos\beta+Y_{{}_{d_{5}}}(U_{{}_{b}}^{\dagger})_{{}_{i2}}(W_{{}_{b}})_{{}_{1j}}\sin\beta\Big{]}G^{0}\overline{b}_{{}_{i+3}}P_{{}_{R}}b_{{}_{j+3}}$
$\displaystyle\hskip
51.21504pt-{1\over\sqrt{2}}\sum\limits_{i,j=1}^{2}\Big{\\{}\Big{[}\lambda_{{}_{d}}(W_{{}_{b}}^{\dagger})_{{}_{i2}}(U_{{}_{b}})_{{}_{2j}}\cos\alpha_{{}_{B}}+\lambda_{{}_{Q}}(W_{{}_{b}}^{\dagger})_{{}_{i1}}(U_{{}_{b}})_{{}_{1j}}\sin\alpha_{{}_{B}}\Big{]}h_{{}_{B}}^{0}\overline{b}_{{}_{i+3}}P_{{}_{L}}b_{{}_{j+3}}$
$\displaystyle\hskip
51.21504pt+\Big{[}\lambda_{{}_{d}}(U_{{}_{b}}^{\dagger})_{{}_{i2}}(W_{{}_{b}})_{{}_{2j}}\cos\alpha_{{}_{B}}+\lambda_{{}_{Q}}(U_{{}_{b}}^{\dagger})_{{}_{i2}}(W_{{}_{b}})_{{}_{2j}}\sin\alpha_{{}_{B}}\Big{]}h_{{}_{B}}^{0}\overline{b}_{{}_{i+3}}P_{{}_{R}}b_{{}_{j+3}}$
$\displaystyle\hskip
51.21504pt+\Big{[}\lambda_{{}_{d}}(W_{{}_{b}}^{\dagger})_{{}_{i2}}(U_{{}_{b}})_{{}_{2j}}\sin\alpha_{{}_{B}}+\lambda_{{}_{Q}}(W_{{}_{b}}^{\dagger})_{{}_{i1}}(U_{{}_{b}})_{{}_{1j}}\cos\alpha_{{}_{B}}\Big{]}H_{{}_{B}}^{0}\overline{b}_{{}_{i+3}}P_{{}_{L}}b_{{}_{j+3}}$
$\displaystyle\hskip
51.21504pt+\Big{[}\lambda_{{}_{d}}(U_{{}_{b}}^{\dagger})_{{}_{i2}}(W_{{}_{b}})_{{}_{2j}}\sin\alpha_{{}_{B}}+\lambda_{{}_{Q}}(U_{{}_{b}}^{\dagger})_{{}_{i2}}(W_{{}_{b}})_{{}_{2j}}\cos\alpha_{{}_{B}}\Big{]}H_{{}_{B}}^{0}\overline{b}_{{}_{i+3}}P_{{}_{R}}b_{{}_{j+3}}\Big{\\}}$
$\displaystyle\hskip
51.21504pt-{i\over\sqrt{2}}\sum\limits_{i,j=1}^{2}\Big{\\{}\Big{[}\lambda_{{}_{d}}(W_{{}_{b}}^{\dagger})_{{}_{i2}}(U_{{}_{b}})_{{}_{2j}}\cos\beta_{{}_{B}}+\lambda_{{}_{Q}}(W_{{}_{b}}^{\dagger})_{{}_{i1}}(U_{{}_{b}})_{{}_{1j}}\sin\beta_{{}_{B}}\Big{]}A_{{}_{B}}^{0}\overline{b}_{{}_{i+3}}P_{{}_{L}}b_{{}_{j+3}}$
$\displaystyle\hskip
51.21504pt-\Big{[}\lambda_{{}_{d}}(U_{{}_{b}}^{\dagger})_{{}_{i2}}(W_{{}_{b}})_{{}_{2j}}\cos\beta_{{}_{B}}+\lambda_{{}_{Q}}(U_{{}_{b}}^{\dagger})_{{}_{i2}}(W_{{}_{b}})_{{}_{2j}}\sin\beta_{{}_{B}}\Big{]}A_{{}_{B}}^{0}\overline{b}_{{}_{i+3}}P_{{}_{R}}b_{{}_{j+3}}$
$\displaystyle\hskip
51.21504pt+\Big{[}\lambda_{{}_{d}}(W_{{}_{b}}^{\dagger})_{{}_{i2}}(U_{{}_{b}})_{{}_{2j}}\sin\beta_{{}_{B}}+\lambda_{{}_{Q}}(W_{{}_{b}}^{\dagger})_{{}_{i1}}(U_{{}_{b}})_{{}_{1j}}\cos\beta_{{}_{B}}\Big{]}G_{{}_{B}}^{0}\overline{b}_{{}_{i+3}}P_{{}_{L}}b_{{}_{j+3}}$
$\displaystyle\hskip
51.21504pt-\Big{[}\lambda_{{}_{d}}(U_{{}_{b}}^{\dagger})_{{}_{i2}}(W_{{}_{b}})_{{}_{2j}}\sin\beta_{{}_{B}}+\lambda_{{}_{Q}}(U_{{}_{b}}^{\dagger})_{{}_{i2}}(W_{{}_{b}})_{{}_{2j}}\cos\beta_{{}_{B}}\Big{]}G_{{}_{B}}^{0}\overline{b}_{{}_{i+3}}P_{{}_{R}}b_{{}_{j+3}}\Big{\\}}$
(103)
## Appendix B Mass squared matrices for exotic squarks
For the charge $2/3$ exotic scalar quarks, the elements of the mass squared
matrix are written as
$\displaystyle{\cal
M}_{\tilde{t}^{\prime}}^{2}(\tilde{Q}_{{}_{4}}^{1*}\tilde{Q}_{{}_{4}}^{1})=m_{{}_{\tilde{Q}_{4}}}^{2}+{1\over
2}Y_{{}_{u_{4}}}^{2}\upsilon_{{}_{u}}^{2}+{1\over
2}Y_{{}_{d_{4}}}^{2}\upsilon_{{}_{d}}^{2}+{1\over
2}\lambda_{{}_{Q}}^{2}\upsilon_{{}_{B}}^{2}+\Big{(}{1\over 2}-{2\over
3}s_{{}_{\rm W}}^{2}\Big{)}m_{{}_{\rm Z}}^{2}\cos 2\beta$ $\displaystyle\hskip
76.82234pt+{B_{{}_{4}}\over 2}m_{{}_{Z_{B}}}^{2}\cos 2\beta_{{}_{B}}\;,$
$\displaystyle{\cal
M}_{\tilde{t}^{\prime}}^{2}(\tilde{U}_{{}_{4}}^{c}\tilde{U}_{{}_{4}}^{c*})=m_{{}_{\tilde{U}_{4}}}^{2}+{1\over
2}Y_{{}_{u_{4}}}^{2}\upsilon_{{}_{u}}^{2}+{1\over
2}\lambda_{{}_{u}}^{2}\overline{\upsilon}_{{}_{B}}^{2}-{2\over 3}s_{{}_{\rm
W}}^{2}m_{{}_{\rm Z}}^{2}\cos 2\beta-{B_{{}_{4}}\over 2}m_{{}_{Z_{B}}}^{2}\cos
2\beta_{{}_{B}}\;,$ $\displaystyle{\cal
M}_{\tilde{t}^{\prime}}^{2}(\tilde{Q}_{{}_{5}}^{2c}\tilde{Q}_{{}_{5}}^{2c*})=m_{{}_{\tilde{Q}_{5}}}^{2}+{1\over
2}Y_{{}_{u_{5}}}^{2}\upsilon_{{}_{d}}^{2}+{1\over
2}Y_{{}_{d_{5}}}^{2}\upsilon_{{}_{u}}^{2}+{1\over
2}\lambda_{{}_{Q}}^{2}\upsilon_{{}_{B}}^{2}+\Big{(}{1\over 2}-{1\over
3}s_{{}_{\rm W}}^{2}\Big{)}m_{{}_{\rm Z}}^{2}\cos 2\beta$ $\displaystyle\hskip
82.51282pt-{1+B_{{}_{4}}\over 2}m_{{}_{Z_{B}}}^{2}\cos 2\beta_{{}_{B}}\;,$
$\displaystyle{\cal
M}_{\tilde{t}^{\prime}}^{2}(\tilde{U}_{{}_{5}}^{*}\tilde{U}_{{}_{5}})=m_{{}_{\tilde{U}_{5}}}^{2}+{1\over
2}Y_{{}_{u_{5}}}^{2}\upsilon_{{}_{d}}^{2}+{1\over
2}\lambda_{{}_{u}}^{2}\overline{\upsilon}_{{}_{B}}^{2}+{2\over 3}s_{{}_{\rm
W}}^{2}m_{{}_{\rm Z}}^{2}\cos 2\beta+{1+B_{{}_{4}}\over
2}m_{{}_{Z_{B}}}^{2}\cos 2\beta_{{}_{B}}\;,$ $\displaystyle{\cal
M}_{\tilde{t}^{\prime}}^{2}(\tilde{U}_{{}_{4}}^{c}\tilde{Q}_{{}_{4}}^{1})=-{1\over\sqrt{2}}\upsilon_{{}_{u}}Y_{{}_{u_{4}}}A_{{}_{u_{4}}}+{1\over\sqrt{2}}Y_{{}_{u_{4}}}\mu\upsilon_{{}_{d}}\;,$
$\displaystyle{\cal
M}_{\tilde{t}^{\prime}}^{2}(\tilde{Q}_{{}_{5}}^{2c}\tilde{Q}_{{}_{4}}^{1})=-{1\over\sqrt{2}}\upsilon_{{}_{B}}\lambda_{{}_{Q}}A_{{}_{BQ}}+\sqrt{2}\lambda_{{}_{Q}}\mu_{{}_{B}}\overline{\upsilon}_{{}_{B}}\;,$
$\displaystyle{\cal
M}_{\tilde{t}^{\prime}}^{2}(\tilde{U}_{{}_{5}}^{*}\tilde{Q}_{{}_{4}}^{1})=-{1\over\sqrt{2}}Y_{{}_{u_{4}}}\lambda_{u}\upsilon_{{}_{u}}\overline{\upsilon}_{{}_{B}}+{1\over\sqrt{2}}Y_{{}_{u_{5}}}\lambda_{Q}\upsilon_{{}_{d}}\upsilon_{{}_{B}}\;,$
$\displaystyle{\cal
M}_{\tilde{t}^{\prime}}^{2}(\tilde{Q}_{{}_{5}}^{2c}\tilde{U}_{{}_{4}}^{c*})={1\over
2}\lambda_{Q}Y_{{}_{u_{4}}}\upsilon_{{}_{u}}\upsilon_{{}_{B}}-{1\over
2}\lambda_{u}Y_{{}_{u_{5}}}\upsilon_{{}_{d}}\overline{\upsilon}_{{}_{B}}\;,$
$\displaystyle{\cal
M}_{\tilde{t}^{\prime}}^{2}(\tilde{U}_{{}_{5}}^{*}\tilde{U}_{{}_{4}}^{c*})=-{1\over\sqrt{2}}\lambda_{{}_{u}}A_{{}_{BU}}\overline{\upsilon}_{{}_{B}}+{1\over\sqrt{2}}\lambda_{{}_{u}}\mu_{{}_{B}}\upsilon_{{}_{B}}\;,$
$\displaystyle{\cal
M}_{\tilde{t}^{\prime}}^{2}(\tilde{Q}_{{}_{5}}^{2c}\tilde{U}_{{}_{5}})=-{1\over\sqrt{2}}Y_{{}_{u_{5}}}A_{{}_{u_{5}}}\upsilon_{{}_{d}}+{1\over\sqrt{2}}Y_{{}_{u_{5}}}\mu\upsilon_{{}_{u}}\;.$
(104)
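The entries of Eq. (104) define a Hermitian $4\times 4$ mass squared matrix, whose eigenvalues give the masses $m_{{}_{\tilde{\cal U}_{i}}}$ entering Eq. (86) and whose eigenvectors give the mixing matrix used in Appendix C. The sketch below is only schematic: the field ordering and conjugation conventions are our reading of the diagonal entries of Eq. (104), and just two representative entries are written out, the rest filling in the same way.

```python
import numpy as np

def exotic_stop_mass_matrix(p):
    """Assemble and diagonalize the mass squared matrix of Eq. (104); p is a dict
    of model parameters (ordering/conjugation conventions are our assumption)."""
    M2 = np.zeros((4, 4), dtype=complex)
    c2b, c2bB = np.cos(2.0*p['beta']), np.cos(2.0*p['betaB'])
    # diagonal entry for Q4^1, first line of Eq. (104)
    M2[0, 0] = (p['mQ4']**2 + 0.5*p['Yu4']**2*p['vu']**2 + 0.5*p['Yd4']**2*p['vd']**2
                + 0.5*p['lamQ']**2*p['vB']**2
                + (0.5 - 2.0/3.0*p['sW2'])*p['mZ']**2*c2b
                + 0.5*p['B4']*p['mZB']**2*c2bB)
    # left-right mixing entry M^2(U4^c Q4^1)
    M2[1, 0] = p['Yu4']*(p['mu']*p['vd'] - p['Au4']*p['vu'])/np.sqrt(2.0)
    # ... fill the remaining entries of Eq. (104) here in the same pattern ...
    M2 = M2 + M2.conj().T - np.diag(M2.diagonal())   # hermitian completion
    m2, U = np.linalg.eigh(M2)                        # eigenvalues in ascending order
    return np.sqrt(np.abs(m2)), U                     # masses and mixing matrix
```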
For the charge $-1/3$ exotic scalar quarks, the elements of the mass squared
matrix are given as
$\displaystyle{\cal
M}_{\tilde{t}^{\prime}}^{2}(\tilde{Q}_{{}_{4}}^{2*}\tilde{Q}_{{}_{4}}^{2})=m_{{}_{\tilde{Q}_{4}}}^{2}+{1\over
2}Y_{{}_{u_{4}}}^{2}\upsilon_{{}_{u}}^{2}+{1\over
2}Y_{{}_{d_{4}}}^{2}\upsilon_{{}_{d}}^{2}+{1\over
2}\lambda_{{}_{Q}}^{2}\upsilon_{{}_{B}}^{2}-\Big{(}{1\over 2}-{2\over
3}s_{{}_{\rm W}}^{2}\Big{)}m_{{}_{\rm Z}}^{2}\cos 2\beta$ $\displaystyle\hskip
76.82234pt+{B_{{}_{4}}\over 2}m_{{}_{Z_{B}}}^{2}\cos 2\beta_{{}_{B}}\;,$
$\displaystyle{\cal
M}_{\tilde{t}^{\prime}}^{2}(\tilde{D}_{{}_{4}}^{c}\tilde{D}_{{}_{4}}^{c*})=m_{{}_{\tilde{D}_{4}}}^{2}+{1\over
2}Y_{{}_{d_{4}}}^{2}\upsilon_{{}_{d}}^{2}+{1\over
2}\lambda_{{}_{d}}^{2}\overline{\upsilon}_{{}_{B}}^{2}-{1\over 3}s_{{}_{\rm
W}}^{2}m_{{}_{\rm Z}}^{2}\cos 2\beta-{B_{{}_{4}}\over 2}m_{{}_{Z_{B}}}^{2}\cos
2\beta_{{}_{B}}\;,$ $\displaystyle{\cal
M}_{\tilde{t}^{\prime}}^{2}(\tilde{Q}_{{}_{5}}^{1c}\tilde{Q}_{{}_{5}}^{1c*})=m_{{}_{\tilde{Q}_{5}}}^{2}+{1\over
2}Y_{{}_{u_{5}}}^{2}\upsilon_{{}_{d}}^{2}+{1\over
2}Y_{{}_{d_{5}}}^{2}\upsilon_{{}_{u}}^{2}+{1\over
2}\lambda_{{}_{Q}}^{2}\upsilon_{{}_{B}}^{2}-\Big{(}{1\over 2}+{1\over
3}s_{{}_{\rm W}}^{2}\Big{)}m_{{}_{\rm Z}}^{2}\cos 2\beta$ $\displaystyle\hskip
82.51282pt-{1+B_{{}_{4}}\over 2}m_{{}_{Z_{B}}}^{2}\cos 2\beta_{{}_{B}}\;,$
$\displaystyle{\cal
M}_{\tilde{t}^{\prime}}^{2}(\tilde{D}_{{}_{5}}^{*}\tilde{D}_{{}_{5}})=m_{{}_{\tilde{D}_{5}}}^{2}+{1\over
2}Y_{{}_{d_{5}}}^{2}\upsilon_{{}_{u}}^{2}+{1\over
2}\lambda_{{}_{d}}^{2}\overline{\upsilon}_{{}_{B}}^{2}+{1\over 3}s_{{}_{\rm
W}}^{2}m_{{}_{\rm Z}}^{2}\cos 2\beta+{1+B_{{}_{4}}\over
2}m_{{}_{Z_{B}}}^{2}\cos 2\beta_{{}_{B}}\;,$ $\displaystyle{\cal
M}_{\tilde{t}^{\prime}}^{2}(\tilde{D}_{{}_{4}}^{c}\tilde{Q}_{{}_{4}}^{2})=-{1\over\sqrt{2}}Y_{{}_{d_{4}}}\upsilon_{{}_{d}}A_{{}_{d_{4}}}+{1\over\sqrt{2}}Y_{{}_{d_{4}}}\mu\upsilon_{{}_{d}}\;,$
$\displaystyle{\cal
M}_{\tilde{t}^{\prime}}^{2}(\tilde{Q}_{{}_{5}}^{1c}\tilde{Q}_{{}_{4}}^{2})=-{1\over\sqrt{2}}\lambda_{{}_{Q}}\upsilon_{{}_{B}}A_{{}_{BQ}}+\sqrt{2}\lambda_{{}_{Q}}\mu_{{}_{B}}\overline{\upsilon}_{{}_{B}}\;,$
$\displaystyle{\cal
M}_{\tilde{t}^{\prime}}^{2}(\tilde{D}_{{}_{5}}^{*}\tilde{Q}_{{}_{4}}^{2})=-{1\over\sqrt{2}}Y_{{}_{d_{4}}}\lambda_{d}\upsilon_{{}_{d}}\overline{\upsilon}_{{}_{B}}+{1\over\sqrt{2}}Y_{{}_{d_{5}}}\lambda_{Q}\upsilon_{{}_{u}}\upsilon_{{}_{B}}\;,$
$\displaystyle{\cal
M}_{\tilde{t}^{\prime}}^{2}(\tilde{Q}_{{}_{5}}^{1c}\tilde{D}_{{}_{4}}^{c*})={1\over
2}\lambda_{Q}Y_{{}_{d_{4}}}\upsilon_{{}_{d}}\upsilon_{{}_{B}}+{1\over
2}\lambda_{d}Y_{{}_{d_{5}}}\upsilon_{{}_{u}}\overline{\upsilon}_{{}_{B}}\;,$
$\displaystyle{\cal
M}_{\tilde{t}^{\prime}}^{2}(\tilde{D}_{{}_{5}}^{*}\tilde{D}_{{}_{4}}^{c*})=-{1\over\sqrt{2}}\lambda_{{}_{d}}A_{{}_{BD}}\overline{\upsilon}_{{}_{B}}+{1\over\sqrt{2}}\lambda_{{}_{d}}\mu_{{}_{B}}\upsilon_{{}_{B}}\;,$
$\displaystyle{\cal
M}_{\tilde{t}^{\prime}}^{2}(\tilde{Q}_{{}_{5}}^{1c}\tilde{D}_{{}_{5}})=-{1\over\sqrt{2}}Y_{{}_{d_{5}}}A_{{}_{d_{5}}}\upsilon_{{}_{u}}+{1\over\sqrt{2}}Y_{{}_{d_{5}}}\mu\upsilon_{{}_{d}}\;.$
(105)
## Appendix C The couplings between neutral Higgs and exotic squarks
In the mass basis, the couplings between the neutral Higgs and exotic squarks
are
$\displaystyle{\cal L}_{{}_{H\tilde{\cal U}_{i}^{*}\tilde{\cal
U}_{j}}}=\sum\limits_{i,j=1}^{4}\Big{\\{}\Big{[}\xi_{{}_{uij}}^{S}\cos\alpha-\xi_{{}_{dij}}^{S}\sin\alpha\Big{]}h^{0}\tilde{\cal
U}_{i}^{*}\tilde{\cal
U}_{j}+\Big{[}\eta_{{}_{uij}}^{S}\cos\alpha-\eta_{{}_{dij}}^{S}\sin\alpha\Big{]}h^{0}\tilde{\cal
D}_{i}^{*}\tilde{\cal D}_{j}$ $\displaystyle\hskip
48.36958pt+\Big{[}\xi_{{}_{uij}}^{S}\sin\alpha+\xi_{{}_{dij}}^{S}\cos\alpha\Big{]}H^{0}\tilde{\cal
U}_{i}^{*}\tilde{\cal
U}_{j}+\Big{[}\eta_{{}_{uij}}^{S}\sin\alpha+\eta_{{}_{dij}}^{S}\cos\alpha\Big{]}H^{0}\tilde{\cal
D}_{i}^{*}\tilde{\cal D}_{j}$ $\displaystyle\hskip
48.36958pt+i\Big{[}\xi_{{}_{uij}}^{P}\cos\beta-\xi_{{}_{dij}}^{P}\sin\beta\Big{]}A^{0}\tilde{\cal
U}_{i}^{*}\tilde{\cal
U}_{j}+i\Big{[}\eta_{{}_{uij}}^{P}\cos\beta-\eta_{{}_{dij}}^{P}\sin\beta\Big{]}A^{0}\tilde{\cal
D}_{i}^{*}\tilde{\cal D}_{j}$ $\displaystyle\hskip
48.36958pt+i\Big{[}\xi_{{}_{uij}}^{P}\sin\beta+\xi_{{}_{dij}}^{P}\cos\beta\Big{]}G^{0}\tilde{\cal
U}_{i}^{*}\tilde{\cal
U}_{j}+i\Big{[}\eta_{{}_{uij}}^{P}\sin\beta+\eta_{{}_{dij}}^{P}\cos\beta\Big{]}G^{0}\tilde{\cal
D}_{i}^{*}\tilde{\cal D}_{j}$ $\displaystyle\hskip
48.36958pt+\Big{[}\varsigma_{{}_{uij}}^{S}\cos\alpha_{{}_{B}}-\varsigma_{{}_{dij}}^{S}\sin\alpha_{{}_{B}}\Big{]}h_{{}_{B}}^{0}\tilde{\cal
U}_{i}^{*}\tilde{\cal
U}_{j}+\Big{[}\zeta_{{}_{uij}}^{S}\cos\alpha_{{}_{B}}-\zeta_{{}_{dij}}^{S}\sin\alpha_{{}_{B}}\Big{]}h_{{}_{B}}^{0}\tilde{\cal
D}_{i}^{*}\tilde{\cal D}_{j}$ $\displaystyle\hskip
48.36958pt+\Big{[}\varsigma_{{}_{uij}}^{S}\sin\alpha_{{}_{B}}+\varsigma_{{}_{dij}}^{S}\cos\alpha_{{}_{B}}\Big{]}H_{{}_{B}}^{0}\tilde{\cal
U}_{i}^{*}\tilde{\cal
U}_{j}+\Big{[}\zeta_{{}_{uij}}^{S}\sin\alpha_{{}_{B}}+\zeta_{{}_{dij}}^{S}\cos\alpha_{{}_{B}}\Big{]}H_{{}_{B}}^{0}\tilde{\cal
D}_{i}^{*}\tilde{\cal D}_{j}$ $\displaystyle\hskip
48.36958pt+i\Big{[}\varsigma_{{}_{uij}}^{P}\cos\beta_{{}_{B}}-\varsigma_{{}_{dij}}^{P}\sin\beta_{{}_{B}}\Big{]}A_{{}_{B}}^{0}\tilde{\cal
U}_{i}^{*}\tilde{\cal
U}_{j}+i\Big{[}\zeta_{{}_{uij}}^{P}\cos\beta_{{}_{B}}-\zeta_{{}_{dij}}^{P}\sin\beta_{{}_{B}}\Big{]}A_{{}_{B}}^{0}\tilde{\cal
D}_{i}^{*}\tilde{\cal D}_{j}$ $\displaystyle\hskip
48.36958pt+i\Big{[}\varsigma_{{}_{uij}}^{P}\sin\beta_{{}_{B}}+\varsigma_{{}_{dij}}^{P}\cos\beta_{{}_{B}}\Big{]}G_{{}_{B}}^{0}\tilde{\cal
U}_{i}^{*}\tilde{\cal
U}_{j}+i\Big{[}\zeta_{{}_{uij}}^{P}\sin\beta_{{}_{B}}+\zeta_{{}_{dij}}^{P}\cos\beta_{{}_{B}}\Big{]}G_{{}_{B}}^{0}\tilde{\cal
D}_{i}^{*}\tilde{\cal D}_{j}\Big{\\}}\;,$ (106)
with
$\displaystyle\xi_{{}_{uij}}^{S}={1\over\sqrt{2}}Y_{{}_{u_{5}}}\mu\Big{(}U_{{}_{i3}}^{\dagger}U_{{}_{4j}}+U_{{}_{i4}}^{\dagger}U_{{}_{3j}}\Big{)}+{1\over
2}\lambda_{{}_{Q}}Y_{{}_{u_{4}}}\upsilon_{{}_{B}}\Big{(}U_{{}_{i3}}^{\dagger}U_{{}_{2j}}+U_{{}_{i2}}^{\dagger}U_{{}_{3j}}\Big{)}$
$\displaystyle\hskip 34.14322pt-{1\over
2}\lambda_{{}_{u}}Y_{{}_{u_{4}}}\overline{\upsilon}_{{}_{B}}\Big{(}U_{{}_{i1}}^{\dagger}U_{{}_{4j}}+U_{{}_{i4}}^{\dagger}U_{{}_{1j}}\Big{)}+{e^{2}\over
4s_{{}_{\rm
W}}^{2}}\upsilon_{{}_{u}}\Big{(}U_{{}_{i3}}^{\dagger}U_{{}_{3j}}-U_{{}_{i1}}^{\dagger}U_{{}_{1j}}\Big{)}$
$\displaystyle\hskip 34.14322pt+{e^{2}\over 12c_{{}_{\rm
W}}^{2}}\upsilon_{{}_{u}}\Big{(}U_{{}_{i1}}^{\dagger}U_{{}_{1j}}-U_{{}_{i3}}^{\dagger}U_{{}_{3j}}-4U_{{}_{i2}}^{\dagger}U_{{}_{2j}}+4U_{{}_{i4}}^{\dagger}U_{{}_{4j}}\Big{)}$
$\displaystyle\hskip
34.14322pt-{1\over\sqrt{2}}A_{{}_{u_{4}}}Y_{{}_{u_{4}}}\Big{(}U_{{}_{i2}}^{\dagger}U_{{}_{1j}}+U_{{}_{i1}}^{\dagger}U_{{}_{2j}}\Big{)}\;,$
$\displaystyle\xi_{{}_{dij}}^{S}={1\over\sqrt{2}}Y_{{}_{u_{4}}}\mu\Big{(}U_{{}_{i2}}^{\dagger}U_{{}_{1j}}+U_{{}_{i1}}^{\dagger}U_{{}_{2j}}\Big{)}+{1\over
2}\lambda_{{}_{Q}}Y_{{}_{u_{5}}}\upsilon_{{}_{B}}\Big{(}U_{{}_{i5}}^{\dagger}U_{{}_{1j}}+U_{{}_{i1}}^{\dagger}U_{{}_{5j}}\Big{)}$
$\displaystyle\hskip 34.14322pt-{1\over
2}\lambda_{{}_{u}}Y_{{}_{u_{5}}}\overline{\upsilon}_{{}_{B}}\Big{(}U_{{}_{i2}}^{\dagger}U_{{}_{3j}}+U_{{}_{i3}}^{\dagger}U_{{}_{2j}}\Big{)}-{e^{2}\over
4s_{{}_{\rm
W}}^{2}}\upsilon_{{}_{d}}\Big{(}U_{{}_{i3}}^{\dagger}U_{{}_{3j}}+U_{{}_{i1}}^{\dagger}U_{{}_{1j}}\Big{)}$
$\displaystyle\hskip 34.14322pt-{e^{2}\over 12c_{{}_{\rm
W}}^{2}}\upsilon_{{}_{d}}\Big{(}U_{{}_{i1}}^{\dagger}U_{{}_{1j}}-U_{{}_{i3}}^{\dagger}U_{{}_{3j}}-4U_{{}_{i2}}^{\dagger}U_{{}_{2j}}+4U_{{}_{i4}}^{\dagger}U_{{}_{4j}}\Big{)}$
$\displaystyle\hskip
34.14322pt-{1\over\sqrt{2}}A_{{}_{u_{5}}}Y_{{}_{u_{5}}}\Big{(}U_{{}_{i3}}^{\dagger}U_{{}_{4j}}+U_{{}_{i4}}^{\dagger}U_{{}_{3j}}\Big{)}\;,$
$\displaystyle\eta_{{}_{uij}}^{S}={1\over\sqrt{2}}Y_{{}_{d_{4}}}\mu\Big{(}D_{{}_{i2}}^{\dagger}D_{{}_{1j}}+D_{{}_{i1}}^{\dagger}D_{{}_{2j}}\Big{)}+{1\over
2}\lambda_{{}_{Q}}Y_{{}_{d_{5}}}\upsilon_{{}_{B}}\Big{(}D_{{}_{i4}}^{\dagger}D_{{}_{1j}}+D_{{}_{i1}}^{\dagger}D_{{}_{4j}}\Big{)}$
$\displaystyle\hskip 34.14322pt-{1\over
2}\lambda_{{}_{d}}Y_{{}_{d_{5}}}\overline{\upsilon}_{{}_{B}}\Big{(}D_{{}_{i2}}^{\dagger}D_{{}_{3j}}+D_{{}_{i3}}^{\dagger}D_{{}_{2j}}\Big{)}+{e^{2}\over
4s_{{}_{\rm
W}}^{2}}\upsilon_{{}_{u}}\Big{(}D_{{}_{i1}}^{\dagger}D_{{}_{1j}}-D_{{}_{i3}}^{\dagger}D_{{}_{3j}}\Big{)}$
$\displaystyle\hskip 34.14322pt+{e^{2}\over 12c_{{}_{\rm
W}}^{2}}\upsilon_{{}_{u}}\Big{(}D_{{}_{i1}}^{\dagger}D_{{}_{1j}}-D_{{}_{i3}}^{\dagger}D_{{}_{3j}}+2D_{{}_{i2}}^{\dagger}D_{{}_{2j}}-2D_{{}_{i4}}^{\dagger}D_{{}_{4j}}\Big{)}$
$\displaystyle\hskip
34.14322pt-{1\over\sqrt{2}}A_{{}_{d_{5}}}Y_{{}_{d_{5}}}\Big{(}D_{{}_{i3}}^{\dagger}D_{{}_{4j}}+D_{{}_{i4}}^{\dagger}D_{{}_{3j}}\Big{)}\;,$
$\displaystyle\eta_{{}_{dij}}^{S}={1\over\sqrt{2}}Y_{{}_{d_{5}}}\mu\Big{(}D_{{}_{i3}}^{\dagger}D_{{}_{4j}}+D_{{}_{i4}}^{\dagger}D_{{}_{3j}}\Big{)}+{1\over
2}\lambda_{{}_{Q}}Y_{{}_{d_{4}}}\upsilon_{{}_{B}}\Big{(}D_{{}_{i3}}^{\dagger}D_{{}_{2j}}+D_{{}_{i2}}^{\dagger}D_{{}_{3j}}\Big{)}$
$\displaystyle\hskip 34.14322pt-{1\over
2}\lambda_{{}_{d}}Y_{{}_{d_{4}}}\overline{\upsilon}_{{}_{B}}\Big{(}D_{{}_{i1}}^{\dagger}D_{{}_{4j}}+D_{{}_{i4}}^{\dagger}D_{{}_{1j}}\Big{)}-{e^{2}\over
4s_{{}_{\rm
W}}^{2}}\upsilon_{{}_{d}}\Big{(}D_{{}_{i1}}^{\dagger}D_{{}_{1j}}-D_{{}_{i3}}^{\dagger}D_{{}_{3j}}\Big{)}$
$\displaystyle\hskip 34.14322pt-{e^{2}\over 12c_{{}_{\rm
W}}^{2}}\upsilon_{{}_{u}}\Big{(}D_{{}_{i1}}^{\dagger}D_{{}_{1j}}-D_{{}_{i3}}^{\dagger}D_{{}_{3j}}+2D_{{}_{i2}}^{\dagger}D_{{}_{2j}}-2D_{{}_{i4}}^{\dagger}D_{{}_{4j}}\Big{)}$
$\displaystyle\hskip
34.14322pt-{1\over\sqrt{2}}A_{{}_{d_{4}}}Y_{{}_{d_{4}}}\Big{(}D_{{}_{i2}}^{\dagger}D_{{}_{1j}}+D_{{}_{i1}}^{\dagger}D_{{}_{2j}}\Big{)}\;,$
$\displaystyle\xi_{{}_{uij}}^{P}={1\over\sqrt{2}}Y_{{}_{u_{5}}}\mu\Big{(}U_{{}_{i3}}^{\dagger}U_{{}_{4j}}-U_{{}_{i4}}^{\dagger}U_{{}_{3j}}\Big{)}-{1\over
2}\lambda_{{}_{Q}}Y_{{}_{u_{4}}}\upsilon_{{}_{B}}\Big{(}U_{{}_{i3}}^{\dagger}U_{{}_{2j}}-U_{{}_{i2}}^{\dagger}U_{{}_{3j}}\Big{)}$
$\displaystyle\hskip 34.14322pt+{1\over
2}\lambda_{{}_{u}}Y_{{}_{u_{4}}}\overline{\upsilon}_{{}_{B}}\Big{(}U_{{}_{i1}}^{\dagger}U_{{}_{4j}}-U_{{}_{i4}}^{\dagger}U_{{}_{1j}}\Big{)}-{1\over\sqrt{2}}A_{{}_{u_{4}}}Y_{{}_{u_{4}}}\Big{(}U_{{}_{i2}}^{\dagger}U_{{}_{1j}}-U_{{}_{i1}}^{\dagger}U_{{}_{2j}}\Big{)}\;,$
$\displaystyle\xi_{{}_{dij}}^{P}={1\over\sqrt{2}}Y_{{}_{u_{4}}}\mu\Big{(}U_{{}_{i2}}^{\dagger}U_{{}_{1j}}-U_{{}_{i1}}^{\dagger}U_{{}_{2j}}\Big{)}-{1\over
2}\lambda_{{}_{Q}}Y_{{}_{u_{5}}}\upsilon_{{}_{B}}\Big{(}U_{{}_{i5}}^{\dagger}U_{{}_{1j}}-U_{{}_{i1}}^{\dagger}U_{{}_{5j}}\Big{)}$
$\displaystyle\hskip 34.14322pt+{1\over
2}\lambda_{{}_{u}}Y_{{}_{u_{5}}}\overline{\upsilon}_{{}_{B}}\Big{(}U_{{}_{i2}}^{\dagger}U_{{}_{3j}}-U_{{}_{i3}}^{\dagger}U_{{}_{2j}}\Big{)}-{1\over\sqrt{2}}A_{{}_{u_{5}}}Y_{{}_{u_{5}}}\Big{(}U_{{}_{i3}}^{\dagger}U_{{}_{4j}}-U_{{}_{i4}}^{\dagger}U_{{}_{3j}}\Big{)}\;,$
$\displaystyle\eta_{{}_{uij}}^{P}={1\over\sqrt{2}}Y_{{}_{d_{4}}}\mu\Big{(}D_{{}_{i2}}^{\dagger}D_{{}_{1j}}-D_{{}_{i1}}^{\dagger}D_{{}_{2j}}\Big{)}-{1\over
2}\lambda_{{}_{Q}}Y_{{}_{d_{5}}}\upsilon_{{}_{B}}\Big{(}D_{{}_{i4}}^{\dagger}D_{{}_{1j}}-D_{{}_{i1}}^{\dagger}D_{{}_{4j}}\Big{)}$
$\displaystyle\hskip 34.14322pt+{1\over
2}\lambda_{{}_{d}}Y_{{}_{d_{5}}}\overline{\upsilon}_{{}_{B}}\Big{(}D_{{}_{i2}}^{\dagger}D_{{}_{3j}}-D_{{}_{i3}}^{\dagger}D_{{}_{2j}}\Big{)}-{1\over\sqrt{2}}A_{{}_{d_{5}}}Y_{{}_{d_{5}}}\Big{(}D_{{}_{i3}}^{\dagger}D_{{}_{4j}}-D_{{}_{i4}}^{\dagger}D_{{}_{3j}}\Big{)}\;,$
$\displaystyle\eta_{{}_{dij}}^{P}={1\over\sqrt{2}}Y_{{}_{d_{5}}}\mu\Big{(}D_{{}_{i3}}^{\dagger}D_{{}_{4j}}-D_{{}_{i4}}^{\dagger}D_{{}_{3j}}\Big{)}-{1\over
2}\lambda_{{}_{Q}}Y_{{}_{d_{4}}}\upsilon_{{}_{B}}\Big{(}D_{{}_{i3}}^{\dagger}D_{{}_{2j}}-D_{{}_{i2}}^{\dagger}D_{{}_{3j}}\Big{)}$
$\displaystyle\hskip 34.14322pt+{1\over
2}\lambda_{{}_{d}}Y_{{}_{d_{4}}}\overline{\upsilon}_{{}_{B}}\Big{(}D_{{}_{i1}}^{\dagger}D_{{}_{4j}}-D_{{}_{i4}}^{\dagger}D_{{}_{1j}}\Big{)}-{1\over\sqrt{2}}A_{{}_{d_{4}}}Y_{{}_{d_{4}}}\Big{(}D_{{}_{i2}}^{\dagger}D_{{}_{1j}}-D_{{}_{i1}}^{\dagger}D_{{}_{2j}}\Big{)}\;,$
$\displaystyle\varsigma_{{}_{uij}}^{S}={1\over\sqrt{2}}\lambda_{{}_{u}}\mu_{{}_{B}}\Big{(}U_{{}_{i2}}^{\dagger}U_{{}_{4j}}+U_{{}_{i4}}^{\dagger}U_{{}_{2j}}\Big{)}+{1\over
2}\lambda_{{}_{Q}}Y_{{}_{u_{4}}}\upsilon_{{}_{u}}\Big{(}U_{{}_{i3}}^{\dagger}U_{{}_{2j}}+U_{{}_{i2}}^{\dagger}U_{{}_{3j}}\Big{)}$
$\displaystyle\hskip 34.14322pt-{1\over
2}\lambda_{{}_{Q}}Y_{{}_{u_{5}}}\upsilon_{{}_{d}}\Big{(}U_{{}_{i4}}^{\dagger}U_{{}_{1j}}+U_{{}_{i1}}^{\dagger}U_{{}_{4j}}\Big{)}+g_{{}_{B}}^{2}\upsilon_{{}_{B}}\Big{(}B_{{}_{4}}U_{{}_{i1}}^{\dagger}U_{{}_{1j}}-(1+B_{{}_{4}})U_{{}_{i3}}^{\dagger}U_{{}_{3j}}$
$\displaystyle\hskip 34.14322pt-
B_{{}_{4}}U_{{}_{i2}}^{\dagger}U_{{}_{2j}}+(1+B_{{}_{4}})U_{{}_{i4}}^{\dagger}U_{{}_{4j}}\Big{)}-{1\over\sqrt{2}}A_{{}_{BQ}}\lambda_{{}_{Q}}\Big{(}U_{{}_{i3}}^{\dagger}U_{{}_{1j}}+U_{{}_{i1}}^{\dagger}U_{{}_{3j}}\Big{)}\;,$
$\displaystyle\varsigma_{{}_{dij}}^{S}={1\over\sqrt{2}}\lambda_{{}_{Q}}\mu_{{}_{B}}\Big{(}U_{{}_{i3}}^{\dagger}U_{{}_{1j}}+U_{{}_{i1}}^{\dagger}U_{{}_{3j}}\Big{)}-{1\over
2}\lambda_{{}_{u}}Y_{{}_{u_{4}}}\upsilon_{{}_{u}}\Big{(}U_{{}_{i1}}^{\dagger}U_{{}_{4j}}+U_{{}_{i4}}^{\dagger}U_{{}_{1j}}\Big{)}$
$\displaystyle\hskip 34.14322pt-{1\over
2}\lambda_{{}_{u}}Y_{{}_{u_{5}}}\upsilon_{{}_{d}}\Big{(}U_{{}_{i2}}^{\dagger}U_{{}_{3j}}+U_{{}_{i3}}^{\dagger}U_{{}_{2j}}\Big{)}-g_{{}_{B}}^{2}\overline{\upsilon}_{{}_{B}}\Big{(}B_{{}_{4}}U_{{}_{i1}}^{\dagger}U_{{}_{1j}}-(1+B_{{}_{4}})U_{{}_{i3}}^{\dagger}U_{{}_{3j}}$
$\displaystyle\hskip 34.14322pt-
B_{{}_{4}}U_{{}_{i2}}^{\dagger}U_{{}_{2j}}+(1+B_{{}_{4}})U_{{}_{i4}}^{\dagger}U_{{}_{4j}}\Big{)}+{1\over\sqrt{2}}A_{{}_{BU}}\lambda_{{}_{u}}\Big{(}U_{{}_{i2}}^{\dagger}U_{{}_{4j}}+U_{{}_{i4}}^{\dagger}U_{{}_{2j}}\Big{)}\;,$
$\displaystyle\zeta_{{}_{uij}}^{S}={1\over\sqrt{2}}\lambda_{{}_{d}}\mu_{{}_{B}}\Big{(}D_{{}_{i2}}^{\dagger}D_{{}_{4j}}+D_{{}_{i4}}^{\dagger}D_{{}_{2j}}\Big{)}+{1\over
2}\lambda_{{}_{Q}}Y_{{}_{d_{4}}}\upsilon_{{}_{d}}\Big{(}D_{{}_{i3}}^{\dagger}D_{{}_{2j}}+D_{{}_{i2}}^{\dagger}D_{{}_{3j}}\Big{)}$
$\displaystyle\hskip 34.14322pt-{1\over
2}\lambda_{{}_{Q}}Y_{{}_{d_{5}}}\upsilon_{{}_{u}}\Big{(}D_{{}_{i4}}^{\dagger}D_{{}_{1j}}+D_{{}_{i1}}^{\dagger}D_{{}_{4j}}\Big{)}+g_{{}_{B}}^{2}\upsilon_{{}_{B}}\Big{(}B_{{}_{4}}D_{{}_{i1}}^{\dagger}D_{{}_{1j}}-(1+B_{{}_{4}})D_{{}_{i3}}^{\dagger}D_{{}_{3j}}$
$\displaystyle\hskip 34.14322pt-
B_{{}_{4}}D_{{}_{i2}}^{\dagger}D_{{}_{2j}}+(1+B_{{}_{4}})D_{{}_{i4}}^{\dagger}D_{{}_{4j}}\Big{)}-{1\over\sqrt{2}}A_{{}_{BQ}}\lambda_{{}_{Q}}\Big{(}D_{{}_{i3}}^{\dagger}D_{{}_{1j}}+D_{{}_{i1}}^{\dagger}D_{{}_{3j}}\Big{)}\;,$
$\displaystyle\zeta_{{}_{dij}}^{S}=-{1\over\sqrt{2}}\lambda_{{}_{Q}}\mu_{{}_{B}}\Big{(}D_{{}_{i3}}^{\dagger}D_{{}_{1j}}+D_{{}_{i1}}^{\dagger}D_{{}_{3j}}\Big{)}-{1\over
2}\lambda_{{}_{d}}Y_{{}_{d_{4}}}\upsilon_{{}_{d}}\Big{(}D_{{}_{i1}}^{\dagger}D_{{}_{4j}}+D_{{}_{i4}}^{\dagger}D_{{}_{1j}}\Big{)}$
$\displaystyle\hskip 34.14322pt-{1\over
2}\lambda_{{}_{d}}Y_{{}_{d_{5}}}\upsilon_{{}_{u}}\Big{(}D_{{}_{i2}}^{\dagger}D_{{}_{3j}}+D_{{}_{i3}}^{\dagger}D_{{}_{2j}}\Big{)}-g_{{}_{B}}^{2}\overline{\upsilon}_{{}_{B}}\Big{(}B_{{}_{4}}D_{{}_{i1}}^{\dagger}D_{{}_{1j}}-(1+B_{{}_{4}})D_{{}_{i3}}^{\dagger}D_{{}_{3j}}$
$\displaystyle\hskip 34.14322pt-
B_{{}_{4}}D_{{}_{i2}}^{\dagger}D_{{}_{2j}}+(1+B_{{}_{4}})D_{{}_{i4}}^{\dagger}D_{{}_{4j}}\Big{)}+{1\over\sqrt{2}}A_{{}_{BD}}\lambda_{{}_{d}}\Big{(}D_{{}_{i2}}^{\dagger}D_{{}_{4j}}+D_{{}_{i4}}^{\dagger}D_{{}_{2j}}\Big{)}\;,$
$\displaystyle\varsigma_{{}_{uij}}^{P}={1\over\sqrt{2}}\lambda_{{}_{u}}\mu_{{}_{B}}\Big{(}U_{{}_{i2}}^{\dagger}U_{{}_{4j}}-U_{{}_{i4}}^{\dagger}U_{{}_{2j}}\Big{)}+{1\over
2}\lambda_{{}_{Q}}Y_{{}_{u_{4}}}\upsilon_{{}_{u}}\Big{(}U_{{}_{i3}}^{\dagger}U_{{}_{2j}}-U_{{}_{i2}}^{\dagger}U_{{}_{3j}}\Big{)}$
$\displaystyle\hskip 34.14322pt-{1\over
2}\lambda_{{}_{Q}}Y_{{}_{u_{5}}}\upsilon_{{}_{d}}\Big{(}U_{{}_{i4}}^{\dagger}U_{{}_{1j}}-U_{{}_{i1}}^{\dagger}U_{{}_{4j}}\Big{)}-{1\over\sqrt{2}}A_{{}_{BQ}}\lambda_{{}_{Q}}\Big{(}U_{{}_{i3}}^{\dagger}U_{{}_{1j}}-U_{{}_{i1}}^{\dagger}U_{{}_{3j}}\Big{)}\;,$
$\displaystyle\varsigma_{{}_{dij}}^{P}={1\over\sqrt{2}}\lambda_{{}_{Q}}\mu_{{}_{B}}\Big{(}U_{{}_{i3}}^{\dagger}U_{{}_{1j}}-U_{{}_{i1}}^{\dagger}U_{{}_{3j}}\Big{)}-{1\over
2}\lambda_{{}_{u}}Y_{{}_{u_{4}}}\upsilon_{{}_{u}}\Big{(}U_{{}_{i1}}^{\dagger}U_{{}_{4j}}-U_{{}_{i4}}^{\dagger}U_{{}_{1j}}\Big{)}$
$\displaystyle\hskip 34.14322pt-{1\over
2}\lambda_{{}_{u}}Y_{{}_{u_{5}}}\upsilon_{{}_{d}}\Big{(}U_{{}_{i2}}^{\dagger}U_{{}_{3j}}-U_{{}_{i3}}^{\dagger}U_{{}_{2j}}\Big{)}+{1\over\sqrt{2}}A_{{}_{BU}}\lambda_{{}_{u}}\Big{(}U_{{}_{i2}}^{\dagger}U_{{}_{4j}}-U_{{}_{i4}}^{\dagger}U_{{}_{2j}}\Big{)}\;,$
$\displaystyle\zeta_{{}_{uij}}^{P}={1\over\sqrt{2}}\lambda_{{}_{d}}\mu_{{}_{B}}\Big{(}D_{{}_{i2}}^{\dagger}D_{{}_{4j}}-D_{{}_{i4}}^{\dagger}D_{{}_{2j}}\Big{)}+{1\over
2}\lambda_{{}_{Q}}Y_{{}_{d_{4}}}\upsilon_{{}_{d}}\Big{(}D_{{}_{i3}}^{\dagger}D_{{}_{2j}}-D_{{}_{i2}}^{\dagger}D_{{}_{3j}}\Big{)}$
$\displaystyle\hskip 34.14322pt-{1\over
2}\lambda_{{}_{Q}}Y_{{}_{d_{5}}}\upsilon_{{}_{u}}\Big{(}D_{{}_{i4}}^{\dagger}D_{{}_{1j}}-D_{{}_{i1}}^{\dagger}D_{{}_{4j}}\Big{)}-{1\over\sqrt{2}}A_{{}_{BQ}}\lambda_{{}_{Q}}\Big{(}D_{{}_{i3}}^{\dagger}D_{{}_{1j}}-D_{{}_{i1}}^{\dagger}D_{{}_{3j}}\Big{)}\;,$
$\displaystyle\zeta_{{}_{dij}}^{P}=-{1\over\sqrt{2}}\lambda_{{}_{Q}}\mu_{{}_{B}}\Big{(}D_{{}_{i3}}^{\dagger}D_{{}_{1j}}-D_{{}_{i1}}^{\dagger}D_{{}_{3j}}\Big{)}-{1\over
2}\lambda_{{}_{d}}Y_{{}_{d_{4}}}\upsilon_{{}_{d}}\Big{(}D_{{}_{i1}}^{\dagger}D_{{}_{4j}}-D_{{}_{i4}}^{\dagger}D_{{}_{1j}}\Big{)}$
$\displaystyle\hskip 34.14322pt-{1\over
2}\lambda_{{}_{d}}Y_{{}_{d_{5}}}\upsilon_{{}_{u}}\Big{(}D_{{}_{i2}}^{\dagger}D_{{}_{3j}}-D_{{}_{i3}}^{\dagger}D_{{}_{2j}}\Big{)}+{1\over\sqrt{2}}A_{{}_{BD}}\lambda_{{}_{d}}\Big{(}D_{{}_{i2}}^{\dagger}D_{{}_{4j}}-D_{{}_{i4}}^{\dagger}D_{{}_{2j}}\Big{)}\;.$
(107)
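As an example of how the coefficients of Eq. (107) feed into the diagonal couplings of Eq. (86), the first one, $\xi_{{}_{uij}}^{S}$, can be transcribed directly. In the sketch below, U is the exotic up-type squark mixing matrix (taken $4\times 4$ here), p collects the model parameters with p['e2'] $=e^{2}$, and indices are zero-based; this is our transcription, not code from the paper.

```python
import numpy as np

def xi_u_S(i, j, U, p):
    """xi^S_{u,ij} of Eq. (107), with Ud = U^dagger and zero-based indices."""
    Ud = U.conj().T
    return (p['Yu5']*p['mu']/np.sqrt(2.0)*(Ud[i, 2]*U[3, j] + Ud[i, 3]*U[2, j])
            + 0.5*p['lamQ']*p['Yu4']*p['vB']*(Ud[i, 2]*U[1, j] + Ud[i, 1]*U[2, j])
            - 0.5*p['lamu']*p['Yu4']*p['vBbar']*(Ud[i, 0]*U[3, j] + Ud[i, 3]*U[0, j])
            + p['e2']/(4.0*p['sW2'])*p['vu']*(Ud[i, 2]*U[2, j] - Ud[i, 0]*U[0, j])
            + p['e2']/(12.0*p['cW2'])*p['vu']*(Ud[i, 0]*U[0, j] - Ud[i, 2]*U[2, j]
                                               - 4.0*Ud[i, 1]*U[1, j] + 4.0*Ud[i, 3]*U[3, j])
            - p['Au4']*p['Yu4']/np.sqrt(2.0)*(Ud[i, 1]*U[0, j] + Ud[i, 0]*U[1, j]))
```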
## Appendix D The radiative corrections to the mass squared matrix from
exotic lepton fields
$\displaystyle\Delta_{11}^{L}={G_{{}_{F}}m_{{}_{\nu_{4}}}^{4}\over\sqrt{2}\pi^{2}\sin^{2}\beta}\cdot{\mu^{2}(A_{{}_{\nu_{4}}}-\mu\cot\beta)^{2}\over(m_{{}_{\tilde{\nu}_{4}^{1}}}^{2}-m_{{}_{\tilde{\nu}_{4}^{2}}}^{2})^{2}}g(m_{{}_{\tilde{\nu}_{4}^{1}}},m_{{}_{\tilde{\nu}_{4}^{2}}})$
$\displaystyle\hskip
34.14322pt+{G_{{}_{F}}m_{{}_{e_{4}}}^{4}\over\sqrt{2}\pi^{2}\cos^{2}\beta}\Big{\\{}\ln{m_{{}_{\tilde{e}_{4}^{1}}}m_{{}_{\tilde{e}_{4}^{2}}}\over
m_{{}_{e_{4}}}^{2}}+{A_{{}_{e_{4}}}(A_{{}_{e_{4}}}-\mu\tan\beta)\over
m_{{}_{\tilde{e}_{4}^{1}}}^{2}-m_{{}_{\tilde{e}_{4}^{2}}}^{2}}\ln{m_{{}_{\tilde{e}_{4}^{1}}}^{2}\over
m_{{}_{\tilde{e}_{4}^{2}}}^{2}}$ $\displaystyle\hskip
34.14322pt+{A_{{}_{e_{4}}}^{2}(A_{{}_{e_{4}}}-\mu\tan\beta)^{2}\over(m_{{}_{\tilde{e}_{4}^{1}}}^{2}-m_{{}_{\tilde{e}_{4}^{2}}}^{2})^{2}}g(m_{{}_{\tilde{e}_{4}^{1}}},m_{{}_{\tilde{e}_{4}^{2}}})\Big{\\}}$
$\displaystyle\hskip
34.14322pt+{G_{{}_{F}}m_{{}_{\nu_{5}}}^{4}\over\sqrt{2}\pi^{2}\cos^{2}\beta}\Big{\\{}\ln{m_{{}_{\tilde{\nu}_{5}^{1}}}m_{{}_{\tilde{\nu}_{5}^{2}}}\over
m_{{}_{\nu_{5}}}^{2}}+{A_{{}_{\nu_{5}}}(A_{{}_{\nu_{5}}}-\mu\tan\beta)\over
m_{{}_{\tilde{\nu}_{5}^{1}}}^{2}-m_{{}_{\tilde{\nu}_{5}^{2}}}^{2}}\ln{m_{{}_{\tilde{\nu}_{5}^{1}}}^{2}\over
m_{{}_{\tilde{\nu}_{5}^{2}}}^{2}}$ $\displaystyle\hskip
34.14322pt+{A_{{}_{\nu_{5}}}^{2}(A_{{}_{\nu_{5}}}-\mu\tan\beta)^{2}\over(m_{{}_{\tilde{\nu}_{5}^{1}}}^{2}-m_{{}_{\tilde{\nu}_{5}^{2}}}^{2})^{2}}g(m_{{}_{\tilde{\nu}_{5}^{1}}},m_{{}_{\tilde{\nu}_{5}^{2}}})\Big{\\}}$
$\displaystyle\hskip
34.14322pt+{G_{{}_{F}}m_{{}_{e_{5}}}^{4}\over\sqrt{2}\pi^{2}\sin^{2}\beta}\cdot{\mu^{2}(A_{{}_{e_{5}}}-\mu\cot\beta)^{2}\over(m_{{}_{\tilde{e}_{5}^{1}}}^{2}-m_{{}_{\tilde{e}_{5}^{2}}}^{2})^{2}}g(m_{{}_{\tilde{e}_{5}^{1}}},m_{{}_{\tilde{e}_{5}^{2}}})\;,$
$\displaystyle\Delta_{12}^{L}={G_{{}_{F}}m_{{}_{\nu_{4}}}^{4}\over
2\sqrt{2}\pi^{2}\sin^{2}\beta}\cdot{\mu(-A_{{}_{\nu_{4}}}+\mu\cot\beta)\over
m_{{}_{\tilde{\nu}_{4}^{1}}}^{2}-m_{{}_{\tilde{\nu}_{4}^{2}}}^{2}}\Big{\\{}\ln{m_{{}_{\tilde{\nu}_{4}^{1}}}\over
m_{{}_{\tilde{\nu}_{4}^{2}}}}+{A_{{}_{\nu_{4}}}(A_{{}_{\nu_{4}}}-\mu\cot\beta)\over
m_{{}_{\tilde{\nu}_{4}^{1}}}^{2}-m_{{}_{\tilde{\nu}_{4}^{2}}}^{2}}g(m_{{}_{\tilde{\nu}_{4}^{1}}},m_{{}_{\tilde{\nu}_{4}^{2}}})\Big{\\}}$
$\displaystyle\hskip 34.14322pt+{G_{{}_{F}}m_{{}_{e_{4}}}^{4}\over
2\sqrt{2}\pi^{2}\cos^{2}\beta}\cdot{\mu(-A_{{}_{e_{4}}}+\mu\tan\beta)\over
m_{{}_{\tilde{e}_{4}^{1}}}^{2}-m_{{}_{\tilde{e}_{4}^{2}}}^{2}}\Big{\\{}\ln{m_{{}_{\tilde{e}_{4}^{1}}}\over
m_{{}_{\tilde{e}_{4}^{2}}}}+{A_{{}_{e_{4}}}(A_{{}_{e_{4}}}-\mu\tan\beta)\over
m_{{}_{\tilde{e}_{4}^{1}}}^{2}-m_{{}_{\tilde{e}_{4}^{2}}}^{2}}g(m_{{}_{\tilde{e}_{4}^{1}}},m_{{}_{\tilde{e}_{4}^{2}}})\Big{\\}}$
$\displaystyle\hskip 34.14322pt+{G_{{}_{F}}m_{{}_{\nu_{5}}}^{4}\over
2\sqrt{2}\pi^{2}\cos^{2}\beta}\cdot{\mu(-A_{{}_{\nu_{5}}}+\mu\tan\beta)\over
m_{{}_{\tilde{\nu}_{5}^{1}}}^{2}-m_{{}_{\tilde{\nu}_{5}^{2}}}^{2}}\Big{\\{}\ln{m_{{}_{\tilde{\nu}_{5}^{1}}}\over
m_{{}_{\tilde{\nu}_{5}^{2}}}}+{A_{{}_{\nu_{5}}}(A_{{}_{\nu_{5}}}-\mu\tan\beta)\over
m_{{}_{\tilde{\nu}_{5}^{1}}}^{2}-m_{{}_{\tilde{\nu}_{5}^{2}}}^{2}}g(m_{{}_{\tilde{\nu}_{5}^{1}}},m_{{}_{\tilde{\nu}_{5}^{2}}})\Big{\\}}$
$\displaystyle\hskip 34.14322pt+{G_{{}_{F}}m_{{}_{e_{5}}}^{4}\over
2\sqrt{2}\pi^{2}\sin^{2}\beta}\cdot{\mu(-A_{{}_{e_{5}}}+\mu\cot\beta)\over
m_{{}_{\tilde{e}_{5}^{1}}}^{2}-m_{{}_{\tilde{e}_{5}^{2}}}^{2}}\Big{\\{}\ln{m_{{}_{\tilde{e}_{5}^{1}}}\over
m_{{}_{\tilde{e}_{5}^{2}}}}+{A_{{}_{e_{5}}}(A_{{}_{e_{5}}}-\mu\cot\beta)\over
m_{{}_{\tilde{e}_{5}^{1}}}^{2}-m_{{}_{\tilde{e}_{5}^{2}}}^{2}}g(m_{{}_{\tilde{e}_{5}^{1}}},m_{{}_{\tilde{e}_{5}^{2}}})\Big{\\}}\;,$
$\displaystyle\Delta_{22}^{L}={G_{{}_{F}}m_{{}_{\nu_{4}}}^{4}\over\sqrt{2}\pi^{2}\sin^{2}\beta}\Big{\\{}\ln{m_{{}_{\tilde{\nu}_{4}^{1}}}m_{{}_{\tilde{\nu}_{4}^{2}}}\over
m_{{}_{\nu_{4}}}^{2}}+{A_{{}_{\nu_{4}}}(A_{{}_{\nu_{4}}}-\mu\cot\beta)\over
m_{{}_{\tilde{\nu}_{4}^{1}}}^{2}-m_{{}_{\tilde{\nu}_{4}^{2}}}^{2}}\ln{m_{{}_{\tilde{\nu}_{4}^{1}}}^{2}\over
m_{{}_{\tilde{\nu}_{4}^{2}}}^{2}}$ $\displaystyle\hskip
34.14322pt+{A_{{}_{\nu_{4}}}^{2}(A_{{}_{\nu_{4}}}-\mu\cot\beta)^{2}\over(m_{{}_{\tilde{\nu}_{4}^{1}}}^{2}-m_{{}_{\tilde{\nu}_{4}^{2}}}^{2})^{2}}g(m_{{}_{\tilde{\nu}_{4}^{1}}},m_{{}_{\tilde{\nu}_{4}^{2}}})\Big{\\}}$
$\displaystyle\hskip
34.14322pt+{G_{{}_{F}}m_{{}_{e_{4}}}^{4}\over\sqrt{2}\pi^{2}\cos^{2}\beta}\cdot{\mu^{2}(A_{{}_{e_{4}}}-\mu\tan\beta)^{2}\over(m_{{}_{\tilde{e}_{4}^{1}}}^{2}-m_{{}_{\tilde{e}_{4}^{2}}}^{2})^{2}}g(m_{{}_{\tilde{e}_{4}^{1}}},m_{{}_{\tilde{e}_{4}^{2}}})$
$\displaystyle\hskip
34.14322pt+{G_{{}_{F}}m_{{}_{\nu_{5}}}^{4}\over\sqrt{2}\pi^{2}\cos^{2}\beta}\cdot{\mu^{2}(A_{{}_{\nu_{5}}}-\mu\tan\beta)^{2}\over(m_{{}_{\tilde{\nu}_{5}^{1}}}^{2}-m_{{}_{\tilde{\nu}_{5}^{2}}}^{2})^{2}}g(m_{{}_{\tilde{\nu}_{5}^{1}}},m_{{}_{\tilde{\nu}_{5}^{2}}})$
$\displaystyle\hskip
34.14322pt+{G_{{}_{F}}m_{{}_{e_{5}}}^{4}\over\sqrt{2}\pi^{2}\sin^{2}\beta}\Big{\\{}\ln{m_{{}_{\tilde{e}_{5}^{1}}}m_{{}_{\tilde{e}_{5}^{2}}}\over
m_{{}_{e_{5}}}^{2}}+{A_{{}_{e_{5}}}(A_{{}_{e_{5}}}-\mu\cot\beta)\over
m_{{}_{\tilde{e}_{5}^{1}}}^{2}-m_{{}_{\tilde{e}_{5}^{2}}}^{2}}\ln{m_{{}_{\tilde{e}_{5}^{1}}}^{2}\over
m_{{}_{\tilde{e}_{5}^{2}}}^{2}}$ $\displaystyle\hskip
34.14322pt+{A_{{}_{e_{5}}}^{2}(A_{{}_{e_{5}}}-\mu\cot\beta)^{2}\over(m_{{}_{\tilde{e}_{5}^{1}}}^{2}-m_{{}_{\tilde{e}_{5}^{2}}}^{2})^{2}}g(m_{{}_{\tilde{e}_{5}^{1}}},m_{{}_{\tilde{e}_{5}^{2}}})\Big{\\}}\;,$
(108)
## References
* (1) CMS Collaboration, Phys. Lett. B716(2012)30.
* (2) ATLAS Collaboration, Phys. Lett. B716(2012)1.
* (3) H. E. Haber and G. L. Kane, Phys. Rep. 117(1985)75; J. Rosiek, Phys. Rev. D41(1990)3464.
* (4) P. Minkowski, Phys. Lett. B67(1977)421; T. Yanagida, in Proceedings of the Workshop on the Unified Theory and the Baryon Number in the Universe, edited by O. Sawada et al. (KEK, Tsukuba, 1979), p. 95; M. Gell-Mann, P. Ramond, and R. Slansky, in Supergravity, edited by P. van Nieuwenhuizen et al. (North-Holland, Amsterdam, 1979), p. 315; S. L. Glashow, in Quarks and Leptons, Cargèse, edited by M. Lévy et al. (Plenum, New York, 1980), p. 707; R. N. Mohapatra and G. Senjanovic, Phys. Rev. Lett. 44(1980)912.
* (5) P. F. Perez, Phys. Lett. B711(2012)353.
* (6) J. M. Arnold, P. F. Perez, B. Fornal, and S. Spinner, Phys. Rev. D85(2012)115024.
* (7) P. F. Perez and M. B. Wise, JHEP1108(2011)068; Phys. Rev. D82(2010)011901; ibid.84(2011)055015; T. R. Dulaney, P. F. Perez and M. B. Wise, Phys. Rev. D83(2011)023520.
* (8) H. E. Haber, R. Hempfling, Phys. Rev. Lett. 66(1991)1815.
* (9) S. Heinemeyer, W. Hollik and G. Weiglein, Comput. Phys. Commun. 24(2000)76; Eur. Phys. J. C9(1999)343; G. Degrassi, S. Heinemeyer, W. Hollik, P. Slavich and G. Weiglein, Eur. Phys. J. C28(2003)133; M. Frank, T. Hahn, S. Heinemeyer, W. Hollik, H. Rzehak and G. Weiglein, JHEP0702(2007)047.
* (10) Y. Okada, M. Yamaguchi and T. Yanagida, Prog. Theor. Phys.85(1991)1; J. R. Ellis, G. Ridolfi and F. Zwirner, Phys. Lett. B257(1991)83; ibid.262(1991)477; S. P. Li and M. Sher, Phys. Lett. B140(1984)33; R. Barbieri and M. Frigeni, Phys. Lett. B258(1991)395; M. Drees and M. M. Nojiri, Phys. Rev. D45(1992)2482; J. A. Casas, J. R. Espinosa, M. Quiros and A. Riotto, Nucl. Phys. B436(1995)3[Erratum-ibid. B439(1995)466]; M. A. Diaz and H. E. Haber, Phys. Rev. D46(1992)3086; M. S. Carena, M. Quiros and C. E. M. Wagner, Nucl. Phys. B461(1996)407.
* (11) A. Arbey, M. Battaglia, A. Djouadi and F. Mahmoudi, JHEP1209(2012)107.
* (12) C. Anastasiou and K. Melnikov, Nucl. Phys. B646(2002)220.
* (13) J. R. Ellis, M. K. Gaillard and D. V. Nanopoulos, Nucl. Phys. B106(1976)292; M. A. Shifman, A. I. Vainshtein, M. B. Voloshin and V. I. Zakharov, Sov. J. Nucl. Phys. 30(1979)711; A. Djouadi, Phys. Rept. 459(2008)1; J. F. Gunion, H. E. Haber, G. L. Kane and S. Dawson, The Higgs Hunter's Guide, Addison-Wesley, Reading (USA), 1990; M. Carena, I. Low and C. E. M. Wagner, arXiv:1206.1082 [hep-ph].
* (14) W.-Y. Keung, W. J. Marciano, Phys. Rev. D30(1984)248; J. F. Gunion, H. E. Haber, G. Kane, S. Dawson, The Higgs Hunter’s Guide, Perseus Books(1990).
* (15) P. González, S. Palmer, M. Wiebusch, K. Williams, arXiv:1211.3079 [hep-ph]; W. Bernreuther, P. Gonzalez, M. Wiebusch, Eur. Phys. J. C69 (2010) 31.
* (16) J. Beringer et al.(Particle Data Group), Phys. Rev. D86(2012)010001.
|
arxiv-papers
| 2013-02-28T23:33:53 |
2024-09-04T02:49:42.225566
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Tai-Fu Feng, Shu-Min Zhao, Hai-Bin Zhang, Yin-Jie Zhang, Yu-Li Yan",
"submitter": "Tai-Fu Feng",
"url": "https://arxiv.org/abs/1303.0047"
}
|
1303.0061
|
# Magnetic field dependence of Raman coupling in Alkali atoms
Ran Wei Hefei National Laboratory for Physical Sciences at Microscale and
Department of Modern Physics, University of Science and Technology of China,
Hefei, Anhui 230026, China Laboratory of Atomic and Solid State Physics,
Cornell University, Ithaca, NY, 14850 Erich J. Mueller Laboratory of Atomic
and Solid State Physics, Cornell University, Ithaca, NY, 14850
###### Abstract
We calculate the magnetic field dependence of Rabi rates for two-photon
optical Raman processes in alkali atoms. Due to a decoupling of the nuclear
and electronic spins, these rates fall with increasing field. At the typical
magnetic fields of alkali atom Feshbach resonances ($B\sim 200$G$-1200$G), the
Raman rates have the same order of magnitude as their zero field values,
suggesting one can combine Raman-induced gauge fields or spin-orbital coupling
with strong Feshbach-induced interactions. The exception is 6Li, where there
is a factor of $7$ suppression in the Raman coupling, compared to its already
small zero-field value.
###### pacs:
32.10.Fn
## I introduction
Two-photon “Raman” transitions act as an important control parameter in cold
atom experiments. These optical transitions couple motional and internal
degrees of freedom, mimicking important physical processes such as gauge
fields gaugeboson ; lattice and spin-orbital couplings SOboson ; PanSO ;
SOshanxi ; SOmit . They have also been used as spectroscopic probes, for
example allowing scientists to measure excitation spectra Dalibard2007 ;
Zhang2012 ; Zhang2012p . The most exciting future applications of these
techniques will involve strongly interacting atoms near Feshbach resonances
Sarma2008 ; Fujimoto2009 ; Duan2011 ; Zoller2011 ; Cooper2011 ; Chuanwei2012 ;
Huhui2012 ; Huhui2012r ; Liu2012 ; Mueller2012 . Here we study the Raman
couplings as a function of magnetic field, quantifying the practicality of
such experiments. We find that for relatively heavy atoms, such as 40K, the
Raman techniques are compatible with the magnetic fields needed for Feshbach
resonances. Despite important experimental demonstrations SOmit , lighter
atoms, such as 6Li are less promising, as the ratio of Raman Rabi frequency to
the inelastic scattering rate is not sufficiently large. This problem is
exacerbated by the magnetic field suppress .
We are interested in two-photon transitions which take an atom between two
hyperfine states $|g_{1}\rangle$ and $|g_{2}\rangle$. Optical photons only
couple to electronic motion in an atom. Hence such Raman transitions rely on
fine and hyperfine interactions. The former couple electronic spins and
motion, and the latter couple nuclear spins to the electronic angular
momentum. One expects that magnetic fields will reduce these Raman matrix
elements, as the disparate Zeeman coupling of electronic and nuclear degrees
of freedom competes with the fine and hyperfine interactions. We find that at
the typical magnetic fields of Feshbach resonances ($B\sim 200$G$-1200$G
Chengchin2010 ), the Raman couplings are still quite strong. The exception is
6Li, where there is a factor of $7$ suppression, compared to its already small
zero-field value.
A key figure of merit in experiment is the ratio of the Raman Rabi frequency
to the inelastic scattering rate $\beta\equiv\Omega_{R}/\Gamma_{\rm ine}$. The
inverse Rabi frequency gives the time required for an atom to flip between
$|g_{1}\rangle$ and $|g_{2}\rangle$, and the inverse inelastic scattering rate
gives the average time between photon absorption events, which leads to
heating. For equilibrium experiments on Raman-dressed atoms, one needs
$\Omega_{R}\sim\mu/\hbar\sim{\rm kHz}$, where $\mu$ is the chemical potential.
A typical experiment takes one second. Thus if $\beta\lesssim 10^{3}$, the
inelastic light scattering has a large impact. As argued by Spielman
Spielman2009 , in the limit of sufficiently large detuning both of these rates are
proportional to the laser intensity and inversely proportional to the square
of the detuning. We numerically calculate this ratio as a function of magnetic
field, including all relevant single-particle physics. We also explore the
detuning dependence of this ratio.
The remainder of this manuscript is organized as follows. In Sec. II, we
estimate the ratio of the Raman Rabi frequency to the inelastic scattering
rate in the absence of magnetic field for various alkali atoms. In Sec. III,
we calculate the magnetic field dependence of electric dipole transitions: We
introduce the single-particle Hamiltonian in Sec. III(A), and in Sec. III(B)
we introduce the formal expression of the electric dipole transitions. The
analytical discussions and numerical results are elaborated in Sec. III(C) and
Sec. III(D). In Sec. IV, we calculate the ratios for 23Na, 40K, 85Rb, 87Rb and
133Cs, and further study 6Li and explore how the ratio depends on detuning.
Finally we conclude in Sec. V.
## II Raman coupling
Figure 1: (Color online) Sketch of the energy level structures in a Raman
experiment. The Rabi frequencies $\Omega_{1},\Omega_{2}$ characterize the
coupling strengths between the ground states $|g_{1}\rangle,|g_{2}\rangle$ and
the excited states. The fine structure energy splitting between
$|e_{\mu}\rangle$ and $|e_{\nu}\rangle$ is $A_{f}=E_{\nu}-E_{\mu}$. The laser
detuning is $\Delta=(E_{\mu}-E_{g})-\hbar\omega$.
We consider a typical setup of a Raman experiment, as shown in Fig. 1: Two
hyperfine ground states $|g_{1}\rangle$ and $|g_{2}\rangle$ with energies
$E_{g}$, are coupled to a pair of excited multiplets
$\\{|e_{\mu}\rangle,|e_{\nu}\rangle\\}$ by two lasers, where the coupling
strengths are characterized by the Rabi frequencies $\Omega_{1}$ and
$\Omega_{2}$. The states $|e_{\mu}\rangle$ and $|e_{\nu}\rangle$ are states in
the $J=1/2$ and $J=3/2$ manifolds, with energies $E_{\mu}$ and $E_{\nu}$, and
the energy difference $A_{f}=E_{\nu}-E_{\mu}$. For 6Li,
$A_{f}\sim(2\pi\hbar)\times 10$GHz. For heavier atoms such as 40K,
$A_{f}\sim(2\pi\hbar)\times 1$THz. The laser detuning
$\Delta=(E_{\mu}-E_{g})-\hbar\omega$, characterizes the energy mismatch
between the laser frequency $f=\omega/2\pi$ and the atomic “D1” transition.
Within such a setup, the two lasers couple $|g_{1}\rangle$ and $|g_{2}\rangle$
via two-photon transitions, and the Raman Rabi frequency, which characterizes
the (Raman) coupling strength, is
$\displaystyle\Omega_{R}=\sum_{\mu}\frac{\hbar\Omega_{1\mu}\Omega_{2\mu}}{4\Delta}+\sum_{\nu}\frac{\hbar\Omega_{1\nu}\Omega_{2\nu}}{4(\Delta+A_{f})},$
(1)
where the optical Rabi frequency
$\Omega_{i\epsilon}=\frac{{\bm{E}}_{i}\cdot\langle
g_{i}|{\bm{d}}|e_{\epsilon}\rangle}{\hbar}$ with electronic dipole
${\bm{d}}=e{\bm{r}}$, characterizes the individual transition element between
the ground state $|g_{i}\rangle$ and the excited state $|e_{\epsilon}\rangle$
($i=1,2$ and $\epsilon=\mu,\nu$). In our following calculations, we assume
$|g_{1}\rangle$ and $|g_{2}\rangle$ are the lowest two ground states.
This expression can be simplified by noting that the ground state quadrupole
matrix element $\langle g_{i}|d_{a}d_{b}|g_{j}\rangle=0$ unless $i=j$ and
$a=b$ ($a,b=x,y,z$). This reflects the spherical symmetry of the electron
wavefunction, and the fact that the electronic dipole does not couple to spin.
Inserting a complete set of excited states, we find
$\sum_{\mu}\Omega_{1\mu}\Omega_{2\mu}+\sum_{\nu}\Omega_{1\nu}\Omega_{2\nu}=0$,
allowing us to write Eq. (1) solely in terms of the matrix elements for the D1
line,
$\displaystyle\Omega_{R}=\frac{\hbar
A_{f}}{4\Delta(\Delta+A_{f})}\sum_{\mu}\Omega_{1\mu}\Omega_{2\mu}.$ (2)
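The step from Eq. (1) to Eq. (2) is a one-line consequence of the completeness relation above: inserting $\sum_{\nu}\Omega_{1\nu}\Omega_{2\nu}=-\sum_{\mu}\Omega_{1\mu}\Omega_{2\mu}$ into Eq. (1) gives
$\displaystyle\Omega_{R}=\frac{\hbar}{4}\sum_{\mu}\Omega_{1\mu}\Omega_{2\mu}\left(\frac{1}{\Delta}-\frac{1}{\Delta+A_{f}}\right)=\frac{\hbar A_{f}}{4\Delta(\Delta+A_{f})}\sum_{\mu}\Omega_{1\mu}\Omega_{2\mu},$
which is Eq. (2).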
It is thus clear that $\Omega_{R}\sim A_{f}/\Delta^{2}$ for $\Delta\gg A_{f}$.
The inelastic scattering rate that emerges from the spontaneous emission of
the excited states is
$\displaystyle\Gamma_{\rm{ine}}=\gamma\left(\sum_{\mu}\frac{\hbar^{2}(\Omega_{1\mu}^{2}+\Omega_{2\mu}^{2})}{4\Delta^{2}}+\sum_{\nu}\frac{\hbar^{2}(\Omega_{1\nu}^{2}+\Omega_{2\nu}^{2})}{4(\Delta+A_{f})^{2}}\right)$
(3)
where $\gamma$ denotes the decay rate of these excited states. For $\Delta\gg
A_{f}$, this rate scales as $\Gamma_{\rm{ine}}\sim\gamma/\Delta^{2}$. Explicit
calculations show that to a good approximation
$\beta\equiv\Omega_{R}/\Gamma_{\rm ine}\approx\beta_{e}\equiv
A_{f}/(12\hbar\gamma)$ for $\Delta\gg A_{f}$. The factor of $\frac{1}{12}$ can
crudely be related to cancellation of terms of opposite signs in the
expression for $\Omega_{R}$. As an illustration, we show $A_{f}$, $\gamma$,
and $\beta_{e}$ for various alkali atoms in Tab. I. We also present results of
our numerical calculation of $\beta$. The details of this calculation will be
given in Sec. III.
Alkalis | 6Li($2p$) | 6Li($3p$) | 23Na | 40K | 85Rb | 87Rb | 133Cs
---|---|---|---|---|---|---|---
$A_{f}/(2\pi\hbar)$ (GHz) | $10.0$ | $2.88$ | $515$ | $1730$ | $7120$ | $7120$ | $16600$
$\gamma/(2\pi)$ (MHz) | $5.87$ | $0.754$ | $9.76$ | $6.04$ | $5.75$ | $5.75$ | $4.57$
$\beta_{e}/10^{3}$ | $0.14$ | $0.32$ | $4.4$ | $24$ | $103$ | $103$ | $303$
$\beta/10^{3}$ | $0.13$ | $0.30$ | $4.0$ | $23$ | $101$ | $103$ | $304$
Table 1: Fine structure energy splitting $A_{f}$, spontaneous decay rate
$\gamma$, and ratios $\beta_{e}$ and $\beta$ for various alkali atoms. For
6Li, we consider either $2p$ states or $3p$ states as the excited multiplet.
For other atoms, we consider the lowest $p$ multiplet. The ground states for
all alkali atoms are the two lowest magnetic substates. Data in the first two
rows were extracted from archived data archive .
As seen from the table, the heavier atoms have more favorable ratios
($\beta\gtrsim 10^{3}$ for most alkali atoms). For 6Li we include the rates
for Raman lasers detuned from the $2s-2p$ line and the narrower $2s-3p$ line.
For all other atoms we just consider the lowest energy $s-p$ transition. We
see that the ratio for 6Li can be improved by a factor of $2.2$ by using the
$3p$ states. Similar gains are found for laser cooling schemes using these
states Hulet2011 .
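As a quick arithmetic cross-check of the large-detuning estimate $\beta_{e}\equiv A_{f}/(12\hbar\gamma)$, the short Python sketch below (ours, not part of the paper's calculation) reproduces the $\beta_{e}$ row of Table 1 from the quoted $A_{f}$ and $\gamma$. With $A_{f}$ given in units of $(2\pi\hbar)\times$GHz and $\gamma$ in $(2\pi)\times$MHz, the ratio reduces to $A_{f}[\mathrm{GHz}]\times 10^{3}/(12\,\gamma[\mathrm{MHz}])$.

```python
# Minimal sketch: beta_e = A_f / (12 hbar gamma) evaluated from the
# (A_f, gamma) pairs quoted in Table 1.  Units cancel as described above.
atoms = {           # (A_f / (2*pi*hbar) in GHz,  gamma / (2*pi) in MHz)
    "6Li(2p)": (10.0, 5.87),
    "6Li(3p)": (2.88, 0.754),
    "23Na":    (515.0, 9.76),
    "40K":     (1730.0, 6.04),
    "85Rb":    (7120.0, 5.75),
    "87Rb":    (7120.0, 5.75),
    "133Cs":   (16600.0, 4.57),
}

for name, (A_f_GHz, gamma_MHz) in atoms.items():
    beta_e = A_f_GHz * 1e3 / (12.0 * gamma_MHz)
    print(f"{name:8s}  beta_e ~ {beta_e / 1e3:.2f} x 10^3")
```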
## III magnetic field dependence of electric dipole transitions
In this section we will calculate $\Omega_{i\epsilon}$ and its dependence on
the magnetic field. In the following section we will use these results to
calculate $\Omega_{R}$ and $\Gamma_{\rm ine}$.
### III.1 Single-particle Hamiltonian
Fixing the principal quantum number of the valence electron, the fine and
hyperfine atomic structure of an alkali in a magnetic field is described by a
coupled spin Hamiltonian
$\displaystyle H$ $\displaystyle=$ $\displaystyle H_{a}+H_{B}$ (4)
where
$\displaystyle H_{a}$ $\displaystyle=$ $\displaystyle
c_{f}{\bm{L}}\cdot{\bm{S}}+c_{hf1}{\bm{L}}\cdot{\bm{I}}+c_{hf2}{\bm{S}}\cdot{\bm{I}}$
(5) $\displaystyle H_{B}$ $\displaystyle=$
$\displaystyle\mu_{B}(g_{L}{\bm{L}}+g_{S}{\bm{S}}+g_{I}{\bm{I}})\cdot{\bm{B}}.$
(6)
Here the vectors ${\bm{L}}$ and ${\bm{S}}$ are the dimensionless orbital and
spin angular momentum of the electron, and ${\bm{I}}$ is the angular momentum
of the nuclear spin. The coefficients $c_{f}$ and $c_{hf1},c_{hf2}$ are the
fine-structure and hyperfine-structure coupling constants, whose values were measured
in experiments hyperfine ; archive . $\mu_{B}$ is the Bohr magneton and
$g_{L},g_{S},g_{I}$ are the Lande $g$-factors.
For 6Li, the Zeeman splitting energy is $E_{B}\sim(2\pi\hbar)\times 5$GHz at
the magnetic field of the wide Feshbach resonance $B=834$G Chengchin2010 .
This splitting is comparable to the fine structure constant
$c_{f}\sim(2\pi\hbar)\times 7$GHz. For other heavier atoms where $E_{B}\ll
c_{f}\sim(2\pi\hbar)\times 1$THz, the fine structure interaction is robust
against the magnetic field, and we can approximate the Hamiltonian as
$\displaystyle
H=c_{hf}{\bm{J}}\cdot{\bm{I}}+\mu_{B}(g_{J}{\bm{J}}+g_{I}{\bm{I}})\cdot{\bm{B}}$
(7)
with the vector ${\bm{J}}={\bm{L}}+{\bm{S}}$.
The Hamiltonian (4) can be diagonalized in the basis
$|Lm_{L}m_{S}m_{I}\rangle$, where $\\{m_{L},m_{S},m_{I}\\}$ are the
$z$-components of $\\{L,S,I\\}$, and the eigenstate $|LQ\rangle$ can be
expanded as
$\displaystyle|LQ\rangle=\sum_{m_{L}m_{S}m_{I}}C^{Q}_{m_{L}m_{S}m_{I}}|Lm_{L}m_{S}m_{I}\rangle$
(8)
where $Q$ labels the eigenstate, and $C^{Q}_{m_{L}m_{S}m_{I}}$ corresponds to
the eigenvector. We will use these coefficients to calculate the electric
dipole transition in the following subsections.
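To make this procedure concrete, the following minimal numerical sketch (ours, not the authors' code) builds and diagonalizes the simplified Hamiltonian of Eq. (7) for a $J=1/2$, $I=3/2$ ground state, with ${\bm{B}}$ along $z$ and $\hbar=1$. The 23Na-like parameters ($c_{hf}$ and $\mu_{B}$ in frequency units, $g_{J}$ and $g_{I}$ dimensionless) are illustrative placeholders and should be checked against standard tables; the columns of the returned eigenvector matrix play the role of the coefficients $C^{\tilde{Q}}_{m_{J}m_{I}}$ used in Sec. III.4.

```python
# Minimal sketch: diagonalize H = c_hf J.I + mu_B (g_J Jz + g_I Iz) B  [Eq. (7)]
import numpy as np

def spin_matrices(j):
    """Dimensionless (Jx, Jy, Jz) for spin j in the |j, m> basis, m = j..-j."""
    dim = int(round(2 * j)) + 1
    m = np.array([j - k for k in range(dim)])
    jz = np.diag(m)
    jp = np.zeros((dim, dim))              # raising operator J+
    for col in range(1, dim):
        mc = m[col]
        jp[col - 1, col] = np.sqrt(j * (j + 1) - mc * (mc + 1))
    return 0.5 * (jp + jp.T), -0.5j * (jp - jp.T), jz

def eigensystem(J, I, c_hf, gJ, gI, muB, B):
    """Eigenvalues and coefficients C^{Q~}_{mJ mI} of Eq. (7), with B along z."""
    Jx, Jy, Jz = spin_matrices(J)
    Ix, Iy, Iz = spin_matrices(I)
    eJ, eI = np.eye(len(Jz)), np.eye(len(Iz))
    JdotI = np.kron(Jx, Ix) + np.kron(Jy, Iy) + np.kron(Jz, Iz)
    H = c_hf * JdotI + muB * B * (gJ * np.kron(Jz, eI) + gI * np.kron(eJ, Iz))
    return np.linalg.eigh(H)               # columns of evecs = C^{Q~}_{mJ mI}

# Illustrative run (placeholder 23Na-like numbers): energies in MHz, B in Gauss.
vals, vecs = eigensystem(J=0.5, I=1.5, c_hf=885.8, gJ=2.0023, gI=-8e-4,
                         muB=1.3996, B=500.0)
print(np.round(vals, 1))
```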
### III.2 Formal expressions
We define $D_{q}\equiv\langle
Lm_{L}m_{S}m_{I}|er_{q}|L^{\prime}m^{\prime}_{L}m^{\prime}_{S}m^{\prime}_{I}\rangle$,
the electric dipole transition between $|Lm_{L}m_{S}m_{I}\rangle$ and
$|L^{\prime}m_{L}^{\prime}m_{S}^{\prime}m_{I}^{\prime}\rangle$, where $r_{q}$
is the position operator, expressed as an irreducible spherical tensor:
$q=-1,0,1$ correspond to $\sigma^{-},\pi,\sigma^{+}$ polarized light. Note
that the electric dipole $er_{q}$ does not directly couple to the electronic
spin $m_{S}$ or the nuclear spin $m_{I}$, and $D_{q}$ is of the form
$D_{q}=\delta_{m_{S}m^{\prime}_{S}}\delta_{m_{I}m^{\prime}_{I}}\langle
Lm_{L}|er_{q}|L^{\prime}m^{\prime}_{L}\rangle$. Using the Wigner-Eckart
theorem, we obtain
$\displaystyle
D_{q}=\delta_{m_{S}m^{\prime}_{S}}\delta_{m_{I}m^{\prime}_{I}}W_{m_{L}^{\prime}qm_{L}}^{L^{\prime}L}\langle
L||er||L^{\prime}\rangle$ (9)
where $\langle L||er||L^{\prime}\rangle$ is the reduced matrix element,
independent of $\\{m_{L},m_{S},m_{I}\\}$. The coefficient
$W_{m_{L}^{\prime}qm_{L}}^{L^{\prime}L}$ can be written in terms of the Wigner
$3$-$j$ symbol Brink
$\displaystyle
W_{m_{L}^{\prime}qm_{L}}^{L^{\prime}L}=(-1)^{L^{\prime}-1+m_{L}}\sqrt{2L+1}\left(\begin{array}[]{ccc}L^{\prime}&1&L\\\
m_{L}^{\prime}&q&-m_{L}\\\ \end{array}\right)$ (12)
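As an illustration of Eq. (12), the angular factor can be evaluated with a computer-algebra package. The short sketch below (ours, not from the paper) uses sympy's `wigner_3j` and follows the phase convention of Eq. (12) literally; the chosen quantum numbers correspond to a D1-type $L=0\leftrightarrow L^{\prime}=1$ transition driven by $\sigma^{-}$ light.

```python
# Minimal sketch of Eq. (12): W^{L'L}_{m'_L q m_L} via the Wigner 3-j symbol.
from sympy import sqrt
from sympy.physics.wigner import wigner_3j

def W(Lp, L, mLp, q, mL):
    return (-1) ** (Lp - 1 + mL) * sqrt(2 * L + 1) \
        * wigner_3j(Lp, 1, L, mLp, q, -mL)

for mLp in (-1, 0, 1):
    print(mLp, W(1, 0, mLp, -1, 0))   # nonzero only when m_L = m'_L + q
```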
Combining Eq. (8) and Eq. (9), we obtain the electric dipole transition
between two eigenstates $|LQ\rangle$ and $|L^{\prime}Q^{\prime}\rangle$,
$\displaystyle D_{q,LQ}^{L^{\prime}Q^{\prime}}\equiv\langle
LQ|er_{q}|L^{\prime}Q^{\prime}\rangle$ $\displaystyle=$
$\displaystyle\sum_{\bar{m}_{L}\bar{m}^{\prime}_{L}\bar{m}_{S}\bar{m}_{I}}C^{Q}_{\bar{m}_{L}\bar{m}_{S}\bar{m}_{I}}C^{Q^{\prime}}_{\bar{m}^{\prime}_{L}\bar{m}_{S}\bar{m}_{I}}W_{\bar{m}_{L}^{\prime}q\bar{m}_{L}}^{L^{\prime}L}\langle
L||er||L^{\prime}\rangle$
where the coefficients $C^{Q}_{\bar{m}_{L}\bar{m}_{S}\bar{m}_{I}}$ are defined
by Eq. (8).
### III.3 Analytical discussions
While $C^{Q}_{\bar{m}_{L}\bar{m}_{S}\bar{m}_{I}}$ can be numerically
calculated by extracting the eigenvector of the Hamiltonian, in some regimes
the problem simplifies, and $C^{Q}_{\bar{m}_{L}\bar{m}_{S}\bar{m}_{I}}$
corresponds to a Clebsch-Gordan coefficient. In this subsection we discuss
these simple limits.
In a weak magnetic field such that the Zeeman splitting energy $E_{B}\ll
c_{hf}$, the electronic angular momentum and the nuclear spins are strongly
mixed. Here $Q$ corresponds to the three quantum numbers $\\{J,F,m_{F}\\}$,
where $F$ is the quantum number associated with the total hyperfine spin
${\bm{F}}={\bm{J}}+{\bm{I}}$, and $m_{F}$ labels the magnetic sublevels. In
this limit
$C^{Q}_{\bar{m}_{L}\bar{m}_{S}\bar{m}_{I}}=C^{JFm_{F}}_{\bar{m}_{L}\bar{m}_{S}\bar{m}_{I}}=\langle\bar{m}_{L}\bar{m}_{S}|J\bar{m}_{J}\rangle\langle\bar{m}_{I}\bar{m}_{J}|Fm_{F}\rangle$
is simply the product of two Clebsch-Gordan coefficients.
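For a concrete check of this weak-field factorization, the two Clebsch-Gordan factors can be evaluated symbolically. The sketch below (ours, not from the paper) uses sympy; its coupling order and phase conventions may differ from the ones implicit above by standard phase factors, and the quantum numbers chosen (a stretched $F=2$, $m_{F}=2$ state of a $J=1/2$, $I=3/2$ ground manifold) are just an example.

```python
# Minimal sketch: C = <L mL S mS|J mJ> <J mJ I mI|F mF> in the weak-field limit.
from sympy import Rational
from sympy.physics.quantum.cg import CG

L, S, I = 0, Rational(1, 2), Rational(3, 2)
J, F = Rational(1, 2), 2
mL, mS, mI, mF = 0, Rational(1, 2), Rational(3, 2), 2
mJ = mL + mS

coeff = CG(L, mL, S, mS, J, mJ).doit() * CG(J, mJ, I, mI, F, mF).doit()
print(coeff)   # -> 1 for this stretched state
```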
In the regime $c_{hf}\ll E_{B}\ll c_{f}$, the electronic angular momentum and
the nuclear spins are decoupled, and $Q$ corresponds to $\\{J,m_{J},m_{I}\\}$,
with
$C^{Q}_{\bar{m}_{L}\bar{m}_{S}\bar{m}_{I}}=\langle\bar{m}_{L}\bar{m}_{S}\bar{m}_{I}|Jm_{J}m_{I}\rangle=\delta_{\bar{m}_{I}m_{I}}\langle\bar{m}_{L}\bar{m}_{S}|Jm_{J}\rangle$.
In this case, the dipole transition element obeys
$D_{q,LQ}^{L^{\prime}Q^{\prime}}\propto\delta_{m_{I}m_{I}^{\prime}}$, and the
states with different nuclear spins are not coupled by the lasers.
In an extremely strong magnetic field such that $E_{B}\gg c_{f}$, the
electronic spins and the electronic orbital angular momentum are decoupled,
and $Q$ corresponds to $\\{m_{L},m_{S},m_{I}\\}$, with
$C^{Q}_{\bar{m}_{L}\bar{m}_{S}\bar{m}_{I}}=\delta_{\bar{m}_{L}m_{L}}\delta_{\bar{m}_{S}m_{S}}\delta_{\bar{m}_{I}m_{I}}$.
In this case, the dipole transition element obeys
$D_{q,LQ}^{L^{\prime}Q^{\prime}}\propto\delta_{m_{S}m_{S}^{\prime}}\delta_{m_{I}m_{I}^{\prime}}$,
and states with disparate nuclear or electronic spin projections are not
coupled by the lasers. In short, the large fields polarize the electronic
spins and nuclear spins, making them robust quantum numbers which cannot be
influenced by optical fields.
### III.4 Numerical results
Here we numerically calculate $D_{q,LQ}^{L^{\prime}Q^{\prime}}$ in the
intermediate regime $E_{B}\gtrsim c_{hf}$. For 6Li, where $c_{hf}\lesssim
E_{B}\lesssim c_{f}$, one needs to diagonalize the full Hamiltonian (4). The
numerics are simpler for other alkali atoms where $c_{hf}\lesssim E_{B}\ll
c_{f}$. In this case $Q$ is decomposed into $J$ and $\tilde{Q}$, with the
latter labeling the eigenstates of the simplified Hamiltonian in Eq. (7). The
coefficient $C_{\bar{m}_{L}\bar{m}_{S}\bar{m}_{I}}^{Q}$ is then reduced as
$C_{\bar{m}_{L}\bar{m}_{S}\bar{m}_{I}}^{Q}=\langle\bar{m}_{L}\bar{m}_{S}|J\bar{m}_{J}\rangle
C^{\tilde{Q}}_{\bar{m}_{J}\bar{m}_{I}}$.
Figure 2: (Color online) Dimensionless electric dipole transition
$\mathcal{D}_{q}^{\tilde{Q}\tilde{Q}^{\prime}}\equiv
D_{q,LJ\tilde{Q}}^{L^{\prime}J^{\prime}\tilde{Q}^{\prime}}/\langle
L||er||L^{\prime}\rangle$ as a function of magnetic field for 23Na, with the
parameters $L=0,J=1/2,L^{\prime}=1,J^{\prime}=1/2$ and $q=-1$. The twelve
different lines correspond to all of the various allowed dipole transitions
for $\sigma^{-}$ light.
As an illustration, in Fig. 2 we plot the dimensionless electric dipole
transition $\mathcal{D}_{q}^{\tilde{Q}\tilde{Q}^{\prime}}\equiv
D_{q,LJ\tilde{Q}}^{L^{\prime}J^{\prime}\tilde{Q}^{\prime}}/\langle
L||er||L^{\prime}\rangle$ as a function of the magnetic field for 23Na, where
we choose the eigenstates of $L=0,J=1/2$ and $L^{\prime}=1,J^{\prime}=1/2$ as
the initial states and the final states (D1 transitions), and use $\sigma^{-}$
polarized light.
The $F=1,2$ and $F^{\prime}=1,2$ manifolds allow twelve $\sigma^{-}$
transitions. At large fields, the absolute values of four of them saturate at
finite values while the rest of them approach zero. This large field result
stems from the decoupling of the electronic and nuclear spins (the
$\tilde{Q}$ eigenstates can be labeled by $m_{J},m_{I}$ in that limit).
Under such circumstances, the allowed $\sigma^{-}$ transitions only occur
for $m_{J}=1/2,m_{J}^{\prime}=-1/2$ and
$m_{I}=m_{I}^{\prime}$ with $m_{I}=-3/2,-1/2,1/2,3/2$.
## IV Magnetic field dependence of Raman coupling
We have illustrated how the electric dipole transition
$\mathcal{D}_{q}^{\tilde{Q}\tilde{Q}^{\prime}}$ depends on the magnetic field.
Here we calculate the ratio $\beta=\Omega_{R}/\Gamma_{\rm ine}$ using Eq. (1)
and Eq. (3) from Sec. II, and the relation
$\Omega_{i\epsilon}\propto\sqrt{I_{i}}\mathcal{D}_{q}^{\tilde{Q}\tilde{Q}^{\prime}}$,
with $I_{i}$ the intensity of each laser.
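Schematically, the curves below are obtained by feeding the field-dependent single-photon Rabi frequencies into Eqs. (1) and (3). The following minimal sketch (ours, with placeholder input arrays rather than the actual matrix elements) shows that combination.

```python
# Minimal sketch: beta = |Omega_R| / Gamma_ine from Eqs. (1) and (3).
# O1_mu, O2_mu (D1 manifold) and O1_nu, O2_nu (D2 manifold) are arrays of
# single-photon Rabi frequencies in consistent angular-frequency units.
import numpy as np

def beta_ratio(O1_mu, O2_mu, O1_nu, O2_nu, Delta, A_f, gamma, hbar=1.0):
    O1_mu, O2_mu = np.asarray(O1_mu), np.asarray(O2_mu)
    O1_nu, O2_nu = np.asarray(O1_nu), np.asarray(O2_nu)
    omega_R = (hbar * np.sum(O1_mu * O2_mu) / (4 * Delta)
               + hbar * np.sum(O1_nu * O2_nu) / (4 * (Delta + A_f)))      # Eq. (1)
    gamma_ine = gamma * (hbar**2 * np.sum(O1_mu**2 + O2_mu**2) / (4 * Delta**2)
                         + hbar**2 * np.sum(O1_nu**2 + O2_nu**2)
                         / (4 * (Delta + A_f)**2))                        # Eq. (3)
    return omega_R, gamma_ine, abs(omega_R) / gamma_ine
```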
Figure 3: (Color online) Ratio $\beta=\Omega_{R}/\Gamma_{\rm ine}$ as a
function of magnetic field $B$ for various alkali atoms at
$\Delta=(2\pi\hbar)\times 100$THz. $\Omega_{R}$ is the Raman Rabi rate, and
$\Gamma_{\rm ine}$ is the inelastic scattering rate. Note the logarithmic
vertical scale.
In Fig. 3, we plot $\beta$ as a function of the magnetic field for various
alkali atoms at $\Delta=(2\pi\hbar)\times 100$THz $\gg A_{f}$, where we assume
$|g_{1}\rangle$ and $|g_{2}\rangle$ are the lowest ground states, and the two
lasers with $\sigma^{+}$ and $\pi$ polarized light have an equal intensity. We
see that although the ratios decrease with the magnetic field, they are still
quite appreciable at $B\sim 200$G$-1200$G, suggesting that the Raman
experiment and strong Feshbach-induced interactions are compatible.
Figure 4: (Color online) (a): Ratio $\beta=\Omega_{R}/\Gamma_{\rm ine}$ as a
function of magnetic field $B$ for various detuning $\Delta$ for 6Li. (b):
Ratio $\beta$ as a function of detuning $\Delta$ for various magnetic field
$B$.
Given that 6Li is in a different regime than the other alkalis, it is
convenient to discuss its properties separately. Analyzing Eqs. (5)-(6),
without making the approximations inherent in Eq. (7), yields the transition
rates in Fig. 4. Looking at the blue (dashed) curve in Fig. 4(a), we see at
the large detuning, the ratio of the Raman Rabi rate to the inelastic
scattering rate for 6Li decreases faster than those for the heavier atoms in
Fig. 3. This rapid fall-off can be attributed to the much weaker coupling
between the electronic and nuclear spins in 6Li. At the magnetic field of
Feshbach resonance $B=834$G Chengchin2010 , $\beta$ is suppressed by a factor
of $7$. At small detuning $\Delta$ the ratio $\beta$ changes non-monotonically
with the magnetic field, as $\Gamma_{\rm{ine}}$ decreases faster than
$\Omega_{R}$. In Fig. 4(b), we see that $\beta$ increases rapidly for small
$\Delta$ and levels out at large $\Delta$. There is an optimal detuning near
$\Delta\approx A_{f}$, where $\beta$ has a maximum. The peak value of $\beta$,
however, is only marginally larger than its large $\Delta$ asymptotic value.
Moreover, the peak is further reduced as $B$ increases.
## V conclusions
In summary, we comprehensively studied the Raman Rabi rates of alkali atoms in
the presence of a magnetic field. While the ratio of the Raman Rabi frequency
to the inelastic scattering rate decreases with the magnetic field, the
suppression is _not_ significant for most alkali atoms at the typical fields
of the Feshbach resonance. Our primary motivation is evaluating the
feasibility of using Raman techniques to generate strongly interacting Fermi
gases with spin-orbit coupling. We conclude that 6Li is not a good candidate,
but 40K is promising.
## VI acknowledgement
We thank Randall Hulet for many enlightening discussions. R. W. is supported
by CSC, CAS, NNSFC (Grant No.11275185), and the National Fundamental Research
Program (Grant No.2011CB921304). This work was supported by the Army Research
Office with funds from the DARPA optical lattice emulator program.
## References
* (1) Y.-J. Lin, R. L. Compton, K. Jiménez-García, J. V. Porto, and I. B. Spielman, Nature (London) 462, 628 (2009).
* (2) M. Aidelsburger, M. Atala, S. Nascimbène, S. Trotzky, Y.-A. Chen, and I. Bloch, Phys. Rev. Lett. 107, 255301 (2011).
* (3) Y.-J. Lin, K. Jiménez-García and I. B. Spielman, Nature (London) 471, 83 (2011).
* (4) J.-Y. Zhang, S.-C. Ji, Z. Chen, L. Zhang, Z.-D. Du, B. Yan, G.-S. Pan, B. Zhao, Y.-J. Deng, H. Zhai, S. Chen, and J.-W. Pan, Phys. Rev. Lett. 109, 115301 (2012).
* (5) P. Wang, Z.-Q. Yu, Z. Fu, J. Miao, L. Huang, S. Chai, H. Zhai, and J. Zhang, Phys. Rev. Lett. 109, 095301 (2012).
* (6) L. W. Cheuk, A. T. Sommer, Z. Hadzibabic, T. Yefsah, W. S. Bakr, and M. W. Zwierlein, Phys. Rev. Lett. 109, 095302 (2012).
* (7) T.-L. Dao, A. Georges, J. Dalibard, C. Salomon, and I. Carusotto, Phys. Rev. Lett. 98, 240402 (2007).
* (8) P. Wang, Z. Fu, L. Huang, and J. Zhang, Phys. Rev. A 85, 053626 (2012).
* (9) Z. Fu, P. Wang, L. Huang, Z. Meng, and J. Zhang, Phys. Rev. A 86, 033607 (2012).
* (10) C. Zhang, S. Tewari, R. M. Lutchyn, and S. DasSarma, Phys. Rev. Lett. 101, 160401 (2008).
* (11) M. Sato, Y. Takahashi, and S. Fujimoto, Phys. Rev. Lett. 103, 020401 (2009).
* (12) S.-L. Zhu, L.-B. Shao, Z. D. Wang, and L.-M. Duan, Phys. Rev. Lett. 106, 100404 (2011).
* (13) L. Jiang, T. Kitagawa, J. Alicea, A. R. Akhmerov, D. Pekker, G. Refael, J. I. Cirac, E. Demler, M. D. Lukin, and P. Zoller, Phys. Rev. Lett. 106, 220402 (2011).
* (14) N. R. Cooper, Phys. Rev. Lett. 106, 175301 (2011).
* (15) M. Gong, G. Chen, S. Jia, and C. Zhang, Phys. Rev. Lett. 109, 105302 (2012).
* (16) X.-J. Liu and H. Hu, Phys. Rev. A 85, 033622 (2012).
* (17) X.-J. Liu, L. Jiang, H. Pu, and H. Hu, Phys. Rev. A 85, 021603(R) (2012).
* (18) X.-J. Liu and P. D. Drummond, Phys. Rev. A 86, 035602 (2012).
* (19) R. Wei and E. J. Mueller, Phys. Rev. A 86, 063604 (2012).
* (20) The subject of how magnetic fields suppress two-photon transitions has a long history, some aspects of which are nicely recounted in Kastler’s Nobel lecture Nobel , where he referred to the suppression of the matrix element as a “generalized Franck-Condon principle”.
* (21) A. Kastler, Nobel lecture 195-196 (1966).
* (22) C. Chin, R. Grimm, P. Julienne, and E. Tiesinga, Rev. Mod. Phys. 82, 1225 (2010).
* (23) I. B. Spielman, Phys. Rev. A 79, 063613 (2009).
* (24) The data for 6Li($2p$) was extracted from M. E. Gehm’s PhD thesis: Preparation of an optically trapped degenerate Fermi gas of 6Li: finding the route to degeneracy (Duke, 2003). The data of 40K was extracted from T. Tiecke’s PhD thesis: Feshbach resonances in ultracold mixtures of the fermionic gases 6Li and 40K (Amsterdam, 2010). The data of 23Na, 85Rb, 87Rb, 133Cs was extracted from D. A. Steck’s online public resources (http://steck.us/alkalidata). The data of 6Li($3p$) was from the private communications with R. G. Hulet (2012).
* (25) P. M. Duarte, R. A. Hart, J. M. Hitchcock, T. A. Corcovilos, T.-L. Yang, A. Reed, and R. G. Hulet, Phys. Rev. A 84, 061406(R) (2011).
* (26) E. Arimondo, M. Inguscio, and P. Violino, Rev. Mod. Phys. 49, 31 (1977).
* (27) D. M. Brink and G. R. Satchler, Angular Momentum (Oxford, 1968).
|
arxiv-papers
| 2013-03-01T00:53:51 |
2024-09-04T02:49:42.237724
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Ran Wei, Erich J. Mueller",
"submitter": "Ran Wei",
"url": "https://arxiv.org/abs/1303.0061"
}
|
1303.0068
|
# Inequalities for the polar derivative of a polynomial
N. A. Rather and Suhail Gulzar Department of Mathematics
University of Kashmir
Srinagar, Hazratbal 190006
India [email protected] [email protected]
###### Abstract.
Let $P(z)$ be a polynomial of degree $n$ and for any real or complex number
$\alpha,$ let $D_{\alpha}P(z)=nP(z)+(\alpha-z)P^{\prime}(z)$ denote the polar
derivative with respect to $\alpha.$ In this paper, we obtain generalizations
of some inequalities for the polar derivative of a polynomial.
###### Key words and phrases:
Polynomials; Inequalities in the complex domain; Polar derivative; Bernstein’s
inequality.
###### 2000 Mathematics Subject Classification:
30A10, 30C10, 30E10.
## 1\. Introduction and statement of results
Let $P(z)$ be a polynomial of degree $n,$ then
(1.1) $\underset{\left|z\right|=1}{Max}\left|P^{\prime}(z)\right|\leq
n\underset{\left|z\right|=1}{Max}\left|P(z)\right|.$
Inequality (1.1) is an immediate consequence of S. Bernstein’s Theorem on the
derivative of a trigonometric polynomial (for reference, see [9, p.531], [10,
p.508] or [11]) and the result is best possible with equality holding for the
polynomial $P(z)=az^{n},$ $a\neq 0.$
If we restrict ourselves to the class of polynomials having no zero in
$|z|<1$, then inequality (1.1) can be replaced by
(1.2)
$\underset{\left|z\right|=1}{Max}\left|P^{\prime}(z)\right|\leq\frac{n}{2}\underset{\left|z\right|=1}{Max}\left|P(z)\right|.$
Inequality (1.2) was conjectured by Erdös and later verified by Lax [6]. The
result is sharp and equality holds for $P(z)=az^{n}+b,$ $|a|=|b|.$
For the class of polynomials having all their zeros in $|z|\leq 1,$ it was proved by
Turán [12] that
(1.3)
$\underset{\left|z\right|=1}{Max}\left|P^{\prime}(z)\right|\geq\frac{n}{2}\underset{\left|z\right|=1}{Max}\left|P(z)\right|.$
The inequality (1.3) is best possible and becomes an equality for the polynomial
$P(z)=(z+1)^{n}.$ As an extension of (1.2) and (1.3), Malik [7] proved that if
$P(z)\neq 0$ in $|z|<k$ where $k\geq 1,$ then
(1.4)
$\underset{\left|z\right|=1}{Max}\left|P^{\prime}(z)\right|\leq\frac{n}{1+k}\underset{\left|z\right|=1}{Max}\left|P(z)\right|,$
whereas if $P(z)$ has all its zeros in $|z|\leq k$ where $k\leq 1,$ then
(1.5)
$\underset{\left|z\right|=1}{Max}\left|P^{\prime}(z)\right|\geq\frac{n}{1+k}\underset{\left|z\right|=1}{Max}\left|P(z)\right|.$
Let $D_{\alpha}P(z)$ denote the polar derivative of the polynomial $P(z)$ of
degree $n$ with respect to the point $\alpha,$ then
$D_{\alpha}P(z)=nP(z)+(\alpha-z)P^{\prime}(z).$
The polynomial $D_{\alpha}P(z)$ is a polynomial of degree at most $n-1$ and it
generalizes the ordinary derivative in the sense that
$\underset{\alpha\rightarrow\infty}{Lim}\left[\dfrac{D_{\alpha}P(z)}{\alpha}\right]=P^{\prime}(z).$
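This limit can be verified directly from the definition: dividing by $\alpha$ gives
$\dfrac{D_{\alpha}P(z)}{\alpha}=\dfrac{n}{\alpha}P(z)+\left(1-\dfrac{z}{\alpha}\right)P^{\prime}(z)\longrightarrow P^{\prime}(z)\quad\text{as}\,\,|\alpha|\rightarrow\infty,$
for each fixed $z.$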
Now corresponding to a given $n^{th}$ degree polynomial $P(z),$ we construct a
sequence of polar derivatives
$D_{\alpha_{1}}P(z)=nP(z)+(\alpha_{1}-z)P^{\prime}(z)=P_{1}(z),$
$D_{\alpha_{s}}D_{\alpha_{s-1}}\cdots D_{\alpha_{2}}D_{\alpha_{1}}P(z)=(n-s+1)\left\\{D_{\alpha_{s-1}}\cdots D_{\alpha_{2}}D_{\alpha_{1}}P(z)\right\\}+(\alpha_{s}-z)\left\\{D_{\alpha_{s-1}}\cdots D_{\alpha_{2}}D_{\alpha_{1}}P(z)\right\\}^{\prime}.$
The points $\alpha_{1},\alpha_{2},\cdots,\alpha_{s},$ $s=1,2,\cdots,n,$ may be
equal or unequal complex numbers. The $s^{th}$ polar derivative
$D_{\alpha_{s}}D_{\alpha_{s-1}}\cdots D_{\alpha_{2}}D_{\alpha_{1}}P(z)$ of
$P(z)$ is a polynomial of degree at most $n-s.$ For
$P_{j}(z)=D_{\alpha_{j}}D_{\alpha_{j-1}}\cdots
D_{\alpha_{2}}D_{\alpha_{1}}P(z),$
we have
$\displaystyle P_{j}(z)$
$\displaystyle=(n-j+1)P_{j-1}(z)+(\alpha_{j}-z)P^{\prime}_{j-1}(z),\,\,\,\,\,\,j=1,2,\cdots,s,$
$\displaystyle P_{0}(z)$ $\displaystyle=P(z).$
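For readers who wish to experiment with these operators, the recursion above is straightforward to implement symbolically. The short sketch below (ours, not from the paper) does so with sympy; the test polynomial and the points $\alpha_{j}$ are arbitrary choices.

```python
# Minimal sketch of the recursion P_j = (n - j + 1) P_{j-1} + (alpha_j - z) P_{j-1}'
import sympy as sp

z = sp.symbols('z')

def polar_derivatives(P, alphas):
    """Return [P_0, P_1, ..., P_s] for the given points alpha_1, ..., alpha_s."""
    P = sp.Poly(P, z)
    n = P.degree()
    out = [P]
    for j, a in enumerate(alphas, start=1):
        prev = out[-1].as_expr()
        out.append(sp.Poly((n - j + 1) * prev + (a - z) * sp.diff(prev, z), z))
    return out

for j, Pj in enumerate(polar_derivatives((z + 1)**4, [2, 3])):
    print(j, sp.expand(Pj.as_expr()))   # e.g. P_1 = 12 (z+1)^3, P_2 = 144 (z+1)^2
```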
A. Aziz [2] extended the inequality (1.2) to the $s^{th}$ polar derivative by
proving that if $P(z)$ is a polynomial of degree $n$ not vanishing in $|z|<1,$ then
for $|z|\geq 1$
(1.6) $|D_{\alpha_{s}}\cdots D_{\alpha_{1}}P(z)|\leq\dfrac{n(n-1)\cdots(n-s+1)}{2}\left\\{|\alpha_{1}\cdots\alpha_{s}z^{n-s}|+1\right\\}\underset{|z|=1}{Max}|P(z)|,$
where $|\alpha_{j}|\geq 1,$ for $j=1,2,\cdots,s.$ The result is best possible
and equality holds for the polynomial $P(z)=(z^{n}+1)/2.$
As a refinement of inequality (1.6), Aziz and Wali Mohammad [5] proved that if
$P(z)$ is a polynomial of degree $n$ not vanishing in $|z|<1,$ then for
$|z|\geq 1$
(1.7) $|D_{\alpha_{s}}\cdots D_{\alpha_{1}}P(z)|\leq\dfrac{n(n-1)\cdots(n-s+1)}{2}\Big{\\{}\left(|\alpha_{1}\cdots\alpha_{s}z^{n-s}|+1\right)\underset{|z|=1}{Max}|P(z)|-\left(|\alpha_{1}\cdots\alpha_{s}z^{n-s}|-1\right)\underset{|z|=1}{Min}|P(z)|\Big{\\}},$
where $|\alpha_{j}|\geq 1,$ for $j=1,2,\cdots,s.$ The result is best possible
and equality holds for the polynomial $P(z)=(z^{n}+1)/2.$
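The following small numerical sketch (ours, not from the paper) spot-checks inequality (1.6) in the simplest case $s=1$, for an arbitrary polynomial with no zeros in $|z|<1$ and an arbitrary $\alpha$ with $|\alpha|\geq 1$; it is only a sanity check, not a proof.

```python
# Minimal sketch: check |D_alpha P(z)| <= (n/2)(|alpha z^{n-1}| + 1) max_{|z|=1}|P|
import numpy as np

rng = np.random.default_rng(0)
n, alpha = 5, 2.0 + 1.0j
P = np.polynomial.Polynomial([1.0, 0, 0, 0, 0, 0.5])   # P(z) = 1 + z^5/2, zeros outside |z|<1
dP = P.deriv()
maxP = max(abs(P(np.exp(1j * t))) for t in np.linspace(0, 2 * np.pi, 2000))

for _ in range(5):
    zz = (1.0 + rng.random()) * np.exp(2j * np.pi * rng.random())   # |z| >= 1
    lhs = abs(n * P(zz) + (alpha - zz) * dP(zz))                    # |D_alpha P(z)|
    rhs = n / 2 * (abs(alpha * zz ** (n - 1)) + 1) * maxP
    print(f"|z| = {abs(zz):.2f}:  {lhs:.3f} <= {rhs:.3f}  ->  {lhs <= rhs}")
```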
In this paper, we shall obtain several inequalities concerning the polar
derivative of a polynomial and thereby obtain compact generalizations of
inequalities (1.6) and (1.7).
We first prove the following result, from which certain interesting results follow
as special cases.
###### Theorem 1.1.
If $F(z)$ is a polynomial of degree $n$ having all its zeros in the disk
$\left|z\right|\leq k$ where $k\leq 1$, and $P(z)$ is a polynomial of degree $n$
such that
$\left|P(z)\right|\leq\left|F(z)\right|\,\,\,for\,\,\,|z|=k,$
then for $\alpha_{j},\beta\in\mathbb{C}$ with $\left|\alpha_{j}\right|\geq
k,\left|\beta\right|\leq 1$, $j=1,2,\cdots,s$ and $|z|\geq 1$,
(1.8)
$\displaystyle\left|z^{s}P_{s}(z)+\beta\dfrac{n_{s}\Lambda_{s}}{(1+k)^{s}}P(z)\right|\leq\left|z^{s}F_{s}(z)+\beta\dfrac{n_{s}\Lambda_{s}}{(1+k)^{s}}F(z)\right|,$
where
(1.9) $\displaystyle
n_{s}=n(n-1)\cdots(n-s+1)\,\,\&\,\,\Lambda_{s}=(|\alpha_{1}|-k)(|\alpha_{2}|-k)\cdots(|\alpha_{s}|-k).$
If we choose $F(z)=z^{n}M/k^{n}$, where $M=Max_{|z|=k}\left|P(z)\right|$ in
Theorem 1.1, we get the following result.
###### Corollary 1.2.
If $P(z)$ is a polynomial of degree $n$, then for
$\alpha_{j},\beta\in\mathbb{C}$ with $\left|\alpha_{j}\right|\geq
k,\left|\beta\right|\leq 1$, $j=1,2,\cdots,s$ and $|z|\geq 1$,
(1.10)
$\displaystyle\left|z^{s}P_{s}(z)+\beta\dfrac{n_{s}\Lambda_{s}}{(1+k)^{s}}P(z)\right|\leq\dfrac{n_{s}|z|^{n}}{k^{n}}\left|\alpha_{1}\alpha_{2}\cdots\alpha_{s}+\beta\dfrac{\Lambda_{s}}{(1+k)^{s}}\right|\underset{\left|z\right|=k}{Max}\left|P(z)\right|,$
where $n_{s}$ and $\Lambda_{s}$ are given by (1.9).
If $\alpha_{1}=\alpha_{2}=\cdots=\alpha_{s}=\alpha,$ then dividing both sides
of (1.10) by $|\alpha|^{s}$ and letting $|\alpha|\rightarrow\infty,$ we obtain
the following result.
###### Corollary 1.3.
If $P(z)$ is a polynomial of degree $n$, then for $\beta\in\mathbb{C}$ with
$\left|\beta\right|\leq 1$, and $|z|\geq 1$,
(1.11)
$\displaystyle\left|z^{s}P^{(s)}(z)+\beta\dfrac{n_{s}}{(1+k)^{s}}P(z)\right|\leq\dfrac{n_{s}|z|^{n}}{k^{n}}\left|1+\dfrac{\beta}{(1+k)^{s}}\right|\underset{\left|z\right|=k}{Max}\left|P(z)\right|,$
where $n_{s}$ is given by (1.9).
Again, if we take $\alpha_{1}=\alpha_{2}=\cdots=\alpha_{s}=\alpha,$ then
dividing both sides of (1.8) by $|\alpha|^{s}$ and letting
$|\alpha|\rightarrow\infty,$ we obtain the following result.
###### Corollary 1.4.
If $F(z)$ is a polynomial of degree $n$ having all its zeros in the disk
$\left|z\right|\leq k$ where $k\leq 1$, and $P(z)$ is a polynomial of degree
$n$ such that
$\left|P(z)\right|\leq\left|F(z)\right|\,\,\,for\,\,\,|z|=k,$
then for $\beta\in\mathbb{C}$ with $\left|\beta\right|\leq 1$, and $|z|\geq
1$,
(1.12)
$\displaystyle\left|z^{s}P^{(s)}(z)+\beta\dfrac{n_{s}}{(1+k)^{s}}P(z)\right|\leq\left|z^{s}F^{(s)}(z)+\beta\dfrac{n_{s}}{(1+k)^{s}}F(z)\right|,$
where $n_{s}$ is given by (1.9).
If we choose $P(z)=mz^{n}/k^{n},$ where $m=Min_{|z|=k}|F(z)|,$ in Theorem 1.1 we
get the following result.
###### Corollary 1.5.
If $F(z)$ is a polynomial of degree $n$ having all its zeros in the disk
$\left|z\right|\leq k$ where $k\leq 1$, then for
$\alpha_{j},\beta\in\mathbb{C}$ with $\left|\alpha_{j}\right|\geq
k,\left|\beta\right|\leq 1$, where $j=1,2,\cdots,s$ and $|z|\geq 1$,
(1.13)
$\displaystyle\left|z^{s}F_{s}(z)+\beta\dfrac{n_{s}\Lambda_{s}}{(1+k)^{s}}F(z)\right|\geq\dfrac{n_{s}|z|^{n}}{k^{n}}\left|\alpha_{1}\alpha_{2}\cdots\alpha_{s}+\dfrac{\beta\Lambda_{s}}{(1+k)^{s}}\right|\underset{|z|=k}{Min}|F(z)|,$
where $n_{s}$ and $\Lambda_{s}$ are given by (1.9).
###### Remark 1.6.
For $\beta=0$ and $k=1,$ we get the result due to Aziz and Wali Mohammad [5,
Theorem 1].
Again, if we take $\alpha_{1}=\alpha_{2}=\cdots=\alpha_{s}=\alpha,$ then
dividing both sides of (1.13) by $|\alpha|^{s}$ and letting
$|\alpha|\rightarrow\infty,$ we obtain the following result.
###### Corollary 1.7.
If $F(z)$ is a polynomial of degree $n$ having all its zeros in the disk
$\left|z\right|\leq k$ where $k\leq 1$, then for $\beta\in\mathbb{C}$ with
$\left|\beta\right|\leq 1$, and $|z|\geq 1$,
(1.14) $\displaystyle\left|z^{s}F^{(s)}(z)+\dfrac{\beta
n_{s}}{(1+k)^{s}}F(z)\right|\geq\dfrac{n_{s}|z|^{n}}{k^{n}}\left|1+\dfrac{\beta}{(1+k)^{s}}\right|\underset{|z|=k}{Min}|F(z)|,$
where $n_{s}$ is given by (1.9).
For $s=1,$ and $\alpha_{1}=\alpha$ in Theorem 1.1 we get the following result:
###### Corollary 1.8.
If $F(z)$ is a polynomial of degree $n$ having all its zeros in the disk
$\left|z\right|\leq k$ where $k\leq 1$, and $P(z)$ is a polynomial of degree $n$ such that
$\left|P(z)\right|\leq\left|F(z)\right|\,\,\,for\,\,\,|z|=k,$
then for $\alpha,\beta\in\mathbb{C}$ with $\left|\alpha\right|\geq
k,\left|\beta\right|\leq 1$, and $|z|\geq 1$,
$\left|zD_{\alpha}P(z)+n\beta\left(\dfrac{|\alpha|-k}{k+1}\right)P(z)\right|\leq\left|zD_{\alpha}F(z)+n\beta\left(\dfrac{|\alpha|-k}{k+1}\right)F(z)\right|.$
###### Theorem 1.9.
If $P(z)$ is a polynomial of degree $n,$ which does not vanish in the disk
$|z|<k$ where $k\leq 1,$ then for
$\alpha_{1},\alpha_{2},\cdots,\alpha_{s},\beta\in\mathbb{C}$ with
$|\alpha_{1}|\geq k,|\alpha_{2}|\geq k,\cdots,|\alpha_{s}|\geq k,|\beta|\leq
1$, and $|z|\geq 1$,
$\displaystyle\Bigg{|}z^{s}P_{s}(z)$
$\displaystyle+\beta\dfrac{n_{s}\Lambda_{s}}{(1+k)^{s}}P(z)\Bigg{|}$ (1.15)
$\displaystyle\leq\dfrac{n_{s}}{2}\left\\{\dfrac{|z^{n}|}{k^{n}}\Bigg{|}\alpha_{1}\alpha_{2}\cdots\alpha_{s}+\dfrac{\beta\Lambda_{s}}{(1+k)^{s}}\Bigg{|}+\Bigg{|}z^{s}+\dfrac{\beta\Lambda_{s}}{(1+k)^{s}}\Bigg{|}\right\\}\underset{|z|=k}{Max}|P(z)|,$
where $n_{s}$ and $\Lambda_{s}$ are given by (1.9).
###### Remark 1.10.
If we take $\beta=0$ and $k=1,$ we get inequality (1.6).
We next prove the following refinement of Theorem 1.9.
###### Theorem 1.11.
If $P(z)$ is a polynomial of degree $n$ and does not vanish in the disk
$|z|<k$ where $k\leq 1,$ then for
$\alpha_{1},\alpha_{2},\cdots,\alpha_{s},\beta\in\mathbb{C}$ with
$|\alpha_{1}|\geq k,|\alpha_{2}|\geq k,\cdots,|\alpha_{s}|\geq k,|\beta|\leq
1$ and $|z|\geq 1,$ we have
$\displaystyle\Bigg{|}z^{s}$ $\displaystyle
P_{s}(z)+\beta\dfrac{n_{s}\Lambda_{s}}{(1+k)^{s}}P(z)\Bigg{|}$
$\displaystyle\leq$
$\displaystyle\dfrac{n_{s}}{2}\Bigg{[}\left\\{\dfrac{|z|^{n}}{k^{n}}\Bigg{|}\alpha_{1}\alpha_{2}\cdots\alpha_{s}+\beta\dfrac{\Lambda_{s}}{(1+k)^{s}}\Bigg{|}+\Bigg{|}z^{s}+\beta\dfrac{\Lambda_{s}}{(1+k)^{s}}\Bigg{|}\right\\}\underset{|z|=k}{Max}|P(z)|$
(1.16)
$\displaystyle-\left\\{\dfrac{|z|^{n}}{k^{n}}\Bigg{|}\alpha_{1}\alpha_{2}\cdots\alpha_{s}+\beta\dfrac{\Lambda_{s}}{(1+k)^{s}}\Bigg{|}-\Bigg{|}z^{s}+\beta\dfrac{\Lambda_{s}}{(1+k)^{s}}\Bigg{|}\right\\}\underset{|z|=k}{Min}|P(z)|\Bigg{]},$
where $n_{s}$ and $\Lambda_{s}$ are given by (1.9).
If we take $\alpha_{1}=\alpha_{2}=\cdots=\alpha_{s}=\alpha,$ then dividing both
sides of (1.16) by $|\alpha|^{s}$ and letting $|\alpha|\rightarrow\infty,$ we
obtain the following result.
###### Corollary 1.12.
If $P(z)$ is a polynomial of degree $n,$ which does not vanish in the disk
$|z|<k$ where $k\leq 1,$ then for $\beta\in\mathbb{C}$ with $|\beta|\leq 1$
and $|z|\geq 1$,
$\displaystyle\Bigg{|}z^{s}P^{(s)}(z)+\beta\dfrac{n_{s}}{(1+k)^{s}}P(z)\Bigg{|}\leq$
$\displaystyle\dfrac{n_{s}}{2}\Bigg{[}\left\\{\dfrac{|z|^{n}}{k^{n}}\Bigg{|}1+\dfrac{\beta}{(1+k)^{s}}\Bigg{|}+\Bigg{|}\dfrac{\beta}{(1+k)^{s}}\Bigg{|}\right\\}\underset{|z|=k}{Max}|P(z)|$
(1.17)
$\displaystyle-\left\\{\dfrac{|z|^{n}}{k^{n}}\Bigg{|}1+\dfrac{\beta}{(1+k)^{s}}\Bigg{|}-\Bigg{|}\dfrac{\beta}{(1+k)^{s}}\Bigg{|}\right\\}\underset{|z|=k}{Min}|P(z)|\Bigg{]},$
where $n_{s}$ is given by (1.9).
For $s=1$ in Corollary 1.12, we get the following result.
###### Corollary 1.13.
If $P(z)$ is a polynomial of degree $n,$ which does not vanish in the disk
$|z|<k$ where $k\leq 1,$ then for $\beta\in\mathbb{C}$ with
$|\beta|\leq 1$ and $|z|\geq 1$,
$\displaystyle\Bigg{|}zP^{\prime}(z)+\dfrac{n\beta}{1+k}P(z)\Bigg{|}\leq$
$\displaystyle\dfrac{n}{2}\Bigg{[}\left\\{\dfrac{|z|^{n}}{k^{n}}\Bigg{|}1+\dfrac{\beta}{1+k}\Bigg{|}+\Bigg{|}\dfrac{\beta}{1+k}\Bigg{|}\right\\}\underset{|z|=k}{Max}|P(z)|$
(1.18)
$\displaystyle-\left\\{\dfrac{|z|^{n}}{k^{n}}\Bigg{|}1+\dfrac{\beta}{1+k}\Bigg{|}-\Bigg{|}\dfrac{\beta}{1+k}\Bigg{|}\right\\}\underset{|z|=k}{Min}|P(z)|\Bigg{]},$
For $\beta=0,$ Theorem 1.11 reduces to the following result.
###### Corollary 1.14.
If $P(z)$ is a polynomial of degree $n,$ which does not vanish in the disk
$|z|<k$ where $k\leq 1,$ then for
$\alpha_{1},\alpha_{2},\cdots,\alpha_{s},\beta\in\mathbb{C}$ with
$|\alpha_{1}|\geq k,|\alpha_{2}|\geq k,\cdots,|\alpha_{s}|\geq k,|\beta|\leq
1$, and $|z|\geq 1$,
$\displaystyle\big{|}P_{s}(z)\big{|}\leq\dfrac{n_{s}}{2}\Bigg{[}$
$\displaystyle\left\\{\dfrac{|z|^{n-s}}{k^{n}}|\alpha_{1}\alpha_{2}\cdots\alpha_{s}|+1\right\\}\underset{|z|=k}{Max}|P(z)|$
(1.19)
$\displaystyle-\left\\{\dfrac{|z|^{n-s}}{k^{n}}|\alpha_{1}\alpha_{2}\cdots\alpha_{s}|-1\right\\}\underset{|z|=k}{Min}|P(z)|\Bigg{]},$
where $n_{s}$ is given by (1.9).
###### Remark 1.15.
For $k=1,$ inequality (1.19) reduces to (1.7).
If we take $\alpha_{1}=\alpha_{2}=\cdots=\alpha_{s}=\alpha,$ then dividing both
sides of (1.19) by $|\alpha|^{s}$ and letting $|\alpha|\rightarrow\infty,$ we
obtain the following result.
###### Corollary 1.16.
If $P(z)$ is a polynomial of degree $n,$ which does not vanish in the disk
$|z|<k$ where $k\leq 1,$ then for
$\alpha_{1},\alpha_{2},\cdots,\alpha_{s},\beta\in\mathbb{C}$ with
$|\alpha_{1}|\geq k,|\alpha_{2}|\geq k,\cdots,|\alpha_{s}|\geq k,|\beta|\leq
1$, and $|z|\geq 1$,
(1.20)
$\displaystyle\big{|}P^{(s)}(z)\big{|}\leq\dfrac{n(n-1)\cdots(n-s+1)|z|^{n-s}}{2k^{n}}\Bigg{[}\underset{|z|=k}{Max}|P(z)|-\underset{|z|=k}{Min}|P(z)|\Bigg{]}.$
If $s=1$ and $\alpha_{1}=\alpha,$ then inequality (1.16) reduces to the
following result.
###### Corollary 1.17.
If $P(z)$ is a polynomial of degree $n,$ which does not vanish in the disk
$|z|<k$ where $k\leq 1,$ then for $\alpha,\beta\in\mathbb{C}$ with
$|\alpha|\geq k,|\beta|\leq 1$ and $|z|\geq 1$ then
$\displaystyle\Bigg{|}z$ $\displaystyle
D_{\alpha}P(z)+n\beta\left(\dfrac{|\alpha|-k}{1+k}\right)P(z)\Bigg{|}$
$\displaystyle\leq\dfrac{n}{2}\Bigg{[}\left\\{\dfrac{|z|^{n}}{k^{n}}\Bigg{|}\alpha+\beta\left(\dfrac{|\alpha|-k}{1+k}\right)\Bigg{|}+\Bigg{|}z+\beta\left(\dfrac{|\alpha|-k}{1+k}\right)\Bigg{|}\right\\}\underset{|z|=k}{Max}|P(z)|$
(1.21)
$\displaystyle-\left\\{\dfrac{|z|^{n}}{k^{n}}\Bigg{|}\alpha+\beta\left(\dfrac{|\alpha|-k}{1+k}\right)\Bigg{|}-\Bigg{|}z+\beta\left(\dfrac{|\alpha|-k}{1+k}\right)\Bigg{|}\right\\}\underset{|z|=k}{Min}|P(z)|\Bigg{]}.$
## 2\. Lemmas
For the proofs of the theorems, we need the following lemmas. The first lemma
follows by repeated application of Laguerre’s theorem [1] or [8, p. 52].
###### Lemma 2.1.
If all the zeros of an $n$th degree polynomial $P(z)$ lie in a circular region $C$ and if
none of the points $\alpha_{1},\alpha_{2},\cdots,\alpha_{s}$ lies in the circular
region $C$, then each of the polar derivatives
(2.1) $D_{\alpha_{s}}D_{\alpha_{s-1}}\cdots
D_{\alpha_{1}}P(z)=P_{s}(z),\,\,\,\,\,\,\,\,\,s=1,2,\cdots,n-1$
has all its zeros in $C.$
The next Lemma is due to Aziz and Rather [3].
###### Lemma 2.2.
If $P(z)$ is a polynomial of degree $n,$ having all zeros in the closed disk
$|z|\leq k,$ $k\leq 1,$ then for every real or complex number $\alpha$ with
$|\alpha|\geq k$ and $|z|=1$, we have
(2.2) $|D_{\alpha}P(z)|\geq n\left(\dfrac{|\alpha|-k}{1+k}\right)|P(z)|.$
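As a quick numerical illustration of Lemma 2.2 (again only a sanity check, not a proof), one can evaluate both sides on $|z|=1$ for a polynomial whose zeros all lie in $|z|\leq k$; the zeros, $k$ and $\alpha$ below are arbitrary test choices.

```python
# Minimal sketch: check |D_alpha P(z)| >= n (|alpha| - k)/(1 + k) |P(z)| on |z| = 1.
import numpy as np

k, alpha = 0.5, 1.5 + 0.5j
zeros = np.array([0.3, -0.2 + 0.4j, 0.1 - 0.45j])      # all moduli <= k
P = np.polynomial.Polynomial.fromroots(zeros)
dP = P.deriv()
n = len(zeros)

for t in np.linspace(0.0, 2 * np.pi, 7):
    zz = np.exp(1j * t)
    lhs = abs(n * P(zz) + (alpha - zz) * dP(zz))
    rhs = n * (abs(alpha) - k) / (1 + k) * abs(P(zz))
    print(f"theta = {t:.2f}:  {lhs:.3f} >= {rhs:.3f}  ->  {lhs >= rhs}")
```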
###### Lemma 2.3.
If $P(z)=\sum_{j=0}^{n}a_{j}z^{j}$ is a polynomial of degree $n$ having all
its zeros in $|z|\leq k,$ $k\leq 1$ then
(2.3) $\dfrac{1}{n}\left|\dfrac{a_{n-1}}{a_{n}}\right|\leq k.$
The above lemma follows by taking $\mu=1$ in the result due to Aziz and Rather
[4].
###### Lemma 2.4.
If $P(z)$ is a polynomial of degree $n$ having all its zeros in the disk $|z|\leq
k$ where $k\leq 1,$ then for
$\alpha_{1},\alpha_{2},\cdots,\alpha_{s}\in\mathbb{C}$ with $|\alpha_{1}|\geq
k,|\alpha_{2}|\geq k,\cdots,|\alpha_{s}|\geq k,$ $(1\leq s<n),$ and $|z|=1$
(2.4) $\displaystyle|P_{s}(z)|\geq\dfrac{n_{s}\Lambda_{s}}{(1+k)^{s}}|P(z)|,$
where $n_{s}$ and $\Lambda_{s}$ are defined in (1.9).
###### Proof.
The result is trivial if $|\alpha_{j}|=k$ for at least one $j$ where
$j=1,2,\cdots,s.$ Therefore, we assume $|\alpha_{j}|>k$ for all
$j=1,2,\cdots,s.$ We shall prove the lemma by the principle of mathematical induction on $s$.
For $s=1$ the result follows from Lemma 2.2.
We assume that the result is true for $s=q,$ which means that for $|z|=1,$ we
have
(2.5) $|P_{q}(z)|\geq\dfrac{n_{q}A_{\alpha_{q}}}{(1+k)^{q}}|P(z)|\,\,\,q\geq 1,$
and we will prove that the lemma is true for $s=q+1$ also.
Since
$D_{\alpha_{1}}P(z)=(na_{n}\alpha_{1}+a_{n-1})z^{n-1}+\cdots+(na_{0}+\alpha_{1}a_{1})$
and $|\alpha_{1}|>k,$ $D_{\alpha_{1}}P(z)$ is a polynomial of degree $n-1.$ If
this is not true, then
$na_{n}\alpha_{1}+a_{n-1}=0,$
which implies
$|\alpha_{1}|=\dfrac{1}{n}\left|\dfrac{a_{n-1}}{a_{n}}\right|.$
By Lemma 2.3, we have
$|\alpha_{1}|=\dfrac{1}{n}\left|\dfrac{a_{n-1}}{a_{n}}\right|\leq k.$
But this contradicts the fact that $|\alpha_{1}|>k.$ Hence,
$D_{\alpha_{1}}P(z)$ is a polynomial of degree $n-1$ and by Lemma 2.1,
$D_{\alpha_{1}}P(z)$ has all its zeros in $|z|\leq k.$ By the same argument
as before, $D_{\alpha_{2}}D_{\alpha_{1}}P(z)$ must be a polynomial of degree
$n-2$ for $|\alpha_{1}|>k,$ $|\alpha_{2}|>k$ and all its zeros in $|z|\leq k.$
Continuing in this way, we conclude $D_{\alpha_{q}}D_{\alpha_{q-1}}\cdots
D_{\alpha_{1}}P(z)=P_{q}(z)$ is a polynomial of degree $n-q$ for all
$|\alpha_{j}|>k,$ $j=1,2,\dots,q$ and has all zeros in $|z|\leq k.$ Applying
Lemma 2.2 to $P_{q}(z),$ we get for $|\alpha_{q+1}|>k,$
(2.6)
$|P_{q+1}(z)|=|D_{\alpha_{q+1}}P_{q}(z)|\geq\dfrac{(n-q)(|\alpha_{q+1}|-k)}{1+k}|P_{q}(z)|\,\,\,\textnormal{for}\,\,\,\,|z|=1.$
Inequality (2.6) in conjunction with (2.5) gives for $|z|=1,$
(2.7) $|P_{q+1}(z)|\geq\dfrac{n_{q+1}A_{\alpha_{q+1}}}{(1+k)^{q+1}}|P(z)|,$
where $n_{q+1}=n(n-1)\cdots(n-q)$ and
$A_{\alpha_{q+1}}=(|\alpha_{1}|-k)(|\alpha_{2}|-k)\cdots(|\alpha_{q+1}|-k).$
This shows that the result is true for $s=q+1$ also. This completes the proof
of Lemma 2.4. ∎
###### Lemma 2.5.
If $P(z)$ is a polynomial of degree $n,$ then for
$\alpha_{1},\alpha_{2},\cdots,\alpha_{s}\in\mathbb{C}$ with $|\alpha_{1}|\geq
k,|\alpha_{2}|\geq k,\cdots,|\alpha_{s}|\geq k,$ $(1\leq s<n),$ $|\beta|\leq
1$ and $|z|\geq 1,$
$\displaystyle\Bigg{|}z^{s}$ $\displaystyle
P_{s}(z)+\beta\dfrac{n_{s}\Lambda_{s}}{(1+k)^{s}}P(z)\Bigg{|}+k^{n}\left|z^{s}Q_{s}(z/k^{2})+\beta\dfrac{n_{s}\Lambda_{s}}{(1+k)^{s}}Q(z/k^{2})\right|$
(2.8) $\displaystyle\leq$ $\displaystyle
n_{s}\left\\{\dfrac{|z^{n}|}{k^{n}}\Bigg{|}\alpha_{1}\alpha_{2}\cdots\alpha_{s}+\dfrac{\beta\Lambda_{s}}{(1+k)^{s}}\Bigg{|}+\Bigg{|}z^{s}+\dfrac{\beta\Lambda_{s}}{(1+k)^{s}}\Bigg{|}\right\\}\underset{|z|=k}{Max}|P(z)|,$
where $n_{s}$ and $\Lambda_{s}$ are defined in (1.9) and
$Q(z)=z^{n}\overline{P(1/\overline{z})}.$
###### Proof.
Let $M=\underset{|z|=k}{Max}|P(z)|.$ Therefore, for every $\lambda$ with
$|\lambda|>1,$ $|P(z)|<|\lambda Mz^{n}/k^{n}|$ on $|z|=k.$ By Rouche’s theorem
it follows that all the zeros of $F(z)=P(z)+\lambda Mz^{n}/k^{n}$ lie in
$|z|<k.$ If $G(z)=z^{n}\overline{F(1/\overline{z})}$ then
$|k^{n}G(z/k^{2})|=|F(z)|$ for $|z|=k$ and hence for any $\delta$ with
$|\delta|>1,$ the polynomial $H(z)=k^{n}G(z/k^{2})+\delta F(z)$ has all its
zeros in $|z|<k.$ By applying Lemma 2.4 to $H(z),$ we have for
$\alpha_{1},\alpha_{2},\cdots,\alpha_{s}\in\mathbb{C}$ with
$|\alpha_{1}|>k,|\alpha_{2}|>k,\cdots,|\alpha_{s}|>k,$ $(1\leq s<n),$
$|z^{s}H_{s}(z)|\geq\dfrac{n_{s}\Lambda_{s}}{(1+k)^{s}}|H(z)|\,\,\,\,\,\textnormal{for}\,\,\,\,|z|=1.$
Therefore, for any $\beta$ with $|\beta|<1$ and $|z|=1,$ we have
$|z^{s}H_{s}(z)|>|\beta|\dfrac{n_{s}\Lambda_{s}}{(1+k)^{s}}|H(z)|.$
Since $|\alpha_{j}|>k,$ $j=1,2,\cdots,s$ by Lemma 2.1 the polynomial
$z^{s}H_{s}(z)$ has all its zeros in $|z|<1$ and by Rouche’s theorem, the
polynomial
$T(z)=z^{s}H_{s}(z)+\beta\dfrac{n_{s}\Lambda_{s}}{(1+k)^{s}}H(z)$
has all its zeros in $|z|<1.$ Replacing $H(z)$ by $k^{n}G(z/k^{2})+\delta
F(z),$ we conclude that the polynomial
$T(z)=k^{n}\left\\{z^{s}G_{s}(z/k^{2})+\dfrac{\beta
n_{s}\Lambda_{s}}{(1+k)^{s}}G(z/k^{2})\right\\}+\delta\left\\{z^{s}F_{s}(z)+\dfrac{\beta
n_{s}\Lambda_{s}}{(1+k)^{s}}F(z)\right\\}$
has all its zeros in $|z|<1.$ This gives, for $|\beta|<1,$ $|\alpha_{j}|\geq k$
where $j=1,2,\cdots,s,$ and $|z|\geq 1,$
(2.9) $k^{n}\left|z^{s}G_{s}(z/k^{2})+\dfrac{\beta
n_{s}\Lambda_{s}}{(1+k)^{s}}G(z/k^{2})\right|\leq\left|z^{s}F_{s}(z)+\dfrac{\beta
n_{s}\Lambda_{s}}{(1+k)^{s}}F(z)\right|$
If inequality (2.9) is not true, then there is a point $z_{0}$ with
$|z_{0}|\geq 1$ such that
$k^{n}\left|z_{0}^{s}G_{s}(z_{0}/k^{2})+\dfrac{\beta
n_{s}\Lambda_{s}}{(1+k)^{s}}G(z_{0}/k^{2})\right|>\left|z_{0}^{s}F_{s}(z_{0})+\dfrac{\beta
n_{s}\Lambda_{s}}{(1+k)^{s}}F(z_{0})\right|$
Since all the zeros of $F(z)$ lie in $|z|<k,$ proceeding similarly as in case
of $H(z),$ it follows that the polynomial $z^{s}F_{s}(z)+\dfrac{\beta
n_{s}\Lambda_{s}}{(1+k)^{s}}F(z)$ has all its zeros in $|z|<1,$ and hence
$z_{0}^{s}F_{s}(z_{0})+\dfrac{\beta n_{s}\Lambda_{s}}{(1+k)^{s}}F(z_{0})\neq
0.$ Now, choosing
$\delta=-\dfrac{k^{n}\left\\{z_{0}^{s}G_{s}(z_{0}/k^{2})+\dfrac{\beta
n_{s}\Lambda_{s}}{(1+k)^{s}}G(z_{0}/k^{2})\right\\}}{z_{0}^{s}F_{s}(z_{0})+\dfrac{\beta
n_{s}\Lambda_{s}}{(1+k)^{s}}F(z_{0})}.$
then $\delta$ is a well-defined real or complex number, with $|\delta|>1$ and
$T(z_{0})=0,$ which contradicts the fact that $T(z)$ has all zeros in $|z|<1.$
Thus, (2.9) holds. Now, replacing $F(z)$ by $P(z)+\lambda Mz^{n}/k^{n}$ and
$G(z)$ by $Q(z)+\overline{\lambda}M/k^{n}$ in (2.9), we have for $|z|\geq 1,$
$\displaystyle k^{n}\Bigg{|}z^{s}$ $\displaystyle Q_{s}(z/k^{2})+\dfrac{\beta
n_{s}\Lambda_{s}}{(1+k)^{s}}Q(z/k^{2})+\dfrac{\overline{\lambda}n_{s}}{k^{n}}\left\\{z^{s}+\dfrac{\beta\Lambda_{s}}{(1+k)^{s}}\right\\}M\Bigg{|}$
(2.10) $\displaystyle\leq\Bigg{|}z^{s}P_{s}(z)+\dfrac{\beta
n_{s}\Lambda_{s}}{(1+k)^{s}}P(z)+\dfrac{\lambda
n_{s}}{k^{n}}\left\\{\alpha_{1}\alpha_{2}\cdots\alpha_{s}+\dfrac{\beta\Lambda_{s}}{(1+k)^{s}}\right\\}Mz^{n}\Bigg{|}.$
Choosing the argument of $\lambda$ in the right hand side of (2.10) such that
$\displaystyle\Bigg{|}z^{s}$ $\displaystyle P_{s}(z)+\dfrac{\beta
n_{s}\Lambda_{s}}{(1+k)^{s}}P(z)+\dfrac{\lambda
n_{s}}{k^{n}}\left\\{\alpha_{1}\alpha_{2}\cdots\alpha_{s}+\dfrac{\beta\Lambda_{s}}{(1+k)^{s}}\right\\}Mz^{n}\Bigg{|}$
$\displaystyle=$ $\displaystyle\Bigg{|}\dfrac{\lambda
n_{s}}{k^{n}}\left\\{\alpha_{1}\alpha_{2}\cdots\alpha_{s}+\dfrac{\beta\Lambda_{s}}{(1+k)^{s}}\right\\}Mz^{n}\Bigg{|}-\Bigg{|}z^{s}P_{s}(z)+\dfrac{\beta
n_{s}\Lambda_{s}}{(1+k)^{s}}P(z)\Bigg{|}.$
which is possible by Corollary 1.2, then for $|z|\geq 1,$ we have
$\displaystyle\Bigg{|}z^{s}$ $\displaystyle P_{s}(z)+\dfrac{\beta
n_{s}\Lambda_{s}}{(1+k)^{s}}P(z)\Bigg{|}+k^{n}\Bigg{|}z^{s}Q_{s}(z/k^{2})+\dfrac{\beta
n_{s}\Lambda_{s}}{(1+k)^{s}}Q(z/k^{2})\Bigg{|}$ (2.11) $\displaystyle\leq$
$\displaystyle|\lambda|n_{s}\Bigg{[}\dfrac{|z^{n}|}{k^{n}}\Bigg{|}\alpha_{1}\alpha_{2}\cdots\alpha_{s}+\dfrac{\beta\Lambda_{s}}{(1+k)^{s}}\Bigg{|}+\Bigg{|}z^{s}+\dfrac{\beta\Lambda_{s}}{(1+k)^{s}}\Bigg{|}\Bigg{]}M$
Letting $|\lambda|\rightarrow 1$ and using continuity for $|\beta|=1$ and
$|\alpha_{j}|=k,$ where $j=1,2,\cdots,s,$ in (2.11), we get inequality (2.8).
∎
## 3\. Proof of Theorems
###### Proof of Theorem 1.1.
By hypothesis $F(z)$ is a polynomial of degree $n$ having all its zeros in the
closed disk $|z|\leq k$ and $P(z)$ is a polynomial of degree $n$ such that
(3.1) $|P(z)|\leq|F(z)|\,\,\,\,\textrm{for}\,\,\,\,|z|=k,$
therefore, if $F(z)$ has a zero of multiplicity $s$ at $z=ke^{i\theta_{0}}$,
then $P(z)$ has a zero of multiplicity at least $s$ at $z=ke^{i\theta_{0}}$.
If $P(z)/F(z)$ is a constant, then inequality (1.8) is obvious. We now assume
that $P(z)/F(z)$ is not a constant, so that by the maximum modulus principle,
it follows that
$|P(z)|<|F(z)|\,\,\,\textrm{for}\,\,|z|>k\,\,.$
Suppose $F(z)$ has $m$ zeros on $|z|=k$ where $0\leq m<n$, so that we can
write
$F(z)=F_{1}(z)F_{2}(z)$
where $F_{1}(z)$ is a polynomial of degree $m$ whose all zeros lie on $|z|=k$
and $F_{2}(z)$ is a polynomial of degree exactly $n-m$ having all its zeros in
$|z|<k$. This implies with the help of inequality (3.1) that
$P(z)=P_{1}(z)F_{1}(z)$
where $P_{1}(z)$ is a polynomial of degree at most $n-m$. Again, from
inequality (3.1), we have
$|P_{1}(z)|\leq|F_{2}(z)|\,\,\,for\,\,|z|=k\,$
where $F_{2}(z)\neq 0\,\,for\,\,|z|=k$. Therefore for every real or complex
number $\lambda$ with $|\lambda|>1$, a direct application of Rouche’s theorem
shows that the zeros of the polynomial $P_{1}(z)-\lambda F_{2}(z)$ of degree
$n-m\geq 1$ lie in $|z|<k$ hence the polynomial
$G(z)=F_{1}(z)\left(P_{1}(z)-\lambda F_{2}(z)\right)=P(z)-\lambda F(z)$
has all its zeros in $|z|\leq k.$ Therefore for $r>1,$ all the zeros of
$G(rz)$ lie in $|z|\leq k/r<k.$ Applying Lemma 2.4 to the polynomial
$G(rz),$ we have for $|z|=1$
$|z^{s}G_{s}(rz)|\geq\dfrac{n_{s}\Lambda_{s}}{(1+k)^{s}}|G(rz)|.$
Equivalently for $|z|=1,$ we have
(3.2) $|z^{s}P_{s}(rz)-\lambda
z^{s}F_{s}(rz)|\geq\dfrac{n_{s}\Lambda_{s}}{(1+k)^{s}}|P(rz)-\lambda F(rz)|.$
Therefore, we have for any $\beta$ with $|\beta|<1$ and $|z|=1,$
(3.3) $|z^{s}P_{s}(rz)-\lambda
z^{s}F_{s}(rz)|>\dfrac{n_{s}\Lambda_{s}}{(1+k)^{s}}|\beta||P(rz)-\lambda
F(rz)|.$
Since $|\alpha_{j}|>k,$ $j=1,2,\cdots,s,$ by Rouche’s theorem, the polynomial
$\displaystyle T(rz)$ $\displaystyle=\left\\{z^{s}P_{s}(rz)-\lambda
z^{s}F_{s}(rz)\right\\}+\dfrac{n_{s}\Lambda_{s}}{(1+k)^{s}}\beta\left\\{P(rz)-\lambda
F(rz)\right\\}$
$\displaystyle=z^{s}P_{s}(rz)+\dfrac{n_{s}\Lambda_{s}\beta}{(1+k)^{s}}P(rz)-\lambda\left\\{z^{s}F_{s}(rz)+\dfrac{n_{s}\Lambda_{s}\beta}{(1+k)^{s}}F(rz)\right\\}$
has all zeros in $|z|<1.$ This implies for $|z|\geq 1$
(3.4)
$|z^{s}P_{s}(rz)+\dfrac{n_{s}\Lambda_{s}\beta}{(1+k)^{s}}P(rz)|\leq|z^{s}F_{s}(rz)+\dfrac{n_{s}\Lambda_{s}\beta}{(1+k)^{s}}F(rz)|$
If inequality (3.4) is not true, then there is a point $z_{0}$ with $|z_{0}|\geq 1$ such that
(3.5)
$|z_{0}^{s}P_{s}(rz_{0})+\dfrac{n_{s}\Lambda_{s}\beta}{(1+k)^{s}}P(rz_{0})|>|z_{0}^{s}F_{s}(rz_{0})+\dfrac{n_{s}\Lambda_{s}\beta}{(1+k)^{s}}F(rz_{0})|.$
Since all the zeros of $F(rz)$ lie in $|z|\leq k/r<k,$ arguing as in the proof of Lemma 2.5 we see that the polynomial $z^{s}F_{s}(rz)+\dfrac{n_{s}\Lambda_{s}\beta}{(1+k)^{s}}F(rz)$ has all its zeros in $|z|<1,$ so that
$z_{0}^{s}F_{s}(rz_{0})+\dfrac{n_{s}\Lambda_{s}\beta}{(1+k)^{s}}F(rz_{0})\neq 0.$ We can therefore choose
$\lambda=\dfrac{z_{0}^{s}P_{s}(rz_{0})+\dfrac{n_{s}\Lambda_{s}\beta}{(1+k)^{s}}P(rz_{0})}{z_{0}^{s}F_{s}(rz_{0})+\dfrac{n_{s}\Lambda_{s}\beta}{(1+k)^{s}}F(rz_{0})}.$
Then $\lambda$ is a well-defined real or complex number with $|\lambda|>1,$ and with
this choice of $\lambda$ we have $T(rz_{0})=0$ with $|z_{0}|\geq 1.$ But this is a
contradiction to the fact that $T(rz)\neq 0$ for $|z|\geq 1.$ Thus (3.4)
holds. Letting $r\rightarrow 1$ in (3.4), we get the desired result. ∎
###### Proof of Theorem 1.9.
Let $Q(z)=z^{n}\overline{P(1/\overline{z})}.$ Since $P(z)$ does not vanish in
the disk $|z|<k,\,\,k\leq 1$, the polynomial $Q(z/k^{2})$ has all its zeros
in $|z|\leq k.$ Applying Theorem 1.1 to $k^{n}Q(z/k^{2})$ and noting that
$|P(z)|=|k^{n}Q(z/k^{2})|$ for $|z|=k,$ we have for all
$\alpha_{j},\beta\in\mathbb{C}$ with $|\alpha_{j}|\geq k,$ $j=1,2,\cdots,s,$
$|\beta|\leq 1,$ and $|z|\geq 1,$
(3.6)
$\displaystyle\left|z^{s}P_{s}(z)+\beta\dfrac{n_{s}\Lambda_{s}}{(1+k)^{s}}P(z)\right|\leq
k^{n}\left|z^{s}Q_{s}(z/k^{2})+\beta\dfrac{n_{s}\Lambda_{s}}{(1+k)^{s}}Q(z/k^{2})\right|.$
Inequality (3.6) in conjunction with Lemma 2.5 gives for all
$\alpha_{j},\beta\in\mathbb{C}$ with $|\alpha_{j}|\geq k,$ $j=1,2,\cdots,s,$
$|\beta|\leq 1,$ and $|z|\geq 1,$
$\displaystyle 2\Bigg{|}$ $\displaystyle
z^{s}P_{s}(z)+\beta\dfrac{n_{s}\Lambda_{s}}{(1+k)^{s}}P(z)\Bigg{|}$
$\displaystyle\leq\left|z^{s}P_{s}(z)+\beta\dfrac{n_{s}\Lambda_{s}}{(1+k)^{s}}P(z)\right|+k^{n}\left|z^{s}Q_{s}(z/k^{2})+\beta\dfrac{n_{s}\Lambda_{s}}{(1+k)^{s}}Q(z/k^{2})\right|$
$\displaystyle\leq
n_{s}\left\\{\dfrac{|z^{n}|}{k^{n}}\Bigg{|}\alpha_{1}\alpha_{2}\cdots\alpha_{s}+\dfrac{\beta\Lambda_{s}}{(1+k)^{s}}\Bigg{|}+\Bigg{|}z^{s}+\dfrac{\beta\Lambda_{s}}{(1+k)^{s}}\Bigg{|}\right\\}\underset{|z|=k}{Max}|P(z)|,$
which is equivalent to (1.15). ∎
###### Proof of Theorem 1.11.
Let $m=Min_{|z|=k}|P(z)|.$ If $P(z)$ has a zero on $|z|=k,$ then $m=0$ and
the result follows from Theorem 1.9. Therefore, we assume that $P(z)$ has all its
zeros in $|z|>k$ where $k\leq 1$ so that $m>0$. Now for every $\lambda$ with
$|\lambda|<1$, it follows by Rouche’s theorem that $h(z)=P(z)-\lambda m$ does
not vanish in $|z|<k$. Let
$g(z)=z^{n}\overline{h(1/\overline{z})}=z^{n}\overline{P(1/\overline{z})}-\overline{\lambda}mz^{n}=Q(z)-\overline{\lambda}mz^{n}$
then, the polynomial $g(z/k^{2})$ has all its zeros in $|z|\leq k.$ As
$|k^{n}g(z/k^{2})|=|h(z)|$ for $|z|=k,$ applying Theorem 1.1 to
$k^{n}g(z/k^{2}),$ we get for
$\alpha_{1},\alpha_{2},\cdots,\alpha_{s},\beta\in\mathbb{C},$ with
$|\alpha_{j}|\geq k,$ $j=1,2,\cdots,s,$ $|\beta|\leq 1,$ and $|z|\geq 1,$
(3.7)
$\displaystyle\left|z^{s}h_{s}(z)+\beta\dfrac{n_{s}\Lambda_{s}}{(1+k)^{s}}h(z)\right|\leq
k^{n}\left|z^{s}g_{s}(z/k^{2})+\beta\dfrac{n_{s}\Lambda_{s}}{(1+k)^{s}}g(z/k^{2})\right|.$
Equivalently for $|z|\geq 1,$ we have
$\displaystyle\Bigg{|}z^{s}$ $\displaystyle P_{s}(z)+\dfrac{\beta
n_{s}\Lambda_{s}}{(1+k)^{s}}P(z)-\lambda
n_{s}\left\\{z^{s}+\dfrac{\beta\Lambda_{s}}{(1+k)^{s}}\right\\}m\Bigg{|}$
(3.8) $\displaystyle\leq k^{n}\Bigg{|}z^{s}Q_{s}(z/k^{2})+\dfrac{\beta
n_{s}\Lambda_{s}}{(1+k)^{s}}Q(z/k^{2})-\dfrac{\overline{\lambda}n_{s}}{k^{2n}}\left\\{\alpha_{1}\alpha_{2}\cdots\alpha_{s}+\dfrac{\beta\Lambda_{s}}{(1+k)^{s}}\right\\}mz^{n}\Bigg{|}.$
Since $Q(z/k^{2})$ has all its zeros in $|z|\leq k$ and
$k^{n}\underset{|z|=k}{Min}|Q(z/k^{2})|=\underset{|z|=k}{Min}|P(z)|,$ by
Corollary 1.5 applied to $Q(z/k^{2}),$ we have for $|z|\geq 1,$
$\displaystyle\Bigg{|}z^{s}Q_{s}(z/k^{2})+\beta\dfrac{n_{s}\Lambda_{s}}{(1+k)^{s}}Q(z/k^{2})\Bigg{|}$
$\displaystyle\geq\dfrac{n_{s}}{k^{n}}\left|\alpha_{1}\alpha_{2}\cdots\alpha_{s}+\dfrac{\beta\Lambda_{s}}{(1+k)^{s}}\right|\underset{|z|=k}{Min}|Q(z)|$
$\displaystyle=\dfrac{n_{s}}{k^{2n}}\left|\alpha_{1}\alpha_{2}\cdots\alpha_{s}+\dfrac{\beta\Lambda_{s}}{(1+k)^{s}}\right|\underset{|z|=k}{Min}|P(z)|$
(3.9)
$\displaystyle=\dfrac{n_{s}}{k^{2n}}\left|\alpha_{1}\alpha_{2}\cdots\alpha_{s}+\dfrac{\beta\Lambda_{s}}{(1+k)^{s}}\right|m.$
Now, choosing the argument of $\lambda$ on the right hand side of inequality
(3.8) such that
$\displaystyle k^{n}$ $\displaystyle\Bigg{|}z^{s}Q_{s}(z/k^{2})+\dfrac{\beta
n_{s}\Lambda_{s}}{(1+k)^{s}}Q(z/k^{2})-\dfrac{\overline{\lambda}n_{s}}{k^{2n}}\left\\{\alpha_{1}\alpha_{2}\cdots\alpha_{s}+\dfrac{\beta\Lambda_{s}}{(1+k)^{s}}\right\\}mz^{n}\Bigg{|}$
$\displaystyle=k^{n}\Bigg{|}z^{s}Q_{s}(z/k^{2})+\dfrac{\beta
n_{s}\Lambda_{s}}{(1+k)^{s}}Q(z/k^{2})\Bigg{|}-\dfrac{|\overline{\lambda}|n_{s}}{k^{n}}\Bigg{|}\left\\{\alpha_{1}\alpha_{2}\cdots\alpha_{s}+\dfrac{\beta\Lambda_{s}}{(1+k)^{s}}\right\\}mz^{n}\Bigg{|},$
which is possible by inequality (3.9), we get for $|z|\geq 1,$
$\displaystyle\Bigg{|}z^{s}$ $\displaystyle P_{s}(z)+\dfrac{\beta
n_{s}\Lambda_{s}}{(1+k)^{s}}P(z)\Bigg{|}-|\lambda|n_{s}\Bigg{|}z^{s}+\dfrac{\beta\Lambda_{s}}{(1+k)^{s}}\Bigg{|}m$
$\displaystyle=k^{n}\Bigg{|}z^{s}Q_{s}(z/k^{2})+\dfrac{\beta
n_{s}\Lambda_{s}}{(1+k)^{s}}Q(z/k^{2})\Bigg{|}-\dfrac{|\lambda||z|^{n}n_{s}}{k^{n}}\Bigg{|}\alpha_{1}\alpha_{2}\cdots\alpha_{s}+\dfrac{\beta\Lambda_{s}}{(1+k)^{s}}\Bigg{|}m,$
Letting $|\lambda|\rightarrow 1,$ we have for $|z|\geq 1,$
$\displaystyle\Bigg{|}z^{s}$ $\displaystyle P_{s}(z)+\dfrac{\beta
n_{s}\Lambda_{s}}{(1+k)^{s}}P(z)\Bigg{|}-k^{n}\Bigg{|}z^{s}Q_{s}(z/k^{2})+\dfrac{\beta
n_{s}\Lambda_{s}}{(1+k)^{s}}Q(z/k^{2})\Bigg{|}$ (3.10) $\displaystyle\leq
n_{s}\left\\{\Bigg{|}z^{s}+\dfrac{\beta\Lambda_{s}}{(1+k)^{s}}\Bigg{|}-\dfrac{|z|^{n}}{k^{n}}\Bigg{|}\alpha_{1}\alpha_{2}\cdots\alpha_{s}+\dfrac{\beta\Lambda_{s}}{(1+k)^{s}}\Bigg{|}\right\\}m$
Adding (2.8) and (3.10), we get for $|z|\geq 1$
$\displaystyle 2\Bigg{|}z^{s}P_{s}(z)+\dfrac{\beta n_{s}\Lambda_{s}}{(1+k)^{s}}P(z)\Bigg{|}\leq n_{s}\Bigg{[}\left\\{\dfrac{|z|^{n}}{k^{n}}\Bigg{|}\alpha_{1}\alpha_{2}\cdots\alpha_{s}+\dfrac{\beta\Lambda_{s}}{(1+k)^{s}}\Bigg{|}+\Bigg{|}z^{s}+\dfrac{\beta\Lambda_{s}}{(1+k)^{s}}\Bigg{|}\right\\}\underset{|z|=k}{Max}|P(z)|+\left\\{\Bigg{|}z^{s}+\dfrac{\beta\Lambda_{s}}{(1+k)^{s}}\Bigg{|}-\dfrac{|z|^{n}}{k^{n}}\Bigg{|}\alpha_{1}\alpha_{2}\cdots\alpha_{s}+\dfrac{\beta\Lambda_{s}}{(1+k)^{s}}\Bigg{|}\right\\}m\Bigg{]},$
which is equivalent to (1.16). This completes the proof of Theorem 1.11. ∎
## References
* [1] A. Aziz, A new proof of Laguerre’s theorem about the zeros of polynomials, Bull. Austral. Math. Soc. 33 (1986), 131-138.
* [2] A. Aziz, Inequalities for the polar derivatives of a polynomial, J. Approx. Theory, 55 (1988) 183-193
* [3] A. Aziz, N. A. Rather, A refinement of a theorem of Paul Turán concerning polynomials. Math Ineq Appl. 1, 231–238 (1998).
* [4] A. Aziz and N. A. Rather, Some Zygmund type $L^{q}$ inequalities for polynomials, J. Math. Anal. Appl. 289 (2004) 14-29.
* [5] A. Aziz and Wali Mohammad Shah, Inequalities for polar derivatives of polynomial, Proc. Indian Acad. Sci. (Math. Sci.), 107, 263-270.
* [6] P. D. Lax, Proof of a conjecture of P.Erdös on the derivative of a polynomial, Bull. Amer. Math. Soc., 50 (1944), 509-513
* [7] M. A. Malik, On the derivative of a polynomial, J. Lond. Math. Soc., Second Series 1 (1969), 57-60.
* [8] M. Marden, Geometry of Polynomials, Math. Surveys, Amer. Math. Soc., Providence, 1966.
* [9] G. V. Milovanovic, D. S. Mitrinovic and Th. M. Rassias, Topics in Polynomials: Extremal Properties, Inequalities, Zeros, World scientific Publishing Co., Singapore, (1994)
* [10] Q. I. Rahman and G. Schmessier, Analytic theory of polynomials, Claredon Press, Oxford, 2002.
* [11] A. C. Schaffer, Inequalities of A. Markoff and S. Bernstein for polynomials and related functions, Bull. Amer. Math. Soc., 47(1941), 565-579.
* [12] P. Turán, U¨ber die Ableitung von Polynomen, Compositio Mathematica 7 (1939), 89-95 (German).
|
arxiv-papers
| 2013-03-01T02:33:25 |
2024-09-04T02:49:42.243704
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "N. A. Rather and Suhail Gulzar",
"submitter": "Suhail Gulzar Mattoo Suhail Gulzar",
"url": "https://arxiv.org/abs/1303.0068"
}
|
1303.0089
|
# Estimating Thematic Similarity of Scholarly Papers
with Their Resistance Distance in an Electric Network Model
Frank Havemann1,∗ Michael Heinz1 Jochen Gläser2 Alexander Struck1
1 Institut für Bibliotheks- und Informationswissenschaft, Humboldt-Universität
zu Berlin, Berlin, Germany
2 Zentrum Technik und Gesellschaft, Technische Universität Berlin, Berlin,
Germany
$\ast$ E-mail: Frank (dot) Havemann (at) ibi.hu-berlin.de
## Abstract
We calculate resistance distances between papers in a nearly bipartite
citation network of 492 papers and the sources cited by them. We validate that
this is a realistic measure of thematic distance if each citation link has an
electric resistance equal to the geometric mean of the number of the paper’s
references and the citation number of the cited source.
## 1 Introduction
It is often useful to be able to determine the thematic similarity of two
scholarly papers which is equivalent to knowing their thematic distance in a
hypothetical space of concepts or in the genealogical tree of knowledge.
Modelling a scholarly paper by a set of terms and cited sources, we face the
problem that we cannot calculate the thematic similarity of two papers that
share neither terms nor cited sources. In bibliometric terms, two such papers
are neither bibliographically nor lexically coupled, but it would not be
adequate to assume that they are totally unrelated, because both types of
lists, that of references and that of terms, are in general incomplete:
* •
papers do not cite all of their intellectual ancestors,
* •
very general descriptors are often not included in term lists.
In this paper we only discuss the case of citation networks of papers.
Augmenting citation data with terms improves similarity estimation but we
leave this opportunity for future work.
One solution of the problem of incomplete reference lists could be to search
for all intellectual ancestors i.e. for indirect citation links between papers
in the past. This would be a tedious task based on incomplete data because not
all references of references are indexed in citation databases. Here we do not
rely on indirect citation links in the past but on indirect connections in
networks of papers and their cited sources in a time slice of one year.
Earlier we tested whether thematic distances between papers could be estimated
by the length of the shortest path between them in a one-year citation network
[Havemann et al. 2007]. Shortest-path length was also used by Botafogo et al.
(1992) and by Egghe and Rousseau (2003a) to measure the compactness of
unweighted networks. Egghe and Rousseau (2003b) generalised it to weighted
graphs and applied it to small paper networks. A drawback of shortest-path
length is its high sensitivity to the existence or nonexistence of single
links, which can act as shortcuts [Mitesser et al. 2008].
Here we propose and test another solution, which takes all or at least the
most important possible paths between two nodes into account: we calculate
resistance distances between nodes [Klein and Randić 1993, Tetali 1991]. To
the best of our knowledge, resistance distance has not yet been used for
estimating the thematic similarity of papers.
We avoid time-consuming exact resistance computation by using a fast
approximate iteration method applied by Wu and Huberman (2004). We also
discuss other iterative approaches to the estimation of node similarity based
on more than one path (see sections 2 and 5).
In section 4 we validate that resistance is a realistic measure of thematic
distance if each citation link has an electric conductance equal to the
inverse geometric mean of the number of the paper’s references and the
citation number of the cited source.
## 2 Method
We use the nearly bipartite citation network of papers and their cited sources
because projecting it on a one-mode graph of bibliographically coupled papers
would reduce the information content of the data. The network is not fully
bipartite because some papers are already cited in the year of their
publication.
In the electric model we assume that each link has a conductance equal to its
weight. We calculate the effective resistance between two nodes as if they
were the poles of an electric power source. Effective resistance has been
proven to be a distance measure fulfilling the triangle inequality
[Klein and Randić 1993, Tetali 1991].
One problem we have to solve before we can calculate distances is the
delineation of the research field. It is not feasible and not necessary that
the electric current between two papers flows through the total citation
network of all papers published in the year considered. Field delineation
should be done by an appropriate method for finding thematically coherent
communities of papers [Fortunato 2010, Havemann et al. 2012b].
A second problem is the weighting of the network. In bibliometrics, the
strength of citation links is often downgraded by dividing it by the number of
references of the citing paper. We use a weighting of each link with the
inverse geometric mean of its two nodes’ degrees [Havemann et al. 2012a]:
$w_{ij}=\frac{A_{ij}}{\sqrt{k_{i}k_{j}}}.$ (1)
Then, for citation links, we take not only the number of references into
account but also the number of citations the cited source receives from the
papers in the network. A citation link from a paper with many references to a
highly cited source is weaker than a link from a paper with a short reference
list to a seldom cited source. (Such a weighting follows the same reasoning
as the TF-IDF scheme in information retrieval.)
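To make the weighting concrete, here is a minimal sketch of equation 1 applied to a small, entirely hypothetical paper–source network; the matrix, names and values are illustrative and are not taken from our data:

```python
import numpy as np

# Hypothetical adjacency matrix of the undirected paper--source graph:
# rows and columns index all nodes (papers and cited sources together),
# A[i, j] = 1 if the two nodes are connected by a citation link.
A = np.array([
    [0, 0, 1, 1, 0],   # paper 1 cites sources 3 and 4
    [0, 0, 1, 1, 1],   # paper 2 cites sources 3, 4 and 5
    [1, 1, 0, 0, 0],
    [1, 1, 0, 0, 0],
    [0, 1, 0, 0, 0],
], dtype=float)

k = A.sum(axis=1)                    # node degrees
# Equation (1): w_ij = A_ij / sqrt(k_i * k_j)
W = A / np.sqrt(np.outer(k, k))
print(np.round(W, 3))
```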
A third problem is that an exact calculation of all resistance distances
between $n$ nodes (e.g. papers and cited sources in a field) requires an
inversion of an $n\times n$-matrix, a task of high complexity. Fortunately, we
are only interested in similarities between papers and not between their many
cited sources. Furthermore, we need only approximations of similarity rather
than exact values. Therefore we can apply a fast approximate iteration method
used by Wu and Huberman (2004) for community finding. We describe its details
in Appendix A.1.
This method is based on the fact that we know the effective resistance between
two pole nodes in a network if we know the currents flowing from one of the
two pole nodes to its neighbouring nodes. We can calculate these currents if
we know the voltages of a pole’s neighbours. From Kirchhoff’s laws we know
that—with the exception of the poles $p$ and $g$—a node’s voltage is the
average of its neighbours’ voltages, more precisely the weighted average with
link conductances as weights.
If we start with all voltages equal to zero (except the positive pole’s
voltage $V_{p}=1$) we obtain the true voltages of all nodes by iteratively
averaging voltages according to the formula $V\leftarrow F(p,g)V$, where $V$
is the voltage vector and $F(p,g)$ is the row normalised weighted adjacency
matrix of the network but with the pole nodes’ row vectors filled with zeros
with the exception of $F_{pp}=1$ (for details cf. Appendix A.1).
There are other iterative approaches to the estimation of node similarity
based on more than one path [Leicht et al. 2006, Jeh and Widom 2002]. Their
convergence can only be assured by introducing a parameter $<1$ for
downgrading the contributions of longer paths. Introducing an auxiliary
parameter should be avoided unless its value can be estimated from theoretical
considerations or from empirical data (cf. section 5).
## 3 Experiment
We experimented with community-finding algorithms on a connected citation
network of 492 information science papers published in 2008
[Havemann et al. 2011]. (Source of the raw data: Web of Science.) In this
sample we have identified three topics by inspection of titles, keywords, and
abstracts [Havemann et al. 2012b]. Therefore, we also use it here to
validate the measure of thematic distance of scholarly papers.
Figure 1: Histogram of the logarithms of resistance distances between all
120,786 pairs of 492 papers. The distribution of $R$ is skewed, but that of
$\log(R)$ is rather symmetric.
The 492 papers cite 13,755 different sources and 21 other papers in the
sample. We analyse the nearly bipartite graph of papers and sources connected
by 17,196 citation links. For the electric model we have to consider the graph
to be undirected. We can drop all the 12,013 sources cited only once because
no current can flow through their citation links. We cannot neglect the 15
papers cited only once. We weight the links according to equation 1 where
$k_{i}$ is the degree of node $i$ after dropping the sources with only one
citation.
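A rough preprocessing sketch along these lines, using a hypothetical edge list rather than the Web of Science data and not the original C++ code, could look as follows:

```python
from collections import Counter

# Hypothetical citation edge list: (citing paper, cited item) pairs.
edges = [("p1", "s1"), ("p1", "s2"), ("p2", "s2"),
         ("p2", "s3"), ("p3", "s2"), ("p3", "p1")]   # p1 is itself cited once

citations = Counter(target for _, target in edges)   # how often each item is cited
papers = {citing for citing, _ in edges}

# Drop sources cited only once, but keep papers even if cited only once.
edges = [(s, t) for s, t in edges if citations[t] > 1 or t in papers]

# Recompute degrees on the reduced graph and weight links according to eq. (1).
degree = Counter()
for s, t in edges:
    degree[s] += 1
    degree[t] += 1
weights = {(s, t): 1.0 / (degree[s] * degree[t]) ** 0.5 for s, t in edges}
print(weights)
```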
The open-source C++-program (written by Andreas Prescher) took about one hour
to calculate the $492\cdot 491/2=120,786$ distances with a maximal error of
0.1 (see Figure 1). If one only needs the distribution of distances, it can be
approximated by calculating the distances of a random sample. Less than one
third of the distances (36,590) are needed to obtain an estimated standard
error of the estimated population mean smaller than 0.01 (see Appendix A.3).
## 4 Validation
Figure 2: Cumulated number of topic papers obtained from ranking all 492
papers according to their normalised median distance to papers of the three
topics. The black lines represent the ideal cases, where all papers of a topic
rank above other papers.
In earlier research we had identified three overlapping topics in our network,
named bibliometrics (224 papers), Hirsch-index (42 papers), and webometrics
(24 papers). We validate the measure of thematic distance by ranking all
papers according to the median distance to papers of a topic and expect the
papers dealing with this topic at top ranks. Because we have not classified
all papers dealing with the topics considered, the ranking with regard to
thematic similarity cannot be perfect.
Another validation issue is that on average resistance distances between high-
degree nodes are smaller than between low-degree nodes because all currents
must flow through the immediate neighbours of the two nodes. The number of
neighbours of a paper is the number of its references. More referenced sources
suggest that the paper deals with more topics—at least in the discussion
section. Thus, it is not an artifact of the measure that papers with many
references have smaller distances to many other papers than papers with just a
few references. In other words, they are often the central nodes in the graph.
Therefore we have to assure that the central nodes do not distort the ranking
of nodes with regard to distances to a topic when we validate the measure. We
correct for centrality by dividing the median distance of a paper to all topic
papers by its median distance to all papers in the sample. The curves in
Figure 2 show for the three predefined topics that indeed the topic papers
have top ranks if we rank according to this ratio of medians.
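The ranking itself is easy to reproduce; the following sketch uses a random stand-in for the distance matrix and an arbitrary topic assignment, so the numbers are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50                                   # hypothetical number of papers
D = rng.uniform(1.0, 5.0, size=(n, n))   # stand-in for resistance distances
D = (D + D.T) / 2.0                      # distances are symmetric
np.fill_diagonal(D, 0.0)
topic = list(range(10))                  # indices of papers assigned to one topic

# Median distance of each paper to the topic papers, divided by its median
# distance to all papers (correction for central, high-degree nodes).
med_topic = np.median(D[:, topic], axis=1)
med_all = np.median(D, axis=1)
score = med_topic / med_all

ranking = np.argsort(score)              # most similar papers come first
print(ranking[:10])
```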
This result is confirmed by a further test. We have used the resistance
distances as an input for hierarchical clustering of papers. Ward clustering
reconstructs the three topics with values of precision and recall similar to
the values we obtained with hierarchical link clustering
[Havemann et al. 2012b].
## 5 Discussion
There is another approach to node similarity which also takes all possible
paths between the nodes into account and also leads to an iterative matrix
multiplication [Leicht et al. 2006, and references therein]. (See Zhou et al.
2009 for a discussion of further measures of node similarity.) It is based on
a self-referential definition of node similarity inspired by self-referential
influence definitions [Pinski and Narin 1976, Brin and Page 1998].
One advantage of the self-referential approach compared to the iterative
resistance calculation is that one needs only one global iteration procedure
to obtain all node similarities in one run. The most severe disadvantage we
see is that the self-referential iteration does not converge unless an
auxiliary multiplicative parameter $<1$ is introduced, which diminishes the
weight given to longer paths in the similarity measure.
Leicht et al. (2006) derive their iteration procedure by relating the number of
observed paths of some length to the (approximated) number of expected paths
between the two nodes. Such a relation to expectation is also necessary for
the resistance approach if differences between distances have to be evaluated.
A simple method is the one we apply for validation of our measure. We relate
the observed to the median values of resistance distances.
If we want to obtain a similarity or distance measure which is comparable
between different networks we have to relate resistance distances between
nodes of a network to distances obtained in a null model of the network. The
null model depends on the hypothesis we want to test with the measure of node
similarity.
Applying their approach to the case of any two nodes $i$ and $j$ with distance
2, Leicht et al. (2006) derive a similarity measure defined as the ratio of
the number of common neighbours to the product $k_{i}k_{j}$ of their degrees
(in contrast to the cosine similarity, where this number is related to the
square root of this product). If we estimate the current between two nodes
which have common neighbours by the total current flowing to the grounded
pole $g$ from its neighbours after one iteration, we get
$w_{pg}+\sum_{i=1}^{n}\frac{w_{gi}w_{pi}}{w_{i}}$
with $w_{i}=\sum w_{il}$ (cf. Appendix A.2). For the network of a volume of
papers and their cited sources there are only a few papers linked by a direct
citation i.e. the first term nearly always vanishes: $w_{pg}=0$. If the
network is unweighted the similarity (measured with inverse distance) of two
papers is then estimated by the sum of the inverse citation numbers $k_{i}$ of
the sources cited by both papers
$\sum_{i=1}^{n}\frac{A_{gi}A_{pi}}{k_{i}},$
a reasonable new absolute measure of bibliographic coupling where highly cited
sources contribute less to the coupling strength than sources cited only by a
few papers. With the weighting defined in equation 1 we obtain another measure
of bibliographic coupling (cf. Appendix A.2):
$\frac{1}{\sqrt{k_{g}k_{p}}}\sum_{i=1}^{n}\frac{A_{gi}A_{pi}}{\sqrt{k_{i}}\sum_{j}A_{ij}/\sqrt{k_{j}}}.$
Its denominator is equal to that of the cosine similarity, and the common
sources in the sum are weighted with the inverse product of the square root of
their citation numbers and the sum over their citing papers weighted with the
inverse square root of their numbers of references.
We do not propose to use this expression as a new similarity measure but argue
that it is a reasonable relative measure of bibliographic coupling which
downgrades the coupling strength of highly cited sources and also downgrades
the contribution to their citation numbers coming from papers citing many
other sources. This confirms the weighting we use here.
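As a concrete illustration, both coupling expressions can be evaluated directly from a paper–source incidence matrix; the toy data and variable names below are ours, not part of the study:

```python
import numpy as np

# Hypothetical incidence matrix: rows are papers, columns are cited sources.
B = np.array([
    [1, 1, 0, 1],
    [1, 0, 1, 1],
    [0, 1, 1, 0],
], dtype=float)

k_paper = B.sum(axis=1)     # numbers of references of the papers
k_source = B.sum(axis=0)    # citation numbers of the sources
g, p = 0, 1                 # the two papers being compared

# Unweighted coupling: sum of inverse citation numbers of the shared sources.
unweighted = np.sum(B[g] * B[p] / k_source)

# Weighted coupling using the link weights of equation (1).
inner = np.array([np.sum(B[:, i] / np.sqrt(k_paper)) for i in range(B.shape[1])])
weighted = np.sum(B[g] * B[p] / (np.sqrt(k_source) * inner)) / np.sqrt(k_paper[g] * k_paper[p])

print(unweighted, weighted)
```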
## 6 Summary
We have validated that resistance distance calculated in a citation graph is a
realistic measure of thematic distance if each citation link has an electric
resistance equal to the geometric mean of the number of the paper’s references
and the citation number of the cited source.
## Acknowledgements
This work is part of a project in which we develop methods for measuring the
diversity of research. The project is funded by the German Ministry for
Education and Research (BMBF). We thank Andreas Prescher for developing the
fast C++-program for the algorithm.
## Appendix A Appendix
### A.1 Resistance Distance
To calculate the total resistance between two nodes we apply the fast
approximate method described by Wu and Huberman (2004).
To obtain the total resistance between any two nodes $p$ and $g$ we set the
voltage $V_{p}$ of the positive pole $p$ to 1 and the voltage $V_{g}$ of the
grounded pole $g$ to zero. Thus we get the total tension $U=V_{p}-V_{g}=1$. If
we know the total current $I$ between the two poles then we obtain the total
resistance with $R=U/I=1/I$.
If we know the voltages $V_{i}$ of the positive pole’s adjacents $i$ we obtain
the total current $I$ by summing the currents
$I_{pi}=U_{pi}/R_{pi}=(V_{p}-V_{i})/R_{pi}=(1-V_{i})/R_{pi}$
for all adjacents $i$.
Conductance $1/R_{ij}$ equals the link’s weight $w_{ij}$. We therefore get for
the total current $I$ between nodes $p$ and $g$
$I=\sum_{i}I_{pi}=\sum_{i}w_{pi}(1-V_{i})=w_{p}-\sum_{i}w_{pi}V_{i}$ (2)
where $w_{p}=\sum_{i}w_{pi}$ is the weight of node $p$. We can also calculate
the total current from the currents flowing into the grounded pole:
$I=\sum_{i}I_{gi}=\sum_{i}w_{gi}V_{i}.$ (3)
Each current $I_{ij}$ through link $(i,j)$ equals the voltage difference
$U_{ij}$ of nodes $i$ and $j$ divided by the link’s resistance $R_{ij}$:
$I_{ij}=U_{ij}/R_{ij}=(V_{i}-V_{j})/R_{ij}.$
From Kirchhoff’s laws we know that the sum of currents flowing out of a node
$i$ (which is not a voltage source) to its adjacents $j$ is zero:
$\sum_{j}I_{ij}=0$, that means
$\sum_{j}(V_{i}-V_{j})/R_{ij}=V_{i}w_{i}-\sum_{j}V_{j}/R_{ij}=0.$
From this we obtain that the voltage of node $i$ is the weighted average of
its adjacents’ voltages:
$V_{i}=\frac{1}{w_{i}}\sum_{j}w_{ij}V_{j}.$ (4)
We obtain all the nodes’ voltages by an iteration. For this, we turn equation
4 into a command
$V_{i}\leftarrow\frac{1}{w_{i}}\sum_{j}w_{ij}V_{j},$
that means, in each iteration step, we get the new voltage of a node by
averaging the old voltages of the node’s adjacents and expect that the
algorithm converges.
If we introduce the weight matrix $W$ with row sums normalised to one by
$W_{ij}=\frac{w_{ij}}{w_{i}},\quad\mbox{with}\ w_{i}=\sum_{j}w_{ij},$
we can write the iteration command as $V\leftarrow WV$. Because the poles’
voltages remain unchanged we use a matrix $F(p,g)$ instead of $W$. $F(p,g)$ is
the row normalised weighted adjacency matrix of the network but with the pole
nodes’ row vectors filled with zeros with the exception of $F_{pp}(p,g)=1$.
We only need the voltages of the positive pole’s adjacents to obtain the
total resistance between nodes $p$ and $g$ as $1/I$ with equation 2. During
the iteration, we estimate these voltages. We consider the series of estimated
voltages and observe that they cannot decrease. This means that the current
$I$ estimated with equation 2 never increases and the total resistance
$R=1/I$ never decreases. From equation 2 we therefore obtain a lower bound of
the true total resistance. Analogously, from equation 3 we get an upper bound.
Both bounds converge. We stop the iteration if the difference between both
bounds becomes smaller than a small positive number $\epsilon$ which acts as a
measure of precision needed for the analysis.
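A compact sketch of this procedure in Python is given below; the original computations used a C++ program by Andreas Prescher, so the variable names and the toy network here are illustrative only:

```python
import numpy as np

def resistance_distance(W, p, g, eps=0.1, max_iter=100000):
    """Approximate effective resistance between pole nodes p and g.

    W is the symmetric weight (conductance) matrix of the network.
    Iterating V <- F(p,g) V averages neighbour voltages; equation (2)
    yields a lower bound and equation (3) an upper bound on R.
    """
    n = W.shape[0]
    w = W.sum(axis=1)                     # node weights w_i
    F = W / w[:, None]                    # row-normalised weight matrix
    F[p, :] = 0.0
    F[p, p] = 1.0                         # keep V_p = 1 fixed
    F[g, :] = 0.0                         # keep V_g = 0 fixed

    V = np.zeros(n)
    V[p] = 1.0
    R_low, R_high = 0.0, np.inf
    for _ in range(max_iter):
        V = F @ V
        I_out = w[p] - W[p] @ V           # eq. (2): overestimates the current
        I_in = W[g] @ V                   # eq. (3): underestimates the current
        if I_in > 0.0:
            R_low, R_high = 1.0 / I_out, 1.0 / I_in
            if R_high - R_low < eps:
                break
    return 0.5 * (R_low + R_high)

# Toy network: a path 0 - 1 - 2 (conductances 1) plus a direct link 0 - 2
# with conductance 0.5; the exact effective resistance between 0 and 2 is 1.
W = np.array([[0.0, 1.0, 0.5],
              [1.0, 0.0, 1.0],
              [0.5, 1.0, 0.0]])
print(resistance_distance(W, 0, 2, eps=1e-6))
```

Keeping both bounds, rather than only the lower one, is what turns the precision parameter $\epsilon$ into a guaranteed error bar on each returned distance.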
### A.2 First Approximation for Poles with Common Neighbours
We start with voltages $V_{i}=0,\forall i\neq p$ and $V_{p}=1$. The first
iteration results in voltages
$V_{i}(1)=\dfrac{1}{w_{i}}\sum_{j}w_{ij}V_{j}(0)=\dfrac{w_{ip}}{w_{i}},\forall
i\neq p,g.$ (5)
The current reaching the grounded pole is then
$I_{g}(1)=\sum_{i}w_{gi}V_{i}(1)=w_{pg}+\sum_{i}\dfrac{w_{gi}w_{ip}}{w_{i}}.$
(6)
If positive pole $p$ and grounded pole $g$ have a graph distance of two hops
then $w_{pg}=0$.
In the unweighted case $w_{ij}=A_{ij}$ and
$I_{g}(1)=\sum_{i}\dfrac{A_{gi}A_{ip}}{k_{i}}.$
If we weight according to equation 1 we obtain
$I_{g}(1)=\sum_{i}\dfrac{A_{gi}A_{ip}}{\sqrt{k_{g}k_{i}}\cdot\frac{1}{\sqrt{k_{i}}}\sum_{j}\frac{A_{ij}}{\sqrt{k_{j}}}\cdot\sqrt{k_{i}k_{p}}}$
(7)
or
$I_{g}(1)=\frac{1}{\sqrt{k_{g}k_{p}}}\sum_{i=1}^{n}\frac{A_{gi}A_{pi}}{\sqrt{k_{i}}\sum_{j}A_{ij}/\sqrt{k_{j}}}.$
### A.3 Distances of a Random Sample
If we do not need distances between all $|P|$ papers but only the form of
their distribution we can avoid to calculate all $|P|(|P|-1)/2$ distances. In
this case, we order all paper pairs randomly. Then the first $n$ distances are
a random sample from all distances. The standard error $S_{R}$ of the average
resistance $R$ is then given by the square root of
$S_{R}^{2}=\frac{N-n}{(N-1)(n-1)n}\left[\sum_{i=1}^{n}R_{i}^{2}-\frac{1}{n}\left(\sum_{i=1}^{n}R_{i}\right)^{2}\right].$
We stop calculating resistance distances if the standard error $S_{R}$ is
smaller than $\epsilon/10$ for the last ten random samples. We can choose a
relatively large $\epsilon$ for the precision of each single resistance
because the average remains precise even if the averaged values are rounded.
Both sums in the
formula can be updated easily by adding the new terms to the last values of
the sums.
The formula for $S_{R}^{2}$ can be derived from
$S_{R}^{2}=\frac{N-n}{(N-1)n}S^{2},$ (8)
where the variance of distances $S^{2}$ can be estimated by the variance of
the sample $s^{2}$ according to
$S^{2}=\frac{n}{n-1}s^{2}.$ (9)
We have
$s^{2}=\frac{1}{n}\sum_{i=1}^{n}(R_{i}-R)^{2}=\frac{1}{n}\left[\sum_{i=1}^{n}R_{i}^{2}-\frac{1}{n}\left(\sum_{i=1}^{n}R_{i}\right)^{2}\right],$
leading to the formula for $S_{R}^{2}$.
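In code, both sums can be updated incrementally while the randomly ordered pairs are processed; the sketch below applies the stopping rule to precomputed toy values, whereas in the actual procedure each distance would itself first be computed to precision $\epsilon$:

```python
import math
import random

def sample_until_precise(distances, eps=0.1):
    """Estimate the population mean of a finite set of distances.

    Draws values in random order and stops once the standard error S_R of
    the running mean (with finite-population correction) stays below
    eps/10 for ten consecutive samples.
    """
    N = len(distances)
    order = list(range(N))
    random.shuffle(order)

    sum_r, sum_r2, good = 0.0, 0.0, 0
    for n, idx in enumerate(order, start=1):
        r = distances[idx]
        sum_r += r
        sum_r2 += r * r
        if n < 2:
            continue
        S2_R = (N - n) / ((N - 1) * (n - 1) * n) * (sum_r2 - sum_r ** 2 / n)
        good = good + 1 if math.sqrt(max(S2_R, 0.0)) < eps / 10 else 0
        if good >= 10:
            break
    return sum_r / n, n

# Toy population standing in for the 120,786 resistance distances.
population = [random.lognormvariate(1.0, 0.5) for _ in range(5000)]
print(sample_until_precise(population, eps=0.5))
```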
## References
* [Botafogo et al. 1992] Botafogo, R. A., E. Rivlin, and B. Shneiderman (1992). Structural analysis of hypertexts: identifying hierarchies and useful metrics. ACM Transactions on Information Systems (TOIS) 10(2), 142–180.
* [Brin and Page 1998] Brin, S. and L. Page (1998). The anatomy of a large-scale hypertextual Web search engine. Computer Networks and ISDN Systems 30(1–7), 107–117.
* [Egghe and Rousseau 2003a] Egghe, L. and R. Rousseau (2003a). BRS-compactness in networks: Theoretical considerations related to cohesion in citation graphs, collaboration networks and the internet. Mathematical and Computer Modelling 37(7-8), 879–899.
* [Egghe and Rousseau 2003b] Egghe, L. and R. Rousseau (2003b, February). A measure for the cohesion of weighted networks. Journal of the American Society for Information Science and Technology 54(3), 193–202.
* [Fortunato 2010] Fortunato, S. (2010). Community detection in graphs. Physics Reports 486(3-5), 75–174.
* [Havemann et al. 2012a] Havemann, F., J. Gläser, M. Heinz, and A. Struck (2012a, June). Evaluating overlapping communities with the conductance of their boundary nodes. arXiv:1206.3992.
* [Havemann et al. 2012b] Havemann, F., J. Gläser, M. Heinz, and A. Struck (2012b, March). Identifying overlapping and hierarchical thematic structures in networks of scholarly papers: A comparison of three approaches. PLoS ONE 7(3), e33255.
* [Havemann et al. 2007] Havemann, F., M. Heinz, M. Schmidt, and J. Gläser (2007). Measuring Diversity of Research in Bibliographic-Coupling Networks. In D. Torres-Salinas and H. F. Moed (Eds.), Proceedings of ISSI 2007, Volume 2, Madrid, pp. 860–861. Poster abstract.
* [Havemann et al. 2011] Havemann, F., M. Heinz, A. Struck, and J. Gläser (2011). Identification of Overlapping Communities and their Hierarchy by Locally Calculating Community-Changing Resolution Levels. Journal of Statistical Mechanics: Theory and Experiment 2011, P01023. doi: 10.1088/1742-5468/2011/01/P01023, Arxiv preprint arXiv:1008.1004.
* [Jeh and Widom 2002] Jeh, G. and J. Widom (2002). SimRank: a measure of structural-context similarity. In Proceedings of the eighth ACM SIGKDD international conference on Knowledge discovery and data mining, KDD ’02, New York, NY, USA, pp. 538–543. ACM.
* [Klein and Randić 1993] Klein, D. J. and M. Randić (1993, December). Resistance distance. Journal of Mathematical Chemistry 12(1), 81–95.
* [Leicht et al. 2006] Leicht, E. A., P. Holme, and M. E. J. Newman (2006, February). Vertex similarity in networks. Physical Review E 73(2), 026120.
* [Mitesser et al. 2008] Mitesser, O., M. Heinz, F. Havemann, and J. Gläser (2008). Measuring diversity of research by extracting latent themes from bipartite networks of papers and references. In H. Kretschmer and F. Havemann (Eds.), Proceedings of WIS 2008: Fourth International Conference on Webometrics, Informetrics and Scientometrics & Ninth COLLNET Meeting, Berlin. Humboldt-Universität zu Berlin: Gesellschaft für Wissenschaftsforschung.
* [Pinski and Narin 1976] Pinski, G. and F. Narin (1976). Citation influence for journal aggregates of scientific publications: theory, with application to literature of physics. Information Processing & Management 12, 297–312.
* [Tetali 1991] Tetali, P. (1991, January). Random walks and the effective resistance of networks. Journal of Theoretical Probability 4(1), 101–109.
* [Wu and Huberman 2004] Wu, F. and B. A. Huberman (2004, March). Finding communities in linear time: a physics approach. The European Physical Journal B – Condensed Matter 38(2), 331–338.
* [Zhou et al. 2009] Zhou, T., L. Lü, and Y.-C. Zhang (2009, October). Predicting missing links via local information. The European Physical Journal B 71(4), 623–630.
|
arxiv-papers
| 2013-03-01T05:49:32 |
2024-09-04T02:49:42.256090
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Frank Havemann and Michael Heinz and Jochen Gl\\\"aser and Alexander\n Struck",
"submitter": "Frank Havemann",
"url": "https://arxiv.org/abs/1303.0089"
}
|
1303.0126
|
# Linear PDEs and eigenvalue problems corresponding to ergodic stochastic
optimization problems on compact manifolds
Joris Bierkens
Radboud University, Faculty of Science, P.O. Box 9010, 6500 GL Nijmegen, The Netherlands
[email protected]
Vladimir Y. Chernyak
Wayne State University, Department of Chemistry, Detroit (MI), USA
[email protected]
Michael Chertkov
Los Alamos National Laboratory, Center for Nonlinear Studies, Los Alamos (NM), USA
[email protected]
Hilbert J. Kappen
Department of Science, Radboud University, The Netherlands
[email protected]
###### Abstract.
We consider long term average or ‘ergodic’ optimal control problems with a
special structure: Control is exerted in all directions and the control costs
are proportional to the square of the norm of the control field with respect
to the metric induced by the noise. The long term stochastic dynamics on the
manifold will be completely characterized by the long term density $\rho$ and
the long term current density $J$. As such, control problems may be
reformulated as variational problems over $\rho$ and $J$. We discuss several
optimization problems: the problem in which both $\rho$ and $J$ are varied
freely, the problem in which $\rho$ is fixed and the one in which $J$ is
fixed. These problems lead to different kinds of operator problems: linear
PDEs in the first two cases and a nonlinear PDE in the latter case. These
results are obtained through a variational principle using infinite
dimensional Lagrange multipliers. In the case where the initial dynamics are
reversible we obtain the result that the optimally controlled diffusion is
also symmetrizable. The particular case of constraining the optimally
controlled process to be reversible leads to a linear eigenvalue problem for
the square root of the density.
###### Key words and phrases:
Stochastic optimal control, ergodic theory, calculus of variations, differential geometry, flux, current, gauge invariance
###### 1991 Mathematics Subject Classification:
Primary 49K20; Secondary 93E20, 58A25
The research at Radboud University, The Netherlands, leading to these results
has received funding from the European Community’s Seventh Framework Programme
(FP7/2007-2013) under grant agreement no. 231495.
The research at Wayne State University (MI), USA, has received support from
the NSF under grant agreement no. CHE-1111350.
The work at LANL was carried out under the auspices of the National Nuclear
Security Administration of the U.S. Department of Energy at Los Alamos
National Laboratory under Contract No. DE-AC52-06NA25396.
## 1\. Introduction
In this paper we discuss stochastic, long term average optimal, or ‘ergodic’
control problems on compact orientable manifolds. The theory of ergodic
control in continuous spaces has been developed relatively recently;
see works by Borkar and Ghosh (e.g. [5]) and the recent monograph [1]. To our
knowledge no literature is available about this topic in the setting of
compact manifolds.
We concentrate on a special case of the control problem, in which control is
exerted in all directions and where the control costs are proportional to the
square of the norm of the control field with respect to the metric induced by
the noise, as discussed in Section 2. As such, our emphasis does not lie on
the solution of applied control problems. This setting may however prove
relevant for obtaining results in large deviations theory; see e.g. [11],
where the connection is made between control problems and large deviations
theory. The ‘squared control cost’ is further motivated by recent results we
obtained on stochastic optimal control for finite time horizon problems, with
relative entropy determining control cost [4]. This particular setting
typically leads to linearized systems [12]. In the ergodic setting it leads
typically to operator eigenvalue problems, see e.g. [18] for the diffusion
case and [20] for the Markov chain setting.
On a compact manifold, a few phenomena play a special role. The main advantage
of this setting is that transient behaviour cannot occur. Therefore, an
invariant measure is necessarily unique and ergodicity follows immediately.
The long term stochastic dynamics on the manifold will be completely
characterized by the long term density $\rho$ and the long term current
density $J$ (see Section 3). As such, control problems may be reformulated as
variational problems over $\rho$ and $J$. In the optimization problem, the
density $\rho$ is paired with the scalar cost or potential function $V$, and
the current density $J$ is paired with a _vector potential_ or _gauge field_
$A$ to obtain the cost function. In Section 4 we discuss how to understand the
notion of flux as a particular example of choosing a gauge field $A$.
We then discuss several optimization problems: the problem in which both
$\rho$ and $J$ are varied freely (Section 5), the problem in which $\rho$ is
fixed (Section 6) and the one in which $J$ is fixed (Section 7). These
problems lead to different kinds of operator problems: linear PDEs in the
first two cases and a nonlinear PDE in the latter case. These results are
obtained through a variational principle using infinite dimensional
Lagrange multipliers. This analysis is performed rigorously in Section 5.
In the case where the initial dynamics are reversible, or in other words, in
case the diffusion is symmetrizable, we obtain the result that the optimally
controlled diffusion is also symmetrizable (Section 5.4). The particular case
of insisting $J=0$ coincides with demanding reversible dynamics of the
optimally controlled process. Interestingly, this optimization problem leads
to a linear eigenvalue problem for the square root of the density process,
just as we see in quantum mechanics (but note that our setting is entirely
classical). We conclude this paper with a short discussion (Section 8).
This paper is written for a mathematical audience. The reader interested in
the statistical physics interpretation of this material is referred to our
related publication [6].
## 2\. Problem setting
We will phrase our setting in terms of diffusion processes on manifolds, in
the language of [14, Chapter V]. By smooth we always mean infinitely often
differentiable. $M$ will always denote a smooth compact orientable
$m$-dimensional manifold. Let $C^{\infty}(M)$ denote the space of smooth
functions from $M$ into $\mathbb{R}$, let $\mathfrak{X}(M)$ denote the space
of smooth vectorfields on $M$, and let $\Lambda^{p}(M)$ denote the space of
smooth differential forms of order $p$ on $M$, for $p=0,1,\ldots,m$.
Let $(\Omega,\mathcal{F},(\mathcal{F}_{t}),\mathbb{P})$ denote a filtered
probability space on which is defined a $d$-dimensional standard Brownian
motion. Consider a stochastic process $X$ defined on $M$ by the SDE, given in
local coordinates by
(1) $dX^{i}_{t}=g^{ij}(X_{t})f_{j}(t,X_{t})\
dt+\sigma^{i}_{\alpha}(X_{t})\circ dB^{\alpha}_{t},$
where, for $\alpha=1,\ldots,d$, $\sigma_{\alpha}\in\mathfrak{X}(M)$,
$g^{ij}:=\sum_{\alpha}\sigma^{i}_{\alpha}\sigma^{j}_{\alpha}$, for
$i,j=1,\ldots,m$, is a symmetric positive semidefinite bilinear tensorfield on
$M$, $f$ is a differential 1-form on $M$, denoting force. For any initial
condition $x_{0}\in M$, let $X^{x_{0}}$ denote the unique solution to (1) The
notation $\circ dB^{\alpha}$ indicates that we take Stratonovich integrals
with respect to the Brownian motion. One can think of $f$ as a force field,
resulting from a potential, some external influence, or a combination of both.
We will always assume the following hypothesis.
###### Hypothesis 2.1 (Ellipticity).
$g$ is positive definite on $M$.
Under this assumption, $g$ defines a Riemannian metric on $M$ and we will use
this as the metric of choice without further notice. This Riemannian metric
induces a local inner product $\langle\cdot,\cdot\rangle$ and corresponding
norm $||\cdot||$ on tensors of arbitrary covariant and contravariant orders.
The SDE (1) is referred to as the _uncontrolled dynamics_. These dynamics may
be altered by exerting a ‘force’ or ‘control’ $u$ in the following way,
(2) $dX^{i}_{t}=g^{ij}(X(t))\left[f_{j}(X_{t})+u_{j}(X_{t})\right]\
dt+\sum_{\alpha=1}^{d}\sigma^{i}_{\alpha}(X_{t})\circ dB^{\alpha}_{t},$
where $u\in\Lambda^{1}(M)$. For any $x_{0}\in M$ and $u\in\Lambda^{1}(M)$, the
unique solution to (2) will be denoted by $X^{x_{0},u}$. The SDE (2) is
referred to as the _controlled dynamics_.
Consider the random functional $\mathcal{C}:\Omega\times
M\times\Lambda^{1}(M)\rightarrow\mathbb{R}$ denoting pathwise long term
average cost,
(3)
$\mathcal{C}(\omega,x_{0},u):=\limsup_{T\rightarrow\infty}\frac{1}{T}\left[\int_{0}^{T}V(X_{s}^{x_{0},u})+\frac{1}{2\lambda}||u(X_{s}^{x_{0},u})||^{2}\
ds+\int_{0}^{T}A(X_{s}^{x_{0},u})\circ dX_{s}^{x_{0},u}\right],$
where $\lambda>0$, $V\in C^{\infty}(M)$ is a _potential_ or _state dependent
cost function_ , $||u(\cdot)||^{2}$ represents the _(instantaneous) control
cost_ corresponding to a control field $u\in\Lambda^{1}(M)$, and
$A\in\Lambda^{1}(M)$. The final term in (3) may represent a _flux_ , as
explained in Section 4. The differential form $A$ is often called a _gauge
field_ in physics.
###### Remark 2.2.
The ‘$\limsup$’ in (3) is used to avoid any discussion at this point about the
existence of the limit. Instead of the _pathwise formulation_ in (3) we could
alternatively consider the weaker _average formulation_ , in which case the
cost function would be the long term average of the expectation value
$\mathbb{E}^{x_{0},u}$ of the integrand in (3). We will see in Section 3 that
the limit of (3) exists (and not just the ‘$\limsup$’). Furthermore this limit
will turn out to be equal to a deterministic quantity, so that the pathwise
formulation and the average formulation may be considered equivalent.
We will consider the following problem, along with some variations which we
discuss in Sections 6 and 7.
###### Problem 2.3.
For every $x_{0}\in M$, find a differential 1-form $\hat{u}\in\Lambda^{1}(M)$
such that
$\mathcal{C}(x_{0},\hat{u})=\inf_{u\in\Lambda^{1}(M)}\mathcal{C}(x_{0},u),\quad\mbox{almost
surely.}$
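Although the paper treats general compact manifolds, the cost (3) is easy to probe numerically in the simplest case $M=S^{1}$ with $\sigma=\partial_{\theta}$, so that $g\equiv 1$. The sketch below uses illustrative choices of $f$, $V$, $A$, $u$ and $\lambda$ (none of them taken from the text) and evaluates the Stratonovich integral with the midpoint rule:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative data on S^1 (all functions 2*pi-periodic); g = 1, sigma = d/dtheta.
f = lambda x: np.sin(x)            # force 1-form component
u = lambda x: 0.3 * np.cos(x)      # a candidate control
V = lambda x: 1.0 - np.cos(x)      # state dependent cost
A = lambda x: 1.0 / (2 * np.pi)    # constant gauge field component
lam = 1.0                          # lambda in the control cost

T, dt = 200.0, 1e-3
n = int(T / dt)
x = 0.0                            # work with the lift of X to the real line
running, stratonovich = 0.0, 0.0

for _ in range(n):
    dB = np.sqrt(dt) * rng.standard_normal()
    x_new = x + (f(x) + u(x)) * dt + dB
    running += (V(x) + u(x) ** 2 / (2 * lam)) * dt
    stratonovich += A(0.5 * (x + x_new)) * (x_new - x)   # midpoint rule
    x = x_new

print("ergodic cost estimate:", (running + stratonovich) / T)
```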
## 3\. Ergodic reformulation of the optimization problem
In this section we will derive two equivalent formulations of Problem 2.3.
These reformulations, Problem 3.8 and Problem 3.13 below, are better suited to
the analysis in the remaining sections. Also some notation will be established
that will be used throughout this paper.
Let $\Omega^{X}=C([0,\infty);M)$ denote the space of sample paths of solutions
to (2). We equip $\Omega^{X}$ with the $\sigma$-algebra $\mathcal{F}^{X}$ and
filtration $(\mathcal{F}_{t}^{X})_{t\geq 0}$ generated by the cylinder sets of
$X$. Furthermore let probability measures $\mathbb{P}^{x_{0},u}$ on
$\Omega^{X}$ be defined as the law of $X^{x_{0},u}$, for all $x_{0}\in M$ and
$u\in\Lambda^{1}(M)$. Note that for all $u\in\Lambda^{1}(M)$ the collection of
probability measures $\mathbb{P}^{\cdot,u}$ defines a Markov process on
$\Omega^{X}$.
For the moment let $u\in\Lambda^{1}(M)$ be fixed. It will be convenient to use
the shorthand notation
$b_{u}^{i}(x)=g^{ij}(x)[f_{j}(x)+u_{j}(x)].$
Recall that associated with the vectorfields $\sigma_{\alpha}$,
$\alpha=1,\ldots,d$, there exist first order differential operators also
denoted by $\sigma_{\alpha}{}:C^{\infty}(M)\rightarrow C^{\infty}(M)$ defined
by
$\sigma_{\alpha}{}f(x)=\sigma^{i}_{\alpha}(x)\partial_{i}f(x),\quad x\in M,$
for $f\in C^{\infty}(M)$, where $\partial_{i}=\frac{\partial}{\partial x^{i}}$
denotes partial differentation with respect to $x^{i}$. Similarly $b_{u}$
defines a first order differential operator also denoted by $b_{u}$. By [14,
Theorem V.1.2], the Markov generator corresponding to (2) is given by
$L_{u}\phi(x)=\mbox{$\frac{1}{2}$}\sum_{\alpha=1}^{d}\sigma_{\alpha}{}\sigma_{\alpha}{}\phi(x)+b_{u}\phi(x).$
On $\Lambda^{p}(M)$ an inner product is defined by
$\langle\alpha,\beta\rangle_{\Lambda^{p}(M)}=\int_{M}\langle\alpha,\beta\rangle\
dx$, where $dx$ denotes the volume form corresponding to $g$. The inner
product $\langle\cdot,\cdot\rangle_{\Lambda^{0}(M)}$ is also denoted by
$\langle\cdot,\cdot\rangle_{L^{2}(M)}$. Let $L^{2}(M,g)=L^{2}(M)$ denote the
usual Hilbert space obtained by completing $C^{\infty}(M)$ with respect to the
$L^{2}(M)$-inner product.
###### Lemma 3.1.
$L_{u}$ may be written as
$L_{u}\Phi=\mbox{$\frac{1}{2}$}\Delta\Phi+\langle
u+\widetilde{f},d\Phi\rangle,\quad\Phi\in C^{2}(M),$
where $\Delta$ is the Laplace-Beltrami operator,
$\widetilde{f}_{i}:=f_{i}+\mbox{$\frac{1}{2}$}g_{ik}\left(\nabla_{\sigma_{\alpha}}\sigma_{\alpha}\right)^{k}$,
and $\nabla$ is the covariant derivative, corresponding to the Levi-Civita
connection of the Riemannian metric $g$. The adjoint of $L_{u}$ with respect
to the $L^{2}(M)$ inner product is given by
$L_{u}^{*}\Psi=\mbox{$\frac{1}{2}$}\Delta\Psi+\delta\left(\Psi(\widetilde{f}+u)\right),\quad\Psi\in
C^{2}(M).$
where $\delta:\Lambda^{p}(M)\rightarrow\Lambda^{p-1}(M)$ is the $L^{2}(M,g)$
adjoint of the exterior derivative operator $d$, i.e.
$\int_{M}\langle d\alpha,\beta\rangle\ dx=\int_{M}\alpha\delta\beta\
dx,\quad\alpha\in C^{\infty}(M),\beta\in\Lambda^{1}(M),$
where $dx$ denotes the measure induced by the volume form on $M$.
###### Proof.
The Laplace-Beltrami operator may be expressed as (see [14, p. 285, eqn.
(4.32)])
(4)
$\Delta\phi=g^{ij}\partial_{i}\partial_{j}\phi-g^{ij}\Gamma^{k}_{ij}\partial_{k}\phi,$
where $\Gamma^{k}_{ij}$ denote the Christoffel symbols corresponding to the
Levi-Civita connection. Using this expression, we compute
$\displaystyle\sum_{\alpha}\sigma_{\alpha}\sigma_{\alpha}\phi$
$\displaystyle=\sum_{\alpha}\sigma_{\alpha}^{i}\partial_{i}\left(\sigma_{\alpha}^{j}\partial_{j}\phi\right)=\sum_{\alpha}\left(\sigma_{\alpha}^{i}\sigma_{\alpha}^{j}\partial_{i}\partial_{j}\phi+\sigma_{\alpha}^{i}\left(\partial_{i}\sigma_{\alpha}^{j}\right)\left(\partial_{j}\phi\right)\right)=\sum_{\alpha}\left(g^{ij}\partial_{i}\partial_{j}\phi+\sigma_{\alpha}^{i}\left(\partial_{i}\sigma_{\alpha}^{j}\right)\left(\partial_{j}\phi\right)\right)$
$\displaystyle=\Delta\phi+g^{ij}\Gamma^{k}_{ij}\partial_{k}\phi+\sum_{\alpha}\sigma_{\alpha}^{i}\left(\partial_{i}\sigma_{\alpha}^{j}\right)\partial_{j}\phi=\Delta\phi+\sum_{\alpha}\left(\nabla_{\sigma_{\alpha}}\sigma_{\alpha}\right)\phi,$
where the last equality is a result of the definition of the Levi-Civita
connection and the corresponding Christoffel symbols. The expression for
$L_{u}^{*}$ is immediate from its definition. ∎
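As a quick sanity check, not part of the original argument, one may specialize the lemma to the flat circle $M=S^{1}$ with $d=1$ and $\sigma_{1}=\partial_{\theta}$, so that $g\equiv 1$, $\nabla_{\sigma_{1}}\sigma_{1}=0$ and $\widetilde{f}=f$. Since on functions $\Delta\Phi=\Phi^{\prime\prime}$ and $\delta(\beta\,d\theta)=-\beta^{\prime}$, the operators reduce to the familiar one-dimensional Fokker-Planck pair
$L_{u}\Phi=\mbox{$\frac{1}{2}$}\Phi^{\prime\prime}+(f+u)\Phi^{\prime},\qquad L_{u}^{*}\Psi=\mbox{$\frac{1}{2}$}\Psi^{\prime\prime}-\left((f+u)\Psi\right)^{\prime}.$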
In the remainder of this work, we will assume all advection terms are absorbed
in the force field $f$, and thus may omit the tilde in $\widetilde{f}$. This
can alternatively be interpreted as assuming
$\sum_{\alpha}\nabla_{\sigma_{\alpha}}\sigma_{\alpha}=0$. This is further
equivalent to the following hypothesis.
###### Hypothesis 3.2.
The generator corresponding to $X$ equals
$L_{u}=\mbox{$\frac{1}{2}$}\Delta+b_{u}$.
This assumption is justified by the above lemma and the well-known fact that
the Markov generator $L_{u}$ determines the law of the diffusion $X$ uniquely.
###### Lemma 3.3.
Let $x_{0}\in M$. The expectation of the trajectory of $X$ over the gauge
field may be expressed as
$\mathbb{E}^{x_{0},u}\int_{0}^{T}A(X_{t})\circ
dX_{t}=\mathbb{E}^{x_{0},u}\int_{0}^{T}\left[\langle
A,f+u\rangle-\mbox{$\frac{1}{2}$}\delta A\right](X_{t})\ dt,$
or in local coordinates,
$\mathbb{E}^{x_{0},u}\int_{0}^{T}A(X_{t})\circ
dX_{t}=\mathbb{E}^{x_{0},u}\int_{0}^{T}g^{ik}A_{i}(f_{k}+u_{k})(X_{t})+\mbox{$\frac{1}{2}$}\frac{1}{\sqrt{|g|}}\partial_{j}\left(\sqrt{|g|}g^{ij}A_{i}\right)(X_{t})\
dt.$
###### Proof.
Using the usual transformation rule between Itô and Stratonovich integrals
[14, Equation (1.4), p. 250], we may write
(5) $dX_{t}^{i}=\overline{b}_{u}^{i}(X_{t})\
dt+\sum_{\alpha=1}^{d}\sigma^{i}_{\alpha}(X_{t})dB^{\alpha}(t),$
where $\overline{b}_{u}(x)$ is given by
$\overline{b}_{u}^{i}(x):=b_{u}^{i}(x)+\mbox{$\frac{1}{2}$}\sum_{\alpha=1}^{d}\left(\partial_{k}\sigma^{i}_{\alpha}(x)\right)\sigma^{k}_{\alpha}(x).$
By the definition of the Stratonovich integral, $Z\circ dY=Z\
dY+\mbox{$\frac{1}{2}$}d[Z,Y]$ for semimartingales $Y$ and $Z$ [14, Equation
(1.10), p.100], with $Z\ dY$ denoting the Itô integral. Therefore
$\displaystyle A_{i}(X_{t})\circ dX_{t}^{i}$ $\displaystyle=A_{i}(X_{t})\
dX_{t}^{i}+\mbox{$\frac{1}{2}$}d[A_{i}(X_{t}),X_{t}^{i}]=A_{i}(X_{t})\
dX^{i}_{t}+\mbox{$\frac{1}{2}$}\partial_{j}A_{i}(X_{t})d[X_{t}^{j},X_{t}^{i}]$
$\displaystyle=A_{i}(X_{t})\left[\overline{b}_{u}^{i}(X_{t})\
dt+\sum_{\alpha=1}^{d}\sigma_{\alpha}^{i}(X_{t})\
dB^{\alpha}(t)\right]+\mbox{$\frac{1}{2}$}\partial_{j}A_{i}(X_{t})\sum_{\alpha=1}^{d}\sigma_{\alpha}^{j}(X_{t})\sigma_{\alpha}^{i}(X_{t})\
dt.$
Integrating over $t$ and taking expectations gives
$\displaystyle\mathbb{E}^{x_{0},u}\int_{0}^{T}A(X_{s})\circ dX_{s}$
$\displaystyle=\mathbb{E}^{x_{0},u}\int_{0}^{T}\left\\{A_{i}(X_{t})\left[b_{u}^{i}(X_{t})+\mbox{$\frac{1}{2}$}\left(\partial_{k}\sigma^{i}_{\alpha}(X_{t})\right)\sigma^{k}_{\alpha}(X_{t})\right]+\mbox{$\frac{1}{2}$}\partial_{j}A_{i}(X_{t})\sigma_{\alpha}^{j}(X_{t})\sigma_{\alpha}^{i}(X_{t})\right\\}\
dt$
$\displaystyle=\mathbb{E}^{x_{0},u}\int_{0}^{T}\left\\{g^{ik}A_{i}(f_{k}+u_{k})(X_{t})+\mbox{$\frac{1}{2}$}\nabla_{j}g^{ij}A_{i}(X_{t})\right\\}\
dt.$
In the last expression we recognize the divergence of the vectorfield
$g^{ij}A_{i}$, resulting in the stated expression. ∎
We recall the notion of an invariant probability distribution. Let
$\mathcal{B}(M)$ denote the Borel $\sigma$-algebra on $M$.
###### Definition 3.4.
A probability measure $\mu_{u}$ on $M$ is called an invariant probability
distribution for (2), if
$\int_{M}\mathbb{P}^{x,u}(X_{t}\in B)\mu_{u}(dx)=\mu_{u}(B),\quad
B\in\mathcal{B}(M).$
The following result on invariant measures for nondegenerate diffusions [14,
Proposition V.4.5] is essential for our purposes.
###### Proposition 3.5 (Existence and uniqueness of invariant probability
measure).
Suppose Hypothesis 2.1 is satisfied. Corresponding to any $u\in\Lambda^{1}(M)$
there exists a unique invariant probability measure $\mu_{u}$ on $M$
corresponding to the diffusion on $M$ defined by (2). Moreover, $\mu_{u}(dx)$
is given as $\rho_{u}(x)\ dx$, and $\rho_{u}\in C^{\infty}(M)$ is a solution
of
(6) $L^{*}_{u}\rho=0.$
Furthermore $\rho_{u}>0$ on $M$.
We will refer to (6) as the _Fokker-Planck equation_ , in agreement with the
physics nomenclature. In the remainder of this work let $\mu_{u}$ and
$\rho_{u}$ be as defined by Proposition 3.5.
In the physics literature, the _empirical density_ and _empirical current
density_ are defined respectively as (see [7]):
$\rho_{t}(x,\omega)=\frac{1}{t}\int_{0}^{t}\delta(x-X_{s}(\omega))\ ds,\quad
J_{t}(x,\omega)=\frac{1}{t}\int_{0}^{t}\dot{X}_{s}\delta(x-X_{s}(\omega))\
ds.$
Here (and only here) $\delta$ denotes the Dirac delta function. These fields,
having a clear heuristic meaning, will be very relevant in the remainder of
this work and we will make these precise from a mathematical point of view.
Let $B_{b}(M)$ denote the set of bounded Borel-measurable functions on $M$. We
will work with the set of empirical average measures
$\left(\nu_{t}(dx,\omega)\right)_{t>0}$ on $\mathcal{B}(M)\times\Omega^{X}$,
defined by
(7) $\nu_{t}(B):=\frac{1}{t}\int_{0}^{t}\mathbbm{1}_{B}(X_{s})\ ds,\quad t>0,\
B\in\mathcal{B}(M),$
where $\mathbbm{1}_{B}$ denotes the indicator function of the set $B$. Note
that the measure $\mu_{t}(B):=\int_{0}^{t}\mathbbm{1}_{B}(X_{s})\ ds$ is known
as the _local time_ of $X$. Our primary interest is in the infinite time
horizon limit.
###### Proposition 3.6.
For all $u\in\Lambda^{1}(M),\varphi\in L^{2}(M,\mu_{u})$ (in particular for
$\varphi\in B_{b}(M)$) and $x_{0}\in M$,
(8) $\lim_{t\rightarrow\infty}\int_{M}\varphi\ d\nu_{t}=\int_{M}\varphi\
d\mu_{u},\quad\mbox{$\mathbb{P}^{x_{0},u}$-almost surely.}$
###### Proof.
For $u\in\Lambda^{1}(M)$, we define a stationary probability measure
$\mathbb{P}^{u}$ on $\Omega^{X}$ by
$\mathbb{P}^{u}(G)=\int_{M}\mathbb{P}^{x_{0},u}(G)\ \mu_{u}(dx_{0}),\quad
G\in\mathcal{F}.$
For $\varphi\in L^{2}(M,\mu_{u})$ we then have, by the ergodic theorem, see
e.g. [8, Theorem 3.3.1], that $\lim_{t\rightarrow\infty}\int_{M}\varphi\
d\nu_{t}=\int_{M}\varphi\ d\mu_{u},$ $\mathbb{P}^{u}$-almost surely. Since
$\rho_{u}>0$ on $M$, this implies that
$\lim_{t\rightarrow\infty}\int_{M}\varphi\ d\nu_{t}=\int_{M}\varphi\
d\mu_{u}$, $\mathbb{P}^{x_{0},u}$-almost surely for $\mu_{u}$-almost all
$x_{0}\in M$. By smooth dependence of the trajectories of $X$ on the initial
condition the result extends to all $x_{0}\in M$. ∎
Note that $\lim_{t\rightarrow\infty}\nu_{t}$ is deterministic and does not
depend on the choice of the initial condition $x_{0}\in M$.
###### Corollary 3.7.
For $x_{0}\in M$,
(9) $\lim_{T\rightarrow\infty}\frac{1}{T}\int_{0}^{T}A(X_{t})\circ
dX_{t}=\int_{M}\left\\{\langle A,f+u\rangle-\mbox{$\frac{1}{2}$}\delta
A\right\\}\rho_{u}\ dx,\quad\mbox{$\mathbb{P}^{x_{0},u}$-a.s.}$
###### Proof.
This is an immediate corollary of Lemma 3.3, Proposition 3.5 and Proposition
3.6. ∎
By the above results we may rephrase Problem 2.3 as follows.
###### Problem 3.8.
Minimize
(10)
$\mathcal{C}(\rho,u):=\int_{M}\left\\{V+\frac{1}{2\lambda}||u||^{2}+\langle
A,f+u\rangle-\mbox{$\frac{1}{2}$}\delta A\right\\}\rho\ dx.$
with respect to $(\rho,u)\in C^{\infty}(M)\times\Lambda^{1}(M)$ subject to the
constraints $L_{u}^{*}\rho=0$ and $\int_{M}\rho\ dx=1$.
###### Remark 3.9.
The gauge field $A$ may be completely removed from the problem by redefining
$f$ and $V$ to be
(11) $\widetilde{f}=f-\lambda
A,\quad\mbox{and}\quad\widetilde{V}=V-\mbox{$\frac{1}{2}$}\lambda||A||^{2}-\mbox{$\frac{1}{2}$}\delta
A.$
It may be checked that Problem 3.8 is equivalent to the minimization of
$C(\rho,\widetilde{u})=\int_{M}\left\\{\widetilde{V}+\frac{1}{2\lambda}||\widetilde{u}||^{2}\right\\}\rho\
dx,$
with respect to $\rho$ and $\widetilde{u}$, subject to
$\mbox{$\frac{1}{2}$}\Delta\rho+\delta((\widetilde{f}+\widetilde{u})\rho)=0$.
The control $u$ solving Problem 3.8 for nonzero $A$ may be retrieved by
setting $u=\widetilde{u}-\lambda A$. Using this observation would simplify the
derivation of the results in subsequent sections, but the process of
reintroducing a nonzero gauge field $A$ in the results would lead to
unnecessary confusion, so we will continue to work with a nonzero gauge field
$A$.
Note that $\dot{X}_{s}$ is not defined, a.s., so our mathematical analogue of
the empirical current density requires more care. In Appendix A, we derive the
differential 1-form $J_{u}\in\Lambda^{1}(M)$ denoting current density, as
(12) $J_{u}=-\mbox{$\frac{1}{2}$}d\rho_{u}+\rho_{u}\left(f+u\right).$
Note that $J_{u}$ is divergence free: $\delta J_{u}=L_{u}^{*}\rho_{u}=0$.
Furthermore a control $u$ may be expressed in terms of the corresponding
$J_{u}$ and $\rho_{u}$ as
(13)
$u=-f+\frac{1}{\rho_{u}}\left(J_{u}+\mbox{$\frac{1}{2}$}d\rho_{u}\right).$
###### Lemma 3.10.
Suppose $u$, $\rho_{u}$ and $J_{u}$ are related by (12). Then $\delta J_{u}=0$
if and only if $L_{u}^{*}\rho_{u}=0$, i.e. $\rho_{u}$ is an invariant measure.
###### Proof.
This follows immediately from noting that $\delta J_{u}=L_{u}^{*}\rho_{u}$. ∎
Recall Lemma 3.3, where the expectation of the gauge field over the trajectory
was expressed as an expectation over a Lebesgue integral. For the long term
average of the gauge field this leads to the following result.
###### Lemma 3.11.
For $u\in\Lambda^{1}(M)$ and $x_{0}\in M$,
(14) $\lim_{T\rightarrow\infty}\frac{1}{T}\int_{0}^{T}A(X_{t})\circ
dX_{t}=\int_{M}\langle A,J_{u}\rangle\
dx,\quad\mbox{$\mathbb{P}^{x_{0},u}$-a.s.}$
###### Proof.
From Corollary 3.7 we have (9). By (13), this is equal to
$\displaystyle\int_{M}\left(\left\langle
A,\frac{1}{\rho_{u}}\left(J_{u}+\mbox{$\frac{1}{2}$}d\rho_{u}\right)\right\rangle-\mbox{$\frac{1}{2}$}\delta
A\right)\rho_{u}\ dx$ $\displaystyle=\int_{M}\left\langle
A,\left(J_{u}+\mbox{$\frac{1}{2}$}d\rho_{u}\right)\right\rangle-\mbox{$\frac{1}{2}$}\langle
A,d\rho_{u}\rangle\ dx=\int_{M}\langle A,J_{u}\rangle\ dx.$
∎
Because of the above observations, instead of varying $\rho$ and $u$ in the
optimization problem 3.8, we may as well vary $\rho\in C^{\infty}(M)$ and
$J\in\Lambda^{1}(M)$, while enforcing $\delta J=0$ (equivalent to the Fokker-
Planck equation for $\rho$ by Lemma 3.10) and $\int_{M}\rho\ dx=1$. Because of
(13) the control $u$ is then determined explicitly. Combining (13) and (14),
we may alternatively express the cost functional (10) as a function of $\rho$
and $J$, namely
(15)
$\mathcal{C}(\rho,J)=\int_{M}\left\\{\left(V+\frac{1}{2\lambda}\left|\left|-f+\frac{1}{\rho}\left(J+\mbox{$\frac{1}{2}$}d\rho\right)\right|\right|^{2}\right)\rho+\langle
A,J\rangle\right\\}\ dx.$
###### Remark 3.12.
Strictly speaking the use of $\mathcal{C}$ for different cost functionals is
an abuse of notation; we trust this will not lead to confusion.
Problem (2.3) can thus be rephrased as the following problem:
###### Problem 3.13.
Minimize $\mathcal{C}(\rho,J)$ with respect to $\rho\in C^{\infty}(M)$ and
$J\in\Lambda^{1}(M)$, subject to the constraints $\delta J=0$ and
$\int_{M}\rho\ dx=1$, where $\mathcal{C}(\rho,J)$ is given by (15).
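For intuition, on $S^{1}$ the constraint $\delta J=0$ forces $J=J_{0}\,d\theta$ with $J_{0}$ constant (cf. Example 4.2 below), so a discretized version of Problem 3.13 becomes a finite-dimensional minimization. The sketch below, with illustrative choices of $f$, $V$, $A$ and $\lambda$ and hypothetical variable names, minimizes a discretization of (15) directly:

```python
import numpy as np
from scipy.optimize import minimize

m = 64                                        # grid points on S^1
theta = 2 * np.pi * np.arange(m) / m
dth = 2 * np.pi / m
f = np.sin(theta)                             # illustrative force component
V = 1.0 - np.cos(theta)                       # illustrative potential
A = np.full(m, 1.0 / (2 * np.pi))             # constant (harmonic) gauge field
lam = 1.0

def cost(z):
    # Parametrize rho > 0 with int rho dtheta = 1 via exponentials.
    rho = np.exp(z[:m])
    rho /= rho.sum() * dth
    J0 = z[m]                                 # delta J = 0 means J is constant
    drho = (np.roll(rho, -1) - np.roll(rho, 1)) / (2 * dth)   # periodic d(rho)
    u = -f + (J0 + 0.5 * drho) / rho          # equation (13)
    ergodic = np.sum((V + u ** 2 / (2 * lam)) * rho) * dth    # first part of (15)
    gauge = np.sum(A * J0) * dth              # <A, J> term of (15)
    return ergodic + gauge

z0 = np.zeros(m + 1)
res = minimize(cost, z0, method="BFGS")
rho_opt = np.exp(res.x[:m]); rho_opt /= rho_opt.sum() * dth
print("optimal cost:", res.fun, " current J0:", res.x[m])
```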
## 4\. Flux
In this section we will give a natural interpretation of the term
$\int_{M}\langle A,J\rangle\ dx$, namely as the flux of $J$ through a cross-
section $\alpha$, or equivalently, the long term average intersection index of
the stochastic process $(X(t))_{t\geq 0}$ with a cross-section. This section
motivates the gauge field $A$ in the cost function. The remainder of this
paper does not refer to this section. For background reading in differential
geometry, see [2, 21]. See also [6, 7] where the ideas below are described in
more detail.
Let $M$ be a compact, oriented, Riemannian manifold of dimension $m$. Recall
the notion of a _singular $p$-chain in $M$ (with real coefficients)_ as a
finite linear combination $c=\sum a_{i}\sigma_{i}$ of smooth $p$-simplices
$\sigma_{i}$ in $M$ where the $a_{i}$ are real numbers. Let
$S_{p}(M;\mathbb{R})$ denote the real vector space of singular $p$-chains in
$M$. On $S_{p}(M;\mathbb{R})$, $p\in\mathbb{Z}$, $p\geq 0$, are defined
boundary operators $\partial_{p}:S_{p}(M;\mathbb{R})\rightarrow
S_{p-1}(M;\mathbb{R})$. The _$p$ -th singular homology group of $M$ with real
coefficients_ is defined by
$H_{p}(M;\mathbb{R})=\ker\partial_{p}/\operatorname{im}\partial_{p+1}.$
Elements of $\ker\partial_{p}$ are called $p$-cycles, and elements of
$\operatorname{im}\partial_{p+1}$ are called $p$-boundaries. The _deRham
cohomology classes_ $H^{p}_{\operatorname{deRham}}(M;\mathbb{R})$ are defined,
for $0\leq p\leq m=\dim(M)$ as
$H^{p}_{\operatorname{deRham}}(M)=\ker d_{p}/\operatorname{im}d_{p-1},$
where $d_{p}:\Lambda^{p}(M)\rightarrow\Lambda^{p+1}(M)$ denotes exterior
differentiation.
Let $\alpha$ be a $p$-cycle. The functional
$p_{\alpha}:\beta\in\Lambda^{p}(M)\rightarrow\int_{\alpha}\beta\in\mathbb{R}$
depends, by Stokes’ theorem, only on the homology class $[\alpha]$ of
$\alpha$, and the deRham cohomology class $[\beta]$ of $\beta$. An element
$[\alpha]\in H_{p}(M;\mathbb{R})$ may therefore be considered an element of
$\left(H^{p}_{\operatorname{deRham}}(M)\right)^{*}$. Now let $M$ be oriented
and of dimension $m$. The mapping $q_{\beta}:[\gamma]\in
H^{p}_{\operatorname{deRham}}(M)\rightarrow\int_{M}\beta\wedge\gamma\in\mathbb{R}$
is an element of $\left(H^{p}_{\operatorname{deRham}}(M)\right)^{*}$. Poincaré
duality states that the mapping $[\beta]\in
H^{m-p}_{\operatorname{deRham}}(M)\rightarrow
q_{\beta}\in\left(H^{p}_{\operatorname{deRham}}(M)\right)^{*}$ is an
isomorphism for compact $M$, i.e.
$H^{m-p}_{\operatorname{deRham}}(M)\cong\left(H^{p}_{\operatorname{deRham}}(M)\right)^{*}$.
Therefore, for compact, oriented $M$, we have
$H_{p}(M;\mathbb{R})\cong\left(H^{p}_{\operatorname{deRham}}(M)\right)^{*}\cong
H^{m-p}_{\operatorname{deRham}}(M).$
Since an equivalence class in $H^{m-p}_{\operatorname{deRham}}(M)$ has a
unique harmonic representative, we conclude that for a $p$-cycle $\alpha$,
there exists a unique harmonic $r_{\alpha}\in\Lambda^{m-p}(M)$ such that
$\int_{\alpha}\beta=p_{\alpha}(\beta)=\int_{M}r_{\alpha}\wedge\beta,\quad[\beta]\in
H^{p}_{\operatorname{deRham}}(M).$
In particular, for $p=m-1$, we may interpret $\int_{\alpha}\star J$ (with
$\star$ the _Hodge star operator_) as the flux of $J$ through $\alpha$. This
quantity may further be interpreted as the long term average intersection
index of the stochastic trajectory $(X(t))_{t\geq 0}$ with respect to
$\alpha$, i.e. the long term average of the number of intersections (with $\pm
1$ signs depending on the direction); see e.g. [13, Section 0.4]. Specializing
the above result to this situation, we obtain the following proposition.
###### Proposition 4.1.
For a given $(m-1)$-cycle $\alpha$, there exists a unique harmonic
$A\in\Lambda^{1}(M)$, which depends only on the singular homology class
$[\alpha]\in H_{m-1}(M;\mathbb{R})$ of $\alpha$, such that $\int_{\alpha}\star
J=\int_{M}\langle A,J\rangle\ dx$ for all $J\in\Lambda^{1}(M)$ satisfying
$\delta J=0$.
###### Example 4.2 ($S^{1}$).
The divergence free $1$-forms $J$ on $S^{1}$ are constant, say $J=J_{0}\
d\theta$ for some $J_{0}\in\mathbb{R}$. A 0-cycle $\alpha$ of $S^{1}$ consists
of a collection of points $\theta_{1},\ldots,\theta_{k}\subset[0,2\pi]$ with
multiplicities $\alpha_{1},\ldots,\alpha_{k}$. The flux of $J$ through
$\alpha$ is then simply given by
$\sum_{i=1}^{k}\alpha_{i}J(\theta_{i})=\sum_{i=1}^{k}\alpha_{i}J_{0}$. By
defining a differential form $A=A_{0}\ d\theta$, with constant component
$A_{0}:=\sum_{i}\alpha_{i}\frac{1}{2\pi}$, we find that
$\int_{S^{1}}\langle A,J\rangle\ d\theta=\int_{S^{1}}A_{0}J_{0}\
d\theta=\sum_{i}\alpha_{i}J_{0}=\int_{\alpha}\star J.$
We see that this choice of $A$ is the constant (and therefore harmonic)
representative in $H^{1}_{\operatorname{deRham}}(S^{1})$ corresponding to
$[\alpha]\in H_{0}(S^{1};\mathbb{R})$.
## 5\. Unconstrained optimization – the HJB equation
In this section we will find necessary conditions for a solution of Problem
3.8 or equivalently Problem 3.13. In fact, for technical reasons we will work
with the formulation in terms of $\rho$ and $J$, i.e. Problem 3.13. The
main reason for this is the simplicity (in particular, the linearity) of the
constraint $\delta J=0$. This should be compared to the equivalent constraint
$\mbox{$\frac{1}{2}$}\Delta\rho+\delta(\rho(f+u))=0$, which is nonlinear as a
function in $(\rho,u)$.
The approach to Problem 3.8 or Problem 3.13 is to use the method of Lagrange
multipliers to enforce the constraints. Since the constraint $\delta J(x)=0$
needs to be enforced for all $x\in M$, the corresponding Lagrange multiplier
is an element of a function space. A purely formal derivation of the necessary
conditions using Lagrange multipliers is straightforward, but we want to be
more rigorous in the derivation of the necessary conditions. In Sections 6
and 7, we will be less rigorous in the derivations, in the comforting
knowledge that we can use the machinery outlined in the current section.
### 5.1. Abstract optimization
We will relax Problem 3.13 to an optimization problem over Sobolev spaces. In
particular, we will rephrase it as the following abstract optimization
problem. Let $X$ and $Z$ be Banach spaces and let $\mathcal{U}$ be an open set
in $X$. Let $\mathcal{C}:\mathcal{U}\subset X\rightarrow\mathbb{R}$ and
$\mathcal{H}:\mathcal{U}\subset X\rightarrow Z$.
###### Problem 5.1.
Minimize $\mathcal{C}(x)$ over $\mathcal{U}$ subject to the constraint
$\mathcal{H}(x)=0$.
The _Fréchet derivative_ [15] of a mapping $T:D\subset X\rightarrow Y$ in
$x\in D$ will be denoted by $T^{\prime}(x)\in L(X;Y)$. We will need the
following notion.
###### Definition 5.2 (Regular point).
Let $T$ be a continuously Fréchet differentiable function from an open set $D$
in a Banach space $X$ into a Banach space $Y$. If $x_{0}\in D$ is such that
$T^{\prime}(x_{0})$ maps $X$ onto $Y$, then the point $x_{0}$ is said to be a
_regular point_ of the transformation $T$.
For the Problem 5.1 the following necessary condition holds for a local
extremum [15, Theorem 9.3.1].
###### Proposition 5.3 (Lagrange multiplier necessary conditions).
Suppose $\mathcal{C}$ and $\mathcal{H}$ are continuously Fréchet
differentiable on $\mathcal{U}$. If $\mathcal{C}$ has a local extremum under
the constraint $\mathcal{H}(x)=0$ at the regular point $x_{0}\in\mathcal{U}$,
then there exists an element $z^{*}_{0}\in Z^{*}$ such that
$\mathcal{C}^{\prime}(x_{0})+\langle\mathcal{H}^{\prime}(x_{0}),z_{0}^{*}\rangle=0.$
Here $\langle\cdot,\cdot\rangle$ denotes the pairing between $Z$ and $Z^{*}$.
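Proposition 5.3 is easiest to digest on a finite-dimensional toy problem. The sketch below is an assumed illustrative example, not from the text: it minimizes $\mathcal{C}(x)=||x||^{2}$ subject to $\mathcal{H}(x)=x_{1}+x_{2}-1=0$ and checks the stationarity condition numerically.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative finite-dimensional instance of Proposition 5.3 (assumed toy
# problem): minimize C(x) = ||x||^2 subject to H(x) = x1 + x2 - 1 = 0.  At the
# regular constrained minimum x0 there exists z0* with C'(x0) + z0* H'(x0) = 0.
C = lambda x: x[0]**2 + x[1]**2
H = lambda x: x[0] + x[1] - 1.0

res = minimize(C, x0=np.zeros(2), constraints={"type": "eq", "fun": H})
x_opt = res.x                           # approximately (0.5, 0.5)

grad_C = 2 * x_opt                      # C'(x0)
grad_H = np.array([1.0, 1.0])           # H'(x0)
z_star = -grad_C[0] / grad_H[0]         # Lagrange multiplier, approximately -1

print(x_opt, grad_C + z_star * grad_H)  # residual of the stationarity condition ~ 0
```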
### 5.2. Sobolev spaces
We will define Sobolev spaces of differential forms as follows. For
$k\in\mathbb{N}\cup\\{0\\}$ and $p\in\\{0,\ldots,n\\}$, the space $W^{k,p}(M)$
consists of all differential forms of order $p$ which are in $L^{2}(M)$
together with their covariant derivatives up to order $k$. (Note the special
meaning of the second index $p$, in contrast with the more common definition.)
The norm
$||\omega||_{k,p}^{2}:=\int_{M}\sum_{l=0}^{k}||\nabla^{l}\omega||^{2}\ dx$
gives Hilbert space structure to $W^{k,p}(M)$. We will write
$W^{k}(M):=W^{k,0}(M)$; for functions we use the notations $H^{k}(M)$ and $W^{k}(M)$ interchangeably.
Note that the Laplace-Beltrami operator $\Delta=-(\delta d+d\delta)$ is a
bounded mapping $\Delta:W^{k,p}(M)\rightarrow W^{k-2,p}(M)$. Also
$\delta:W^{k,p}(M)\rightarrow W^{k-1,p-1}(M)$, for $p\geq 1$, and
$d:W^{k,p}(M)\rightarrow W^{k-1,p+1}(M)$, for $p<n$, are bounded linear
mappings.
Recall Sobolev’s Lemma [19, Proposition 4.3.3].
###### Lemma 5.4 (Sobolev embedding).
Suppose $\omega\in H^{k}(M)$ and suppose $m\in\mathbb{N}\cup\\{0\\}$ satisfies
$k>n/2+m$. Then $\omega\in C^{m}(M)$.
### 5.3. Obtaining necessary conditions for the optimization problem
We let $k\in\mathbb{N}$ such that
(16) $k>n/2.$
For this $k$, we define
$\displaystyle X$ $\displaystyle:=W^{k+1}(M)\times W^{k,1}(M),$
$\displaystyle\mathcal{U}$ $\displaystyle:=\mathcal{P}\times W^{k,1}(M),\quad$
$\displaystyle\mbox{where}\quad\mathcal{P}$ $\displaystyle:=\left\\{\rho\in
W^{k+1}(M):\rho>0\ \mbox{on}\ M\right\\},\quad\mbox{and}$ $\displaystyle Z$
$\displaystyle:=Z_{1}\times\mathbb{R},\quad$ $\displaystyle\mbox{where}\quad
Z_{1}$ $\displaystyle:=\left\\{\psi\in W^{k-1}(M):\psi=\delta\beta\ \mbox{for
some}\ \beta\in W^{k,1}(M)\right\\}.$
By the condition (16) on $k$, we have by the Sobolev Lemma that $(\rho,J)\in
X$ satisfies $\rho\in C(M)$. In particular, the condition $\rho>0$ defines an
open subset $\mathcal{U}\subset X$.
###### Lemma 5.5.
Suppose (16) holds. The mapping $\mathcal{C}(\rho,J)$, as given by (15), may
be continuously extended to a mapping
$\mathcal{C}:\mathcal{U}\rightarrow\mathbb{R}$. Moreover, the mapping
$\mathcal{C}$ is continuously differentiable on $\mathcal{U}$ with Fréchet
derivative $\mathcal{C}^{\prime}(\rho,J)\in L(X;\mathbb{R})$ given for
$(\rho,J)\in\mathcal{U}$ by
$\displaystyle(\zeta,G)$
$\displaystyle\mapsto\int_{M}\left\\{V+\frac{1}{2\lambda}||f||^{2}-\frac{1}{2\lambda\rho^{2}}||J+\mbox{$\frac{1}{2}$}d\rho||^{2}+\frac{1}{2\lambda}\delta\left(-f+\frac{1}{\rho}\left(J+\mbox{$\frac{1}{2}$}d\rho\right)\right)\right\\}\zeta
dx$
$\displaystyle\quad+\int_{M}\left\langle\frac{1}{\lambda}\left(-f+\frac{1}{\rho}(J+\mbox{$\frac{1}{2}$}d\rho)\right)+A,G\right\rangle\
dx.$
###### Proof.
We compute $\mathcal{C}^{\prime}(\rho,J)$ to be the linear functional on $X$
given by
$\displaystyle(\zeta,G)$
$\displaystyle\mapsto\int_{M}\left\\{V+\frac{1}{2\lambda}\left|\left|-f+\frac{1}{\rho}(J+\mbox{$\frac{1}{2}$}d\rho)\right|\right|^{2}\right\\}\zeta+\frac{1}{\lambda}\left\langle-f+\frac{1}{\rho}(J+\mbox{$\frac{1}{2}$}d\rho),-\frac{1}{\rho^{2}}\left(J+\mbox{$\frac{1}{2}$}d\rho\right)\zeta+\frac{1}{2\rho}d\zeta\right\rangle\
\rho dx$
$\displaystyle\quad+\int_{M}\left\\{\frac{1}{\lambda}\left\langle-f+\frac{1}{\rho}(J+\mbox{$\frac{1}{2}$}d\rho),\frac{1}{\rho}G\right\rangle\rho+\langle
A,G\rangle\right\\}\ dx.
After rearranging and integrating the term containing
$d\zeta$ by parts, this equals the stated expression. The derivative is a bounded
functional on $X$ by the uniform boundedness of $V,1/\rho,d\rho,f,J$ and $A$.
∎
We define the constraint mapping $\mathcal{H}:\mathcal{U}\rightarrow Z$ as
$\mathcal{H}(\rho,J):=\left(\delta J,\int_{M}\rho\
dx-1\right),\quad(\rho,J)\in\mathcal{U}.$
The following lemma is now immediate.
###### Lemma 5.6.
The mapping $\mathcal{H}$ is continuously differentiable on $X$, with Fréchet
derivative $\mathcal{H}^{\prime}(\rho,J)\in L(X;Z)$ given for $(\rho,J)\in X$
by
$(\zeta,G)\mapsto\left(\delta G,\int_{M}\zeta\ dx\right),\quad(\zeta,G)\in X.$
Every $(\rho,J)\in X$ is regular for $\mathcal{H}$, thanks to our choice of
the function space $Z$:
###### Lemma 5.7.
Any $(\rho,J)\in X$ is a regular point of $\mathcal{H}$.
###### Proof.
Let $(\Psi,\kappa)\in Z=Z_{1}\times\mathbb{R}$. In particular there exists a
$\beta\in W^{k,1}(M)$ such that $\Psi=\delta\beta$. We may pick $G=\beta$, and
$\zeta$ a constant function such that $\int_{M}\zeta\ dx=\kappa$. Then
$\mathcal{H}^{\prime}(\rho,J)(\zeta,G)=(\Psi,\kappa)$, showing that
$\mathcal{H}^{\prime}(\rho,J)$ is onto. ∎
In order to apply Proposition 5.3 in a useful manner, we need to give
interpretation to $Z^{*}$, and in particular to $Z_{1}^{*}$. First let us
recall that the spaces $(H^{s}(M))^{*}$, for $s\in\mathbb{N}\cup\\{0\\}$ may
be canonically identified through the $L^{2}(M)$-inner product with spaces of
distributions, denoted by $W^{-s}(M)$ [19, Proposition 4.3.2]. In other words,
if $z\in(H^{s}(M))^{*}$, then there exists a distribution $\Phi\in W^{-s}(M)$
such that $z(\Psi)=\int_{M}\Phi\Psi\ dx$. Now in case $\Psi\in Z_{1}$, i.e.
$\Psi=\delta\beta$ for some $\beta\in W^{k,1}(M)$, then
$z(\Psi)=\int_{M}\Phi\delta\beta\ dx=\int_{M}\langle d\Phi,\beta\rangle\ dx$
for some $\Phi\in W^{-(k-1)}(M)$. Therefore the choice of $\Phi\in
W^{-(k-1)}(M)$ representing $z\in Z_{1}^{*}$ is fixed up to the addition of a
closed form: if $d\gamma=0$ for $\gamma\in W^{-(k-1)}(M)$, then
$\int_{M}(\Phi+\gamma)\delta\beta\ dx=\int_{M}\Phi\delta\beta\ dx$. We
summarize this in the following lemma.
###### Lemma 5.8.
$Z_{1}^{*}\cong W^{-(k-1)}(M)/\left\\{\gamma\in
W^{-(k-1)}(M):d\gamma=0\right\\}$.
We may now apply Proposition 5.3 to obtain the following preliminary result.
###### Lemma 5.9 (Ergodic Hamilton-Jacobi-Bellman equation).
Suppose $(\rho,J)\in\mathcal{U}$ is a local extremum of $\mathcal{C}$, defined
by (15), under the constraint that $\mathcal{H}(\rho,J)=0$. Then there exists
$\Phi\in C^{1}(M)$ and $\gamma\in\mathbb{R}$ such that
$\displaystyle V+\langle
f,A+d\Phi\rangle-\mbox{$\frac{1}{2}$}\lambda||A+d\Phi||^{2}-\mbox{$\frac{1}{2}$}\delta(A+d\Phi)+\gamma=0,\quad\mbox{and}$
(17) $\displaystyle J=-\mbox{$\frac{1}{2}$}d\rho+\rho f-\lambda\rho(A+d\Phi),$
where the first equality holds in weak sense, i.e. in $W^{-1}(M)$. The
corresponding control field $u$ is continuous and given by
$u=-\lambda(A+d\Phi).$
###### Proof.
Let $(\rho,J)$ be as specified. By Proposition 5.3, there exists an element
$(\Phi,\gamma)\in W^{-(k-1)}(M)\times\mathbb{R}$ such that the following
equations hold.
$\displaystyle
V+\frac{1}{2\lambda}||f||^{2}-\frac{1}{2\lambda\rho^{2}}||J+\mbox{$\frac{1}{2}$}d\rho||^{2}+\frac{1}{2\lambda}\delta\left(-f+\frac{1}{\rho}\left(J+\mbox{$\frac{1}{2}$}d\rho\right)\right)+\gamma$
$\displaystyle=0,\quad\mbox{and}$
$\displaystyle\frac{1}{\lambda}\left(-f+\frac{1}{\rho}(J+\mbox{$\frac{1}{2}$}d\rho)\right)+A+d\Phi$
$\displaystyle=0.$
Here $\Phi$ is an arbitrary representative of an equivalence class in the
quotient space of Lemma 5.8 (the real number $\gamma$ above is unrelated to the
closed forms appearing there). Substituting the second equation into the first, and
making some rearrangements, gives the system (5.9). Then $\Phi\in C^{1}(M)$ as
a result of the equation for $J$ in (5.9) and the continuity of $J$, $\rho$
and $d\rho$. The expression for $u$ is an immediate result of (13). ∎
###### Theorem 5.10 (Linearized HJB equation).
Suppose $(\rho,J)\in\mathcal{U}$ is a local extremum of $\mathcal{C}$, defined
by (15), under the constraint that $\mathcal{H}(\rho,J)=0$. Then there exists
a $\psi\in C^{\infty}(M)$, $\psi>0$ on $M$, and $\mu\in\mathbb{R}$ such that
(18) $\displaystyle H\psi-W\psi=\mu\psi,\quad\mbox{where}\quad
H\psi:=L_{u_{0}}\psi,\quad u_{0}=-\lambda A,\quad\mbox{and}\quad
W=\lambda V+\lambda\langle
f,A\rangle-\mbox{$\frac{1}{2}$}\lambda^{2}||A||^{2}-\mbox{$\frac{1}{2}$}\lambda\delta
A.$
Furthermore $\rho\in C^{\infty}(M)$, $J\in\Lambda^{1}(M)$ and the control
field $u$, related to $(\rho,J)$ as in (13), is in $\Lambda^{1}(M)$ as well,
and
$u=-\lambda A+d\left(\ln\psi\right),\quad J=-\mbox{$\frac{1}{2}$}d\rho+\rho
f-\lambda\rho A+\rho\ d\left(\ln\psi\right).$
###### Proof.
Let $\Phi$ and $\gamma$ be as in Lemma 5.9 and let $\psi:=\exp(-\lambda\Phi)$. We
compute
$d\Phi=-\frac{1}{\lambda\psi}d\psi\quad\mbox{and}\quad\delta
d\Phi=-\frac{1}{\lambda\psi}\delta
d\psi-\frac{1}{\lambda\psi^{2}}||d\psi||^{2}.$
Inserting this into (5.9) and multiplying by $\lambda\psi$ makes the
$||d\psi||^{2}$-terms cancel and results in the equation
$\lambda\psi V+\lambda\psi\langle f,A\rangle-\langle
f,d\psi\rangle-\mbox{$\frac{1}{2}$}\lambda^{2}\psi||A||^{2}+\lambda\langle
A,d\psi\rangle-\mbox{$\frac{1}{2}$}\lambda\psi\delta
A+\mbox{$\frac{1}{2}$}\delta d\psi+\lambda\gamma\psi=0,$
or equivalently (5.10), with $\mu=\lambda\gamma$.
_Regularity._ Note that, since $\psi\in C^{1}(M)$, we have $d\psi\in
C(M)\subset L^{2}(M)$, so that $\psi\in H^{1}(M)$. Note that the first
equation of (5.10) may be rewritten into
$\mbox{$\frac{1}{2}$}\Delta\psi=\varphi$, with $\varphi=-\langle f-\lambda
A,d\psi\rangle+W\psi+\mu\psi\in C(M)\subset L^{2}(M)$. By elliptic regularity
([19, Proposition 5.1.6]), it follows that $\psi\in H^{2}(M)$. In particular,
$\varphi\in H^{1}(M)$. We may iterate this bootstrapping argument to conclude
that $\psi\in H^{s}(M)$ for any $s\in\mathbb{N}$, and conclude from Sobolev
embedding (Lemma 5.4) that $\psi\in C^{\infty}(M)$.
By (13), the corresponding control field is given by $u=-\lambda
A+\frac{1}{\psi}d\psi\in\Lambda^{1}(M)$. By Proposition 3.5, $\rho\in
C^{\infty}(M)$ and by the equation for $J$ in (5.9), $J\in\Lambda^{1}(M)$. ∎
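To illustrate how Theorem 5.10 can be used in practice, the following sketch discretizes the eigenvalue problem (18) on $M=S^{1}$ with periodic finite differences and recovers the control field $u=-\lambda A+d(\ln\psi)$. The drift $f$, the potential $V$, the value of $\lambda$ and the constant gauge field $A_{0}$ are arbitrary test choices; the discretization is illustrative only and is not part of the paper.

```python
import numpy as np

# Minimal discretization sketch of (18) on M = S^1 (all data below assumed).
n, lam, A0 = 400, 1.0, 0.3
theta = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
h = theta[1] - theta[0]
f0 = np.sin(theta)           # component of the drift 1-form f      (assumed)
V = np.cos(theta) ** 2       # state cost                            (assumed)

# periodic first- and second-derivative matrices
D1 = np.zeros((n, n))
D2 = np.zeros((n, n))
for i in range(n):
    D1[i, (i + 1) % n] += 1.0 / (2 * h)
    D1[i, (i - 1) % n] -= 1.0 / (2 * h)
    D2[i, (i + 1) % n] += 1.0 / h**2
    D2[i, (i - 1) % n] += 1.0 / h**2
    D2[i, i] -= 2.0 / h**2

# H psi = L_{u0} psi = 0.5 psi'' + (f0 - lambda*A0) psi'
H = 0.5 * D2 + np.diag(f0 - lam * A0) @ D1
# W = lambda*V + lambda*<f,A> - 0.5*lambda^2*||A||^2  (delta A = 0, A0 constant)
W = np.diag(lam * V + lam * f0 * A0 - 0.5 * lam**2 * A0**2)

vals, vecs = np.linalg.eig(H - W)
k = np.argmax(vals.real)                     # principal eigenvalue mu
psi = vecs[:, k].real
psi *= np.sign(psi[np.argmax(np.abs(psi))])  # choose the positive representative

u = -lam * A0 + D1 @ np.log(psi)             # control field of Theorem 5.10
print("mu =", vals[k].real)
```

For moderate grid sizes the matrix $H-W$ has nonnegative off-diagonal entries, so the eigenvalue with largest real part is real and carries a positive eigenvector, mirroring the positivity of $\psi$ in the theorem.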
Since the problem considered in Theorem 5.10 is a relaxed version of Problem
3.13, and smoothness of $J$ and $\rho$ is established in the relaxed case, we
immediately have the following corollary.
###### Corollary 5.11.
Suppose $(\rho,u)$ is a solution of Problem 3.8, or equivalently that
$(\rho,J)$ is a solution of Problem 3.13. Then the results of Theorem 5.10 hold.
###### Remark 5.12.
This result should be compared to [9], in which a variational principle is
derived for the maximal eigenvalue of an operator $L$ satisfying a maximum
principle. Our results give an alternative characterization (as the solution
of a control problem) of the largest eigenvalue in the case of an elliptic
differential operator $L=L_{u_{0}}$.
###### Remark 5.13.
An alternative way of deriving the HJB equation is by using the method of
vanishing discount, see e.g. [1, Chapter 3].
### 5.4. Symmetrizable solution
In a special case we can represent the optimally controlled invariant measure
in terms of $\psi$ and the uncontrolled invariant measure. For this, we recall
the notion of a _symmetrizable diffusion_ [14, Section V.4]. Other equivalent
terminology is that the Markov process is _reversible_ or that the invariant
measure satisfies _detailed balance_.
###### Definition 5.14.
Let $(T(t))_{t\geq 0}$ denote the transition semigroup of a diffusion on $M$.
This diffusion is said to be _symmetrizable_ if there exists a Borel measure
$\nu(dx)$ such that
$\int_{M}(T(t)f)(x)g(x)\ \nu(dx)=\int_{M}f(x)(T(t)g)(x)\ \nu(dx)\quad\mbox{for
all}\ f,g\in C(M),t\geq 0.$
In case a diffusion is symmetrizable with respect to a measure $\nu$, this
measure is an invariant measure for the diffusion.
The following results hold for any control field $u\in\Lambda^{1}(M)$.
###### Lemma 5.15.
Let $X$ denote a diffusion with generator given by
$Lh=\mbox{$\frac{1}{2}$}\Delta h+\langle f+u,dh\rangle$, $h\in C^{2}(M)$. The
following are equivalent.
* (i)
$X$ is symmetrizable, with invariant density $\rho_{u}=\exp(-U)$ for some
$U\in C^{\infty}(M)$;
* (ii)
$f+u=-\mbox{$\frac{1}{2}$}dU$ for some $U\in C^{\infty}(M)$;
* (iii)
The long term current density $J_{u}$, given by (12), vanishes.
###### Proof.
The equivalence of (i) and (ii) is well known, see e.g. [14, Theorem V.4.6].
The equivalence of (ii) and (iii) is then immediate from (12). ∎
###### Proposition 5.16.
Let $\rho_{0}$ and $J_{0}=-\mbox{$\frac{1}{2}$}d\rho_{0}+\rho_{0}f$ denote the
density and current corresponding to the uncontrolled dynamics (1). The
following are equivalent.
* (i)
The diffusion corresponding to the optimal control $u$ is symmetrizable, with
density $\rho=\psi^{2}\rho_{0}$, where $\psi$ is as in Theorem 5.10
(normalized such that $\int_{M}\psi^{2}\rho_{0}\ dx=1$);
* (ii)
$J_{0}=\lambda\rho_{0}A$;
* (iii)
$f=-\mbox{$\frac{1}{2}$}dU+\lambda A$, for some $U\in C^{\infty}(M)$.
In particular, if the uncontrolled diffusion is symmetrizable and $A=0$, then
the controlled diffusion is symmetrizable and the density admits the
expression given under (i).
###### Proof.
Setting $\rho_{u}=\psi^{2}\rho_{0}$, we have
$\displaystyle J_{u}$
$\displaystyle=-\mbox{$\frac{1}{2}$}d\rho_{u}+\rho_{u}\left(f-\lambda
A+\frac{1}{\psi}d\psi\right)=-\mbox{$\frac{1}{2}$}\psi^{2}d\rho_{0}-\rho_{0}\psi
d\psi+\psi^{2}\rho_{0}\left(f-\lambda A+\frac{1}{\psi}d\psi\right)$
$\displaystyle=-\mbox{$\frac{1}{2}$}\psi^{2}d\rho_{0}+\psi^{2}\rho_{0}(f-\lambda
A)=\psi^{2}(J_{0}-\lambda\rho_{0}A),$
which establishes the equivalence of (i) and (ii). Representing the density
$\rho_{0}$ by $\exp(-U)$ and using (12) with $u=0$ gives the equivalence
between (ii) and (iii). ∎
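The computation in the proof above can be checked symbolically on $S^{1}$; in the sketch below $\rho_{0}$, $\psi$, $f$ and the constant $A_{0}$ are arbitrary smooth test choices, not taken from the paper.

```python
import sympy as sp

# Symbolic check, on S^1, of J_u = psi^2 (J_0 - lambda*rho_0*A) from the proof.
theta = sp.symbols('theta')
lam, A0 = sp.symbols('lambda A0', positive=True)

rho0 = sp.exp(sp.cos(theta))                      # assumed uncontrolled density
psi = 1 + sp.Rational(1, 2) * sp.sin(theta)**2    # assumed positive psi
f0 = sp.sin(2 * theta)                            # component of f = f0 dtheta

rho_u = psi**2 * rho0
u = -lam * A0 + sp.diff(psi, theta) / psi         # u = -lambda*A + d(ln psi)
J_u = -sp.Rational(1, 2) * sp.diff(rho_u, theta) + rho_u * (f0 + u)
J_0 = -sp.Rational(1, 2) * sp.diff(rho0, theta) + rho0 * f0

print(sp.simplify(J_u - psi**2 * (J_0 - lam * rho0 * A0)))   # prints 0
```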
### 5.5. Gauge invariance
For a special choice of $A$, the solution of Problem 3.13 may be related to
the solution corresponding to $A=0$ in a simple way.
###### Proposition 5.17.
Let $A_{0}\in\Lambda^{1}(M)$. For $\varphi\in C^{\infty}(M)$ and
$A=A_{0}+d\varphi$ let $(\rho_{\varphi},J_{\varphi})$ denote the solution of
Problem 3.13 with corresponding solutions $\psi_{\varphi}$ and $u_{\varphi}$
of (5.10). Then, for $\varphi\in C^{\infty}(M)$,
(19) $\rho_{\varphi}=\rho,\quad
u_{\varphi}=u,\quad\psi_{\varphi}=\exp(\lambda\varphi)\psi,\quad\mbox{and}\quad
J_{\varphi}=J,$
where $\rho$, $J$, $\psi$ and $u$ denote the solution of Problem 3.13 and
(5.10) corresponding to $A=A_{0}$.
###### Proof.
This is a matter of straightforward computation. ∎
In other words, the solution of Problem 3.13 depends (essentially) only on the
equivalence class of $A$, under the equivalence relation $A\sim B$ if and only
if $A=B+d\varphi$ for some $\varphi\in C^{\infty}(M)$.
###### Remark 5.18.
A standard way in physics to obtain gauge invariant differential operators is
to replace the derivatives with ‘long’ derivatives. This is illustrated by the
observation that the operator $H-W$ of (5.10) may be expressed, using the Hodge
star operator $\star:\Lambda^{p}(M)\rightarrow\Lambda^{n-p}(M)$,
$p=0,\ldots,n$, as
$H\psi-W\psi=\mbox{$\frac{1}{2}$}\star\left(d-\lambda
A\wedge\right)\star\left(d-\lambda
A\wedge\right)\psi+\star\left(f\wedge\star(d-\lambda A\wedge)\psi\right)-\lambda
V\psi.$
The operator $\psi\mapsto d\psi-\lambda A\wedge\psi$ is called a _‘long’
derivative_ operator. This result is easily obtained using the following
observations
$\star\left(\alpha\wedge\star\beta\right)=\langle\alpha,\beta\rangle,\quad\star
d\star d\phi=\Delta\phi,\quad\star d\star\alpha=-\delta\alpha,$
for $\alpha,\beta\in\Lambda^{1}(M)$ and $\phi\in C^{\infty}(M)$.
## 6\. Fixed density
In this section we consider the problem of fixing the density function $\rho$,
and finding a control force $u$ that realizes this density, at minimum cost.
Let $\rho\in C^{\infty}(M)$ be fixed, with $\rho>0$ on $M$ and $\int_{M}\rho\
dx=1$. Then for some constant $c_{\rho}$ we have
$\mathcal{C}(\rho,u)=c_{\rho}+\int_{M}\left\\{\frac{1}{2\lambda}||u||^{2}+\langle
A,u\rangle\right\\}\rho\ dx.$
Therefore we will consider the following problem.
###### Problem 6.1.
Minimize $\mathcal{C}(u)$ over $\Lambda^{1}(M)$, subject to the constraint
$L_{u}^{*}\rho=0$, where $\mathcal{C}(u)$ is defined by
$\mathcal{C}(u)=\int_{M}\left\\{\frac{1}{2\lambda}||u||^{2}+\langle
A,u\rangle\right\\}\rho\ dx.$
The corresponding problem in terms of the current density is the following. As
for $\mathcal{C}(u)$, terms that do not depend on $J$ are eliminated from the
cost functional.
###### Problem 6.2.
Minimize $\mathcal{C}(J)$ over $\Lambda^{1}(M)$, subject to the constraint
$\delta J=0$, where $\mathcal{C}(J)$ is defined by
(20)
$\mathcal{C}(J)=\int_{M}\left(\frac{1}{\lambda}\left\langle-f+\frac{1}{2\rho}d\rho+\lambda
A,J\right\rangle+\frac{1}{2\lambda\rho}||J||^{2}\right)\ dx.$
By an analogous argument as in Section 5 (but less involved, since we are only
optimizing over $J$), we obtain the following result: a relaxed version of
Problem 6.2 may be transformed into an elliptic PDE. Essentially, this is
obtained through variation of the Lagrangian functional
$\mathcal{L}(J,\Phi)=\mathcal{C}(J)+\int_{M}\Phi\delta J\ dx.$
###### Theorem 6.3.
Suppose $J\in\Lambda^{1}(M)$ is a local extremum of $\mathcal{C}(J)$ given by
(20) under the constraint that $\delta J=0$. Then there exists a $\Phi\in
C^{\infty}(M)$ such that
(21)
$\Delta\Phi+\left\langle\frac{1}{\rho}d\rho,d\Phi\right\rangle=-\frac{1}{\lambda\rho}\left(\mbox{$\frac{1}{2}$}\Delta\rho+\delta(\rho(f-\lambda
A))\right).$
For this $\Phi$, $J$ is given by
$J=\rho f-\mbox{$\frac{1}{2}$}d\rho-\rho\lambda(A+d\Phi),$
and the corresponding control field $u$ is given by
$u=-\lambda(A+d\Phi).$
###### Example 6.4 (Circle).
On $S^{1}$ every differential 1-form $\beta$, and in particular
$\beta=f-\lambda A$, may be written as $\beta=-\mbox{$\frac{1}{2}$}\
dU+\mbox{$\frac{1}{2}$}k\ d\theta$, where $\theta$ represents the polar
coordinate function, $U\in C^{\infty}(S^{1})$ and $k\in\mathbb{R}$; see e.g.
[21, Example 4.14]. Recall that $\delta f=-\operatorname{div}f$. Equation (21)
then reads
$\Phi^{\prime\prime}(\theta)+\frac{1}{\rho}\Phi^{\prime}(\theta)\rho^{\prime}(\theta)=-\frac{1}{2\lambda\rho}\left(\rho^{\prime\prime}(\theta)-\frac{d}{d\theta}\left(-\rho(\theta)U^{\prime}(\theta)+k\rho\right)\right).$
Based on Remark 6.5, we try a solution of the form
$\Phi=-\frac{1}{2\lambda}(\ln\rho+U-\varphi)$. Inserting this into the
differential equation, we obtain for $\varphi$ the equation
$\varphi^{\prime\prime}(\theta)+\gamma(\theta)\varphi^{\prime}(\theta)=k\gamma(\theta),$
where
$\gamma(\theta)=\frac{d}{d\theta}\ln\rho(\theta)=\frac{1}{\rho(\theta)}\rho^{\prime}(\theta)$.
Up to an arbitrary additive constant (which we put to zero), there exists a
unique periodic solution $\varphi$ to this differential equation, given by
$\varphi(\theta)=\frac{k\left(\theta\int_{0}^{2\pi}\rho(\xi)^{-1}\
d\xi-2\pi\int_{0}^{\theta}\rho(\xi)^{-1}\
d\xi\right)}{\int_{0}^{2\pi}\rho(\xi)^{-1}\ d\xi},\quad\theta\in[0,2\pi].$
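The closed-form periodic solution above can be verified numerically; in the following sketch the density $\rho$ and the constant $k$ are arbitrary positive test choices, and the residual $\varphi^{\prime\prime}+\gamma\varphi^{\prime}-k\gamma$ is evaluated with periodic finite differences.

```python
import numpy as np

# Numerical check of the closed-form periodic solution in Example 6.4
# (rho and k below are arbitrary positive test choices).
n = 4000
theta = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
h = theta[1] - theta[0]
k = 1.3
rho = np.exp(0.5 * np.cos(theta))                   # assumed positive density

inv_rho = 1.0 / rho
seg = 0.5 * (inv_rho + np.roll(inv_rho, -1)) * h    # trapezoid on each cell
I_theta = np.concatenate(([0.0], np.cumsum(seg)[:-1]))   # int_0^theta rho^-1
I_total = np.sum(seg)                                     # int_0^{2 pi} rho^-1

varphi = k * (theta * I_total - 2 * np.pi * I_theta) / I_total

d = lambda y: (np.roll(y, -1) - np.roll(y, 1)) / (2 * h)  # periodic derivative
gamma = d(np.log(rho))
residual = d(d(varphi)) + gamma * d(varphi) - k * gamma
print(np.max(np.abs(residual)))                     # small (discretization error)
```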
###### Remark 6.5 (Solution in the symmetrizable case).
If $f-\lambda A=-\mbox{$\frac{1}{2}$}dU$ for some $U\in C^{\infty}(M)$, it may
be checked that $\Phi=-\frac{1}{2\lambda}\left(\ln\rho+U\right)$ solves (21),
so that the optimal control field $u=\mbox{$\frac{1}{2}$}d(\ln\rho)-f$. In
other words, the optimal way to obtain a particular density function $\rho$ if
$f-\lambda A$ is in ‘gradient form’ is by using a control $u$ so that the
resulting force field $f+u$ is again in gradient form,
$f+u=\mbox{$\frac{1}{2}$}d(\ln\rho)$, resulting in reversible dynamics; see
also Section 5.4.
###### Remark 6.6 (Gauge invariance).
As in Section 5.5, it is straightforward to check that a solution to Problem
6.2 for $A_{\varphi}=A_{0}+d\varphi$ is given by
$\Phi_{\varphi}=\Phi-\varphi,\quad\rho_{\varphi}=\rho,\quad
u_{\varphi}=u,\quad J_{\varphi}=J,$
in terms of the solution $(\Phi,\rho,u,J)$ corresponding to the gauge field
$A_{0}$.
## 7\. Fixed current density
In this section we approach the problem of minimizing the average cost, under
the constraint that $J$ is fixed. In light of the remark just below (12), it
will be necessary to demand that $\delta J=0$, otherwise we will not be able
to obtain a solution. By (13), we may express $u$ in terms of $J$ and $\rho$
by $u=-f+\frac{1}{\rho}\left(J+\mbox{$\frac{1}{2}$}d\rho\right)$. Note that by
Lemma 3.10, the Fokker-Planck equation (6) is satisfied. This leads to the
following problem.
In the remainder of this section let $J\in\Lambda^{1}(M)$ satisfying $\delta
J=0$ be fixed.
###### Problem 7.1.
Minimize $\mathcal{C}(\rho)$ subject to the constraint $\int_{M}\rho\ dx=1$,
where
(22)
$\mathcal{C}(\rho)=\int_{M}\left(V+\frac{1}{2\lambda}\left|\left|-f+\frac{1}{\rho}\left(J+\mbox{$\frac{1}{2}$}d\rho\right)\right|\right|^{2}\right)\rho\
dx.$
###### Remark 7.2.
The constraint $\rho\geq 0$ on $M$ does not need to be enforced, since if we
find $\rho$ solving Problem 7.1 without this constraint, we may compute $u$ by
(13). Then $\rho$ satisfies $L_{u}^{*}\rho=\delta J=0$ by Lemma 3.10, so by
Proposition 3.5 and the constraint $\int_{M}\rho\ dx=1$, it follows that
$\rho>0$ on $M$.
###### Remark 7.3.
Note that by Lemma 3.11, the contribution of $A$ is determined once we fix
$J$. Therefore we may put $A=0$ in the current optimization problem.
Necessary conditions for the solution of Problem 7.1 may be obtained
rigorously in a similar manner as in Section 5, to obtain the following
result.
###### Theorem 7.4.
Suppose $\rho\in C^{\infty}(M)$ minimizes $\mathcal{C}(\rho)$ given by (22)
under the constraint that $\int_{M}\rho\ dx=1$. Then there exists a
$\mu\in\mathbb{R}$ such that
(23)
$\mbox{$\frac{1}{2}$}\Delta\phi-(W+\lambda\mu)\phi=-\frac{||J||^{2}}{2\phi^{3}},$
holds, where $\phi=\sqrt{\rho}$ and $W=\lambda
V+\mbox{$\frac{1}{2}$}||f||^{2}-\mbox{$\frac{1}{2}$}\delta f$.
###### Remark 7.5.
Equation (23) is known (at least in the one-dimensional case) as _Yermakov’s
equation_ [17].
Instead of proving Theorem 7.4 rigorously (which may be done analogously to
Section 5) we provide an informal derivation, which we hope provides more
insight to the reader. We introduce the Lagrangian
$\mathcal{L}:C^{\infty}(M)\times\mathbb{R}\rightarrow\mathbb{R}$ by
$\displaystyle\mathcal{L}(\rho,\mu)$
$\displaystyle=\int_{M}\left(V+\frac{1}{2\lambda}\left|\left|-f+\frac{1}{\rho}\left(J+\mbox{$\frac{1}{2}$}d\rho\right)\right|\right|^{2}\right)\rho\
dx+\mu\left(\int_{M}\rho\ dx-1\right).$
Varying $\mathcal{L}(\rho,\mu)$ with respect to $\rho$ in the direction
$\zeta\in C^{\infty}(M)$ gives
$\displaystyle\mathcal{L}^{\prime}(\rho,\mu)\zeta$
$\displaystyle=\int_{M}\left(V+\frac{1}{2\lambda}\left|\left|-f+\frac{1}{\rho}\left(J+\mbox{$\frac{1}{2}$}d\rho\right)\right|\right|^{2}+\mu\right)\zeta+\frac{1}{\lambda}\left\langle-f+\frac{1}{\rho}\left(J+\mbox{$\frac{1}{2}$}d\rho\right),-\frac{1}{\rho^{2}}\left(J+\mbox{$\frac{1}{2}$}d\rho\right)\zeta+\frac{1}{2\rho}d\zeta\right\rangle\rho\
dx$ $\displaystyle=\int_{M}\left(\lambda
V+\lambda\mu+\mbox{$\frac{1}{2}$}||f||^{2}-\frac{1}{2\rho^{2}}||J||^{2}-\mbox{$\frac{1}{2}$}\delta
f+\frac{1}{8\rho^{2}}||d\rho||^{2}-\frac{1}{4\rho}\Delta\rho\right)\frac{\zeta}{\lambda}\
dx,$
where we used the identities $\delta J=0$ and $\delta(h\omega)=-\langle
dh,\omega\rangle+h\delta\omega$ for $h\in C^{\infty}(M)$ and
$\omega\in\Lambda^{1}(M)$. We require that for any direction $\zeta$ the above
expression equals zero, which is the case if and only if
(24)
$-\frac{1}{4\rho}\Delta\rho+\frac{1}{8\rho^{2}}||d\rho||^{2}-\frac{1}{2\rho^{2}}||J||^{2}+W+\lambda\mu=0.$
Note that we need to solve this equation for both $\rho$ and $\mu$, in
combination with the constraint that $\rho>0$ on $M$ and $\int_{M}\rho\ dx=1$.
By substituting $\phi=\sqrt{\rho}$, equation (24) transforms into the equation
(23).
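The substitution $\phi=\sqrt{\rho}$ can also be checked symbolically; the sketch below does so on $S^{1}$ (so that $\Delta\rho=\rho^{\prime\prime}$ and $||d\rho||^{2}=(\rho^{\prime})^{2}$), treating $J$, $W$, $\lambda$ and $\mu$ as symbols, which suffices for the pointwise identity.

```python
import sympy as sp

# Symbolic check that phi = sqrt(rho) turns (24) into (23) on S^1.
theta = sp.symbols('theta')
phi = sp.Function('phi')(theta)
J, W, lam, mu = sp.symbols('J W lambda mu')

rho = phi**2
eq24 = (-sp.diff(rho, theta, 2) / (4 * rho)
        + sp.diff(rho, theta)**2 / (8 * rho**2)
        - J**2 / (2 * rho**2) + W + lam * mu)
eq23 = (sp.Rational(1, 2) * sp.diff(phi, theta, 2)
        - (W + lam * mu) * phi + J**2 / (2 * phi**3))

print(sp.simplify(eq24 * (-phi) - eq23))   # prints 0: (24)*(-phi) is (23)
```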
We may then compute the cost corresponding to $\rho=\phi^{2}$ as
$\displaystyle\mathcal{C}(\rho)$
$\displaystyle=\int_{M}\left(V+\frac{1}{2\lambda}\left|\left|-f+\frac{1}{\rho}\left(J+\mbox{$\frac{1}{2}$}d\rho\right)\right|\right|^{2}\right)\rho\
dx$
$\displaystyle=\int_{M}\frac{1}{4\lambda}\Delta\rho-\frac{1}{8\lambda\rho}||d\rho||^{2}+\frac{1}{2\lambda\rho}||J||^{2}-\frac{\rho}{2\lambda}||f||^{2}+\frac{\rho}{2\lambda}\delta
f-\mu\rho+\frac{\rho}{2\lambda}\left|\left|-f+\frac{1}{\rho}\left(J+\mbox{$\frac{1}{2}$}d\rho\right)\right|\right|^{2}\
dx$ (25)
$\displaystyle=\int_{M}\frac{1}{4\lambda}\Delta\rho+\frac{\rho}{2\lambda}\delta
f-\mu\rho-\frac{1}{\lambda}\langle f,J\rangle-\frac{\rho}{\lambda}\left\langle
f,\frac{1}{2\rho}d\rho\right\rangle+\frac{1}{\lambda\rho}\langle
J,\mbox{$\frac{1}{2}$}d\rho\rangle\ dx$
$\displaystyle=-\frac{1}{\lambda}\left(\mu\lambda+\int_{M}\langle f,J\rangle\
dx\right),$
where we used (24) in the first equality, and the following observations for
the last equality:
$\displaystyle\int_{M}\Delta\rho\ dx$ $\displaystyle=-\int\langle
d(1),d\rho\rangle\ dx=0,\quad$ $\displaystyle\int_{M}\rho(\delta f)\ dx$
$\displaystyle=\int_{M}\langle d\rho,f\rangle\ dx,$
$\displaystyle\int_{M}\frac{1}{\rho}\langle J,d\rho\rangle\ dx$
$\displaystyle=\int_{M}\langle J,d(\ln\rho)\rangle\
dx=\int_{M}(\ln\rho)(\delta J)\ dx=0,\quad$ $\displaystyle\int_{M}\rho\ dx$
$\displaystyle=1.$
We can only influence the first term in (25) by choosing $\rho$ or $\mu$, so we
see that minimizing $\mathcal{C}$ therefore corresponds to finding the largest
value of $\mu$ such that (24), or, equivalently, (23), admits a solution.
### 7.1. Symmetrizable solution – time independent Schrödinger equation
In this section we consider the special case of the above problem for zero
current density, $J=0$. By Lemma 5.15, this is equivalent to
$u+f=-\frac{1}{2}d\Psi$ for some unknown $\Psi\in C^{\infty}(M)$, with
$\rho=\exp(-\Psi)$. In other words, we demand the net force field (including
the control) to be in gradient form, and the corresponding diffusion to be
symmetrizable; see Section 5.4.
In this case (23) transforms into the linear eigenvalue problem,
(26) $\mbox{$\frac{1}{2}$}\Delta\phi-W\phi=\lambda\mu\phi.$
This is intriguing since this is in fact a time independent Schrödinger
equation for the square root of a density function, analogous to quantum
mechanics; even though our setting is entirely classical.
By (25), we are interested in the largest value of $\mu$ so that (26) has a
solution $\phi$. The optimal control field is then given by
(27) $u=\frac{1}{\phi}d\phi-f.$
###### Remark 7.6.
It is straightforward to check that if $f=-\mbox{$\frac{1}{2}$}dU$ for some
$U\in C^{\infty}(M)$, then $\phi=\exp(-\mbox{$\frac{1}{2}$}U)$ satisfies (26)
with $V=0$, $\mu=0$, resulting in $u=0$. This corresponds to the intuition
that, if $f$ is already a gradient, no further control is necessary to obtain
a symmetrizable invariant measure.
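This claim can be verified symbolically; in the sketch below the potential $U$ is an arbitrary smooth test function.

```python
import sympy as sp

# Symbolic check of Remark 7.6 on S^1: with f = -1/2 dU and V = 0, the function
# phi = exp(-U/2) satisfies (26) with mu = 0.
theta = sp.symbols('theta')
U = sp.cos(theta) + sp.Rational(1, 3) * sp.sin(2 * theta)   # assumed test U

phi = sp.exp(-U / 2)
f0 = -sp.Rational(1, 2) * sp.diff(U, theta)                 # f = f0 dtheta
# W = lambda*V + 1/2 ||f||^2 - 1/2 delta f, with V = 0 and delta f = -f0'
W = sp.Rational(1, 2) * f0**2 + sp.Rational(1, 2) * sp.diff(f0, theta)

lhs = sp.Rational(1, 2) * sp.diff(phi, theta, 2) - W * phi  # (26) with mu = 0
print(sp.simplify(lhs))                                      # prints 0
```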
###### Remark 7.7.
We may also compare the case $f=-\mbox{$\frac{1}{2}$}dU$ with the result of
Section 5.4. There we obtained that, in case $A=0$ and
$f=-\mbox{$\frac{1}{2}$}dU$, the optimization problem for unconstrained $J$
resulted in a symmetrizable solution. In other words, the constraint $J=0$
does not need to be enforced, and the solution of this section should equal
the solution obtained in Proposition 5.16. Apparently, with $\psi$ as in
Proposition 5.16, we have that $\phi^{2}=\psi^{2}\exp(-U)$.
## 8\. Discussion
In this paper we showed how stationary long term average control problems are
related to eigenvalue problems (for the unconstrained problem and the problem
constrained to a symmetrizable solution, Sections 5 and 7.1), elliptic PDEs
(for the problem with fixed density, Section 6) or a nonlinear PDE (for the
problem with fixed current density, Section 7). For this we fruitfully used
the representation of an optimal control field $u$ in terms of the density
function $\rho$ and the current density $J$. We showed in detail how an
infinite dimensional Lagrange multiplier problem may be transformed into a PDE
(Section 5). A striking relation between the classical setting and quantum
mechanics was obtained in Section 7.1.
The theory on existence of solutions and spectrum of operators is classical
and we refer the interested reader to e.g. [10, 19]. Let us again point out
the strong connection of our results with earlier work of Donsker, Varadhan
(1975) [9]; see also Remark 5.12. We will further investigate this connection
as part of our future research.
One may ask the question whether we may obtain solutions when we constrain a
certain flux $\int_{M}\langle A,J\rangle\ dx$ (see Section 4) to a given
value. In this case, one may use $\widetilde{A}=\mu A$ as a Lagrange
multiplier and use the results of Section 5 (for constrained flux) and Section
6 (for constrained flux and density) to obtain necessary conditions. As these
results did not provide us with profound insight, we aim to report on this
topic in a subsequent publication after further analysis of the problem.
## Appendix A Derivation of expression for long term average of current
density
In the physics literature [7], the current density is defined formally as
(28) $J^{i}_{t}(x)=\frac{1}{t}\int_{0}^{t}\dot{X}_{s}^{i}\delta(X_{s}-x)\ ds,$
for $x\in M$, where $\delta$ is the Dirac delta function. We will derive an
alternative expression for this quantity, using the model (2) for the
dynamics. Note that (28) formally defines a vector field that acts on
functions $f\in C^{\infty}(M)$ as
$\displaystyle J_{t}f(x)$
$\displaystyle=\frac{1}{t}\int_{0}^{t}\dot{X}_{s}^{i}\delta(X_{s}-x)\partial_{i}f(x)\
ds=\frac{1}{t}\int_{0}^{t}\dot{X}_{s}^{i}\delta(X_{s}-x)\partial_{i}f(X_{s})\
ds=\frac{1}{t}\int_{0}^{t}\delta(X_{s}-x)\partial_{i}f(X_{s})\circ
dX_{s}^{i}.$
The $\delta$-function is still problematic. We may however formally compute
the $L^{2}(M,g)$ inner product of the above expression with any $h\in
C^{\infty}(M)$ with support in a coordinate neighbourhood $U$ containing $x$.
This results in
$\displaystyle\int_{M}h(x)J_{t}f(x)\ dx$
$\displaystyle=\frac{1}{t}\int_{U}h(x)\int_{0}^{t}\delta(X_{s}-x)\partial_{i}f(X_{s})\circ
dX_{s}^{i}\ dx=\frac{1}{t}\int_{0}^{t}h(X_{s})\partial_{i}f(X_{s})\circ
dX_{s}^{i}.$
Using the relation $Y\circ dZ=Y\ dZ+\mbox{$\frac{1}{2}$}d[Y,Z]$ and (5), we
compute
$\displaystyle\int_{U}h(x)J_{t}f(x)\ dx$
$\displaystyle=\frac{1}{t}\int_{0}^{t}h(X_{s})\partial_{i}f(X_{s})\left(\overline{b}_{u}^{i}(X_{s})ds+\sigma_{\alpha}^{i}(X_{s})\
dB_{s}^{\alpha}\right)+\mbox{$\frac{1}{2}$}\frac{1}{t}\int_{0}^{t}\sigma_{\alpha}^{i}\sigma_{\alpha}^{j}\partial_{j}\left(h\partial_{i}f\right)(X_{s})\
ds$
$\displaystyle\rightarrow\int_{U}\left\\{h(x)(\partial_{i}f)(x)\left(\overline{b}_{u}^{i}(x)\right)+\mbox{$\frac{1}{2}$}g^{ij}\partial_{j}\left(h\partial_{i}f\right)(x)\right\\}\
\mu_{u}(dx)\quad(\mbox{almost surely as}\ t\rightarrow\infty)$
$\displaystyle=\int_{U}h(x)\left\\{\rho_{u}(\partial_{i}f)\left(\overline{b}_{u}^{i}\right)-\mbox{$\frac{1}{2}$}\frac{1}{\sqrt{|g|}}\left(\partial_{i}f\right)\partial_{j}\left(\rho_{u}\sqrt{|g|}g^{ij}\right)\right\\}(x)\
dx,$
using Proposition 3.6 and the law of large numbers for martingales ([16,
Theorem 3.4], or [3, Section 6.4.1] for a proof). We find that the long term
average vector field $J$ has components
$\displaystyle J^{i}$
$\displaystyle=\rho_{u}\overline{b}_{u}^{i}-\mbox{$\frac{1}{2}$}\frac{1}{\sqrt{|g|}}\partial_{j}\left(\rho_{u}\sqrt{|g|}g^{ij}\right)=\rho_{u}\left(b_{u}^{i}+\mbox{$\frac{1}{2}$}\sigma_{\alpha}^{k}\left(\partial_{k}\sigma_{\alpha}^{i}\right)\right)-\mbox{$\frac{1}{2}$}g^{ij}\left(\partial_{j}\rho_{u}\right)-\mbox{$\frac{1}{2}$}\rho_{u}\frac{1}{\sqrt{|g|}}\partial_{j}\left(\sqrt{|g|}g^{ij}\right)$
$\displaystyle=\rho_{u}b_{u}^{i}-\mbox{$\frac{1}{2}$}g^{ij}\left(\partial_{j}\rho_{u}\right),$
where the last equality is a result of the identity
$\left(\nabla_{\sigma_{\alpha}}\sigma_{\alpha}\right)^{i}=\sigma_{\alpha}^{k}\left(\partial_{k}\sigma_{\alpha}^{i}\right)-\frac{1}{\sqrt{|g|}}\partial_{j}\left(\sqrt{|g|}g^{ij}\right),$
which may be verified by straightforward calculation. Lowering the indices
gives the differential form
$J_{u}=-\mbox{$\frac{1}{2}$}d\rho+\rho(f+u).$
Note the abuse of notation: both differential form and vector field are
denoted by $J$. The vector field denoted by $J$ has no relevance in the
remainder of this work.
## References
* [1] Ari Arapostathis, Vivek S. Borkar, and Mrinal K. Ghosh. Ergodic control of diffusion processes, volume 143 of Encyclopedia of Mathematics and its Applications. Cambridge University Press, Cambridge, 2012.
* [2] V. I. Arnold. Mathematical methods of classical mechanics, volume 60 of Graduate Texts in Mathematics. Springer-Verlag, New York, second edition, 1989. Translated from the Russian by K. Vogtmann and A. Weinstein.
* [3] J. Bierkens. Long Term Dynamics of Stochastic Evolution Equations. PhD thesis, Universiteit Leiden, 2009.
* [4] J. Bierkens and H.J. Kappen. Explicit solution of relative entropy weighted control. submitted; arXiv:1205.6946, 2012.
* [5] Vivek S. Borkar and Mrinal K. Ghosh. Ergodic control of multidimensional diffusions. I. The existence results. SIAM J. Control Optim., 26(1):112–126, 1988.
* [6] Vladimir Y. Chernyak, Michael Chertkov, J. Bierkens, and H.J. Kappen. Gauge-invariant and current-density controls. In preparation.
* [7] Vladimir Y. Chernyak, Michael Chertkov, Sergey V. Malinin, and Razvan Teodorescu. Non-equilibrium thermodynamics and topology of currents. J. Stat. Phys., 137(1):109–147, 2009.
* [8] G. Da Prato and J. Zabczyk. Ergodicity for Infinite Dimensional Systems. Cambridge University Press, 1996.
* [9] Monroe D. Donsker and S. R. S. Varadhan. On a variational formula for the principal eigenvalue for operators with maximum principle. Proc. Nat. Acad. Sci. U.S.A., 72:780–783, 1975.
* [10] Lawrence C. Evans. Partial differential equations, volume 19 of Graduate Studies in Mathematics. American Mathematical Society, Providence, RI, second edition, 2010.
* [11] Jin Feng and Thomas G. Kurtz. Large deviations for stochastic processes, volume 131 of Mathematical Surveys and Monographs. American Mathematical Society, Providence, RI, 2006.
* [12] Wendell H. Fleming. Logarithmic transformations and stochastic control. In Advances in filtering and optimal stochastic control (Cocoyoc, 1982), volume 42 of Lecture Notes in Control and Inform. Sci., pages 131–141. Springer, Berlin, 1982.
* [13] Phillip Griffiths and Joseph Harris. Principles of algebraic geometry. Wiley Classics Library. John Wiley & Sons Inc., New York, 1994. Reprint of the 1978 original.
* [14] Nobuyuki Ikeda and Shinzo Watanabe. Stochastic differential equations and diffusion processes, volume 24 of North-Holland Mathematical Library. North-Holland Publishing Co., Amsterdam, second edition, 1989.
* [15] David G. Luenberger. Optimization by vector space methods. John Wiley & Sons Inc., New York, 1969.
* [16] X. Mao. Stochastic Differential Equations and Applications. Horwood, Chichester, 1997.
* [17] Andrei D. Polyanin and Valentin F. Zaitsev. Handbook of exact solutions for ordinary differential equations. Chapman & Hall/CRC, Boca Raton, FL, second edition, 2003.
* [18] Per Rutquist, Claes Breitholtz, and Torsten Wik. On the infinite time solution to state-constrained stochastic optimal control problems. Automatica, 44(7):1800 – 1805, 2008.
* [19] M. E. Taylor. Partial Differential Equations: Basic Theory. Springer, 1996.
* [20] Emanuel Todorov. Linearly-solvable markov decision problems. In NIPS, pages 1369–1376, 2006.
* [21] Frank W. Warner. Foundations of differentiable manifolds and Lie groups, volume 94 of Graduate Texts in Mathematics. Springer-Verlag, New York, 1983. Corrected reprint of the 1971 edition.
|
arxiv-papers
| 2013-03-01T09:59:31 |
2024-09-04T02:49:42.264750
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"authors": "Joris Bierkens, Vladimir Y. Chernyak, Michael Chertkov, Hilbert J.\n Kappen",
"submitter": "Joris Bierkens",
"url": "https://arxiv.org/abs/1303.0126"
}
|
1303.0157
|
# RoyalRoad: User Evolution-Path Recommendation via Tensor Analysis
## 1 Introduction
There have been many studies on item recommendation. For example, traditional
item recommendation uses association rules to leverage the relationship between
items [zhou2007association, buczak2010fuzzy]. Collaborative filtering exploits
co-purchasing behaviours to recommend items bought by users with similar
behaviours [zeng2003similarity, ren2012efficient, li2010improving]. Another
approach is to consider the structure of social networks with multiple
relational domains and utilize the random walk techniques to learn the users’
preference [jiang2012social]. Also, many studies [zheleva2010statistical,
schedl2012mining, volz2006impact, mcfee2012million] have tried to model the
behavior of music listening via various contextual factors such as time, mood,
location and weather.
Nowadays, many people are accustomed to rating items with comments on
websites, e.g. video like/dislike on Youtube, movie rating on IMDB/MovieLens,
commodity reviews on Amazon, and answer rating on Yahoo Q&A, and a recent line
of studies has introduced effective algorithms to quantify review levels of a
user according to her review scores and the quality of corresponding contents.
Moreover, jure show that the learning curves of users evolve with the increase
of users’ experiences over time, and preferences of users change with their
review levels. However, even with the learning curve, none of the literature
in recommendation system have considered the user evolution patterns to
recommend a series of items that enlightens users on the taste from an amateur
to an expert.
Using the learnt learning curves of a large amount of users not only helps
websites recommend the items users like but also teaches users how to evolve
to higher level of tastes and thus keeps users being interested in the domain,
e.g., jazz music and Fauvism painting. Otherwise, if users do not evolve with
experience, they may be frustrated and leave that domain [jure]. Since the
user evolution paths are not considered in existing recommendation systems,
i.e., the tastes of users are assumed to remain identical through time, we
propose an evolution path-based recommendation framework named RoyalRoad to
lead users to a royal road via tensor analysis. We also design a topic-based
social trust regularization to integrate social network information with cross
domain data. The challenges of our problem are summarized as follows.
* •
_User modeling._ It is important to capture the essential characteristics of
learning curves to determine the similarity between any two learning curves so
that classification or clustering can be applied for facilitating the
recommendation. In psychology, the characteristics of learning curve have been
well studied. For instance, the learning curve through time contains three
phases, i.e., starting the motor, pointing at leaves, and flushing. However,
none of this prior knowledge has been exploited in user modeling. How to
efficiently exploit the prior knowledge and build a general model is
challenging.
* •
_Cold start._ The cold start problem has been used to describe the situation
when recommendations are required for items but no one has yet rated. It also
has been used to describe the situation when almost nothing is known about
customer preferences. In evolution-path recommendation problem, the cold start
problem also exists since the item rating of new users are also unknown and
the level of new users should be discovered as soon as possible. Moreover, the
labelled data may be sparse in the target domain. Therefore, how to use
transfer learning to leverage the knowledge from other domains is helpful.
Moreover, how to fuse the social structure into the recommendation is also
challenging.
* •
_Incremental update._ Since the user population on review websites is usually
huge, the incoming amount of data can be regarded as streaming input data.
However, it is difficult to update the learning-curve classes on the fly with
the streaming input data since clustering and classification are time-
consuming. How to incrementally update the new data is important and
challenging.
To the best of our knowledge, this is the first study that combines cross
domain recommendation and learning curve in a unified framework. The expected
contributions of our problem are summarized as follows.
* •
We will propose a new recommendation framework, namely RoyalRoad, to
facilitate the recommendation system for helping user evolve from an amateur
to an expert.
* •
We design algorithms to find the matching-path solution with an approximation
ratio. Experimental results demonstrate that RoyalRoad outperforms the baseline
solution in both solution quality and running time.
* •
Experimental results show that RoyalRoad outperforms the other state-of-the-
art recommendation systems.
## 2 Preliminary
### 2.1 Scenario
In the following, we present several scenarios that RoyalRoad can be applied
to.
_E-Learning:_ Recently, a variety of e-learning websites have emerged, e.g.
Coursera [Coursera] and edX [edX]. A successful e-learning system
involves a systematic process to create a thorough online environment where
learning is actively fostered and supported. One of the challenges is to
evaluate the learners and guide them to learn knowledge level-by-level
[e-learning]. However, with the rapid growth of number of online learners, it
is infeasible to evaluate learners manually since it’s time-consuming or
manpower-consuming. Therefore, with the evolution paths leveraged from crowds,
RoyalRoad automatically recommends the best learning path for each user and
thus resolves the problem of lack of manpower in guidance of online learners.
_Music Recommendation:_ Nowadays, many people are accustomed to listening
music with online commercial music streaming services, like Spotify [Spotify],
last.fm [lastfm], and JAZZRADIO [JAZZRADIO]. With the listening history and
friend information, current services recommend music to users by song
popularity, association rules, or collaborative filtering (CF). However, these
approaches do not consider the evolution of users, that is, the tastes of
users change as they accumulate experience. Moreover, the recommendations do
not guide users toward a better taste. RoyalRoad guides users to become more
experienced, which is also a win-win situation in business, since users will
stay interested in the service if they evolve with time, and users with higher
experience level will more easily appreciate music/products with high price.
### 2.2 Related work
Traditional item recommendation uses association rules to leverage the
relationship between items [zhou2007association, buczak2010fuzzy].
Collaborative filtering exploits co-purchasing behaviours to recommend items
bought by users with similar behaviours [zeng2003similarity,
ren2012efficient, li2010improving]. Another approach is to consider the
structure of social networks with multiple relational domains and utilize the
random walk techniques to learn the users’ preference [jiang2012social]. Also,
many studies [zheleva2010statistical, schedl2012mining, volz2006impact,
mcfee2012million] have tried to model the behavior of music listening via
various contextual factors such as time, mood, location and weather.
overview proposed that two common genres in recommendations are collaborative
filtering and content-based recommendation. However, content-based
recommendation has extra problems like overspecialization, which not only
limits users’ choice but also the possible profits earned by companies, and
the big problem of cold start, that is, it cannot do anything for users
who are new to the world of products, since their backgrounds (e.g., number of
reviews and ratings) are insufficient.
Recommendation with a learning curve is a brand new idea proposed by jure, but
they did not use the learning curve itself; instead they used the learnt
experience level to do the recommendation. In addition, the assumption of a
nondecreasing experience level which progresses over time is not general
according to the learning curve in psychology, since the curve can decrease or
oscillate [Learning_curve]. In addition, data from RateBeer and BeerAdvocate
may have bias due to price. Users who are at the highest stage of personal
evolution may be underestimated by the system because they never buy costly
products due to limited budget.
### 2.3 Big Data Challenges
This work can be implemented in various kind of datasets such as movie
(Netflix, EachMovie, and MovieLens), music (Youtube), book (Book-Crossing),
and publication network (DBLP bibliography), and since the data of all these
products is now combined with the social network, therefore additional actions
and information can be used to find the interesting relationship, and the data
becomes large-scale and has various attributes.
For example, Youtube is now the most popular platform for sharing videos in the
world. More than 1 billion unique users visit YouTube each month,
and over 6 billion hours of video are watched each month on YouTube: this is
equal to almost an hour for every person on Earth, and this figure is still
growing, 50% more than last year [you_stats]. Comments, shares, and likes
are made by 100 million people every week, and millions of videos are
favorited every day [likes]. The likes and dislikes pressed by users on
Youtube can also be considered as reviews or ratings, and thus the multi-
aspect information and these social actions from users to videos is quite
interesting to people.
However, the number of social actions on Youtube grows rapidly. The fast-
accumulating data and its range of data types and sources become a challenge to
process, and the problem now falls in the field of big data, characterized by
volume, variety, and velocity. Here, we address the issues of big data with the
framework of Hadoop and MapReduce, by which mining and analyses are performed
in parallel and become efficient.
There are already some related works that try to adjust the classic methods in
data mining and make them run efficiently in distributed systems. For
big data, given the variety, namely the number of types of data attributes,
tensor analysis is a suitable method and can solve several problems in
information retrieval, market basket analysis, web and social networks, etc.
sun2006beyond propose dynamic tensor analysis (DTA), which provides a compact
summary for high-order and high-dimensional data and also reveals hidden
correlations. Moreover, they also propose streaming tensor analysis (STA),
which is fast and gives a streaming approximation to DTA. Although they only
implemented the improvement on a single desktop, these algorithms are worth
referencing for our incremental update problem.
kang2012gigatensor move tensor analysis to the distributed setting via the
framework of Hadoop and MapReduce, which makes tensor analysis scale up and
able to deal with the vast volume of big data. By using the GIGATENSOR
algorithm provided by [kang2012gigatensor] and combining it with a streaming
processing method like [sun2006beyond], our work can face all the challenges
posed by big data.
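As a minimal illustration of the kind of tensor analysis referred to above, the following sketch factorizes a synthetic (user, item, time) tensor with a plain alternating-least-squares CP decomposition. This is illustrative only; it is not DTA, STA or GigaTensor, and the dimensions and rank are arbitrary choices.

```python
import numpy as np

# Toy CP/PARAFAC decomposition of a synthetic (user, item, time) tensor via ALS.
rng = np.random.default_rng(0)
I, J, K, R = 20, 15, 10, 3                         # users, items, time bins, rank

def khatri_rao(U, V):
    # column-wise Kronecker product; row index pairs (u, v) with u varying slowest
    return np.einsum('ur,vr->uvr', U, V).reshape(-1, U.shape[1])

A, B, C = rng.random((I, R)), rng.random((J, R)), rng.random((K, R))
X = np.einsum('ir,jr,kr->ijk', A, B, C)            # synthetic low-rank rating tensor

Ah, Bh, Ch = rng.random((I, R)), rng.random((J, R)), rng.random((K, R))
for _ in range(200):                               # ALS sweeps
    Ah = X.reshape(I, -1) @ np.linalg.pinv(khatri_rao(Bh, Ch).T)
    Bh = np.moveaxis(X, 1, 0).reshape(J, -1) @ np.linalg.pinv(khatri_rao(Ah, Ch).T)
    Ch = np.moveaxis(X, 2, 0).reshape(K, -1) @ np.linalg.pinv(khatri_rao(Ah, Bh).T)

Xh = np.einsum('ir,jr,kr->ijk', Ah, Bh, Ch)
print(np.linalg.norm(X - Xh) / np.linalg.norm(X))  # relative error; small after convergence
```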
### 2.4 Experimental Results
RoyalRoad leverages information from different source domains and can be
evaluated on real datasets, such as movie (IMDB, Netflix, EachMovie,
and MovieLens), music (Youtube and last.fm), book (Book-Crossing), Q&A (Stack
Overflow and Yahoo! Q&A), and publication network (DBLP bibliography) data, to
facilitate learning-curve modeling, since the learning curve is the common
feature among all datasets. Therefore, two sets of experiments should be
employed. First, the level of objects in the target domain should be rated.
There are two possible ways to achieve this goal. We can use the rating ground
truth, e.g., movie ratings in IMDB. If the website of the dataset does not
contain any rating mechanism, we can make the assumption that the tastes of
experienced users are better than those of rookies, which is supported by
theory in psychology since the learning-curve model is monotonically
increasing. The second part of the experiments is to evaluate the
recommendation results of RoyalRoad. By separating the records of each user
into a training set and a testing set, we
predict the behaviours of target users according to the knowledge extracted by
transfer learning from other sources.
|
arxiv-papers
| 2013-03-01T12:47:07 |
2024-09-04T02:49:42.276703
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"authors": "Hong-Han Shuai",
"submitter": "Hong-Han Shuai",
"url": "https://arxiv.org/abs/1303.0157"
}
|
1303.0162
|
11institutetext: Institut d’Astrophysique et de Géophysique de l’Université de
Liège, Allée du 6 Août 17, 4000 Liège, Belgium 22institutetext: Observatoire
de Paris, LESIA, CNRS UMR 8109, F-92195 Meudon, France 33institutetext: Georg-
August-Universität Göttingen, Institut für Astrophysik, Friedrich-Hund-Platz
1, D-37077 Göttingen, Germany
33email: [email protected]
# Non-perturbative effect of rotation
on dipolar mixed modes in red giant stars
R-M. Ouazzani 11 2 2 M.J. Goupil 22 M-A. Dupret 11 J.P. Marques 3322
(Received , 2012; accepted , 2013)
###### Abstract
Context. The space missions CoRoT and Kepler provide high quality data that
allow us to test the transport of angular momentum in stars by the seismic
determination of the internal rotation profile.
Aims. Our aim is to test the validity of the seismic diagnostics for red giant
rotation that are based on a perturbative method and to investigate the
oscillation spectra when the validity does not hold.
Methods. We use a non-perturbative approach implemented in the ACOR code
(Ouazzani et al. 2012) that accounts for the effect of rotation on pulsations,
and solves the pulsation eigenproblem directly for dipolar oscillation modes.
Results. We find that the limit of the perturbation to first order can be
expressed in terms of the rotational splitting compared to the frequency
separation between consecutive dipolar modes. Above this limit, non-
perturbative computations are necessary but only one term in the spectral
expansion of modes is sufficient as long as the core rotation rate remains
significantly smaller than the pulsation frequencies. Each family of modes
with different azimuthal symmetry, $m$, has to be considered separately. In
particular, in case of rapid core rotation, the density of the spectrum
differs significantly from one $m$-family of modes to another, so that the
differences between the period spacings associated with each $m$-family can
constitute a promising guideline toward a proper seismic diagnostic for
rotation.
###### Key Words.:
asteroseismology - stars: interiors - stars: oscillations
## 1 Introduction
Seismic measurements of rotation profiles inside the Sun as well as stars
provide tight constraints on models of transport of angular momentum
(Pinsonneault et al., 1989; Zahn, 1992; Zahn et al., 1997; Talon & Charbonnel,
2008, and references therein). In particular, stars in late stages of
evolution, due to the highly condensed core, oscillate with non-radial modes
that have a mixed character: they behave as p modes in the envelope and as g
modes in the core. These modes, also known as mixed modes, are of particular
interest for the determination of the rotation profile throughout the star, as
they carry the signature of the star’s innermost layers and are detectable at
the surface. The CoRoT (Baglin et al., 2006) and _Kepler_ (Borucki et al.,
2010) spacecrafts have dramatically improved the quality of the available
asteroseismic data. Several recent studies reported the detection of mixed
modes that are split by rotation in a subgiant (Deheuvels et al., 2012) and in
several red giants (Beck et al., 2012; Mosser et al., 2012b, a) observed with
_Kepler_. A large number of these stars exhibit frequency spectra that show a
quite simple structure where symmetric patterns around axisymmetric modes are
easily identified. They are interpreted as multiplets of modes split by
rotation. The rotational splittings, i.e. the frequency spacing related to the
lifting of degeneracy caused by rotation, are then used to determine the core
rotation. The values of the corresponding splittings are quite small and the
use of the lowest order approximation to derive the splittings from stellar
models can be justified. Such studies led to the determination of unexpectedly
low central rotation frequencies (of few hundreds of nHz). These results are
in strong disagreement with the core rotation frequencies predicted by
evolutionary models, which are of the order of few dozens of $\rm\mu$Hz
(Eggenberger et al., 2012; Marques et al., 2013). They show that the transport
processes currently included in stellar models are not able to spin down the
core of red giant stars enough to explain the slow core rotation observed in red giants.
On the other hand, a large set of red giant stars show complex frequency
spectra (Mosser et al., 2012a), in particular with non symmetric multiplets
and therefore are likely rotating fast. Their rotation must then be
investigated with non-perturbative methods.
In this context, we first report on the relevance of using a first order
approach for the inference of rotation from seismic spectra of red giant stars
with slowly to rapidly rotating cores. When the first-order approach is not
valid, we adopt the non-perturbative approach in order to shed light on the
behaviour of split mixed modes in red giant spectra.
## 2 Theoretical frequency spectra for red giants
Figure 1: Pulsation frequencies in the inertial frame $\nu$ (in $\mu$Hz)
versus core rotation frequency $\Omega_{c}/2\pi$ (in $\mu$Hz) for dipolar
multiplets. Red curves refer to $\rm m=-1$ (prograde) modes, green curves to
$\rm m=0$ modes (axis-symmetric) , and blue curves to $\rm m=1$ (retrograde)
modes, for central rotation frequency ranging from 0 to 100 $\mu$Hz.
This work is based on the study of a model (model M1, with mass 1.3 M⊙ and
radius 3.6 R⊙), at the bottom of the red giant branch. The stellar model is
computed with the CESTAM code (Code d’Evolution Stellaire, avec Transport,
Adaptatif et Modulaire, Marques et al., 2013). Transport of angular momentum
induced by rotation is included according to Zahn (1992). The central rotation
rate in this model is ${\rm\Omega_{c}/2\pi\simeq 180\mu}$Hz, while the surface
convective region rotates at a rate of $1\,\mu$Hz. The distortion of the model
due to the centrifugal force can be neglected everywhere in red giant stars.
For the model M1, scaling laws (Kjeldsen & Bedding, 1995) give a frequency of
maximum power around $\rm\nu_{max}=289\,\rm\mu$Hz and a large separation of
$\Delta\nu=23\,\rm\mu$Hz. We then compute frequencies ranging between
$\rm\nu_{max}\pm 2\Delta\nu$. In this frequency range, the impact of the
Coriolis force remains small except in the very inner layers of the star where
it can significantly affect the modes. In order to investigate the effect of
core rotation on the frequency spectrum, we compute sets of frequencies for
model M1 for a sequence of rotation profiles. This sequence is obtained by
dividing the whole rotation profile given by CESTAM for model M1 by constant
factors. The oscillation frequencies are calculated by the non-perturbative
pulsation code ACOR (Adiabatic Code of Oscillations including Rotation,
Ouazzani et al., 2012). The eigenmodes are obtained as a result of the
coupling of spherical harmonics. In what follows, M coupling terms means
expansion on the $\ell=1,3,...,2\rm M+1$ spherical harmonics for the scalars
and the poloidal velocity component and $\ell=2,4,...,2\rm M+2$ for the
toroidal velocity component.
Figure 1 shows the frequencies of several dipolar ($\ell=1$) multiplets with
increasing rotation frequency, computed using one coupling term ($M=1$) in the
spectral expansion. Starting at low rotation rate, due to the combined action
of the Doppler effect and the Coriolis force, in the inertial frame, prograde
($m=-1$) modes are shifted towards higher frequencies, whereas retrograde
($m=+1$) modes are shifted towards lower frequencies. Crossings between modes
of different symmetry start to occur at $\Omega_{c}/2\pi\simeq 8\,\mu$Hz
(for model M1). As a result, for core rotation frequencies above $8\,\mu$Hz,
modes of different $m$ are no longer gathered in their original multiplets, i.e.
triplets of modes which have the same degenerate frequency without rotation.
The choice of $M=1$ coupling term in the spectral expansion is indeed
sufficient for most of the rotation profiles investigated here. This is
illustrated in Fig. 2 which shows the impact of the number of spherical
harmonics used in the eigenmode expansion. Fig. 2 presents the results of non-
perturbative calculations using three coupling terms ($M=3$). Most modes are
clearly dominated by their $\ell=1$ component (Fig. 2, top). Figure 2 (bottom)
shows that the frequency difference between calculations including one and
three spherical harmonics is smaller than the frequency difference between
two consecutive mixed modes (by two orders of magnitude for
$\Omega_{c}/2\pi<100\,\mu$Hz). From now on, we adopt $M=1$.
Figure 2: Top: Ratio between the contributions to kinetic energy of the
$\ell=3$ component and the $\ell=1$ component of an $\ell=1$-dominated mode.
Bottom: Difference between the pulsation frequencies computed with three
coupling terms ($M=3$) and with one coupling term ($M=1$), divided by the
frequency separation between two consecutive mixed modes. Computed for the
triplet indicated in thick lines in Fig. 1.
The trapping of modes essentially depends on their frequency (see for instance
Unno et al., 1989). As seen in Fig. 1, when $\Omega_{c}$ increases, so does
the frequency difference between the three members of a triplet and the
trapping of these members can be significantly different, to such an extent
that they end up with very different p-g nature. This is illustrated in Fig.
3, where the kinetic energy corresponding to modes of different $m$ around
$\nu_{max}$ is plotted. It gives an indication of the p-g nature of modes: the
p-dominated modes (referred to as p-m modes) correspond to the minima of energy,
while the g-dominated ones (g-m modes) are associated with the maxima. The
three modes circled in black belong to the same original multiplet, and show
different p-g nature. The prograde and the retrograde modes are g-m modes,
while the $m=0$ one is a p-m mode. This change of nature induced by rotation,
which depends on the azimuthal order, can also be characterized by the number
of nodes in the p-mode cavity ($\rm n_{p}$) and in the g-mode cavity ($\rm
n_{g}$) (calculated according to the Cowling approximation, Cowling, 1941). As
shown in Fig. 3, the $\rm n_{p}$ and $\rm n_{g}$ values of the members of an
original triplet are modified differently. These gains (for $m=0,1$ modes) or
losses (for $m=-1$ modes) of nodes occur during gravito-acoustic avoided
crossings. When the rotation rate increases, if the frequencies of two modes
of the same symmetry become very close, they avoid crossing each other and
exchange nature. These avoided crossings need to be taken into account all the
more because they mainly affect the p-m modes that are the most likely to be
observed. Therefore, modes of different azimuthal order $m$ probe differently
the stellar interior. Even if they belong to the same triplet with radial
order $n$ and degree $\ell$, they can be of very different nature, and using
the rotational splitting
$\delta\omega_{n,\ell}=(\omega_{n,\ell,m}-\omega_{n,\ell,-m})/2m$ to determine
the rotation rate is therefore questionable for red giants.
## 3 Slow to moderate core rotating red giants
For models with $\Omega_{c}/2\pi\leq$ 8 $\rm\mu$Hz (see Fig. 1), the
frequencies behave linearly with respect to $\Omega_{c}$. In this range, the
sectorial modes ($\ell=\,\mid m\mid$) are symmetrically distributed around the
axisymmetric modes, and have the same radial order.
Figure 3: For a central rotation frequency of 20$\rm\mu$Hz, kinetic energy of
modes around a p-m mode as a function of the mode frequency. The three modes
circled in black belong to the same original triplet. Inside the parentheses
are indicated their number of nodes in the p cavity and in the g cavity.
In this case, a first order perturbation approach approximates well the
effects of rotation on the mode frequencies. The pulsation $\omega_{n,\ell,m}$
is then given by
$\omega_{n,\ell,m}\,=\,\omega_{n,\ell,0}\,+\,m\delta\omega_{n,\ell}^{1}$
(Cowling & Newing 1949 and Ledoux 1949) where the first order rotational
splitting $\delta\omega_{n,\ell}^{1}$ is expressed as a weighted measure of
the star rotation rate:
$\delta\omega_{n,\ell}^{1}\,=\,\int_{0}^{R}\,K_{n,\ell}(r)\,\Omega(r)\,\rm
dr$, where $K_{n,\ell}(r)$ are the rotational kernels of the modes. They
depend on the equilibrium structure and on the eigenfunctions of the
unperturbed modes (see e.g. Goupil, 2009, and references therein). This
formulation relies on two main assumptions. First, only the Coriolis force is
significant and it is enough to account for its contribution to order
$O(\Omega/\omega_{n,\ell})$. Second, it is derived from a variational
principle (see e.g. Lynden-Bell & Ostriker, 1967), which requires that the
eigenfunction of the mode perturbed by rotation is close to the eigenfunction
of the corresponding unperturbed mode. This is the case for model M1 for
$\Omega_{c}/2\pi<8\,\mu$Hz.
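To make the weighted-measure character of $\delta\omega_{n,\ell}^{1}$ concrete, here is a minimal numerical sketch (in Python; the kernel and the two-zone rotation profile are illustrative assumptions, not quantities from model M1):
```python
# Minimal sketch (illustrative, not the authors' code): the first-order splitting
# as a weighted average of the rotation profile,
# delta_omega = int_0^R K(r) Omega(r) dr.
import numpy as np

r = np.linspace(0.0, 1.0, 5000)                  # fractional radius
K = np.exp(-(r / 0.05) ** 2)                     # hypothetical core-peaked (g-dominated) kernel
K /= np.trapz(K, r)                              # normalised to unit integral, for illustration only
Omega = np.where(r < 0.1, 2 * np.pi * 20e-6,     # hypothetical fast core: 20 microHz
                 2 * np.pi * 1e-6)               # hypothetical slow envelope: 1 microHz

delta_omega = np.trapz(K * Omega, r)             # first-order splitting (rad/s)
print("splitting =", delta_omega / (2 * np.pi) * 1e6, "microHz")
```
Because the assumed kernel is concentrated in the core, the resulting splitting is close to the assumed core rotation frequency, which is the behaviour expected for g-m modes.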
Between 8 and 20 $\rm\mu$Hz (Fig. 1), the mode frequencies seem to behave
linearly with $\Omega_{c}$, but are no longer gathered in their original multiplets.
Figure 4 (top) displays the apparent rotational splittings, taking half the
difference between the closest modes of opposite azimuthal order $m=\pm 1$.
The values taken by the apparent rotational splittings range from a few hundred
nHz to 2 $\rm\mu$Hz. This is of the order of the splittings measured in
observed spectra. Because the apparent multiplets occur by ‘accident’, the
curve in Fig. 4 (top) does not – and usually cannot – follow the particular ‘V’
pattern around the p-m modes as observed for some slowly rotating red giants
(see for observations Beck et al. 2012, Mosser et al. 2012b). This ‘V’ pattern
has the same origin as the modulation seen for mode inertia (Goupil et al.
2013) that is due to the trapping of modes (see Dziembowski et al., 2001;
Dupret et al., 2009). The ‘V’ pattern can then constitute a strong indication
that the selection of multiplet has been done correctly in observed spectra.
Figure 3 confirms that modes which belong to the same original multiplet
(circled in black) are found in different apparent multiplets, and show quite
different p-g nature. Fig. 4 (bottom) compares the original splittings –i.e.
half the differences between frequencies of modes from the same original
triplet with opposite $m$– with the splittings given by the first order
approach. The g-m modes (e.g. at 277 $\mu$Hz and 303 $\mu$Hz) have the largest
splittings. For such modes, both computations give very close results (with a
discrepancy of $\sim 4\%$). The p-m modes (e.g. at 290 $\mu$Hz and 315
$\mu$Hz) have the smallest splittings and show the expected ‘V’ shape
variation. For these modes, the discrepancies can reach $30\%$. This is due to
the trapping of these modes which differs from one $m$ to the other. Note that
original splittings including three terms ($M=3$) in the non-perturbative
calculations give relative differences lower than $10^{-6}$ when compared with
results of computations using two $Y_{\ell}^{m}$ (Fig. 2).
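For readers who want to reproduce this pairing on their own frequency lists, a minimal sketch follows (illustrative Python with made-up frequencies; this is not the analysis pipeline used for Fig. 4): each $m=-1$ mode is paired with the closest $m=+1$ mode and half the frequency difference is taken as the apparent splitting.
```python
# Illustrative sketch only: "apparent" rotational splittings obtained by pairing
# each m = -1 mode with the closest m = +1 mode and halving their frequency
# difference. The frequency lists are hypothetical numbers, not ACOR output.
import numpy as np

nu_m_minus = np.array([288.1, 290.4, 292.9])     # m = -1 (prograde) frequencies, microHz
nu_m_plus  = np.array([286.0, 289.7, 291.8])     # m = +1 (retrograde) frequencies, microHz

apparent = [0.5 * abs(nu - nu_m_plus[np.argmin(np.abs(nu_m_plus - nu))])
            for nu in nu_m_minus]
print(apparent)   # a true splitting only if the paired modes share the same original multiplet
```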
Figure 4: For the model M1 and a central rotation frequency of $20\rm\mu$Hz,
Top: spacing between two closest modes of opposite azimuthal order $\rm m=\pm
1$, i.e. apparent rotational splitting. Bottom: spacing between prograde and
the retrograde modes which belong to the same original multiplet (grey,
crosses) and splittings computed with the first order perturbative method
(green, plus signs).
Therefore, for a core rotation between 8 and 20 $\rm\mu$Hz, the multiplets can be
assumed to be correctly selected if the splittings given by the apparent
triplets do follow the ‘V’ pattern. If so, the first-order approach
provides an order-of-magnitude estimate of the core rotation rate, but non-
perturbative modeling is required for accurate quantitative conclusions on the
rotation profile from p-m modes.
## 4 Rapid core rotating red giants
For $\Omega_{c}/2\pi>20\,\mu$Hz, the mode frequencies no longer behave linearly with the
rotation rate (see Fig. 1). This is due not only to higher order effects that
come into play, but also to the trapping of modes that is modified by
rotation. Modes which belong to the same original multiplet do not have the
same radial distribution. Under these circumstances, the notion of rotational
splitting as defined by $\delta\omega_{n,\ell}^{1}$ is no longer relevant, and
cannot be simply related to the rotation profile. Figure 5 (top) shows the
kinetic energy of pulsating modes for a core rotation rate of 140 $\rm\mu$Hz.
The large separation, as measured by the difference between consecutive p-m
modes with minimum kinetic energies, is conserved. The higher kinetic energy
of prograde modes indicates that they are of a more g-like nature than the retrograde or
axisymmetric ones. This is due to the shift in frequency induced by rotation
that brings, in the same frequency range, modes which belong to very different
parts of the zero rotation spectrum (see Fig. 1).
Figure 5 (top) shows that, in the same range of frequency, there are more
$m=-1$ modes than $m=0$ modes, and in turn, there are more $m=0$ modes than
$m=+1$ ones. In order to highlight this difference in distribution, the period
spacings are plotted for each $m$ value in Fig. 5 (bottom). In this diagram,
the families of modes of different azimuthal orders clearly show different
values of period spacing, ranging from the lowest for the prograde modes to
the highest for the retrograde ones. Note that this phenomenon appears only
for a rotation high enough that the distributions of the three $m$-families of
modes become clearly different ($\Omega_{c}\sim 100\mu$Hz for model M1). If
one is able to measure three different values of the period spacing in an observed
spectrum, one can identify the values of the azimuthal order. Moreover,
based on our study of synthetic spectra, we find that the differences between
the values of period spacings increase with $\Omega_{c}$, and therefore can
constitute a guideline toward a constraint on the rotation profile.
## 5 Discussions and conclusions
Figure 5: For a central rotation frequency of 140 $\rm\mu$Hz, top: pulsation
modes’ kinetic energy with respect to their frequency. Bottom: period spacing
with respect to the pulsation frequency.
Some observed spectra of red giant stars show structures with nearly symmetric
spacings around axisymmetric modes. For this kind of spectrum, we found that
one can determine whether these apparent multiplets correspond to the original
splittings by plotting the apparent splittings as a function of the
axisymmetric mode frequencies. If the resulting curve follows the ‘V’
pattern, then it is reasonable to assume that the triplets are correctly
selected, and quantitative information on rotation can be derived from the
observed splittings. This corresponds to a low rotation regime when the
splitting is smaller than the frequency spacing between two consecutive modes
of same $m$, i.e. for ${\rm\Omega_{c}/(2\pi)/(\Delta P/P^{2})}\lesssim 2$.
If the apparent splittings do not follow the ‘V’ pattern, the triplets
overlap. The star is therefore located in the regime of moderate rotation. The
frequency differences between the original triplet members are large enough
that the trapping properties differ from one member to another. However, if it
is still possible to correctly select the original triplets (as done in Mosser
et al., 2012a), the first-order approach can be used as a first guess of
the core rotation rate. Non-perturbative calculations are nevertheless
required for a precise determination, particularly because of the avoided
crossings that p-m modes undergo. This rotation regime corresponds to values
${\rm\Omega_{c}/(2\pi)/(\Delta P/P^{2})}$ between 2 and 5.
Finally, some observed spectra of red giants do not show any regular or close
to regular structures. We expect these spectra to correspond to rapid
rotators. In this case, modes of different azimuthal orders have a very
different nature. The concept of rotational splitting of modes with the same
radial distribution and different $m$ is no longer relevant, and one should
consider separately the sub-spectra associated with each value of $m$. Provided that
the rotation is high enough to give rise to clearly different distributions
with respect to $m$, the differences between the period spacings associated
with each $m$ allow one to identify the azimuthal order, and thereby offer the
promising opportunity of deriving a proper seismic diagnostic. That
corresponds to the very rapid rotation case, i.e. for
${\rm\Omega_{c}/(2\pi)/(\Delta P/P^{2})\gtrsim 20}$. An intermediate case
remains, where the rotational splitting is no longer a relevant seismic
diagnostic, but where the rotation is too slow to allow one to distinguish three
different period spacings associated with the $m$-families of modes (for
${\rm\Omega_{c}/(2\pi)/(\Delta P/P^{2})}$ between 5 and 20). Establishing a
diagnostic in this regime constitutes an important issue which needs to be
addressed in a forthcoming paper.
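As a rough guide to where a given star falls among these regimes, the sketch below (illustrative Python; the helper function and the example numbers are our own assumptions, only the threshold values 2, 5 and 20 come from the text) evaluates the ratio ${\rm\Omega_{c}/(2\pi)/(\Delta P/P^{2})}$:
```python
# Illustrative regime classifier based on the dimensionless ratio
# Omega_c/(2*pi) / (DeltaP / P^2); the thresholds ~2, ~5 and ~20 are those quoted
# in the text, everything else is a made-up example.
def rotation_regime(nu_core_hz, delta_p_s, period_s):
    """nu_core_hz: core rotation frequency Omega_c/(2*pi) in Hz;
    delta_p_s: period spacing in s; period_s: typical mode period in s."""
    x = nu_core_hz / (delta_p_s / period_s**2)
    if x <= 2:
        return x, "slow: splittings follow the 'V' pattern, first-order approach valid"
    elif x <= 5:
        return x, "moderate: apparent triplets overlap, first-order gives only a first guess"
    elif x < 20:
        return x, "intermediate: no established diagnostic yet"
    return x, "rapid: use the period spacings of the separate m-families"

# hypothetical numbers: 700 nHz core rotation, 60 s period spacing, 1 h mode period
print(rotation_regime(700e-9, 60.0, 3600.0))
```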
It may seem surprising that, for $\Omega_{c}/(2\pi)/(\Delta P/P^{2})<20$, the
perturbative approach gives inaccurate results while
non-perturbative models in which the spectral expansions are limited to one term
(plus the toroidal component) give accurate enough results. This can be
understood with the help of the following relation:
$\Omega(r)/2\pi\sim\Delta\nu\ll\nu_{max}$. In the frequency range of interest
here, the rotation frequency remains significantly lower than $\nu_{max}$
throughout the whole star, and thus lower than the pulsation frequencies.
Hence, the Coriolis terms are smaller than other terms in the equation of
motion and the coupling of $\ell=1$ modes with $\ell=3,5,...$ components is
small. However the rotational splitting is not small compared to the large
separation $\Delta\nu$. This implies that the trapping of each member of a
same triplet can be very different, as illustrated in Fig. 3. Using the
variational principle to model rotational splittings is thus not justified
because the eigenfunctions are too different. The only way to properly model
the effect of rotation on oscillations is to solve separately the differential
equations associated to each member of the multiplet.
To sum up, the major impact of moderate rotation on red giants’ spectra is the
modification of the trapping that depends on $m$. Let us emphasize that here the
cavities themselves are not considered to be modified by rotation; rather, it is the way
the modes probe them which differs from one member of a multiplet to another. In
such a case, the first-order perturbative approach gives inaccurate results
and the $m$-families of modes carry different information on the stellar
interior. Only methods which compute these $m$-sub-spectra independently, i.e.
non-perturbative methods, are appropriate for studying red giants with moderately
to rapidly rotating cores.
###### Acknowledgements.
RMO thanks Benoît Mosser for fruitful discussions. The authors thank the
referee for his comments that helped to improve the manuscript.
## References
* Baglin et al. (2006) Baglin, A., Auvergne, M., Barge, P., et al. 2006, in ESA Special Publication, Vol. 1306, ESA Special Publication, ed. M. Fridlund, A. Baglin, J. Lochard, & L. Conroy, 33
* Beck et al. (2012) Beck, P. G., Montalban, J., Kallinger, T., et al. 2012, Nature, 481, 55
* Borucki et al. (2010) Borucki, W. J., Koch, D., Basri, G., et al. 2010, Science, 327, 977
* Cowling (1941) Cowling, T. G. 1941, MNRAS, 101, 367
* Cowling & Newing (1949) Cowling, T. G. & Newing, R. A. 1949, ApJ, 109, 149
* Deheuvels et al. (2012) Deheuvels, S., García, R. A., Chaplin, W. J., et al. 2012, ApJ, 756, 19
* Dupret et al. (2009) Dupret, M.-A., Belkacem, K., Samadi, R., et al. 2009, A&A, 506, 57
* Dziembowski et al. (2001) Dziembowski, W. A., Gough, D. O., Houdek, G., & Sienkiewicz, R. 2001, MNRAS, 328, 601
* Eggenberger et al. (2012) Eggenberger, P., Montalbán, J., & Miglio, A. 2012, A&A, 544, L4
* Goupil (2009) Goupil, M. J. 2009, in Lecture Notes in Physics, Berlin Springer Verlag, Vol. 765, The Rotation of Sun and Stars, 45–99
* Goupil et al. (2013) Goupil, M. J., Mosser, B., Marques, J. P., et al. 2013, A&A, 549, A75
* Kjeldsen & Bedding (1995) Kjeldsen, H. & Bedding, T. R. 1995, A&A, 293, 87
* Ledoux (1949) Ledoux, P. 1949, Memoires of the Societe Royale des Sciences de Liege, 9, 3
* Lynden-Bell & Ostriker (1967) Lynden-Bell, D. & Ostriker, J. P. 1967, MNRAS, 136, 293
* Marques et al. (2013) Marques, J. P., Goupil, M. J., Lebreton, Y., et al. 2013, A&A, 549, A74
* Mosser et al. (2012a) Mosser, B., Goupil, M. J., Belkacem, K., et al. 2012a, A&A, 548, A10
* Mosser et al. (2012b) Mosser, B., Goupil, M. J., Belkacem, K., et al. 2012b, A&A, 540, A143
* Ouazzani et al. (2012) Ouazzani, R.-M., Dupret, M.-A., & Reese, D. R. 2012, A&A, 547, A75
* Pinsonneault et al. (1989) Pinsonneault, M. H., Kawaler, S. D., Sofia, S., & Demarque, P. 1989, ApJ, 338, 424
* Talon & Charbonnel (2008) Talon, S. & Charbonnel, C. 2008, A&A, 482, 597
* Unno et al. (1989) Unno, W., Osaki, Y., Ando, H., Saio, H., & Shibahashi, H. 1989, Nonradial oscillations of stars
* Zahn (1992) Zahn, J. 1992, A&A, 265, 115
* Zahn et al. (1997) Zahn, J.-P., Talon, S., & Matias, J. 1997, A&A, 322, 320
|
arxiv-papers
| 2013-03-01T13:26:34 |
2024-09-04T02:49:42.282953
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "R-M. Ouazzani, M.J. Goupil, M-A. Dupret, J.P. Marques",
"submitter": "Rhita-Maria Ouazzani",
"url": "https://arxiv.org/abs/1303.0162"
}
|
1303.0213
|
11institutetext: School of Computing Science, Newcastle University
# The Semantic Web takes Wing: Programming Ontologies with Tawny-OWL
Phillip Lord
###### Abstract
The Tawny-OWL library provides a fully-programmatic environment for ontology
building; it enables the use of a rich set of tools for ontology development,
by recasting development as a form of programming. It is built in Clojure – a
modern Lisp dialect – and is backed by the OWL API. Used simply, it has a
similar syntax to OWL Manchester syntax, but it provides arbitrary
extensibility and abstraction. It builds on existing facilities for Clojure,
which provides a rich and modern programming tool chain for versioning,
distributed development, build, testing and continuous integration. In this
paper, we describe the library, this environment, and its potential
implications for the ontology development process.
## 1 Introduction
Ontology building remains a difficult and demanding task. Partly this is
intrinsic, but partly stems from the tooling. For example, while ontology
editors like Protégé [1] do allow manual ontology development, they are not
ideal for automation or template-driven development; for these reasons
languages such as OPPL[2] have been developed; these allow a slightly higher
level of abstraction over the base OWL axiomatisation. However, they involve a
move away from OWL syntax, which in turn requires integration into whichever
environment the developers are using. There has also been significant interest
in collaborative development of ontologies, either using collaborative
development tools such as Web-Protege[3], or through copy-modify-merge
versioning[4].
In this work, we (plurals are used throughout, and do not indicate multiple
authorship) take an alternative approach. Instead of developing tools for
ontology development, many of which are similar or follow on from software
development tools, we attempt to recast ontology development as a software
engineering problem, and then just use the standard tools that exist for
software engineering. We have achieved this through development of a library,
named Tawny-OWL, which at its simplest operates as a domain specific language
for OWL, while still retaining the full capabilities of a modern programming
language with all this entails. We demonstrate the application of this library
to a standard exemplar - namely the Pizza Ontology[5], as well as several
other scenarios. Finally, we consider the implications of this approach for
enabling collaborative and more agile forms of ontology development.
## 2 Requirements
Interaction between OWL and a programming API is not a new idea. For example,
OWL2Perl[6] allows generation of Perl classes from an OWL Ontology, while the
OWL API allows OWL ontology development in Java[7]. The OWL API, however, is
rather unwieldy for direct ontology development; for example, it has a complex
type hierarchy, indirect instantiation of objects through factories, and a set
of change objects following a command design pattern; while these support one
of its original intended use case – building a GUI – they would make direct
ontology development cumbersome. One response to this is Brain[8, 9], which is
a much lighter weight facade over the OWL API also implemented in Java. Brain
is, effectively, type-less as expressions are generated using Strings; the API
distinguishes between OWL class creation (addClass) and retrieval (getClass),
throwing exceptions to indicate an illegal state. While Brain is useful, it is
not clear how an ontology should be structured in Java’s object paradigm, and
it suffers the major drawback of Java – an elongated compile-test-debug cycle,
something likely to be problematic for interactive development as the ontology
increases in size.
For programmatic ontology development, we wanted a much more interactive and
dynamic environment; something equivalent to the R environment for statistics,
where the ontology could be explored, extended and reworked on-the-fly,
without restarting. For this reason we choose to build in Clojure; this
language is a modern Lisp derivative with many attractive features: persistent
data structures; specialised mechanisms for storing state. It suffers somewhat
from being built on the Java Virtual Machine (JVM) –in particular this gives
it a rather slow start-up time – however, in this case, it was a key reason
for its use. Interoperability with the JVM is integrated deeply into Clojure
which makes building on top of the OWL API both possible and convenient. Like
all lisps, Clojure has three other advantages: first, it is untyped which, in
common with Brain, in this context, we consider to be an advantage (we do not
argue that type systems are bad, just that they are less appropriate in this
environment); second, it is highly dynamic – almost any aspect of the language
can be redefined at any time – and it has a full-featured read-eval-print-loop
(REPL); finally, it has very little syntax, so libraries can manipulate the
look of the language very easily. Consider, for example, a simple class
definition as shown in Listing 1, taken from a pizza ontology available at
https://github.com/phillord/tawny-pizza. The syntax has been designed after
Manchester syntax[10].
(defclass Pizza
:label "Pizza"
:comment
"An␣over-baked␣flat␣bread␣with␣toppings,␣originating␣from␣Italy."
)
Listing 1: A basic class definition
A more complex definition shows the generation of restrictions and anonymous
classes.
(defclass CheesyPizza
:equivalent
(owland Pizza
(owlsome hasTopping CheeseTopping)))
Listing 2: A Cheesy Pizza
These definitions bind a new symbol (Pizza and CheesyPizza) to an OWL API Java
object. These symbols resolve as a normal Var does in Clojure. Strictly, this
binding is not necessary (and can be avoided if the user wishes), however this
provides the same semantics as Brain’s addClass and getClass – classes,
properties, etc must be created before use; this is a valuable feature
protecting against typing errors[11].
### 2.1 Lisp Terminology
Here we give a brief introduction to Clojure and its terminology. Like all
lisps, it has a regular syntax consisting of parenthesis delimited (lists),
defining an expression. The first element is usually a function, giving lisps
a prefix notation. Elements can be literals, such as strings e.g. "Pizza",
symbols e.g. defclass or keywords e.g. :equivalent. Symbols resolve to their
values, keywords resolve to themselves, and literals are, well, literal.
Unlike many languages, these constructs are directly manipulable in the
language itself which, combined with macros, enables extension of the language.
## 3 A Rich Development Environment
There are a dizzying array of ontology development tools available[12].
Probably the most popular is Protégé; while it provides a very rich
environment for viewing and interacting with an ontology, it lacks many things
that are present in most IDEs. For instance, it lacks support for version
control or adding to ChangeLogs; it is not possible to edit documentation
along side the ontology; nor to edit code in other languages when, for
instance, driving a build process, or using an ontology in an application.
We have previously attempted to work around this problem by providing support
for Manchester syntax – OMN – within Emacs through omn-mode[13]; while this
provides a richer general-purpose environment, the ontology environment is
comparatively poor. In particular, only syntactic completion is available,
there is no support for documentation look-up, beyond file navigation.
Finally, we used Protégé (and the OWL API) to check syntax, which required a
complete re-parse of the file, and with relatively poor feedback from Protégé
when errors occurred333This is not a criticism of the Protégé interface; it
was not designed to operate on hand-edited files.
With tawny, using a general purpose programming language, a richer development
environment comes nearly for free. In this paper, we describe the use within
Emacs; however, support for Clojure is also available within Eclipse,
IntelliJ, Netbeans and other environments[14]. Compared with direct editing of
OMN files, this provides immediate advantages. The use of paren delimiters
makes indentation straight-forward, well-defined, and well-supported; advanced
tools like paredit ensure that expressions are always balanced. Clojure
provides a REPL, and interaction within this allows more semantic completion
of symbols even when they are not syntactically present in the buffer (we
follow Emacs terminology here – a buffer is a file being edited), which is
common when using levels of abstraction (Section 4) or external OWL files
(Section 8). Syntax checking is easy, and can be performed on a buffer, a marked
region or specific expression. New entities can be added or removed from the
ontology on-the-fly without reloading the entire ontology, enabling
progressive development. We have also provided support for documentation look-
up of OWL entities; this is hooked into Clojure’s native documentation
facility, so should function within all development environments. We do not
currently provide a rich environment for browsing ontologies, except at the
code level; however, Protégé works well here, reloading OWL files when they
are changed underneath it. Similarly, omn-mode can be used to view individual
generated OMN files.
## 4 Supporting Higher Levels of Abstraction
Most ontologies include a certain amount of “boilerplate” code, where many
classes follow a similar pattern. Tools such as OPPL were built specifically
to address this issue; with tawny, the use of a full programming language
makes levels of abstraction above that of OWL straightforward. We
have used this in many areas of Tawny; at its simplest, by providing
convenience macros. For example, it is common-place to define many subclasses
for a single superclass; using OMN each subclass must describe its superclass.
Within tawny, a dynamically-scoped block can be used as shown in Listing 3. As
shown here, disjoint axioms can also be added[15]; and, not used here,
covering axioms[16]. The equivalent OMN generated by these expressions is also
shown in Listing 4.
(as-disjoint-subclasses
PizzaBase
(defclass ThinAndCrispyBase
:annotation (label "BaseFinaEQuebradica" "pt"))
(defclass DeepPanBase
:annotation (label "BaseEspessa" "pt")))
Listing 3: Subclass Specification
Class: piz:ThinAndCrispyBase
Annotations:
rdfs:label "BaseFinaEQuebradica"@pt
SubClassOf:
piz:PizzaBase
DisjointWith:
piz:DeepPanBase
Class: piz:DeepPanBase
Annotations:
rdfs:label "BaseEspessa"@pt,
SubClassOf:
piz:PizzaBase
DisjointWith:
piz:ThinAndCrispyBase
Listing 4: Subclasses in OMN
It is also possible to add suffixes or prefixes to all classes created within
a lexical scope. For example, we can create classes ending in Topping as shown
in Listing 5. While similar functionality could be provided with a GUI, this
has the significant advantage that the developer's intent remains present in
the source, so subsequent additions of new toppings are more likely to preserve
the naming scheme.
(with-suffix Topping
(defclass GoatsCheese)
(defclass Gorgonzola)
(defclass Mozzarella)
(defclass Parmesan))
Listing 5: Adding Suffixes
Tawny also includes initial support for ontology design patterns; in
particular, we have added explicit support for the value partition[17]. This
generates classes, disjoints and properties necessary to fulfil a pattern, but
is represented in Tawny succinctly (Listing 6).
(p/value-partition
Spiciness
[Mild
Medium
Hot])
Listing 6: A Value Partition
While some abstractions are generally useful, an important advantage of a
fully-programmatic language for OWL is that abstractions can be added to any
ontology, including those which are likely to be useful only within a single
ontology. These can be defined as functions or macros in the same file as their
use. For example, within the pizza ontology, Listing 7 generates two pizzas –
in each case the pizza class comes first, followed by constituent parts; a
closure axiom is added to each pizza. As well as being somewhat more concise
than the equivalent OMN, this approach also has the significant advantage that
it is possible to change the axiomatisation for all the named pizzas by
altering a single function; this is likely to increase the consistency and
maintainability of ontologies.
(generate-named-pizza
[MargheritaPizza MozzarellaTopping TomatoTopping]
[CajunPizza MozzarellaTopping OnionTopping PeperonataTopping
PrawnsTopping TobascoPepperSauce TomatoTopping])
Listing 7: Generating Named Pizzas
## 5 Separating Concerns for Different Developer Groups
One common requirement in ontology development is a separation of concerns;
different contributors to the ontology may need different editing
environments, as for instance with RightField or Populous[18]. Tawny enables
this approach also; here, we describe how this enables internationalisation.
Originally, the pizza ontology had identifiers in English and Portuguese but,
ironically, not Italian. While it would be possible to have a translator
operate directly on a tawny source file, this is not ideal as they would need
to embed their translations within OWL entity definitions as shown in
Listing 3; this is likely to be particularly troublesome if machine-assisted
translation is required, due to the non-standard format. We have, therefore,
added support with the polyglot library. Labels are stored in a Java
properties file (Listing 8) and are loaded using a single Lisp form (Listing
9). Tawny will generate a skeleton resources file, with no translations, on
demand, and reports missing labels to the REPL on loading.
AnchoviesTopping=Acciughe Ingredienti
ArtichokeTopping=Carciofi Ingredienti
AsparagusTopping=Asparagi Ingredienti
Listing 8: Italian Resources
(tawny.polyglot/polyglot-load-label
"pizza/pizzalabel_it.properties" "it")
Listing 9: Loading Multi-Lingual Labels
Currently, only loading labels is supported in this way, but extending this to
comments or other forms of annotation is possible. While, in this case, we are
loading extra-logical aspects of the ontology from file, it would also be
possible to load logical axioms; for instance, named pizzas (Section 4) could
be loaded from text file, spreadsheet or database.
## 6 Collaborative and Distributed Development
Collaborative development is not a new problem; many software engineering
projects involve many developers, geographically separated, in different time
zones, with teams changing over time. Tools for enabling this form of
collaboration are well developed and well supported. Some of these tools are
also available for ontology development; for instance, Web-Protégé enables
online collaborative editing. However, use of this tool requires installation
of a bespoke Tomcat-based server, and it does not yet support offline, concurrent
modification[3].
Alternatively, the ContentCVS system does support offline concurrent
modification. It uses the notion of structural equivalence for comparison and
resolution of conflicts[4]; the authors argue that an ontology is a set of
axioms. However, as the name suggests, their versioning system mirrors the
capabilities of CVS – a client-server based system, which is now considered
archaic.
For tawny, the notion of structural equivalence is not appropriate;
critically, it assumes that an ontology is a _set_ of axioms. This is not true
with tawny, for two reasons: first, tawny requires definition of classes
before use, so source code cannot be arbitrarily re-ordered; secondly, even
where this is not the case, only the ontology axioms are a set. Programmer
intent is often represented through non-axiomised sections of the code –
whitespace, indentation and even comments which may drive a “literate”
development approach. A definition of a difference based purely on
axiomatisation cannot account for these differences; the use of a line-
oriented syntactic diff will.
We argue here that by provision of an attractive and well-supported syntax, we
do not need to provide specific collaborative tooling. Tawny itself has been
built using distributed versioning systems (first mercurial and then git).
These are already advanced systems supporting multiple workflows including
tiered development with authorisation, branching, cherry-picking and so on.
While ontology-specific tooling has some advantages, it is unlikely to
replicate the functionality offered by these systems, aside from issues of
developer familiarity.
Later, we also describe support for testing, which can also ease the
difficulty of collaborative working (Section 9).
## 7 Enabling Modularity
Tawny provides explicit support for name spacing and does this by building on
Clojure’s namespace support. It is possible to build a set of ontologies
spread across a number of different files. Normally, each file contains a
single namespace; tawny mirrors this, with each namespace containing a single
ontology, with a defined IRI.
OWL itself does not provide a distribution mechanism for ontologies; the IRI
of an ontology does not need to resolve. In practice, however, the IRI often
serves as a distribution mechanism: by default Protégé will check for resolution if other
mechanisms fail, and OBO ontologies, for example, are all delivered from their
IRIs.
In contrast, Tawny builds on the Clojure environment; most projects are built
using the Leiningen tool which, in turn, uses the repository and dependency
management from Maven. When building the Pizza ontology in Tawny, the build
tool will fetch Tawny itself, the OWL API and HermiT, and their dependencies.
Ontological dependencies can be fetched likewise. Maven builds come with a
defined semantics for versioning, including release and snapshot
differentiation. A key advantage of this system is that multiple versions of a
single ontology can be supported, with different dependency graphs.
## 8 Coping With Semantics Free Identifiers
Importing one ontology from another is straight-forward in tawny. However, not
all ontologies are developed using tawny; we need to be able to interact with
external ontologies only accessible through an OWL file. Tawny provides
facilities for this use-case: the library reads the OWL file, creates symbols
for all entities (it is possible to choose a subset), then associates the
relevant Java object with each symbol. This approach is reasonably scalable;
tawny can import the Gene Ontology within a minute on a desktop machine.
Clojure is a highly-dynamic language and allows this form of programmatic
creation of variables as a first-class part of the language; so an ontology
read in this way functions in every sense like a tawny native ontology.
Ontology classes can be queried for their documentation, auto-completion works
and so forth.
However, there is a significant problem with this import mechanism. Tawny must
create a symbol for each OWL entity in the source ontology. By default, tawny
uses the IRI fragment for this purpose; while Clojure symbols have a
restricted character set which is not the same as that of the IRI fragment, in
practice this works well. However, this is unusable for ontologies built
according to the OBO ontology standard, which uses semantics-free, numeric
identifiers such as OBI_0000107. While this is a valid Clojure symbol, it is
unreadable for a developer. This issue also causes significant difficulties
for ontology development in any syntax; OMN is relatively human-readable but
ceases to be so when all identifiers become numeric. We have previously
suggested a number of solutions to this problem either through the use of
comments or specialised denormalisations[19], or through the addition of an
Alias directive providing a mapping between numeric and readable
identifier[20]. However, all of these require changes to the specification and
tooling updates, potentially in several syntaxes.
For tawny, we have worked around this problem by enabling an arbitrary mapping
between the OWL entity and symbol name [21]. For OBO ontologies, a syntactic
transformation of the rdfs:label works well. Thus, OBI_0000107 can be referred
to as provides_service_consumer_with, while BFO_0000051 becomes the rather
more prosaic has_part.
While this solves the usability problem, it causes another issue for ontology
evolution; the label is open to change, independently of any changes in
semantics; unfortunately, any dependent ontology built with tawny will break,
as the relevant symbol will no longer exist. This problem does not exist for
GUI editors such as Protégé because, ironically, they are not WYSIWYG – the
ontology stores an IRI, while the user sees the label; changes to labels
percolate when reloading the dependent ontology. Tawny provides a solution to
this; it is possible to memorise mappings between symbols and IRIs at one
point in time[22]. If the dependency changes its label, while keeping the same
IRI, Tawny will recognise this situation, and generate a deprecated symbol;
dependent ontologies will still work, but will signal warnings stating that a
label has changed and suggesting appropriate updates. Currently these must be
performed manually, although this could be automated.
## 9 Enabling Unit Testing and Continuous Integration
Unit testing is a key addition to the software development process which has
enabled more agile development. Adapting this process for ontology development
has previously been suggested[23], and implemented as a plugin to Protégé
[24]. To add this capability to tawny, we have integrated reasoning; at the
time of writing, only ELK[25] is available as a maven resource in the Maven
Central repository, therefore we have developed a secondary maven build for
HermiT which allows use of this reasoner also[26] (available at
http://homepages.cs.ncl.ac.uk/phillip.lord/maven/, or on GitHub), so both these
reasoners are available for use; others can be added trivially as they are
mavenised. A number of test frameworks exist in Clojure; here we use
clojure.test. As shown in Listing 10, we check that various inferences have
occurred (or not as appropriate), using the isuperclass? predicate. We have
also added support for “probe” classes. In our second test, we use the with-
probe-entities macro; this adds a subclass of VegetarianPizza and CajunPizza –
as the latter contains meat, this should result in an incoherent ontology if
both definitions are correct; probe entities are automatically removed by the
macro, returning the ontology to its previous state, ensuring independence of
tests.
(deftest CheesyShort
(is (r/isuperclass? p/FourCheesePizza p/CheesyPizza))
(is (r/isuperclass? p/MargheritaPizza p/CheesyPizza))
(is
(not (r/isuperclass? p/MargheritaPizza p/FourCheesePizza))))
(deftest VegetarianPizza
(is
(r/isuperclass? p/MargheritaPizza p/VegetarianPizza))
(is
(not
(o/with-probe-entities
[c (o/owlclass "probe"
:subclass p/VegetarianPizza p/CajunPizza)]
(r/coherent?)))))
Listing 10: Unit Testing a Pizza Ontology
The use of Unit testing in this way has implications beyond simple ontology
development; it also allows a richer form of continuous integration where
dependent ontologies can be developed by independent developers, but
continuously checked for breaking changes. The tawny pizza ontology, for
example, is currently being tested using Travis (http://travis-ci.org).
Unlike other ontology CI systems[27], this requires no installation and
integrates directly with the DVCS in use. It is also useful for integration
with software that operates on the ontology; for example, both our version of
Hermit, the OWL API and tawny-owl are built and tested using this tool.
## 10 Discussion
In this paper, we have described Tawny, a library which enables the user to
develop ontologies, using the tooling and environments that have long been
available to programmers. Although they both involve producing artifacts with
strong computational properties ontology development and software engineering
have long been disjoint. This has significant negative impacts; there are far
more programmers than knowledge engineers, and as a result the tooling that
they use is far better developed. Tawny seeks to address this, not by
providing richer tools for ontology development, but by recasting ontology
development as a form of programming.
By allowing knowledge engineers to use any level of abstraction that they
choose, tawny can also improve the current knowledge engineering process
significantly. It can help to remove duplication, for example, in class names.
It can clearly delineate disjoint classes, protecting against future additions;
this helps to address a common ontological error[28]. It is also possible to
model directly using common ontology design patterns generating many axioms in
a succinct syntax. Bespoke templates can be built for a specific ontology;
this mirrors functionality of tools like OPPL[2], but uses the power of a full
programming language and environment. Trivially, for example, tawny can log
its activity and comes with debugger support.
Of course, direct use of a programmatic library like tawny is not suitable for
all users; however, even for these users a library like tawny could be useful.
It is straight-forward to integrate ontologies developed directly with tawny
as a DSL with knowledge stored in other formalisms or locations. In this
paper, we described loading multi-lingual labels from properties files,
isolating the translator from the ontology, and interacting with OWL files
generated by another tool. It would also be possible to load axioms from a
database or spreadsheet, using existing JVM libraries.
While with tawny, we have provided a programmatic alternative to many
facilities that exist in other tools, we also seek to provide tooling for a
more agile and reactive form of ontology development. Current waterfall
methodologies, exemplified by BFO-style realism, lack agility, failing to meet
the requirement for regular releases to address short-comings, as has been
seen with both BFO 1.1[29] and BFO 2.0[30]. Likewise, the OBO foundry places
great emphasis on a review process which is, unfortunately, backlogged[31] –
in short, as with waterfall software methodologies, the centralised aspects of
this development model appear to scale poorly.
Tawny uses many ready-made and well tested software engineering facilities:
amenability to modern DVCS, a versioning and release semantics, a test
framework and continuous integration. The provision of a test environment is
particularly important; while ontology developers may benefit from testing
their own ontologies, the ability to contribute tests to their ontological
dependencies is even more valuable. They can provide upstream developers
precise and executable descriptions of the facilities which they depend on;
this gives upstream developers more confidence that their changes will not
have unexpected consequences. When this does happen, downstream developers can
track against older versions of their dependencies, obviating the need for co-
ordination of updates; when they do decide to update, the re-factoring
necessary to cope with external changes will be supported by their own test
sets; finally, continuous integration will provide early warning if their own
changes impact others. In short, tawny provides the tools for a more pragmatic
and agile form of ontology development which is more suited to fulfilling the
changing and varied requirements found in the real world[32].
## References
* [1] Various: The protégé; ontology editor and knowledge acquisition system. http://protege.stanford.edu/ [Online. last-accessed: 2012-10-07 18:08:04]
* [2] Egana Aranguren, M., Stevens, R., Antezana, E.: Transforming the axiomisation of ontologies: The ontology pre-processor language. Nature Precedings (Dec 2009)
* [3] Tudorache, T., Nyulas, C., Noy, N.F., Musen, M.A.: WebProtégé: a collaborative ontology editor and knowledge acquisition tool for the web. Semantic Web
* [4] Jiminez Ruiz, E., Grau, B.C., Horrocks, I., Berlanga, R.: Supporting concurrent ontology development: Framework, algorithms and tool. Data & Knowledge Engineering 70(1) (Jan 2011) 146–164
* [5] Stevens, R.: Why the pizza ontology tutorial? http://robertdavidstevens.wordpress.com/2010/01/22/why-the-pizza-ontolo%gy-tutorial/ [Online. last-accessed: 2012-11-09 22:37:14] (2010)
* [6] Kawas, E., Wilkinson, M.D.: Owl2perl: creating perl modules from owl class definitions. Bioinformatics 26(18) (Sep 2010) 2357–2358
* [7] Bechhofer, S., Volz, R., Lord, P.: Cooking the semantic web with the OWL API. In: Internaional Semantic Web Conference. (2003) 659 – 675
* [8] Croset, S., Overington, J., Rebholz-Schuhman, D.: Brain: Biomedical knowledge manipulation. Bioinformatics (2013) Submitted.
* [9] loopasam: Brain. https://github.com/loopasam/Brain
* [10] Horridge, M., Patel-Schneider, P.: Owl 2 web ontology language manchester syntax. http://www.w3.org/TR/owl2-manchester-syntax/ (2012)
* [11] Lord, P.: Owl concepts as lisp atoms. http://www.russet.org.uk/blog/2254 [Online. last-accessed: 2012-10-25 01:36:03] (2012)
* [12] Bergman, M.: The sweet compendium of ontology building tools. http://www.mkbergman.com/862/the-sweet-compendium-of-ontology-building-%tools/ (2010)
* [13] Lord, P.: Ontology building with emacs. http://www.russet.org.uk/blog/2161 [Online. last-accessed: 2012-07-26 09:28:46] (2012)
* [14] Various: Getting started - clojure documentation - clojure development. http://dev.clojure.org/display/doc/Getting+Started [Online. last-accessed: 2013-01-29 08:36:13]
* [15] Lord, P.: Disjoints in clojure-owl. http://www.russet.org.uk/blog/2275 [Online. last-accessed: 2013-02-11 09:34:50] (2012)
* [16] Stevens, R.: Closing down the open world: Covering axioms and closure axioms. http://ontogenesis.knowledgeblog.org/1001 [Online. last-accessed: 2012-06-19 16:13:39] (2011)
* [17] Rector, A.: Representing specified values in owl: “value partitions” and “value sets”. W3C Working Group Note (2005)
* [18] Jupp, S., Horridge, M., Iannone, L., Klein, J., Owen, S., Schanstra, J., Wolstencroft, K., Stevens, R.: Populous: a tool for building owl ontologies from templates. BMC Bioinformatics 13(Suppl 1) (2011) S5
* [19] Lord, P.: Obo format and manchester syntax. http://www.russet.org.uk/blog/1470 [Online. last-accessed: 2012-06-19 16:32:49] (2009)
* [20] Lord, P.: Semantics-free ontologies. http://www.russet.org.uk/blog/2040 [Online. last-accessed: 2012-06-19 16:32:22] (2012)
* [21] Lord, P.: Clojure owl 0.2. http://www.russet.org.uk/blog/2303 [Online. last-accessed: 2012-12-03 16:28:51] (2012)
* [22] Lord, P.: Remembering the world as it used to be. http://www.russet.org.uk/blog/2316 [Online. last-accessed: 2013-01-11 23:00:11] (2013)
* [23] Vrandec̆ić, D., Gangemi, A.: Unit tests for ontologies. In: In OTM Workshops (2
* [24] Drummond, N.: Co-ode & downloads & the owl unit test framework. http://www.co-ode.org/downloads/owlunittest/ [Online. last-accessed: 2013-01-28 15:22:03]
* [25] Kazakov, Y., Krötzsch, M., Simancik, F.: Elk reasoner: Architecture and evaluation. In: Proceedings of the 1st International Workshop on OWL Reasoner Evaluation (ORE-2012). (2012)
* [26] Various: Hermit reasoner: Home. http://hermit-reasoner.com/
* [27] Mungall, C., Dietze, H., Carbon, S., Ireland, A., Bauer, S., Lewis, S.: Continuous integration of open biological ontology libraries. http://bio-ontologies.knowledgeblog.org/405 (2012)
* [28] Rector, A., Drummond, N., Horridge, M., Rogers, J., Knublauch, H., Stevens, R., Wang, H., Wroe, C.: OWL pizzas: Practical experience of teaching OWL-DL: common errors & common patterns. Engineering Knowledge in the Age of the Semantic Web (2004) 63–81
* [29] Various: New version of bfo 1.1 available. https://groups.google.com/d/topic/bfo-discuss/HQSnudUUM4E/discussion
* [30] Various: Proposal for an official bfo 1.2 release. https://groups.google.com/d/topic/bfo-discuss/iKBlfDPv5GM/discussion
* [31] OBO Foundry Outreach Working Group: New obo foundry tracker for feedback, requests, and other issues. http://sourceforge.net/mailarchive/message.php?msg_id=30391720
* [32] Lord, P., Stevens, R.: Adding a little reality to building ontologies for biology. PLoS One (2010)
|
arxiv-papers
| 2013-03-01T16:35:19 |
2024-09-04T02:49:42.291050
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Phillip Lord",
"submitter": "Phillip Lord Dr",
"url": "https://arxiv.org/abs/1303.0213"
}
|
1303.0217
|
# A stochastic diffusion process for the Dirichlet distribution
LA-UR-12-26980
Accepted in International Journal of Stochastic Analysis, March 1, 2013
J. Bakosi J.R. Ristorcelli
{jbakosi,jrrj}@lanl.gov
Los Alamos National Laboratory Los Alamos NM 87545 USA
###### Abstract
The method of potential solutions of Fokker-Planck equations is used to
develop a transport equation for the joint probability of $N$ coupled
stochastic variables with the Dirichlet distribution as its asymptotic
solution. To ensure a bounded sample space, a coupled nonlinear diffusion
process is required: the Wiener-processes in the equivalent system of
stochastic differential equations are multiplicative with coefficients
dependent on all the stochastic variables. Individual samples of a discrete
ensemble, obtained from the stochastic process, satisfy a unit-sum constraint
at all times. The process may be used to represent realizations of a
fluctuating ensemble of $N$ variables subject to a conservation principle.
Similar to the multivariate Wright-Fisher process, whose invariant is also
Dirichlet, the univariate case yields a process whose invariant is the beta
distribution. As a test of the results, Monte-Carlo simulations are used to
evolve numerical ensembles toward the invariant Dirichlet distribution.
###### keywords:
Fokker-Planck equation; Stochastic diffusion; Dirichlet distribution; Monte-
Carlo simulation
## 1 Objective
We develop a Fokker-Planck equation whose statistically stationary solution is
the Dirichlet distribution [1, 2, 3]. The system of stochastic differential
equations (SDE), equivalent to the Fokker-Planck equation, yields a Markov
process that allows a Monte-Carlo method to numerically evolve an ensemble of
fluctuating variables that satisfy a unit-sum requirement. A Monte Carlo
solution is used to verify that the invariant distribution is Dirichlet.
The Dirichlet distribution is a statistical representation of non-negative
variables subject to a unit-sum requirement. The properties of such variables
have been of interest in a variety of fields, including evolutionary theory
[4], Bayesian statistics [5], geology [6, 7], forensics [8], econometrics [9],
turbulent combustion [10], and population biology [11].
## 2 Preview of results
The Dirichlet distribution [1, 2, 3] for a set of scalars
$0\\!\leq\\!Y_{\alpha}$, $\alpha\\!=\\!1,\dots,N-1$,
$\sum_{\alpha=1}^{N-1}Y_{\alpha}\\!\leq\\!1$, is given by
$\mathscr{D}({\mbox{\boldmath$\mathbf{Y}$}},{\mbox{\boldmath$\mathbf{\omega}$}})=\frac{\Gamma\left(\sum_{\alpha=1}^{N}\omega_{\alpha}\right)}{\prod_{\alpha=1}^{N}\Gamma(\omega_{\alpha})}\prod_{\alpha=1}^{N}Y_{\alpha}^{\omega_{\alpha}-1},$
(1)
where $\omega_{\alpha}\\!>\\!0$ are parameters,
$Y_{N}\\!=\\!1-\sum_{\beta=1}^{N-1}Y_{\beta}$, and $\Gamma(\cdot)$ denotes the
gamma function. We derive the stochastic diffusion process, governing the
scalars, $Y_{\alpha}$,
$\mathrm{d}Y_{\alpha}(t)=\frac{b_{\alpha}}{2}\big{[}S_{\alpha}Y_{N}-(1-S_{\alpha})Y_{\alpha}\big{]}\mathrm{d}t+\sqrt{\kappa_{\alpha}Y_{\alpha}Y_{N}}\mathrm{d}W_{\alpha}(t),\qquad\alpha=1,\dots,N-1,$
(2)
where $\mathrm{d}W_{\alpha}(t)$ is an isotropic vector-valued Wiener process
[12], and $b_{\alpha}\\!>\\!0$, $\kappa_{\alpha}\\!>\\!0$, and
$0\\!<\\!S_{\alpha}\\!<\\!1$ are coefficients. We show that the statistically
stationary solution of Eq. (2) is the Dirichlet distribution, Eq. (1),
provided the SDE coefficients satisfy
$\frac{b_{1}}{\kappa_{1}}(1-S_{1})=\dots=\frac{b_{N-1}}{\kappa_{N-1}}(1-S_{N-1}).$
(3)
The restrictions imposed on the SDE coefficients, $b_{\alpha}$,
$\kappa_{\alpha}$, and $S_{\alpha}$, ensure reflection towards the interior of
the sample space, which is a generalized triangle or tetrahedron (more
precisely, a simplex) in $N\\!-\\!1$ dimensions. The restrictions together
with the specification of the $N^{\mathrm{th}}$ scalar as
$Y_{N}\\!=\\!1-\sum_{\beta=1}^{N-1}Y_{\beta}$ ensure
$\sum_{\alpha=1}^{N}Y_{\alpha}=1.$ (4)
Indeed, inspection of Eq. (2) shows that for example when $Y_{1}\\!=\\!0$, the
diffusion is zero and the drift is strictly positive, while if
$Y_{1}\\!=\\!1$, the diffusion is zero ($Y_{N}\\!=\\!0$) and the drift is
strictly negative.
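As an illustration of how the process can be exercised numerically, the following minimal sketch (an Euler-Maruyama Monte-Carlo experiment; the coefficient values, time step and the projection guards against discretisation overshoot are our own assumptions, not taken from the paper) evolves an ensemble under Eq. (2) and compares the sample means with the Dirichlet means $\omega_{\alpha}/\sum_{\beta}\omega_{\beta}$, the $\omega_{\alpha}$ being implied by the coefficient correspondence derived in Section 3.
```python
# Minimal sketch (not the paper's code): Euler-Maruyama integration of the coupled
# diffusion process, Eq. (2), for N = 3, with coefficients chosen to satisfy Eq. (3).
# The clipping/projection lines guard against time-discretisation overshoot only;
# the continuous process stays inside the simplex by construction.
import numpy as np

rng = np.random.default_rng(1)
b     = np.array([0.5, 0.5])      # b_alpha > 0
kappa = np.array([0.1, 0.125])    # kappa_alpha > 0
S     = np.array([0.6, 0.5])      # 0 < S_alpha < 1; here b/kappa*(1-S) = 2 for both, Eq. (3)
omega = np.append(b / kappa * S, b[0] / kappa[0] * (1.0 - S[0]))  # Dirichlet parameters
                                                                  # implied by Section 3: [3, 2, 2]
dt, nsteps, nsamples = 1.0e-3, 20000, 5000
Y = np.full((nsamples, 2), 1.0 / 3.0)                 # all samples start at the simplex centre
for _ in range(nsteps):
    YN = 1.0 - Y.sum(axis=1, keepdims=True)           # Y_N = 1 - Y_1 - Y_2
    drift = 0.5 * b * (S * YN - (1.0 - S) * Y)
    diffusion = np.sqrt(np.clip(kappa * Y * YN, 0.0, None))
    Y += drift * dt + diffusion * np.sqrt(dt) * rng.normal(size=Y.shape)
    Y = np.clip(Y, 0.0, None)                         # discrete-time guard only
    s = Y.sum(axis=1, keepdims=True)
    Y = np.where(s > 1.0, Y / s, Y)                   # crude projection back onto the simplex

print("sample means   :", Y.mean(axis=0))             # should approach omega_alpha / sum(omega)
print("Dirichlet means:", (omega / omega.sum())[:2])  # = [3/7, 2/7]
```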
## 3 Development of the diffusion process
The diffusion process (2) is developed by the method of potential solutions.
We start from the Itô diffusion process [12] for the stochastic vector,
$Y_{\alpha}$,
$\mathrm{d}Y_{\alpha}(t)=a_{\alpha}({\mbox{\boldmath$\mathbf{Y}$}})\mathrm{d}t+b_{\alpha\beta}({\mbox{\boldmath$\mathbf{Y}$}})\mathrm{d}W_{\beta}(t),\qquad\alpha,\beta=1,\dots,N-1,$
(5)
with drift, $a_{\alpha}({\mbox{\boldmath$\mathbf{Y}$}})$, diffusion,
$b_{\alpha\beta}({\mbox{\boldmath$\mathbf{Y}$}})$, and the isotropic vector-
valued Wiener process, $\mathrm{d}W_{\beta}(t)$, where summation is implied
for repeated indices. Using standard methods given in [12] the equivalent
Fokker-Planck equation governing the joint probability,
$\mathscr{F}({\mbox{\boldmath$\mathbf{Y}$}},t)$, derived from Eq. (5), is
$\frac{\partial\mathscr{F}}{\partial t}=-\frac{\partial}{\partial
Y_{\alpha}}\big{[}a_{\alpha}({\mbox{\boldmath$\mathbf{Y}$}})\mathscr{F}\big{]}+\frac{1}{2}\frac{\partial^{2}}{\partial
Y_{\alpha}\partial
Y_{\beta}}\big{[}B_{\alpha\beta}({\mbox{\boldmath$\mathbf{Y}$}})\mathscr{F}\big{]},$
(6)
with diffusion $B_{\alpha\beta}\\!=\\!b_{\alpha\gamma}b_{\gamma\beta}$. Since
the drift and diffusion coefficients are time-homogeneous,
$a_{\alpha}({\mbox{\boldmath$\mathbf{Y}$}},t)\\!=\\!a_{\alpha}({\mbox{\boldmath$\mathbf{Y}$}})$
and
$B_{\alpha\beta}({\mbox{\boldmath$\mathbf{Y}$}},t)\\!=\\!B_{\alpha\beta}({\mbox{\boldmath$\mathbf{Y}$}})$,
Eq. (5) is a statistically stationary process and the solution of Eq. (6)
converges to a stationary distribution, [12] Sec. 6.2.2. Our task is to
specify the functional forms of $a_{\alpha}({\mbox{\boldmath$\mathbf{Y}$}})$
and $b_{\alpha\beta}({\mbox{\boldmath$\mathbf{Y}$}})$ so that the stationary
solution of Eq. (6) is $\mathscr{D}({\mbox{\boldmath$\mathbf{Y}$}})$, defined
by Eq. (1).
A potential solution of Eq. (6) exists if
$\frac{\partial\ln\mathscr{F}}{\partial
Y_{\beta}}=B_{\alpha\beta}^{-1}\left(2a_{\alpha}-\frac{\partial
B_{\alpha\gamma}}{\partial
Y_{\gamma}}\right)\equiv-\frac{\partial\phi}{\partial
Y_{\beta}},\qquad\alpha,\beta,\gamma=1,\dots,N-1,$ (7)
is satisfied, [12] Sec. 6.2.2. Since the left hand side of Eq. (7) is a
gradient, the expression on the right must also be a gradient and can
therefore be obtained from a scalar potential denoted by
$\phi({\mbox{\boldmath$\mathbf{Y}$}})$. This puts a constraint on the possible
choices of $a_{\alpha}$ and $B_{\alpha\beta}$ and on the potential, as
$\phi,_{\alpha\beta}=\phi,_{\beta\alpha}$ must also be satisfied. The
potential solution is
$\mathscr{F}({\mbox{\boldmath$\mathbf{Y}$}})=\exp[-\phi({\mbox{\boldmath$\mathbf{Y}$}})].$
(8)
Now functional forms of $a_{\alpha}({\mbox{\boldmath$\mathbf{Y}$}})$ and
$B_{\alpha\beta}({\mbox{\boldmath$\mathbf{Y}$}})$ that satisfy Eq. (7) with
$\mathscr{F}({\mbox{\boldmath$\mathbf{Y}$}})\equiv\mathscr{D}({\mbox{\boldmath$\mathbf{Y}$}})$
are sought. The mathematical constraints on the specification of $a_{\alpha}$
and $B_{\alpha\beta}$ are as follows:
1. $B_{\alpha\beta}$ must be symmetric positive semi-definite, so that the square root of $B_{\alpha\beta}$ (e.g. its Cholesky decomposition, $b_{\alpha\beta}$) exists, as required by the correspondence of the SDE (5) and the Fokker-Planck equation (6), and so that Eq. (5) represents a diffusion.
2. $\det(B_{\alpha\beta})\neq 0$, required by the existence of the inverse in Eq. (7).
For a potential solution to exist Eq. (7) must be satisfied. With
$\mathscr{F}({\mbox{\boldmath$\mathbf{Y}$}})\equiv\mathscr{D}({\mbox{\boldmath$\mathbf{Y}$}})$
Eq. (8) shows that the scalar potential must be
$\phi({\mbox{\boldmath$\mathbf{Y}$}})=-\sum_{\alpha=1}^{N}(\omega_{\alpha}-1)\ln
Y_{\alpha}.$ (9)
It is straightforward to verify that the specifications
$\displaystyle a_{\alpha}({\mbox{\boldmath$\mathbf{Y}$}})=\frac{b_{\alpha}}{2}\big{[}S_{\alpha}Y_{N}-(1-S_{\alpha})Y_{\alpha}\big{]},$ (10)
$\displaystyle B_{\alpha\beta}({\mbox{\boldmath$\mathbf{Y}$}})=\begin{cases}\kappa_{\alpha}Y_{\alpha}Y_{N}&\quad\mathrm{for}\quad\alpha=\beta,\\ 0&\quad\mathrm{for}\quad\alpha\neq\beta,\end{cases}$ (13)
satisfy the above mathematical constraints, 1. and 2. Here
$b_{\alpha}\\!>\\!0$, $\kappa_{\alpha}\\!>\\!0$, $0\\!<\\!S_{\alpha}\\!<\\!1$,
and $Y_{N}\\!=\\!1-\sum_{\beta=1}^{N-1}Y_{\beta}$. Summation is not implied
for Eqs. (9–13).
Substituting Eqs. (9–13) into Eq. (7) yields a system with the same functions on both sides but with different coefficients; matching these coefficients gives the correspondence between the $N$ coefficients of the Dirichlet distribution, Eq. (1), and the Fokker-Planck equation (6) with Eqs. (10–13) as
$\displaystyle\omega_{\alpha}$
$\displaystyle=\frac{b_{\alpha}}{\kappa_{\alpha}}S_{\alpha},\qquad\alpha=1,\dots,N-1,$
(14) $\displaystyle\omega_{N}$
$\displaystyle=\frac{b_{1}}{\kappa_{1}}(1-S_{1})=\dots=\frac{b_{N-1}}{\kappa_{N-1}}(1-S_{N-1}).$
(15)
For example, for $N\\!=\\!3$ one has
${\mbox{\boldmath$\mathbf{Y}$}}=(Y_{1},Y_{2},Y_{3}=1-Y_{1}-Y_{2})$ and from
Eq. (9) the scalar potential is
$-\phi(Y_{1},Y_{2})=(\omega_{1}-1)\ln Y_{1}+(\omega_{2}-1)\ln
Y_{2}+(\omega_{3}-1)\ln(1-Y_{1}-Y_{2}).$ (16)
Eq. (7) then becomes the system
$\displaystyle\frac{\omega_{1}-1}{Y_{1}}-\frac{\omega_{3}-1}{Y_{3}}$
$\displaystyle=\left(\frac{b_{1}}{\kappa_{1}}S_{1}-1\right)\frac{1}{Y_{1}}-\left[\frac{b_{1}}{\kappa_{1}}(1-S_{1})-1\right]\frac{1}{Y_{3}},$
(17) $\displaystyle\frac{\omega_{2}-1}{Y_{2}}-\frac{\omega_{3}-1}{Y_{3}}$
$\displaystyle=\left(\frac{b_{2}}{\kappa_{2}}S_{2}-1\right)\frac{1}{Y_{2}}-\left[\frac{b_{2}}{\kappa_{2}}(1-S_{2})-1\right]\frac{1}{Y_{3}},$
(18)
which shows that by specifying the parameters, $\omega_{\alpha}$, of the
Dirichlet distribution as
$\displaystyle\omega_{1}$ $\displaystyle=\frac{b_{1}}{\kappa_{1}}S_{1},$ (19)
$\displaystyle\omega_{2}$ $\displaystyle=\frac{b_{2}}{\kappa_{2}}S_{2},$ (20)
$\displaystyle\omega_{3}$
$\displaystyle=\frac{b_{1}}{\kappa_{1}}(1-S_{1})=\frac{b_{2}}{\kappa_{2}}(1-S_{2}),$
(21)
the stationary solution of the Fokker-Planck equation (6) with drift (10) and
diffusion (13) is
$\mathscr{D}({\mbox{\boldmath$\mathbf{Y}$}},{\mbox{\boldmath$\mathbf{\omega}$}})$
for $N\\!=\\!3$. The above development generalizes to $N$ variables, yielding Eqs. (14–15). For $N\\!=\\!2$, with $Y_{1}\\!=\\!Y$ and $Y_{2}\\!=\\!1-Y$, the process reduces to one whose invariant is the beta distribution, the univariate specialization of $\mathscr{D}$; see [13].
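It is instructive to carry out this verification explicitly for $N\\!=\\!3$. The following is a minimal symbolic sketch (assuming SymPy; the script and its variable names are ours and only illustrate the check) that confirms the drift (10) and the diagonal diffusion (13) satisfy the potential condition (7) with the potential (16) once the parameters obey Eqs. (19–21).

```python
# Minimal symbolic sketch (assuming SymPy): check that the drift (10) and the
# diagonal diffusion (13) satisfy the potential condition (7) for N = 3 once
# omega_1, omega_2, omega_3 are chosen according to Eqs. (19)-(21).
import sympy as sp

Y1, Y2 = sp.symbols('Y1 Y2', positive=True)
b1, b2, k1, k2, S1, S2 = sp.symbols('b1 b2 kappa1 kappa2 S1 S2', positive=True)
w1, w2, w3 = sp.symbols('omega1 omega2 omega3', positive=True)
Y3 = 1 - Y1 - Y2
Y = [Y1, Y2]

# Drift, Eq. (10), and diagonal diffusion, Eq. (13)
a = sp.Matrix([b1/2*(S1*Y3 - (1 - S1)*Y1),
               b2/2*(S2*Y3 - (1 - S2)*Y2)])
B = sp.diag(k1*Y1*Y3, k2*Y2*Y3)

# Right-hand side of the potential condition, Eq. (7): B^{-1} (2a - div B)
rhs = B.inv() * sp.Matrix([2*a[i] - sum(sp.diff(B[i, j], Y[j]) for j in range(2))
                           for i in range(2)])

# Left-hand side: d(ln D)/dY = -d(phi)/dY, with the potential of Eq. (16)
phi = -((w1 - 1)*sp.log(Y1) + (w2 - 1)*sp.log(Y2) + (w3 - 1)*sp.log(Y3))
lhs = sp.Matrix([-sp.diff(phi, y) for y in Y])

# Substitute Eqs. (19)-(20), omega_3 = b1(1-S1)/kappa1, and then enforce the
# consistency requirement Eq. (21) by eliminating b2; both components vanish.
diff = (lhs - rhs).subs({w1: b1*S1/k1, w2: b2*S2/k2, w3: b1*(1 - S1)/k1})
diff = diff.subs(b2, b1*k2*(1 - S1)/(k1*(1 - S2)))
print(sp.simplify(diff))   # -> Matrix([[0], [0]])
```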
If Eqs. (14–15) hold, the stationary solution of the Fokker-Planck equation
(6) with drift (10) and diffusion (13) is the Dirichlet distribution, Eq. (1).
Note that Eqs. (10–13) are one possible way of specifying a drift and a
diffusion to arrive at a Dirichlet distribution; other functional forms may be
possible. The specifications in Eqs. (10–13) are a generalization of the
results for a univariate diffusion process, discussed in [13, 14], whose
invariant distribution is beta.
The shape of the Dirichlet distribution, Eq. (1), is determined by the $N$
coefficients, $\omega_{\alpha}$. Eqs. (14–15) show that in the stochastic
system, different combinations of $b_{\alpha}$, $S_{\alpha}$, and
$\kappa_{\alpha}$ may yield the same $\omega_{\alpha}$ and that not all of
$b_{\alpha}$, $S_{\alpha}$, and $\kappa_{\alpha}$ may be chosen independently
to keep the invariant Dirichlet.
## 4 Corroborating that the invariant distribution is Dirichlet
For any multivariate Fokker-Planck equation there is an equivalent system of
Itô diffusion processes, such as the pair of Eqs. (5–6) [12]. Therefore, a way
of computing the (discrete) numerical solution of Eq. (6) is to integrate Eq.
(5) in a Monte-Carlo fashion for an ensemble [15]. Using a Monte-Carlo
simulation we show that the statistically stationary solution of the Fokker-
Planck equation (6) with drift and diffusion (10–13) is a Dirichlet
distribution, Eq. (1).
The time-evolution of an ensemble of particles, each with $N=3$ variables
($Y_{1},Y_{2},Y_{3}$), is numerically computed by integrating the system of
equations (5), with drift and diffusion (10–13), for $N=3$ as
$\displaystyle\qquad\mathrm{d}Y^{(i)}_{1}=\frac{b_{1}}{2}\left[S_{1}Y^{(i)}_{3}-(1-S_{1})Y^{(i)}_{1}\right]\mathrm{d}t+\sqrt{\kappa_{1}Y^{(i)}_{1}Y^{(i)}_{3}}\mathrm{d}W^{(i)}_{1}$
(22)
$\displaystyle\qquad\mathrm{d}Y^{(i)}_{2}=\frac{b_{2}}{2}\left[S_{2}Y^{(i)}_{3}-(1-S_{2})Y^{(i)}_{2}\right]\mathrm{d}t+\sqrt{\kappa_{2}Y^{(i)}_{2}Y^{(i)}_{3}}\mathrm{d}W^{(i)}_{2}$
(23) $\displaystyle\enskip\qquad Y^{(i)}_{3}=1-Y^{(i)}_{1}-Y^{(i)}_{2}$ (24)
for each particle $i$. In Eqs. (22–23) $\mathrm{d}W_{1}$ and $\mathrm{d}W_{2}$
are independent Wiener processes, sampled from Gaussian streams of random
numbers with mean ${\langle{\mathrm{d}W_{\alpha}}\rangle}\\!=\\!0$ and
covariance
${\langle{\mathrm{d}W_{\alpha}\mathrm{d}W_{\beta}}\rangle}\\!=\\!\delta_{\alpha\beta}\mathrm{d}t$.
$400,\\!000$ particle-triplets, $(Y_{1},Y_{2},Y_{3})$, are generated with two
different initial distributions, displayed in the upper-left of Figures 1 and
2, a triple-delta and a box, respectively. Each member of both initial ensembles satisfies $\sum_{\alpha=1}^{3}Y_{\alpha}\\!=\\!1$. Eqs. (22–24) are
advanced in time with the Euler-Maruyama scheme [16] with time step $\Delta
t\\!=\\!0.05$. Table 1 shows the coefficients of the stochastic system
(22–24), the corresponding parameters of the final Dirichlet distribution, and
the first two moments at the initial times for the triple-delta initial
condition case. The final state of the ensembles is determined by the SDE coefficients, which are held constant in these exercises, are the same for both simulations, satisfy Eq. (21), and are also given in Table 1.
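A minimal sketch of this Monte-Carlo integration (in Python with NumPy; the coefficients, ensemble size and time step follow Table 1 and the text, while the initial condition below is an illustrative choice rather than the triple-delta or box used in the figures) is:

```python
# Monte-Carlo sketch (assuming NumPy): advance an ensemble of particle
# triplets (Y1, Y2, Y3) with the Euler-Maruyama scheme applied to Eqs. (22)-(24).
import numpy as np

rng = np.random.default_rng(0)
n_particles, dt, n_steps = 400_000, 0.05, 2800             # t_final = 140
b1, b2, S1, S2, k1, k2 = 1/10, 3/2, 5/8, 2/5, 1/80, 3/10   # Table 1; Eq. (21) holds

# Illustrative initial condition: an arbitrary ensemble inside the triangle
Y1 = rng.uniform(0.0, 1.0, n_particles)
Y2 = rng.uniform(0.0, 1.0, n_particles) * (1.0 - Y1)

for _ in range(n_steps):
    Y3 = 1.0 - Y1 - Y2                                     # Eq. (24)
    dW1 = rng.normal(0.0, np.sqrt(dt), n_particles)        # <dW> = 0, <dW^2> = dt
    dW2 = rng.normal(0.0, np.sqrt(dt), n_particles)
    # Eqs. (22)-(23); the clip only guards the sqrt against tiny negative
    # arguments produced by the finite time step
    Y1 = Y1 + 0.5*b1*(S1*Y3 - (1 - S1)*Y1)*dt + np.sqrt(np.clip(k1*Y1*Y3, 0, None))*dW1
    Y2 = Y2 + 0.5*b2*(S2*Y3 - (1 - S2)*Y2)*dt + np.sqrt(np.clip(k2*Y2*Y3, 0, None))*dW2

# In the stationary state the ensemble means approach the Dirichlet values
# 1/2, 1/5 and 3/10 implied by the coefficients (see Table 1).
Y3 = 1.0 - Y1 - Y2
print(Y1.mean(), Y2.mean(), Y3.mean())
```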
The time-evolutions of the joint probabilities are extracted from both
calculations and displayed at different times in Figures 1 and 2. At the end
of the simulations two distributions are plotted at the bottom-right of both
figures: the one extracted from the numerical ensemble and the Dirichlet
distribution determined analytically using the SDE coefficients; the two are in excellent agreement in both figures. The statistically stationary solution of the
developed stochastic system is the Dirichlet distribution.
For a more quantitative evaluation, the time evolution of the first two
moments,
$\mu_{\alpha}\\!=\\!{\langle{Y_{\alpha}}\rangle}\\!=\\!\int_{0}^{1}\int_{0}^{1}Y_{\alpha}\mathscr{F}(Y_{1},Y_{2})\mathrm{d}Y_{1}\mathrm{d}Y_{2}$,
and
${\langle{y_{\alpha}y_{\beta}}\rangle}\\!=\\!{\langle{(Y_{\alpha}\\!-\\!{\langle{Y_{\alpha}}\rangle})(Y_{\beta}\\!-\\!{\langle{Y_{\beta}}\rangle})}\rangle}$,
are also extracted from the numerical simulation with the triple-delta-peak
initial condition as ensemble averages and displayed in Figures 3 and 4. The
figures show that the statistics converge to the precise state given by the
Dirichlet distribution that is prescribed by the SDE coefficients, see Table
1.
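These reference values follow from the standard Dirichlet moment formulae [2]; a minimal sketch (plain Python, with the parameters $\omega=(5,2,3)$ implied by the SDE coefficients of Table 1 hard-coded for illustration) that reproduces the stationary statistics quoted in Table 1 is:

```python
# Sketch: first and second central moments of a Dirichlet distribution with
# parameters omega = (5, 2, 3), as implied by Table 1 via Eqs. (19)-(21).
omega = [5, 2, 3]
omega0 = sum(omega)                        # = 10

means = [w / omega0 for w in omega]        # <Y_a> = omega_a / omega0
cov = [[(omega0 * (i == j) * omega[i] - omega[i] * omega[j])
        / (omega0**2 * (omega0 + 1)) for j in range(3)] for i in range(3)]

print(means)                               # [0.5, 0.2, 0.3]
print(cov[0][0], cov[1][1], cov[2][2])     # 1/44, 4/275, 21/1100
print(cov[0][1], cov[0][2], cov[1][2])     # -1/110, -3/220, -3/550
```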
The solution approaches a Dirichlet distribution, with non-positive
covariances [2], in the statistically stationary limit, Figure 4(b). Note that
during the evolution of the process, $0\\!<\\!t\\!\lesssim\\!80$, the solution
is not necessarily Dirichlet, but the stochastic variables sum to one at all
times. The point ($Y_{1}$, $Y_{2}$), governed by Eqs. (22–23), can never leave
the $(N\\!-\\!1)$-dimensional (here $N\\!=\\!3$) convex polytope and by
definition $Y_{3}\\!=\\!1-Y_{1}-Y_{2}$. The rate at which the numerical
solution converges to a Dirichlet distribution is determined by the vectors
$b_{\alpha}$ and $\kappa_{\alpha}$.
The above numerical results confirm that starting from arbitrary realizable
ensembles the solution of the stochastic system converges to a Dirichlet
distribution in the statistically stationary state, specified by the SDE
coefficients.
## 5 Relation to other diffusion processes
It is useful to relate the Dirichlet diffusion process, Eq. (2), to other
multivariate stochastic diffusion processes with linear drift and quadratic
diffusion.
A close relative of Eq. (2) is the multivariate Wright-Fisher (WF) process
[11], used extensively in population and genetic biology,
$\mathrm{d}Y_{\alpha}(t)=\frac{1}{2}(\omega_{\alpha}-\omega
Y_{\alpha})\mathrm{d}t+\sum_{\beta=1}^{N-1}\sqrt{Y_{\alpha}(\delta_{\alpha\beta}-Y_{\beta})}\mathrm{d}W_{\alpha\beta}(t),\qquad\alpha=1,\dots,N-1,$
(25)
where $\delta_{\alpha\beta}$ is Kronecker’s delta,
$\omega\\!=\\!\sum_{\beta=1}^{N}\omega_{\beta}$ with $\omega_{\alpha}$ defined
in Eq. (1) and, $Y_{N}\\!=\\!1\\!-\\!\sum_{\beta=1}^{N-1}Y_{\beta}$. Similarly
to Eq. (2), the statistically stationary solution of Eq. (25) is the Dirichlet
distribution [17]. It is straightforward to verify that its drift and
diffusion also satisfy Eq. (7) with $\mathscr{F}\equiv\mathscr{D}$, i.e. WF is
a process whose invariant is Dirichlet and this solution is potential. A
notable difference between Eqs. (2) and (25), other than the coefficients, is
that the diffusion matrix of the Dirichlet diffusion process is diagonal,
while that of the WF process is full.
Another process similar to Eqs. (2) and (25) is the multivariate Jacobi
process, used in econometrics,
$\mathrm{d}Y_{\alpha}(t)=a(Y_{\alpha}-\pi_{\alpha})\mathrm{d}t+\sqrt{cY_{\alpha}}\mathrm{d}W_{\alpha}(t)-\sum_{\beta=1}^{N-1}Y_{\alpha}\sqrt{cY_{\beta}}\mathrm{d}W_{\beta}(t),\qquad\alpha=1,\dots,N$
(26)
of Gourieroux & Jasiak [9] with $a<0$, $c>0$, $\pi_{\alpha}>0$, and
$\sum_{\beta=1}^{N}\pi_{\beta}=1$.
In the univariate case the Dirichlet, WF, and Jacobi diffusions reduce to
$\mathrm{d}Y(t)=\frac{b}{2}(S-Y)\mathrm{d}t+\sqrt{\kappa
Y(1-Y)}\mathrm{d}W(t),$ (27)
see also [13], whose invariant is the beta distribution, which belongs to the
family of Pearson diffusions, discussed in detail by Forman & Sorensen [14].
## 6 Summary
The method of potential solutions of Fokker-Planck equations has been used to
derive a transport equation for the joint distribution of $N$ fluctuating
variables. The equivalent stochastic process, governing the set of random
variables, $0\\!\leq\\!Y_{\alpha}$, $\alpha\\!=\\!1,\dots,N-1$,
$\sum_{\alpha=1}^{N-1}Y_{\alpha}\\!\leq\\!1$, reads
$\mathrm{d}Y_{\alpha}(t)=\frac{b_{\alpha}}{2}\big{[}S_{\alpha}Y_{N}-(1-S_{\alpha})Y_{\alpha}\big{]}\mathrm{d}t+\sqrt{\kappa_{\alpha}Y_{\alpha}Y_{N}}\mathrm{d}W_{\alpha}(t),\qquad\alpha=1,\dots,N-1,$
(28)
where $Y_{N}\\!=\\!1\\!-\\!\sum_{\beta=1}^{N-1}Y_{\beta}$, and $b_{\alpha}$,
$\kappa_{\alpha}$ and $S_{\alpha}$ are parameters, while
$\mathrm{d}W_{\alpha}(t)$ is an isotropic Wiener process with independent
increments. Restricting the coefficients to $b_{\alpha}\\!>\\!0$,
$\kappa_{\alpha}\\!>\\!0$ and $0\\!<\\!S_{\alpha}\\!<\\!1$, and defining
$Y_{N}$ as above ensure $\sum_{\alpha=1}^{N}Y_{\alpha}\\!=\\!1$ and that
individual realizations of ($Y_{1},Y_{2},\dots,Y_{N})$ are confined to the
($N\\!-\\!1$)-dimensional convex polytope of the sample space. Eq. (28) can
therefore be used to numerically evolve the joint distribution of $N$
fluctuating variables required to satisfy a conservation principle. Eq. (28)
is a coupled system of nonlinear stochastic differential equations whose
statistically stationary solution is the Dirichlet distribution, Eq. (1),
provided the coefficients satisfy
$\frac{b_{1}}{\kappa_{1}}(1-S_{1})=\dots=\frac{b_{N-1}}{\kappa_{N-1}}(1-S_{N-1}).$
(29)
In stochastic modeling, one typically begins with a physical problem, perhaps
discrete, _then_ derives the stochastic differential equations whose solution
yields a distribution. In this paper we reversed the process: we assumed a
desired stationary distribution and derived the stochastic differential
equations that converge to the assumed distribution. A potential solution form
of the Fokker-Planck equation was posited, from which we obtained the
stochastic differential equations for the diffusion process whose
statistically stationary solution is the Dirichlet distribution. We have also
made connections to other stochastic processes, such as the Wright-Fisher
diffusions of population biology and the Jacobi diffusions in econometrics,
whose invariant distributions possess similar properties but whose stochastic
differential equations are different.
## Acknowledgements
It is a pleasure to acknowledge a series of informative discussions with J.
Waltz. This work was performed under the auspices of the U.S. Department of
Energy under the Advanced Simulation and Computing Program.
## References
* [1] N. L. Johnson. An approximation to the multinomial distribution: some properties and applications. Biometrika, 47(1-2):93–102, 1960.
* [2] J. E. Mosimann. On the compound multinomial distribution, the multivariate $\beta$-distribution, and correlations among proportions. Biometrika, 49(1-2):65–82, 1962.
* [3] S. Kotz, N.L. Johnson, and N. Balakrishnan. Continuous Multivariate Distributions: Models and applications. Wiley series in probability and statistics: Applied probability and statistics. Wiley, 2000.
* [4] K. Pearson. Mathematical Contributions to the Theory of Evolution. On a Form of Spurious Correlation Which May Arise When Indices Are Used in the Measurement of Organs. Royal Society of London Proceedings Series I, 60:489–498, 1896.
* [5] C. D. M. Paulino and C. A. de Braganca Pereira. Bayesian methods for categorical data under informative general censoring. Biometrika, 82(2):439–446, 1995.
* [6] F. Chayes. Numerical correlation and petrographic variation. J. Geol., 70:440–452, 1962.
* [7] P. S. Martin and J. E. Mosimann. Geochronology of pluvial lake cochise, southern arizona; [part] 3, pollen statistics and pleistocene metastability. Am. J. Sci., 263:313–358, 1965.
* [8] K. Lange. Applications of the Dirichlet distribution to forensic match probabilities. Genetica, 96:107–117, 1995. doi:10.1007/BF01441156.
* [9] C. Gourieroux and J. Jasiak. Multivariate Jacobi process with application to smooth transitions. Journal of Econometrics, 131:475–505, 2006.
* [10] S. S. Girimaji. Assumed $\beta$-pdf model for turbulent mixing: validation and extension to multiple scalar mixing. Combust. Sci. Technol., 78(4):177 – 196, 1991.
* [11] M. Steinrucken, Y.X. Rachel Wang, and Y.S. Song. An explicit transition density expansion for a multi-allelic Wright–Fisher diffusion with general diploid selection. Theoretical Population Biology, 83(0):1–14, 2013.
* [12] C. W. Gardiner. Stochastic methods, A Handbook for the Natural and Social Sciences. Springer-Verlag, Berlin Heidelberg, 4 edition, 2009.
* [13] J. Bakosi and J.R. Ristorcelli. Exploring the beta distribution in variable-density turbulent mixing. J. Turbul., 11(37):1–31, 2010.
* [14] J. L. Forman and M. Sorensen. The Pearson Diffusions: A Class of Statistically Tractable Diffusion Processes. Scandinavian Journal of Statistics, 35:438–465, 2008.
* [15] S. B. Pope. PDF methods for turbulent reactive flows. Prog. Energ. Combust., 11:119–192, 1985.
* [16] P. E. Kloeden and E. Platen. Numerical Solution of Stochastic Differential Equations. Springer, Berlin, 1999.
* [17] S. Karlin and H.M. Taylor. A Second Course in Stochastic Processes. Number v. 2. Acad. Press, 1981.
Figure 1: Time evolution of the joint probability, $\mathscr{F}(Y_{1},Y_{2})$,
extracted from the numerical solution of Eqs. (22–24). The initial condition
is a triple-delta distribution, with unequal peaks at the three corners of the
sample space. At the end of the simulation, $t=140$, the solid lines are that
of the distribution extracted from the numerical ensemble, dashed lines are
that of a Dirichlet distribution to which the solution converges in the
statistically stationary state, implied by the constant SDE coefficients,
sampled at the same heights.
Figure 2: Time evolution of the joint probability, $\mathscr{F}(Y_{1},Y_{2})$,
extracted from the numerical solution of Eqs. (22–24). The top-left panel
shows the initial condition: a box with diffused sides. By $t=160$, bottom-
right panel, the distribution converges to the same Dirichlet distribution as
in Figure 1.
Figure 3: Time evolution of the means extracted from the numerically
integrated system of Eqs. (22–24) starting from the triple-delta initial
condition. Dotted-solid lines – numerical solution, dashed lines – statistics
of the Dirichlet distribution determined analytically using the constant
coefficients of the SDE, see Table 1.
Figure 4: Time evolution of the second central moments extracted from the numerically integrated system of Eqs. (22–24) starting from the triple-delta initial condition. The legend is the same as in Figure 3.
Table 1: Initial and final states of the Monte-Carlo simulation starting from a triple-delta. The coefficients, $b_{1}$, $b_{2}$, $S_{1}$, $S_{2}$, $\kappa_{1}$, $\kappa_{2}$, of the system of SDEs (22–24) determine the distribution to which the system converges. The Dirichlet parameters, implied by the SDE coefficients via Eqs. (19–21), are in brackets. The corresponding statistics are determined by the well-known formulae of Dirichlet distributions [2].
Initial state: triple-delta, see Figure 1 | SDE coefficients and the statistics of their implied Dirichlet distribution in the stationary state
---|---
 | $b_{1}=1/10$, $b_{2}=3/2$ $(\omega_{1}=5)$
 | $S_{1}=5/8$, $S_{2}=2/5$ $(\omega_{2}=2)$
 | $\kappa_{1}=1/80$, $\kappa_{2}=3/10$ $(\omega_{3}=3)$
${\langle{Y_{1}}\rangle}_{0}\approx 0.05$ | ${\langle{Y_{1}}\rangle}_{\mathrm{s}}=1/2$
${\langle{Y_{2}}\rangle}_{0}\approx 0.42$ | ${\langle{Y_{2}}\rangle}_{\mathrm{s}}=1/5$
${\langle{Y_{3}}\rangle}_{0}\approx 0.53$ | ${\langle{Y_{3}}\rangle}_{\mathrm{s}}=3/10$
$\langle{y_{1}^{2}}\rangle_{0}\approx 0.03$ | $\langle{y_{1}^{2}}\rangle_{\mathrm{s}}=1/44$
$\langle{y_{2}^{2}}\rangle_{0}\approx 0.125$ | $\langle{y_{2}^{2}}\rangle_{\mathrm{s}}=4/275$
$\langle{y_{3}^{2}}\rangle_{0}\approx 0.13$ | $\langle{y_{3}^{2}}\rangle_{\mathrm{s}}=21/1100$
${\langle{y_{1}y_{2}}\rangle}_{0}\approx -0.012$ | ${\langle{y_{1}y_{2}}\rangle}_{\mathrm{s}}=-1/110$
${\langle{y_{1}y_{3}}\rangle}_{0}\approx -0.017$ | ${\langle{y_{1}y_{3}}\rangle}_{\mathrm{s}}=-3/220$
${\langle{y_{2}y_{3}}\rangle}_{0}\approx -0.114$ | ${\langle{y_{2}y_{3}}\rangle}_{\mathrm{s}}=-3/550$
|
arxiv-papers
| 2013-03-01T16:42:25 |
2024-09-04T02:49:42.298647
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "J. Bakosi, J.R. Ristorcelli",
"submitter": "Jozsef Bakosi",
"url": "https://arxiv.org/abs/1303.0217"
}
|
1303.0446
|
# Statistical sentiment analysis performance in Opinum
Boyan Bonev, Gema Ramírez-Sánchez, Sergio Ortiz Rojas Prompsit Language
Engineering
Avenida Universidad, s/n. Edificio Quorum III.
03202 Elche, Alicante (Spain)
{boyan,gramirez,sortiz}@prompsit.com
###### Abstract
The classification of opinion texts in positive and negative is becoming a
subject of great interest in sentiment analysis. The existence of many labeled
opinions motivates the use of statistical and machine-learning methods. First-
order statistics have proven to be very limited in this field. The Opinum
approach is based on the order of the words without using any syntactic and
semantic information. It consists of building one probabilistic model for the
positive and another one for the negative opinions. Then the test opinions are
compared to both models and a decision and confidence measure are calculated.
In order to reduce the complexity of the training corpus we first lemmatize
the texts and we replace most named-entities with wildcards. Opinum presents
an accuracy above 81% for Spanish opinions in the financial products domain.
In this work we discuss which are the most important factors that have an
impact on the classification performance.
###### keywords:
sentiment analysis , opinion classification , language model
††journal: arXiv
## 1 Introduction
Most of the texts written by humans reflect some kind of sentiment. The
interpretation of these sentiments depends on the linguistic skills and
emotional intelligence of both the author and the reader, but above all, this
interpretation is subjective to the reader. They don’t really exist in a
string of characters, for they are subjective states of mind. Therefore
sentiment analysis is a prediction of how most readers would react to a given
text.
There are texts which intend to be objective and texts which are intentionally
subjective. The latter is the case of opinion texts, in which the authors
intentionally use an appropriate language to express their positive or
negative sentiments about something. In this paper we work on the
classification of opinions in two classes: those expressing positive sentiment
(the author is in favour of something) and those expressing negative
sentiment, and we will refer to them as positive opinions and negative
opinions.
Sentiment analysis is possible thanks to the opinions available on-line. There
are vast amounts of text in fora, user reviews, comments in blogs and social
networks. It is valuable for marketing and sociological studies to analyse
these freely available data on some definite subject or entity. Some of the
texts available do include opinion information like stars, or recommend-or-
not, but most of them do not. A good corpus for building sentiment analysis
systems would be a set of opinions separated by domains. It should include
some information about the cultural origin of authors and their job, and each
opinion should be sentiment-evaluated not only by its own author, but by many
other readers as well. It would also be good to have a marking of the
subjective and objective parts of the text. Unfortunately, such corpora are not available at the moment.
In the present work we place our attention at the supervised classification of
opinions in positive and negative. Our system, which we call Opinum (an installation can be tested from a web interface at http://aplica.prompsit.com/en/opinum), is trained from a corpus labeled with a
value indicating whether an opinion is positive or negative. The corpus was
crawled from the web and it consists of a 160MB collection of Spanish opinions
about financial products. Opinum’s approach is general enough and it is not
limited to this corpus nor to the financial domain.
There are state-of-the-art works on sentiment analysis which care about
differentiating between the objective and the subjective part of a text. For
instance, in the review of a film there is an objective part and then the
opinion (Raaijmakers et al. (2008)). In our case we work directly with opinion
texts and we do not make such difference. We have noticed that in customer
reviews, even when stating objective facts, some positive or negative
sentiment is usually expressed.
Many works in the literature of sentiment analysis take lexicon-based
approaches, like Taboada et al. (2011). For instance Hu and Liu (2004); Blair-
Goldensohn et al. (2008) use WordNet to extend the relation of positive and
negative words to other related lexical units. However the combination of
which words appear together may also be important and there are comparisons of
different Machine learning approaches (Pang et al. (2002)) in the literature,
like Support Vector Machines, k-Nearest Neighbours, Naive-Bayes, and other
classifiers based on global features. In the work of Mcdonald et al. (2007),
structured models are used to infer the sentiment from different levels of
granularity. They score cliques of text based on a high-dimensional feature
vector.
In the Opinum approach we score each sentence based on its $n$-gram
probabilities. For a complete opinion we sum the scores of all its sentences.
Thus, if an opinion has several positive sentences and it finally concludes
with a negative sentence which settles the whole opinion as negative, Opinum
would probably fail. The $n$-gram sequences are good at capturing phrasemes
(multiwords), the motivation for which is stated in Section 2. Basically,
there are phrasemes which bear sentiment. They may be different depending on
the domain and it is recommendable to build the models with opinions belonging
to the target domain, for instance, financial products, computers, airlines,
etc. A study of domain adaptation for sentiment analysis is presented in the
work of Blitzer et al. (2007). In Opinum different classifiers would be built
for different domains. Building the models does not require the aid of
experts, only a labeled set of opinions is necessary. Another contribution of
Opinum is that it applies some simplifications on the original text of the
opinions for improving the performance of the models.
In the remainder of the paper we first state the motivation of our approach in
Section 2, then in Section 3 we describe in detail the Opinum approach. In
Section 4 we present our experiments with Spanish financial opinions. In
Section 5 we discuss which are the most important factors that have an effect
on the classification performance. Finally we state some conclusions and
future work in Section 6.
## 2 Hypothesis
When humans read an opinion, even if they do not understand it completely
because of the technical details or domain-specific terminology, in most cases
they can notice whether it is positive or negative. The reason for this is
that the author of the opinion, consciously or not, uses nuances and
structures which show a positive or negative feeling. Usually, when a user
writes an opinion about a product, the intention is to communicate that
subjective feeling, apart from describing the experience with the product and
giving some technical details.
The hypothesis underlying the traditional keyword or lexicon-based approaches
(Blair-Goldensohn et al. (2008); Hu and Liu (2004)) consists in looking for
some specific positive or negative words. For instance, “great” should be
positive and “disgusting” should be negative. Of course there are some
exceptions like “not great”, and some approaches detect negation to invert the
meaning of the word. More elaborate cases are constructions like “an offer you
can’t refuse” or “the best way to lose your money”.
There are domains in which the authors of the opinions might not use these
explicit keywords. In the financial domain we can notice that many of the
opinions which express the author’s insecurity are actually negative, even
though the words are mostly neutral. For example, “I am not sure if I would
get a loan from this bank” has a negative meaning. Another difficulty is that
the same words could be positive or negative depending on other words of the
sentence: “A loan with high interests” is negative while “A savings account
with high interests” is positive. In general more complex products have more
complex and subtle opinions. The opinion about a cuddly toy would contain many
keywords and would be much more explicit than the opinion about the conditions
of a loan. Even so, the human readers can get the positive or negative feeling
at a glance.
The hypothesis of our approach is that it is possible to classify opinions in
negative and positive based on canonical (lemmatized) word sequences. Given a
set of positive opinions $\mathbf{O}^{p}$ and a set of negative opinions
$\mathbf{O}^{n}$, the probability distributions of their $n$-gram word
sequences are different and can be compared to the $n$-grams of a new opinion
in order to classify it. In terms of statistical language models, given the
language models $\mathit{M}^{p}$ and $\mathit{M}^{n}$ obtained from
$\mathbf{O}^{p}$ and $\mathbf{O}^{n}$, the probability
$p^{p}_{o}=P(o|\mathbf{O}^{p})$ that a new opinion would be generated by the
positive model is smaller or greater than the probability
$p^{n}_{o}=P(o|\mathbf{O}^{n})$ that a new opinion would be generated by the
negative model.
We build the models based on sequences of canonical words in order to simplify
the text, as explained in the following section. We also replace some named
entities like names of banks, organizations and people by wildcards so that
the models do not depend on specific entities.
## 3 The Opinum approach
The proposed approach is based on $n$-gram language models. Therefore building
a consistent model is the key for its success. In the field of machine
translation a corpus with size of 500MB is usually enough for building a
$5$-gram language model, depending on the morphological complexity of the
language.
In the field of sentiment analysis it is very difficult to find a big corpus
of context-specific opinions. Opinions labeled with stars or a
positive/negative label can be automatically downloaded from different
customers’ opinion websites. The sizes of the corpora collected that way range
between 1MB and 20MB for both positive and negative opinions.
Such a small amount of text would be suitable for bigrams and would capture
the difference between “not good” and “really good”, but this is not enough
for longer sequences like “offer you can’t refuse”. In order to build
consistent $5$-gram language models we need to simplify the language
complexity by removing all the morphology and replacing the surface forms by
their canonical forms. Therefore we make no difference between “offer you
can’t refuse” and “offers you couldn’t refuse”.
We also replace named entities by wildcards: person_entity,
organization_entity and company_entity. Although these replacements also
simplify the language models to some extent, their actual purpose is to prevent negative constructions from becoming associated with concrete entities. For
instance, we do not care that “do not trust John Doe Bank” is negative,
instead we prefer to know that “do not trust company_entity” is negative
regardless of the entity. This generality allows us to better evaluate
opinions about new entities. Also, in the cases when all the opinions about
some entity E1 are good and all the opinions about some other entity E2 are
bad, entity replacement prevents the models from acquiring this kind of bias.
In the following we detail the lemmatization process, the named-entity detection, and how we build and evaluate the positive and negative language models.
### 3.1 Lemmatization
Working with the words in their canonical form is for the sake of generality
and simplification of the language model. Removing the morphological
information does not change the semantics of most phrasemes (or multiwords).
There are some lexical forms for which we keep the surface form or we add some
morphological information to the token. These exceptions are the subject
pronouns, the object pronouns and the possessive forms. The reason for this is
that for some phrasemes the personal information is the key for deciding the
positive or negative sense. For instance, let us suppose that some opinions
contain the sequences
$o_{t}=\text{``They made money from me''},\qquad o_{i}=\text{``I made money from them''}.$
Their lemmatization, referred to as $\mathcal{L}_{0}(\cdot)$ (the notation we use here is for the sake of readability and slightly differs from the one we use in Opinum), would be
$\mathcal{L}_{0}(o_{t})=\mathcal{L}_{0}(o_{i})=\text{``SubjectPronoun make money from ObjectPronoun''},$
Therefore we would have equally probable $P(o_{t}|M^{p})=P(o_{i}|M^{p})$ and
$P(o_{t}|M^{n})=P(o_{i}|M^{n})$, which does not express the actual sentiment
of the phrasemes. In order to capture this kind of differences we prefer to
have
$\mathcal{L}_{1}(o_{t})=\text{``SubjectPronoun\\_3p make money from ObjectPronoun\\_1p''},$
$\mathcal{L}_{1}(o_{i})=\text{``SubjectPronoun\\_1p make money from ObjectPronoun\\_3p''}.$
The probabilities still depend on how many times these lexical sequences appear in opinions labeled as positive or negative, but with $\mathcal{L}_{1}(\cdot)$ we would have that
$P(o_{t}|M^{p})<P(o_{i}|M^{p}),\qquad P(o_{t}|M^{n})>P(o_{i}|M^{n}),$
that is, $o_{i}$ fits better the positive model than $o_{t}$ does, and vice
versa for the negative model.
In our implementation lemmatization is performed with Apertium, which is an
open-source rule-based machine translation engine. Thanks to its modularized
architecture (described in Tyers et al. (2010)) we use its morphological
analyser and its part-of-speech disambiguation module in order to take one
lexical form as the most probable one, in case there are several possibilities
for a given surface form. Apertium currently has morphological analysers for 30
languages (most of them European), which allows us to adapt Opinum to other
languages without much effort.
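As a rough illustration of the $\mathcal{L}_{1}$ mapping (not the actual Opinum/Apertium pipeline: the (lemma, tag, person) input format and the tag names below are our own assumptions), a sketch in Python could be:

```python
# Illustrative sketch of the L1 lemmatization: keep lemmas only, but preserve
# the grammatical person of subject/object pronouns. The (lemma, tag, person)
# token format and the tag names are assumptions, not the Apertium output.
def l1_token(lemma, tag, person):
    if tag in ("SubjectPronoun", "ObjectPronoun") and person:
        return f"{tag}_{person}"           # e.g. "SubjectPronoun_3p"
    return lemma.lower()

def l1_sentence(analysed_tokens):
    """analysed_tokens: iterable of (lemma, tag, person-or-None) triples."""
    return " ".join(l1_token(lemma, tag, person)
                    for lemma, tag, person in analysed_tokens)

# "They made money from me" vs. "I made money from them"
o_t = [("they", "SubjectPronoun", "3p"), ("make", "Verb", None),
       ("money", "Noun", None), ("from", "Prep", None),
       ("me", "ObjectPronoun", "1p")]
o_i = [("I", "SubjectPronoun", "1p"), ("make", "Verb", None),
       ("money", "Noun", None), ("from", "Prep", None),
       ("them", "ObjectPronoun", "3p")]

print(l1_sentence(o_t))   # SubjectPronoun_3p make money from ObjectPronoun_1p
print(l1_sentence(o_i))   # SubjectPronoun_1p make money from ObjectPronoun_3p
```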
### 3.2 Named entities replacement
The corpora with labeled opinions are usually limited to a number of
enterprises and organizations. For a generalization purpose we make the texts
independent of concrete entities. We do make a difference between names of
places, people and organizations/companies. We also detect dates, phone
numbers, e-mails and URL/IP. We substitute them all by different wildcards.
All the rest of the numbers are substituted by a “Num” wildcard. For instance,
the following subsequence would have a $\mathcal{L}_{2}(o_{e})$ lemmatization
+ named entity substitution:
$\displaystyle o_{e}=$ “Joe bought 300 shares of Acme Corp. in 2012”
$\displaystyle\mathcal{L}_{2}(o_{e})=$ “Person buy Num share of Company in
Date”
The named entity recognition task is integrated within the lemmatization
process. We collected a list of names of people, places, companies and
organizations to complete the morphological dictionary of Apertium. The
morphological analysis module is still very fast, as the dictionary is first
compiled and transformed to the minimal deterministic finite automaton. For
the dates, phone numbers, e-mails, IP and URL we use regular expressions which
are also supported by the same Apertium module.
Regarding the list of named entities, for a given language (Spanish in our
experiments) we download its Wikipedia database which is a freely available
resource. We heuristically search it for organizations, companies, places and
people. Based on the number of references a given entity has in Wikipedia’s
articles, we keep the first 1.500.000 most relevant entities, which cover the
entities with 4 references or more (the popular entities are referenced from
tens to thousands of times).
Finally, unknown surface forms are replaced by the “Unknown” lemma (the known
lemmas are lowercase). These would usually correspond to strange names of
products, erroneous words and finally to words which are not covered by the
monolingual dictionary of Apertium. Therefore our approach is suitable for
opinions written in a rather correct language. If unknown surfaces were not
replaced, the frequently misspelled words would not be excluded, which is
useful in some domains. This is at the cost of increasing the complexity of
the model, as all misspelled words would be included. Alternatively, the
frequently misspelled words could be added to the dictionary.
### 3.3 Language models
The language models we build are based on $n$-gram word sequences. They model
the likelihood of a word $w_{i}$ given the sequence of $n-1$ previous words,
$P(w_{i}|w_{i-(n-1)},\ldots,w_{i-1})$. This kind of model assumes independence between the word $w_{i}$ and the words not belonging to the $n$-gram, $w_{j},\,j\leq i-n$. This is a drawback for unbounded dependencies but we are not
interested in capturing the complete grammatical relationships. We intend to
capture the probabilities of smaller constructions which may hold
positive/negative sentiment. Another assumption we make is independence
between different sentences.
In Opinum the words are lemmas (or wildcards replacing entities), and the
number of words among which we assume dependence is $n=5$. A maximum $n$ of 5
or 6 is common in machine translation where huge amounts of text are used for
building a language model (Koehn et al. (2007)). In our case we have at our
disposal a small amount of data but the language is drastically simplified by
removing the morphology and entities, as previously explained. We have
experimentally found that $n>5$ does not improve the classification
performance of lemmatized opinions and could incur over-fitting.
In our setup we use the IRSTLM open-source library for building the language
model. It performs an $n$-gram count for all $n$-grams from $n=1$ to $n=5$ in
our case. To deal with data sparseness a redistribution of the zero-frequency
probabilities is performed for those sets of words which have not been
observed in the training set $\mathcal{L}(\mathbf{O})$. Relative frequencies
are discounted to assign positive probabilities to every possible $n$-gram.
Finally a smoothing method is applied. Details about the process can be found
in Federico and Cettolo (2007). Another language model approach based on
$n$-grams was reported in Cui et al. (2006), where they used the CMU-Cambridge
Language Modeling Toolkit on the original texts.
For Opinum we run IRSTLM twice during the training phase: once taking as input
the opinions labeled as positive and once taking the negatives:
$\mathit{M^{p}}\leftarrow\text{Irstlm}\left(\mathcal{L}\left(\mathbf{O}^{p}\right)\right),\qquad\mathit{M^{n}}\leftarrow\text{Irstlm}\left(\mathcal{L}\left(\mathbf{O}^{n}\right)\right)$
These two models are then queried with new opinions to decide whether each one is positive or negative, as detailed in the next subsection.
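Conceptually, this training step amounts to counting smoothed $n$-gram frequencies separately for the two classes. The following toy sketch (pure Python bigrams with add-one smoothing, standing in for the IRSTLM $5$-gram models actually used by Opinum) illustrates the idea on already lemmatized and entity-substituted sentences:

```python
# Toy stand-in for the training step: two add-one-smoothed bigram models, one
# per class, built from lemmatized, entity-substituted sentences. Opinum uses
# IRSTLM 5-gram models with proper discounting and smoothing instead.
from collections import Counter
from math import log

def train_bigram_model(sentences):
    unigrams, bigrams = Counter(), Counter()
    for s in sentences:
        tokens = ["<s>"] + s.split() + ["</s>"]
        unigrams.update(tokens[:-1])
        bigrams.update(zip(tokens[:-1], tokens[1:]))
    vocab = len(unigrams) + 1
    def logprob(sentence):
        tokens = ["<s>"] + sentence.split() + ["</s>"]
        return sum(log((bigrams[(u, v)] + 1) / (unigrams[u] + vocab))
                   for u, v in zip(tokens[:-1], tokens[1:]))
    return logprob

# L(O^p) and L(O^n): tiny illustrative training sets
score_pos = train_bigram_model(["SubjectPronoun_1p make money from ObjectPronoun_3p",
                                "company_entity solve ObjectPronoun_1p problem"])
score_neg = train_bigram_model(["SubjectPronoun_3p make money from ObjectPronoun_1p",
                                "not trust company_entity"])

s = "SubjectPronoun_3p make money from ObjectPronoun_1p"
print(score_pos(s), score_neg(s))   # the negative model assigns the higher score
```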
### 3.4 Evaluation and confidence
In the Opinum system we query the $\mathit{M^{p}},\mathit{M^{n}}$ models with
the Heafield (2011) KenLM open-source library because it answers the queries
very quickly and has a short loading time, which is suitable for a web
application. It also has efficient memory management, which helps with simultaneous queries to the server.
The queries are performed at sentence level. Each sentence $s\in o_{t}$ is
assigned a score which is the log probability of the sentence being generated
by the language model. The decision is taken by comparing its scores for the
positive and for the negative models. For a given opinion $o_{t}$, the log-
probability sums can be taken:
$d_{o_{t}}=\sum_{s\in o_{t}}\log P(s|\mathit{M}^{p})-\sum_{s\in o_{t}}\log P(s|\mathit{M}^{n})\ \underset{?}{\gtrless}\ 0$
If this difference is close to zero, $|d_{o_{t}}|/w_{o_{t}}<\varepsilon_{0}$,
it can be considered that the classification is neutral. The number of words
$w_{o_{t}}$ is used as a normalization factor. If it is large,
$|d_{o_{t}}|/w_{o_{t}}>\varepsilon_{1}$, it can be considered that the opinion
has a very positive or very negative sentiment. Therefore Opinum classifies
the opinions with qualifiers: very/somewhat/little positive/negative depending
on the magnitude $|d_{o_{t}}|/w_{o_{t}}$ and $\operatorname{sign}(d_{o_{t}})$,
respectively.
The previous assessment is also accompanied by a confidence measure given by
the level of agreement among the different sentences of an opinion. If all its
sentences have the same positivity/negativity, measured by
$\operatorname{sign}(d_{s_{j}}),\,s_{j}\in o$, with large magnitudes then the
confidence is the highest. In the opposite case in which there is the same
number of positive and negative sentences with similar magnitudes the
confidence is the lowest. The intermediate cases are those with sentences
agreeing in sign but some of them with very low magnitude, and those with most
sentences of the same sign and some with different sign. We use Shannon’s
entropy measure $H(\cdot)$ to quantify the amount of disagreement. For its
estimation we divide the range of possible values of $d$ in $B$ ranges,
referred to as bins:
$H_{o_{t}}=\displaystyle\sum_{b=1}^{B}p(d_{b})\log\dfrac{1}{p(d_{b})}.$
The number of bins should be low (less than 10), otherwise it is difficult to
get a low entropy measure because of the sparse values of $d_{b}$. We set two
thresholds $\eta_{0}$ and $\eta_{1}$ such that the confidence is said to be
high/normal/low if $H_{o_{t}}<\eta_{0}$, $\,\eta_{0}<H_{o_{t}}<\eta_{1}$ or
$H_{o_{t}}>\eta_{1}$, respectively.
The thresholds $\varepsilon$, $\eta$ and the number of bins $B$ are
experimentally set. The reason for this is that they are used to tune
subjective qualifiers (very/little, high/low confidence) and will usually
depend on the training set and on the requirements of the application. Note
that the classification in positive or negative sentiment is not affected by
these parameters. From a human point of view it is also a subjective
assessment but in our setup it is looked at as a feature implicitly given by
the labeled opinions of the training set.
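A condensed sketch of this decision and confidence computation (assuming the KenLM Python bindings; the model file names, thresholds, qualifier mapping and bin count are illustrative placeholders, not the values used in the deployed system) is:

```python
# Sketch of the decision rule and the entropy-based confidence, assuming the
# kenlm Python bindings and two previously trained ARPA models. 'pos.arpa',
# 'neg.arpa', the thresholds and the bin count are illustrative placeholders.
import math
import kenlm

pos = kenlm.Model("pos.arpa")   # M^p
neg = kenlm.Model("neg.arpa")   # M^n

def classify(sentences, eps0=0.05, eps1=0.5, bins=8):
    # Per-sentence log-probability differences and their sum d_{o_t}
    d_s = [pos.score(s) - neg.score(s) for s in sentences]
    d = sum(d_s)
    n_words = sum(len(s.split()) for s in sentences)

    label = "positive" if d > 0 else "negative"
    ratio = abs(d) / max(n_words, 1)
    qualifier = "neutral" if ratio < eps0 else ("very" if ratio > eps1 else "somewhat")

    # Confidence: Shannon entropy of the histogram of per-sentence differences
    lo, hi = min(d_s), max(d_s)
    counts = [0] * bins
    for v in d_s:
        counts[min(int((v - lo) / (hi - lo + 1e-9) * bins), bins - 1)] += 1
    probs = [c / len(d_s) for c in counts if c]
    entropy = -sum(p * math.log(p) for p in probs)   # low entropy = high confidence
    return label, qualifier, entropy

# usage: classify(["lemmatized sentence one", "lemmatized sentence two"])
```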
## 4 Experiments
Similar words, different meaning
Original Spanish text | Meaning in English | Result
---|---|---
“Al tener la web, no pierdes el tiempo por teléfono.” | As you have the website you don’t waste time on the phone. | Positive
“En el teléfono os hacen perder el tiempo y no tienen web.” | They waste your time on the phone and they don’t have a website. | Negative
“De todas formas me solucionaron el problema.” | Anyway, they solved my problem. | Positive
“No hay forma de que me solucionen el problema.” | There is no way to make them solve my problem. | Negative
Table 1: Opinum for financial opinions in Spanish. Short examples of successful classification which may be attributed to the $n$-gram models (order $n=5$). These examples can be tested on-line at http://www.prompsit.com/en/opinum, in the 2012 version.
A negative opinion with several sentences
Original Spanish text | Meaning in English | Result
---|---|---
“Con ENTIDAD me fue muy bien.” | I was fine with ENTITY. | Positive
“Hasta que surgieron los problemas.” | Until the problems began. | Negative
“Por hacerme cliente me regalaban 100 euros.” | They gave me 100 euro for becoming a client. | Positive
“Pero una vez que eres cliente no te aportan nada bueno.” | But once you are a client, they do not offer anything good. | Negative
“Estoy pensando cambiar de banco.” | I am considering switching to another bank. | Negative
Classification of the complete opinion | | Negative
Table 2: Example of an opinion with several sentences which is classified as Negative.
In our experimental setup we have a set of positive and negative opinions in
Spanish, collected from a web site for user reviews and opinions. The opinions
are constrained to the financial field including banks, savings accounts,
loans, mortgages, investments, credit cards, and all other related topics. The
authors of the opinions are not professionals; they are mainly customers.
There is no structure required for their opinions, and they are free to tell
their experience, their opinion or their feeling about the entity or the
product. The users write to communicate their review to other readers and do not have any natural language processing tools in mind. The authors decide
whether their own opinion is positive or negative and this field is mandatory.
The users provide a number of stars as well: from one to five, but we have not
used this information. It is interesting to note that there are 66 opinions
with only one star which are marked as positive. There are also 67 opinions
with five stars which are marked as negative. This is partially due to human error, as a reader can notice when reading them. However we have not filtered
these noisy data, as removing human errors could be regarded as biasing the
data set with our own subjective criteria.
Regarding the size of the corpus, it consists of 9320 opinions about 180
different Spanish banks and financial products. From these opinions 5877 are
positive and 3443 are negative. There is a total of 709741 words and the mean
length of the opinions is 282 words for the positive and 300 words for the
negative ones. In the experiments we present in this work, we randomly divide
the data set into 75% for training and 25% for testing. We check that the distribution of positive and negative opinions remains the same across the training and test sets.
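Such a stratified 75/25 partition can be written, for example, as follows (a sketch assuming scikit-learn; `texts` and `labels` are placeholders for the lemmatized opinions and their positive/negative labels):

```python
# Sketch of the 75/25 split that preserves the positive/negative proportions
# (assuming scikit-learn; texts and labels are placeholders).
from sklearn.model_selection import train_test_split

texts = ["..."] * 9320                       # placeholder opinions
labels = [1] * 5877 + [0] * 3443             # 1 = positive, 0 = negative

train_texts, test_texts, train_labels, test_labels = train_test_split(
    texts, labels, test_size=0.25, stratify=labels, random_state=0)

print(len(train_texts), len(test_texts))     # 6990 training, 2330 test
print(sum(train_labels) / len(train_labels),
      sum(test_labels) / len(test_labels))   # class proportions preserved
```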
After the $\mathcal{L}_{2}(\cdot)$ lemmatization and entity substitution, the
number of different words in the data set is 13067 in contrast with the 78470
different words in the original texts. In other words, the lexical complexity
is reduced by 83%. Different substitutions play a different role in this
simplification. The “Unknown” wildcard represents 7.13% of the original text. Entities were detected and replaced 33858 times (7807 locations, 5409 people, 19049 companies, 502 e-mail addresses and phone numbers, 2055 URLs, 1136 dates), which is 4.77% of the text. There are also 46780 number substitutions, 7% of the text. The rest of the complexity reduction is due to
the removal of the morphology as explained in Subsection 3.1.
In our experiments, the training of Opinum consisted of lemmatizing and
substituting entities of the 6990 opinions belonging to the training set and
building the language models. The positive model is built from 4403 positive
opinions and the negative model is built from 2587 negative opinions.
Balancing the amount of positive and negative samples does not improve the
performance. Instead, it would oblige us to remove a substantial number of positive opinions, and the classification results decrease by approximately 2%. This is why we use all the opinions available in the training set. Both language models are $n$-grams with $n\in[1,5]$. Having 37% fewer samples for the negative opinions is not a problem thanks to the smoothing techniques
applied by IRSTLM. Nonetheless if the amount of training texts is too low we
would recommend taking a lower $n$. A simple way to set $n$ is to take the
lowest value of $n$ for which classification performance is improved.
Depending on the language model tool used, an unnecessarily high $n$ could
overfit the models. The effect of the order is further analysed in Section 5.
An example of the contribution of $n$-grams to successfully capture small
order differences is shown in Table 1.
The tests are performed with 2330 opinions (not involved in building the
models). For measuring the accuracy we do not use the qualifiers information
but only the decision about the positive or negative class. In Figure 1 we
show the scores of the opinions for the positive and negative models. The
score is the sum of the scores of its sentences; thus longer opinions (bigger markers) have bigger scores. Independence of opinion length is not necessary for classifying into positive and negative. In the diagonal it can be seen that positive samples are close to the negative ones; this is to be
expected: both positive and negative language models are built for the same
language. However the small difference in their scores yields an $81.98$%
success rate in the classification. An improvement of this rate would be
difficult to achieve taking into account that there is noise in the training
set and that there are opinions without a clear positive or negative feeling.
A larger corpus would also contribute to a better result. Even though we have
put much effort into simplifying the text, this does not help in the cases in which a construction of words is never found in the corpus. A construction
could even be present in the corpus but in the wrong class. For instance, in
our corpus “no estoy satisfecho” (meaning “I am not satisfied”) appears 3
times among the positive opinions and 0 times among the negative ones. This
weakness of the corpus is due to sentences referring to a money back
guarantee: “si no esta satisfecho le devolvemos el dinero” (“if you are not satisfied we give your money back”), which are used in a positive context.
Figure 1: Relation between similarity to the models (x and y axis) and the
relative size of the opinions (size of the points).
Opinum performs slightly better with long opinions than with short ones. We attribute this to the fact that in long opinions a single sentence does not usually change the positiveness score much. For some
examples see Table 2. In long opinions every sentence is prone to show the
sentiment except for the cases of irony or opinions with an objective part.
An installation of Opinum is available on-line. It lacks usability facilities
like batch processing and file uploading because it is intended for small
test purposes. In the web interface we only provide the single opinion query
and we output the decision, the qualifiers information and the confidence
measure. For better performance, the system can be installed on a Linux-based machine and its scripts can be used in batch mode.
The query time of Opinum on a standard computer ranges from $1.63$ s for the
shortest opinions to $1.67$ s for those with more than 1000 words. In our
setup, most of the time is spent in loading the morphological dictionary; a few
milliseconds are spent in the morphological analysis of the opinion and the
named entity substitution, and less than a millisecond is spent in querying
each model. In a batch mode, the morphological analysis could be done for all
the opinions together and thousands of them could be evaluated in seconds.
## 5 Discussion on performance
The performance of Opinum depending on the size of the opinions of the test
set is shown in Figure 2. In Figures 3, 4 and 5 the ROC curve of the
classifier shows its stability against changing the true-positive versus
false-positive rates. The success rate of Opinum is $81.98\%$, improving significantly on the $69\%$ baseline given by a classifier based on the frequencies of single words on the same data set. A comparison with other methods would be a valuable source of evaluation, but it is not feasible at the moment because of the lack of freely available customer-opinion databases and opinion classifiers. However in the present work we discuss important aspects
of performance under different experimental conditions.
The first of them is the size of the opinions. In Figure 2 we show the
relation between successful and erroneous classifications. On the one hand
this figure gives an idea of the distribution of opinion lengths in the
corpus. On the other hand it shows for which opinion lengths the success rate
is better. The performance is similar for a wide range of opinion lengths:
from 2-3 sentences to opinions of several paragraphs. It can be seen that for
unusually long opinions the performance is worse. This can be attributed to
their different style. In middle sized opinions users focus on expressing a
positive or negative feeling in a few sentences, while in longer opinions they
are not so clear. One way to tackle this problem would be to take an approach
for detecting the parts of the opinion which matter for classification.
Another recommendation would be to filter the unusually short or long opinions
from the training set, or even to construct different classifiers for
different ranges of opinion lengths.
Figure 2: Number of successful and erroneous classifications (vertical axis)
depending on the size of the test opinions (horizontal axis).
Another of the characteristics of Opinum which is worth evaluating is the
morphology substitution approach. In Figure 3 we show a comparison of the ROC
curves when using different morphological substitutions. For instance in the
classifier denoted with label A, morphology is removed but the information
about 1st, 2nd or 3rd person is kept. In B morphology is removed and named
entities (companies, people, dates, numbers) are substituted by wildcards. In
C only lemmas are used and in D the original text (tokenized) is used. This
information is summarized in Table 3. The performances of these four
classifiers are similar. B outperforms the rest, which means that the proposed
strategy to keep the person information is not useful. Moreover the language
model size for B is smaller. A possible reason for the simpler model outperforming the more complex one is that there is not enough training data. As
expected, the model based on the original text performs worse and is the
biggest (the size includes the sum of the positive and negative models).
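The ROC curves and the areas under the curve compared here can be computed directly from the per-opinion score differences $d_{o_{t}}$; a sketch (assuming scikit-learn, with `true_labels` and `d_scores` as placeholders for the test-set values) is:

```python
# Sketch of the ROC/AUC evaluation from the per-opinion score differences d
# (assuming scikit-learn; true_labels and d_scores are placeholders for the
# values produced on the test set by the two language models).
from sklearn.metrics import roc_curve, roc_auc_score

true_labels = [1, 0, 1, 1, 0, 0, 1, 0]                       # 1 = positive opinion
d_scores    = [2.3, -0.4, 0.9, -0.2, -1.7, 0.3, 1.1, -2.0]   # d_{o_t} values

fpr, tpr, _ = roc_curve(true_labels, d_scores)
print("AUC =", roc_auc_score(true_labels, d_scores))
```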
Figure 3: Left: Receiver Operating Characteristic (ROC) curves for four language models (A, B, C and D) with different morphological changes, as described in Table 3. Right: The size of these models.
 | A | B | C | D
---|---|---|---|---
Morphology removed | ✓ | ✓ | ✓ |
Person not included | | ✓ | ✓ |
Unknown words replaced | ✓ | ✓ | |
Named entities replaced | ✓ | ✓ | |
Classifier success rate | $80.76\%$ | $81.97\%$ | $81.61\%$ | $79.85\%$
Area under the curve | $83.44\%$ | $86.51\%$ | $83.97\%$ | $81.00\%$
Table 3: Characteristics of the language models of Figure 3.
Another question that has to be addressed is the maximum order of the
$n$-grams of the language model. In statistical machine translation orders
between 5 and 7 are common. However in machine translation the corpora used to
train the models are considerably larger than our 709741-word corpus of
opinions. Due to this there is no significant difference in the performance of
higher order models, and even bi-grams perform well (Figure 4-Left). The
reason for similar performance is that infrequent $n$-grams are pruned. The
actual difference is in the size of the models (Figure 4-Right). In our case
we select a maximal order of 5, because the classification performance is
slightly higher. The benefit of a high order would be increased with bigger
corpora.
Figure 4: Left: Receiver Operating Characteristic (ROC) curves for four
language models with different $n$-grams order. Right: size of the models for
different $n$-grams order. In all of them the same morphological changes are
used: only lemmas and no named entities.
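Whatever order is chosen, the classification itself reduces to querying the two language models and comparing their scores. A minimal sketch, assuming the KenLM Python bindings and two previously trained ARPA models (`pos.arpa` and `neg.arpa` are placeholder file names; a real pipeline would first apply the morphological substitutions described above):

```python
import kenlm

# Placeholder paths to positive and negative n-gram models trained beforehand.
pos_model = kenlm.Model("pos.arpa")
neg_model = kenlm.Model("neg.arpa")

def classify(opinion: str) -> str:
    """Label an opinion by comparing its total log10 probability
    under the positive and the negative language model."""
    pos_score = pos_model.score(opinion, bos=True, eos=True)
    neg_score = neg_model.score(opinion, bos=True, eos=True)
    return "positive" if pos_score > neg_score else "negative"

print(classify("el servicio fue excelente y lo recomiendo"))
```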
Thus the size of the labeled corpus of opinions is of prime importance. It is
also the main limitation for most companies which need opinion classification
for their particular context, and so it is the first bottleneck to take into
account in statistical sentiment analysis. In Figure 5-Left the ROC plots show the
quality of the classifiers when trained with smaller subsets of our original
corpus. In Figure 5-Right we show how the classification performance and the
area under the curve increase with increasing corpus size. In these figures
the maximal amount of $100\%$ (i.e., $140,000$ lines/paragraphs) corresponds to
$75\%$ of our corpus because the remaining $25\%$ is kept for performance
evaluation.
Figure 5: Receiver Operating Characteristic (ROC) curve of the Opinum
classifier for financial opinions.
Although in Figure 5-Right the performance exhibits a linear-like growth, this
only means that the training set is insufficient for this setup. Of course,
language models have a limit for capturing the positive or negative character
of a text. In Cui et al. (2006) the authors evaluate opinion classification
based on $n$-grams with a larger dataset consisting of $159,558$ positive and
$27,366$ negative opinions. Their results are slightly superior but the data
sets are not comparable. They do not present an evaluation for increasing size
of the corpus. The fact that their numbers of positive and negative opinions
are very different is another question to address. In our setup the two classes
still differ in size but are better balanced. We found no improvement from
removing the surplus; on the contrary, the performance slightly decreased.
## 6 Conclusions and future work
Opinum is a sentiment analysis system designed for classifying customer
opinions into positive and negative. Its approach, based on morphological
simplification, entity substitution and $n$-gram language models, makes it
easily adaptable to classification targets other than positive/negative. In
this work we present experiments for Spanish in the
financial domain but Opinum could easily be trained for a different language
or domain. To this end an Apertium morphological analyser would be necessary
(30 languages are currently available) as well as a labeled data set of
opinions. Setting $n$ for the $n$-gram models depends on the size of the
corpus but it would usually range from 4 to 6, 5 in our case. There are other
parameters which have to be experimentally tuned and they are not related to
the positive or negative classification but to the subjective qualifier
very/somewhat/little and to the confidence measure.
The classification performance of Opinum in our financial-domain experiments
is 81.98%, which would be difficult to improve because of the noise in the data
and the subjectivity of the positive/negative labeling. The next steps
would be to study the possibility to classify in more than two classes by
using several language models. The use of an external neutral corpus should
also be considered in the future.
In this work we show that one of the most important factors to take into
account is the available number of labeled opinions. The size of the corpus
turns out to have much more impact on the performance than the order of the
$n$-grams or even the morphological simplifications that we perform. Arguably,
morphological simplification would have a greater benefit on larger corpora,
but this has to be further analysed with larger data sets.
In practice the classification performance can be improved by filtering noise
out of the corpus. We have kept all the noise and outliers in order to be fair
with the reality of the data for the present study. However, many wrong-
labeled opinions can be removed. In our data set there were some users
checking the “recommend” box and placing one or two stars, or not recommending
the product and giving four or five stars. Also, we have seen that very short
opinions are classified unreliably, and that in very long opinions the language
does not clearly reflect the positive or negative sense. Thus these opinions should be
filtered in an application based on Opinum.
As future work, an important question is to establish the limitations of
this approach for different domains. Is it equally successful for a wider
domain? For instance, one could try to build the models from a mixed set of
opinions from the financial and the IT domains. Would it perform well on a
general domain?
Regarding applications, Opinum could be trained for a given domain without
expert knowledge. Its queries are very fast, which makes it feasible for free
on-line services. An interesting application would be to exploit the named
entity recognition and associate positive/negative scores to the entities
based on their surrounding text. If several domains were available, then the
same entities would have different scores depending on the domain, which would
be a valuable analysis.
## References
* Blair-Goldensohn et al. (2008) Blair-Goldensohn, S., Neylon, T., Hannan, K., Reis, G. A., Mcdonald, R., Reynar, J., 2008. Building a sentiment summarizer for local service reviews. In: In NLP in the Information Explosion Era, NLPIX2008.
* Blitzer et al. (2007) Blitzer, J., Dredze, M., Pereira, F., 2007. Biographies, bollywood, boomboxes and blenders: Domain adaptation for sentiment classification. In: In ACL. pp. 187–205.
* Cui et al. (2006) Cui, H., Mittal, V., Datar, M., 2006. Comparative experiments on sentiment classification for online product reviews. In: proceedings of the 21st national conference on Artificial intelligence - Volume 2. AAAI’06. AAAI Press, pp. 1265–1270.
* Federico and Cettolo (2007) Federico, M., Cettolo, M., 2007. Efficient handling of n-gram language models for statistical machine translation. In: Proceedings of the Second Workshop on Statistical Machine Translation. StatMT ’07. Association for Computational Linguistics, Stroudsburg, PA, USA, pp. 88–95.
* Heafield (2011) Heafield, K., 2011. Kenlm: faster and smaller language model queries. In: Proceedings of the Sixth Workshop on Statistical Machine Translation. WMT ’11. Association for Computational Linguistics, Stroudsburg, PA, USA, pp. 187–197.
* Hu and Liu (2004) Hu, M., Liu, B., 2004. Mining and summarizing customer reviews. In: Proceedings of the tenth ACM SIGKDD international conference on Knowledge discovery and data mining. KDD ’04. ACM, New York, NY, USA, pp. 168–177.
* Koehn et al. (2007) Koehn, P., Hoang, H., Birch, A., Callison-Burch, C., Federico, M., Bertoldi, N., Cowan, B., Shen, W., Moran, C., Zens, R., Dyer, C., Bojar, O., Constantin, A., Herbst, E., 2007. Moses: open source toolkit for statistical machine translation. In: Proceedings of the 45th Annual Meeting of the ACL on Interactive Poster and Demonstration Sessions. ACL ’07. Association for Computational Linguistics, Stroudsburg, PA, USA, pp. 177–180.
* Mcdonald et al. (2007) Mcdonald, R., Hannan, K., Neylon, T., Wells, M., Reynar, J., 2007. Structured models for fine-to-coarse sentiment analysis. In: Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics.
* Pang et al. (2002) Pang, B., Lee, L., Vaithyanathan, S., 2002. Thumbs up? sentiment classification using machine learning techniques. In: In proceedings of EMNLP. pp. 79–86.
* Raaijmakers et al. (2008) Raaijmakers, S., Truong, K. P., Wilson, T., 2008. Multimodal subjectivity analysis of multiparty conversation. In: EMNLP. pp. 466–474.
* Taboada et al. (2011) Taboada, M., Brooke, J., Tofiloski, M., Voll, K., Stede, M., 2011. Lexicon-based methods for sentiment analysis. Comput. Linguist. 37, 267–307.
* Tyers et al. (2010) Tyers, F. M., Sánchez-Martínez, F., Ortiz-Rojas, S., Forcada, M. L., 2010. Free/open-source resources in the apertium platform for machine translation research and development. The Prague Bulletin of Mathematical Linguistics (93), 67––76, iSSN: 0032-6585.
|
arxiv-papers
| 2013-03-03T01:38:03 |
2024-09-04T02:49:42.312735
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Boyan Bonev, Gema Ram\\'irez-S\\'anchez, Sergio Ortiz Rojas",
"submitter": "Boyan Bonev",
"url": "https://arxiv.org/abs/1303.0446"
}
|
1303.0449
|
Bayesian learning of joint distributions of objects
Anjishnu Banerjee Jared Murray David B. Dunson
Statistical Science, Duke University
###### Abstract
There is increasing interest in broad application areas in defining flexible
joint models for data having a variety of measurement scales, while also
allowing data of complex types, such as functions, images and documents. We
consider a general framework for nonparametric Bayes joint modeling through
mixture models that incorporate dependence across data types through a joint
mixing measure. The mixing measure is assigned a novel infinite tensor
factorization (ITF) prior that allows flexible dependence in cluster
allocation across data types. The ITF prior is formulated as a tensor product
of stick-breaking processes. Focusing on a convenient special case
corresponding to a Parafac factorization, we provide basic theory justifying
the flexibility of the proposed prior and resulting asymptotic properties.
Focusing on ITF mixtures of product kernels, we develop a new Gibbs sampling
algorithm for routine implementation relying on slice sampling. The methods
are compared with alternative joint mixture models based on Dirichlet
processes and related approaches through simulations and real data
applications.
## 1 INTRODUCTION
There has been considerable recent interest in joint modeling of data of
widely disparate types, including not only real numbers, counts and
categorical data but also more complex objects, such as functions, shapes, and
images. We refer to this general problem as mixed domain modeling (MDM), and
major objectives include exploring dependence between the data types, co-
clustering, and prediction. Until recently, the emphasis in the literature was
almost entirely on parametric hierarchical models for joint modeling of mixed
discrete and continuous data without considering more complex object data. The
two main strategies are to rely on underlying Gaussian variable models
(Muthen, 1984) or exponential family models, which incorporate shared latent
variables in models for the different outcomes (Sammel et al., 1997; Dunson,
2000, 2003). Recently, there have been a number of articles using these models
as building blocks in discrete mixture models relying on Dirichlet processes
(DPs) or closely-related variants (Cai et al., 2011; Song et al., 2009; Yang &
Dunson, 2010). DP mixtures for mixed domain modeling were also considered by
Hannah et al. (2011); Shahbaba & Neal (2009); Dunson & Bhattacharya (2010)
among others. Related approaches are increasingly widely-used in broad machine
learning applications, such as for joint modeling of images and captions (Li
et al., 2011), and have rapidly become a standard tool for MDM.
Although such joint Dirichlet process mixture models (DPMs) are quite
flexible, and can accommodate joint modeling with complicated objects such as
functions (Bigelow & Dunson, 2009), they suffer from a key disadvantage in
relying on conditional independence given a single latent cluster index. For
example, as motivated in Dunson (2009, 2010), the DP and related approaches
imply that two subjects $i$ and $i^{\prime}$ are either allocated to the same
cluster ($C_{i}=C_{i^{\prime}}$) globally for all their parameters or are not
clustered. The soft probabilistic clustering of the DP is appealing in leading
to substantial dimensionality reduction, but a single global cluster index
conveys several substantial practical disadvantages. Firstly, to realistically
characterize joint distributions across many variables, it may be necessary
to introduce many clusters, degrading the performance in the absence of large
sample sizes. Secondly, as the DP and the intrinsic Bayes penalty for model
complexity both favor allocation to few clusters, one may over-cluster and
hence obscure important differences across individuals, leading to misleading
inferences and poor predictions. Often, the posterior for the clusters may be
largely driven by certain components of the data, particularly when more data
are available for those components, at the expense of poorly characterizing
components for which less, or more variable, data are available.
To overcome these problems we propose Infinite Tensor Factorization (ITF)
models, which can be viewed as next generation extensions of the DP to
accommodate dependent object type-specific clustering. Instead of relying on a
single unknown cluster index, we propose separate but dependent cluster
indices for each of the data types whose joint distribution is given by a
random probability tensor. We use this to build a general framework for
hierarchical modeling. The other main contribution in this article is to
develop a general extension of blocked slice sampling, which allows for an
efficient and straightforward algorithm for sampling from the posterior
distributions arising with the ITF, with potential application in other
multivariate settings with infinite tensors, without resorting to finite
truncation of the infinitely many possible levels.
## 2 PRELIMINARIES
We start by considering a simple bivariate setting $p=2$ in which data for
subject $i$ consist of $y_{i}=(y_{i1},y_{i2})^{\prime}\in\mathcal{Y}$, with
$\mathcal{Y}=\mathcal{Y}_{1}\otimes\mathcal{Y}_{2}$,
$y_{i1}\in\mathcal{Y}_{1}$, and $y_{i2}\in\mathcal{Y}_{2}$ for $i=1,\ldots,n$.
We desire a joint model in which $y_{i}\sim f$, with $f$ a probability measure
characterizing the joint distribution. In particular, letting
$\mathcal{B}(\mathcal{Y})$ denote an appropriate sigma-algebra of subsets of
$\mathcal{Y}$, $f$ assigns probability $f(B)$ to each
$B\in\mathcal{B}(\mathcal{Y})$. We assume $\mathcal{Y}$ is a measurable Polish
space, as we would like to keep the domains $\mathcal{Y}_{1}$ and
$\mathcal{Y}_{2}$ as general as possible to encompass not only subsets of
Euclidean space and the set of natural numbers but also function spaces that
may arise in modeling curves, surfaces, shapes and images. In many cases, it
is not at all straightforward to define a parametric joint measure, but there
is typically a substantial literature suggesting various choices for the
marginals $y_{i1}\sim f_{1}$ and $y_{i2}\sim f_{2}$ separately.
If we only had data for the $j$th variable, $y_{ij}$, then one possible
strategy is to use a mixture model in which
$\displaystyle
f_{j}(B)=\int_{\Theta_{j}}\mathcal{K}_{j}(B;\theta_{j})dP_{j}(\theta_{j}),\quad
B\in\mathcal{B}(\mathcal{Y}_{j}),$ (1)
where $\mathcal{K}_{j}(\cdot;\theta_{j})$ is a probability measure on
$\\{\mathcal{Y}_{j},\mathcal{B}(\mathcal{Y}_{j})\\}$ indexed by parameters
$\theta_{j}\in\Theta_{j}$, $\mathcal{K}_{j}$ obeys a parametric law (e.g.,
Gaussian), and $P_{j}$ is a probability measure over
$\\{\Theta_{j},\mathcal{B}(\Theta_{j})\\}$. A nonparametric Bayesian approach
is obtained by treating $P_{j}$ as a random probability measure and choosing
an appropriate prior. By far the most common choice is the Dirichlet process
(Ferguson, 1973), which lets $P_{j}\sim DP(\alpha P_{0j})$. Under the
Sethuraman (1994) stick-breaking representation, one then obtains,
$\displaystyle
f_{j}(B)=\sum_{h=1}^{\infty}\pi_{h}\mathcal{K}_{j}(B;\theta_{h}^{*}),\mbox{with}$
$\displaystyle\pi_{h}=V_{h}\prod_{l<h}(1-V_{l}),\quad\theta_{h}^{*}\sim
P_{0j},$ (2)
and $V_{h}\sim\mbox{Be}(1,\alpha)$, so that $f_{j}$ can be expressed as a
discrete mixture. This discrete mixture structure implies the following simple
hierarchical representation, which is crucially used for efficient
computation:
$\displaystyle
y_{ij}\sim\mathcal{K}_{j}(\theta_{C_{i}}^{*}),\quad\theta_{h}^{*}\sim
P_{0j},\quad\mbox{pr}(C_{i}=h)=\pi_{h},$ (3)
where $C_{i}$ is a cluster index for subject $i$. The great success of this
model is largely attributable to the divide and conquer structure in which one
allocates subjects to clusters probabilistically, and then can treat the
observations within each cluster as separate instantiations of a parametric
model. In addition, there is a literature showing appealing properties, such
as minimax optimal adaptive rates of convergence for DPMs of Gaussians (Shen &
Ghosal, 2011; Tokdar, 2011).
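As a concrete illustration of (2)-(3), a minimal sketch simulating from a truncated DP mixture of Gaussians (the truncation level, concentration and base measure are arbitrary choices for the example, not taken from the text):

```python
import numpy as np

def dp_stick_breaking(alpha, truncation, rng):
    """Truncated stick-breaking weights pi_h = V_h * prod_{l<h} (1 - V_l)."""
    v = rng.beta(1.0, alpha, size=truncation)
    pi = v * np.concatenate(([1.0], np.cumprod(1.0 - v)[:-1]))
    return pi / pi.sum()                          # renormalize the truncated weights

rng = np.random.default_rng(1)
alpha, H, n = 1.0, 50, 200
pi = dp_stick_breaking(alpha, H, rng)
theta = rng.normal(0.0, 3.0, size=H)              # atoms theta_h* ~ P_0 = N(0, 3^2)
c = rng.choice(H, size=n, p=pi)                   # cluster indices C_i
y = rng.normal(theta[c], 1.0)                     # y_i | C_i ~ N(theta_{C_i}, 1)
print("occupied clusters:", len(np.unique(c)))
```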
The standard approach to adapt expression (1) to accommodate mixed domain data
is to simply let $f(B)=\int_{\Theta}\mathcal{K}(B;\theta)dP(\theta)$, for all
$B\in\mathcal{B}(\mathcal{Y})$, where $\mathcal{K}(\cdot;\theta)$ is an
appropriate joint probability measure over
$\\{\mathcal{Y},\mathcal{B}(\mathcal{Y})\\}$ obeying a parametric law.
Choosing such a joint law is straightforward in simple cases. For example,
Hannah et al. (2011) rely on a joint exponential family distribution
formulated via a sequence of generalized linear models. However, in general
settings, explicitly characterizing dependence within
$\mathcal{K}(\cdot;\theta)$ is not at all straightforward and it becomes
convenient to rely on a product measure (Dunson & Bhattacharya, 2010):
$\displaystyle\mathcal{K}(B;\theta)=\prod_{j}\mathcal{K}(B_{j};\theta_{j}),\quad
B=\bigotimes_{j=1}^{p}B_{j},\quad B_{j}\in\mathcal{B}(\mathcal{Y}_{j}).$ (4)
If we then choose $P\sim DP(\alpha P_{0})$ with
$P_{0}=\bigotimes_{j=1}^{p}P_{0j}$, we obtain an identical hierarchical
specification to (3), but with the elements of $y_{i}=\\{y_{ij}\\}$
conditionally independent given the cluster allocation index $C_{i}$.
As mentioned in §$1$, this conditional independence assumption given a single
latent class variable is the nemesis of the joint DPM approach. We consider
more generally a multivariate
$C_{i}=(C_{i1},\ldots,C_{ip})^{T}\in\\{1,\ldots,\infty\\}^{p}$, with separate
but dependent indices across the disparate data types. We let,
$\displaystyle\mbox{pr}(C_{i1}=h_{1},\ldots,C_{ip}=h_{p})=\pi_{h_{1}\cdots
h_{p}},$ $\displaystyle\mbox{with}\quad h_{j}=1,\ldots,\infty,j=1,\ldots,p,$
(5)
where $\pi=\\{\pi_{h_{1}\cdots h_{p}}\\}\in\Pi_{p}^{\infty}$ is an infinite
$p$-way probability tensor characterizing the joint probability mass function
of the multivariate cluster indices. It remains to specify the prior for the
probability tensor $\pi$, which is considered next in §3.
## 3 PROBABILISTIC TENSOR FACTORIZATIONS
### 3.1 PARAFAC Extension
Suppose that $C_{ij}\in\\{1,\ldots,d_{j}\\}$, with $d_{j}$ the number of
possible levels of the $j$th cluster index. Then, assuming that $C_{i}$ are
observed unordered categorical variables, Dunson & Xing (2009) proposed a
probabilistic Parafac factorization of the tensor $\pi$:
$\displaystyle\pi=\sum_{h=1}^{k}\lambda_{h}\psi_{h}^{(1)}\otimes\cdots\otimes\psi_{h}^{(p)},$
(6)
where $\lambda=\\{\lambda_{h}\\}$ follows a stick-breaking process,
$\psi_{h}^{(j)}=(\psi_{h1}^{(j)},\ldots,\psi_{hd_{j}}^{(j)})^{T}$ is a
probability vector specific to component $h$ and outcome $j$, and $\otimes$
denotes the outer product.
We focus primarily on generalizations of the Parafac factorization to the case
in which $C_{i}$ is unobserved and can take infinitely-many different levels.
We let,
$\displaystyle\pi_{c_{1}\cdots c_{p}}$ $\displaystyle=$
$\displaystyle\mbox{pr}(C_{1}=c_{1},\ldots,C_{p}=c_{p})=\sum_{h=1}^{\infty}\lambda_{h}\prod_{j=1}^{p}\psi_{hc_{j}}^{(j)}$
$\displaystyle\lambda_{h}$ $\displaystyle=$ $\displaystyle
V_{h}\prod_{l<h}(1-V_{l}),\quad V_{h}\sim\mbox{Be}(1,\alpha)$
$\displaystyle\psi_{hr}^{(j)}$ $\displaystyle=$ $\displaystyle
U_{hr}^{(j)}\prod_{s<r}(1-U_{hs}^{(j)}),\quad
U_{hr}^{(j)}\sim\mbox{Be}(1,\beta_{j}),$ (7)
A more compact notation for this factorization of the infinite probability
tensor $\pi$ is,
$\displaystyle\pi=\sum_{h=1}^{\infty}\lambda_{h}\bigotimes_{j=1}^{p}\psi_{h}^{(j)},$
(8) $\displaystyle\lambda\sim\mbox{Stick}(\alpha),\
\psi_{h}^{(j)}\sim\mbox{Stick}(\beta_{j}),$ (9)
which takes the form of a stick-breaking mixture of outer products of stick-
breaking processes. This form is carefully chosen so that the elements of
$\pi$ are stochastically larger in those cells having the smallest indices,
with rapid decreases towards zero as one moves away from the upper right
corner of the tensor.
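A minimal sketch of drawing a (truncated) realization of $\pi$ from (7)-(9); the truncation levels and hyperparameters are purely illustrative, and the sampler of §4 never needs such an explicit truncation:

```python
import numpy as np

def stick_breaking(concentration, truncation, rng):
    """Truncated stick-breaking weights of a Stick(concentration) draw."""
    v = rng.beta(1.0, concentration, size=truncation)
    return v * np.concatenate(([1.0], np.cumprod(1.0 - v)[:-1]))

def itf_tensor(alpha, betas, truncations, rng, n_components=100):
    """pi = sum_h lambda_h * psi_h^(1) x ... x psi_h^(p)  (outer products)."""
    lam = stick_breaking(alpha, n_components, rng)           # lambda ~ Stick(alpha)
    pi = np.zeros(truncations)
    for h in range(n_components):
        outer = np.array([1.0])
        for beta_j, d_j in zip(betas, truncations):
            psi_hj = stick_breaking(beta_j, d_j, rng)        # psi_h^(j) ~ Stick(beta_j)
            outer = np.multiply.outer(outer, psi_hj)
        pi += lam[h] * outer.reshape(truncations)
    return pi

rng = np.random.default_rng(2)
pi = itf_tensor(alpha=1.0, betas=[1.0, 1.0], truncations=(20, 20), rng=rng)
print("total mass of the truncated tensor:", pi.sum())       # close to 1
```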
It can be shown that tensor realizations from the ITF distribution are valid
in the sense that they sum to $1$ with probability $1$. We can be flexible in
terms of where exactly these cluster indices occur in a hierarchical Bayesian
model. Next, in §3.2, we formulate a generic mixture model for MDM, where the
ITF is used to characterize the cluster indices of the parameters governing the
distributions of the disparate data types.
### 3.2 Infinite Tensor Factorization Mixture
Assume that for each individual $i$ we have a data ensemble
$(y_{i1},\ldots,y_{ip})\in\mathcal{Y}$ where
$\mathcal{Y}=\bigotimes_{j=1}^{p}\mathcal{Y}_{j}$. Let
$\mathcal{B}(\mathcal{Y})$ be the sigma algebra generated by the product sigma
algebra
$\mathcal{B}(\mathcal{Y}_{1})\times\cdots\times\mathcal{B}(\mathcal{Y}_{p})$.
Consider any Borel set
$B=\bigotimes_{j=1}^{p}B_{j}\in\mathcal{B}(\mathcal{Y})$. Given cluster
indices $(C_{i1}=c_{i1},\ldots,C_{ip}=c_{ip})$, we assume that the ensemble
components are independent with
$\displaystyle f(y_{i1}\in B_{1},\ldots,y_{ip}\in
B_{p}\,|\,C_{i1}=h_{1},\ldots,C_{ip}=h_{p})$
$\displaystyle=\prod_{j=1}^{p}\mathcal{K}_{j}(B_{j};\theta_{j,h_{j}}).$ (10)
$\mathcal{K}_{j}(\cdot;\theta_{j,h})$ is an appropriate probability measure on
$\\{\mathcal{Y}_{j},\mathcal{B}(\mathcal{Y}_{j})\\}$ as in equation (1).
Marginalizing out the cluster indices, we obtain
$\displaystyle f(y_{i1}\in B_{1},\ldots,y_{ip}\in B_{p})$
$\displaystyle=\sum_{h_{1}=1}^{\infty}\cdots\sum_{h_{p}=1}^{\infty}\pi_{h_{1},\ldots,h_{p}}\prod_{j=1}^{p}\mathcal{K}_{j}(B_{j};\theta_{j,h_{j}}),$
(11)
where $\pi_{h_{1},\ldots,h_{p}}=\mbox{pr}(C_{i1}=h_{1},\ldots,C_{ip}=h_{p})$.
We let $\pi\sim\mbox{{ITF}}(\alpha,\beta)$ and we call the resulting mixture
model an infinite tensor factorization mixture,
$f\sim\mbox{{ITM}}(\alpha,\beta)$. To complete the model specification, we let
$\theta_{j,h_{j}}\sim P_{0j}$ independently as in (2).
The model $y_{i}\sim f$, $f\sim\mbox{{ITM}}(\alpha,\beta)$, can be
equivalently expressed in hierarchical form as
$\displaystyle y_{ij}$ $\displaystyle\sim$
$\displaystyle\mathcal{K}_{j}(\theta_{ij}^{*}),\
\theta_{i}^{*}\sim P=\sum_{h_{1}=1}^{\infty}\cdots\sum_{h_{p}=1}^{\infty}\pi_{h_{1},\ldots,h_{p}}\prod_{j=1}^{p}\delta_{\theta_{j,h_{j}}},$
$\displaystyle\pi$ $\displaystyle\sim$
$\displaystyle\mbox{{ITF}}(\alpha,\beta),\quad\theta_{j,h_{j}}\sim P_{0j},$
(12)
Here, $P$ is a joint mixing measure across the different data types and is
given a infinite tensor process prior,
$P\sim\mbox{{ITP}}(\alpha,\beta,\bigotimes_{j=1}^{p}P_{0j})$. Marginalizing
out the random measure $P$, we obtain the same form as in (11). The proposed
infinite tensor process prior provides a much more flexible generalization of
existing priors for discrete random measures, such as the Dirichlet process or
the Pitman-Yor process.
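As a concrete bivariate illustration of (10)-(12), a minimal sketch simulating from an ITM with a Gaussian kernel for a continuous component and a Poisson kernel for a count component (kernels, hyperparameters and truncation levels are arbitrary choices for the example):

```python
import numpy as np

rng = np.random.default_rng(3)

def stick(conc, k):
    v = rng.beta(1.0, conc, size=k)
    return v * np.concatenate(([1.0], np.cumprod(1.0 - v)[:-1]))

# Truncated ITF prior over the joint cluster indices (C_i1, C_i2).
H, d1, d2 = 50, 15, 15
lam = stick(1.0, H)                                   # lambda ~ Stick(alpha)
psi1 = np.array([stick(1.0, d1) for _ in range(H)])   # psi_h^(1) ~ Stick(beta_1)
psi2 = np.array([stick(1.0, d2) for _ in range(H)])   # psi_h^(2) ~ Stick(beta_2)
pi = np.einsum('h,hi,hj->ij', lam, psi1, psi2)        # pi_{c1 c2}
pi /= pi.sum()                                        # renormalize the truncation

# Atoms: Gaussian means for component 1, Poisson rates for component 2.
mu = rng.normal(0.0, 3.0, size=d1)                    # theta_{1,h} ~ P_01
rate = rng.gamma(2.0, 2.0, size=d2)                   # theta_{2,h} ~ P_02

# Draw joint cluster indices, then the mixed-type observations.
n = 500
flat = rng.choice(d1 * d2, size=n, p=pi.ravel())
c1, c2 = np.unravel_index(flat, (d1, d2))
y1 = rng.normal(mu[c1], 1.0)                          # continuous component
y2 = rng.poisson(rate[c2])                            # count component
print("correlation of the cluster indices:", np.corrcoef(c1, c2)[0, 1])
```

The dependence between `y1` and `y2` here is induced entirely through the dependent cluster indices, exactly as in (10).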
## 4 POSTERIOR INFERENCE
### 4.1 Markov Chain Monte Carlo Sampling
We propose a novel algorithm for efficient exact MCMC posterior inference in
the ITM model, utilizing blocked and partially collapsed steps. We adapt ideas
from Walker (2007); Papaspiliopoulos & Roberts (2008) to derive slice sampling
steps with label switching moves, entirely avoiding truncation approximations.
Begin by defining the augmented joint likelihood for an observation $y_{i}$,
cluster labels $c_{i}=(c_{i0},c_{i1},\dots,c_{ip})$ and slice variables
$u_{i}=(u_{i0},u_{i1},\dots,u_{ip})$ as
$\displaystyle p(y_{i},c_{i},u_{i}\mid\lambda,\Psi,\Theta)$
$\displaystyle=\bm{1}\left({u_{i0}<\lambda_{c_{i0}}}\right)\prod_{j=1}^{p}{\mathcal{K}_{j}}(y_{ij};\theta^{(j)}_{c_{ij}})\bm{1}\left({u_{ij}<\psi_{c_{i0}c_{ij}}^{(j)}}\right)$
(13)
It is straightforward to verify that on marginalizing $u_{i}$ the model is
unchanged, but including $u_{i}$ induces full conditional distributions for
the cluster indices with finite support. Let
$m_{0h}=\sum_{i=1}^{n}\bm{1}\left({c_{i0}=h}\right)$ and
$\mathcal{D}_{0}=\\{h:m_{0h}>0\\}$. Similarly define
$m_{jhk}=\sum_{i=1}^{n}\bm{1}\left({c_{i0}=h}\right)\bm{1}\left({c_{ij}=k}\right)$
and $\mathcal{D}_{j}=\\{k:\sum_{h=1}^{\infty}m_{jhk}>0\\}$, and let
$k^{*}_{j}=\max(\mathcal{D}_{j})$ for $0\leq j\leq p$. Define
$\mathcal{U}_{0}=\\{u_{i0}:1\leq i\leq n\\}$, $\mathcal{C}_{0}=\\{c_{i0}:1\leq
i\leq n\\}$, $\mathcal{U}_{1}=\\{u_{ij}:1\leq i\leq n,1\leq j\leq p\\}$ and
$\mathcal{C}_{1}=\\{c_{ij}:1\leq i\leq n,1\leq j\leq p\\}$. The superscript
$(-i)$ denotes that the quantity is computed excluding observation $i$.
1. 1.
Block update $(\mathcal{U}_{0},\lambda,\alpha)$
1. (a)
Sample $(\alpha\mid\mathcal{C}_{0})$. Standard results (Antoniak, 1974) give
$p(\alpha\mid\mathcal{C}_{0})\propto
p(\alpha)\alpha^{\tilde{c}}\frac{\Gamma(\alpha)}{\Gamma(\alpha+n)}$
for $\tilde{c}=|\mathcal{D}_{0}|$ which can be sampled via Metropolis-Hastings
or using auxiliary variables when $p(\alpha)$ is a mixture of Gamma
distributions (Escobar & West, 1995).
2. (b)
Sample $(\lambda\mid\alpha,\mathcal{C}_{0})$ by drawing $V_{h}\sim
Beta(1+m_{0h},\alpha+\sum_{l=h+1}^{k_{0}^{*}}m_{0l})$ for $1\leq h\leq
k_{0}^{*}$ and setting $\lambda_{h}=V_{h}\prod_{l<h}(1-V_{l})$
3. (c)
Label switching moves:
1. i.
From $\mathcal{D}_{0}$ choose two elements $h_{1},h_{2}$ uniformly at random
and change their labels with probability
$\min(1,(\lambda_{h_{1}}/\lambda_{h_{2}})^{m_{0h_{2}}-m_{0h_{1}}})$
2. ii.
Sample a label $h$ uniformly from $1,2,\dots,k_{0}^{*}$ and propose to swap
the labels $h,h+1$ and corresponding stick breaking weights $V_{h},V_{h+1}$.
Accept with probability $\min(1,a)$ where
$a=\left(\frac{k_{0}^{*}}{k_{0}^{*}+1}\right)^{\bm{1}\left({h=k^{*}_{0}}\right)}\frac{\left(1-V_{h}\right)^{m_{0(h+1)}}}{\left(1-V_{h+1}\right)^{m_{0h}}}$
4. (d)
Sample $(u_{i0}|c_{i0},\lambda)\sim U(0,\lambda_{c_{i0}})$ independently for
$1\leq i\leq n$
2. 2.
Update $\mathcal{C}_{0}$. From (13) the relevant probabilities are
$\displaystyle Pr(c_{i0}=h|u_{i},c_{i},\Psi,\lambda)$
$\displaystyle\propto\bm{1}\left({u_{i0}<\lambda_{h}}\right)\prod_{j=1}^{p}\bm{1}\left({u_{ij}<\psi_{hc_{ij}}^{(j)}}\right)$
(14)
However, it is possible to obtain more efficient updates through _partial
collapsing_ , which allows us to integrate over the lower level slice
variables and $\Psi$ instead of conditioning on them. Then we have
$\displaystyle Pr$ $\displaystyle(c_{i0}=k\mid
u_{i0},\mathcal{C}_{1},\mathcal{C}_{0}^{(-i)},\lambda)\propto\bm{1}\left({u_{i0}<\lambda_{k}}\right)$
$\displaystyle\times\prod_{j=1}^{p}\frac{\left(1+m^{(-i)}_{jkc_{ij}}\right)\prod_{l<c_{ij}}\left(\beta_{k}^{(j)}+\sum_{s>l}m^{(-i)}_{jks}\right)}{\prod_{l\leq
c_{ij}}\left(1+\beta_{k}^{(j)}+\sum_{s\geq l}m^{(-i)}_{jks}\right)}$
(15)
To determine the support of (15) we need to ensure that
$u^{*}_{0}=\min\\{u_{i0}:1\leq i\leq n\\}$ satisfies
$u_{0}^{*}>1-\sum_{l=1}^{k_{0}^{*}}\lambda_{l}.$ If
$\sum_{l=1}^{k_{0}^{*}}\lambda_{l}<1-u^{*}_{0}$ then draw additional stick
breaking weights $V_{k_{0}^{*}+1},\dots,V_{k_{0}^{*}+d}$ independently from
$Beta(1,\alpha)$ until $\sum_{l=1}^{k_{0}^{*}+d}\lambda_{l}>1-u^{*}_{0}$,
ensuring that
$\sum_{l=k_{0}^{*}+d+1}^{\infty}\bm{1}\left({u_{i0}<\lambda_{l}}\right)=0$ for
all $1\leq i\leq n$. Then the support of (15) is contained within
$1,2,\dots,k_{0}^{*}+d$ and we can compute the normalizing constant exactly (a
minimal sketch of this stick-extension step is given after the algorithm).
3. 3.
Block update $(\mathcal{U}_{1},\Psi,\beta)$:
1. (a)
Update $(\beta_{r}^{(j)}\mid\\{c_{ij}:c_{i0}=r\\},\mathcal{C}_{0})$ for $1\leq
j\leq p$, $1\leq r\leq k_{0}^{*}$. If the concentration parameter is shared
across global clusters (that is, $\beta_{r}^{(j)}\equiv\beta^{(j)}$) then a
straightforward conditional independence argument gives
$\displaystyle p(\beta^{(j)}\mid\\{c_{ij}:c_{i0}=r\\},\mathcal{C}_{0})$
$\displaystyle\propto
p(\beta^{(j)})\prod_{r\in\mathcal{D}_{0}}\left(\beta^{(j)}\right)^{\tilde{c}_{jr}}\frac{\Gamma(\beta^{(j)})}{\Gamma(\beta^{(j)}+n_{r})}$
(16)
where $n_{r}=|\\{i:c_{i0}=r\\}|$ and $\tilde{c}_{jr}=|\\{h:m_{jrh}>0\\}|$.
Note that terms with $n_{r}=1$ (corresponding to top-level singleton
components) do not contribute, since
$\beta^{(j)}{\Gamma(\beta^{(j)})}={\Gamma(\beta^{(j)}+1)}$. The updating
scheme of Escobar & West (1995) is simple to adapt here using
$|\mathcal{D}_{0}|$ independent auxiliary variables.
2. (b)
For $r\in\mathcal{D}_{0}$ update
$(\psi^{(j)}_{r}\mid\mathcal{C}_{0},\mathcal{C}_{1},\beta_{r}^{(j)})$ by
drawing $U^{(j)}_{rh}\sim
Beta(1+m_{jrh},\beta_{r}^{(j)}+\sum_{l=h+1}^{k_{j}^{*}}m_{jrl})$ for $1\leq
h\leq k^{*}_{j}$
3. (c)
Label switching moves: For $1\leq j\leq p$,
1. i.
From $\mathcal{D}_{j}$ choose two elements $h_{1},h_{2}$ uniformly at random
and change their labels with probability $\min(1,a)$ where
$a=\prod_{h_{0}\in\mathcal{D}_{0}}\left(\frac{\psi^{(j)}_{h_{0}h_{1}}}{\psi^{(j)}_{h_{0}h_{2}}}\right)^{m_{jh_{0}h_{2}}-m_{jh_{0}h_{1}}}$
2. ii.
Sample a label $h$ uniformly from $1,2,\dots,k^{*}_{j}$ and propose to swap
the labels $h,h+1$ and corresponding stick breaking weights. Accept with
probability $\min(1,a)$ where
$\displaystyle
a=\left(\frac{k_{j}^{*}}{k_{j}^{*}+1}\right)^{\bm{1}\left({h=k^{*}_{j}}\right)}$
$\displaystyle\times\prod_{h_{0}\in\mathcal{D}_{0}}\frac{\left(1-U^{(j)}_{h_{0}h}\right)^{m_{jh_{0}(h+1)}}}{\left(1-U^{(j)}_{h_{0}(h+1)}\right)^{m_{jh_{0}h}}}$
(17)
4. (d)
Sample $(u_{ij}|c_{i},\Psi)\sim U(0,\psi^{(j)}_{c_{i0}c_{ij}})$ independently
for $1\leq j\leq p$, $1\leq i\leq n$.
4. 4.
Update $\mathcal{C}_{j}$ for $1\leq j\leq p$ independently. We have
$\displaystyle Pr(c_{ij}=k\mid y,\Theta,u_{ij},c_{i0},\Psi)$
$\displaystyle\propto{\mathcal{K}_{j}}(y_{ij};\theta^{(j)}_{k})\bm{1}\left({u_{ij}<\psi_{c_{i0}k}^{(j)}}\right)$
(18)
As in step 2 we determine the support of the full conditional distribution as
follows: Let $u_{j}^{*}=\min\\{u_{ij}:1\leq i\leq n\\}$. For all
$r\in\mathcal{D}_{0}$, if $\sum_{h=1}^{k_{j}^{*}}\psi_{rh}^{(j)}<1-u_{j}^{*}$
then extend the stick breaking measure $\psi^{(j)}_{r}$ by drawing $d_{r}$ new
stick breaking weights from the prior so that
$\sum_{h=1}^{k_{j}^{*}+d_{r}}\psi_{rh}^{(j)}>1-u_{j}^{*}$. Draw
$\theta^{(j)}_{k_{j}^{*}+1},\dots,\theta^{(j)}_{k_{j}^{*}+d}\sim
p(\theta^{(j)})$ independently (where
$d=\max\\{d_{r}:r\in\mathcal{D}_{j}\\}$). Then update $c_{ij}$ from
$\displaystyle Pr(c_{ij}=k\mid y,\Theta,u_{ij},c_{i0},\Psi)$
$\displaystyle=\frac{{\mathcal{K}_{j}}(y_{ij};\theta^{(j)}_{k})\bm{1}\left({u_{ij}<\psi_{c_{i0}k}^{(j)}}\right)}{\sum_{h=1}^{k_{j}^{*}+d}{\mathcal{K}_{j}}(y_{ij};\theta^{(j)}_{h})\bm{1}\left({u_{ij}<\psi_{c_{i0}h}^{(j)}}\right)}$
(19)
5. 5.
Update $(\Theta|-)$ by drawing from
$p(\theta_{h}^{(j)}\mid y,\mathcal{C}_{j})\propto
p(\theta_{h}^{(j)})\prod_{\\{i:c_{ij}=h\\}}{\mathcal{K}_{j}}(y_{ij};\theta^{(j)}_{h})$
for each $1\leq j\leq p$ and $1\leq h\leq k^{*}_{j}$
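The stick-extension device used in steps 2 and 4 above is what keeps the sampler exact despite the infinitely many levels. A minimal sketch of that step (function and variable names are illustrative, not from the paper), storing the weights through the underlying Beta draws $V_{h}$:

```python
import numpy as np

def extend_sticks(v, alpha, u_min, rng):
    """Extend stick-breaking draws V_1, V_2, ... until the instantiated weights
    cover every atom the slice variables can reach, i.e. until
    sum_h lambda_h > 1 - u_min, where lambda_h = V_h * prod_{l<h} (1 - V_l)."""
    v = list(v)
    remaining = np.prod([1.0 - vh for vh in v])   # mass not yet instantiated
    while remaining > u_min:                      # i.e. while sum(lambda) < 1 - u_min
        v_new = rng.beta(1.0, alpha)
        v.append(v_new)
        remaining *= 1.0 - v_new
    v = np.asarray(v)
    lam = v * np.concatenate(([1.0], np.cumprod(1.0 - v)[:-1]))
    return v, lam

rng = np.random.default_rng(4)
v0 = rng.beta(1.0, 1.0, size=3)                   # a few existing sticks
u_min = 0.01                                      # smallest slice variable u_0^*
v, lam = extend_sticks(v0, alpha=1.0, u_min=u_min, rng=rng)
print(len(lam), "weights instantiated, covering mass", lam.sum())
```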
### 4.2 Inference
Given samples from the MCMC scheme above we can estimate the predictive
distribution as
$\displaystyle\hat{f}(y_{n+1}\mid
y_{n})=\frac{1}{T}\sum_{t=1}^{T}\sum_{h_{0}=1}^{k^{*}_{0}}\sum_{h_{1}=1}^{k^{*}_{1}}\cdots\sum_{h_{p}=1}^{k^{*}_{p}}\lambda^{(t)}_{h_{0}}$
$\displaystyle\times\prod_{j=1}^{p}\psi^{(t)}_{h_{0}h_{j}}{\mathcal{K}_{j}}\left(y_{(n+1)j};\theta^{(j)(t)}_{h_{j}}\right)$
(20)
Each of the inner sums in (20) is a truncation approximation, but it can be
made arbitrarily precise by extending the stick breaking measures with draws
from the prior and drawing corresponding atoms from $p(\theta^{(j)})$. In
practice this usually isn’t necessary as any error in the approximation is
small relative to Monte Carlo error.
The other common inferential question of interest in the MDM setting is the
dependence between components, for example testing whether components $j1$ and
$j2$ are independent of each other. As already noted, the dependence between
the components comes in through the dependence between the cluster allocations;
therefore, testing for independence between $j1$ and $j2$ is equivalent to
testing for independence between their latent cluster indicators $C_{j1}$ and
$C_{j2}$. Such a test can be constructed in terms of the divergence between
the joint and the product of the marginal posterior distributions of $C_{j1}$
and $C_{j2}$. The Monte Carlo estimate of this Kullback-Leibler divergence is
given as,
$\displaystyle I$
$\displaystyle(j1,j2)=\frac{1}{T}\sum_{t=1}^{T}\sum_{h_{j1}=1}^{k^{*}_{j1}}\sum_{h_{j2}=1}^{k^{*}_{j2}}\left(\sum_{h_{0}=1}^{k^{*}_{0}}\lambda^{(t)}_{h_{0}}\psi^{(t)}_{h_{0}h_{j1}}\psi^{(t)}_{h_{0}h_{j2}}\right)$
$\displaystyle\times\log\left(\frac{\sum_{h_{0}=1}^{k^{*}_{0}}\lambda^{(t)}_{h_{0}}\psi^{(t)}_{h_{0}h_{j1}}\psi^{(t)}_{h_{0}h_{j2}}}{\left[\sum_{h_{0}=1}^{k^{*}_{0}}\lambda^{(t)}_{h_{0}}\psi^{(t)}_{h_{0}h_{j1}}\right]\left[\sum_{h_{0}=1}^{k^{*}_{0}}\lambda^{(t)}_{h_{0}}\psi^{(t)}_{h_{0}h_{j2}}\right]}\right)$
(21)
Under independence, the divergence should be $0$. Analogous divergences can be
considered for testing other, more general dependencies, such as $3$-way or
$4$-way independence.
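A minimal sketch of evaluating the summand of (21) for a single saved iteration $t$, where `lam`, `psi1` and `psi2` stand for $\lambda^{(t)}$ and the matrices $\psi^{(t)}_{h_{0}h_{j1}}$ and $\psi^{(t)}_{h_{0}h_{j2}}$ (averaging over iterations is omitted for brevity):

```python
import numpy as np

def pairwise_dependence(lam, psi1, psi2, eps=1e-12):
    """KL divergence between the joint distribution of (C_j1, C_j2) implied by
    (lam, psi1, psi2) and the product of its marginals."""
    joint = np.einsum('h,hi,hj->ij', lam, psi1, psi2)   # pr(C_j1 = i, C_j2 = j)
    p1 = joint.sum(axis=1)                              # marginal of C_j1
    p2 = joint.sum(axis=0)                              # marginal of C_j2
    return np.sum(joint * np.log((joint + eps) / (np.outer(p1, p2) + eps)))

rng = np.random.default_rng(5)
H, d = 30, 10
lam = rng.dirichlet(np.ones(H))                         # stand-in for lambda^(t)
psi_varying = rng.dirichlet(np.ones(d), size=H)         # psi depends on h_0: dependence
psi_constant1 = np.tile(rng.dirichlet(np.ones(d)), (H, 1))
psi_constant2 = np.tile(rng.dirichlet(np.ones(d)), (H, 1))
print("dependent case:  ", pairwise_dependence(lam, psi_varying, psi_varying))
print("independent case:", pairwise_dependence(lam, psi_constant1, psi_constant2))
```

When the $\psi$ vectors do not vary with $h_{0}$ the joint factorizes and the estimate is (numerically) zero, matching the discussion above.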
## 5 EXPERIMENTS
Our approach can be used for two different objectives in the context of mixed
domain data - for prediction and for inference on the dependence structure
between different data types. We outline results of experiments with both
simulated and real data that show the performance of our approach with respect
to both the objectives.
### 5.1 Simulated Data Examples
To the best of our knowledge, there is no standard model to jointly predict
for mixed domain data as well as evaluate the dependence structure, so as a
competitor, we use a joint DPM. To keep the evaluations fair, we use two
scenarios. In the first the ground truth is close to that of the joint DPM, in
the sense that all the components of the mixed data have the same cluster
structure. The other simulated experiment considers the case when the ground
truth is close to the ITF, where different components of the mixed data
ensemble have their own cluster structure but clustering is dependent. The
goal here in each of the scenarios is to compare joint DPM vs ITF in terms of
recovery of dependence structure and predictive accuracy.
For scenario 1, we consider a set of 1,000 individuals from whom an ensemble
has been collected comprising T, a time series; R, a multivariate real-valued
response ($\in\Re^{4}$); and C1, C2, C3, three different categorical variables,
to emulate the type of data collected from patients in cancer studies and other
medical evaluations. For the purposes of scenario 1, we simulate T, R, C1, C2,
C3 each from a two-component mixture. For example, R is simulated from a
two-component mixture of multivariate normals with different means, T is
simulated from a mixture of two autoregressive kernels, and each of the
categorical variables from a mixture of two multinomial distributions. If we
label the clusters as $1$ and $2$, for each simulation either all of the
ensemble (T,R,C1,C2,C3) comes from $1$ or all of it comes from $2$.
simulation we randomly hold out R in 50 individuals, C1, C2 in 10 each, for
the purposes of measuring prediction accuracy. For the categorical variables
prediction accuracy is considered with a $0-1$ loss function and is expressed
as a percent misclassification rate. For the multivariate real variable R, we
consider squared error loss and accuracy is expressed as relative predictive
error. We also evaluate for some of the pairs their dependence via estimated
mutual information.
For scenario 2, the same set-up as in scenario 1 is used, except for the
cluster structure of the ensemble. Now simulations are done such that T falls
into three clusters and this is dependent on R and C1. C2 and C3 depend on
each other and are simulated from two clusters each but their clustering is
independent of the other variables in the ensemble. We measure prediction
accuracy using a hold out set of the same size as in scenario 1 and also
evaluate the dependence structure from the ITF model.
In each case, we take 100,000 iterations of the MCMC scheme with the first few
thousand discarded as burn-in. The prediction errors are reported in the top
parts of Tables 1 and 2, and the recovered dependence structure is summarized
in their bottom parts. In scenario 1, the prediction accuracies of ITF and DPM
are comparable, with DPM performing marginally better in a couple of cases.
Note that the dependence structure recovered by the ITF is exactly accurate,
which shows that the ITF can reduce to joint co-clustering when that is the
truth. In scenario 2, however, there is a significant improvement in predictive
accuracy from using the ITF over the DPM. In fact the predictions from the DPM
for the categorical variables are close to noise. The dependence structure
recovered by the ITF closely reflects the truth, in contrast to that from the
DPM, which by virtue of its construction predicts that every pair is dependent.
### 5.2 Real Data Examples
For generic real mixed domain data the dependence structure is wholly unknown.
To evaluate how well the ITF does in capturing pairwise dependencies, we first
consider a network example in which recovering dependencies is of principal
interest and prediction is not relevant. We consider data comprising 105
political blogs (Adamic & Glance, 2005), where the edges of the graph are the
links between websites. Each blog is labeled with its ideology, and we also
have the source(s) which were used to determine this label. Our model includes
the network, the ideology label, and binary indicators for 7 labeling sources
(including “manually labeled”, which is thought to be the most subject to
labeling errors). We assume that ideology impacts
links through cluster assignment only, which is a reasonable assumption here.
We collect 100,000 MCMC iterations after a short burn-in and save the iterate
with the largest complete-data likelihood for exploratory purposes.
Fig. 2 shows the network structure, with nodes colored by ideology. It is
immediately clear that there is significant clustering, apparently driven
largely by ideology, but that ideology alone does not account for all the
structure present in the graph. A joint DPM approach would allow for only one
type of clustering and would prevent us from exploring this additional structure.
The recovered clustering in fig. 2 reveals a number of interesting structural
properties of the graph; for example, we see a tight cluster of conservative
blogs which have high in- and out- degrees but do not link to one another
(green) and a partitioning of the liberal blogs into a tightly connected
component (purple) and a periphery component with low degree (blue). The
conservative blogs do not exhibit the same level of assortative mixing
(propensity to link within a cluster) as the liberal blogs do, especially
within the purple component.
To get a sense for how stable the clustering is, we estimate the posterior
probability that nodes $i$ and $j$ are assigned to the same cluster by
recording the number of times this event occurs in the MCMC. We observe that
the clusters are generally quite stable, with two notable exceptions. First,
there is significant posterior probability that points 90 and 92 are assigned
to the red cluster rather than the blue cluster. This is significant because
these two points are the conservative blogs which are connected only to
liberal blogs (see fig. 2). While the graph topology strongly suggests that
these belong to the blue cluster, the labels are able to exert some influence
as well. Note that we do not observe the same phenomenon for points 7, 15, and
25, which are better connected. We also observe some ambiguity between the
purple and blue clusters. These are nodes 6, 14, 22, 33, 35 and 36, which
appear at the intersection of the purple/blue clusters in the graph projection
because they are not quite as connected as the purple “core” but better
connected than most of the blue cluster.
Finally, we examine the posterior probability of being labeled “conservative”
(fig. 3). Most data points are assigned very high or low probability. The five
labeled points stand out as having uncharacteristic labels for their link
structure (see fig. 2). Since the observed label does not agree with the graph
topology, the probability is pulled away from 0/1 toward an intermediate
value. This effect is most pronounced in the three better-connected liberal
blogs (lower left) versus the weakly connected conservative blogs (upper
right).
For the second example, we use data obtained from the Osteoarthritis
Initiative (OAI) database, which is available for public access at
http://www.oai.ucsf.edu/. The question of interest for these data is to
investigate relationships between physical activity and knee disease symptoms.
For this example we use a subset of the baseline clinical data, version 0.2.2.
The data ensemble comprises variables including biomarkers, knee joint
symptoms, medical history, nutrition, physical exam and subject
characteristics. In our subset we take an ensemble of size $120$ for $4750$
individuals. We hold out some of the biomarkers and knee joint symptoms and
consider prediction accuracy of the ITF versus the joint DPM model. For the
real variables, mixtures of normal kernels are considered; for the categorical
variables, mixtures of multinomials; and for the time series, mixtures of
fixed finite wavelet basis expansions.
Results for this experiment are summarized in Table 3 for 4 held-out
variables. The ITF outperforms the DPM in 3 of these 4 cases and has marginally
worse prediction accuracy for the remaining variable. It is also interesting to
note that the ITF helps to uncover useful relationships between medical history,
physical activity and knee disease symptoms, which has potential application
for clinical action and treatment at subsequent patient visits.
## 6 CONCLUSIONS
We have developed a general model to accommodate complex ensembles of data,
along with a novel algorithm to sample from the posterior distributions
arising from the model. Theoretically, extension to any number of levels of
stick-breaking processes should be possible; the utility and computational
feasibility of such extensions are being studied. Also under investigation are
connections with random graph/network models and theoretical rates of
posterior convergence.
Table 1: Simulation Example, Scenario 1: Prediction error (top), tests of
independence (bottom)
| ITF | DPM
---|---|---
T | 1.79 | 1.43
C2 | 31$\%$ | 23 $\%$
C3 | 37$\%$ | 36 $\%$
| ITF | DPM | “Truth”
---|---|---|---
C1 vs T | Yes | Yes | Yes
C2 vs T | Yes | Yes | Yes
C3 vs T | Yes | Yes | Yes
C2 vs R | Yes | Yes | Yes
Table 2: Simulation Example, Scenario 2: Prediction error (top), tests of
independence (bottom)
| ITF | DPM
---|---|---
T | 4.61 | 10.82
C2 | 27$\%$ | 55 $\%$
C3 | 34$\%$ | 57 $\%$
| ITF | DPM | “Truth”
---|---|---|---
C1 vs T | Yes | Yes | Yes
C2 vs T | No | Yes | No
C3 vs T | No | Yes | No
C2 vs R | No | Yes | No
Figure 1: Network Example: True Clustering
Figure 2: Network Example: Recovered Clustering
Figure 3: Network Example: Pairwise cluster assignment probability. Left bars correspond to clustering in Fig. 2, top bars correspond to clustering on the ideology label.
Table 3: OAI Data example: Relative Predictive Accuracy. The variables are, respectively, left knee baseline pain, isometric strength left knee extension, left knee paired X-ray reading, left knee baseline radiographic OA.
| ITF | DPM
---|---|---
P01BL12SXL | 31.21 | 100.92
V00LEXWHY1 | 7.94 | 7.56
V00XRCHML | 23.01 | 31.84
P01LXRKOA | 65.78 | 90.30
#### Acknowledgements
This work was supported by Award Number R01ES017436 from the National Institute
of Environmental Health Sciences and DARPA MSEE. The content is solely the
responsibility of the authors and does not necessarily represent the official
views of the National Institute of Environmental Health Sciences or the
National Institutes of Health or DARPA MSEE.
## References
* Adamic & Glance (2005) Adamic, L. & Glance, N. (2005). The political blogosphere and the 2004 us election: divided they blog. In Proceedings of the 3rd international workshop on Link discovery. ACM.
* Antoniak (1974) Antoniak, C. (1974). Mixtures of dirichlet processes with applications to bayesian nonparametric problems. The annals of statistics , 1152–1174.
* Bigelow & Dunson (2009) Bigelow, J. & Dunson, D. (2009). Bayesian semiparametric joint models for functional predictors. Journal of the American Statistical Association 104, 26–36.
* Cai et al. (2011) Cai, J., Song, X., Lam, K. & Ip, E. (2011). A mixture of generalized latent variable models for mixed mode and heterogeneous data. Computational Statistics & Data Analysis 55, 2889–2907.
* Dunson (2000) Dunson, D. (2000). Bayesian latent variable models for clustered mixed outcomes. Journal of the Royal Statistical Society: Series B (Statistical Methodology) 62, 355–366.
* Dunson (2003) Dunson, D. (2003). Dynamic latent trait models for multidimensional longitudinal data. Journal of the American Statistical Association 98, 555–563.
* Dunson (2009) Dunson, D. (2009). Nonparametric bayes local partition models for random effects. Biometrika 96, 249–262.
* Dunson (2010) Dunson, D. (2010). Multivariate kernel partition process mixtures. Statistica Sinica 20, 1395.
* Dunson & Bhattacharya (2010) Dunson, D. & Bhattacharya, A. (2010). Nonparametric bayes regression and classification through mixtures of product kernels. Bayesian Stats .
* Dunson & Xing (2009) Dunson, D. & Xing, C. (2009). Nonparametric bayes modeling of multivariate categorical data. Journal of the American Statistical Association 104, 1042–1051.
* Escobar & West (1995) Escobar, M. & West, M. (1995). Bayesian density estimation and inference using mixtures. Journal of the american statistical association , 577–588.
* Ferguson (1973) Ferguson, T. (1973). A bayesian analysis of some nonparametric problems. The annals of statistics , 209–230.
* Hannah et al. (2011) Hannah, L. A., Blei, D. M. & Powell, W. B. (2011). Dirichlet process mixtures of generalized linear models. The Journal of Machine Learning Research 12, 1923–1953.
* Li et al. (2011) Li, L., Zhou, M., Wang, E. & Carin, L. (2011). Joint dictionary learning and topic modeling for image clustering. In Acoustics, Speech and Signal Processing (ICASSP), 2011 IEEE International Conference on. IEEE.
* Muthen (1984) Muthen, B. (1984). A general structural equation model with dichotomous, ordered categorical, and continuous latent variable indicators. Psychometrika 49, 115–132.
* Papaspiliopoulos & Roberts (2008) Papaspiliopoulos, O. & Roberts, G. (2008). Retrospective markov chain monte carlo methods for dirichlet process hierarchical models. Biometrika 95, 169–186.
* Sammel et al. (1997) Sammel, M., Ryan, L. & Legler, J. (1997). Latent variable models for mixed discrete and continuous outcomes. Journal of the Royal Statistical Society: Series B (Statistical Methodology) 59, 667–678.
* Sethuraman (1994) Sethuraman, J. (1994). A constructive definition of Dirichlet priors. Statistica Sinica 4, 639–650.
* Shahbaba & Neal (2009) Shahbaba, B. & Neal, R. (2009). Nonlinear models using dirichlet process mixtures. The Journal of Machine Learning Research 10, 1829–1850.
* Shen & Ghosal (2011) Shen, W. & Ghosal, S. (2011). Adaptive bayesian multivariate density estimation with dirichlet mixtures. Arxiv preprint arXiv:1109.6406 .
* Song et al. (2009) Song, X., Xia, Y. & Lee, S. (2009). Bayesian semiparametric analysis of structural equation models with mixed continuous and unordered categorical variables. Statistics in medicine 28, 2253–2276.
* Tokdar (2011) Tokdar, S. (2011). Adaptive convergence rates of a dirichlet process mixture of multivariate normals. Arxiv preprint arXiv:1111.4148 .
* Walker (2007) Walker, S. (2007). Sampling the dirichlet mixture model with slices. Communications in Statistics Simulation and Computation® 36, 45–54.
* Yang & Dunson (2010) Yang, M. & Dunson, D. (2010). Bayesian semiparametric structural equation models with latent variables. Psychometrika 75, 675–693.
|
arxiv-papers
| 2013-03-03T01:55:10 |
2024-09-04T02:49:42.321938
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Anjishnu Banerjee, Jared Murray, David B. Dunson",
"submitter": "Anjishnu Banerjee",
"url": "https://arxiv.org/abs/1303.0449"
}
|
1303.0467
|
# Photoacoustic Tomography using a Michelson Interferometer with Quadrature
Phase Detection
Rory W. Speirs School of Physics, The University of Melbourne, Victoria,
3010, Australia Alexis I. Bishop [email protected] School of Physics,
Monash University, Victoria, 3800, Australia
###### Abstract
We present a pressure sensor based on a Michelson interferometer, for use in
photoacoustic tomography. Quadrature phase detection is employed allowing
measurement at any point on the mirror surface without having to retune the
interferometer, as is typically required by Fabry-Perot type detectors. This
opens the door to rapid full surface detection, which is necessary for
clinical applications. Theory relating acoustic pressure to detected acoustic
particle displacements is used to calculate the detector sensitivity, which is
validated with measurement. Proof-of-concept tomographic images of blood
vessel phantoms have been taken with sub-millimeter resolution at depths of
several millimeters.
Photoacoustic imaging has the potential to become a routinely used medical
imaging modality, combining the superior contrast of optical techniques, with
the penetration depth of ultrasound.Minghua and Lihong V. (2006); Li and Wang
(2009); Wang and Hu (2012) Its inherent ability to distinguish regions of
contrasting optical absorption make it an ideal candidate for imaging vascular
structure, with possible applications in diagnosis of strokeJoshi and Agarwal
(2010) and early stage cancers.Esenaliev _et al._ (1999); Oraevsky _et al._
(2001); Yang _et al._ (2012)
Photoacoustic tomography (PAT) is potentially capable of producing real time,
three-dimensional (3D), high resolution images to depths of several
centimeters.Wang (2008); Xiang _et al._ (2013) The requirements of the
ultrasonic detection system to achieve this are formidable, and as yet, no
system has emerged which satisfies all criteria simultaneously.
Though much emphasis has been placed on detector sensitivity, there are other,
equally pressing requirements of a high resolution system. These include the
need for a large detection surface, with high spatial resolution. Of
particular importance for a system that can be used in a clinical context is the
ability to rapidly capture data, ideally over the whole detector surface
simultaneously. A detector which is transparent to the excitation light is
also favourable, as this allows a large amount of optical energy to be dumped
uniformly on the region being imaged.
Piezoelectric detectors struggle with many of these requirements,Hou _et al._
(2007) and so a wide variety of optical detectors have been developed.Beard
and Mills (1996); Hamilton and O’Donnell (1998); Paltauf _et al._ (2007);
Chow _et al._ (2011) Planar Fabry-Perot based systems show good sensitivity
and bandwidth response, but are typically slow to acquire data because of the
need to tune the probing laser at each point on the detector surface to
achieve peak sensitivity.Zhang _et al._ (2008) Simultaneous two-dimensional
(2D) data collection has been demonstrated with these systems,Lamont and Beard
(2006) but large detection areas are challenging to produce because of the
difficulty in creating polymer coatings of uniform thickness.
Microring resonators have been made with impressive sensitivity and small
element size, and can be made transparent. However, coupling and addressing a
large array of microrings will be difficult, so it is yet to be seen whether
simultaneous full surface detection is achievable.Huang _et al._ (2008)
Pressure dependent optical reflectance detectors have been demonstrated with
the ability to capture pressure data over a whole surface
simultaneously,Paltauf _et al._ (1999); Paltauf and Schmidt-Kloiber (1997)
without the need for complicated nanofabrication techniques of some other
methods. Moreover, use of fast-framing, or gated charge-coupled devices (CCDs)
simplifies data collection, and allows for high spatial resolution over a
large detection surface. However, the detection sensitivity of this type of
system has so far been only modest, and may be insufficient for high
resolution imaging of biological tissue.
We have developed a detector based on a Michelson interferometer (MI) with
quadrature phase detection. This detector has comparable sensitivity to other
optical detectors in the literature, but has the potential to perform high
resolution measurements over a full 2D surface simultaneously, without the
need for any position dependent sensitivity tuning.
The MI acts as an ultrasound sensor simply by acoustically coupling an
ultrasound source to a mirror in one of the arms. The acoustic wave of
pressure $p$, has an associated particle displacement $\xi$, which shifts the
position of the mirror as the wave passes through it. This change in position
adjusts the relative phase of the laser beams in the two arms, resulting in a
change in fringe brightness at the output of the interferometer. For a small
amplitude wave travelling in the $x$ direction at time $t$, pressure and
displacement are related by:
$p(x,t)=-E\frac{\partial\xi(x,t)}{\partial x},$ (1)
where $E$ is the appropriate modulus of elasticity for the medium.Blitz (1963)
The intensity, $I$ of the recombined beam in a standard MI varies sinusoidally
with mirror position:
$I=\frac{I_{0}}{2}(1+\cos(\phi)),$ (2)
where $\phi=4\pi n\xi/\lambda$. Here, $I_{0}$ is the input intensity, $\phi$
is the phase, $\lambda$ is the wavelength of the probe beam, and $n$ is the
refractive index of the arm where the mirror position is changing.
Michelson interferometers previously used in ultrasonic detection therefore
suffer from the same problem as Fabry-Perot type detectors, in that the laser
wavelength (or mirror position) must be tuned to a sensitive region at each
point in order to obtain good optical modulation for a given mirror
displacement.Mezrich _et al._ (1976)
The need for tuning was removed from our system by employing quadrature phase
detection. In quadrature phase detection, two orthogonal linear polarizations
are used to simultaneously obtain two separate interference patterns at the
output of the interferometer, which have a relative phase difference of
$\pi/2$. This phase difference ensures that the interference pattern of at
least one of the polarization components is always in a sensitive region.
The phase shift between the two polarization components is created by first
linearly polarizing the light at $45^{\circ}$ from the vertical or horizontal axis of
the polarizing beamsplitter. A liquid crystal variable waveplate is placed in
one of the arms, which retards the phase of one polarization component
relative to the other by nominally $\pi/4$ in both the forward and reverse
trips. A variable retarder is used instead of a fixed $\lambda/8$ wave plate
to compensate for small amounts of birefringence present in other optical
components.
The _phase sensitivity_ , $\frac{dI}{d\phi}$ (which is the optical intensity
modulation per radian of phase) of a normal MI varies between $0~{}\rm
rad^{-1}$ and $I_{0}/2~{}\rm rad^{-1}$. For an MI with quadrature detection,
the phase sensitivities of the two polarizations are simply added together, so
the total is always between $I_{0}/2~{}\rm rad^{-1}$ and $I_{0}/\sqrt{2}~{}\rm rad^{-1}$.
This ensures that the total sensitivity of the system is always at least as
high as the maximum of a standard MI, irrespective of absolute mirror
position.
A diagram of the detector setup can be seen in Fig. 1. The mirror position is
recovered from the detected intensities of the two polarizations, $I_{1,2}$ by
first scaling them between $-1$ and $1$, then treating them as points on the
unit circle: $\phi=\mathrm{atan2}(I_{1},I_{2})$.
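A minimal sketch of this phase-recovery step (an illustrative post-processing script, not the authors' acquisition code); the traces are assumed to be already normalized to the full fringe range $[0,1]$, which in practice would come from a calibration sweep:

```python
import numpy as np

def displacement_from_quadrature(i1, i2, wavelength=633e-9, n_medium=1.0):
    """Recover mirror displacement xi(t) from two quadrature fringe signals.

    Each normalized signal is mapped to [-1, 1], the phase is taken as
    phi = atan2(I1, I2) and unwrapped, and xi = phi * lambda / (4 pi n)."""
    phi = np.unwrap(np.arctan2(2.0 * i1 - 1.0, 2.0 * i2 - 1.0))
    phi -= phi[0]                  # displacement relative to the start of the trace
    return phi * wavelength / (4.0 * np.pi * n_medium)

# Synthetic check: a 1 MHz, 5 nm amplitude mirror oscillation at an arbitrary
# static operating point, where a plain MI could sit near an insensitive turning point.
t = np.linspace(0.0, 5e-6, 2000)
xi_true = 5e-9 * np.sin(2 * np.pi * 1e6 * t)
phi_true = 4 * np.pi * xi_true / 633e-9
i1 = 0.5 * (1 + np.cos(phi_true + np.pi / 3))
i2 = 0.5 * (1 + np.cos(phi_true + np.pi / 3 + np.pi / 2))
xi = displacement_from_quadrature(i1, i2)
print("peak recovered displacement (nm):", 1e9 * np.abs(xi).max())   # ~5 nm
```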
Figure 1: A schematic of the Michelson interferometer detector. Ultrasound
passing through the sensing mirror changes the path length of the light, which
alters the phase of the interference pattern. Quadrature phase detection
allows high sensitivity to be achieved, whatever the mirror positions.
Our setup uses an expanded $5~{}\rm mW$, $633~{}\rm nm$ continuous wave
helium-neon laser as the probe, and the detectors are amplified photodiodes
with a $20~{}\rm MHz$ bandwidth. The photodiodes are apertured, which sets the
spatial resolution of the system. The signal of each photodiode is recorded
using a digital oscilloscope. The sensing mirror is a $150~{}\rm\mu m$ thick
glass substrate with a gold reflective coating. To maintain mechanical
stability, the mirror is mounted on an optical window by bonding it around the
perimeter with epoxy resin. This method of bonding also ensures the presence
of an air gap between the reflective surface and the window, eliminating the
possibility of acoustic waves propagating into the window. The glass surface
of the mirror is acoustically coupled to the medium being imaged, with the
window becoming part of the interferometer arm. The resulting mirror can be
seen in Fig. 2.
Figure 2: (a) Schematic of the sensing mirror showing probe laser. (b) The
back side of the sensing mirror used in this experiment.
It is necessary to use such a thin mirror for two reasons. Firstly, a thick
mirror suffers from the acoustic wave reflecting back and forth off the
boundary of the substrate. This reflected wave interferes with the incoming
wave, making the detected displacement useless. Multiple reflections are
strongly suppressed in a mirror that is much thinner than the acoustic
wavelength, so the displacement on the mirror surface accurately represents
the incoming wave. Secondly, a mirror of thickness significantly greater than
the acoustic wavelength is able to support surface waves. These surface, or
_Rayleigh_ waves can exist whenever there is an impedance mismatch between two
media, and are generated on the surface of the mirror when the photoacoustic
pulse first reaches the boundary. The Rayleigh waves then propagate outwards
across the surface and interfere with incoming photoacoustic waves.
To see the fundamental limitations of the detection system, it is useful to
look at the theoretically achievable sensitivity. Like all Fabry-Perot and
piezoelectric type detectors, the MI detector is sensitive to particle
displacements, rather than directly to pressure. However because the MI
detects the absolute position of a single plane, rather than the relative
position of two planes, its sensitivity to pressure is easier to describe
analytically. For a sinusoidal acoustic wave of pressure amplitude $p_{0}$,
travelling in the positive $x$ direction, the corresponding particle
displacement is given by:
$\xi(x,t)=\frac{p_{0}}{2\pi\nu z}\cos(kx-2\pi\nu t),$ (3)
where $k$ is wavenumber, $\nu$ is frequency, and $z$ is the specific acoustic
impedance of the propagation medium. For optically based pressure sensors, the sensitivity $S$ can be given simply as the fractional optical intensity modulation per unit acoustic pressure: $S=\frac{1}{I_{0}}\frac{dI}{dp}$. Using the
chain rule, this may be expanded to
$\frac{dI}{dp}=\frac{dI}{d\phi}\frac{d\phi}{d\xi}\frac{d\xi}{dp}$. For the
quadrature MI, $\frac{dI}{d\phi}$ is always at least $0.5I_{0}$, and
$\frac{d\phi}{d\xi}$ is simply calculated from the expression for $\phi$.
$\frac{d\xi}{dp}$ can be calculated from Eqn. 3, however it must be modified
to describe the setup employed in our system. Firstly, since the pressure wave
must propagate from the original medium into the glass mirror substrate, the
pressure must be multiplied by the transmission coefficient:
$T_{p}=2z_{2}/(z_{1}+z_{2})$, where $z_{1,2}$ are the specific acoustic
impedances of the first and second media respectively.Shutilov (1988) Also,
the mirror is essentially on a free boundary (since $z_{air}\ll z_{glass}$),
so the particle displacement will be twice as great as in the bulk. Combining
these terms gives the expression for the frequency dependent sensitivity:
$S(\nu)=\frac{4n}{\nu\lambda(z_{1}+z_{2})}.$ (4)
Typical values of acoustic impedance for water and glass are $1.5\times
10^{6}~{}\rm Pa~{}s~{}m^{-1}$ and $13.1\times 10^{6}~{}\rm Pa~{}s~{}m^{-1}$
respectively. Taking $n=1$ as the refractive index of air, and letting
$\lambda=633~{}\rm nm$, the sensitivity of our detector is
$S(\nu)=0.43/\nu~{}\rm Hz~{}Pa^{-1}$. For a photoacoustic wave of frequency
$1~{}\rm MHz$, the sensitivity of the detector should be 4.3% optical
modulation per $100~{}\rm kPa$ of peak acoustic pressure. This value can be
compared directly with the sensitivity for an optical reflectance based
detector of 0.19 to 0.81% reported elsewhere in the literature.Paltauf _et
al._ (1999)
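For concreteness, the numbers quoted above can be reproduced in a few lines (a sketch using only the impedance, wavelength, and refractive-index values stated in the text):

```python
n_arm = 1.0        # refractive index of the interferometer arm (air)
lam = 633e-9       # probe wavelength, m
z_water = 1.5e6    # specific acoustic impedance of water, Pa s m^-1
z_glass = 13.1e6   # specific acoustic impedance of glass, Pa s m^-1

def sensitivity(nu):
    """Fractional optical modulation per pascal, Eqn. 4."""
    return 4.0 * n_arm / (nu * lam * (z_water + z_glass))

nu = 1e6                               # 1 MHz photoacoustic wave
print(sensitivity(nu))                 # ~4.3e-7 Pa^-1, i.e. S = 0.43/nu Hz Pa^-1
print(100 * sensitivity(nu) * 100e3)   # ~4.3 % modulation per 100 kPa
```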
Fig. 3 shows the actual detected displacement caused by a photoacoustic wave
from a single source, positioned $3.3~{}\rm mm$ directly behind the detector.
To recover the acoustic pressure from the displacement, the temporal
derivative must be taken according to Eqn. 1, where $c=dx/dt$ has been used to
change the variable of differentiation. A post processing low pass filter is
applied to the detected displacement before the derivative is taken. This
ensures the calculated pressure does not contain unphysical spikes which are
artefacts of taking the derivative of a noisy signal.
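A sketch of this post-processing step is given below. The plane-wave relation $p=z\,\partial\xi/\partial t$ (Eqn. 1 with $c=dx/dt$ and $z$ the specific acoustic impedance) is assumed, and the fourth-order Butterworth filter and the particular cutoff are illustrative choices rather than the ones used in the experiment.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def pressure_from_displacement(xi, dt, z=1.5e6, f_cut=20e6):
    """Convert a detected displacement trace xi(t) to acoustic pressure.

    For a plane wave travelling in the +x direction, Eqn. 1 with c = dx/dt
    gives p = z * dxi/dt. The trace is low-pass filtered first so that
    differentiating does not amplify detector noise.
    """
    nyquist = 0.5 / dt
    b, a = butter(4, f_cut / nyquist)   # cutoff must lie below the Nyquist frequency
    xi_smooth = filtfilt(b, a, xi)      # zero-phase low-pass filtering
    return z * np.gradient(xi_smooth, dt)
```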
Figure 3: (a) The detected mirror displacement due to a photoacoustic wave
produced from a single source and (b), the corresponding pressure wave. A
comparison between simulated and experimental pressure data shows the
calculated sensitivity for the system is accurate.
The noise equivalent displacement in this trace is approximately $5~{}\rm\AA$,
where the signal has been averaged for 64 pulses. The majority of the noise in
this case is due to the amplified photodiodes, which may be improved using
different equipment.
Fig. 3 also shows how the experimentally detected photoacoustic wave compares
with a simulated one, and the agreement between the two supports the values of
sensitivity calculated previously. In the experiment, a long straight silicone
tube with internal and external diameters of $0.5~{}\rm mm$ and $1.3~{}\rm mm$
respectively, was filled with diluted India ink with absorption coefficient
$35~{}\rm cm^{-1}$ at $1064~{}\rm nm$. This value is similar to the optical
absorption coefficient of blood at wavelengths commonly used in PAT. The tube
was illuminated with a $10~{}\rm ns$ pulse of $1064~{}\rm nm$ light with
fluence of $25~{}\rm mJ~{}cm^{-2}$. This fluence is well below the maximum
permissible exposure for human skin of $100~{}\rm mJ~{}cm^{-2}$ based on
American National Standards Institute recommendations.ANS (2007) The pulsed
beam was collimated, and had a 1/e diameter of $1~{}\rm cm$. The simulation
was performed using the method described by Köstli _et al._ Kostli _et al._
(2001), where Fig. 3b shows a one-dimensional (1D) slice of the full 3D
simulation. The parameters used in the simulation were the same as described
for the experiment, however the simulation assumed an acoustically homogeneous
propagation medium, so any acoustic effects of the silicone tube were ignored.
The large diameter excitation beam minimized intensity variation across the
tube, though due to the Gaussian nature of the beam, the stated fluence at the
sample is only accurate to within 15%. The fluence used in the simulation was
adjusted within this range until the simulated amplitude most closely matched
the detected one. As such, the calculated sensitivity is also only accurate to
within this margin.
For the experimental measurement, the tube was submerged in a water filled
glass cell. One wall of the cell was made of acoustically transparent
polyethylene film, which was coupled to the sensing mirror of the
interferometer with commercial ultrasound coupling gel. The detected pressure
represents the pressure in the bulk of the glass substrate. This value was
divided by the pressure transmission coefficient to give the acoustic pressure
in the water, allowing a direct comparison with simulation.
To create photoacoustic images, the photodiodes could be moved laterally to
build up 1D or 2D scans. However the lower laser intensity at the edges of the
expanded probe beam meant that data collected in these regions had a lower
signal to noise ratio (SNR). Instead, the sample itself was scanned laterally.
This allowed different regions of the generated ultrasonic wavefield to be
sampled in an equivalent manner to scanning the photodiodes, while maintaining
a high SNR. A 1D scan of the wavefield produced by the cylindrical optical
absorber is shown in Fig. 4. The source was the same India ink filled silicone
tube as previously described, illuminated by the same pulsed laser. The
photodiode aperture was set to a diameter of $200~{}\rm\mu m$, which was the
same as the lateral step size.
Figure 4: (a) Detected acoustic pressure from a single photoacoustic source
and (b), the corresponding reconstructed image of the source. The size and
location of the source is indicated by the circle in the magnified inset.
The reconstruction shown in Fig. 4b was performed using the _kspaceLineRecon_
function of the _k-Wave_ photoacoustic package.Treeby and Cox (2010) The
algorithm is based on Fourier transforms, and it is theoretically exact if
pressure is detected over an infinite plane for infinite time.Kostli _et al._
(2001) In the reconstruction, the source has been positioned correctly, but
has suffered some blurring and distortion which is consistent with other
implementations of this inversion technique.Paltauf _et al._ (2007) The
blurring is unsurprising given that the diameter of the source was only
$500~{}\rm\mu m$ and the spatial separation of each data point was
$200~{}\rm\mu m$.
The resolution of our system is currently limited by the size of the aperture
in front of the photodiodes, which in turn is limited by the need to get
sufficient laser power to the photodiodes. This is easily improved by
increasing the laser power, and could be achieved using inexpensive diode
lasers. Using a shorter wavelength probe laser would also be a simple way to
boost sensitivity according to Eqn. 4.
Manually scanning the photodetectors (or the source) to build up an image is
too slow for real-time imaging applications, so any useful system must
ultimately be capable of performing simultaneous detection over the whole
surface. This could be simply achieved in our system by replacing the
photodiode and aperture arrangement with fast gated intensified charge-coupled
device cameras. CCDs detect intensity at many points across their surface
simultaneously, eliminating the need to move the sample or the detector. The
potential to use CCDs is a significant advantage over other proposed optical
photoacoustic detectors, which have no easy route to simultaneous measurement
of all elements over a large surface.
The current configuration of our sensing mirror (constructed from a thin glass
substrate) may cause limitations to the detectable bandwidth needed in higher
resolution systems, due to the possible reappearance of Rayleigh waves at
higher frequencies. However this could be addressed by using polymer
substrates impedance matched to water. Also, where deeper imaging is required,
the bandwidth requirements of the detector are much more forgiving, since very
high frequency acoustic waves are strongly attenuated in tissue. As such, the
current sensing mirror is suitable for imaging depths beyond a few millimeters
with no reduction in attainable resolution.
In summary, we have demonstrated the use of a Michelson interferometer as a
photoacoustic detector. Quadrature phase detection removes the need to tune
the sensitivity of the interferometer, allowing for the possibility of
simultaneous full surface detection. We used the detector to produce proof-of-
concept photoacoustic images with sub-millimeter resolution, and suggested
ways that resolution could be improved. Future work will involve demonstrating
simultaneous full surface measurement, with the aim of producing real-time 3D
photoacoustic visualizations of the sub-cranial vasculature. This
would allow stroke researchers to visualize the dynamics of reperfusion in
small animal subjects after inducing a stroke, in both high resolution and on
a usefully short timescale.
Please cite the Applied Physics Letters version of this article, available
online at: http://link.aip.org/link/doi/10.1063/1.4816427
## References
* Xu and Wang (2006) M. Xu and L. V. Wang, Rev. Sci. Instrum., 77, 041101 (2006).
* Li and Wang (2009) C. Li and L. V. Wang, Phys. Med. Biol., 54, R59 (2009).
* Wang and Hu (2012) L. V. Wang and S. Hu, Science, 335, 1458 (2012).
* Joshi and Agarwal (2010) S. Joshi and S. Agarwal, Ann. N.Y. Acad. Sci., 1199, 149 (2010).
* Esenaliev _et al._ (1999) R. Esenaliev, A. Karabutov, and A. Oraevsky, IEEE J. Sel. Top. Quant. Electron., 5, 981 (1999).
* Oraevsky _et al._ (2001) A. A. Oraevsky, A. A. Karabutov, S. V. Solomatin, E. V. Savateeva, V. A. Andreev, Z. Gatalica, H. Singh, and R. D. Fleming, Proc. SPIE, 4256, 6 (2001).
* Yang _et al._ (2012) Y. Yang, S. Wang, C. Tao, X. Wang, and X. Liu, Appl. Phys. Lett., 101, 034105 (2012).
* Wang (2008) L. V. Wang, Med. Phys., 35, 5758 (2008).
* Xiang _et al._ (2013) L. Xiang, B. Wang, L. Ji, and H. Jiang, Sci. Rep., 3 (2013).
* Hou _et al._ (2007) Y. Hou, J.-S. Kim, S. Ashkenazi, S.-W. Huang, L. J. Guo, and M. O'Donnell, Appl. Phys. Lett., 91, 073507 (2007).
* Beard and Mills (1996) P. C. Beard and T. N. Mills, Appl. Opt., 35, 663 (1996).
* Hamilton and O’Donnell (1998) J. Hamilton and M. O’Donnell, IEEE Trans. Ultrason., Ferroelectr., Freq. Control, 45, 216 (1998).
* Paltauf _et al._ (2007) G. Paltauf, R. Nuster, M. Haltmeier, and P. Burgholzer, Appl. Opt., 46, 3352 (2007).
* Chow _et al._ (2011) C. M. Chow, Y. Zhou, Y. Guo, T. B. Norris, X. Wang, C. X. Deng, and J. Y. Ye, J. Biomed. Opt., 16, 017001 (2011).
* Zhang _et al._ (2008) E. Zhang, J. Laufer, and P. Beard, Appl. Opt., 47, 561 (2008).
* Lamont and Beard (2006) M. Lamont and P. Beard, Electron. Lett., 42, 187 (2006).
* Huang _et al._ (2008) S.-W. Huang, S.-L. Chen, T. Ling, A. Maxwell, M. O'Donnell, L. J. Guo, and S. Ashkenazi, Appl. Phys. Lett., 92, 193509 (2008).
* Paltauf _et al._ (1999) G. Paltauf, H. Schmidt-Kloiber, K. P. Köstli, and M. Frenz, Appl. Phys. Lett., 75, 1048 (1999).
* Paltauf and Schmidt-Kloiber (1997) G. Paltauf and H. Schmidt-Kloiber, J. Appl. Phys., 82, 1525 (1997).
* Blitz (1963) J. Blitz, _Fundamentals Of Ultrasonics_ (Butterworths Co., London, 1963).
* Mezrich _et al._ (1976) R. Mezrich, D. Vilkomerson, and K. Etzold, Appl. Opt., 15, 1499 (1976).
* Shutilov (1988) V. A. Shutilov, _Fundamental Physics of Ultrasound_ (Gordon and Breach Science Publishers, New York, 1988).
* ANS (2007) _ANSI Z136.1_ , American National Standards Institute (2007).
* Kostli _et al._ (2001) K. P. Kostli, M. Frenz, H. Bebie, and H. P. Weber, Phys. Med. Biol., 46, 1863 (2001).
* Treeby and Cox (2010) B. E. Treeby and B. T. Cox, J. Biomed. Opt., 15, 021314 (2010).
# Double loop quantum enveloping algebras
Wu Zhixiang Department of Mathematics, Zhejiang University, Hangzhou, 310027,
P.R.China [email protected]
###### Abstract.
In this paper we describe certain homological properties and representations
of a two-parameter quantum enveloping algebra $U_{g,h}$ of
${\mathfrak{sl}}(2)$, where $g,h$ are group-like elements.
###### Key words and phrases:
Quantum enveloping algebra, Gelfand-Kirillov dimension,
$\operatorname{Tdeg}$-stable, finite-dimensional representation, BGG category
###### 2000 Mathematics Subject Classification:
Primary 17B10,17B37, Secondary 16T20, 81R50
The author is sponsored by NNSF No.11171296, ZJNSF No. Y6100148, Y610027 and
Education Department of Zhejiang Province No. 201019063.
## 1\. Introduction
It is well-known that there is a map $L\rightarrow P_{L}$ from the set of all
oriented links $L$ in $\mathbb{R}^{3}$ to the ring
$\mathbb{Z}[g^{\pm 1},h^{\pm 1}]$ of two-variable Laurent polynomials. $P_{L}$
is called the Jones-Conway polynomial of the link $L$. The Jones-Conway
polynomial is an isotopy invariant of oriented links satisfying what knot
theorists call ``skein relations" (see [11]). Suppose $\mathbbm{K}$ is a field
with characteristic zero and $q$ is a nonzero element in $\mathbbm{K}$
satisfying $q^{2}\neq$ $1$. Let $U_{q}(\mathfrak{sl}(2))$ be the usual quantum
enveloping algebra of the Lie algebra $\mathfrak{sl}(2)$ with generators
$E,F,K^{\pm 1}$. Then the vector space
$U_{g,h}:=\mathbbm{K}[g^{\pm 1},h^{\pm
1}]\otimes_{\mathbbm{K}}U_{q}(\mathfrak{sl}(2))$
has been endowed with a Hopf algebra structure in [13].
We abuse notation and write $g^{\pm 1},h^{\pm 1},E,F,K^{\pm 1}$ for $g^{\pm
1}\otimes 1$, $h^{\pm 1}\otimes 1$, $1\otimes E$, $1\otimes F$, $1\otimes
K^{\pm 1}$ respectively. In addition, $g^{+1},h^{+1},K^{+1}$ are abbreviated
to $g,h,K$ respectively. Then $U_{g,h}$ is an algebra over $\mathbbm{K}$
generated by $g,$ $g^{-1},$ $h,$ $h^{-1},$ $E,$ $F,$ $K,$ $K^{-1}$. These
generators satisfy the following relations.
(1.1) $K^{-1}K=KK^{-1}=1,\quad g^{-1}g=gg^{-1}=1,\quad h^{-1}h=hh^{-1}=1,$
(1.2) $KEK^{-1}=q^{2}E,\quad gh=hg,\quad gK=Kg,\quad gE=Eg,\quad hE=Eh,$
(1.3) $KFK^{-1}=q^{-2}F,\quad hK=Kh,\quad hF=Fh,\quad gF=Fg,$
(1.4) $EF-FE=\dfrac{K-K^{-1}g^{2}}{q-q^{-1}}.$
The other operations of the Hopf algebra $U_{g,h}$ are defined as follows:
(1.5) $\Delta(E)=h^{-1}\otimes E+E\otimes hK,$
(1.6) $\Delta(F)=K^{-1}hg^{2}\otimes F+F\otimes h^{-1},$
(1.7) $\Delta(K)=K\otimes K,\quad\Delta(K^{-1})=K^{-1}\otimes K^{-1},$
(1.8) $\Delta(a)=a\otimes a,\quad a\in G,$
where $G=\\{g^{m}h^{n}|m,n\in\mathbb{Z}\\}$,
(1.9) $\varepsilon(K)=\varepsilon(K^{-1})=\varepsilon(a)=1,\quad a\in G,$
(1.10) $\varepsilon(E)=\varepsilon(F)=0,$
and
(1.11) $S(E)=-EK^{-1},\quad S(F)=-KFg^{-2},$
(1.12) $S(a)=a^{-1},\quad a\in G,\quad S(K)=K^{-1},\quad S(K^{-1})=K.$
The Hopf algebra $U_{g,h}$ is a special case of the Hopf algebras defined in
[14]. It is isomorphic to the tensor product of $U_{q}(\mathfrak{sl}(2))$ and
$\mathbb{K}[g^{\pm 1},h^{\pm 1}]$ as algebras. However, the coproduct of
$U_{g,h}$ is not the usual coproduct of the tensor product of two coalgebras.
Neither is the antipode.
Homological methods have been used to study Hopf algebras by many authors (see
[2], [15] and their references). However, there are few examples of Hopf
algebras satisfying a given set of homological properties. In this paper, we
describe certain homological properties of the Hopf algebra $U_{g,h}$ and
consequently give an example satisfying some homological properties. Moreover,
we study the representation theory of the algebra $U_{g,h}$. Similar to [7]
and [8], we can define some version of the Bernstein-Gelfand-Gelfand
(abbreviated as BGG) category $\mathcal{O}$. Furthermore, we decompose the BGG
category $\mathcal{O}$ into a direct sum of subcategories, which are
equivalent to categories of finitely generated modules over some finite-
dimensional algebras.
Let us outline the structure of this paper. In Section 2, we study the
homological properties of $U_{g,h}$. We prove that $U_{g,h}$ is Auslander-
regular and Cohen-Macaulay, and the global dimension and Gelfand-Kirillov
dimension of $U_{g,h}$ are equal. We also prove that the center of $U_{g,h}$
is equal to $\mathbbm{K}[g^{\pm 1},h^{\pm 1},C]$, where $C$ is the Casimir
element of $U_{g,h}$. To study the category $\mathcal{O}$ in Section 4, we
prove that $U_{g,h}$ has an anti-involution that acts as the identity on all
of $\mathbb{K}[K^{\pm 1},g^{\pm 1},h^{\pm 1}]$.
Since there is a finite-dimensional non-semisimple module over the algebra
$\mathbb{K}[g^{\pm 1},h^{\pm 1}]$, there is a finite-dimensional non-
semisimple module over $U_{g,h}$. In Section 3, we compute the extension group
$\operatorname{Ext}^{1}(M,M^{\prime})$ in the case that the nonzero $q$ is not
a root of unity, where $M,M^{\prime}$ are finite-dimensional simple modules
over $U_{g,h}$. We prove that the tensor functor $V\otimes-$ determines an
isomorphism from
$\operatorname{Ext}^{1}(\mathbb{K}_{\alpha^{\prime},\beta^{\prime}},\mathbb{K}_{\alpha,\beta})$
to
$\operatorname{Ext}^{1}(V\otimes\mathbb{K}_{\alpha^{\prime},\beta^{\prime}},V\otimes\mathbb{K}_{\alpha,\beta})$
for any finite-dimensional simple $U_{q}(\mathfrak{sl}(2))$-module $V$. We
also obtain a decomposition theory about the tensor product of two simple
$U_{g,h}$-modules. From this, we obtain a Hopf subalgebra of the finite dual
Hopf algebra $U_{g,h}^{\circ}$ of $U_{g,h}$, which is generated by coordinate
functions of finite-dimensional simple modules of $U_{g,h}$.
In Section 4, we briefly discuss the Verma modules of $U_{g,h}$. The BGG
subcategory $\mathcal{O}$ of the category of representations of $U_{g,h}$ is
introduced and studied. The main results in [8] also hold in the category
$\mathcal{O}$ over the algebra $U_{g,h}$.
Throughout this paper $\mathbbm{K}$ is a fixed algebraically closed field with
characteristic zero; $\mathbb{N}$ is the set of natural numbers; $\mathbb{Z}$
is the set of all integers. $*^{+1}$ is usually abbreviated to $*$. All
modules over a ring $R$ are left $R$-modules.
It is worth mentioning that some results of this article are also true if
$\mathbb{K}$ is not an algebraically closed field. We always assume that
$\mathbb{K}$ is an algebraically closed field for simplicity throughout this
paper.
### Acknowledgment
The author would like to thank the referee for carefully reading earlier
versions of this paper. His helpful comments and illuminating suggestions have
greatly improved the final version. In particular, the main idea of the proofs
of Theorem 2.1 and Theorem 3.10 was provided by the referee.
## 2\. some properties of $U_{g,h}$
In this section, we firstly prove that $U_{g,h}$ is a Noetherian domain with a
PBW basis. Then we compute the global dimension and Gelfand-Kirillov dimension
of $U_{g,h}$. Moreover, we show that $U_{g,h}$ is Auslander regular, Auslander
Gorenstein, Cohen-Macaulay and $\operatorname{Tdeg}$-stable. For the undefined
terms in this section, we refer the reader to [2] and [3].
###### Theorem 2.1 (PBW Theorem).
The algebra $U_{g,h}$ is a Noetherian domain. Moreover, it has a PBW basis
$\\{F^{l}K^{m}g^{n}h^{s}E^{t}|l,t\in\mathbb{Z}_{\geq
0};m,n,s\in\mathbb{Z}\\}$.
###### Proof.
Let $R=\mathbbm{K}[K^{\pm 1},g^{\pm 1},h^{\pm 1}]$. Since $R$ is a homomorphic
image of the polynomial ring $\mathbbm{K}[x_{1},x_{2},\cdots,x_{5},x_{6}]$
($\varphi(x_{1})=K$, $\varphi(x_{2})=K^{-1}$, $\varphi(x_{3})=g$,
$\varphi(x_{4})=g^{-1}$, $\varphi(x_{5})=h$, $\varphi(x_{6})=h^{-1}$), $R$ is
a Noetherian ring with a basis $\\{K^{m}g^{n}h^{s}|m,n,s$ $\in$
$\mathbb{Z}\\}$. It is easy to prove that $R$ is a domain.
Define $\sigma(K^{c}g^{a}h^{b})=q^{2c}K^{c}g^{a}h^{b}$, $\forall
a,b,c\in\mathbb{Z}$, $\delta(R)\equiv 0$, and extend $\sigma$ by additivity
and multiplicativity. It is trivial to check that $\sigma$ is a ring
automorphism of $R$, and $\delta:R\rightarrow R$ is a $\sigma$-skew
derivation. Hence $R^{\prime}:=R[F;\sigma,\delta]$ is a Noetherian domain with
a basis $\\{K^{a}g^{b}h^{c}F^{d}|a,b,c\in\mathbb{Z},d\in$ $\mathbb{Z}_{\geq
0}\\}$ by [9, Theorem 1.2.9].
Next, define $\sigma^{\prime}$ on $R^{\prime}$ via:
$\sigma^{\prime}(K^{a}g^{b}h^{c}F^{d})=q^{-2a}K^{a}g^{b}h^{c}F^{d},$
(for all integers $d\geq 0$, and $a,b,c\in\mathbb{Z}$), and extend
$\sigma^{\prime}$ by additivity and multiplicativity. One can check that
$\sigma^{\prime}$ is indeed a ring automorphism of $R^{\prime}$. Define
$\delta^{\prime}$ on $R^{\prime}$ via
$\delta^{\prime}(R)\equiv
0,\qquad\delta^{\prime}(F)=\frac{K-K^{-1}g^{2}}{q-q^{-1}}.$
Also extend $\delta^{\prime}$ to all of $R^{\prime}$ by additivity and the
following equation:
$\delta^{\prime}(ab):=\delta^{\prime}(a)b+\sigma^{\prime}(a)\delta^{\prime}(b),\qquad\forall
a,b\in R^{\prime}.$
One can check that $\delta^{\prime}$ is a $\sigma^{\prime}$-skew derivation of
$R^{\prime}$. Now by the above results, $U_{g,h}$
$=R^{\prime}[E;\sigma^{\prime},\delta^{\prime}]$ is indeed a Noetherian
domain, since $R^{\prime}$ is. Moreover, $U_{g,h}$ has a basis
$\\{K^{a}g^{b}h^{c}F^{d}E^{t}|a,b,c\in\mathbb{Z},d,t\in$ $\mathbb{Z}_{\geq
0}\\}$. Since $K^{a}g^{b}h^{c}F^{d}E^{t}=q^{-2ad}F^{d}K^{a}g^{b}h^{c}E^{t}$,
$\\{F^{l}K^{m}g^{n}h^{s}E^{t}|l,t\in\mathbb{Z}_{\geq 0};m,n,s\in\mathbb{Z}\\}$
is also a basis of $U_{g,h}$. This basis is called a PBW basis. ∎
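The reordering used in the last step follows from relation (1.3): $KF=q^{-2}FK$ gives, by induction,
$K^{a}F^{d}=q^{-2ad}F^{d}K^{a},\qquad a\in\mathbb{Z},\ d\in\mathbb{Z}_{\geq 0},$
and since $g$ and $h$ commute with $F$, $K^{a}g^{b}h^{c}F^{d}E^{t}=q^{-2ad}F^{d}K^{a}g^{b}h^{c}E^{t}$.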
###### Proposition 2.2.
(1) $U_{g,h}$ is isomorphic to $\mathbbm{K}[g^{\pm 1},h^{\pm 1}]\otimes
U_{q}(\mathfrak{sl}(2))$ as algebras;
(2) $U_{g,h}$ is an Auslander regular, Auslander Gorenstein and
$\operatorname{Tdeg}$-stable algebra with Gelfand-Kirillov dimension 5.
###### Proof.
Define $E^{\prime}:=g^{-1}E$, $K^{\prime}:=g^{-1}K$. By Theorem 2.1,
$\\{F^{a}K^{\prime b}g^{c}h^{d}E^{\prime
t}|b,c,d\in\mathbb{Z},a,t\in\mathbb{Z}_{\geq 0}\\}$
is also a basis of $U_{g,h}$. Let $\varphi(E^{\prime})=1\otimes E$,
$\varphi(F)=1\otimes F$, $\varphi(K^{\prime})=1\otimes K$,
$\varphi(g)=g\otimes 1$, $\varphi(h)=h\otimes 1$ and $\varphi$ extends by
additivity and multiplicativity. One can check that $\varphi$ is an
epimorphism of algebras from $U_{g,h}$ to $\mathbbm{K}[g^{\pm 1},h^{\pm
1}]\otimes$ $U_{q}(\mathfrak{sl}(2))$. Similarly, define $\phi(1\otimes E)$
$=$ $E^{\prime},\phi(1\otimes F)=F,$ $\phi(1\otimes K)$ $=K^{\prime},$
$\phi(g\otimes 1)$ $=g,$ $\phi(h\otimes 1)=h$, and extend $\phi$ by additivity
and multiplicativity. Then $\phi$ is an epimorphism of algebras from
$\mathbbm{K}[g^{\pm 1},h^{\pm 1}]\otimes$ $U_{q}(\mathfrak{sl}(2))$ to
$U_{g,h}$. It is easy to verify that $\phi\circ\varphi=\operatorname{id}$ and
$\varphi\circ\phi=\operatorname{id}$. So $\varphi$ is an isomorphism of
algebras.
Let us recall that if the global dimension of a Noetherian ring $A$, denoted
by $\operatorname{gldim}(A)$, is finite, then
$\operatorname{gldim}(A)=\operatorname{injdim}(A)$, the injective dimension of
$A$. From [9, Section 7.1.11], one obtains that the right global dimension of
a Noetherian algebra $A$ is equal to $\operatorname{gldim}(A)$ as well. In
[1], H. Bass proved that if $A$ is a commutative Noetherian ring with a finite
injective dimension, then $A$ is Auslander-Gorenstein. Thus
$\operatorname{gldim}({\mathbbm{K}}[g^{\pm 1},h^{\pm 1},K^{\pm 1}])$ $=3$, and
$\operatorname{gldim}U_{g,h}\leq\operatorname{gldim}({\mathbbm{K}}[g^{\pm
1},h^{\pm 1},K^{\pm 1}])+2=5$
by [9, Theorem 7.5.3]. Hence $U_{g,h}$ is an Auslander regular and Auslander
Gorenstein ring by [3, Theorem 4.2].
Recall that an algebra $A$ with total quotient algebra $Q(A)$ is said to be
$\operatorname{Tdeg}$-stable if
$\operatorname{Tdeg}(Q(A))=\operatorname{Tdeg}(A)=\text{GKdim}(A),$
where $\text{GKdim}(A)$ is the Gelfand-Kirillov dimension of $A$. By [15,
Example 7.1], $U_{q}(\mathfrak{sl}(2))$ is $\operatorname{Tdeg}$-stable, and
$\text{GKdim}(U_{q}(\mathfrak{sl}(2)))=3$. Since
$U_{g,h}\cong\mathbbm{K}[g^{\pm 1},h^{\pm 1}]\otimes
U_{q}(\mathfrak{sl}(2))\cong U_{q}(\mathfrak{sl}(2))[g,g^{-1}][h,h^{-1}],$
$\text{GKdim}(U_{g,h})=2+\text{GKdim}(U_{q}(\mathfrak{sl}(2)))=5,$
and $U_{g,h}$ is $\operatorname{Tdeg}$-stable by [15, Theorem 1.1]. ∎
###### Remark 2.3.
(1) Since $U_{g,h}\cong{\mathbbm{K}}[g^{\pm 1},h^{\pm 1}]\otimes
U_{q}(\mathfrak{sl}(2))$ as algebras, we call the Hopf algebra $U_{g,h}$ a
double loop quantum enveloping algebra.
(2) Since ${\mathbbm{K}}[g^{\pm 1},h^{\pm 1}]$ and $U_{q}(\mathfrak{sl}(2))$
are Hopf algebras, ${\mathbbm{K}}[g^{\pm 1},h^{\pm 1}]\otimes
U_{q}(\mathfrak{sl}(2))$ has a natural Hopf algebra structure. However, as
$\Delta(E^{\prime})=h^{-1}g^{-1}\otimes E^{\prime}+E^{\prime}\otimes
hK^{\prime},$
and
$\Delta(F)=K^{\prime-1}hg\otimes F+F\otimes h^{-1},$
by (1.5) and (1.6), the above isomorphism of algebras is not an isomorphism of
Hopf algebras, i.e., $U_{g,h}$ has a different coproduct than the usual
coproduct of the tensor product of the two coalgebras.
###### Corollary 2.4.
Suppose $q$ is not a root of unity. Then the center of $U_{g,h}$ is equal to
${\mathbbm{K}}[g^{\pm 1},h^{\pm 1},C]$, where
$C=FE+\frac{qK+q^{-1}K^{-1}g^{2}}{(q-q^{-1})^{2}}$.
###### Proof.
Let
$c^{\prime}=F^{\prime}E^{\prime}+\frac{qK^{\prime}+q^{-1}K^{{\prime}-1}}{(q-q^{-1})^{2}}$,
where $K^{\prime\pm},E^{\prime},F^{\prime}$ are the Chevalley generators of
$U_{q}(\mathfrak{sl}(2))$. Then the center of $U_{q}(\mathfrak{sl}(2))$ is
generated by $c^{\prime}$ by [6, Theorem VI.4.8]. Since
$U_{g,h}\cong\mathbbm{K}[g^{\pm 1},h^{\pm 1}]\otimes U_{q}(\mathfrak{sl}(2))$
as algebras by Proposition 2.2, the center of $U_{g,h}$ is isomorphic to
$\mathbbm{K}[g^{\pm 1},h^{\pm 1}]\otimes\mathbb{K}[c^{\prime}]$. So the center
of $U_{g,h}$ is equal to ${\mathbbm{K}}[g^{\pm 1},h^{\pm 1},c_{1}]$, where
$c_{1}=g^{-1}FE+\frac{g^{-1}qK+gq^{-1}K^{-1}}{(q-q^{-1})^{2}}$. Hence the center of
$U_{g,h}$ is equal to ${\mathbbm{K}}[g^{\pm 1},h^{\pm 1},C]$, where
$C=FE+\frac{qK+q^{-1}K^{-1}g^{2}}{(q-q^{-1})^{2}}$. ∎
The element $C=FE+\frac{qK+q^{-1}K^{-1}g^{2}}{(q-q^{-1})^{2}}$ is called a
Casimir element of $U_{g,h}$.
###### Proposition 2.5.
There exists an anti-involution $i$ of $U_{g,h}$ that acts as the identity on
all of $\mathbb{K}[K^{\pm 1},g^{\pm 1},h^{\pm 1}]$.
###### Proof.
Let $i(E)=-KF$, $i(F)=-EK^{-1}$, $i(K^{\pm 1})=K^{\pm 1}$, $i(g^{\pm
1})=g^{\pm 1}$, $i(h^{\pm 1})=h^{\pm 1}$. Extend $i$ by additivity and
multiplicativity. Then $i$ is an anti-involution of $U_{g,h}$, which acts as
the identity on all of $\mathbb{K}[K^{\pm 1},g^{\pm 1},h^{\pm 1}]$. ∎
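For instance, relation (1.4) is preserved: since $i$ is an anti-homomorphism,
$i(EF-FE)=i(F)i(E)-i(E)i(F)=(-EK^{-1})(-KF)-(-KF)(-EK^{-1})=EF-KFEK^{-1},$
and $KFEK^{-1}=(KFK^{-1})(KEK^{-1})=q^{-2}F\cdot q^{2}E=FE$ by (1.2) and (1.3), so $i(EF-FE)=EF-FE$, while $i$ fixes the right-hand side $\frac{K-K^{-1}g^{2}}{q-q^{-1}}$ of (1.4).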
Suppose $M$ is a finitely generated module over an algebra $A$. Then the grade
of $M$, denoted by $j(M)$, is defined to be
$j(M):=\min\\{j\geq 0|\operatorname{Ext}^{j}_{A}(M,A)\neq 0\\}.$
Recall that an algebra $A$ is Cohen-Macaulay if
$j(M)+\text{GKdim}(M)=\text{GKdim}(A)$
for every nonzero finitely generated $A$-module $M$.
###### Proposition 2.6.
The algebra $U_{g,h}$ is a Cohen-Macaulay algebra with
$\operatorname{gldim}U_{g,h}=5$.
###### Proof.
Let $A={\mathbbm{K}}[g,h,u,v,K,L][F;\alpha][E;\alpha,\delta]$, where
$\alpha|_{{\mathbbm{K}}[g,h,u,v]}=\operatorname{id}$, $\alpha(K)=$ $q^{2}K,$
$\alpha(L)=q^{-2}L$, $\alpha(F)=F$, $\delta(F)=\frac{K-L}{q-q^{-1}}$, and
$\delta({\mathbbm{K}}[g,h,u,v,K,L])=0$. Then $A$ is Auslander-regular and
Cohen-Macaulay by [2, Lemma II.9.10]. Since
$U_{g,h}\cong A/(gu-1,hv-1,KL-1),$
$U_{g,h}$ is Auslander-Gorenstein and Cohen-Macaulay by [2, Lemma II.9.11].
Let ${\mathbbm{K}}$ be the trivial $U_{g,h}$-module defined by $a\cdot
1=\varepsilon(a)1$. Then $\text{GKdim}(\mathbbm{K})=0$ and
$\operatorname{gldim}(U_{g,h})=5$ by [2, Exercise II.9.D].∎
In the presentation for $U_{g,h}$ given in Section 1, the generators $K^{\pm 1}$ and the generators $E,F$ play different roles. Similar to [4], we write down an equitable presentation for $U_{g,h}$ as follows.
###### Theorem 2.7.
The algebra $U_{g,h}$ is isomorphic to the unital associative
$\mathbbm{K}$-algebra with generators $x^{\pm 1}$, $y,z$; $u^{\pm 1},v^{\pm
1}$ and the following relations:
(2.1) $x^{-1}x=xx^{-1}=1,\quad u^{-1}u=uu^{-1}=1,\quad v^{-1}v=vv^{-1}=1,$
(2.2) $ux=xu,\quad uy=yu,\quad uz=zu,\quad uv=vu,$
(2.3) $vx=xv,\quad yv=vy,\quad zv=vz,$
(2.4) $\dfrac{qxy-q^{-1}yx}{q-q^{-1}}=1,$
(2.5) $\dfrac{qzx-q^{-1}xz}{q-q^{-1}}=1,$
(2.6) $\dfrac{qyz-q^{-1}zy}{q-q^{-1}}=1.$
###### Proof.
Let $\mathscr{U}_{u,v}$ be the algebra generated by $x^{\pm 1}$, $y$, $z$,
$u^{\pm 1}$, $v^{\pm 1}$ satisfying the relations from (2.1) to (2.6). Let us
define $\Phi(x^{\pm 1})=g^{\mp 1}K^{\pm 1}$, $\Phi(y)=K^{-1}g+F(q-q^{-1})$,
$\Phi(z)$ $=$ $K^{-1}g-K^{-1}Eq(q-q^{-1})$, $\Phi(u^{\pm 1})=g^{\mp 1}$,
$\Phi(v^{\pm 1})=h^{\pm 1}$, and extend $\Phi$ by additivity and
multiplicativity. Then $\Phi$ is a homomorphism of algebras from
$\mathscr{U}_{u,v}$ to $U_{g,h}$.
Define $\Psi(K^{\pm 1})=u^{\mp 1}x^{\pm 1}$,
$\Psi(F)=\frac{y-x^{-1}}{q-q^{-1}}$, $\Psi(E)=\frac{1-xz}{(q-q^{-1})qu}$,
$\Psi(g)=u^{-1}$, and $\Psi(h)=v$. We extend $\Psi$ by additivity and
multiplicativity. It is routine to check that $\Psi$ is a homomorphism of
algebras from $U_{g,h}$ to $\mathscr{U}_{u,v}$. Since $\Phi\Psi$ fixes each of
the generators $E,F,K^{\pm 1},g^{\pm 1},h^{\pm 1}$ of $U_{g,h}$,
$\Phi\Psi=\operatorname{id}$. Similarly we can check that
$\Psi\Phi=\operatorname{id}$. So $\Phi$ is the inverse of $\Psi$. ∎
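As an illustration of the computations involved, relation (2.4) can be checked directly for $\Phi$: with $\Phi(x)=g^{-1}K$ and $\Phi(y)=K^{-1}g+F(q-q^{-1})$,
$\Phi(x)\Phi(y)=1+(q-q^{-1})g^{-1}KF,\qquad\Phi(y)\Phi(x)=1+(q-q^{-1})g^{-1}FK,$
so, using $qKF=q^{-1}FK$ (a consequence of (1.3)),
$q\Phi(x)\Phi(y)-q^{-1}\Phi(y)\Phi(x)=(q-q^{-1})+(q-q^{-1})g^{-1}\left(qKF-q^{-1}FK\right)=q-q^{-1},$
and dividing by $q-q^{-1}$ gives relation (2.4).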
Since $\mathscr{U}_{u,v}$ is isomorphic to $U_{g,h}$ as algebras, we can
regard $U_{g,h}$ as an algebra generated by $x^{\pm 1},u^{\pm 1},v^{\pm 1}$,
$y$ and $z$ with relations (2.1)–(2.6). To make the above algebra isomorphisms
$\Phi,\Psi$ into isomorphisms of Hopf algebras, we only need to define the
other operations of the Hopf algebra $U_{g,h}$ with these new generators as
follows:
(2.7) $\Delta(x^{\pm 1})=x^{\pm 1}\otimes x^{\pm 1},$
(2.8) $\Delta(u^{\pm 1})=u^{\pm 1}\otimes u^{\pm 1},$
(2.9) $\Delta(v^{\pm 1})=v^{\pm 1}\otimes v^{\pm 1},$
(2.10) $\Delta(y)=x^{-1}\otimes(x^{-1}-v^{-1})+u^{-1}vx^{-1}\otimes(y-x^{-1})+y\otimes v^{-1},$
(2.11) $\Delta(z)=x^{-1}\otimes x^{-1}+uv^{-1}x^{-1}\otimes(z-x^{-1})+(z-x^{-1})\otimes v,$
(2.12) $\varepsilon(x^{\pm 1})=\varepsilon(u^{\pm 1})=\varepsilon(v^{\pm 1})=1,$
(2.13) $\varepsilon(y)=\varepsilon(z)=1,$
and
(2.14) $S(x^{\pm 1})=x^{\mp 1},\qquad S(u^{\pm 1})=u^{\mp 1},\qquad S(v^{\pm 1})=v^{\mp 1},$
(2.15) $S(y)=x-x^{-1}y+u,\qquad S(z)=x+u^{-1}-u^{-1}xz.$
Then one can check that the above isomorphisms $\Phi,\Psi$ are isomorphisms of
Hopf algebras. For example, $\Delta(\Psi(g^{\mp 1}K^{\pm 1}))=\Delta(x^{\pm
1})=x^{\pm 1}\otimes x^{\pm 1}=(\Psi\otimes\Psi)\Delta(g^{\mp 1}K^{\pm 1})$.
## 3\. Finite-dimensional representations of $U_{g,h}$
Let $q$ be a nonzero element in an algebraically closed field ${\mathbbm{K}}$
with characteristic zero. Moreover, we assume that $q$ is not a root of unity.
The main purpose of this section is to classify all extensions between two
finite-dimensional simple $U_{g,h}$-modules. Let us start with a description
of the finite-dimensional simple $U_{g,h}$-modules.
For any three elements
$\lambda,\alpha,\beta\in{\mathbbm{K}}^{\times}(=\mathbbm{K}\setminus\\{0\\})$
and any $U_{g,h}$-module $V$, let
$V^{\lambda,\alpha,\beta}=\\{v\in V|Kv=\lambda v,gv=\alpha v,hv=\beta v\\}.$
The triple $(\lambda,\alpha,\beta)$ is called a weight of $V$ if
$V^{\lambda,\alpha,\beta}\neq 0$. A nonzero vector in
$V^{\lambda,\alpha,\beta}$ is called a weight vector with weight
$(\lambda,\alpha,\beta)$.
The next result is proved by a standard argument.
###### Lemma 3.1.
We have $EV^{\lambda,\alpha,\beta}\subseteq V^{q^{2}\lambda,\alpha,\beta}$ and
$FV^{\lambda,\alpha,\beta}\subseteq V^{q^{-2}\lambda,\alpha,\beta}$.
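Indeed, for $v\in V^{\lambda,\alpha,\beta}$, relations (1.2) and (1.3) give
$K(Ev)=q^{2}E(Kv)=q^{2}\lambda Ev,\qquad g(Ev)=E(gv)=\alpha Ev,\qquad h(Ev)=E(hv)=\beta Ev,$
and similarly $K(Fv)=q^{-2}\lambda Fv$, $g(Fv)=\alpha Fv$, $h(Fv)=\beta Fv$.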
###### Definition 3.2.
Let $V$ be a $U_{g,h}$-module and
$(\lambda,\alpha,\beta)\in{\mathbbm{K}}^{\times 3}$. A nonzero vector $v$ of
$V$ is a highest weight vector of weight $(\lambda,\alpha,\beta)$ if
$Ev=0,\quad Kv=\lambda v,\quad gv=\alpha v,\quad hv=\beta v.$
A $U_{g,h}$-module $V$ is a standard cyclic module with highest weight
$(\lambda,\alpha,\beta)$ if it is generated by a highest weight vector $v$ of
weight $(\lambda,\alpha,\beta)$.
###### Proposition 3.3.
Any nonzero finite-dimensional $U_{g,h}$-module contains a highest weight
vector. Moreover, the endomorphisms induced by $E$ and $F$ are nilpotent.
###### Proof.
By Lie's theorem, there is a nonzero vector $w\in V$ and $(\mu,\alpha,\beta)$
$\in\mathbbm{K}^{\times 3}$ such that
$Kw=\mu w,\qquad gw=\alpha w,\qquad hw=\beta w.$
In fact, there is an elementary and more direct proof as follows. Since
${\mathbbm{K}}$ is algebraically closed and $V$ is finite-dimensional, there
is a nonzero vector $v\in V$ such that $Kv=\mu v$ for some element
$\mu\in\mathbb{K}$. Moreover $\mu\in\mathbb{K}^{\times}$ as $K$ is invertible.
Let
$V_{\mu}=\\{v\in V|Kv=\mu v\\}\not=0.$
Then $V_{\mu}$ is also a finite-dimensional vector space. For any $v\in
V_{\mu}$, we have
$K(gv)=g(Kv)=\mu gv.$
So $gv\in V_{\mu}$ and $g$ induces a linear transformation on the nonzero
finite-dimensional vector space $V_{\mu}$. There is a nonzero vector
$v^{\prime}\in V_{\mu}$ such that $gv^{\prime}=\alpha v^{\prime}$ for some
nonzero element $\alpha\in\mathbb{K}$. Let $V_{\mu,\alpha}=\\{v^{\prime}\in
V_{\mu}|gv^{\prime}=\alpha v^{\prime}\\}$. Then $V_{\mu,\alpha}$ is also a
nonzero finite-dimensional linear space. Similarly we can prove that
$h(V_{\mu,\alpha})\subseteq V_{\mu,\alpha}$ as $gh=hg$, $hK$ $=Kh$ by (1.2)
and (1.3). Hence there exists a nonzero vector $w\in V_{\mu,\alpha}$ and
$(\mu,\alpha,\beta)$ $\in\mathbbm{K}^{\times 3}$ such that
$Kw=\mu w,\qquad gw=\alpha w,\qquad hw=\beta w.$
The proof now follows [6, Proposition VI.3.3], using Lemma 3.1. ∎
For any positive integer $m$, let $[m]=\frac{q^{m}-q^{-m}}{q-q^{-1}}$, and
$[m]!=[1][2]\cdots[m]$. Similar to the proof of [6, Lemma VI.3.4], we get the
following:
###### Lemma 3.4.
Let $v$ be a highest weight vector of weight $(\lambda,\alpha,\beta)$. Set
$v_{p}=\frac{1}{[p]!}F^{p}v$ for $p>0$ and $v_{0}=v$. Then
$Kv_{p}=q^{-2p}\lambda v_{p},\qquad gv_{p}=\alpha v_{p},\qquad
Fv_{p-1}=[p]v_{p},\qquad hv_{p}=\beta v_{p}$
and
(3.1) $\displaystyle
Ev_{p}=\frac{q^{-(p-1)}\lambda-q^{p-1}\lambda^{-1}\alpha^{2}}{q-q^{-1}}v_{p-1}.$
###### Theorem 3.5.
(a) Let $V$ be a finite-dimensional $U_{g,h}$-module generated by a highest
weight vector $v$ of weight $(\lambda,\alpha,\beta)$. Then
(i) $\lambda=\varepsilon{\alpha}q^{n}$, where $\varepsilon=\pm 1$ and $n$ is
the integer defined by $\dim V=n+1$.
(ii) Setting $v_{p}=\frac{1}{[p]!}F^{p}v$, we have $v_{p}=0$ for $p>n$ and in
addition the set $\\{v=v_{0},v_{1},\cdots,v_{n}\\}$ is a basis of $V$.
(iii) The operator $K$ acting on $V$ is diagonalizable with $(n+1)$ distinct
eigenvalues
$\\{\varepsilon{\alpha}q^{n},\varepsilon{\alpha}q^{n-2},\cdots,\varepsilon{\alpha}q^{-n+2},\varepsilon{\alpha}q^{-n}\\},$
and the operators $g,h$ act on $V$ by scalars $\alpha,\beta$ respectively.
(iv) Any other highest weight vector in $V$ is a scalar multiple of $v$ and is
of weight $(\lambda,\alpha,\beta)$.
(v) The module is simple.
(b) Any simple finite-dimensional $U_{g,h}$-module is generated by a highest
weight vector. Two finite-dimensional $U_{g,h}$-modules generated by highest
weight vectors of the same weight are isomorphic.
###### Proof.
The proof follows that of [6, Theorem VI.3.5] or [13, Theorem 3.4]. It is
omitted here. ∎
Let us denote the $(n+1)$-dimensional simple $U_{g,h}$-module generated by a
highest weight vector $v$ of weight $(\varepsilon\alpha q^{n},\alpha,\beta)$
in Theorem 3.5 by $V_{\varepsilon,n,\alpha,\beta}$. Since $\mathbbm{K}$ is an
algebraically closed field, the dimension of a simple module over
$\mathbbm{K}[g^{\pm 1},h^{\pm 1}]$ is equal to one. Any such simple
$\mathbbm{K}[g^{\pm 1},h^{\pm 1}]$-module is determined by $g\cdot
1=\alpha,h\cdot 1=\beta,$ for $\alpha,\beta\in\mathbbm{K}^{\times}$. This
simple module is denoted by $\mathbbm{K}_{\alpha,\beta}:=\mathbb{K}\cdot 1$ in
the sequel. The finite-dimensional simple $U_{q}(\mathfrak{sl}(2))$-modules
are characterized in [6, Theorem VI.3.5]. These simple modules are denoted by
$V_{\varepsilon,n}$, where $\varepsilon=\pm 1$, and $n\in\mathbb{Z}_{\geq 0}$.
By Proposition 2.2 and [8, Proposition 16.1], every finite-dimensional simple
$U_{g,h}$-module is isomorphic to $\mathbbm{K}_{\alpha,\beta}\otimes
V_{\varepsilon,n}$. It is not difficult to verify that
$\mathbbm{K}_{\alpha,\beta}\otimes V_{\varepsilon,n}$ is isomorphic to
$V_{\varepsilon,n,\alpha,\beta}$.
###### Corollary 3.6 (Clebsch-Gordan Formula).
Let $n\geq m$ be two non-negative integers. There exists an isomorphism of
$U_{g,h}$-modules
$V_{\varepsilon,n,\alpha,\beta}\otimes
V_{\varepsilon^{\prime},m,\alpha^{\prime},\beta^{\prime}}\cong
V_{\varepsilon\varepsilon^{\prime},n+m,\alpha\alpha^{\prime},\beta\beta^{\prime}}\oplus
V_{\varepsilon\varepsilon^{\prime},n+m-2,\alpha\alpha^{\prime},\beta\beta^{\prime}}\oplus\cdots\oplus
V_{\varepsilon\varepsilon^{\prime},n-m,\alpha\alpha^{\prime},\beta\beta^{\prime}}.$
###### Proof.
Since $V_{\varepsilon,n,\alpha,\beta}\otimes
V_{\varepsilon^{\prime},m,\alpha^{\prime},\beta^{\prime}}\cong\mathbbm{K}_{\alpha\alpha^{\prime},\beta\beta^{\prime}}\otimes(V_{\varepsilon,n}\otimes
V_{\varepsilon^{\prime},m})$, and
$V_{\varepsilon,n}\otimes V_{\varepsilon^{\prime},m}\cong
V_{\varepsilon\varepsilon^{\prime},n+m}\oplus
V_{\varepsilon\varepsilon^{\prime},n+m-2}\oplus\cdots\oplus
V_{\varepsilon\varepsilon^{\prime},n-m}$
as modules over $U_{q}(\mathfrak{sl}(2))$ by [6, Theorem VII.7.1],
$V_{\varepsilon,n,\alpha,\beta}\otimes
V_{\varepsilon^{\prime},m,\alpha^{\prime},\beta^{\prime}}\cong
V_{\varepsilon\varepsilon^{\prime},n+m,\alpha\alpha^{\prime},\beta\beta^{\prime}}\oplus
V_{\varepsilon\varepsilon^{\prime},n+m-2,\alpha\alpha^{\prime},\beta\beta^{\prime}}\oplus\cdots\oplus
V_{\varepsilon\varepsilon^{\prime},n-m,\alpha\alpha^{\prime},\beta\beta^{\prime}}.$
This completes the proof.∎
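As a consistency check, the dimensions of the two sides agree:
$\sum_{k=0}^{m}(n+m-2k+1)=(m+1)(n+m+1)-m(m+1)=(m+1)(n+1)=\dim\left(V_{\varepsilon,n,\alpha,\beta}\otimes V_{\varepsilon^{\prime},m,\alpha^{\prime},\beta^{\prime}}\right).$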
###### Lemma 3.7.
Let $m\in\mathbbm{N}$. Then
$[E,F^{m}]=[m]F^{m-1}\frac{q^{-(m-1)}K-q^{m-1}K^{-1}g^{2}}{q-q^{-1}}.$
###### Proof.
Let $E^{\prime},F^{\prime},K^{\prime}$ be the Chevalley generators of
$U_{q}(\mathfrak{sl}(2))$. Then
$[E^{\prime},F^{\prime m}]=[m]F^{\prime
m-1}\frac{q^{-(m-1)}K^{\prime}-q^{m-1}K^{\prime-1}}{q-q^{-1}},$
by [6, Lemma VI.1.3]. Substituting $Eg^{-1}$, $g^{-1}K$, $F$ for
$E^{\prime},K^{\prime},F^{\prime}$ in the above identity respectively, we
obtain
$[E,F^{m}]=[m]F^{m-1}\frac{q^{-(m-1)}K-q^{m-1}K^{-1}g^{2}}{q-q^{-1}}.$
∎
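For $m=1$, Lemma 3.7 reduces to the defining relation (1.4):
$[E,F]=[1]F^{0}\,\frac{q^{0}K-q^{0}K^{-1}g^{2}}{q-q^{-1}}=\frac{K-K^{-1}g^{2}}{q-q^{-1}}.$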
Let $M:=\mathbb{K}_{\alpha,\beta}=\mathbb{K}\cdot 1$ be a module over
$\mathbb{K}[g^{\pm 1},h^{\pm 1}]$, where $g\cdot 1=\alpha$ and $h\cdot
1=\beta$. Concerning the simple modules over the algebra $\mathbb{K}[g^{\pm 1},h^{\pm 1}]$, we have the following:
###### Proposition 3.8.
Given two simple $\mathbb{K}[g^{\pm 1},h^{\pm 1}]$-modules
$M:=\mathbb{K}_{\alpha,\beta}$ and
$M^{\prime}:=\mathbb{K}_{\alpha^{\prime},\beta^{\prime}},$ if $M$ is not
isomorphic to $M^{\prime}$, then $\operatorname{Ext}^{n}(M^{\prime},M)=0$ for
all $n\geq 0$; if $M\cong M^{\prime}$, then
$\operatorname{Ext}^{n}(M^{\prime},M)\cong\left\{\begin{array}{lr}\mathbb{K},&n=0\\ \mathbb{K}^{2},&n=1\\ \mathbb{K},&n=2\\ 0,&n\geq 3.\end{array}\right.$
###### Proof.
Denote the algebra $\mathbb{K}[g^{\pm 1},h^{\pm 1}]$ by $R$. Construct a
projective resolution of the simple $R$-module $\mathbb{K}_{\alpha,\beta}$ as
follows:
(3.2) $0\longrightarrow R\xrightarrow{\ \varphi_{2}\ }R^{2}\xrightarrow{\ \varphi_{1}\ }R\xrightarrow{\ \varphi_{0}\ }\mathbb{K}_{\alpha,\beta}\longrightarrow 0,$
where
$\varphi_{0}(r(g,h))=r(\alpha,\beta),\qquad\varphi_{1}(r(g,h),s(g,h))=r(g,h)(g-\alpha)+s(g,h)(h-\beta),$
and
$\varphi_{2}(r(g,h))=(r(g,h)(h-\beta),-r(g,h)(g-\alpha))$
for $r(g,h),s(g,h)\in R$. Applying the functor $\operatorname{Hom}_{R}(-,$
$\mathbb{K}_{\alpha,\beta})$ to the exact sequence (3.2), we obtain the
following complex:
(3.3) $0\longrightarrow\operatorname{Hom}_{R}(R,\mathbb{K}_{\alpha,\beta})\xrightarrow{\ \varphi_{1}^{*}\ }\operatorname{Hom}_{R}(R^{2},\mathbb{K}_{\alpha,\beta})\xrightarrow{\ \varphi_{2}^{*}\ }\operatorname{Hom}_{R}(R,\mathbb{K}_{\alpha,\beta})\longrightarrow 0.$
For any $\theta\in\operatorname{Hom}_{R}(R,\mathbb{K}_{\alpha,\beta})$,
$\varphi^{*}_{1}(\theta)((1,0))=\theta(g-\alpha)=(g-\alpha)\theta(1)=0,$
$\varphi^{*}_{1}(\theta)((0,1))=\theta(h-\beta)=(h-\beta)\theta(1)=0.$
This means that $\varphi^{*}_{1}=0$. Similarly, one can prove that
$\varphi_{2}^{*}=0$. So
$\operatorname{Ext}^{0}(\mathbb{K}_{\alpha,\beta},\mathbb{K}_{\alpha,\beta})\cong\mathbb{K},\qquad\operatorname{Ext}^{1}(\mathbb{K}_{\alpha,\beta},\mathbb{K}_{\alpha,\beta})\cong\mathbb{K}^{2},$
$\operatorname{Ext}^{2}(\mathbb{K}_{\alpha,\beta},\mathbb{K}_{\alpha,\beta})\cong\mathbb{K},\qquad\operatorname{Ext}^{n}(\mathbb{K}_{\alpha,\beta},\mathbb{K}_{\alpha,\beta})=0$
for $n\geq 3$.
If we use the functor
$\operatorname{Hom}_{R}(-,\mathbb{K}_{\alpha^{\prime},\beta^{\prime}})$ to
replace the functor $\operatorname{Hom}_{R}(-,\mathbb{K}_{\alpha,\beta})$ in
the above proof, we can also obtain the complex (3.3). In this case, we have
$\varphi_{1}^{*}(\theta)((1,0))=\alpha^{\prime}-\alpha,\qquad\varphi_{1}^{*}(\theta)((0,1))=\beta^{\prime}-\beta,$
and
$\varphi_{2}^{*}(\eta)(a)=(\beta^{\prime}-\beta)a\eta((1,0))-(\alpha^{\prime}-\alpha)a\eta((0,1))$
for
$\theta\in\operatorname{Hom}_{R}(R,\mathbb{K}_{\alpha^{\prime},\beta^{\prime}})$,
$\eta\in\operatorname{Hom}_{R}(R^{2},\mathbb{K}_{\alpha^{\prime},\beta^{\prime}})$,
and $a\in R$. Hence both $\varphi_{1}^{*}$ and $\varphi_{2}^{*}$ are not zero
linear mappings provided that either $\alpha\neq\alpha^{\prime}$, or
$\beta\neq\beta^{\prime}$. Consequently, the sequence (3.3) is exact in this
case. So
$\operatorname{Ext}^{n}(\mathbb{K}_{\alpha,\beta},\mathbb{K}_{\alpha^{\prime},\beta^{\prime}})=0$
for $n\geq 0$. ∎
It is well-known that the group $\operatorname{Ext}^{1}(M^{\prime},M)$ can be
described by short exact sequences. Next, we describe
$\operatorname{Ext}^{1}(\mathbb{K}_{\alpha,\beta},\mathbb{K}_{\alpha,\beta})$
by short exact sequences.
Let
$0\longrightarrow\mathbb{K}_{\alpha,\beta}\xrightarrow{\ \varphi\ }N\xrightarrow{\ \psi\ }\mathbb{K}_{\alpha,\beta}\longrightarrow 0$
be an element in
$\operatorname{Ext}^{1}(\mathbb{K}_{\alpha,\beta},\mathbb{K}_{\alpha,\beta})$.
Suppose $\\{w_{1},w_{2}\\}$ is a basis of $N$ such that $\psi(w_{2})=1$ and
$w_{1}=\varphi(1)$. Then $gw_{1}=\alpha w_{1}$, $hw_{1}=\beta w_{1}$. Suppose
$gw_{2}=aw_{2}+xw_{1}$. Then
$\alpha\psi(w_{2})=\psi(gw_{2})=a\psi(w_{2}).$
So $gw_{2}=\alpha w_{2}+xw_{1}$. Similarly, we can prove $hw_{2}=\beta
w_{2}+yw_{1}$. If $\\{u_{1},u_{2}\\}$ is another basis satisfying
$u_{1}=\varphi(1)=w_{1}$ and $\psi(u_{2})=1$, then $u_{2}-w_{2}=\lambda w_{1}$
for some $\lambda\in\mathbb{K}$. Thus $gu_{2}=\alpha
w_{2}+xw_{1}+\lambda\alpha w_{1}=\alpha u_{2}+xu_{1}$. Similarly, we obtain
that $hu_{2}=\beta u_{2}+yu_{1}$. Hence $x,y$ are independent of the choice of
the bases of $N$. So we can use $M_{x,y}$ to denote the module $N$. In the
following, we abuse notation and use $M_{x,y}$ to denote the following exact
sequence
$0\longrightarrow\mathbb{K}_{\alpha,\beta}\xrightarrow{\ \varphi\ }M_{x,y}\xrightarrow{\ \psi\ }\mathbb{K}_{\alpha,\beta}\longrightarrow 0$
as well.
Let
$0\longrightarrow\mathbb{K}_{\alpha,\beta}\xrightarrow{\ \varphi^{\prime}\ }M_{x^{\prime},y^{\prime}}\xrightarrow{\ \psi^{\prime}\ }\mathbb{K}_{\alpha,\beta}\longrightarrow 0$
be another element in
$\operatorname{Ext}^{1}(\mathbb{K}_{\alpha,\beta},\mathbb{K}_{\alpha,\beta})$,
and $\\{w_{1}^{\prime},w_{2}^{\prime}\\}$ be a basis of
$M_{x^{\prime},y^{\prime}}$ such that $w_{1}^{\prime}=\varphi^{\prime}(1),$
$\psi^{\prime}(w_{2}^{\prime})=1$ and
$gw_{1}^{\prime}=\alpha w_{1}^{\prime},\qquad gw_{2}^{\prime}=\alpha
w_{2}^{\prime}+x^{\prime}w_{1}^{\prime},$ $hw_{1}^{\prime}=\beta
w_{1}^{\prime},\qquad hw_{2}^{\prime}=\beta
w_{2}^{\prime}+y^{\prime}w_{1}^{\prime}.$
Consider the following commutative diagram
$\begin{array}{ccccccccc}
0&\longrightarrow&\mathbb{K}_{\alpha,\beta}&\xrightarrow{\ \varphi\ }&N&\xrightarrow{\ \psi\ }&\mathbb{K}_{\alpha,\beta}&\longrightarrow&0\\
&&\downarrow\mu_{1}&&\downarrow\mu_{2}&&\downarrow\mu_{3}&&\\
0&\longrightarrow&\mathbb{K}_{\alpha,\beta}&\xrightarrow{\ \varphi^{\prime}\ }&N^{\prime}&\xrightarrow{\ \psi^{\prime}\ }&\mathbb{K}_{\alpha,\beta}&\longrightarrow&0,
\end{array}$
where $\mu_{i}$ are isomorphisms. Then
$\mu_{2}(w_{1})=\mu_{2}(\varphi(1))=\varphi^{\prime}\mu_{1}(1)=\mu_{1}(1)w_{1}^{\prime}$.
Suppose $\mu_{2}(w_{2})=aw_{1}^{\prime}+bw_{2}^{\prime}$. Then
$g\mu_{2}(w_{2})=(a\alpha+bx^{\prime})w_{1}^{\prime}+b\alpha w_{2}^{\prime},$
and
$\mu_{2}(gw_{2})=\mu_{2}(\alpha w_{2}+xw_{1})=(a\alpha+x\mu_{1}(1))w_{1}^{\prime}+b\alpha w_{2}^{\prime}.$
Since $g\mu_{2}(w_{2})=\mu_{2}(gw_{2}),$ $bx^{\prime}=x\mu_{1}(1)$. Similarly,
we have $by^{\prime}=y\mu_{1}(1)$. Moreover,
$\mu_{3}(1)=\mu_{3}(\psi(w_{2}))=\psi^{\prime}(\mu_{2}(w_{2}))=b.$
If $\mu_{1}(1)=\mu_{3}(1)=1$, then $b=1$ and $(x,y)=(x^{\prime},y^{\prime})$.
Thus $M_{x,y}=M_{x^{\prime},y^{\prime}}$ as elements in the group
$\operatorname{Ext}^{1}(\mathbb{K}_{\alpha,\beta},\mathbb{K}_{\alpha,\beta})$
if and only if $(x,y)=(x^{\prime},y^{\prime})$.
From Proposition 3.8, we know that
$\operatorname{Ext}^{1}(\mathbb{K}_{\alpha,\beta},\mathbb{K}_{\alpha,\beta})$
is a vector space over $\mathbb{K}$. To describe the operations of the vector
space
$\operatorname{Ext}^{1}(\mathbb{K}_{\alpha,\beta},\mathbb{K}_{\alpha,\beta})$
in the terms of exact sequences, we use $I$ to denote the ideal of
$R=\mathbb{K}[g^{\pm 1},h^{\pm 1}]$ generated by $g-\alpha$ and $h-\beta$,
i.e., $I=R(g-\alpha)+R(h-\beta)$. Let $\xi$ be the embedding homomorphism, and
$f$ be the epimorphism of $R$-modules from $R$ to $\mathbb{K}_{\alpha,\beta}$,
given by
$f(a(g,h))=a(\alpha,\beta),\quad a(g,h)\in R.$
Then we have the following exact sequence of $R$-modules:
(3.4) $0\longrightarrow I\xrightarrow{\ \xi\ }R\xrightarrow{\ f\ }\mathbb{K}_{\alpha,\beta}\longrightarrow 0.$
Applying the functor $\operatorname{Hom}_{R}(-,\mathbb{K}_{\alpha,\beta})$ to
the exact sequence (3.4) yields the exact sequence
(3.5) $\operatorname{Hom}_{R}(R,\mathbb{K}_{\alpha,\beta})\xrightarrow{\ \tau\ }\operatorname{Hom}_{R}(I,\mathbb{K}_{\alpha,\beta})\xrightarrow{\ \partial\ }\operatorname{Ext}^{1}(\mathbb{K}_{\alpha,\beta},\mathbb{K}_{\alpha,\beta})\longrightarrow 0.$
For any exact sequence of $R$-modules
$0\longrightarrow\mathbb{K}_{\alpha,\beta}\xrightarrow{\ \varphi\ }M_{x,y}\xrightarrow{\ \psi\ }\mathbb{K}_{\alpha,\beta}\longrightarrow 0,$
and a basis $\\{w_{1},w_{2}\\}$ of $M_{x,y}$ satisfying $\psi(w_{2})=1$,
$w_{1}=\varphi(1)$, define a homomorphism of $R$-modules
$\sigma:R\rightarrow M_{x,y},\qquad\sigma(1)=w_{2}.$
Let $\eta_{x,y}$ be a homomorphism of $R$-modules from $I$ to
$\mathbb{K}_{\alpha,\beta}$, where
(3.6) $\eta_{x,y}\bigl(a(g,h)(g-\alpha)+b(g,h)(h-\beta)\bigr)=xa(\alpha,\beta)+yb(\alpha,\beta),$
for $a(g,h),b(g,h)\in R$. Now, we have the following commutative diagram:
$\begin{array}{ccccccccc}
0&\longrightarrow&I&\xrightarrow{\ \xi\ }&R&\xrightarrow{\ f\ }&\mathbb{K}_{\alpha,\beta}&\longrightarrow&0\\
&&\downarrow\eta_{x,y}&&\downarrow\sigma&&\downarrow\operatorname{id}&&\\
0&\longrightarrow&\mathbb{K}_{\alpha,\beta}&\xrightarrow{\ \varphi\ }&M_{x,y}&\xrightarrow{\ \psi\ }&\mathbb{K}_{\alpha,\beta}&\longrightarrow&0.
\end{array}$
It is easy to check that $M_{x,y}$ is the pushout of $\eta_{x,y}$ and $\xi$.
If we use $M_{kx,ky}$ for any $k\in\mathbb{K}$ to replace $M_{x,y}$, we get a
homomorphism $\eta_{kx,ky}$ from $I$ to $\mathbb{K}_{\alpha,\beta}$.
Similarly, we have a homomorphism $\eta_{x+x^{\prime},y+y^{\prime}}$ from $I$
to $\mathbb{K}_{\alpha,\beta}$ by using $M_{x+x^{\prime},y+y^{\prime}}$ to
replace $M_{x,y}$. From the definition of $\eta_{x,y}$, one obtains the
following:
(3.15)
$\displaystyle\eta_{kx,ky}=k\eta_{x,y},\qquad\eta_{x+x^{\prime},y+y^{\prime}}=\eta_{x,y}+\eta_{x^{\prime},y^{\prime}}.$
Define
$M_{x,y}\boxplus M_{x^{\prime},y^{\prime}}=M_{x+x^{\prime},y+y^{\prime}},\quad
k\boxdot M_{x,y}=M_{kx,ky},$
for $k\in\mathbb{K}$. Then $\\{M_{x,y}|x,y\in\mathbb{K}\\}$ becomes a vector
space over $\mathbb{K}.$ By [12, Theorem 3.4.3], we have a bijection
$\Psi_{1}$ from $\\{M_{x,y}|x,y\in\mathbb{K}\\}$ to
$\operatorname{Ext}^{1}(\mathbb{K}_{\alpha,\beta},\mathbb{K}_{\alpha,\beta})$
such that
$\Psi_{1}(M_{x,y})=\partial(\eta_{x,y})\in\operatorname{Ext}^{1}(\mathbb{K}_{\alpha,\beta},\mathbb{K}_{\alpha,\beta}).$
It follows from (3.15) that
$\Psi_{1}(M_{kx,ky})=k\Psi_{1}(M_{x,y}),\qquad\Psi_{1}(M_{x+x^{\prime},y+y^{\prime}})=\Psi_{1}(M_{x,y})+\Psi_{1}(M_{x^{\prime},y^{\prime}}).$
Thus $\Psi_{1}$ is an isomorphism of vector spaces.
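In particular, since the classes of $M_{1,0}$ and $M_{0,1}$ span the vector space $\\{M_{x,y}|x,y\in\mathbb{K}\\}$ defined above, $\operatorname{Ext}^{1}(\mathbb{K}_{\alpha,\beta},\mathbb{K}_{\alpha,\beta})$ is two-dimensional over $\mathbb{K}$.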
###### Proposition 3.9.
Let $V_{\varepsilon,n}$ be a simple $U_{q}(\mathfrak{sl}(2))$-module with a
basis $\\{v_{0},\cdots,v_{n}\\}$ satisfying
$E^{\prime}v_{0}=0,E^{\prime}v_{p}=\varepsilon[n-p+1]v_{p-1}$,
$v_{p}=\frac{F^{\prime p}}{[p]!}v_{0}$, for $p=1,\cdots,n$;
$F^{\prime}v_{n}=0$, $K^{\prime}v_{p}=\varepsilon q^{n-2p}v_{p}$ for
$p=0,\cdots,n$, where $E^{\prime},K^{\prime},F^{\prime}$ are Chevalley
generators of $U_{q}(\mathfrak{sl}(2))$. Then
$V_{\varepsilon,n}\otimes
M_{x,y}\in\operatorname{Ext}^{1}(V_{\varepsilon,n,\alpha,\beta},V_{\varepsilon,n,\alpha,\beta}),$
where
$M_{x,y}\in\operatorname{Ext}^{1}(\mathbb{K}_{\alpha,\beta},\mathbb{K}_{\alpha,\beta})$.
The action of $U_{g,h}$ on $V_{\varepsilon,n}\otimes M_{x,y}$ with the basis
$\\{v_{0}\otimes w_{1},\cdots,v_{n}\otimes w_{1};v_{0}\otimes
w_{2},\cdots,v_{n}\otimes w_{2}\\}$
is given by
(3.19) $\displaystyle\left\\{\begin{array}[]{l}E(v_{0}\otimes
w_{1})=E(v_{0}\otimes w_{2})=F(v_{n}\otimes w_{1})=F(v_{n}\otimes w_{2})=0,\\\
E(v_{p}\otimes w_{1})=E^{\prime}v_{p}\otimes gw_{1}=\varepsilon[n-p+1]\alpha
v_{p-1}\otimes w_{1},\\\ E(v_{p}\otimes
w_{2})=\varepsilon\alpha[n-p+1]v_{p-1}\otimes
w_{2}+\varepsilon[n-p+1]xv_{p-1}\otimes w_{1},\end{array}\right.$
for $p=1,\cdots,n$;
(3.20) $\displaystyle v_{p}\otimes w_{i}=\frac{F^{p}}{[p]!}v_{0}\otimes
w_{i},\quad F(v_{n}\otimes w_{i})=0$
for $p=1,\cdots,n$, $i=1,2$;
(3.23) $\displaystyle\left\\{\begin{array}[]{l}K(v_{p}\otimes
w_{1})=K^{\prime}v_{p}\otimes gw_{1}=\varepsilon\alpha q^{n-2p}(v_{p}\otimes
w_{1}),\\\ K(v_{p}\otimes w_{2})=\varepsilon\alpha q^{n-2p}(v_{p}\otimes
w_{2})+\varepsilon q^{n-2p}x(v_{p}\otimes w_{1}),\end{array}\right.$
for $p=0,1,\cdots,n$; and
(3.26) $\displaystyle\left\\{\begin{array}[]{lr}g(v_{p}\otimes
w_{1})=\alpha(v_{p}\otimes w_{1}),&g(v_{p}\otimes w_{2})=\alpha(v_{p}\otimes
w_{2})+x(v_{p}\otimes w_{1}),\\\ h(v_{p}\otimes w_{1})=\beta(v_{p}\otimes
w_{1}),&h(v_{p}\otimes w_{2})=\beta(v_{p}\otimes w_{2})+y(v_{p}\otimes
w_{1}),\end{array}\right.$
for $p=0,1,\cdots,n$. Moreover, $V_{\varepsilon,n}\otimes-$ is an injective
linear mapping from the linear space
$\operatorname{Ext}^{1}(\mathbb{K}_{\alpha,\beta},\mathbb{K}_{\alpha,\beta})$
to the linear space
$\operatorname{Ext}^{1}(V_{\varepsilon,n,\alpha,\beta},V_{\varepsilon,n,\alpha,\beta})$.
###### Proof.
We only need to prove that the mapping $V_{\varepsilon,n}\otimes-$ is an
injective linear mapping, since it is easy to check the other results.
Consider the following commutative diagram
$\begin{array}{ccccccccc}
0&\longrightarrow&V_{\varepsilon,n}\otimes\mathbb{K}_{\alpha,\beta}&\xrightarrow{\operatorname{id}\otimes\varphi}&V_{\varepsilon,n}\otimes M_{x,y}&\xrightarrow{\operatorname{id}\otimes\psi}&V_{\varepsilon,n}\otimes\mathbb{K}_{\alpha,\beta}&\longrightarrow&0\\
&&{\scriptstyle\operatorname{id}}\big\downarrow&&{\scriptstyle\mu}\big\downarrow&&{\scriptstyle\operatorname{id}}\big\downarrow&&\\
0&\longrightarrow&V_{\varepsilon,n}\otimes\mathbb{K}_{\alpha,\beta}&\xrightarrow{\operatorname{id}\otimes\varphi^{\prime}}&V_{\varepsilon,n}\otimes M_{x^{\prime},y^{\prime}}&\xrightarrow{\operatorname{id}\otimes\psi^{\prime}}&V_{\varepsilon,n}\otimes\mathbb{K}_{\alpha,\beta}&\longrightarrow&0.
\end{array}$
Since $(\operatorname{id}\otimes\psi^{\prime})\mu(v_{0}\otimes
w_{2})=(\operatorname{id}\otimes\psi)(v_{0}\otimes w_{2})=v_{0}\otimes
1=(\operatorname{id}\otimes\psi^{\prime})(v_{0}\otimes w_{2}^{\prime})$,
$\mu(v_{0}\otimes w_{2})=v_{0}\otimes w_{2}^{\prime}+v\otimes w_{1}^{\prime}$
for some $v\in V_{\varepsilon,n}$. Then $g\mu(v_{0}\otimes
w_{2})=\alpha(v_{0}\otimes w_{2}^{\prime}+v\otimes
w_{1}^{\prime})+x^{\prime}v_{0}\otimes w_{1}^{\prime},$ and
$\mu(g(v_{0}\otimes w_{2}))=\mu(\alpha v_{0}\otimes w_{2}+xv_{0}\otimes
w_{1})=\alpha(v_{0}\otimes w_{2}^{\prime}+v\otimes
w_{1}^{\prime})+xv_{0}\otimes w_{1}^{\prime}.$
Since $g\mu(v_{0}\otimes w_{2})=\mu(g(v_{0}\otimes w_{2})),$ we have
$x=x^{\prime}$. Similarly, we can prove that $y=y^{\prime}$. So
$V_{\varepsilon,n}\otimes-$ induces an injective mapping from
$\operatorname{Ext}^{1}(\mathbb{K}_{\alpha,\beta},\mathbb{K}_{\alpha,\beta})$
to
$\operatorname{Ext}^{1}(V_{\varepsilon,n,\alpha,\beta},V_{\varepsilon,n,\alpha,\beta})$.
To prove $V_{\varepsilon,n}\otimes-$ is linear, we choose an exact sequence of
$U_{q}(\mathfrak{sl}(2))$-modules
$0\longrightarrow L\longrightarrow P\xrightarrow{\ f\ }V_{\varepsilon,n}\longrightarrow 0,$
where $P$ is a finitely generated projective $U_{q}(\mathfrak{sl}(2))$-module.
Then $P\otimes R$ is a projective $U_{g,h}$-module, and the kernel $Q$ of $F$
is $\operatorname{Ker}f\otimes R+P\otimes I,$ where $F$ is a homomorphism from
$P\otimes R$ to $V_{\varepsilon,n}\otimes\mathbb{K}_{\alpha,\beta}$ given by
$F(a\otimes b)=f(a)\otimes b\cdot 1$,
$I=R(g-\alpha)+R(h-\beta).$
Applying the functor
$\operatorname{Hom}_{U_{g,h}}(-,V_{\varepsilon,n}\otimes\mathbb{K}_{\alpha,\beta})$
to the exact sequence
$0\longrightarrow Q\longrightarrow P\otimes R\xrightarrow{\ F\ }V_{\varepsilon,n}\otimes\mathbb{K}_{\alpha,\beta}\longrightarrow 0$
yields an exact sequence
(3.29) $\displaystyle 0\longrightarrow\operatorname{Hom}_{U_{g,h}}(A,A)\longrightarrow\operatorname{Hom}_{U_{g,h}}(P\otimes R,A)\longrightarrow\operatorname{Hom}_{U_{g,h}}(Q,A)\xrightarrow{\ \partial\ }\operatorname{Ext}^{1}(A,A)\longrightarrow 0,$
where $A=V_{\varepsilon,n}\otimes\mathbb{K}_{\alpha,\beta}.$ Define a
homomorphism of $U_{g,h}$-modules $\sigma:P\otimes R\rightarrow
V_{\varepsilon,n}\otimes M_{x,y}$ by
$\sigma(a\otimes b)=f(a)\otimes b\cdot w_{2},\qquad a\otimes b\in P\otimes R,$
and a homomorphism of $U_{g,h}$-modules $\zeta_{x,y}:Q\rightarrow
V_{\varepsilon,n}\otimes\mathbb{K}_{\alpha,\beta}$ by
$\zeta_{x,y}(a\otimes b)=\left\\{\begin{array}{ll}0,&a\otimes b\in\operatorname{Ker}f\otimes R,\\ \eta_{x,y}(b)f(a)\otimes 1,&a\otimes b\in P\otimes(R(g-\alpha)+R(h-\beta)),\end{array}\right.$
where $\eta_{x,y}$ is defined by (3.14). Let $\nu$ be the embedding mapping
from $Q$ to $P\otimes R$. Then we have the following commutative diagram:
$\begin{array}{ccccccccc}
0&\longrightarrow&Q&\xrightarrow{\ \nu\ }&P\otimes R&\xrightarrow{\ F\ }&V_{\varepsilon,n}\otimes\mathbb{K}_{\alpha,\beta}&\longrightarrow&0\\
&&{\scriptstyle\zeta_{x,y}}\big\downarrow&&{\scriptstyle\sigma}\big\downarrow&&{\scriptstyle\operatorname{id}}\big\downarrow&&\\
0&\longrightarrow&V_{\varepsilon,n}\otimes\mathbb{K}_{\alpha,\beta}&\xrightarrow{1\otimes\varphi}&V_{\varepsilon,n}\otimes M_{x,y}&\xrightarrow{1\otimes\psi}&V_{\varepsilon,n}\otimes\mathbb{K}_{\alpha,\beta}&\longrightarrow&0.
\end{array}$
It is easy to check that $V_{\varepsilon,n}\otimes M_{x,y}$ is the pushout of
$\zeta_{x,y}$ and $\nu$. If we use $M_{kx,ky}$ for any $k\in\mathbb{K}$ to
replace $M_{x,y}$, we will obtain a homomorphism $\zeta_{kx,ky}$ from $Q$ to
$V_{\varepsilon,n}\otimes\mathbb{K}_{\alpha,\beta}$. Similarly, we get a
homomorphism $\zeta_{x+x^{\prime},y+y^{\prime}}$ from $Q$ to
$V_{\varepsilon,n}\otimes\mathbb{K}_{\alpha,\beta}$ by using
$M_{x+x^{\prime},y+y^{\prime}}$ to replace $M_{x,y}$. From the definitions of
these mappings and (3.15), we obtain the following
(3.30)
$\displaystyle\zeta_{kx,ky}=k\zeta_{x,y},\qquad\zeta_{x+x^{\prime},y+y^{\prime}}=\zeta_{x,y}+\zeta_{x^{\prime},y^{\prime}}.$
We abuse notation and write $V_{\varepsilon,n}\otimes M_{x,y}$ for the
following exact sequence
$0\longrightarrow V_{\varepsilon,n}\otimes\mathbb{K}_{\alpha,\beta}\xrightarrow{1\otimes\varphi}V_{\varepsilon,n}\otimes M_{x,y}\xrightarrow{1\otimes\psi}V_{\varepsilon,n}\otimes\mathbb{K}_{\alpha,\beta}\longrightarrow 0.$
Define
$(V_{\varepsilon,n}\otimes M_{x,y})\boxplus(V_{\varepsilon,n}\otimes
M_{x^{\prime},y^{\prime}})=V_{\varepsilon,n}\otimes
M_{x+x^{\prime},y+y^{\prime}},\quad k\boxdot(V_{\varepsilon,n}\otimes
M_{x,y})=V_{\varepsilon,n}\otimes M_{kx,ky},$
for $k\in\mathbb{K}.$ Then $\\{V_{\varepsilon,n}\otimes
M_{x,y}|x,y\in\mathbb{K}\\}$ becomes a vector space with the above operations.
By [12, Theorem 3.4.3], we have an injective linear mapping $\Psi_{2}$ from
$\\{V_{\varepsilon,n}\otimes M_{x,y}|x,y\in\mathbb{K}\\}$
to
$\operatorname{Ext}^{1}(V_{\varepsilon,n}\otimes\mathbb{K}_{\alpha,\beta},V_{\varepsilon,n}\otimes\mathbb{K}_{\alpha,\beta})$
such that
$\Psi_{2}(V_{\varepsilon,n}\otimes
M_{x,y})=\partial(\zeta_{x,y})\in\operatorname{Ext}^{1}(V_{\varepsilon,n}\otimes\mathbb{K}_{\alpha,\beta},V_{\varepsilon,n}\otimes\mathbb{K}_{\alpha,\beta}).$
Therefore,
$\Psi_{2}(V_{\varepsilon,n}\otimes
M_{kx,ky})=\partial(k\zeta_{x,y})=k\Psi_{2}(V_{\varepsilon,n}\otimes M_{x,y})$
and
$\Psi_{2}(V_{\varepsilon,n}\otimes
M_{x+x^{\prime},y+y^{\prime}})=\Psi_{2}(V_{\varepsilon,n}\otimes
M_{x,y})+\Psi_{2}(V_{\varepsilon,n}\otimes M_{x^{\prime},y^{\prime}})$
by (3.30). Since $\Psi_{2}$ is injective and linear,
$V_{\varepsilon,n}\otimes k\boxdot M_{x,y}=V_{\varepsilon,n}\otimes
M_{kx,ky}=k\boxdot(V_{\varepsilon,n}\otimes M_{x,y})$
and
$V_{\varepsilon,n}\otimes(M_{x,y}\boxplus
M_{x^{\prime},y^{\prime}})=V_{\varepsilon,n}\otimes
M_{x+x^{\prime},y+y^{\prime}}=(V_{\varepsilon,n}\otimes
M_{x,y})\boxplus(V_{\varepsilon,n}\otimes M_{x^{\prime},y^{\prime}}).$
By now, we have completed the proof. ∎
We now completely classify all extensions between two finite-dimensional
simple $U_{g,h}$-modules.
###### Theorem 3.10.
Suppose $q$ is not a root of unity. Given two simple $\mathbb{K}[g^{\pm
1},h^{\pm 1}]$-modules $\mathbb{K}_{\alpha,\beta}$,
$\mathbb{K}_{\alpha^{\prime},\beta^{\prime}}$ and a finite-dimensional simple
$U_{q}(\mathfrak{sl}(2))$-module $V_{\varepsilon,n}$, the assignment
$V_{\varepsilon,n}\otimes-$ is an isomorphism of vector spaces from
$\operatorname{Ext}^{1}(\mathbb{K}_{\alpha^{\prime},\beta^{\prime}},\mathbb{K}_{\alpha,\beta})$
to $\operatorname{Ext}^{1}(M^{\prime},M)$. Here,
$M:=V_{\varepsilon,n}\otimes\mathbb{K}_{\alpha,\beta}\cong
V_{\varepsilon,n,\alpha,\beta},\qquad
M^{\prime}:=V_{\varepsilon,n}\otimes\mathbb{K}_{\alpha^{\prime},\beta^{\prime}}\cong
V_{\varepsilon,n,\alpha^{\prime},\beta^{\prime}}.$
Moreover,
$\operatorname{Ext}^{1}(V_{\varepsilon,m,\alpha,\beta},V_{\varepsilon^{\prime},n,\alpha^{\prime},\beta^{\prime}})=0$
provided that
$(\varepsilon,m,\alpha,\beta)\neq(\varepsilon^{\prime},n,\alpha^{\prime},\beta^{\prime})$.
###### Proof.
Let $C$ be the Casimir element of $U_{g,h}$ defined in Corollary 2.4,
$d_{m,n}=\frac{q^{m+1}}{(q^{m-n}-\varepsilon\varepsilon^{\prime})(q^{m+n+2}-\varepsilon\varepsilon^{\prime})}\left(C-\varepsilon^{\prime}\alpha^{\prime}\frac{q^{n+1}+q^{-n-1}}{(q-q^{-1})^{2}}\right),$
and
$a=\left\\{\begin{array}{ll}\frac{h-\beta^{\prime}}{\beta-\beta^{\prime}},&\text{if }\beta\neq\beta^{\prime},\\ \frac{g-\alpha^{\prime}}{\alpha-\alpha^{\prime}},&\text{if }\alpha\neq\alpha^{\prime},\\ \frac{\varepsilon(q-q^{-1})^{2}}{\alpha}d_{m,n},&\text{otherwise.}\end{array}\right.$
Then $a$ is in the center of $U_{g,h}$ by Corollary 2.4.
Observe that $V_{\varepsilon,m,\alpha,\beta}\cong
V_{\varepsilon^{\prime},n,\alpha^{\prime},\beta^{\prime}}$ if and only if
$\varepsilon=\varepsilon^{\prime}$, $\alpha=\alpha^{\prime}$,
$\beta=\beta^{\prime}$, and $m=n$.
Suppose $V_{\varepsilon,m,\alpha,\beta}$ is not isomorphic to
$V_{\varepsilon^{\prime},n,\alpha^{\prime},\beta^{\prime}}$, then
$(\varepsilon,m,\alpha,\beta)\neq(\varepsilon^{\prime},n,\alpha^{\prime},\beta^{\prime})$.
Let $v\in V_{\varepsilon,m,\alpha,\beta}$ be a nonzero highest weight vector
satisfying
$Kv=\varepsilon\alpha q^{m}v,\quad gv=\alpha v,\quad hv=\beta v,\quad Ev=0.$
Then
$\begin{array}[]{l}d_{m,n}v=\frac{q^{m+1}}{q^{2m+2}-\varepsilon\varepsilon^{\prime}q^{m+n+2}-\varepsilon\varepsilon^{\prime}q^{m-n}+1}(FE+\frac{qK+q^{-1}K^{-1}g^{2}}{(q-q^{-1})^{2}}-\varepsilon^{\prime}\alpha\frac{q^{n+1}+q^{-n-1}}{(q-q^{-1})^{2}})v\\\
\\\ \qquad\quad=\frac{\varepsilon\alpha}{(q-q^{-1})^{2}}v,\end{array}$
in the case $\alpha^{\prime}=\alpha$. Therefore $aw=w$ for any $w\in V_{\varepsilon,m,\alpha,\beta}$ by Schur's Lemma, since $V_{\varepsilon,m,\alpha,\beta}$ is a simple module and $a$ induces an endomorphism of $V_{\varepsilon,m,\alpha,\beta}$. Similarly, we can prove that $aw=0$ for any $w\in V_{\varepsilon^{\prime},n,\alpha^{\prime},\beta^{\prime}}$.
Consider the short exact sequence of $U_{g,h}$-modules
(3.31) $\displaystyle
0\xrightarrow[]{\qquad}V_{\varepsilon,m,\alpha,\beta}\xrightarrow[]{\quad\phi\quad}V\xrightarrow[]{\quad\varphi\quad}V_{\varepsilon^{\prime},n,\alpha^{\prime},\beta^{\prime}}\xrightarrow[]{\qquad}0.$
Since $a\varphi(V)=\varphi(aV)=0$,
$\phi(V_{\varepsilon,m,\alpha,\beta})=\operatorname{Ker}\varphi\supseteq
aV\supseteq
a\phi(V_{\varepsilon,m,\alpha,\beta})=\phi(aV_{\varepsilon,m,\alpha,\beta})=\phi(V_{\varepsilon,m,\alpha,\beta}).$
So $\phi(V_{\varepsilon,m,\alpha,\beta})=aV$. In particular, $a(av)=av$ for
any $v\in V$. Therefore
$V=\operatorname{Ker}a\oplus
aV=\operatorname{Ker}a\oplus\phi(V_{\varepsilon,m,\alpha,\beta}).$
Hence the sequence (3.31) splits and $\operatorname{Ext}^{1}(V_{\varepsilon^{\prime},n,\alpha^{\prime},\beta^{\prime}},V_{\varepsilon,m,\alpha,\beta})=0$.
Suppose $(\alpha,\beta)\neq(\alpha^{\prime},\beta^{\prime})$. Then
$\operatorname{Ext}^{1}(M^{\prime},M)=0$ and
$\operatorname{Ext}^{1}(\mathbb{K}_{\alpha^{\prime},\beta^{\prime}},\mathbb{K}_{\alpha,\beta})=0$
by Proposition 3.8. It is trivial that $V_{\varepsilon,n}\otimes-$ is an
isomorphism of linear spaces.
Next, we assume that $V_{\varepsilon,n,\alpha^{\prime},\beta^{\prime}}\cong
V_{\varepsilon,m,\alpha,\beta}\cong
V_{\varepsilon,n}\otimes\mathbb{K}_{\alpha,\beta}$. Consider the following
exact sequence of $U_{g,h}$-modules
(3.32) $\displaystyle
0\xrightarrow[]{\qquad}M\xrightarrow[]{\quad\phi\quad}V\xrightarrow[]{\quad\varphi\quad}M\xrightarrow[]{\qquad}0.$
Since $U_{q}(\mathfrak{sl}(2))$ is a subalgebra of $U_{g,h}$, we can regard
the exact sequence (3.32) as a sequence of $U_{q}(\mathfrak{sl}(2))$-modules.
Since every finite-dimensional $U_{q}(\mathfrak{sl}(2))$-module is semisimple,
there is a homomorphism $\lambda$ of $U_{q}(\mathfrak{sl}(2))$-modules from
$M$ to $V$ such that $\varphi\lambda=\operatorname{id}_{M}$. For any $v\in V$,
we have $v=(v-\lambda\varphi(v))+\lambda\varphi(v)$. Moreover,
$\varphi(v-\lambda\varphi(v))=0$. Hence
$V=\operatorname{Ker}\varphi\oplus\operatorname{Im}\lambda=\operatorname{Im}\phi\oplus\operatorname{Im}\lambda,$
where $\operatorname{Im}\lambda\cong V_{\varepsilon,n}$ as
$U_{q}(\mathfrak{sl}(2))$-modules. Let $K^{\prime}=Kg^{-1}$. Suppose
$u_{1},u_{2}$ are the highest weight vectors of the $U_{g,h}$-module
$\operatorname{Im}\phi$ and the $U_{q}(\mathfrak{sl}(2))$-module
$\operatorname{Im}\lambda$ respectively. Then
$\\{\frac{F^{i}}{[i]!}u_{1},\frac{F^{i}}{[i]!}u_{2}|i=0,\cdots,n\\}$
is a basis of $V$. Moreover,
$K\varphi(u_{2})=g\varphi(K^{\prime}u_{2})=\varepsilon\alpha
q^{n}\varphi(u_{2}),\qquad E\varphi(u_{2})=g\varphi(Eg^{-1}u_{2})=0,$
$g\varphi(u_{2})=\alpha\varphi(u_{2}),\qquad
h\varphi(u_{2})=\beta\varphi(u_{2}).$
So $\varphi(u_{2})$ is a highest weight vector of $M$. Suppose
$gu_{2}=\sum\limits_{i=0}^{n}a_{i}\frac{1}{[i]!}F^{i}u_{1}+\sum\limits_{i=0}^{n}x_{i}\frac{1}{[i]!}F^{i}u_{2}$.
Then
(3.33) $\displaystyle\qquad\varepsilon
q^{n}gu_{2}=gK^{\prime}u_{2}=K^{\prime}gu_{2}=\varepsilon(\sum\limits_{i=0}^{n}q^{n-2i}a_{i}\frac{1}{[i]!}F^{i}u_{1}+\sum\limits_{i=0}^{n}q^{n-2i}x_{i}\frac{1}{[i]!}F^{i}u_{2}).$
Since $q^{m}\neq 1$ for any positive integer $m$, we obtain $a_{i}=x_{i}=0$, $i=1,2,\cdots,n$ from (3.33). Hence $gu_{2}=a_{0}u_{2}+x_{0}u_{1}$. Moreover,
$a_{0}\varphi(u_{2})=\varphi(gu_{2})=g\varphi(u_{2})=\alpha\varphi(u_{2}).$ So
$a_{0}=\alpha$. Similarly, we can prove that $hu_{2}=\beta u_{2}+y_{0}u_{1}$.
Moreover, by using Lemma 3.7, one can prove that
(3.37)
$\displaystyle\left\\{\begin{array}{l}E(u_{1})=E(u_{2})=F(\frac{1}{[n]!}F^{n}u_{1})=F(\frac{1}{[n]!}F^{n}u_{2})=0,\\ E(\frac{F^{p}}{[p]!}u_{1})=\varepsilon[n-p+1]\alpha\frac{F^{p-1}}{[p-1]!}u_{1},\\ E(\frac{F^{p}}{[p]!}u_{2})=\varepsilon\alpha[n-p+1]\frac{F^{p-1}}{[p-1]!}u_{2}+\varepsilon[n-p+1]x_{0}\frac{F^{p-1}}{[p-1]!}u_{1},\end{array}\right.$
for $p=1,\cdots,n$;
(3.38) $\displaystyle K(\frac{F^{p}}{[p]!}u_{2})=\varepsilon\alpha
q^{n-2p}\frac{F^{p}}{[p]!}u_{2}+\varepsilon
q^{n-2p}x_{0}\frac{F^{p}}{[p]!}u_{1},$
for $p=0,1,\cdots,n$; and
(3.41)
$\displaystyle\left\\{\begin{array}{ll}g(\frac{F^{p}}{[p]!}u_{1})=\alpha\frac{F^{p}}{[p]!}u_{1},&g(\frac{F^{p}}{[p]!}u_{2})=\alpha\frac{F^{p}}{[p]!}u_{2}+x_{0}\frac{F^{p}}{[p]!}u_{1},\\ h(\frac{F^{p}}{[p]!}u_{1})=\beta\frac{F^{p}}{[p]!}u_{1},&h(\frac{F^{p}}{[p]!}u_{2})=\beta\frac{F^{p}}{[p]!}u_{2}+y_{0}\frac{F^{p}}{[p]!}u_{1},\end{array}\right.$
for $p=0,1,\cdots,n$.
Define $\tau(\frac{F^{i}}{[i]!}u_{j})=v_{i}\otimes w_{j}$ for
$i=0,1,\cdots,n;j=1,2$, and extend it by linearity. Comparing the relations from (3.19) to (3.26) in Proposition 3.9 with the above relations from (3.37) to (3.41), we know that $\tau$ is an isomorphism of $U_{g,h}$-modules from $V$
to $V_{\varepsilon,n}\otimes M_{x_{0},y_{0}}$. Hence
$V_{\varepsilon,n}\otimes-$ is an isomorphism of linear spaces by Proposition
3.9. ∎
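In particular, combining Theorem 3.10 with the isomorphism $\Psi_{1}$ constructed before Proposition 3.9, the space $\operatorname{Ext}^{1}(V_{\varepsilon,n,\alpha,\beta},V_{\varepsilon,n,\alpha,\beta})$ is two-dimensional, spanned by the classes of $V_{\varepsilon,n}\otimes M_{1,0}$ and $V_{\varepsilon,n}\otimes M_{0,1}$.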
###### Remark 3.11.
Since $\operatorname{Ext}^{1}(V_{\varepsilon,n},V_{\varepsilon,n})=0$ and $\operatorname{Ext}^{1}(V_{\varepsilon,n}\otimes\mathbb{K}_{\alpha,\beta},V_{\varepsilon,n}\otimes\mathbb{K}_{\alpha,\beta})\neq 0$, the map induced by the functor $-\otimes\mathbb{K}_{\alpha,\beta}$ from $\operatorname{Ext}^{1}(V_{\varepsilon,n},V_{\varepsilon,n})$ to $\operatorname{Ext}^{1}(V_{\varepsilon,n}\otimes\mathbb{K}_{\alpha,\beta},V_{\varepsilon,n}\otimes\mathbb{K}_{\alpha,\beta})$ is zero. Hence the functor $-\otimes\mathbb{K}_{\alpha,\beta}$ does not induce an isomorphism.
Since $U_{g,h}$ is a Hopf algebra, the dual $M^{*}$ of any $U_{g,h}$-module
$M$ is still a $U_{g,h}$ module. For $a\in U_{g,h}$, $f\in M^{*}$, the action
of $a$ on $f$ is given by
$(af)(m):=f((Sa)m),\qquad m\in M,$
where $S$ is the antipode of $U_{g,h}$. Next we describe the dual module of a
simple module over $U_{g,h}$.
###### Theorem 3.12.
The dual module $V_{\varepsilon,n,\alpha,\beta}^{*}$ of the simple
$U_{g,h}$-module $V_{\varepsilon,n,\alpha,\beta}$ is a simple module, and
$V_{\varepsilon,n,\alpha,\beta}^{*}\cong
V_{\varepsilon,n,\alpha^{-1},\beta^{-1}}$.
###### Proof.
By Theorem 3.5, we can assume that the simple module
$V_{\varepsilon,n,\alpha,\beta}$ has a basis $\\{v_{0},\cdots,$ $v_{n}\\}$
with relations:
$Kv_{p}=\varepsilon q^{n-2p}\alpha v_{p},\qquad gv_{p}=\alpha v_{p},\qquad
hv_{p}=\beta v_{p}$
for $p=0,1,\cdots,n,$
$Fv_{n}=0,\qquad Ev_{0}=0$
and
$Ev_{p}=\varepsilon\frac{q^{n-(p-1)}\alpha-q^{p-1-n}\alpha}{q-q^{-1}}v_{p-1}=\varepsilon\alpha[n-p+1]v_{p-1}$
for $p=1,\cdots,n$. Let $\\{v_{0}^{*},\cdots,v_{n}^{*}\\}$ be the dual basis
of $\\{v_{0},\cdots,v_{n}\\}$. Then
$(Ev_{n}^{*})(v_{i})=-v_{n}^{*}(EK^{-1}v_{i})=-q^{2i-n}[n-i+1]v_{n}^{*}(v_{i-1})=0$
for $i=1,\cdots,n$, and
$(Ev_{n}^{*})(v_{0})=-v_{n}^{*}(EK^{-1}v_{0})=-\varepsilon\alpha^{-1}q^{-n}v_{n}^{*}(0)=0.$
Hence $E(v_{n}^{*})=0$. Since
$(Kv_{n}^{*})(v_{i})=v_{n}^{*}(K^{-1}v_{i})=q^{2i-n}\varepsilon\alpha^{-1}v_{n}^{*}(v_{i})=\delta_{ni}q^{n}\varepsilon\alpha^{-1}$
for $i=0,1,\cdots,n$, $Kv_{n}^{*}=\varepsilon\alpha^{-1}q^{n}v_{n}^{*}$.
Similarly, that $gv^{*}_{n}=\alpha^{-1}v^{*}_{n}$ follows from
$(gv_{n}^{*})(v_{i})=v_{n}^{*}(g^{-1}v_{i})=\alpha^{-1}v_{n}^{*}(v_{i})$
for $i=0,1,\cdots,n$, and that $hv^{*}_{n}=\beta^{-1}v^{*}_{n}$ follows from
$(hv_{n}^{*})(v_{i})=v_{n}^{*}(h^{-1}v_{i})=\beta^{-1}v_{n}^{*}(v_{i})$
for $i=0,1,\cdots,n$. So $V_{\varepsilon,n,\alpha,\beta}^{*}$ is a simple
$U_{g,h}$-module generated by the highest weight vector $v_{n}^{*}$ with
weight $(\varepsilon\alpha^{-1}q^{n},\alpha^{-1},\beta^{-1})$. Hence
$V_{\varepsilon,n,\alpha,\beta}^{*}\cong
V_{\varepsilon,n,\alpha^{-1},\beta^{-1}}$.∎
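In particular, applying Theorem 3.12 twice gives $V_{\varepsilon,n,\alpha,\beta}^{**}\cong V_{\varepsilon,n,\alpha^{-1},\beta^{-1}}^{*}\cong V_{\varepsilon,n,\alpha,\beta}$.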
Let $H$ be a Hopf algebra, and $H^{\circ}=\\{f\in H^{*}|\ker f$ contains an
ideal $I$ such that the dimension of $H/I$ is finite$\\}$. Then $H^{\circ}$ is
a Hopf algebra, which is called the finite dual Hopf algebra of $H$. Now let
$M$ be a left module over the Hopf algebra $U_{g,h}$. For any $f\in M^{*}$ and
$v\in M$, define a coordinate function $c_{f,v}^{M}\in U_{g,h}^{*}$ via
$c_{f,v}^{M}(x)=f(xv)\qquad\text{for }x\in U_{g,h}.$
If $M$ is finite dimensional, then $c_{f,v}^{M}\in U_{g,h}^{\circ}$, the
finite dual Hopf algebra of $U_{g,h}$. The coordinate space ${C}(M)$ of $M$ is
a linear subspace of $U_{g,h}^{*}$, spanned by the coordinate functions
$c_{f,v}^{M}$ as $f$ runs over $M^{*}$ and $v$ over $M$.
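For example, taking $M=V_{\varepsilon,n,\alpha,\beta}$ with the highest weight vector $v_{0}$ and the dual basis vector $v_{0}^{*}$, the relations of Theorem 3.5 (as recalled in the proof of Theorem 3.12) give
$c_{v_{0}^{*},v_{0}}^{M}(K)=\varepsilon\alpha q^{n},\qquad c_{v_{0}^{*},v_{0}}^{M}(g)=\alpha,\qquad c_{v_{0}^{*},v_{0}}^{M}(h)=\beta,\qquad c_{v_{0}^{*},v_{0}}^{M}(E)=0.$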
###### Corollary 3.13.
Let $A$ be the subalgebra of $U_{g,h}^{\circ}$ generated by all the coordinate
functions of all finite dimensional simple $U_{g,h}$-modules. Then $A$ is a
sub-Hopf algebra of $U_{g,h}^{\circ}$.
###### Proof.
Let $\hat{\mathcal{C}}$ be the subcategory of the left $U_{g,h}$-module
category consisting of all finite direct sums of finite dimensional simple
$U_{g,h}$-modules. Then $\hat{\mathcal{C}}$ is closed under tensor products
and duals by Corollary 3.6 and Theorem 3.12. Thus $A$ is a sub-Hopf algebra of
$U^{\circ}_{g,h}$ and is the directed union of the coordinate spaces ${C}(V)$
for $V\in\hat{\mathcal{C}}$ by [2, Corollary I.7.4].∎
Finally, we describe the simple modules over $U_{g,h}$ when $q$ is a root of
unity. Assume that the order of $q$ is $d>2$ and define
$e=\left\\{\begin{array}{ll}d,&\text{if }d\text{ is odd},\\ \frac{d}{2},&\text{otherwise.}\end{array}\right.$
We will use the notations $V(\lambda,a,b)$, $V(\lambda,a,0)$,
$\widetilde{V}(\pm q^{1-j},c)$ to denote finite-dimensional simple
$U_{q}(\mathfrak{sl}(2))$-modules. These simple modules have been described in
[6, Theorem VI.5.5]. The next results follow from [6, Proposition VI.5.1,
Proposition VI.5.2, Theorem VI.5.5] and [8, Proposition 16.1].
###### Proposition 3.14.
Suppose $q$ is a root of unity. Then
(1) Any simple $U_{g,h}$-module of dimension $e$ is isomorphic to a module of
the following list:
(i) $\mathbbm{K}_{\alpha,\beta}\otimes V(\lambda,a,b)$, where
$\mathbbm{K}_{\alpha,\beta}=\mathbb{K}\cdot 1$ is a one-dimensional module
over $\mathbbm{K}[g^{\pm 1},h^{\pm 1}]$, and $g\cdot 1=\alpha,$ $h\cdot
1=\beta$ for some $\alpha,\beta\in\mathbbm{K}^{\times}$.
(ii) $\mathbbm{K}_{\alpha,\beta}\otimes V(\lambda,a,0)$, where $\lambda$ is
not of the form $\pm q^{j-1}$ for any $1\leq j\leq e-1$,
(iii) $\mathbbm{K}_{\alpha,\beta}\otimes\widetilde{V}(\pm q^{1-j},c)$.
(2) Any simple $U_{g,h}$-module of dimension $n<e-1$ is isomorphic to a module
of the form $V_{\varepsilon,n,\alpha,\beta}$, where the structure of
$V_{\varepsilon,n,\alpha,\beta}$ is given by Theorem 3.5.
(3) The dimension of any simple $U_{g,h}$-module is not larger than $e$.
## 4\. Verma modules and the category $\mathcal{O}$
In this section, we assume that the nonzero element $q\in\mathbb{K}$ is not a
root of unity. We will study the BGG subcategory of the category of all left
$U_{g,h}$-modules. For the undefined terms in this section, we refer the
reader to [8] and [10].
If $M$ is a $U_{g,h}$-module, a maximal weight vector is any nonzero $m\in M$
that is killed by $E$, and is a common eigenvector for $K,g,h$. A standard
cyclic module is one which is generated by exactly one maximal weight vector.
For each $(a,b,c)\in{\mathbbm{K}}^{\times 3}$, define the Verma module
$V(a,b,c):=U_{g,h}/I(a,b,c),$
where $I(a,b,c)$ is the left ideal of $U_{g,h}$ generated by $E$, $K-a$,
$g-b,$ $h-c$. $V(a,b,c)$ is a free ${\mathbbm{K}}[F]$-module of rank one, by
the PBW Theorem 2.1 for $U_{g,h}$. Hence the set $W(V(a,b,c))$ of weights of
the Verma module $V(a,b,c)$ is equal to $\\{(q^{-2n}a,b,c)|n\geq 0\\}$.
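Explicitly, writing $\bar{1}$ for the image of $1$ in $V(a,b,c)$, the monomials $F^{p}\bar{1}$ $(p\geq 0)$ form a $\mathbb{K}$-basis, and using that $KF=q^{-2}FK$ and that $g,h$ are central (Corollary 2.4),
$KF^{p}\bar{1}=q^{-2p}aF^{p}\bar{1},\qquad gF^{p}\bar{1}=bF^{p}\bar{1},\qquad hF^{p}\bar{1}=cF^{p}\bar{1},$
which is how the weight set above is obtained.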
About the extension group
$\operatorname{Ext}^{1}(V(a^{\prime},b^{\prime},c^{\prime}),V(a,b,c))$ of two
Verma modules $V(a^{\prime},b^{\prime},c^{\prime}),$ $V(a,b,c)$, we have the
following:
###### Proposition 4.1.
Suppose $V(a,b,c)$ and $V(a^{\prime},b^{\prime},c^{\prime})$ are two Verma
modules. Then $\operatorname{Ext}^{1}(V(a,b,c),V(a,b,c))\neq 0$ and $\operatorname{Ext}^{1}(V(a^{\prime},b^{\prime},c^{\prime}),V(a,b,c))=0$ if
$a,b,c;a^{\prime},b^{\prime},c^{\prime}$ satisfy one of the following
conditions.
(1) $(b,c)\neq(b^{\prime},c^{\prime})$;
(2) $(b,c)=(b^{\prime},c^{\prime})$, $a\neq a^{\prime}$ and $aa^{\prime}\neq
q^{-2}b^{2}$.
###### Proof.
Let $M_{x,y}\in\operatorname{Ext}^{1}(\mathbb{K}_{b,c},\mathbb{K}_{b,c})$ be
the module described in Proposition 3.9, where either $x\neq 0$ or $y\neq 0$.
Consider the $U_{g,h}$-module $M=V(ab^{-1})\otimes M_{x,y}$, where
$V(ab^{-1})$ is a Verma module over $U_{q}(\mathfrak{sl}(2))$ generated by a
highest weight vector $v$ with weight $ab^{-1}$. Suppose $w_{1},w_{2}$ is a
basis of $M_{x,y}$ such that $gw_{1}=bw_{1}$, $gw_{2}=bw_{2}+xw_{1}$,
$hw_{1}=cw_{1}$, $hw_{2}=cw_{2}+yw_{1}$. Then $K(v\otimes w_{1})=a(v\otimes
w_{1})$ and
$K(v\otimes w_{2})=a(v\otimes w_{2})+ab^{-1}x(v\otimes w_{1}).$
Therefore the subspace $V_{1}$ of $M$ generated by
$\frac{F^{n}}{[n]!}v\otimes w_{1},\qquad n\in\mathbb{Z}_{\geq 0}$
is a $U_{g,h}$-module, which is isomorphic to $V(a,b,c)$. Moreover $M/V_{1}$
is also isomorphic to $V(a,b,c)$. Thus
$M\in\operatorname{Ext}^{1}(V(a,b,c),V(a,b,c))$. Suppose $M\cong
V(a,b,c)\oplus V(a,b,c)$. Then the actions of $g,h$ on $M$ are given via
multiplications by $b,c$ respectively. This is impossible when either $x\neq
0$, or $y\neq 0$. So $M$ is a nonzero element in
$\operatorname{Ext}^{1}(V(a,b,c),V(a,b,c))$.
Now let
$u=\left\\{\begin{array}{ll}\frac{g-b^{\prime}}{b-b^{\prime}},&\text{if }b\neq b^{\prime},\\ \frac{h-c^{\prime}}{c-c^{\prime}},&\text{if }c\neq c^{\prime},\\ \frac{aa^{\prime}(q-q^{-1})^{2}}{(a-a^{\prime})(qaa^{\prime}-q^{-1}b^{2})}\left(C-\frac{qa^{\prime}+q^{-1}a^{\prime-1}b^{2}}{(q-q^{-1})^{2}}\right),&\text{if }a\neq a^{\prime},\ aa^{\prime}\neq q^{-2}b^{2},\ (b,c)=(b^{\prime},c^{\prime}),\end{array}\right.$
where $C$, which is given in Corollary 2.4, is the Casimir element of
$U_{g,h}$. Then $u$ is in the center of $U_{g,h}$ by Corollary 2.4. Suppose
$V(a,b,c)$ and $V(a^{\prime},b^{\prime},c^{\prime})$ are generated by the
highest weight vectors $v,v^{\prime}$ respectively. It is easy to check that
$uv=v$ and $uv^{\prime}=0$. So $u$ induces the identity endomorphism of
$V(a,b,c)$ and the zero endomorphism of $V(a^{\prime},b^{\prime},c^{\prime})$.
Similar to the proof of Theorem 3.10, we can prove that every short exact sequence
$0\rightarrow V(a,b,c)\rightarrow N\rightarrow V(a^{\prime},b^{\prime},c^{\prime})\rightarrow 0$
splits. Hence $\operatorname{Ext}^{1}(V(a^{\prime},b^{\prime},c^{\prime}),V(a,b,c))=0$. ∎
###### Remark 4.2.
It is unknown whether
$\operatorname{Ext}^{1}(V(q^{-2}a^{-1}b^{2},b,c),V(a,b,c))=0$ in the case when
$b^{2}\neq q^{2}a^{2}$.
The proof of the following proposition is standard (see e.g. [5], [7] or [8]).
###### Proposition 4.3.
(1) The Verma module $V(a,b,c)$ has a unique maximal submodule $N(a,b,c)$, and
the quotient $V(a,b,c)/N(a,b,c)$ is a simple module $L(a,b,c)$.
(2) Any standard cyclic module is a quotient of some Verma module.
By [8, Theorem 4.2] and Proposition 2.2, every Verma module over $U_{g,h}$ is
isomorphic to $V(\lambda)\otimes\mathbb{K}_{b,c}$, where $V(\lambda)$ is a
Verma module over $U_{q}(\mathfrak{sl}(2))$. Conversely,
$V(\lambda)\otimes\mathbb{K}_{b,c}$ is a Verma $U_{g,h}$ module if
$V(\lambda)$ is a Verma module over $U_{q}(\mathfrak{sl}(2))$. In the
following, we determine when the Verma module
$V(\lambda)\otimes\mathbb{K}_{b,c}$ is isomorphic to the Verma module
$V(a,b,c)$, using the isomorphism in Proposition 2.2(1).
###### Proposition 4.4.
Suppose $V(\lambda)$ is a Verma module over $U_{q}(\mathfrak{sl}(2))$ and
$\mathbb{K}_{b,c}$ is a simple module over $\mathbb{K}[g^{\pm 1},h^{\pm 1}]$.
Then $V(\lambda)\otimes\mathbb{K}_{b,c}$ is a Verma module over $U_{g,h}$ with
the highest weight $(b\lambda,b,c)$. Conversely, every Verma module $V(a,b,c)$
over $U_{g,h}$ is isomorphic to
$V(ab^{-1})\otimes\mathbb{K}_{b,c},$
where $V(ab^{-1})$ is a Verma module over $U_{q}(\mathfrak{sl}(2))$.
Therefore the Verma module $V(a,b,c)$ is isomorphic to
$V(\lambda)\otimes\mathbb{K}_{b^{\prime},c^{\prime}}$ if and only if
$(a,b,c)=(b^{\prime}\lambda,b^{\prime},c^{\prime})$.
###### Proof.
Suppose $E^{\prime},K^{\prime},F^{\prime}$ are Chevalley generators of
$U_{q}(\mathfrak{sl}(2))$. Let $V(\lambda)$ be a Verma module over
$U_{q}(\mathfrak{sl}(2))$. Then $V(\lambda)$ has a basis
$\\{v_{p}|p\in\mathbb{Z}_{\geq 0}\\}$ satisfying
$K^{\prime}v_{p}=\lambda q^{-2p}v_{p},\qquad
K^{\prime-1}v_{p}=\lambda^{-1}q^{2p}v_{p},$
$E^{\prime}v_{p+1}=\frac{q^{-p}\lambda-q^{p}\lambda^{-1}}{q-q^{-1}}v_{p},\qquad
F^{\prime}v_{p}=[p+1]v_{p+1}$
and $E^{\prime}v_{0}=0$. Since $U_{g,h}\cong U_{q}(\mathfrak{sl}(2))\otimes\mathbb{K}[g^{\pm 1},h^{\pm 1}]$, the module $V(\lambda)\otimes\mathbb{K}_{b,c}$ is cyclic, generated by the highest weight vector $v_{0}\otimes 1$, where the action of
$x\otimes y\in U_{q}(\mathfrak{sl}(2))\otimes\mathbb{K}[g^{\pm 1},h^{\pm 1}]$
on $v\otimes 1\in V(\lambda)\otimes\mathbb{K}_{b,c}$ is given by
$(x\otimes y)\cdot(v\otimes 1)=x\cdot v\otimes y\cdot 1.$
The highest weight of $V(\lambda)\otimes\mathbb{K}_{b,c}$ is $(b\lambda,b,c)$.
Let $v=1+I(b\lambda,b,c)$ be the highest weight vector of the Verma module
$V(b\lambda,b,c)$. Define a linear map $f$ from
$V(\lambda)\otimes\mathbb{K}_{b,c}$ to $V(b\lambda,b,c)$ by $f(v_{p}\otimes
1)=\frac{1}{[p]!}F^{p}v$. Similar to [6, Proposition VI.3.7], we can prove
that $f$ is a homomorphism of $U_{g,h}$-modules. Therefore
$V(\lambda)\otimes\mathbb{K}_{b,c}$ is the Verma module with highest weight
$(b\lambda,b,c)$ by Proposition 4.3(2).
Conversely, let $\lambda=ab^{-1}$. Consider an infinite-dimensional vector
space $V(\lambda)$ with basis $\\{v_{i}|i\in\mathbb{Z}_{\geq 0}\\}$. For
$p\geq 0$, set
$K^{\prime}v_{p}=\lambda q^{-2p}v_{p},\qquad
K^{\prime-1}v_{p}=\lambda^{-1}q^{2p}v_{p},$
$E^{\prime}v_{p+1}=\frac{q^{-p}\lambda-q^{p}\lambda^{-1}}{q-q^{-1}}v_{p},\qquad
F^{\prime}v_{p}=[p+1]v_{p+1}$
and $E^{\prime}v_{0}=0$, where $E^{\prime},K^{\prime},F^{\prime}$ are
Chevalley generators of $U_{q}(\mathfrak{sl}(2))$. Then $V(\lambda)$ is a
Verma module over $U_{q}(\mathfrak{sl}(2))$ with the above actions by [6,
Lemma VI.3.6]. The highest weight of $V(\lambda)\otimes\mathbb{K}_{b,c}$ is
$(a,b,c)$. Therefore $V(\lambda)\otimes\mathbb{K}_{b,c}$ is isomorphic to the
Verma module over $U_{g,h}$ with highest weight $(a,b,c)$. ∎
One of the basic questions about a Verma module is to determine its maximal
weight vectors. We now answer this question.
###### Theorem 4.5.
Let $V(a,b,c),V(a^{\prime},b^{\prime},c^{\prime})$ be two Verma modules, where
$a,b,c;a^{\prime},b^{\prime},c^{\prime}$ $\in$ ${\mathbbm{K}}^{\times}$.
(1) If $V(a,b,c)$ has a maximal weight vector of weight $(q^{-2n}a,b,c)$, then
it is unique up to scalars and $a=\varepsilon bq^{n-1}$ with $n>0$.
(2)
$\dim_{\mathbbm{K}}\operatorname{Hom}_{U_{g,h}}(V(a^{\prime},b^{\prime},c^{\prime}),V(a,b,c))=0$
or $1$ for all $(a^{\prime},b^{\prime},c^{\prime})$ and $(a,b,c)$, and all
nonzero homomorphisms between two Verma modules are injective.
(3) The nonzero submodule of $V(a,b,c)$ (which is unique if it exists) is
precisely of the form
$V(q^{-2n}a,b,c)={\mathbbm{K}}[F]v_{q^{-2n}a,b,c}.$
###### Proof.
Suppose $p(F)=(a_{n}F^{n}+a_{n-1}F^{n-1}+\cdots+a_{0})\bar{1}$ is a maximal weight vector, where $\bar{1}$ is the maximal weight vector of $V(a,b,c)$ and $a_{n}\neq 0$. Then
$E(p(F))=[n]\frac{q^{-n+1}a-q^{n-1}a^{-1}b^{2}}{q-q^{-1}}a_{n}F^{n-1}\bar{1}+(\text{lower degree terms})=0,$
by Lemma 3.7. Since the coefficient of $F^{n-1}\bar{1}$ must vanish, $q^{-n+1}a=q^{n-1}a^{-1}b^{2}$, that is, $a^{2}=q^{2n-2}b^{2}$, and hence $a=\varepsilon bq^{n-1}$. Moreover, since $p(F)$ is a weight vector and the vectors $F^{k}\bar{1}$ have pairwise distinct $K$-weights $q^{-2k}a$, all lower degree coefficients vanish, so $p(F)=a_{n}F^{n}\bar{1}$ and (1) follows.
(2) follows from (1) and the fact that ${\mathbbm{K}}[F]$ is a principal ideal
domain directly.
If $M$ is a nonzero submodule of $V(a,b,c)$, then $M$ contains a vector of the
highest possible weight $(q^{-2n}a,b,c)$. We claim that
$M=V(q^{-2n}a,b,c)={\mathbbm{K}}[F]v_{q^{-2n}a,b,c}$, where $v_{q^{-2n}a,b,c}$
is the weight vector in $M$ with weight $(q^{-2n}a,b,c)$. The weight vector
$v_{q^{-2n}a,b,c}$ is unique up to scalar by (1). To prove the above claim, we
only need to show that $M\subseteq{\mathbbm{K}}[F]v_{q^{-2n}a,b,c}$.
Suppose, to the contrary, that some $v\in M$ is of the form
$v=p(F)v_{q^{-2n}a,b,c}+a_{n-1}F^{n-1}\bar{1}+\cdots+a_{1}F\bar{1}+a_{0}\bar{1}$
with some $a_{j}\neq 0$. We may assume that $p(F)=0$ because $v_{q^{-2n}a,b,c}\in M$. Since $K^{i}v\in M$ for any $i$ and the vectors $F^{n-k}\bar{1}$ have pairwise distinct $K$-eigenvalues, $a_{n-k}F^{n-k}\bar{1}\in M$ for $k=1,2,\cdots,n$. This is a contradiction since $(q^{-2i}a,b,c)$ is not a weight of $M$ if $i<n$. ∎
###### Remark 4.6.
(1) If $\frac{a}{b}\neq\varepsilon q^{n-1}$ for any integer $n\geq 1$, then the Verma module $V(a,b,c)$ is a simple module by Theorem 4.5.
(2) It is well-known that the Verma module $V(\lambda)$ over $U_{q}(\mathfrak{sl}(2))$ is simple provided that $\lambda\neq\varepsilon q^{n-1}$ for any integer $n\geq 1$, where $\varepsilon=\pm 1$. Since
$V(\lambda)\otimes\mathbb{K}_{b,c}\cong V(b{\lambda},b,c)$
by Proposition 4.4, $V(\lambda)\otimes\mathbb{K}_{b,c}$ is a simple $U_{g,h}$-module provided that $\lambda\neq\varepsilon q^{n-1}$ for any $n\geq 1$, where $\varepsilon=\pm 1$.
(3) The simple module $L(a,b,c)$ is finite-dimensional if and only if the only
maximal submodule $N(a,b,c)$ of $V(a,b,c)$ is equal to $V(q^{-2n}a,b,c)$ and
$a=\varepsilon bq^{n-1}$ for some $n\in\mathbb{N}$. In this case,
$L(\varepsilon bq^{n-1},b,c)\cong V_{\varepsilon,n-1,b,c}$, which is given by
Theorem 3.5.
Finally, we study the BGG category $\mathcal{O}$, which is defined below.
###### Definition 4.7.
The BGG category $\mathcal{O}$ consists of all finitely generated
$U_{g,h}$-modules and all homomorphisms of modules with the following
properties:
(1) The actions of $K,g,h$ are diagonalizable with finite-dimensional weight spaces.
(2) The $B_{+}$-action is locally finite, where $B_{+}$ is the subalgebra
generated by $E,$ $K^{\pm 1},$ $g^{\pm 1},$ $h^{\pm 1}$.
It is obvious that every Verma module is in $\mathcal{O}$. By Theorem 3.5, all
finite-dimensional simple $U_{g,h}$-modules are in $\mathcal{O}$. Any simple
module in $\mathcal{O}$ is isomorphic to either a simple Verma module or a
finite-dimensional simple module $V_{\varepsilon,n,\alpha,\beta}$ described in
Theorem 3.5. In fact, if $M$ is a simple module in $\mathcal{O}$, then
$M=U_{g,h}v$ for some common eigenvector $v$ of $K,g,h$. Suppose $Kv=\lambda
v$. Then $KE^{n}v=q^{2n}\lambda E^{n}v$ for any positive integer $n$. Since
the action of $E$ is locally finite, there is an $n$ such that $E^{n}v\neq 0$
and $E^{n+1}v=0$. Thus $M=U_{g,h}E^{n}v$ is a standard cyclic
$U_{g,h}$-module. So it is a quotient of a Verma module. Hence it is
isomorphic to either a simple Verma module or a finite-dimensional simple
module $V_{\varepsilon,n,\alpha,\beta}$.
Suppose
$0\longrightarrow V_{\varepsilon,n,\alpha,\beta}\longrightarrow M\longrightarrow V_{\varepsilon,n,\alpha,\beta}\longrightarrow 0$
is a nonzero element in
$\operatorname{Ext}^{1}(V_{\varepsilon,n,\alpha,\beta},V_{\varepsilon,n,\alpha,\beta})$.
We remark that this $M$ is not in $\mathcal{O}$ since the actions of $g,h$ on
$M$ can not be diagonalized by Proposition 3.8 and Theorem 3.10. Similarly, if
$0\longrightarrow V(a,b,c)\longrightarrow M\longrightarrow V(a,b,c)\longrightarrow 0$
is a nonzero element in $\operatorname{Ext}^{1}(V(a,b,c),V(a,b,c))$, then $M$
is not in $\mathcal{O}$.
By using results in [8], we obtain that every finite-dimensional module in
$\mathcal{O}$ is semisimple. In the following we give a direct proof of this
fact.
###### Proposition 4.8.
Every finite-dimensional module in $\mathcal{O}$ is semisimple.
###### Proof.
Let $0=M_{0}\subseteq M_{1}\subseteq\cdots\subseteq M_{n}=M$ be a composition series of a finite-dimensional module $M\in\mathcal{O}$. We prove that $M$ is semisimple by induction on $n$. If $n=2$, then we have the following exact sequence
$0\rightarrow M_{1}\rightarrow M\rightarrow M/M_{1}\rightarrow 0.$
If this sequence does not split, then either the action of $g$ or the action of $h$ on $M$ is not semisimple by Theorem 3.10. Thus $M\notin\mathcal{O}$. This contradiction implies that $M$ is semisimple.
Suppose the claim holds when $n=k\geq 2$. Now let $n=k+1$. Then
$M_{k}=\oplus_{i=1}^{k}S_{i}$ is a direct sum of simple $U_{g,h}$-modules
$S_{i}$ by the assumption. Now let
$N_{i}=S_{1}\oplus\cdots\oplus\widehat{S_{i}}\oplus\cdots\oplus S_{k}$, where
$\widehat{S_{i}}$ means that $S_{i}$ is omitted. Consider the following
commutative diagrams for $i=1,2,\cdots,k$:
$\begin{array}{ccccccccc}
0&\longrightarrow&M_{k}&\xrightarrow{\ \phi\ }&M&\xrightarrow{\ \pi\ }&M/M_{k}&\longrightarrow&0\\
&&{\scriptstyle\lambda_{i}}\big\downarrow&&{\scriptstyle\pi_{i}}\big\downarrow&&{\scriptstyle\operatorname{id}}\big\downarrow&&\\
0&\longrightarrow&N_{i}&\xrightarrow{\ \varphi_{i}\ }&M/S_{i}&\xrightarrow{\ \psi_{i}\ }&M/M_{k}&\longrightarrow&0,
\end{array}$
where $\phi,\varphi_{i}$ are embedding mappings, and
$\lambda_{i},\pi_{i},\pi,\psi_{i}$ are the canonical projections. Since the
bottom exact sequences split by the inductive assumption, there are
homomorphisms $\xi_{i}:M/S_{i}\rightarrow$ $N_{i}$ such that
$\xi_{i}\varphi_{i}=\operatorname{id}_{N_{i}}.$ Define $\xi:M\rightarrow
M_{k}$ via
$\xi(m)=\frac{1}{k-1}\sum\limits_{i=1}^{k}\xi_{i}\pi_{i}(m)$
for $m\in M$. Now let $m=m_{1}+\cdots+m_{k}\in M_{k}$, where $m_{i}\in S_{i}$.
Then
$\xi_{i}\pi_{i}(m)=\xi_{i}\pi_{i}\phi(m)=\xi_{i}\varphi_{i}\lambda_{i}(m)=m-m_{i},$
and
$\xi\phi(m)=\frac{1}{k-1}\sum\limits_{i=1}^{k}\xi_{i}\pi_{i}(m)=m.$
This means that the top exact sequence of the above commutative diagrams splits. Hence $M\cong M_{k}\oplus\operatorname{Ker}\xi\cong M_{k}\oplus
M/M_{k}\cong S_{1}\oplus\cdots\oplus S_{k}\oplus M/M_{k}$ is semisimple. ∎
By the PBW Theorem 2.1, the algebra $U_{g,h}$ has a triangular decomposition
$\mathbb{K}[F]\otimes H\otimes$ $\mathbb{K}[E]$, where $H=\mathbb{K}[K^{\pm
1},g^{\pm 1},h^{\pm 1}]$. In the same way as [8, Definition 11.1], we can
define the Harish-Chandra projection $\xi$ as follows:
$\xi:=\varepsilon\otimes\operatorname{id}\otimes\varepsilon:U_{g,h}=\mathbb{K}[F]\otimes
H\otimes\mathbb{K}[E]\rightarrow H.$
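For instance, writing the Casimir element as $C=FE+\frac{qK+q^{-1}K^{-1}g^{2}}{(q-q^{-1})^{2}}$ (as in the proof of Theorem 3.10), the summand $FE$ is mapped to zero by $\xi$, so
$\xi(C)=\frac{qK+q^{-1}K^{-1}g^{2}}{(q-q^{-1})^{2}},$
whose evaluation at $(K,g)=(a,b)$ is the eigenvalue appearing in (4.1) below.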
Let $V(a,b,c)$ be a Verma module generated by a nonzero highest weight vector
$v$. Then
(4.1) $\displaystyle Cv=\frac{qa+q^{-1}a^{-1}b^{2}}{(q-q^{-1})^{2}}v,\qquad
gv=bv,\qquad hv=cv,$
where $C$ is the Casimir element of $U_{g,h}$. By Corollary 2.4, the center of
$U_{g,h}$ is $\mathbb{K}[C,g^{\pm 1},h^{\pm 1}]$. For any element
$z\in\mathbb{K}[C,g^{\pm 1},h^{\pm 1}]$, $zv=\xi_{(a,b,c)}(z)v$ for some
$\xi_{(a,b,c)}(z)\in\mathbb{K}$. Then
$\xi_{(a,b,c)}\in\operatorname{Hom}_{alg}(\mathbb{K}[C,g^{\pm 1},h^{\pm
1}],\mathbb{K}).$
We call $\xi_{(a,b,c)}$ the central character determined by $V(a,b,c)$.
###### Proposition 4.9.
(1) Suppose $V(a,b,c)$ and $V(a^{\prime},b^{\prime},c^{\prime})$ are two Verma
modules. Then $\xi_{(a^{\prime},b^{\prime},c^{\prime})}=\xi_{(a,b,c)}$ if and
only if
(4.2) $\displaystyle(a-a^{\prime})(aa^{\prime}-q^{-2}b^{2})=0,\qquad
b=b^{\prime},\qquad c=c^{\prime}.$
(2)
$\operatorname{Hom}_{U_{g,h}}(V(a,b,c),V(a^{\prime},b^{\prime},c^{\prime}))\neq
0$ if and only if $a=\varepsilon q^{-n-1}b$ and $a^{\prime}=$ $\varepsilon
q^{n-1}b$ for some nonnegative integer $n$ and
$(b,c)=(b^{\prime},c^{\prime})$.
###### Proof.
Let $v,v^{\prime}$ be the nonzero highest weight vectors of $V(a,b,c)$ and
$V(a^{\prime},b^{\prime},c^{\prime})$ respectively. Then
$\xi_{(a^{\prime},b^{\prime},c^{\prime})}=\xi_{(a,b,c)}$ if and only if
$Cv^{\prime}=\xi_{(a,b,c)}(C)v^{\prime}$,
$gv^{\prime}=\xi_{(a,b,c)}(g)v^{\prime}$ and
$hv^{\prime}=\xi_{(a,b,c)}(h)v^{\prime}$. Thus (4.2) follows from (4.1).
If there is a nonzero homomorphism $\varphi$ from $V(a,b,c)$ to
$V(a^{\prime},b^{\prime},c^{\prime})$, then
$\xi_{(a^{\prime},b^{\prime},c^{\prime})}=\xi_{(a,b,c)}.$
Thus (4.2) holds. Suppose
$\varphi(v)=(\sum\limits_{i=0}^{n}a_{i}F^{i})v^{\prime}$, where $a_{n}\neq 0$.
Since $\varphi(Kv)=K\varphi(v)$,
$aa_{i}=q^{-2i}a_{i}a^{\prime}$
for $i=0,1,\cdots,n$. Hence $a=q^{-2n}a^{\prime}$ and $a_{i}=0$ for $0\leq
i\leq n-1$. Observe that
$0=\varphi(Ev)=E\varphi(v)=a_{n}EF^{n}v^{\prime}=a_{n}[n]\frac{a^{\prime}q^{-n+1}-a^{\prime-1}q^{n-1}b^{2}}{q-q^{-1}}F^{n-1}v^{\prime}.$
Hence $a^{\prime 2}=q^{2n-2}b^{2}$. So $a^{\prime}=\varepsilon q^{n-1}b$ and $a=q^{-2n}a^{\prime}=\varepsilon q^{-n-1}b$.
Conversely, notice that $V(a,b,c)=\mathbb{K}[F]v$ and
$V(a^{\prime},b^{\prime},c^{\prime})=\mathbb{K}[F]v^{\prime}$ are two free
$\mathbb{K}[F]$-modules. Thus the mapping
$\varphi(f(F)v)=f(F)F^{n}v^{\prime},\qquad f(F)\in\mathbb{K}[F]$
is a nonzero linear mapping. Since $b=b^{\prime}$ and $c=c^{\prime}$,
$\varphi(gf(F)v)=g\varphi(f(F)v)$ and $\varphi(hf(F)v)=h\varphi(f(F)v)$. It is
routine to check that $\varphi(Ef(F)v)=E\varphi(f(F)v)$ and
$\varphi(Kf(F)v)=K\varphi(f(F)v)$. So $\varphi$ is a nonzero homomorphism of
$U_{g,h}$-modules. ∎
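Note that the weights in part (2) satisfy $aa^{\prime}=(\varepsilon q^{-n-1}b)(\varepsilon q^{n-1}b)=q^{-2}b^{2}$, so such pairs indeed satisfy condition (4.2) of part (1).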
For any $\nu\in\operatorname{Hom}_{alg}(\mathbb{K}[C,g^{\pm 1},h^{\pm
1}],\mathbb{K})$, define a full subcategory $\mathcal{O}(\nu)$ of
$\mathcal{O}$ as follows:
$\mathcal{O}(\nu)=\\{M\in\mathcal{O}|\forall m\in M,z\in\mathbb{K}[C,g^{\pm
1},h^{\pm 1}],\exists n\in\mathbb{N}\text{ such that }(z-\nu(z))^{n}m=0\\}.$
For any $\nu\in\operatorname{Hom}_{alg}(\mathbb{K}[C,g^{\pm 1},h^{\pm
1}],\mathbb{K})$, suppose $\nu(C)=\mu$, $\nu(g)=b$, $\nu(h)=c$. Then
$b,c\in\mathbb{K}^{\times}$. Since $\mathbb{K}$ is an algebraically closed
field, there is $a\in\mathbb{K}$ such that
$\frac{qa+q^{-1}a^{-1}b^{2}}{(q-q^{-1})^{2}}=\mu$. Therefore the Verma module
$V(a,b,c)\in\mathcal{O}(\nu)$ by (4.1), and $\mathcal{O}(\nu)$ is not empty.
By results in [8, Theorem 11.2], we have the following decomposition of
$\mathcal{O}$.
###### Theorem 4.10.
The category
$\mathcal{O}=\bigoplus\limits_{\nu\in\operatorname{Hom}_{alg}(\mathbb{K}[C,g^{\pm
1},h^{\pm 1}],\mathbb{K})}\mathcal{O}(\nu).$
Let $\mathcal{H}$ be the Harish-Chandra category over $(U_{g,h},H)$, which
consists of all $U_{g,h}$-modules $M$ with a simultaneous weight space
decomposition for $H=\mathbb{K}[K^{\pm 1},g^{\pm 1},h^{\pm 1}]$, and finite-
dimensional weight spaces. By Proposition 2.5, $U_{g,h}$ has an anti-
involution $i$. Thus we can define a duality functor
$F:\mathcal{H}\rightarrow\mathcal{H}$ as follows: $F(M)$ is the vector space
spanned by all ${H}$-weight vectors in
$M^{*}=\operatorname{Hom}_{\mathbb{K}}(M,\mathbb{K}).$ It is a module under
the action determined by
$\langle am^{*},m\rangle=\langle m^{*},i(a)m\rangle$
for $a\in U_{g,h}$, $m^{*}\in F(M)$, $m\in M$. By results in [8], $F$ defines
a duality functor $F:\mathcal{O}\rightarrow\mathcal{O}^{op}$. Moreover,
$F(L(a,b,c))=L(a,b,c)$, $F(V(a,b,c))$ has the socle $L(a,b,c)$ and so on.
By Proposition 4.9, $U_{g,h}$ satisfies the condition (S4) defined in [8].
Therefore it satisfies the conditions (S1), (S2), and (S3) by [8, Proposition
11.3] and [8, Theorem 10.1], where (S1), (S2) and (S3) are defined in [8]. By
[8, Theorem 4.3], we have the following theorem since $\Gamma$ is trivial.
###### Theorem 4.11.
Let $\nu\in\operatorname{Hom}_{alg}(\mathbb{K}[C,g^{\pm 1},h^{\pm 1}],\mathbb{K})$ and $\mathcal{O}(\nu)$ have the same meaning as in
Theorem 4.10. Then:
(1) Each object of the block $\mathcal{O}(\nu)$ has a filtration whose
subquotients are quotients of Verma modules.
(2) Each block $\mathcal{O}(\nu)$ has enough projective objects.
(3) Each block $\mathcal{O}(\nu)$ is a highest weight category, equivalent to
the category of finitely generated right modules over a finite-dimensional
$\mathbb{K}$-algebra.
In particular, BGG Reciprocity holds in $\mathcal{O}$.
## References
* [1] Bass H., On the ubiquity of Gorenstein rings, Math. Z. 83 (1963), 8-28.
* [2] Brown K. A., Goodearl K. R., Lectures on algebraic quantum groups, Birkhauser Verlag, 2002.
* [3] Ekström, E.K., The Auslander condition on graded and filtered Noetherian ring, Lecture Notes in Mathematics 1404, 220-245.
* [4] Ito T., Terwilliger P., Weng C.W., The quantum algebra $U_{q}(\mathfrak{sl}_{2})$ and its equitable presentation, J. Algebra, 298 (2006), 284-301.
* [5] Kac V.G., Infinite dimensional Lie algebras, 3rd ed., Cambridge University Press, Cambridge, 1990.
* [6] Kassel C., Quantum groups, GTM155, Springer-Verlag, New York, Berlin Heidelberg, 1995.
* [7] Khare A., Category $\mathcal{O}$ over a deformation of the symplectic oscillator algebra, J. Pure Appl. Algebra, 195(2)(2005), 131-166.
* [8] Khare A., Functoriality of the BGG category $\mathcal{O}$, Communications in Algebra, 37(12)(2009), 4431-4475.
* [9] McConnell J. C., Robson J. C., Noncommutative Noetherian rings, AMS Graduate Studies in Mathematics, Providence, 2001.
* [10] Tikaradze, A., Khare, A., A Center and representations of infinitesimal Hecke algebras of $\mathfrak{sl}_{2}$, Comm. Algebra 38(2)(2010), 405-439.
* [11] Turaev V.G., Operator invariants of tangles and $R$-matrices, Izv. Akad. Nauk SSSR Ser. Math. 53(5)(1989), 1073-1107.
* [12] Weibel C. A., An introduction to homological algebra, China Machine Press, Beijing, 2004.
* [13] Wu Z., Extended quantum enveloping algebra of $\mathfrak{sl}(2)$, Glasgow Math. J., 51(2009), 441-465.
* [14] Wu Z., Extension of a quantized enveloping algebra by a Hopf algebra, Science China Math., 53(5)(2010), 1151-1406.
* [15] Zhang J.J., On Gelfand-Kirillov transcendence degree, Trans. AMS 348(7)(1996), 2867-2899.
|
arxiv-papers
| 2013-03-03T14:00:41 |
2024-09-04T02:49:42.340721
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Zhixiang Wu",
"submitter": "Zhixiang Wu",
"url": "https://arxiv.org/abs/1303.0498"
}
|
1303.0571
|
SLAC-PUB-15381
BABAR-PUB-13/001
††thanks: Deceased
The BABAR Collaboration
# Measurement of an Excess of $\overline{B}{}\rightarrow D^{(*)}\tau^{-}\overline{\nu}_{\tau}$ Decays and Implications for Charged Higgs Bosons
J. P. Lees V. Poireau V. Tisserand Laboratoire d’Annecy-le-Vieux de
Physique des Particules (LAPP), Université de Savoie, CNRS/IN2P3, F-74941
Annecy-Le-Vieux, France E. Grauges Universitat de Barcelona, Facultat de
Fisica, Departament ECM, E-08028 Barcelona, Spain A. Palanoab INFN Sezione di
Baria; Dipartimento di Fisica, Università di Barib, I-70126 Bari, Italy G.
Eigen B. Stugu University of Bergen, Institute of Physics, N-5007 Bergen,
Norway D. N. Brown L. T. Kerth Yu. G. Kolomensky M. Lee G. Lynch
Lawrence Berkeley National Laboratory and University of California, Berkeley,
California 94720, USA H. Koch T. Schroeder Ruhr Universität Bochum,
Institut für Experimentalphysik 1, D-44780 Bochum, Germany C. Hearty T. S.
Mattison J. A. McKenna R. Y. So University of British Columbia, Vancouver,
British Columbia, Canada V6T 1Z1 A. Khan Brunel University, Uxbridge,
Middlesex UB8 3PH, United Kingdom V. E. Blinov A. R. Buzykaev V. P.
Druzhinin V. B. Golubev E. A. Kravchenko A. P. Onuchin S. I. Serednyakov
Yu. I. Skovpen E. P. Solodov K. Yu. Todyshev A. N. Yushkov Budker
Institute of Nuclear Physics SB RAS, Novosibirsk 630090, Russia D. Kirkby A.
J. Lankford M. Mandelkern University of California at Irvine, Irvine,
California 92697, USA B. Dey J. W. Gary O. Long G. M. Vitug University of
California at Riverside, Riverside, California 92521, USA C. Campagnari M.
Franco Sevilla T. M. Hong D. Kovalskyi J. D. Richman C. A. West
University of California at Santa Barbara, Santa Barbara, California 93106,
USA A. M. Eisner W. S. Lockman A. J. Martinez B. A. Schumm A. Seiden
University of California at Santa Cruz, Institute for Particle Physics, Santa
Cruz, California 95064, USA D. S. Chao C. H. Cheng B. Echenard K. T. Flood
D. G. Hitlin P. Ongmongkolkul F. C. Porter California Institute of
Technology, Pasadena, California 91125, USA R. Andreassen Z. Huard B. T.
Meadows M. D. Sokoloff L. Sun University of Cincinnati, Cincinnati, Ohio
45221, USA P. C. Bloom W. T. Ford A. Gaz U. Nauenberg J. G. Smith S. R.
Wagner University of Colorado, Boulder, Colorado 80309, USA R. Ayad Now at
the University of Tabuk, Tabuk 71491, Saudi Arabia W. H. Toki Colorado State
University, Fort Collins, Colorado 80523, USA B. Spaan Technische
Universität Dortmund, Fakultät Physik, D-44221 Dortmund, Germany K. R.
Schubert R. Schwierz Technische Universität Dresden, Institut für Kern- und
Teilchenphysik, D-01062 Dresden, Germany D. Bernard M. Verderi Laboratoire
Leprince-Ringuet, Ecole Polytechnique, CNRS/IN2P3, F-91128 Palaiseau, France
S. Playfer University of Edinburgh, Edinburgh EH9 3JZ, United Kingdom D.
Bettonia C. Bozzia R. Calabreseab G. Cibinettoab E. Fioravantiab I. Garziaab
E. Luppiab L. Piemontesea V. Santoroa INFN Sezione di Ferraraa; Dipartimento
di Fisica e Scienze della Terra, Università di Ferrarab, I-44122 Ferrara,
Italy R. Baldini-Ferroli A. Calcaterra R. de Sangro G. Finocchiaro S.
Martellotti P. Patteri I. M. Peruzzi Also with Università di Perugia,
Dipartimento di Fisica, Perugia, Italy M. Piccolo M. Rama A. Zallo INFN
Laboratori Nazionali di Frascati, I-00044 Frascati, Italy R. Contriab E.
Guidoab M. Lo Vetereab M. R. Mongeab S. Passaggioa C. Patrignaniab E. Robuttia
INFN Sezione di Genovaa; Dipartimento di Fisica, Università di Genovab,
I-16146 Genova, Italy B. Bhuyan V. Prasad Indian Institute of Technology
Guwahati, Guwahati, Assam, 781 039, India M. Morii Harvard University,
Cambridge, Massachusetts 02138, USA A. Adametz U. Uwer Universität
Heidelberg, Physikalisches Institut, Philosophenweg 12, D-69120 Heidelberg,
Germany H. M. Lacker Humboldt-Universität zu Berlin, Institut für Physik,
Newtonstr. 15, D-12489 Berlin, Germany P. D. Dauncey Imperial College
London, London, SW7 2AZ, United Kingdom U. Mallik University of Iowa, Iowa
City, Iowa 52242, USA C. Chen J. Cochran W. T. Meyer S. Prell A. E. Rubin
Iowa State University, Ames, Iowa 50011-3160, USA A. V. Gritsan Johns
Hopkins University, Baltimore, Maryland 21218, USA N. Arnaud M. Davier D.
Derkach G. Grosdidier F. Le Diberder A. M. Lutz B. Malaescu P. Roudeau
A. Stocchi G. Wormser Laboratoire de l’Accélérateur Linéaire, IN2P3/CNRS et
Université Paris-Sud 11, Centre Scientifique d’Orsay, B. P. 34, F-91898 Orsay
Cedex, France D. J. Lange D. M. Wright Lawrence Livermore National
Laboratory, Livermore, California 94550, USA J. P. Coleman J. R. Fry E.
Gabathuler D. E. Hutchcroft D. J. Payne C. Touramanis University of
Liverpool, Liverpool L69 7ZE, United Kingdom A. J. Bevan F. Di Lodovico R.
Sacco Queen Mary, University of London, London, E1 4NS, United Kingdom G.
Cowan University of London, Royal Holloway and Bedford New College, Egham,
Surrey TW20 0EX, United Kingdom J. Bougher D. N. Brown C. L. Davis
University of Louisville, Louisville, Kentucky 40292, USA A. G. Denig M.
Fritsch W. Gradl K. Griessinger A. Hafner E. Prencipe Johannes Gutenberg-
Universität Mainz, Institut für Kernphysik, D-55099 Mainz, Germany R. J.
Barlow Now at the University of Huddersfield, Huddersfield HD1 3DH, UK G. D.
Lafferty University of Manchester, Manchester M13 9PL, United Kingdom E.
Behn R. Cenci B. Hamilton A. Jawahery D. A. Roberts University of
Maryland, College Park, Maryland 20742, USA R. Cowan D. Dujmic G. Sciolla
Massachusetts Institute of Technology, Laboratory for Nuclear Science,
Cambridge, Massachusetts 02139, USA R. Cheaib P. M. Patel S. H. Robertson
McGill University, Montréal, Québec, Canada H3A 2T8 P. Biassoniab N. Neria F.
Palomboab INFN Sezione di Milanoa; Dipartimento di Fisica, Università di
Milanob, I-20133 Milano, Italy L. Cremaldi R. Godang Now at University of
South Alabama, Mobile, Alabama 36688, USA P. Sonnek D. J. Summers
University of Mississippi, University, Mississippi 38677, USA X. Nguyen M.
Simard P. Taras Université de Montréal, Physique des Particules, Montréal,
Québec, Canada H3C 3J7 G. De Nardoab D. Monorchioab G. Onoratoab C. Sciaccaab
INFN Sezione di Napolia; Dipartimento di Scienze Fisiche, Università di Napoli
Federico IIb, I-80126 Napoli, Italy M. Martinelli G. Raven NIKHEF, National
Institute for Nuclear Physics and High Energy Physics, NL-1009 DB Amsterdam,
Netherlands C. P. Jessop J. M. LoSecco University of Notre Dame, Notre
Dame, Indiana 46556, USA K. Honscheid R. Kass Ohio State University,
Columbus, Ohio 43210, USA J. Brau R. Frey N. B. Sinev D. Strom E.
Torrence University of Oregon, Eugene, Oregon 97403, USA E. Feltresiab M.
Margoniab M. Morandina M. Posoccoa M. Rotondoa G. Simia F. Simonettoab R.
Stroiliab INFN Sezione di Padovaa; Dipartimento di Fisica, Università di
Padovab, I-35131 Padova, Italy S. Akar E. Ben-Haim M. Bomben G. R.
Bonneaud H. Briand G. Calderini J. Chauveau Ph. Leruste G. Marchiori J.
Ocariz S. Sitt Laboratoire de Physique Nucléaire et de Hautes Energies,
IN2P3/CNRS, Université Pierre et Marie Curie-Paris6, Université Denis Diderot-
Paris7, F-75252 Paris, France M. Biasiniab E. Manonia S. Pacettiab A. Rossiab
INFN Sezione di Perugiaa; Dipartimento di Fisica, Università di Perugiab,
I-06100 Perugia, Italy C. Angeliniab G. Batignaniab S. Bettariniab M.
Carpinelliab Also with Università di Sassari, Sassari, Italy G. Casarosaab A.
Cervelliab F. Fortiab M. A. Giorgiab A. Lusianiac B. Oberhofab E. Paoloniab A.
Pereza G. Rizzoab J. J. Walsha INFN Sezione di Pisaa; Dipartimento di Fisica,
Università di Pisab; Scuola Normale Superiore di Pisac, I-56127 Pisa, Italy
D. Lopes Pegna J. Olsen A. J. S. Smith Princeton University, Princeton, New
Jersey 08544, USA R. Facciniab F. Ferrarottoa F. Ferroniab M. Gasperoab L. Li
Gioia G. Pireddaa INFN Sezione di Romaa; Dipartimento di Fisica, Università di
Roma La Sapienzab, I-00185 Roma, Italy C. Bünger O. Grünberg T. Hartmann
T. Leddig C. Voß R. Waldi Universität Rostock, D-18051 Rostock, Germany T.
Adye E. O. Olaiya F. F. Wilson Rutherford Appleton Laboratory, Chilton,
Didcot, Oxon, OX11 0QX, United Kingdom S. Emery G. Hamel de Monchenault G.
Vasseur Ch. Yèche CEA, Irfu, SPP, Centre de Saclay, F-91191 Gif-sur-Yvette,
France F. Anullia D. Aston D. J. Bard J. F. Benitez C. Cartaro M. R.
Convery J. Dorfan G. P. Dubois-Felsmann W. Dunwoodie M. Ebert R. C. Field
B. G. Fulsom A. M. Gabareen M. T. Graham C. Hast W. R. Innes P. Kim M.
L. Kocian D. W. G. S. Leith P. Lewis D. Lindemann B. Lindquist S. Luitz
V. Luth H. L. Lynch D. B. MacFarlane D. R. Muller H. Neal S. Nelson M.
Perl T. Pulliam B. N. Ratcliff A. Roodman A. A. Salnikov R. H. Schindler
A. Snyder D. Su M. K. Sullivan J. Va’vra A. P. Wagner W. F. Wang W. J.
Wisniewski M. Wittgen D. H. Wright H. W. Wulsin V. Ziegler SLAC National
Accelerator Laboratory, Stanford, California 94309 USA W. Park M. V. Purohit
R. M. White Now at Universidad Técnica Federico Santa Maria, Valparaiso,
Chile 2390123 J. R. Wilson University of South Carolina, Columbia, South
Carolina 29208, USA A. Randle-Conde S. J. Sekula Southern Methodist
University, Dallas, Texas 75275, USA M. Bellis P. R. Burchat T. S.
Miyashita E. M. T. Puccio Stanford University, Stanford, California
94305-4060, USA M. S. Alam J. A. Ernst State University of New York,
Albany, New York 12222, USA R. Gorodeisky N. Guttman D. R. Peimer A.
Soffer Tel Aviv University, School of Physics and Astronomy, Tel Aviv, 69978,
Israel S. M. Spanier University of Tennessee, Knoxville, Tennessee 37996,
USA J. L. Ritchie A. M. Ruland R. F. Schwitters B. C. Wray University of
Texas at Austin, Austin, Texas 78712, USA J. M. Izen X. C. Lou University
of Texas at Dallas, Richardson, Texas 75083, USA F. Bianchiab F. De Moriab A.
Filippia D. Gambaab S. Zambitoab INFN Sezione di Torinoa; Dipartimento di
Fisica Sperimentale, Università di Torinob, I-10125 Torino, Italy L.
Lanceriab L. Vitaleab INFN Sezione di Triestea; Dipartimento di Fisica,
Università di Triesteb, I-34127 Trieste, Italy F. Martinez-Vidal A.
Oyanguren P. Villanueva-Perez IFIC, Universitat de Valencia-CSIC, E-46071
Valencia, Spain H. Ahmed J. Albert Sw. Banerjee F. U. Bernlochner H. H.
F. Choi G. J. King R. Kowalewski M. J. Lewczuk T. Lueck I. M. Nugent J.
M. Roney R. J. Sobie N. Tasneem University of Victoria, Victoria, British
Columbia, Canada V8W 3P6 T. J. Gershon P. F. Harrison T. E. Latham
Department of Physics, University of Warwick, Coventry CV4 7AL, United Kingdom
H. R. Band S. Dasu Y. Pan R. Prepost S. L. Wu University of Wisconsin,
Madison, Wisconsin 53706, USA
(March 2, 2013)
###### Abstract
Based on the full BABAR data sample, we report improved measurements of the
ratios ${\cal R}(D)={\cal B}(\overline{B}\rightarrow D\tau^{-}\overline{\nu}_{\tau})/{\cal B}(\overline{B}\rightarrow D\ell^{-}\overline{\nu}_{\ell})$ and
${\cal R}(D^{*})={\cal B}(\overline{B}\rightarrow D^{*}\tau^{-}\overline{\nu}_{\tau})/{\cal B}(\overline{B}\rightarrow D^{*}\ell^{-}\overline{\nu}_{\ell})$, where $\ell$ refers to either an
electron or muon. These ratios are sensitive to new physics contributions in
the form of a charged Higgs boson. We measure ${\cal R}(D)=0.440\pm 0.058\pm
0.042$ and ${\cal R}(D^{*})=0.332\pm 0.024\pm 0.018$, which exceed the
Standard Model expectations by $2.0\sigma$ and $2.7\sigma$, respectively.
Taken together, the results disagree with these expectations at the
$3.4\sigma$ level. This excess cannot be explained by a charged Higgs boson in
the type II two-Higgs-doublet model. Kinematic distributions presented here
exclude large portions of the more general type III two-Higgs-doublet model,
but there are solutions within this model compatible with the results.
###### pacs:
13.20.He, 14.40.Nd, 14.80.Da
preprint: BABAR-PUB-13/001; SLAC-PUB-15381
## I Introduction
In the Standard Model (SM), semileptonic decays of $B$ mesons proceed via
first-order electroweak interactions and are mediated by the $W$ boson
Heiliger and Sehgal (1989); Körner and Schuler (1990); Hwang and Kim (2000).
Decays involving electrons and muons are expected to be insensitive to non-SM
contributions and therefore have been the bases of the determination of the
Cabibbo-Kobayashi-Maskawa (CKM) matrix elements $|V_{cb}|$ and $|V_{ub}|$
Amhis _et al._ (2012). Decays involving the higher-mass $\tau$ lepton provide
additional information on SM processes and are sensitive to additional
amplitudes, such as those involving an intermediate charged Higgs boson Tanaka
(1995); Itoh _et al._ (2005); Nierste _et al._ (2008); Tanaka and Watanabe
(2010); Fajfer _et al._ (2012a). Thus, they offer an excellent opportunity to
search for this and other non-SM contributions.
Over the past two decades, the development of heavy-quark effective theory
(HQET) and precise measurements of $\kern
1.79993pt\overline{\kern-1.79993ptB}{}\rightarrow
D^{(*)}\ell^{-}\overline{\nu}_{\ell}$ decays at the $B$ factories
Antonelli _et al._ (2010); Nakamura _et al._ (2010) have greatly improved
our understanding of exclusive semileptonic decays. The relative rates
${\cal R}(D)=\frac{{\cal B}(\kern
1.79993pt\overline{\kern-1.79993ptB}{}\rightarrow
D\tau^{-}\overline{\nu}_{\tau})}{{\cal B}(\kern
1.79993pt\overline{\kern-1.79993ptB}{}\rightarrow
D\ell^{-}\overline{\nu}_{\ell})},\hskip 5.69054pt{\cal R}(D^{*})=\frac{{\cal
B}(\kern 1.79993pt\overline{\kern-1.79993ptB}{}\rightarrow
D^{*}\tau^{-}\overline{\nu}_{\tau})}{{\cal B}(\kern
1.79993pt\overline{\kern-1.79993ptB}{}\rightarrow
D^{*}\ell^{-}\overline{\nu}_{\ell})}$ (1)
are independent of the CKM element $|V_{cb}|$ and also, to a large extent, of
the parameterization of the hadronic matrix elements. SM expectations Fajfer
_et al._ (2012a) for the ratios ${\cal R}(D)$ and ${\cal R}(D^{*})$ have
uncertainties of less than 6% and 2%, respectively. Calculations Tanaka
(1995); Itoh _et al._ (2005); Nierste _et al._ (2008); Tanaka and Watanabe
(2010); Fajfer _et al._ (2012a) based on two-Higgs-doublet models predict a
substantial impact on the ratio ${\cal R}(D)$, and a smaller effect on ${\cal
R}(D^{*})$ due to the spin of the $D^{*}$ meson.
The decay $\kern 1.79993pt\overline{\kern-1.79993ptB}{}\rightarrow
D^{*}\tau^{-}\overline{\nu}_{\tau}$ was first observed in 2007 by the Belle
Collaboration Matyja _et al._ (2007). Since then, both BABAR and Belle have
published improved measurements, and have found evidence for $\kern
1.79993pt\overline{\kern-1.79993ptB}{}\rightarrow
D\tau^{-}\overline{\nu}_{\tau}$ decays Aubert _et al._ (2008a); Adachi _et
al._ (2009); Bozek _et al._ (2010). Up to now, the measured values for ${\cal
R}(D)$ and ${\cal R}(D^{*})$ have consistently exceeded the SM expectations,
though the significance of the excess is low due to the large statistical
uncertainties.
We recently presented an update of the earlier measurement Aubert _et al._
(2008a) based on the full BABAR data sample Lees _et al._ (2012). This update
included improvements to the event reconstruction that increased the signal
efficiency by more than a factor of 3. In the following, we describe the
analysis in greater detail, present the distributions of some important
kinematic variables, and expand the interpretation of the results.
We choose to reconstruct only the purely leptonic decays of the $\tau$ lepton,
$\tau^{-}\rightarrow e^{-}\overline{\nu}_{e}\nu_{\tau}$ and
$\tau^{-}\rightarrow\mu^{-}\overline{\nu}_{\mu}\nu_{\tau}$, so that $\kern
1.79993pt\overline{\kern-1.79993ptB}{}\rightarrow
D^{(*)}\tau^{-}\overline{\nu}_{\tau}$ and $\kern
1.79993pt\overline{\kern-1.79993ptB}{}\rightarrow
D^{(*)}\ell^{-}\overline{\nu}_{\ell}$ decays are identified by the same
particles in the final state. This leads to the cancellation of various
detection efficiencies and the reduction of related uncertainties on the
ratios ${\cal R}(D^{(*)})$.
Candidate events originating from $\Upsilon{(4S)}\rightarrow
B\kern 1.79993pt\overline{\kern-1.79993ptB}{}$ decays are selected by fully
reconstructing the hadronic decay of one of the $B$ mesons ($B_{\rm tag}$),
and identifying the semileptonic decay of the other $B$ by a charm meson
(charged or neutral $D$ or $D^{*}$ meson), a charged lepton (either $e$ or
$\mu$) and the missing momentum and energy in the whole event.
Yields for the signal decays $\kern
1.79993pt\overline{\kern-1.79993ptB}{}\rightarrow
D^{(*)}\tau^{-}\overline{\nu}_{\tau}$ and the normalization decays $\kern
1.79993pt\overline{\kern-1.79993ptB}{}\rightarrow
D^{(*)}\ell^{-}\overline{\nu}_{\ell}$ are extracted by an unbinned maximum-
likelihood fit to the two-dimensional distributions of the invariant mass of
the undetected particles $m_{\rm miss}^{2}=p^{2}_{\rm
miss}=(p_{e^{+}e^{-}}-p_{B_{\rm tag}}-p_{D^{(*)}}-p_{\ell})^{2}$ (where
$p_{e^{+}e^{-}}$, $p_{B_{\rm tag}}$, $p_{D^{(*)}}$, and $p_{\ell}$ refer to
the four-momenta of the colliding beams, the $B_{\rm tag}$, the $D^{(*)}$, and
the charged lepton, respectively) versus the lepton three-momentum in the $B$
rest frame, $|\boldsymbol{p}^{*}_{\ell}|$. The $m_{\rm miss}^{2}$ distribution
for decays with a single missing neutrino peaks at zero, whereas signal
events, which have three missing neutrinos, have a broad $m_{\rm miss}^{2}$
distribution that extends to about $9\mathrm{\,Ge\kern-1.00006ptV}^{2}$. The
observed lepton in signal events is a secondary particle from the $\tau$
decay, so its $|\boldsymbol{p}^{*}_{\ell}|$ spectrum is softer than for
primary leptons in normalization decays.
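To make the two fit variables concrete, the following minimal Python sketch (illustrative only, not analysis code; the four-momenta and the boost helper are hypothetical) evaluates $m_{\rm miss}^{2}$ and $|\boldsymbol{p}^{*}_{\ell}|$ from reconstructed four-momenta in the c.m. frame, approximating the signal-$B$ four-momentum by $p_{e^{+}e^{-}}-p_{B_{\rm tag}}$.

```python
import numpy as np

def minv2(p):
    """Invariant mass squared of a four-vector p = (E, px, py, pz)."""
    E, px, py, pz = p
    return E**2 - px**2 - py**2 - pz**2

def boost_to_rest_frame(p, frame):
    """Boost four-vector p into the rest frame of four-vector `frame`."""
    E, vec = frame[0], np.array(frame[1:])
    m = np.sqrt(minv2(frame))
    beta = vec / E                      # velocity of `frame` in the current frame
    gamma = E / m
    pvec = np.array(p[1:])
    bp = np.dot(beta, pvec)
    E_prime = gamma * (p[0] - bp)
    p_prime = pvec + beta * ((gamma - 1.0) * bp / np.dot(beta, beta) - gamma * p[0])
    return np.concatenate(([E_prime], p_prime))

# Hypothetical reconstructed four-momenta in the e+e- c.m. frame (GeV).
p_ee   = np.array([10.58, 0.0, 0.0, 0.0])       # colliding beams
p_Btag = np.array([5.29, 0.10, -0.05, 0.20])     # fully reconstructed tag B
p_D    = np.array([2.30, -0.30, 0.40, -0.90])    # signal-side D(*)
p_lep  = np.array([0.66, 0.25, -0.35, 0.50])     # signal-side charged lepton

# m_miss^2 = (p_ee - p_Btag - p_D(*) - p_lep)^2
m_miss2 = minv2(p_ee - p_Btag - p_D - p_lep)

# Lepton three-momentum in the signal-B rest frame, with p_Bsig ~ p_ee - p_Btag.
p_Bsig = p_ee - p_Btag
p_star_lep = np.linalg.norm(boost_to_rest_frame(p_lep, p_Bsig)[1:])

print(f"m_miss^2 = {m_miss2:.2f} GeV^2, |p*_lep| = {p_star_lep:.2f} GeV")
```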
The principal sources of background originate from $B\kern
1.79993pt\overline{\kern-1.79993ptB}{}$ decays and from continuum events,
i.e., $e^{+}e^{-}\rightarrow f\overline{f}(\gamma)$ pair production, where
$f=u,d,s,c,\tau$. The yields and distributions of these two background sources
are derived from selected data control samples. The background decays that are
most difficult to separate from signal decays come from semileptonic decays to
higher-mass, excited charm mesons, since they can produce similar $m_{\rm
miss}^{2}$ and $|\boldsymbol{p}^{*}_{\ell}|$ values to signal decays and their
branching fractions and decay properties are not well known. Thus, their
impact on the signal yield is examined in detail.
The choice of the selection criteria and fit configuration are based on
samples of simulated and data events. To avoid bias in the determination of
the signal yield, the signal region was blinded for data until the analysis
procedure was settled.
## II Theory of $\boldsymbol{\kern
1.79993pt\overline{\kern-1.79993ptB}{}\rightarrow
D^{(*)}\tau^{-}\overline{\nu}_{\tau}}$ Decays
### II.1 Standard Model
Given that leptons are not affected by quantum chromodynamic (QCD)
interactions (see Fig. 1), the matrix element of $\kern
1.79993pt\overline{\kern-1.79993ptB}{}\rightarrow
D^{(*)}\tau^{-}\overline{\nu}_{\tau}$ decays can be factorized in the form
Tanaka (1995)
$\mathcal{M}^{\lambda_{\tau}}_{\lambda_{D^{(*)}}}(q^{2},\theta_{\tau})=\frac{G_{F}V_{cb}}{\sqrt{2}}\sum_{\lambda_{W}}\eta_{\lambda_{W}}L^{\lambda_{\tau}}_{\lambda_{W}}(q^{2},\theta_{\tau})H^{\lambda_{D^{(*)}}}_{\lambda_{W}}(q^{2}),$
(2)
where $L^{\lambda_{\tau}}_{\lambda_{W}}$ and
$H^{\lambda_{D^{(*)}}}_{\lambda_{W}}$ are the leptonic and hadronic currents
defined as
$L^{\lambda_{\tau}}_{\lambda_{W}}(q^{2},\theta_{\tau})\;\equiv\;\epsilon_{\mu}(\lambda_{W})\left<\tau\;\overline{\nu}_{\tau}|\overline{\tau}\;\gamma^{\mu}(1-\gamma_{5})\;\nu_{\tau}|0\right>,$ (3)
$H^{\lambda_{D^{(*)}}}_{\lambda_{W}}(q^{2})\;\equiv\;\epsilon^{*}_{\mu}(\lambda_{W})\left<D^{(*)}\;|\overline{c}\;\gamma^{\mu}(1-\gamma_{5})\;b|\overline{B}\right>.$ (4)
Here, the indices $\lambda$ refer to the helicities of the $W$, $D^{(*)}$, and
$\tau$, $q=p_{B}-p_{D^{(*)}}$ is the four-momentum of the virtual $W$, and
$\theta_{\tau}$ is the angle between the $\tau$ and the $D^{(*)}$ three-
momenta measured in the rest frame of the virtual $W$. The metric factor
$\eta$ in Eq. 2 is $\eta_{\{\pm,0,s\}}=\{1,1,-1\}$, where
$\lambda_{W}=\pm$, 0, and $s$ refer to the four helicity states of the virtual
$W$ boson ($s$ is the scalar state which, of course, has helicity 0).
Figure 1: Parton level diagram for $\kern
1.79993pt\overline{\kern-1.79993ptB}{}\rightarrow
D^{(*)}\tau^{-}\overline{\nu}_{\tau}$ decays. The gluon lines illustrate the
QCD interactions that affect the hadronic part of the amplitude.
The leptonic currents can be calculated analytically with the standard
framework of electroweak interactions. In the rest frame of the virtual $W$
($W^{*}$), they take the form Hagiwara _et al._ (1989):
$L^{-}_{\pm}=-2\sqrt{q^{2}}\,v\,d_{\pm},\qquad L^{+}_{\pm}=\mp\sqrt{2}\,m_{\tau}\,v\,d_{0},$ (5)
$L^{-}_{0}=-2\sqrt{q^{2}}\,v\,d_{0},\qquad L^{+}_{0}=\sqrt{2}\,m_{\tau}\,v\,(d_{+}-d_{-}),$ (6)
$L^{-}_{s}=0,\qquad L^{+}_{s}=-2m_{\tau}\,v,$ (7)
with
$v=\sqrt{1-\frac{m_{\tau}^{2}}{q^{2}}},\qquad d_{\pm}=\frac{1\pm\cos\theta_{\tau}}{\sqrt{2}},\qquad d_{0}=\sin\theta_{\tau}.$ (8)
Given that the average $q^{2}$ in $\kern
1.79993pt\overline{\kern-1.79993ptB}{}\rightarrow
D^{(*)}\tau^{-}\overline{\nu}_{\tau}$ decays is about 8
$\mathrm{\,Ge\kern-1.00006ptV}^{2}$, the fraction of $\tau^{-}$ leptons with
positive helicity is about 30% in the SM.
Due to the nonperturbative nature of the QCD interaction at this energy scale,
the hadronic currents cannot be calculated analytically. They are expressed in
terms of form factors (FF) as functions of $q^{2}$ (see Secs. II.1.1 and
II.1.2).
The differential decay rate, integrated over angles, is derived from Eq. 2 and
Eqs. 5–7 Körner and Schuler (1990):
$\frac{{\rm d}\Gamma_{\tau}}{{\rm d}q^{2}}=\frac{G_{F}^{2}\;|V_{cb}|^{2}\;|\boldsymbol{p}^{*}_{D^{(*)}}|\;q^{2}}{96\pi^{3}m_{B}^{2}}\left(1-\frac{m_{\tau}^{2}}{q^{2}}\right)^{2}\left[\left(|H_{+}|^{2}+|H_{-}|^{2}+|H_{0}|^{2}\right)\left(1+\frac{m_{\tau}^{2}}{2q^{2}}\right)+\frac{3m_{\tau}^{2}}{2q^{2}}|H_{s}|^{2}\right],$ (9)
where $|\boldsymbol{p}^{*}_{D^{(*)}}|$ is the three-momentum of the $D^{(*)}$
meson in the $B$ rest frame. For simplicity, the helicities of the $D^{(*)}$
meson and the $q^{2}$ dependence of the hadron helicity amplitudes
$H_{\pm,0,s}$ have been omitted. The assignment is unambiguous because in
$\kern 1.79993pt\overline{\kern-1.79993ptB}{}\rightarrow
D^{*}\tau^{-}\overline{\nu}_{\tau}$ decays, $H_{\pm}$ only receive
contributions from $\lambda_{D^{*}}=\pm$, while $H_{0,s}$ require
$\lambda_{D^{*}}=0$. In $\kern
1.79993pt\overline{\kern-1.79993ptB}{}\rightarrow
D\tau^{-}\overline{\nu}_{\tau}$ decays, only $\lambda_{D}=s$ is possible,
which implies $H_{\pm}=0$.
#### II.1.1 Form factor parameterization of $\kern
1.79993pt\overline{\kern-1.79993ptB}{}\rightarrow
D^{*}\tau^{-}\overline{\nu}_{\tau}$ decays
Four independent FFs, $V$, $A_{0}$, $A_{1}$, and $A_{2}$, describe the non-
perturbative QCD interactions in $\kern
1.79993pt\overline{\kern-1.79993ptB}{}\rightarrow
D^{*}\tau^{-}\overline{\nu}_{\tau}$ decays. Based on the FF convention of Ref.
Fajfer _et al._ (2012a), the hadronic currents take the following form:
$H_{\pm}(q^{2})=(m_{B}+m_{D^{*}})A_{1}(q^{2})\mp\frac{2m_{B}}{m_{B}+m_{D^{*}}}|\boldsymbol{p}^{*}_{D^{*}}|V(q^{2}),$
$H_{0}(q^{2})=\frac{-1}{2m_{D^{*}}\sqrt{q^{2}}}\left[\frac{4m_{B}^{2}|\boldsymbol{p}^{*}_{D^{*}}|^{2}}{m_{B}+m_{D^{*}}}A_{2}(q^{2})-(m_{B}^{2}-m_{D^{*}}^{2}-q^{2})(m_{B}+m_{D^{*}})A_{1}(q^{2})\right],$
$H_{s}(q^{2})=\frac{2m_{B}|\boldsymbol{p}^{*}_{D^{*}}|}{\sqrt{q^{2}}}A_{0}(q^{2})\,.$ (10)
In this analysis, we use an HQET-based parameterization for the FFs that is
expressed in terms of the scalar product of the $B$ and $D^{*}$ four-
velocities
$w\equiv v_{B}\cdot
v_{D^{*}}=\frac{m_{B}^{2}+m_{D^{*}}^{2}-q^{2}}{2m_{D^{*}}m_{B}}.$ (11)
Its minimum value $w_{\rm min}=1$ corresponds to $q^{2}_{\rm
max}=(m_{B}-m_{D^{*}})^{2}$. The maximum value is obtained for the lowest
possible value of $q^{2}$, which is the square of the mass of the lepton.
Thus, $w_{\rm max}=1.35$ for $\kern
1.79993pt\overline{\kern-1.79993ptB}{}\rightarrow
D^{*}\tau^{-}\overline{\nu}_{\tau}$ decays and $w_{\rm max}=1.51$ for $\kern
1.79993pt\overline{\kern-1.79993ptB}{}\rightarrow
D^{*}\ell^{-}\overline{\nu}_{\ell}$ decays.
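These kinematic limits follow directly from Eq. 11; a few-line check, assuming nominal $B$ and $D^{*}$ masses (the exact end points depend on whether charged or neutral mesons are used):

```python
import math

m_B, m_Dst = 5.279, 2.010   # GeV, approximate B and D* masses
m_tau, m_ell = 1.777, 0.0   # GeV; the e/mu mass is negligible here

def w_of_q2(q2):
    """w = v_B . v_D* as a function of q^2 (Eq. 11)."""
    return (m_B**2 + m_Dst**2 - q2) / (2.0 * m_B * m_Dst)

q2_max = (m_B - m_Dst)**2      # w_min = 1 is reached at q^2_max
print(w_of_q2(q2_max))         # -> 1.00
print(w_of_q2(m_tau**2))       # w_max for tau modes, ~1.35
print(w_of_q2(m_ell**2))       # w_max for light-lepton modes, ~1.5
```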
In this framework, the FFs are usually expressed in terms of a universal form
factor $h_{A_{1}}(w)$ and ratios $R_{i}(w)$:
$\displaystyle A_{1}(w)$ $\displaystyle=\frac{w+1}{2}r_{D^{*}}h_{A_{1}}(w),$
$\displaystyle A_{0}(w)$
$\displaystyle=\frac{R_{0}(w)}{r_{D^{*}}}h_{A_{1}}(w),$ $\displaystyle
A_{2}(w)$ $\displaystyle=\frac{R_{2}(w)}{r_{D^{*}}}h_{A_{1}}(w),$
$\displaystyle V(w)$ $\displaystyle=\frac{R_{1}(w)}{r_{D^{*}}}h_{A_{1}}(w),$
where $r_{D^{*}}=2\sqrt{m_{B}m_{D^{*}}}/(m_{B}+m_{D^{*}})$. Using dispersion
relations and analyticity constraints Caprini _et al._ (1998); Fajfer _et
al._ (2012a), the universal FF and the ratios can be expressed in terms of
just five parameters:
$\displaystyle h_{A_{1}}(w)$
$\displaystyle=h_{A_{1}}(1)\;[1-8\rho^{2}_{D^{*}}z(w)+(53\rho^{2}_{D^{*}}-15)z(w)^{2}$
$\displaystyle\hskip 48.36967pt-(231\rho^{2}_{D^{*}}-91)z(w)^{3}],$
$\displaystyle R_{1}(w)$ $\displaystyle=R_{1}(1)-0.12(w-1)+0.05(w-1)^{2},$
$\displaystyle R_{2}(w)$ $\displaystyle=R_{2}(1)+0.11(w-1)-0.06(w-1)^{2},$
$\displaystyle R_{0}(w)$ $\displaystyle=R_{0}(1)-0.11(w-1)+0.01(w-1)^{2}.$
Here, $z(w)=(\sqrt{w+1}-\sqrt{2})/(\sqrt{w+1}+\sqrt{2})$. The factor
$h_{A_{1}}(1)$ only affects the overall normalization, so it cancels in the
ratio ${\cal R}(D^{*})$.
Three of the remaining four FF parameters, $R_{1}(1)$, $R_{2}(1)$, and
$\rho_{D^{*}}^{2}$, have been measured in analyses of $\kern
1.79993pt\overline{\kern-1.79993ptB}{}\rightarrow
D^{*}\ell^{-}\overline{\nu}_{\ell}$ decays. The most recent averages by the
Heavy Flavor Averaging Group (HFAG) Amhis _et al._ (2012) and their
correlations $C$ are:
$\rho_{D^{*}}^{2}=1.207\pm 0.028,\qquad C(\rho_{D^{*}}^{2},R_{1}(1))=0.566,$
$R_{1}(1)=1.401\pm 0.033,\qquad C(\rho_{D^{*}}^{2},R_{2}(1))=-0.807,$
$R_{2}(1)=0.854\pm 0.020,\qquad C(R_{1}(1),R_{2}(1))=-0.758.$
$R_{0}(w)$ affects the decay rate only via the scalar hadronic amplitude
$H_{s}(q^{2})$. The corresponding leptonic amplitude
$L_{s}(q^{2},\theta_{\tau})$ is helicity suppressed, i.e., its amplitude is
proportional to the mass of the lepton (Eq. 7). As a result, $\kern
1.79993pt\overline{\kern-1.79993ptB}{}\rightarrow
D^{*}\ell^{-}\overline{\nu}_{\ell}$ decays are not sensitive to this FF, and
$R_{0}(w)$ has not been measured. We therefore rely on a theoretical estimate,
$R_{0}(1)=1.14\pm 0.07$, based on HQET Fajfer _et al._ (2012a).
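The following sketch (not the analysis code) illustrates how the five parameters above, with $h_{A_{1}}(1)$ set to 1 since it cancels, determine the hadronic amplitudes of Eq. 10 and, through Eqs. 9 and 16, the ratio ${\cal R}(D^{*})$; the masses are nominal values inserted for illustration and parameter correlations are ignored.

```python
import numpy as np
from scipy.integrate import quad

# Nominal masses in GeV (illustrative; the analysis treats B0 and B- separately).
m_B, m_Dst, m_tau = 5.279, 2.010, 1.777

# CLN-type parameters: the HFAG averages quoted above plus the HQET estimate R0(1).
rho2, R1_1, R2_1, R0_1 = 1.207, 1.401, 0.854, 1.14
r_Dst = 2.0 * np.sqrt(m_B * m_Dst) / (m_B + m_Dst)

def z(w):
    return (np.sqrt(w + 1) - np.sqrt(2)) / (np.sqrt(w + 1) + np.sqrt(2))

def hA1(w):
    # Universal form factor; hA1(1) is set to 1 since it cancels in R(D*).
    return 1 - 8 * rho2 * z(w) + (53 * rho2 - 15) * z(w)**2 - (231 * rho2 - 91) * z(w)**3

def form_factors(w):
    A1 = 0.5 * (w + 1) * r_Dst * hA1(w)
    A0 = (R0_1 - 0.11 * (w - 1) + 0.01 * (w - 1)**2) / r_Dst * hA1(w)
    A2 = (R2_1 + 0.11 * (w - 1) - 0.06 * (w - 1)**2) / r_Dst * hA1(w)
    V  = (R1_1 - 0.12 * (w - 1) + 0.05 * (w - 1)**2) / r_Dst * hA1(w)
    return A1, A0, A2, V

def helicity_amps(q2):
    """Hadronic helicity amplitudes H+, H-, H0, Hs of Eq. 10 and |p*_D*|."""
    w = (m_B**2 + m_Dst**2 - q2) / (2 * m_B * m_Dst)
    A1, A0, A2, V = form_factors(w)
    E_D = (m_B**2 + m_Dst**2 - q2) / (2 * m_B)
    p = np.sqrt(max(E_D**2 - m_Dst**2, 0.0))   # |p*_D*| in the B rest frame
    Hp = (m_B + m_Dst) * A1 - 2 * m_B / (m_B + m_Dst) * p * V
    Hm = (m_B + m_Dst) * A1 + 2 * m_B / (m_B + m_Dst) * p * V
    H0 = -(4 * m_B**2 * p**2 / (m_B + m_Dst) * A2
           - (m_B**2 - m_Dst**2 - q2) * (m_B + m_Dst) * A1) / (2 * m_Dst * np.sqrt(q2))
    Hs = 2 * m_B * p / np.sqrt(q2) * A0
    return Hp, Hm, H0, Hs, p

def dGamma_dq2(q2, m_lep):
    """Eq. 9, dropping the constants that cancel in the ratio."""
    Hp, Hm, H0, Hs, p = helicity_amps(q2)
    return (p * q2 * (1 - m_lep**2 / q2)**2
            * ((Hp**2 + Hm**2 + H0**2) * (1 + m_lep**2 / (2 * q2))
               + 1.5 * m_lep**2 / q2 * Hs**2))

q2_max = (m_B - m_Dst)**2
num, _ = quad(dGamma_dq2, m_tau**2, q2_max, args=(m_tau,))
den, _ = quad(dGamma_dq2, 1e-4, q2_max, args=(0.0,))
print(f"R(D*) ~ {num / den:.3f}")   # should come out close to the value in Eq. 18
```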
#### II.1.2 Form factor parameterization of $\kern
1.79993pt\overline{\kern-1.79993ptB}{}\rightarrow
D\tau^{-}\overline{\nu}_{\tau}$ decays
The non-perturbative QCD interactions in $\kern
1.79993pt\overline{\kern-1.79993ptB}{}\rightarrow
D\tau^{-}\overline{\nu}_{\tau}$ decays are described by two independent FFs,
referred to as $V_{1}$ and $S_{1}$ Tanaka and Watanabe (2010). The helicity
amplitudes take the form:
$\displaystyle H_{0}(w)=$
$\displaystyle\sqrt{m_{B}m_{D}}\,\frac{m_{B}+m_{D}}{\sqrt{q^{2}(w)}}\sqrt{w^{2}-1}\,V_{1}(w),$
(12) $\displaystyle H_{s}(w)=$
$\displaystyle\sqrt{m_{B}m_{D}}\,\frac{m_{B}-m_{D}}{\sqrt{q^{2}(w)}}(w+1)\,S_{1}(w).$
(13)
The amplitudes corresponding to the helicities $\lambda_{W}=\pm$ vanish
because the $D$ meson has spin 0. For this decay mode, the variable $w$ is
defined as in Eq. 11, except that the $D^{*}$ meson mass is replaced by the
$D$ meson mass $m_{D}$.
Taking into account dispersion relations Caprini _et al._ (1998), $V_{1}$ can
be expressed as
$\displaystyle V_{1}(w)=V_{1}(1)\times[$ $\displaystyle
1-8\rho_{D}^{2}z(w)+(51\rho_{D}^{2}-10)z(w)^{2}$
$\displaystyle-(252\rho_{D}^{2}-84)z(w)^{3}],$ (14)
where $V_{1}(1)$ and $\rho_{D}^{2}$ are FF parameters. The normalization
$V_{1}(1)$ cancels in the ratio ${\cal R}(D)$. Based on $\kern
1.79993pt\overline{\kern-1.79993ptB}{}\rightarrow
D\ell^{-}\overline{\nu}_{\ell}$ decays, the average value of the shape
parameter is $\rho_{D}^{2}=1.186\pm 0.055$ Amhis _et al._ (2012). As for
$\kern 1.79993pt\overline{\kern-1.79993ptB}{}\rightarrow
D^{*}\tau^{-}\overline{\nu}_{\tau}$ decays, the scalar hadronic amplitude is
helicity suppressed and as a result, $S_{1}(w)$ cannot be measured with $\kern
1.79993pt\overline{\kern-1.79993ptB}{}\rightarrow
D\ell^{-}\overline{\nu}_{\ell}$ decays. We use instead the following estimate
based on HQET Tanaka and Watanabe (2010):
$S_{1}(w)=V_{1}(w)\left\{1+\Delta\left[-0.019+0.041(w-1)-0.015(w-1)^{2}\right]\right\},$ (15)
with $\Delta=1\pm 1$.
We have employed this FF parameterization to generate $\kern
1.79993pt\overline{\kern-1.79993ptB}{}\rightarrow
D\tau^{-}\overline{\nu}_{\tau}$ and $\kern
1.79993pt\overline{\kern-1.79993ptB}{}\rightarrow
D\ell^{-}\overline{\nu}_{\ell}$ decays, as described in Sec. III.3.2. Though
we used the same FF definitions and parameters, we found a difference of 1%
between the value of ${\cal R}(D)$ that we obtained by integrating Eq. 9 and
the value quoted in Ref. Tanaka and Watanabe (2010).
On the other hand, if we adopt the FF parameters of Ref. Kamenik and Mescia
(2008), we perfectly reproduce the ${\cal R}(D)$ predictions presented there.
The translation of the FF parameterization of Ref. Kamenik and Mescia (2008)
into standard hadronic amplitudes is not straightforward, so we do not use
these FFs in the Monte Carlo simulation. Since both parameterizations yield
essentially identical $q^{2}$ spectra, they are equivalent with respect to
Monte Carlo generation, which is not sensitive to differences in
normalization.
#### II.1.3 SM calculation of ${\cal R}(D^{(*)})$ and $q^{2}$ spectrum
We determine the SM predictions for the ratios ${\cal R}(D^{(*)})$ by integrating
the expression for the differential decay rate (Eq. 9) as follows:
${\cal R}(D^{(*)})\equiv\frac{{\cal B}(B\rightarrow D^{(*)}\tau\nu)}{{\cal
B}(B\rightarrow D^{(*)}\ell\nu)}=\frac{\int_{m^{2}_{\tau}}^{q^{2}_{\rm
max}}\frac{{\rm d}\Gamma_{\tau}}{{\rm d}q^{2}}\;{\rm
d}q^{2}}{\int_{m^{2}_{\ell}}^{q^{2}_{\rm max}}\frac{{\rm d}\Gamma_{\ell}}{{\rm
d}q^{2}}\;{\rm d}q^{2}},$ (16)
with $q^{2}_{\rm max}=(m_{B}-m_{D^{(*)}})^{2}$.
The uncertainty of this calculation is determined by generating one million
random sets of values for all the FF parameters assuming Gaussian
distributions for the uncertainties and including their correlations. We
calculate ${\cal R}(D^{(*)})$ with each set of values, and assign the root
mean square (RMS) of its distribution as the uncertainty. We apply this
procedure for $B^{0}$ and $B^{-}$ decays, and for $\ell=e$ and $\mu$, and
average the four results to arrive at the following predictions,
$\displaystyle{\cal R}(D)_{\rm SM}$ $\displaystyle=0.297\pm 0.017,$ (17)
$\displaystyle{\cal R}(D^{*})_{\rm SM}$ $\displaystyle=0.252\pm 0.003.$ (18)
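A minimal numerical sketch of this calculation for ${\cal R}(D)$, using the form factors of Sec. II.1.2 (Eqs. 12–15 with $\rho_{D}^{2}=1.186$ and $\Delta=1$) and only a toy version of the parameter-variation procedure, is given below; the masses are nominal values and the full correlation treatment is omitted.

```python
import numpy as np
from scipy.integrate import quad

m_B, m_D, m_tau = 5.279, 1.870, 1.777   # GeV, nominal masses (illustrative)

def z(w):
    return (np.sqrt(w + 1) - np.sqrt(2)) / (np.sqrt(w + 1) + np.sqrt(2))

def V1(w, rho2_D):
    # Eq. 14; the normalization V1(1) cancels in R(D) and is set to 1.
    return 1 - 8 * rho2_D * z(w) + (51 * rho2_D - 10) * z(w)**2 - (252 * rho2_D - 84) * z(w)**3

def S1(w, rho2_D, Delta):
    # Eq. 15 with Delta = 1 +- 1.
    return V1(w, rho2_D) * (1 + Delta * (-0.019 + 0.041 * (w - 1) - 0.015 * (w - 1)**2))

def dGamma_dq2(q2, m_lep, rho2_D, Delta):
    """Eq. 9 for B -> D decays (H+- = 0), dropping constants that cancel in the ratio."""
    w = (m_B**2 + m_D**2 - q2) / (2 * m_B * m_D)
    E_D = (m_B**2 + m_D**2 - q2) / (2 * m_B)
    p = np.sqrt(max(E_D**2 - m_D**2, 0.0))   # |p*_D| in the B rest frame
    H0 = np.sqrt(m_B * m_D) * (m_B + m_D) / np.sqrt(q2) * np.sqrt(max(w**2 - 1, 0.0)) * V1(w, rho2_D)
    Hs = np.sqrt(m_B * m_D) * (m_B - m_D) / np.sqrt(q2) * (w + 1) * S1(w, rho2_D, Delta)
    return (p * q2 * (1 - m_lep**2 / q2)**2
            * (H0**2 * (1 + m_lep**2 / (2 * q2)) + 1.5 * m_lep**2 / q2 * Hs**2))

def RD(rho2_D, Delta):
    q2_max = (m_B - m_D)**2
    num, _ = quad(dGamma_dq2, m_tau**2, q2_max, args=(m_tau, rho2_D, Delta))
    den, _ = quad(dGamma_dq2, 1e-4, q2_max, args=(0.0, rho2_D, Delta))
    return num / den

print(f"central R(D) ~ {RD(1.186, 1.0):.3f}")   # close to the SM value in Eq. 17

# Toy version of the error propagation: vary a subset of the inputs within their
# Gaussian uncertainties (here rho_D^2 and Delta only) and take the RMS of R(D).
rng = np.random.default_rng(1)
toys = [RD(rng.normal(1.186, 0.055), rng.normal(1.0, 1.0)) for _ in range(200)]
print(f"toy RMS ~ {np.std(toys):.3f}")
```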
Additional uncertainties that have not been taken into account could
contribute at the percent level. For instance, some electromagnetic
corrections could affect $\kern
1.79993pt\overline{\kern-1.79993ptB}{}\rightarrow
D^{(*)}\ell^{-}\overline{\nu}_{\ell}$ and $\kern
1.79993pt\overline{\kern-1.79993ptB}{}\rightarrow
D^{(*)}\tau^{-}\overline{\nu}_{\tau}$ decays differently Fajfer _et al._
(2012a). The experimental uncertainty on ${\cal R}(D^{(*)})$ is expected to be
considerably larger.
The $q^{2}$ spectra for $\kern
1.79993pt\overline{\kern-1.79993ptB}{}\rightarrow
D^{(*)}\tau^{-}\overline{\nu}_{\tau}$ decays in Fig. 2 clearly show the
threshold at $q^{2}_{\rm min}=m^{2}_{\tau}$, while for $\kern
1.79993pt\overline{\kern-1.79993ptB}{}\rightarrow
D^{(*)}\ell^{-}\overline{\nu}_{\ell}$ decays $q^{2}_{\rm min}\sim 0$. We take
advantage of this difference in the signal selection by imposing
$q^{2}>4\mathrm{\,Ge\kern-1.00006ptV}^{2}$. The spectra for $\ell=e$ and $\mu$
are almost identical, except for
$q^{2}<m^{2}_{\mu}=0.011\mathrm{\,Ge\kern-1.00006ptV}^{2}$.
Figure 2: (Color online). Predicted $q^{2}$ spectra for (a) $\kern
1.43994pt\overline{\kern-1.43994ptB}{}\rightarrow
D\tau^{-}\overline{\nu}_{\tau}$ and $\kern
1.43994pt\overline{\kern-1.43994ptB}{}\rightarrow
D\ell^{-}\overline{\nu}_{\ell}$ decays for $V_{1}(1)V_{cb}=0.0427$ and (b)
$\kern 1.43994pt\overline{\kern-1.43994ptB}{}\rightarrow
D^{*}\tau^{-}\overline{\nu}_{\tau}$ and $\kern
1.43994pt\overline{\kern-1.43994ptB}{}\rightarrow
D^{*}\ell^{-}\overline{\nu}_{\ell}$ decays for $h_{A_{1}}(1)V_{cb}=0.0359$
Amhis _et al._ (2012).
### II.2 Two-Higgs-Doublet Model Type II
As we noted in the introduction, $\kern
1.79993pt\overline{\kern-1.79993ptB}{}\rightarrow
D^{(*)}\tau^{-}\overline{\nu}_{\tau}$ decays are potentially sensitive to new
physics (NP) processes. Of particular interest is the two-Higgs-doublet model
(2HDM) of type II, which describes the Higgs sector of the Minimal
Supersymmetric model at tree level. In this model, one of the two Higgs
doublets couples to up-type quarks, while the other doublet couples to down-
type quarks and leptons.
The contributions of the charged Higgs to $\kern
1.79993pt\overline{\kern-1.79993ptB}{}\rightarrow
D^{(*)}\tau^{-}\overline{\nu}_{\tau}$ decays can be encapsulated in the scalar
helicity amplitude in the following way Tanaka (1995); Kamenik and Mescia
(2008):
$H^{\rm 2HDM}_{s}\approx H^{\rm SM}_{s}\times\left(1-\frac{{\rm
tan}^{2}\beta}{m_{H^{\pm}}^{2}}\frac{q^{2}}{1\mp m_{c}/m_{b}}\right).$ (19)
Here, ${\rm tan}\beta$ is the ratio of the vacuum expectation values of the
two Higgs doublets, $m_{H^{\pm}}$ is the mass of the charged Higgs, and
$m_{c}/m_{b}=0.215\pm 0.027$ Xing _et al._ (2008) is the ratio of the $c$\-
and $b$-quark masses at a common mass scale. The negative sign in Eq. 19
applies to $\kern 1.79993pt\overline{\kern-1.79993ptB}{}\rightarrow
D\tau^{-}\overline{\nu}_{\tau}$ decays and the positive sign applies to $\kern
1.79993pt\overline{\kern-1.79993ptB}{}\rightarrow
D^{*}\tau^{-}\overline{\nu}_{\tau}$ decays. This expression is accurate to 1%
for $m_{H^{+}}$ larger than $15\mathrm{\,Ge\kern-1.00006ptV}$. The region for
$m_{H^{+}}\leq 15\mathrm{\,Ge\kern-1.00006ptV}$ has already been excluded by
$B\rightarrow X_{s}\gamma$ measurements Misiak _et al._ (2007).
The ${\rm tan}\beta/m_{H^{+}}$ dependence of the ratios ${\cal R}(D^{(*)})$ in
the type II 2HDM can be studied by substituting $H^{\rm 2HDM}_{s}$ for $H^{\rm
SM}_{s}$ in Eq. 9. Given that charged Higgs bosons are not expected to
contribute significantly to $\kern
1.79993pt\overline{\kern-1.79993ptB}{}\rightarrow
D^{(*)}\ell^{-}\overline{\nu}_{\ell}$ decays, ${\cal R}(D^{(*)})_{\rm 2HDM}$
can be described by a parabola in the variable ${\rm
tan}^{2}\beta/m_{H^{+}}^{2}$,
${\cal R}(D^{(*)})_{\rm 2HDM}={\cal R}(D^{(*)})_{\rm SM}+A_{D^{(*)}}\frac{{\rm
tan}^{2}\beta}{m^{2}_{H^{+}}}+B_{D^{(*)}}\frac{{\rm
tan}^{4}\beta}{m^{4}_{H^{+}}}.$ (20)
Table 1 lists the values of $A_{D^{(*)}}$ and $B_{D^{(*)}}$, which are
determined by averaging over $B^{0}$ and $B^{-}$ decays. The uncertainty
estimation includes the uncertainties on the mass ratio $m_{c}/m_{b}$ and the
FF parameters, as well as their correlations.
Table 1: Dependence of ${\cal R}(D^{(*)})$ on ${\rm tan}\beta/m_{H^{+}}$ in the 2HDM according to Eq. 20 for $\kern 1.79993pt\overline{\kern-1.79993ptB}{}\rightarrow D\tau^{-}\overline{\nu}_{\tau}$ and $\kern 1.79993pt\overline{\kern-1.79993ptB}{}\rightarrow D^{*}\tau^{-}\overline{\nu}_{\tau}$ decays: the values of ${\cal R}(D^{(*)})_{\rm SM}$, the parameters $A_{D^{(*)}}$ and $B_{D^{(*)}}$ with their uncertainties, and their correlations $C$.

 | $\kern 1.79993pt\overline{\kern-1.79993ptB}{}\rightarrow D\tau^{-}\overline{\nu}_{\tau}$ | $\kern 1.79993pt\overline{\kern-1.79993ptB}{}\rightarrow D^{*}\tau^{-}\overline{\nu}_{\tau}$
---|---|---
${\cal R}(D^{(*)})_{\rm SM}$ | $0.297\pm 0.017$ | $0.252\pm 0.003$
$A_{D^{(*)}}$ ($\mathrm{GeV}^{2}$) | $-3.25\pm 0.32$ | $-0.230\pm 0.029$
$B_{D^{(*)}}$ ($\mathrm{GeV}^{4}$) | $16.9\pm 2.0$ | $0.643\pm 0.085$
$C({\cal R}(D^{(*)})_{\rm SM},A_{D^{(*)}})$ | $-0.928$ | $-0.946$
$C({\cal R}(D^{(*)})_{\rm SM},B_{D^{(*)}})$ | $0.789$ | $0.904$
$C(A_{D^{(*)}},B_{D^{(*)}})$ | $-0.957$ | $-0.985$
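Evaluating Eq. 20 with the central values of Table 1 is straightforward; the short sketch below ignores the uncertainties and correlations and simply scans a few values of ${\rm tan}\beta/m_{H^{+}}$.

```python
# Eq. 20 with the central values from Table 1 (uncertainties and correlations omitted).
coeff = {
    "D":     {"R_SM": 0.297, "A": -3.25,  "B": 16.9},    # A in GeV^2, B in GeV^4
    "Dstar": {"R_SM": 0.252, "A": -0.230, "B": 0.643},
}

def R_2HDM(mode, tanb_over_mH):
    """R(D(*)) in the type II 2HDM as a function of tan(beta)/m_H+ in GeV^-1."""
    c = coeff[mode]
    x = tanb_over_mH**2
    return c["R_SM"] + c["A"] * x + c["B"] * x**2

for r in (0.0, 0.2, 0.4, 0.6):   # tan(beta)/m_H+ in GeV^-1
    print(f"tanb/mH+ = {r:.1f}: R(D) = {R_2HDM('D', r):.3f}, "
          f"R(D*) = {R_2HDM('Dstar', r):.3f}")
```

The scan reproduces the behavior described in the text: both ratios first dip below their SM values and then rise rapidly once the Higgs amplitude dominates.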
Due to the destructive interference between the SM and 2HDM amplitudes in Eq.
19, charged Higgs contributions depress the ratios ${\cal R}(D^{(*)})$ for low
values of ${\rm tan}\beta/m_{H^{+}}$. For larger values of ${\rm
tan}\beta/m_{H^{+}}$, the Higgs contributions dominate and ${\cal R}(D)$ and
${\cal R}(D^{*})$ increase rapidly. As the coefficients of Table 1 show, the
2HDM impact is expected to be larger for ${\cal R}(D)$ than for ${\cal
R}(D^{*})$. This is because charged Higgs contributions only affect the scalar
amplitude $H_{s}^{\rm 2HDM}$, but $\kern
1.79993pt\overline{\kern-1.79993ptB}{}\rightarrow
D^{*}\tau^{-}\overline{\nu}_{\tau}$ decays also receive contributions from
$H_{\pm}$, diluting the effect on the total rate.
Figure 3 shows the impact of the 2HDM on the $q^{2}$ spectrum. Given that the
$B$ and $D$ mesons have spin $J=0$, the SM decays $B\rightarrow
DW^{*}\rightarrow D\tau\nu$ proceed via $P$-wave for $J_{W^{*}}=1$, and via
$S$-wave for $J_{W^{*}}=0$. For the $P$-wave decay, which accounts for about
96% of the total amplitude, the decay rate receives an additional factor
$|\boldsymbol{p}^{*}_{D}|^{2}$, which suppresses the $q^{2}$ spectrum at high
values. Since charged Higgs bosons have $J_{H}=0$, their contributions proceed
via $S$-wave, and, thus, have a larger average $q^{2}$ than the SM
contributions. As a result, for low values of ${\rm tan}\beta/m_{H^{+}}$ where
the negative interference depresses $H^{\rm 2HDM}_{s}$, the $q^{2}$ spectrum
shifts to lower values. For large values of ${\rm tan}\beta/m_{H^{+}}$, the
Higgs contributions dominate the decay rate and the average $q^{2}$
significantly exceeds that of the SM.
Figure 3: (Color online). Predicted $q^{2}$ distributions for (a) $\kern
1.43994pt\overline{\kern-1.43994ptB}{}\rightarrow
D\tau^{-}\overline{\nu}_{\tau}$ and (b) $\kern
1.43994pt\overline{\kern-1.43994ptB}{}\rightarrow
D^{*}\tau^{-}\overline{\nu}_{\tau}$ decays for different values of ${\rm
tan}\beta/m_{H^{+}}$. All curves are normalized to unit area.
The situation is different for $\kern
1.79993pt\overline{\kern-1.79993ptB}{}\rightarrow
D^{*}\tau^{-}\overline{\nu}_{\tau}$ decays because the $D^{*}$ meson has spin
$J_{D^{*}}=1$. The SM decays can proceed via $S$, $P$, or $D$-waves, while the
decay via an intermediate Higgs boson must proceed via $P$-wave, suppressing
the rate at high $q^{2}$.
When searching for charged Higgs contributions, it is important to account for
the changes in the $q^{2}$ spectrum. This distribution has a significant
impact on the analysis due to the close relation between $q^{2}$ and $m_{\rm
miss}^{2}$, one of the fit variables.
Charged Higgs contributions also affect the $|\boldsymbol{p}^{*}_{\ell}|$
distribution. Given the spin 0 of the Higgs boson and the positive helicity
(right-handedness) of the anti-neutrino, the decays
$H^{-}\rightarrow\tau^{-}\overline{\nu}_{\tau}$ always produce $\tau^{-}$
leptons with positive helicities ($\lambda_{\tau}=+$). As a result, the
fraction of right-handed $\tau^{-}$ leptons produced in $\kern
1.79993pt\overline{\kern-1.79993ptB}{}\rightarrow
D^{(*)}\tau^{-}\overline{\nu}_{\tau}$ decays changes from 30% in the SM, to
close to 100% when the 2HDM contributions dominate.
The lepton spectrum of polarized
$\tau^{\pm}\rightarrow\ell^{\pm}\nu_{\ell}\nu_{\tau}$ decays is well known
Tsai (1971). For $\tau^{-}$ leptons with $\lambda_{\tau^{-}}=-$, the
$\ell^{-}$ is emitted preferentially in the $\tau^{-}$ direction, while the
opposite is true for positive helicities. In the $B$ rest frame, leptons of a
certain momentum in the $\tau^{-}$ rest frame have larger momentum if they are
emitted in the direction of the $\tau^{-}$ momentum than in the opposite
direction. As a result, the $|\boldsymbol{p}^{*}_{\ell}|$ spectrum for SM
decays is harder than for Higgs dominated decays. For low values of ${\rm
tan}\beta/m_{H^{+}}$ for which the destructive interference depresses the
$\kern 1.79993pt\overline{\kern-1.79993ptB}{}\rightarrow
D^{(*)}\tau^{-}\overline{\nu}_{\tau}$ rate, the proportion of left-handed
$\tau^{-}$ leptons increases, and therefore, the $|\boldsymbol{p}^{*}_{\ell}|$
spectrum is harder than in the SM.
## III Data Sample, Detector and Simulation
### III.1 Data Sample
This analysis is based on the full data sample recorded with the BABAR
detector Aubert _et al._ (2002) at the PEP-II energy-asymmetric $e^{+}e^{-}$
storage rings Seeman (2008). It operated at a center-of-mass (c.m.) energy of
10.58 $\mathrm{\,Ge\kern-1.00006ptV}$, equal to the mass of the $\Upsilon{(4S)}$
resonance. This resonance decays almost exclusively to
$B\kern 1.79993pt\overline{\kern-1.79993ptB}{}$ pairs. The collected data
sample of 471 million $\Upsilon{(4S)}\rightarrow B\kern
1.79993pt\overline{\kern-1.79993ptB}{}$ events (on-peak data) corresponds to
an integrated luminosity of 426 $\mathrm{fb}^{-1}$ Lees _et al._ (2013). To
study continuum background, an additional sample of $40\mbox{\,fb}^{-1}$ (off-
peak data) was recorded approximately 40 $\mathrm{\,Me\kern-1.00006ptV}$ below
the $\Upsilon{(4S)}$ resonance, i.e., below the threshold for
$B\kern 1.79993pt\overline{\kern-1.79993ptB}{}$ production.
### III.2 The BABAR Detector and Single Particle Reconstruction
The BABAR detector and event reconstruction have been described in detail
elsewhere Aubert _et al._ (2002). The momentum and angles of charged
particles were measured in a tracking system consisting of a 5-layer, double-
sided silicon-strip detector (SVT) and a 40-layer, small-cell drift chamber
(DCH) filled with a helium-isobutane gas mixture. Charged particles of
different masses were distinguished by their ionization energy loss in the
tracking devices and by a ring-imaging Cerenkov detector (DIRC). A finely
segmented CsI(Tl) calorimeter (EMC) measured the energy and position of
electromagnetic showers generated by electrons and photons. The EMC was
surrounded by a superconducting solenoid providing a 1.5-T magnetic field and
by a segmented flux return with a hexagonal barrel section and two endcaps.
The steel of the flux return was instrumented (IFR) with resistive plate
chambers and limited streamer tubes to detect particles penetrating the magnet
coil and steel.
Within the polar angle acceptance of the SVT and DCH
($0.4<\theta_{\rm lab}<2.6$), the efficiency for the reconstruction of
charged particles exceeds 99% for momenta above 1
$\mathrm{\,Ge\kern-1.00006ptV}$. For low momentum pions, especially from
$D^{*+}\rightarrow D^{0}\pi^{+}$ decays, the efficiency drops to about 90% at
0.4 $\mathrm{\,Ge\kern-1.00006ptV}$ and to 50% at 0.1
$\mathrm{\,Ge\kern-1.00006ptV}$.
The electron and muon identification efficiencies and the probabilities to
misidentify a pion, a kaon, or a proton as an electron or a muon are measured
as a function of the laboratory momentum and angles using high-purity data
samples.
Electrons are separated from charged hadrons primarily on the basis of the
ratio of the energy deposited in the EMC to the track momentum. A special
algorithm has been developed to identify photons from bremsstrahlung in the
inner detector, and to correct the electron momentum for the energy loss.
Within the polar angle acceptance, the average electron efficiency for
laboratory momenta above 0.5 $\mathrm{\,Ge\kern-1.00006ptV}$ is 97%, largely
independent of momentum. The average pion misidentification rate is less than
0.5%.
Muon identification relies on a new multivariate algorithm that significantly
increases the reconstruction efficiency at low muon momenta,
$|\boldsymbol{p}_{\mu}|<1\mathrm{\,Ge\kern-1.00006ptV}$. This algorithm
combines information on the measured DCH track, the track segments in the IFR,
and the energy deposited in the EMC. The average muon efficiency is close to
90% independent of momentum, except in the forward endcap, where it decreases
for laboratory momenta below 1 $\mathrm{\,Ge\kern-1.00006ptV}$. The average
pion misidentification rate is about 2% above 1.2
$\mathrm{\,Ge\kern-1.00006ptV}$, rising at lower momenta and reaching a
maximum of 9% at 0.8 $\mathrm{\,Ge\kern-1.00006ptV}$.
By choosing a fairly loose selection of charged leptons and taking advantage
of improved PID algorithms, we increased the lepton efficiencies by 6% for
electrons and 50% for muons compared to the previous BABAR analysis Aubert
_et al._ (2008a).
Charged kaons are identified up to 4 $\mathrm{\,Ge\kern-1.00006ptV}$ on the
basis of information from the DIRC, SVT, and DCH. The efficiency exceeds 80%
over most of the momentum range and varies with polar angle. The probability
that a pion is misidentified as a kaon is close to 2%, varying by about 1% as
a function of momentum and polar angle.
The decays $K^{0}_{\scriptscriptstyle S}\rightarrow\pi^{+}\pi^{-}$ are
reconstructed as pairs of tracks of opposite charge originating from a
displaced vertex. The invariant mass of the pair $m_{\pi\pi}$ is required to
be in the range $m_{\pi\pi}\in[0.491,0.506]\mathrm{\,Ge\kern-1.00006ptV}$. No
attempt is made to identify interactions of $K^{0}_{\scriptscriptstyle L}$ in
the EMC or IFR.
To remove beam-generated background in the EMC and electronic noise, photon
candidates are required to have a minimum energy of 30
$\mathrm{\,Me\kern-1.00006ptV}$ and a shower shape that is consistent with
that of an electromagnetic shower. Neutral pions are reconstructed from pairs
of photon candidates with an invariant mass in the range
$m_{\gamma\gamma}\in[120,150]\mathrm{\,Me\kern-1.00006ptV}$.
### III.3 Monte Carlo Simulation
#### III.3.1 Simulated Samples
This analysis relies on Monte Carlo (MC) techniques to simulate the production
and decay of continuum and $B\kern 1.79993pt\overline{\kern-1.79993ptB}{}$
events. The simulation is based on the EvtGen generator Lange (2001). The
$q\overline{q}$ fragmentation is performed by Jetset Sjostrand (1994), and the
detector response by Geant4 Agostinelli _et al._ (2003). Radiative effects
such as bremsstrahlung in the detector material and initial-state and final-
state radiation Barberio and Was (1994) are included.
We derive predictions for the distributions and efficiencies of the signal and
backgrounds from the simulation. The size of the simulated sample of generic
$B\kern 1.79993pt\overline{\kern-1.79993ptB}{}$ events exceeds that of the
$B\kern 1.79993pt\overline{\kern-1.79993ptB}{}$ data sample by about a factor
of ten, while the sample for $q\overline{q}$ events corresponds to twice the
size of the off-peak data sample. We assume that the $\Upsilon{(4S)}$
resonance decays exclusively to $B\kern
1.79993pt\overline{\kern-1.79993ptB}{}$ pairs and use recent measurements of
branching fractions Nakamura _et al._ (2010) for all produced particles. The
impact of their uncertainties on the final results is assessed as a systematic
uncertainty.
Information extracted from studies of selected data control samples is used to
improve the accuracy of the simulation. Specifically, we reweight simulated
events to account for small differences observed in comparisons of data and
simulation (Sec. V).
#### III.3.2 Implementation of the Form Factor Parameterizations
For reasons of simplicity, the simulation of $\kern
1.79993pt\overline{\kern-1.79993ptB}{}\rightarrow
D\ell^{-}\overline{\nu}_{\ell}$ and $\kern
1.79993pt\overline{\kern-1.79993ptB}{}\rightarrow
D^{(*)}\tau^{-}\overline{\nu}_{\tau}$ decays is based on the ISGW2 model Scora
and Isgur (1995), and $\kern 1.79993pt\overline{\kern-1.79993ptB}{}\rightarrow
D^{*}\ell^{-}\overline{\nu}_{\ell}$ decays are generated using an HQET-based
parameterization Isgur and Wise (1990). A change to a different FF
parameterization is implemented by reweighting the generated events with the
weights
$w_{\rm HQET}(q^{2},\theta_{i})=\left(\frac{{\cal M}(q^{2},\theta_{i})_{\rm
HQET}}{{\cal M}(q^{2},\theta_{i})_{\rm MC}}\right)^{2}\times\frac{{\cal
B}_{\rm MC}}{{\cal B}_{\rm HQET}}.$ (21)
Here, ${\cal M}(q^{2},\theta_{i})_{\rm HQET}$ refers to the matrix element for the
FF parameterizations described in Secs. II.1.1 and II.1.2, and ${\cal
M}(q^{2},\theta_{i})_{\rm MC}$ is the matrix element employed in the MC
generation. The matrix element of decays involving the scalar $D$ meson
depends on one angular variable, the lepton helicity angle $\theta_{\ell}$,
with ${\ell=e,\mu,\tau}$. In addition to $\theta_{\ell}$, the matrix element
of decays involving the vector meson $D^{*}$ is sensitive to two additional
angular variables describing the $D^{*}$ decay. The ratio of the branching
fractions ${\cal B}_{\rm MC}/{\cal B}_{\rm HQET}$ ensures that the sum of all
weights equals the number of generated events.
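Schematically, the reweighting of Eq. 21 amounts to a per-event matrix-element ratio followed by a global rescaling of the weights; the sketch below uses hypothetical stand-in functions for the two squared matrix elements, so only the bookkeeping is meaningful.

```python
import numpy as np

# Hypothetical stand-ins for |M|^2 evaluated on the generated kinematics; in the
# real analysis these come from the HQET-based parameterization (Secs. II.1.1-II.1.2)
# and from the generator model used in the MC production.
def M2_target(q2, cos_theta):        # placeholder "HQET" model
    return (1.0 + 0.10 * q2) * (1.0 + cos_theta**2)

def M2_generator(q2, cos_theta):     # placeholder generator (MC) model
    return (1.0 + 0.05 * q2) * (1.0 + 0.8 * cos_theta**2)

rng = np.random.default_rng(0)
q2  = rng.uniform(3.2, 11.6, size=10_000)    # generated q^2 values (GeV^2)
cth = rng.uniform(-1.0, 1.0, size=10_000)    # generated helicity angle

# Eq. 21: per-event weight is the matrix-element ratio; the branching-fraction
# ratio B_MC/B_HQET is equivalent to rescaling the weights so that their sum
# equals the number of generated events.
w = M2_target(q2, cth) / M2_generator(q2, cth)
w *= len(w) / w.sum()

print(f"sum of weights = {w.sum():.1f} for {len(w)} events")
```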
In the SM, this reweighting results in a small shift of the $q^{2}$
distribution to higher values, while the changes in the helicity angle
$\theta_{\tau}$ and the $\tau$ polarization are negligible. Therefore, the
distributions of the secondary charged lepton are not affected.
In the presence of a charged Higgs boson, however, the $\tau$ polarization can
change substantially, affecting the momentum of the secondary lepton $\ell$
originating from the $\tau\rightarrow\ell\nu_{\ell}\nu_{\tau}$ decays. We
account for the potential presence of a charged Higgs of 2HDM type II by
reweighting the simulation with the following weights,
$\displaystyle w_{\rm 2HDM}(q^{2},\theta_{i},|\boldsymbol{p}^{*}_{\ell}|)=$
$\displaystyle\left(\frac{{\cal M}(q^{2},\theta_{i})_{\rm 2HDM}}{{\cal
M}(q^{2},\theta_{i})_{\rm MC}}\right)^{2}\times$
$\displaystyle\frac{\Gamma(|\boldsymbol{p}^{*}_{\ell}|)_{\rm
2HDM}}{\Gamma(|\boldsymbol{p}^{*}_{\ell}|)_{\rm MC}}\times\frac{{\cal B}_{\rm
MC}}{{\cal B}_{\rm 2HDM}},$ (22)
where $\theta_{i}$ refers again to the angular variables. The second factor
represents the ratio of the $|\boldsymbol{p}^{*}_{\ell}|$ distributions
$\Gamma(|\boldsymbol{p}^{*}_{\ell}|)$ in the 2HDM parameterization and in the
MC simulation. This factorization is necessary because in the MC generation
the polarization is handled in a probabilistic manner, so it cannot be
corrected on an event-per-event basis. It is only applicable if
$|\boldsymbol{p}^{*}_{\ell}|$ is uncorrelated with $q^{2}$ and the angular
variables, which is largely the case. In some regions of phase space, the 2HDM
weights have a much larger dispersion than the weights applied in the SM
reweighting, leading to larger statistical uncertainties for the simulation of
the Higgs boson contributions.
#### III.3.3 Simulation of $\kern
1.79993pt\overline{\kern-1.79993ptB}{}\rightarrow
D^{**}(\tau^{-}/\ell^{-})\overline{\nu}$ decays
By $D^{**}$ we refer to excited charm resonances heavier than the $D^{*}$
meson. We include in the simulation the $\kern
1.79993pt\overline{\kern-1.79993ptB}{}\rightarrow
D^{**}\tau^{-}\overline{\nu}_{\tau}$ and $\kern
1.79993pt\overline{\kern-1.79993ptB}{}\rightarrow
D^{**}\ell^{-}\overline{\nu}_{\ell}$ decays that involve the four $D^{**}$
states with $L=1$ that have been measured Amhis _et al._ (2012). This
simulation takes into account their helicities Leibovich _et al._ (1998) and
the following decay modes: $D^{*}_{0},D^{*}_{2}\rightarrow D\pi$ and
$D^{\prime}_{1},D_{1},D^{*}_{2}\rightarrow D^{*}\pi$. Three-body decays
$D^{**}\rightarrow D^{(*)}\pi\pi$ are not included in the nominal fit for lack
of reliable measurements.
To estimate the rate of $B\rightarrow D^{**}\tau\nu_{\tau}$ decays, we rely on
ratios of the available phase space $\Phi$,
${\cal R}(D^{**})\equiv\frac{{\cal B}(\kern
1.79993pt\overline{\kern-1.79993ptB}{}\rightarrow
D^{**}\tau^{-}\overline{\nu}_{\tau})}{{\cal B}(\kern
1.79993pt\overline{\kern-1.79993ptB}{}\rightarrow
D^{**}\ell^{-}\overline{\nu}_{\ell})}\approx\frac{\Phi(\kern
1.79993pt\overline{\kern-1.79993ptB}{}\rightarrow
D^{**}\tau^{-}\overline{\nu}_{\tau})}{\Phi(\kern
1.79993pt\overline{\kern-1.79993ptB}{}\rightarrow
D^{**}\ell^{-}\overline{\nu}_{\ell})}.$ (23)
The value of this ratio depends on the mass of the $D^{**}$ state involved in
the $\kern 1.79993pt\overline{\kern-1.79993ptB}{}\rightarrow
D^{**}(\tau^{-}/\ell^{-})\overline{\nu}$ decay. We use the largest of the four
possible choices, ${\cal R}(D^{**})=0.18$.
Possible contributions from non-resonant $\kern
1.79993pt\overline{\kern-1.79993ptB}{}\rightarrow
D^{(*)}\pi(\pi)\ell^{-}\overline{\nu}_{\ell}$ decays and semileptonic decays
involving higher-mass excited charm mesons are not included in the nominal
fit, and will be treated as a systematic uncertainty.
## IV Event selection
The event selection proceeds in two steps. First, we select $B\kern
1.79993pt\overline{\kern-1.79993ptB}{}$ events in which one of the $B$ mesons,
the $B_{\rm tag}$, is fully reconstructed in a hadronic decay, while the other
$B$ meson decays semileptonically. To increase the event selection efficiency
compared to earlier analyses, we have added more decay chains to the $B_{\rm
tag}$ selection and have chosen a looser charged lepton selection. This leads
to significantly higher backgrounds, primarily combinatorial background from
$B\kern 1.79993pt\overline{\kern-1.79993ptB}{}$ and continuum events, and
charge-crossfeed events. Charge-crossfeed events are $\kern
1.79993pt\overline{\kern-1.79993ptB}{}\rightarrow
D^{(*)}(\tau^{-}/\ell^{-})\overline{\nu}$ decays in which the charge of the
reconstructed $B_{\rm tag}$ and $D^{(*)}$ mesons are wrong, primarily because
of an incorrectly assigned low-momentum $\pi^{\pm}$.
Semileptonic decays to higher mass charm mesons have a signature similar to
that of signal events and their composition is not well measured. This
background is fitted in selected control samples that are enriched with these
decays.
As the second step in the event selection, we introduce kinematic criteria
that increase the fraction of selected signal events with respect to
normalization and background decays. We also apply a multivariate algorithm to
further improve the signal-to-background ratio.
### IV.1 Selection of Events with a ${\boldsymbol{B}_{\text{tag}}}$ and a
Semileptonic $\boldsymbol{B}$ Decay
$\Upsilon(4S)\rightarrow B\kern
1.79993pt\overline{\kern-1.79993ptB}{}$ events are tagged by the hadronic
decay of one of the $B$ mesons. We use a semi-exclusive algorithm which
includes additional $B_{\rm tag}$ decay chains and enhances the efficiency by
a factor of 2 compared to the earlier version employed by BABAR Aubert _et
al._ (2008a). We look for decays of the type $B_{\rm tag}\rightarrow
SX^{\pm}$, where $S$ refers to a seed meson and $X^{\pm}$ is a charged state
comprising up to five hadrons, pions or kaons, among them up to two neutral
mesons, $\pi^{0}$ or $K^{0}_{\scriptscriptstyle S}$. The seed mesons, $D$,
$D^{*}$, $D_{s}$, $D_{s}^{*}$, and ${J\mskip-3.0mu/\mskip-2.0mu\psi\mskip
2.0mu}$, are reconstructed in 56 decay modes. As a result, the $B_{\rm tag}$
is reconstructed in 1,680 different decay chains, which are further subdivided
into 2,968 kinematic modes.
To isolate the true tag decays from combinatorial background, we use two
kinematic variables: the energy substituted mass $m_{ES}=\sqrt{E^{2}_{\rm
beam}-\mathbf{p}^{2}_{\rm tag}}$ and the energy difference $\Delta E=E_{\rm
tag}-E_{\rm beam}$. Here $\mathbf{p}_{\rm tag}$ and $E_{\rm tag}$ refer to the
c.m. momentum and energy of the $B_{\rm tag}$, and $E_{\rm beam}$ is the c.m.
energy of a single beam particle. These variables make optimum use of the
precisely known energies of the colliding beams. For correctly reconstructed
$B$ decays, the $m_{ES}$ distribution is centered at the $B$-meson mass with a
resolution of 2.5 $\mathrm{\,Me\kern-1.00006ptV}$, while $\Delta E$ is
centered at zero with a resolution of 18$\mathrm{\,Me\kern-1.00006ptV}$ which
is dominated by the detector resolution. We require
$m_{ES}>5.27\mathrm{\,Ge\kern-1.00006ptV}$ and $|\Delta
E|<0.072\mathrm{\,Ge\kern-1.00006ptV}$.
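A minimal sketch of these two tag-side variables and the corresponding requirements (the beam energy and the candidate four-momentum are illustrative numbers, not measured values):

```python
import numpy as np

E_beam = 10.58 / 2.0   # c.m. energy of a single beam particle (GeV)

def tag_variables(p_tag):
    """m_ES and Delta E for a B_tag candidate with c.m. four-momentum (E, px, py, pz)."""
    E_tag = p_tag[0]
    p2 = p_tag[1]**2 + p_tag[2]**2 + p_tag[3]**2
    m_ES = np.sqrt(E_beam**2 - p2)
    delta_E = E_tag - E_beam
    return m_ES, delta_E

def passes_tag_selection(p_tag):
    m_ES, delta_E = tag_variables(p_tag)
    return m_ES > 5.27 and abs(delta_E) < 0.072

# Hypothetical well-reconstructed tag B with |p| ~ 0.32 GeV in the c.m. frame.
p_tag = np.array([5.292, 0.10, 0.20, 0.23])
print(tag_variables(p_tag), passes_tag_selection(p_tag))
```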
For each $B_{\rm tag}$ candidate in a selected event, we look for the
signature of the semileptonic decay of the second $B$ meson, a $D$ or $D^{*}$
meson and a charged lepton $\ell$. We combine charged $B_{\rm tag}$ candidates
with $D^{(*)0}\ell^{-}$ systems and neutral $B_{\rm tag}$ candidates with both
$D^{(*)+}\ell^{-}$ and $D^{(*)-}\ell^{+}$ systems, where the inclusion of both
charge combinations allows for neutral $B$ mixing. We require all charged
particles to be associated with the $B_{\rm tag}D^{(*)}\ell$ candidate, but we
allow for any number of additional photons in the event.
The laboratory momentum of the electron or muon is required to exceed 300
$\mathrm{\,Me\kern-1.00006ptV}$ or 200 $\mathrm{\,Me\kern-1.00006ptV}$,
respectively. For $D$ mesons, we reconstruct the following decay modes:
$D^{0}\rightarrow K^{-}\pi^{+}$, $K^{-}K^{+}$, $K^{-}\pi^{+}\pi^{0}$,
$K^{-}\pi^{+}\pi^{-}\pi^{+}$, $K^{0}_{\scriptscriptstyle S}\pi^{+}\pi^{-}$,
and $D^{+}\rightarrow K^{-}\pi^{+}\pi^{+}$, $K^{-}\pi^{+}\pi^{+}\pi^{0}$,
$K^{0}_{\scriptscriptstyle S}\pi^{+}$, $K^{0}_{\scriptscriptstyle
S}\pi^{+}\pi^{+}\pi^{-}$, $K^{0}_{\scriptscriptstyle S}\pi^{+}\pi^{0},$
$K^{0}_{\scriptscriptstyle S}K^{+},$ with $K^{0}_{\scriptscriptstyle
S}\rightarrow\pi^{+}\pi^{-}$. The reconstructed invariant mass of $D$
candidates is required to be consistent with the nominal $D$ mass to within
four standard deviations ($\sigma$). The combined reconstructed branching
fractions are 35.8% and 27.3% for $D^{0}$ and $D^{+}$, respectively. We
identify $D^{*}$ mesons by their decays $D^{*+}\rightarrow
D^{0}\pi^{+},D^{+}\pi^{0}$, and $D^{*0}\rightarrow D^{0}\pi^{0},D^{0}\gamma$.
For these decays, the c.m. momentum of the pion or the c.m. energy of the
photon are required to be less than 400 $\mathrm{\,Me\kern-1.00006ptV}$.
Furthermore, the mass difference $\Delta m=m(D^{*})-m(D)$ is required to
differ by less than 4$\sigma$ from the expected value Nakamura _et al._
(2010).
To further reduce the combinatorial background, we perform a kinematic fit to
the event, constraining tracks of secondary charged particles to the
appropriate $B$, $D^{(*)}$, or $K^{0}_{\scriptscriptstyle S}$ decay vertices.
The fit also constrains the reconstructed masses of the $D$, $D^{*}$, and
$K^{0}_{\scriptscriptstyle S}$ mesons to their nominal values. The vertex of
the $\Upsilon{(4S)}\rightarrow B\kern
1.79993pt\overline{\kern-1.79993ptB}{}$ decay has to be compatible with a
beam-beam interaction. Candidates for which this fit does not converge are
rejected. The $m_{\rm miss}^{2}$ resolution improves by about 25% and becomes
more symmetric for the remaining candidates.
To select a single $B\kern 1.79993pt\overline{\kern-1.79993ptB}{}$ candidate,
we determine $E_{\rm extra}=\sum_{i}E_{i}^{\gamma}$, the sum of the energies
of all photons that are not associated with the reconstructed $B\kern
1.79993pt\overline{\kern-1.79993ptB}{}$ pair. We only include photons of more
than 50 $\mathrm{\,Me\kern-1.00006ptV}$, thereby eliminating about $99\%$ of
the beam-generated background. We retain the candidate with the lowest value
of $E_{\rm extra}$, and if more than one candidate survives, we select the one
with the smallest $|\Delta E|$. This procedure preferentially selects
$D^{*}\ell$ candidates over $D\ell$ candidates. Thus, we reduce the fraction
of misreconstructed events with a $D^{*}\rightarrow D(\pi/\gamma)$ decay for
which the pion or photon is not properly assigned to the $D^{*}$ meson.
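The single-candidate choice can be summarized in a few lines; the candidate records and field names below are hypothetical, and only the ordering logic (lowest $E_{\rm extra}$, then smallest $|\Delta E|$) reflects the procedure described above.

```python
# Each candidate carries the energies of photons not associated with the
# reconstructed BBbar pair and its Delta E (GeV); values are illustrative.
candidates = [
    {"extra_photons": [0.03, 0.12, 0.08], "dE": -0.030},
    {"extra_photons": [0.04, 0.06],       "dE":  0.015},
    {"extra_photons": [0.02, 0.06],       "dE":  0.050},
]

def E_extra(cand):
    # Only photons above 50 MeV enter the sum.
    return sum(e for e in cand["extra_photons"] if e > 0.050)

# Keep the candidate with the lowest E_extra; break ties with the smallest |Delta E|.
best = min(candidates, key=lambda c: (E_extra(c), abs(c["dE"])))
print(best, E_extra(best))
```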
As a consequence of the rather loose lepton selection criteria and the
addition of decay modes with multiple neutral pions and
$K^{0}_{\scriptscriptstyle S}$ for the $B_{\rm tag}$ selection, the number of
$B_{\rm tag}D^{(*)}\ell$ candidates per event is very large. To address this
problem, we identify the $B_{\rm tag}$ decay modes that contribute primarily
to the combinatorial background. Specifically, we determine for each of the
2,968 kinematic modes $R_{\rm tc}$, the fraction of events for which all
charged particles in the $B_{\rm tag}$ final state are correctly reconstructed
and associated with the tag decay. This assessment is based on a large sample
of simulated $B\kern 1.79993pt\overline{\kern-1.79993ptB}{}$ events equivalent
to 700 $\mbox{\,fb}^{-1}$. We observe that for decay chains with low
multiplicity final states and no neutral hadrons the signal-to-background
ratio ($S/B$) is very high. For instance, for the $B^{-}_{\rm
tag}\rightarrow{J\mskip-3.0mu/\mskip-2.0mu\psi\mskip
2.0mu}(\rightarrow\mu^{+}\mu^{-})K^{-}$ decay, we obtain $S/B=316/79$, whereas
for the decay $B^{0}_{\rm tag}\rightarrow D^{-}(\rightarrow
K^{0}_{\scriptscriptstyle S}\pi^{-})\pi^{+}\pi^{+}\pi^{+}\pi^{-}\pi^{-}$ this
ratio is $S/B=20/145$. For this decay mode, typically 3.5 of the 8 $B_{\rm
tag}$ final state particles are incorrectly associated with the second $B$
decay in the event or otherwise misidentified. Based on this study, we only
retain $B_{\rm tag}$ decay chains with $R_{\rm tc}>0.3$. With this criterion,
we remove 2100 $B_{\rm tag}$ kinematic modes, eliminate 2/3 of the
combinatorial background, and retain 85% of the signal $\kern
1.79993pt\overline{\kern-1.79993ptB}{}\rightarrow
D^{(*)}\tau^{-}\overline{\nu}_{\tau}$ decays. Thanks to this procedure, the
average number of candidates per event before single candidate selection is
reduced to 1.8 for the $D^{0}\ell$ and $D^{+}\ell$ samples, and 3.1 and 4.8
for the $D^{*0}\ell$ and $D^{*+}\ell$ samples, respectively.
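The mode-level quantity $R_{\rm tc}$ is a simple truth-matched fraction computed from simulation; a schematic sketch with made-up modes and truth flags:

```python
from collections import defaultdict

# Simulated B_tag candidates: (kinematic mode label, all charged tracks correctly
# reconstructed and associated with the tag decay?). Entries are illustrative.
sim_candidates = [
    ("D-pi+", True), ("D-pi+", True), ("D-pi+", False),
    ("D-5pi", False), ("D-5pi", False), ("D-5pi", True), ("D-5pi", False),
]

counts = defaultdict(lambda: [0, 0])   # mode -> [n_truth_matched, n_total]
for mode, truth_matched in sim_candidates:
    counts[mode][1] += 1
    if truth_matched:
        counts[mode][0] += 1

# R_tc is the truth-matched fraction per mode; only modes with R_tc > 0.3 are kept.
R_tc = {mode: n_ok / n_tot for mode, (n_ok, n_tot) in counts.items()}
kept_modes = {mode for mode, r in R_tc.items() if r > 0.3}
print(R_tc, kept_modes)
```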
### IV.2 Selection of the $\boldsymbol{D^{(*)}\pi^{0}\ell}$ Control Samples
To constrain the $\kern 1.79993pt\overline{\kern-1.79993ptB}{}\rightarrow
D^{**}(\tau^{-}/\ell^{-})\overline{\nu}$ background, we select four
$D^{(*)}\pi^{0}\ell$ control samples, identical to the $D^{(*)}\ell$ samples
except for an additional reconstructed $\pi^{0}$. The $\pi^{0}$ is selected in
the mass range $m_{\gamma\gamma}\in[120,150]\mathrm{\,Me\kern-1.00006ptV}$.
Decays of the form $B\rightarrow D^{(*)}\pi\ell\nu$ peak at $m_{\rm
miss}^{2}=0$ in these samples. As a result, we can extract their yields
together with the signal and normalization yields by fitting the $D^{(*)}\ell$
and $D^{(*)}\pi^{0}\ell$ samples simultaneously.
More than half of the events in these control samples originate from continuum
$e^{+}e^{-}\rightarrow q\overline{q}(\gamma)$ events. Since the fragmentation
of light quarks leads to a two-jet event topology, this background is very
effectively suppressed by the requirement $|\cos\Delta\theta_{\rm
thrust}|<0.8$, where $\Delta\theta_{\rm thrust}$ is the angle between the
thrust axes of the $B_{\rm tag}$ and of the rest of the event. Since $B$
mesons originating from $\Upsilon{(4S)}$ decays are produced just
above threshold, their final state particles are emitted almost isotropically,
and, therefore, the $\cos\Delta\theta_{\rm thrust}$ distribution is uniform.
As a result, the loss of $\kern
1.79993pt\overline{\kern-1.79993ptB}{}\rightarrow
D^{**}(\tau^{-}/\ell^{-})\overline{\nu}$ decays due to this restriction is
significantly smaller than the amount of continuum events rejected.
Figure 4: (Color online). Input variables for the BDT selector trained on the
$D^{*0}\ell$ sample. Histograms are normalized to 1000 entries.
### IV.3 Optimization of the Signal Selection
We introduce criteria that discriminate signal from background, and also
differentiate between signal $\kern
1.79993pt\overline{\kern-1.79993ptB}{}\rightarrow
D^{(*)}\tau^{-}\overline{\nu}_{\tau}$ and $\kern
1.79993pt\overline{\kern-1.79993ptB}{}\rightarrow
D^{(*)}\ell^{-}\overline{\nu}_{\ell}$ decays. For semileptonic decays the
minimum momentum transfer is largely determined by the mass of the charged
lepton. For decays involving $\tau$ leptons, $q^{2}_{\rm
min}=m^{2}_{\tau}\simeq 3.16\,\mathrm{GeV}^{2}$. Thus the
selection $q^{2}>4\,\mathrm{GeV}^{2}$ retains 98% of the $\overline{B}\rightarrow
D^{(*)}\tau^{-}\overline{\nu}_{\tau}$ decays and rejects more than 30% of the
$\overline{B}\rightarrow D^{(*)}\ell^{-}\overline{\nu}_{\ell}$ decays. The event sample with
$q^{2}<4\,\mathrm{GeV}^{2}$ is dominated by $\overline{B}\rightarrow
D^{(*)}\ell^{-}\overline{\nu}_{\ell}$ decays and serves as a very clean data sample
for comparisons with the MC simulation. To reject background from hadronic $B$
decays in which a pion is misidentified as a muon, we require
$|\boldsymbol{p}_{\rm miss}|>200\,\mathrm{MeV}$, where
$|\boldsymbol{p}_{\rm miss}|$ is the missing momentum in the c.m. frame.
Figure 5: (Color online). Comparison of data control samples (data points)
with MC simulated samples (histograms) of the $|\boldsymbol{p}^{*}_{\ell}|$
distributions for (a) off-peak data prior to $|\boldsymbol{p}^{*}_{\ell}|$
reweighting, (b) the intermediate $E_{\rm extra}$ sample prior to
$|\boldsymbol{p}^{*}_{\ell}|$ reweighting, and (c) the intermediate $E_{\rm
extra}$ sample after $|\boldsymbol{p}^{*}_{\ell}|$ reweighting; (d) the
$E_{\rm extra}$ distribution for the combinatorial background; and the $m_{\rm
ES}$ distributions for (e) the intermediate $E_{\rm extra}$ sample, and (f)
the low $E_{\rm extra}$ sample. The results are shown for the four
$D^{(*)}\ell$ samples combined.
To further improve the separation of well-reconstructed signal and
normalization decays from various backgrounds, we employ a boosted decision
tree (BDT) multivariate method Speckmayer _et al._ (2010). This method relies
on simple classifiers which determine signal and background regions by using
binary selections on various input distributions. For each of the four
$D^{(*)}\ell$ samples, we train a BDT to select signal and normalization
events and reject $\kern 1.79993pt\overline{\kern-1.79993ptB}{}\rightarrow
D^{**}(\tau^{-}/\ell^{-})\overline{\nu}$ and charge cross-feed backgrounds.
Each BDT selector relies on the simulated distributions of the following
variables: (a) $E_{\rm extra}$; (b) $\Delta E$; (c) the reconstructed mass of
the signal $D^{(*)}$ meson; (d) the mass difference for the reconstructed
signal $D^{*}$: $\Delta m=m(D\pi)-m(D)$; (e) the reconstructed mass of the
seed meson of the $B_{\rm tag}$; (f) the mass difference for a $D^{*}$
originating from the $B_{\rm tag}$, $\Delta m_{\rm tag}=m(D_{\rm
tag}\pi)-m(D_{\rm tag})$; (g) the charged particle multiplicity of the $B_{\rm
tag}$ candidate; and (h) $\cos\Delta\theta_{\rm thrust}$. The input
distributions for one of the BDT selectors are shown in Fig. 4. For the
$D^{(*)}\pi^{0}\ell$ samples, we use similar BDT selectors that are trained to
reject continuum, $D^{(*)}(\ell/\tau)\nu$, and other $B\overline{B}$ background. After the BDT requirements
are applied, the fraction of events attributed to signal in the $m_{\rm
miss}^{2}>1.5\,\mathrm{GeV}^{2}$ region, which excludes most of
the normalization decays, increases from 2% to 39%. The background remaining
in that region is composed of normalization events (10%), continuum (19%),
$D^{**}(\ell/\tau)\nu$ events (13%), and other $B\overline{B}$ events (19%), primarily from
$B\rightarrow D^{(*)}D^{(*)+}_{s}$ decays with
$D^{+}_{s}\rightarrow\tau^{+}\nu_{\tau}$.
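The BDT training itself is part of the analysis software and is not reproduced here. As a rough illustration of the approach, the sketch below trains a gradient-boosted classifier on toy stand-ins for the eight input variables listed above; the scikit-learn classifier, the toy data, and the working point are assumptions of this sketch, not the selectors of the method cited above (Speckmayer _et al._ (2010)).

```python
# Minimal sketch of a BDT selector in the spirit described above, using
# scikit-learn instead of the cited implementation. Column contents and the
# toy data are placeholders, not the analysis ntuples.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 10000
# Toy stand-ins for the eight input variables (a)-(h) listed in the text.
X = rng.normal(size=(n, 8))
y = rng.integers(0, 2, size=n)   # 1 = signal/normalization, 0 = background
X[y == 1, 0] -= 0.5              # e.g. signal peaks at lower E_extra

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=1)
bdt = GradientBoostingClassifier(n_estimators=200, max_depth=3, learning_rate=0.1)
bdt.fit(X_train, y_train)

# Events above a chosen working point on the BDT score are retained.
scores = bdt.decision_function(X_test)
selected = scores > 0.0
print(f"efficiency for class 1: {np.mean(selected[y_test == 1]):.2f}")
```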
## V Correction and Validation of the MC Simulation
The simulation of the full reconstruction of high-multiplicity events,
including the veto of events with extra tracks or higher values of $E_{\rm
extra}$ is a rather challenging task. To validate the simulation, we compare
simulated distributions with data control samples, and, when necessary,
correct the MC simulations for the observed differences. The figures shown in
this section combine events from all four channels ($D^{0}\ell$, $D^{*0}\ell$,
$D^{+}\ell$, and $D^{*+}\ell$); the observed differences are similar in the
individual samples.
The control samples are selected to have little or no contamination from
signal decays. Specifically, we select:
* •
Continuum events: off-peak data.
* •
Normalization decays: $q^{2}\leq 4\,\mathrm{GeV}^{2}$.
* •
Combinatorial $B\overline{B}$ and continuum
backgrounds: $5.20<m_{\rm ES}<5.26\,\mathrm{GeV}$.
* •
Incorrectly reconstructed events: events in three $E_{\rm extra}$ intervals,
high ($1.2<E_{\rm extra}<2.4\,\mathrm{GeV}$), intermediate
($0.5<E_{\rm extra}<1.2\,\mathrm{GeV}$), and low ($E_{\rm
extra}<0.5\,\mathrm{GeV}$ for events that fail the BDT
selection). Note that the BDT selection eliminates all events
with $E_{\rm extra}>0.4\,\mathrm{GeV}$.
The off-peak data sample confirms the $m_{\rm miss}^{2}$ distribution of
simulated continuum events, but shows discrepancies in the
$|\boldsymbol{p}^{*}_{\ell}|$ spectrum and overall normalization of the
simulation [Fig. 5(a)]. These features are also observed in other control
samples, such as on-peak data with high $E_{\rm extra}$ [Fig. 5(b)]. We
correct the simulated $|\boldsymbol{p}^{*}_{\ell}|$ spectrum and yield of the
continuum contribution by reweighting it to match off-peak data, on an event-
by-event basis. After this correction, the $|\boldsymbol{p}^{*}_{\ell}|$
distributions of the expected backgrounds agree well in independent control
samples down to low lepton momenta where the misidentification rates are
significant [Fig. 5(c)]. We observe that in the high $E_{\rm extra}$ region,
the simulation exceeds data yield by $(1.3\pm 0.5)$%. This small excess is
corrected by decreasing the expected $B\kern
1.79993pt\overline{\kern-1.79993ptB}{}$ background yield by $(4.3\pm 1.9)\%$.
After this correction, the simulation provides accurate yield predictions for
the backgrounds at intermediate and high $E_{\rm extra}$. For instance, the
ratio of the expected to observed yield of events with $m_{\rm
miss}^{2}>1.5\mathrm{\,Ge\kern-1.00006ptV}^{2}$ is $0.998\pm 0.006$. The
$m_{\rm miss}^{2}$ distributions of the continuum and $B\kern
1.79993pt\overline{\kern-1.79993ptB}{}$ backgrounds are described well in all
control samples.
The region of low $E_{\rm extra}$, which includes the signal region, is more
difficult to model, primarily due to low energy photons and
$K^{0}_{\scriptscriptstyle L}$ mesons interacting in the EMC. Figure 5(d)
shows that the data in the $m_{\rm ES}$ sideband agree well with the
combinatorial background predictions for $E_{\rm extra}>0.5\,\mathrm{GeV}$,
but are underestimated for low $E_{\rm extra}$. This, and small differences
in the other BDT input distributions, result in an underestimation of the
combinatorial background when the BDT requirements are applied. Based on the
$5.20<m_{\rm ES}<5.26\,\mathrm{GeV}$ sideband, we find scale factors of
$1.099\pm 0.019$ and $1.047\pm 0.034$ for the combinatorial background in the
$D\ell$ and $D^{*}\ell$ samples, respectively. The uncertainties are given by
the statistics of the data and simulated samples. The ratio between the
observed and expected $m_{\rm ES}$ distribution is independent of $E_{\rm
extra}$ [Figs. 5(e,f)], so we apply these corrections to the continuum and
$B\kern 1.79993pt\overline{\kern-1.79993ptB}{}$ backgrounds in the signal
region. The same correction is applied to $\kern
1.79993pt\overline{\kern-1.79993ptB}{}\rightarrow
D^{**}(\tau^{-}/\ell^{-})\overline{\nu}$ events, which cannot be easily
isolated, because their simulated $E_{\rm extra}$ distributions are very
similar to those of combinatorial background. These corrections affect the
fixed $B\kern 1.79993pt\overline{\kern-1.79993ptB}{}$ and continuum yields in
the fit, as well as the relative efficiency of $\kern
1.79993pt\overline{\kern-1.79993ptB}{}\rightarrow
D^{**}(\tau^{-}/\ell^{-})\overline{\nu}$ events in the $D^{(*)}\ell$ and
$D^{(*)}\pi^{0}\ell$ samples. As a result, these corrections are the source of
the dominant systematic uncertainties.
Relying on the $q^{2}\leq 4\mathrm{\,Ge\kern-1.00006ptV}^{2}$ control sample,
where $\kern 1.79993pt\overline{\kern-1.79993ptB}{}\rightarrow
D^{(*)}\ell^{-}\overline{\nu}_{\ell}$ decays account for 96% of the events, we
correct the $E_{\rm extra}$ distribution and an 8.5% overestimation of the
simulated normalization events. We apply the same correction to simulated
signal events which are expected to have a similar $E_{\rm extra}$
distribution. This procedure does not affect the relative efficiency of signal
to normalization events, so it has a very small impact on the ${\cal
R}(D^{(*)})$ measurements.
We use the same $q^{2}\leq 4\mathrm{\,Ge\kern-1.00006ptV}^{2}$ control sample
to compare and validate the $|\boldsymbol{p}^{*}_{\ell}|$ distributions of
$\kern 1.79993pt\overline{\kern-1.79993ptB}{}\rightarrow
D^{(*)}\ell^{-}\overline{\nu}_{\ell}$ events. We observe that the $m_{\rm
miss}^{2}$ resolution of the narrow peaks at $m_{\rm miss}^{2}=0$ is slightly
underestimated by the simulation. This effect is corrected by convolving the
simulated distributions with a Gaussian resolution function, for which the
width is adjusted by iteration.
Table 2: Contributions to the four $D^{(*)}\ell$ samples. The expected relative abundance of events in each data sample is represented by $f_{\mathrm{exp}}$. The columns labeled _Yield_ indicate whether the contribution is free in the fit, fixed, or linked to another component through a cross-feed constraint. The charge cross-feed components, marked with _Fix./It._, are fixed in the fit, but updated in the iterative process.
 | $D^{0}\ell$ | $D^{*0}\ell$ | $D^{+}\ell$ | $D^{*+}\ell$
---|---|---|---|---
Source | $f_{\mathrm{exp}}$ (%) | | Yield | $f_{\mathrm{exp}}$ (%) | | Yield | $f_{\mathrm{exp}}$ (%) | | Yield | $f_{\mathrm{exp}}$ (%) | | Yield
$D^{(*)}\tau\nu$ signal | 2.6 | | Free | 4.9 | | Free | 4.3 | | Free | 5.0 | | Free
$D^{(*)}\tau\nu$ signal feed-down/up | 2.8 | | $D^{*0}\ell$ | 0.4 | | $D^{0}\ell$ | 1.8 | | $D^{*+}\ell$ | 0.1 | | $D^{+}\ell$
$D^{(*)}\ell\nu$ normalization | 24.5 | | Free | 80.7 | | Free | 37.3 | | Free | 88.0 | | Free
$D^{(*)}\ell\nu$ norm. feed-down/up | 53.5 | | Free | 2.7 | | $D^{0}\ell$ | 35.0 | | Free | 0.3 | | $D^{+}\ell$
$D^{**}(\ell/\tau)\nu$ background | 4.3 | | $D^{0}\pi^{0}\ell$ | 3.6 | | $D^{*0}\pi^{0}\ell$ | 6.6 | | $D^{+}\pi^{0}\ell$ | 3.0 | | $D^{*+}\pi^{0}\ell$
Cross-feed background | 3.8 | | Fix./It. | 1.3 | | Fix./It. | 2.1 | | Fix./It. | 0.4 | | Fix./It.
$B\kern 1.79993pt\overline{\kern-1.79993ptB}{}$ background | 4.1 | | Fixed | 3.7 | | Fixed | 7.1 | | Fixed | 2.8 | | Fixed
Continuum background | 4.4 | | Fixed | 2.6 | | Fixed | 5.9 | | Fixed | 0.5 | | Fixed
Table 3: Contributions to the four $D^{(*)}\pi^{0}\ell$ samples. The expected relative abundance of events in each data sample is represented by $f_{\mathrm{exp}}$. The columns labeled _Yield_ indicate whether the contribution is free in the fit, fixed, or linked to another component through a cross-feed constraint. The $D(\ell/\tau)\nu$ components are linked to the $D^{*}(\ell/\tau)\nu$ components, and the cross-feed constraint is updated in the iteration. The charge cross-feed components, marked with _Fix./It._, are fixed in the fit, but updated in the iterative process.
 | $D^{0}\pi^{0}\ell$ | $D^{*0}\pi^{0}\ell$ | $D^{+}\pi^{0}\ell$ | $D^{*+}\pi^{0}\ell$
---|---|---|---|---
Source | $f_{\mathrm{exp}}$ (%) | | Yield | $f_{\mathrm{exp}}$ (%) | | Yield | $f_{\mathrm{exp}}$ (%) | | Yield | $f_{\mathrm{exp}}$ (%) | | Yield
$D^{**}(\ell/\tau)\nu$ background | 20.1 | | Free | 16.4 | | Free | 19.9 | | Free | 22.1 | | Free
$D^{*}(\ell/\tau)\nu$ feed-up | 19.1 | | Free | 20.6 | | Free | 10.0 | | Free | 25.2 | | Free
$D(\ell/\tau)\nu$ feed-up | 6.4 | | $D^{0}\pi^{0}\ell$ | 2.3 | | $D^{*0}\pi^{0}\ell$ | 4.7 | | $D^{+}\pi^{0}\ell$ | 0.8 | | $D^{*+}\pi^{0}\ell$
Cross-feed background | 4.9 | | Fix./It. | 3.6 | | Fix./It. | 4.4 | | Fix./It. | 2.5 | | Fix./It.
$B\kern 1.79993pt\overline{\kern-1.79993ptB}{}$ background | 28.4 | | Free | 36.4 | | Free | 38.7 | | Free | 37.4 | | Free
Continuum background | 21.0 | | Fixed | 20.8 | | Fixed | 22.2 | | Fixed | 12.0 | | Fixed
## VI Fit procedure and results
### VI.1 Overview
We extract the signal and normalization yields from an extended, unbinned
maximum-likelihood fit to two-dimensional $m_{\rm
miss}^{2}$–$|\boldsymbol{p}^{*}_{\ell}|$ distributions. The fit is performed
simultaneously to the four $D^{(*)}\ell$ samples and the four
$D^{(*)}\pi^{0}\ell$ samples. The distribution of each $D^{(*)}\ell$ and
$D^{(*)}\pi^{0}\ell$ sample is fit to the sum of eight or six contributions,
respectively. Each of the $4\times 8+4\times 6=56$ contributions is described
by a probability density function (PDF), whose scale factor determines the
number of events from the corresponding source. Tables 2 and 3 summarize the
contributions to the fit for the four $D^{(*)}\ell$ samples and the four
$D^{(*)}\pi^{0}\ell$ samples. These tables also list the relative yield for
each contribution as estimated from MC simulation (assuming SM signal), and specify
whether the yield is free, fixed, or constrained in the fit.
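As a schematic of what such an extended unbinned maximum-likelihood fit involves, the sketch below fits two yields to a one-dimensional toy distribution. The Gaussian shapes, the toy data, and the minimizer choice are assumptions of the illustration; they stand in for the 56 two-dimensional template PDFs and the simultaneous fit described here.

```python
# Minimal sketch of an extended unbinned maximum-likelihood fit with two
# components. The PDFs here are simple 1-D Gaussians standing in for the
# smoothed m_miss^2-|p*_l| templates.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(2)
# Toy data: 100 "signal" events near 1.0 and 400 "normalization" events near 0.0.
data = np.concatenate([rng.normal(1.0, 0.3, 100), rng.normal(0.0, 0.1, 400)])

def pdf_sig(x):  return norm.pdf(x, loc=1.0, scale=0.3)
def pdf_norm(x): return norm.pdf(x, loc=0.0, scale=0.1)

def nll(params):
    n_sig, n_norm = params
    if n_sig < 0 or n_norm < 0:
        return np.inf
    dens = n_sig * pdf_sig(data) + n_norm * pdf_norm(data)
    # Extended likelihood: Poisson term for the total yield plus the event sum.
    return (n_sig + n_norm) - np.sum(np.log(dens))

result = minimize(nll, x0=[50.0, 300.0], method="Nelder-Mead")
print("fitted yields:", result.x)
```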
We introduce the following notation to uniquely identify each contribution to
the fit: _source_ $\Rightarrow$ _sample_. For instance,
$D^{*0}\tau\nu\Rightarrow D^{*0}\ell$ refers to signal $D^{*0}\tau\nu$ decays
that are correctly reconstructed in the $D^{*0}\ell$ sample, while
$D^{*0}\tau\nu\Rightarrow D^{0}\ell$ refers to the same decays, but
incorrectly reconstructed in the $D^{0}\ell$ sample. We refer to the latter as
feed-down. Contributions of the form $D(\tau/\ell)\nu\Rightarrow
D^{*}(\tau/\ell)$ and $D^{(*)}(\tau/\ell)\nu\Rightarrow D^{**}(\tau/\ell)$ are
referred to as feed-up.
The contributions from the continuum, $B\overline{B}$, and cross-feed backgrounds, with the
exception of the $B\overline{B}$ background in the
$D^{(*)}\pi^{0}\ell$ samples, are fixed to the yields determined by MC
simulation after small adjustments based on data control regions. The yields
of the remaining 36 contributions are determined in the fit. Some of these
contributions share the same source and therefore the ratio of their yields is
constrained to the expected value, _e.g._, $D^{*0}\tau\nu\Rightarrow
D^{*0}\ell$ and $D^{*0}\tau\nu\Rightarrow D^{0}\ell$. Of special importance
are the constraints linking the $D^{**}(\ell/\tau)\nu$ yields in the
$D^{(*)}\ell$ samples ($N_{D^{**}\Rightarrow D^{(*)}}$) to the yields in the
$D^{(*)}\pi^{0}\ell$ samples ($N_{D^{**}\Rightarrow D^{(*)}\pi^{0}}$),
$f_{D^{**}}=\frac{N_{D^{**}\Rightarrow D^{(*)}}}{N_{D^{**}\Rightarrow D^{(*)}\pi^{0}}}=\frac{\varepsilon_{D^{**}\Rightarrow D^{(*)}}}{\varepsilon_{D^{**}\Rightarrow D^{(*)}\pi^{0}}}.$ (24)
Since both contributions share the same source, $f_{D^{**}}$ is given by the
ratio of the $D^{**}(\ell/\tau)\nu$ reconstruction efficiencies for the
two samples.
Table 4: Number of free parameters in the isospin-unconstrained ($N_{\rm un}$) and constrained ($N_{\rm cons}$) fits.
Sample | Contribution | $N_{\rm un}$ | $N_{\rm cons}$
---|---|---|---
$D^{(*)}\ell$ | $D^{(*)}\tau\nu$ signal | 4 | 2
$D^{(*)}\ell$ | $D^{(*)}\ell\nu$ normalization | 4 | 2
$D^{(*)}\ell$ | $D^{*}\ell\nu$ norm. feed-down | 2 | 1
$D^{(*)}\pi^{0}\ell$ | $D^{**}(\ell/\tau)\nu$ background | 4 | 4
$D^{(*)}\pi^{0}\ell$ | $D^{(*)}\ell\nu$ norm. feed-up | 4 | 4
$D^{(*)}\pi^{0}\ell$ | $B\kern 1.79993pt\overline{\kern-1.79993ptB}{}$ background | 4 | 4
Taking into account the constraints imposed on event yields from a common
source, there are 22 free parameters in the standard fit, as listed in Table
4. In addition, we perform a fit in which we impose the isospin relations
${\cal R}(D^{0})={\cal R}(D^{+})\equiv{\cal R}(D)$ and ${\cal R}(D^{*0})={\cal
R}(D^{*+})\equiv{\cal R}(D^{*})$. We choose not to impose isospin relations
for the $D^{(*)}\pi^{0}\ell$ samples. Consequently, this fit has a total of 17
free parameters.
The following inputs are updated by iterating the fit:
* •
The eight $D^{(*)}(\ell/\tau)\nu\Rightarrow D^{(*)}\pi^{0}\ell$ PDFs are
recalculated taking into account the fitted $D^{(*)}\ell\nu$ and
$D^{(*)}\tau\nu$ contributions to the $D^{(*)}\ell$ samples.
* •
The fixed charge cross-feed yields are updated based on the deviation of the
fitted $D^{(*)}\ell\nu$ yields from the expected values.
* •
The continuum, $B\kern 1.79993pt\overline{\kern-1.79993ptB}{}$, and
$D^{**}(\ell/\tau)\nu$ background corrections are recalculated. They have a
slight dependence on the fitted $D^{(*)}\ell\nu$ events because some of these
events extend into the $m_{\rm ES}$ sideband.
* •
The correction to the $m_{\rm miss}^{2}$ resolution of the normalization
contributions is readjusted.
* •
The two feed-down constraints for $D^{*}\tau\nu$ are updated using the fitted
feed-down rates of the normalization contributions in the following way:
$\left.\frac{N_{D^{*}\tau\nu\Rightarrow D\ell}}{N_{D^{*}\tau\nu\Rightarrow D^{*}\ell}}\right|_{\rm Iter.}=\left.\frac{N_{D^{*}\tau\nu\Rightarrow D\ell}}{N_{D^{*}\tau\nu\Rightarrow D^{*}\ell}}\right|_{\rm MC}\times\left.\frac{N_{D^{*}\ell\nu\Rightarrow D\ell}}{N_{D^{*}\ell\nu\Rightarrow D^{*}\ell}}\right|_{\rm Fit}\times\left.\frac{N_{D^{*}\ell\nu\Rightarrow D^{*}\ell}}{N_{D^{*}\ell\nu\Rightarrow D\ell}}\right|_{\rm MC}.$ (25)
The iterations continue until the change on the values of ${\cal R}(D^{(*)})$
is less than 0.01%. The update of the feed-down rates has a significant impact
on the fits to the $D^{0}$ and $D^{+}$ samples because of the large signal
feed-down. The other iterative updates have only a marginal impact.
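A minimal sketch of this iteration logic is given below, with placeholder functions for the fit and for the input updates; the 0.01% convergence criterion follows the text, and everything else is schematic.

```python
# Schematic of the iterative fitting procedure: refit, update the derived
# inputs (PDFs, cross-feed yields, feed-down constraints), and stop once
# R(D) and R(D*) change by less than 0.01%. The callables are placeholders.
def iterate_fit(run_fit, update_inputs, tol=1e-4, max_iter=20):
    r_prev = None
    for _ in range(max_iter):
        r_d, r_dstar = run_fit()              # returns current R(D), R(D*)
        if r_prev is not None:
            if (abs(r_d / r_prev[0] - 1.0) < tol and
                    abs(r_dstar / r_prev[1] - 1.0) < tol):
                return r_d, r_dstar
        update_inputs(r_d, r_dstar)           # recompute PDFs, constraints, ...
        r_prev = (r_d, r_dstar)
    return r_prev
```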
### VI.2 Probability Density Functions and Validation
The fit relies on 56 PDFs, which are derived from MC samples of continuum and
$B\overline{B}$ events equivalent to 2 and 9
times the size of the data sample, respectively. The two-dimensional $m_{\rm
miss}^{2}$–$|\boldsymbol{p}^{*}_{\ell}|$ distributions for each of the 56
contributions to the fit are described by smooth non-parametric kernel
estimators Cranmer (2001). These estimators place a two-dimensional Gaussian
function centered at the $m_{\rm miss}^{2}$ and $|\boldsymbol{p}^{*}_{\ell}|$
values of each simulated event. The width of the Gaussian function determines
the smoothness of the PDF. We find the optimum level of global smoothing with
a cross-validation algorithm Bowman and Azzalini (1997). For PDFs that have
variations in shape that require more than one level of smoothing, we combine
estimators with different Gaussian widths in up to four areas in the $m_{\rm
miss}^{2}$–$|\boldsymbol{p}^{*}_{\ell}|$ space. For instance, we use different
levels of smoothing in the $D^{*0}\ell\nu\Rightarrow D^{*0}\ell$ contribution
for the narrow peak at $m_{\rm miss}^{2}=0$ and the smooth $m_{\rm miss}^{2}$
tail that extends up to $7\mathrm{\,Ge\kern-1.00006ptV}^{2}$. Figure 6 shows
one-dimensional projections of five two-dimensional PDFs. The bands indicate
the statistical uncertainty on the PDFs estimated with a bootstrap algorithm
Bowman and Azzalini (1997).
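For illustration, the sketch below builds a two-dimensional kernel-density estimate from toy events; `scipy.stats.gaussian_kde` with a single global bandwidth stands in for the adaptive, piecewise-smoothed estimators used in the analysis, and the toy data are not the simulated samples.

```python
# Minimal 2-D kernel-density sketch: a Gaussian kernel is placed at each
# simulated (m_miss^2, |p*_l|) point and the sum is used as a smooth PDF.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(3)
# Toy MC events: a peak at m_miss^2 = 0 plus a broad tail.
mmiss2 = np.concatenate([rng.normal(0.0, 0.1, 2000), rng.exponential(2.0, 1000)])
plep = rng.uniform(0.0, 2.5, mmiss2.size)

kde = gaussian_kde(np.vstack([mmiss2, plep]))   # global bandwidth (Scott's rule)
density = kde(np.array([[1.5], [1.0]]))         # PDF value at (1.5 GeV^2, 1.0 GeV)
print(density)
```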
The $m_{\rm miss}^{2}$ distributions of signal and normalization are very
distinct due to the different number of neutrinos in the final state. The
$m_{\rm miss}^{2}$ distributions of the backgrounds resemble those of the
signal, and therefore these contributions to the fit are either fixed or
constrained by the $D^{(*)}\pi^{0}\ell$ samples.
Figure 6: (Color online) Projections of the simulated $m_{\rm miss}^{2}$ and
$|\boldsymbol{p}^{*}_{\ell}|$ distributions and the PDFs for the following
contributions to the $D^{0}\ell$ sample: (a), (b) $D^{0}\tau\nu$; (c), (d)
$D^{0}\ell\nu$; (e), (f) $D^{*0}\ell\nu$; (g), (h) $D^{**}(\ell/\tau)\nu$, and
(i), (j) $B\kern 1.61993pt\overline{\kern-1.61993ptB}{}$ background. The light
and dark blue (gray) bands mark the $1\sigma$ and $2\sigma$ envelopes of the
variations of the PDF projections due to their statistical uncertainty.
To validate the PDFs and the fit procedure, we divide the large sample of
simulated $B\kern 1.79993pt\overline{\kern-1.79993ptB}{}$ events into two:
sample A with about $3.3\times 10^{9}$ $B\kern
1.79993pt\overline{\kern-1.79993ptB}{}$ events, and sample B with $9.4\times
10^{8}$ $B\kern 1.79993pt\overline{\kern-1.79993ptB}{}$ events. We determine
the PDFs with sample A, and create histograms by integrating the PDFs in bins
of their $m_{\rm miss}^{2}$ and $|\boldsymbol{p}^{*}_{\ell}|$ projections. We
compare the resulting histograms with the events in sample A, and derive a
$\chi^{2}$ based on the statistical significance of the difference for each
bin. The distribution of the corresponding $p$ values for these PDFs is
uniform, as expected for an unbiased estimation. As another test, we extract
the signal and normalization yields from fits to the events of sample B, using
the PDFs obtained from sample A. Again, the results are compatible with an
unbiased fit. Furthermore, we validate the fit procedure based on a large
number of pseudo experiments generated from these PDFs. Fits to these samples
also show no bias in the extracted signal and normalization yields.
### VI.3 Fit Results
Figures 7 and 8 show the $m_{\rm miss}^{2}$ and $|\boldsymbol{p}^{*}_{\ell}|$
projections of the fits to the $D^{(*)}\ell$ samples. In Fig. 7, the
$|\boldsymbol{p}^{*}_{\ell}|$ projections do not include events with $m_{\rm
miss}^{2}>1\,\mathrm{GeV}^{2}$, i.e., they exclude most of the signal
events. In Fig. 8, the vertical scale is enlarged and the horizontal axis is
extended for the $m_{\rm miss}^{2}$ projection to reveal the signal and
background contributions. The $|\boldsymbol{p}^{*}_{\ell}|$ projections
emphasize the signal events by excluding events with $m_{\rm
miss}^{2}<1\,\mathrm{GeV}^{2}$. Both figures demonstrate that
the fit describes the data well and the observed differences are consistent
with the statistical and systematic uncertainties on the PDFs and the
background contributions.
Figure 7: (Color online). Comparison of the $m_{\rm miss}^{2}$ and
$|\boldsymbol{p}^{*}_{\ell}|$ distributions of the $D^{(*)}\ell$ samples (data
points) with the projections of the results of the isospin-unconstrained fit
(stacked colored distributions). The $|\boldsymbol{p}^{*}_{\ell}|$
distributions show the normalization-enriched region with $m_{\rm
miss}^{2}<1\mathrm{\,Ge\kern-0.80005ptV}^{2}$, thus excluding most of the
signal events in these samples.
Figure 8: (Color online). Comparison of the $m_{\rm miss}^{2}$ and
$|\boldsymbol{p}^{*}_{\ell}|$ distributions of the $D^{(*)}\ell$ samples (data
points) with the projections of the results of the isospin-unconstrained fit
(stacked colored distributions). The region above the dashed line of the
background component corresponds to $B\kern
1.43994pt\overline{\kern-1.43994ptB}{}$ background and the region below
corresponds to continuum. The peak at $m_{\rm miss}^{2}=0$ in the background
component is due to charge cross-feed events. The
$|\boldsymbol{p}^{*}_{\ell}|$ distributions show the signal-enriched region
with $m_{\rm miss}^{2}\geq 1\mathrm{\,Ge\kern-0.80005ptV}^{2}$, thus excluding
most of the normalization events in these samples.
Figure 9 shows the $m_{\rm miss}^{2}$ and $|\boldsymbol{p}^{*}_{\ell}|$
projections of the fit to the four $D^{(*)}\pi^{0}\ell$ samples. The narrow
$m_{\rm miss}^{2}$ peak is described well by the fit. It tightly constrains
contributions from $B\rightarrow D^{(*)}\pi\ell\nu$ decays, including the
nonresonant $D^{(*)}\pi$ states as well as decays of $D^{**}$ states, narrow
or wide. There appears to be a small excess of events in the data for
$1<m_{\rm miss}^{2}<2\,\mathrm{GeV}^{2}$. This might be an
indication of an underestimation of the $D^{**}(\ell/\tau)\nu$ background.
The impact of this effect is assessed as a systematic uncertainty.
Figure 9: (Color online). Comparison of the $m_{\rm miss}^{2}$ and
$|\boldsymbol{p}^{*}_{\ell}|$ distributions of the $D^{(*)}\pi^{0}\ell$
samples (data points) with the projections of the results of the isospin-
unconstrained fit (stacked colored distributions). The region above the dashed
line of the background component corresponds to $B\kern
1.43994pt\overline{\kern-1.43994ptB}{}$ background and the region below
corresponds to continuum.
The fit determines, for each signal decay mode, the number of signal events in
the data sample, $N_{\text{sig}}$, and the corresponding number of
normalization events, $N_{\text{norm}}$. We derive the ratios of branching
fractions as
${\cal
R}(D^{(*)})=\frac{N_{\text{sig}}}{N_{\text{norm}}}\frac{\varepsilon_{\rm
norm}}{\varepsilon_{\rm sig}},$ (26)
where $\varepsilon_{\text{sig}}/\varepsilon_{\text{norm}}$ is the ratio of
efficiencies (including the $\tau^{\pm}$ branching fractions) taken from MC
simulation. These relative efficiencies are larger for ${\cal R}(D)$ than for
${\cal R}(D^{*})$, because the $q^{2}>4\,\mathrm{GeV}^{2}$
requirement rejects a larger fraction of $\overline{B}\rightarrow
D\ell^{-}\overline{\nu}_{\ell}$ decays than of $\overline{B}\rightarrow
D^{*}\ell^{-}\overline{\nu}_{\ell}$ decays, while keeping almost 100% of
$\overline{B}\rightarrow D^{(*)}\tau^{-}\overline{\nu}_{\tau}$ decays.
The results of the fits in terms of the number of events, the efficiency
ratios, and ${\cal R}(D^{(*)})$ are listed in Table 8, for both the standard
and the isospin-constrained fits. Due to the large signal feed-down, there are
significant negative correlations between the fits to the $D\ell$ and
$D^{*}\ell$ samples. The statistical correlations are $-0.59$ for ${\cal
R}(D^{0})$ and ${\cal R}(D^{*0})$, $-0.23$ for ${\cal R}(D^{+})$ and ${\cal
R}(D^{*+})$, and $-0.45$ for ${\cal R}(D)$ and ${\cal R}(D^{*})$.
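As a numerical cross-check of Eq. 26, the snippet below reproduces the isospin-constrained ${\cal R}(D)$ from the corresponding yields and efficiency ratio listed in Table 8.

```python
# Cross-check of Eq. 26 with the isospin-constrained B -> D tau nu numbers
# from Table 8: N_sig = 489, N_norm = 2981, eps_sig/eps_norm = 0.372.
n_sig, n_norm, eff_ratio = 489.0, 2981.0, 0.372
r_d = (n_sig / n_norm) / eff_ratio
print(f"R(D) = {r_d:.3f}")   # ~0.44, consistent with the quoted 0.440
```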
## VII Systematic uncertainties
Table 5 lists the systematic uncertainties considered in this analysis, as
well as their correlations in the measurements of ${\cal R}(D)$ and ${\cal
R}(D^{*})$. We distinguish two kinds of uncertainties that affect the
measurement of ${\cal R}(D^{(*)})$: _additive_ uncertainties, which impact the
signal and background yields and thereby the significance of the results, and
_multiplicative_ uncertainties, which affect the $\varepsilon_{\rm
sig}/\varepsilon_{\rm norm}$ ratios and, thus, do not change the significance.
The limited size of the simulated signal and background samples contributes to
both additive and multiplicative uncertainties.
Table 5: Systematic uncertainties and correlations on ${\cal R}(D^{(*)})$ for the isospin-unconstrained (columns 1–4 and 7–8) and isospin-constrained (columns 5–6 and 9) fits. The total uncertainties and correlations are calculated based on Eq. 27.
 | Fractional uncertainty (%) | Correlation
---|---|---
Source of uncertainty | ${\cal R}(D^{0})$ | ${\cal R}(D^{*0})$ | ${\cal R}(D^{+})$ | ${\cal R}(D^{*+})$ | ${\cal R}(D)$ | ${\cal R}(D^{*})$ | $D^{0}/D^{*0}$ | $D^{+}/D^{*+}$ | $D/D^{*}$
Additive uncertainties | | | | | | | | |
PDFs | | | | | | | | |
MC statistics | 6.5 | 2.9 | 5.7 | 2.7 | 4.4 | 2.0 | $-0.70$ | $-0.34$ | $-0.56$
$\kern 1.79993pt\overline{\kern-1.79993ptB}{}\rightarrow D^{(*)}(\tau^{-}/\ell^{-})\overline{\nu}$ FFs | 0.3 | 0.2 | 0.2 | 0.1 | 0.2 | 0.2 | $-0.52$ | $-0.13$ | $-0.35$
$D^{**}\rightarrow D^{(*)}(\pi^{0}/\pi^{\pm})$ | 0.7 | 0.5 | 0.7 | 0.5 | 0.7 | 0.5 | 0.22 | 0.40 | 0.53
${\cal B}(\kern 1.79993pt\overline{\kern-1.79993ptB}{}\rightarrow D^{**}\ell^{-}\overline{\nu}_{\ell})$ | 1.0 | 0.4 | 1.0 | 0.4 | 0.8 | 0.3 | $-0.63$ | $-0.68$ | $-0.58$
${\cal B}(\kern 1.79993pt\overline{\kern-1.79993ptB}{}\rightarrow D^{**}\tau^{-}\overline{\nu}_{\tau})$ | 1.2 | 2.0 | 2.1 | 1.6 | 1.8 | 1.7 | 1.00 | 1.00 | 1.00
$D^{**}\rightarrow D^{(*)}\pi\pi$ | 2.1 | 2.6 | 2.1 | 2.6 | 2.1 | 2.6 | 0.22 | 0.40 | 0.53
Cross-feed constraints | | | | | | | | |
MC statistics | 2.6 | 0.9 | 2.1 | 0.9 | 2.4 | 1.5 | 0.02 | $-0.02$ | $-0.16$
$f_{D^{**}}$ | 6.2 | 2.6 | 5.3 | 1.8 | 5.0 | 2.0 | 0.22 | 0.40 | 0.53
Feed-up/feed-down | 1.9 | 0.5 | 1.6 | 0.2 | 1.3 | 0.4 | 0.29 | 0.51 | 0.47
Isospin constraints | – | – | – | – | 1.2 | 0.3 | – | – | $-0.60$
Fixed backgrounds | | | | | | | | |
MC statistics | 4.3 | 2.3 | 4.3 | 1.8 | 3.1 | 1.5 | $-0.48$ | $-0.05$ | $-0.30$
Efficiency corrections | 4.8 | 3.0 | 4.5 | 2.3 | 3.9 | 2.3 | $-0.53$ | 0.20 | $-0.28$
Multiplicative uncertainties | | | | | | | | |
MC statistics | 2.3 | 1.4 | 3.0 | 2.2 | 1.8 | 1.2 | 0.00 | 0.00 | 0.00
$\kern 1.79993pt\overline{\kern-1.79993ptB}{}\rightarrow D^{(*)}(\tau^{-}/\ell^{-})\overline{\nu}$ FFs | 1.6 | 0.4 | 1.6 | 0.3 | 1.6 | 0.4 | 0.00 | 0.00 | 0.00
Lepton PID | 0.6 | 0.6 | 0.6 | 0.5 | 0.6 | 0.6 | 1.00 | 1.00 | 1.00
$\pi^{0}$/$\pi^{\pm}$ from $D^{*}\rightarrow D\pi$ | 0.1 | 0.1 | 0.0 | 0.0 | 0.1 | 0.1 | 1.00 | 1.00 | 1.00
Detection/Reconstruction | 0.7 | 0.7 | 0.7 | 0.7 | 0.7 | 0.7 | 1.00 | 1.00 | 1.00
${\cal B}(\tau^{-}\rightarrow\ell^{-}\bar{\nu}_{\ell}\nu_{\tau})$ | 0.2 | 0.2 | 0.2 | 0.2 | 0.2 | 0.2 | 1.00 | 1.00 | 1.00
Total syst. uncertainty | 12.2 | 6.7 | 11.4 | 6.0 | 9.6 | 5.5 | $-0.21$ | 0.10 | 0.05
Total stat. uncertainty | 19.2 | 9.8 | 18.0 | 11.0 | 13.1 | 7.1 | $-0.59$ | $-0.23$ | $-0.45$
Total uncertainty | 22.7 | 11.9 | 21.3 | 12.5 | 16.2 | 9.0 | $-0.48$ | $-0.15$ | $-0.27$
### VII.1 Additive Uncertainties
Additive uncertainties affect the results of the fit. To assess their impact,
we vary the source of uncertainty 1000 times following a given distribution,
and repeat the fit for each variation. We adopt as the uncertainty the
standard deviation of the distribution of the resulting ${\cal R}(D^{(*)})$ values.
From this ensemble of fits, we also estimate the correlation between the
uncertainties of ${\cal R}(D)$ and ${\cal R}(D^{*})$.
#### VII.1.1 PDF Estimation
##### MC statistics:
We employ a bootstrap algorithm Bowman and Azzalini (1997) to estimate the
uncertainty due to the limited size of the simulated event samples on which we
base the 56 PDFs. We generate 1000 samples of simulated events by sampling the
original MC sample with replacement Roe (2001). The PDFs are recalculated with
each bootstrapped sample, and the fit is repeated for each set of PDFs. Figure
6 shows the $1\sigma$ and $2\sigma$ bands for the projections of five selected
PDFs. The impact on the final result is 4.4% for ${\cal R}(D)$ and 2.0% for
${\cal R}(D^{*})$.
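A minimal sketch of such a bootstrap is shown below: the simulated sample is resampled with replacement, the kernel PDF is rebuilt, and the spread of a derived quantity is taken as its uncertainty. The one-dimensional toy template and the choice of 200 replicas are assumptions of the sketch, not the 1000 variations of 56 two-dimensional PDFs used in the analysis.

```python
# Bootstrap sketch: resample the simulated events with replacement, rebuild a
# (here 1-D) kernel PDF, and take the spread of its value at a reference point
# as the uncertainty due to limited MC statistics. Toy inputs only.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(4)
mc_events = rng.normal(0.0, 1.0, 5000)   # stand-in for a simulated template

def pdf_at_zero(template_events):
    return gaussian_kde(template_events)(np.array([0.0]))[0]

values = []
for _ in range(200):                     # the paper uses 1000 variations
    resampled = rng.choice(mc_events, size=mc_events.size, replace=True)
    values.append(pdf_at_zero(resampled))
print("bootstrap uncertainty on the PDF at 0:", np.std(values))
```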
##### Form factors for $\kern
1.79993pt\overline{\kern-1.79993ptB}{}\rightarrow
D^{(*)}(\tau^{-}/\ell^{-})\overline{\nu}$:
We estimate the impact on the signal and normalization PDFs due to the
uncertainties on the FF parameters, $\rho^{2}_{D}$, $\Delta$,
$\rho^{2}_{D^{*}}$, $R_{0}(1)$, $R_{1}(1)$, and $R_{2}(1)$, taking into
account their uncertainties and correlations. We recalculate the
$D^{(*)}\tau\nu$ and $D^{(*)}\ell\nu$ PDFs with each set of 1000 Gaussian
variations of the parameter values, and repeat the fit with each set of PDFs
to determine the impact on ${\cal R}(D^{(*)})$.
##### $D^{**}\rightarrow D^{(*)}(\pi^{0}/\pi^{\pm})$ fraction:
The simulation of $D^{**}(\ell/\tau)\nu$ decays only includes the two-body
decays $D^{**}\rightarrow D^{(*)}\pi$ of the four $L=1$ charm meson states.
The ratio of $D^{**}\rightarrow D^{(*)}\pi^{0}$ decays to $D^{**}\rightarrow
D^{(*)}\pi^{\pm}$ decays, which is fixed by isospin relations, has a significant
impact on the PDFs, because $D^{**}\rightarrow D^{(*)}\pi^{0}$ decays result
in a sharply peaked $m_{\rm miss}^{2}$ distribution for the
$D^{(*)}\pi^{0}\ell$ samples. The measured uncertainty on the $\pi^{0}$
detection efficiency is 3%. We assign a 4% uncertainty to the probability that
a low-momentum charged pion from $D^{**}\rightarrow D^{(*)}\pi^{\pm}$ decays
is misassigned to the $B_{\rm tag}$ decay. Combining these two uncertainties,
we arrive at a 5% uncertainty on the relative proportion of the two-body decays
of the $D^{**}$. We repeat the fit increasing and decreasing this ratio by
5%, and adopt the largest variation of the isospin-constrained fit results as
the systematic uncertainty.
##### $\kern 1.79993pt\overline{\kern-1.79993ptB}{}\rightarrow
D^{**}\ell^{-}\overline{\nu}_{\ell}$ branching fractions:
Since decays to the four $D^{**}$ states are combined in the $\kern
1.79993pt\overline{\kern-1.79993ptB}{}\rightarrow
D^{(*)}(\tau^{-}/\ell^{-})\overline{\nu}$ samples, the PDFs depend on the
relative $\kern 1.79993pt\overline{\kern-1.79993ptB}{}\rightarrow
D^{**}\ell^{-}\overline{\nu}_{\ell}$ branching fractions for the four $L=1$
states Amhis _et al._ (2012). The impact of the branching fraction
uncertainties is assessed by recalculating the $\kern
1.79993pt\overline{\kern-1.79993ptB}{}\rightarrow
D^{(*)}(\tau^{-}/\ell^{-})\overline{\nu}$ PDFs and adopting the variation of
the fit results from the ensemble of PDFs as the uncertainty.
##### $\overline{B}\rightarrow D^{**}(\tau^{-}/\ell^{-})\overline{\nu}$ branching fractions:
As noted above, the sharp peak in the $m_{\rm miss}^{2}$ distribution of the
$D^{(*)}\pi^{0}\ell$ samples constrains contributions from $B\rightarrow
D^{(*)}\pi\ell\nu$ decays. Events with additional unreconstructed particles
contribute to the tail of the $m_{\rm miss}^{2}$ distribution and, thus, are
more difficult to separate from other backgrounds and signal events. This is
the case for $\overline{B}\rightarrow
D^{**}\tau^{-}\overline{\nu}_{\tau}$ decays, which are combined with $\overline{B}\rightarrow
D^{**}\ell^{-}\overline{\nu}_{\ell}$ decays in the $D^{**}(\ell/\tau)\nu$ PDFs
with the relative proportion ${\cal R}(D^{**})_{\rm PS}=0.18$. This value has
been derived from the ratio of the available phase space. The same estimate
applied to $\overline{B}\rightarrow
D^{(*)}\ell^{-}\overline{\nu}_{\ell}$ decays results in ${\cal R}(D)_{\rm
PS}=0.279$ and ${\cal R}(D^{*})_{\rm PS}=0.251$, values that the measured
ratios exceed by 58% and 32%, respectively. Taking this comparison as guidance
for the uncertainty on ${\cal R}(D^{**})$, we increase ${\cal R}(D^{**})$ by 50%,
recalculate the $D^{**}(\ell/\tau)\nu$ PDFs, and repeat the fit. As a result,
the values of ${\cal R}(D)$ and ${\cal R}(D^{*})$ decrease by 1.8% and 1.7%,
respectively. The impact is relatively small, because $\overline{B}\rightarrow
D^{**}\tau^{-}\overline{\nu}_{\tau}$ contributions are small with respect to
signal decays, which have much higher reconstruction efficiencies.
##### Unmeasured $B\rightarrow D^{**}(\rightarrow
D^{(*)}\pi\pi)\ell\nu_{\ell}$ decays:
To assess the impact of other potential $\kern
1.79993pt\overline{\kern-1.79993ptB}{}\rightarrow
D^{**}\ell^{-}\overline{\nu}_{\ell}$ contributions, we modify the standard fit
by adding an additional component. Out of the four contributions listed in
Table 6, the three-body decays of the $D^{**}$ states with $L=1$ give the best
agreement in the fits to the $D^{(*)}\pi^{0}\ell$ samples. For this decay
chain, the $m_{\rm miss}^{2}$ distribution has a long tail due to an
additional undetected pion. This could account for some of the observed excess
at $1<m_{\rm miss}^{2}<2\mathrm{\,Ge\kern-1.00006ptV}^{2}$ in Fig. 9. We
assign the observed change in ${\cal R}(D^{(*)})$ as a systematic uncertainty.
Table 6: Additional $\overline{B}\rightarrow D^{**}\ell^{-}\overline{\nu}_{\ell}$ decays and the MC model implemented for their decays. The fourth decay mode refers to the three-body decays of the four $L=1$ $D^{**}$ states.
Decay | Decay model
---|---
Non-resonant $B\rightarrow D^{(*)}\pi\ell\nu_{\ell}$ | Goity-Roberts Goity and Roberts (1995)
Non-resonant $B\rightarrow D^{(*)}\pi\pi\ell\nu_{\ell}$ | Phase Space
$B\rightarrow D^{(*)}\eta\ell\nu_{\ell}$ | Phase Space
$B\rightarrow D^{**}(\rightarrow D^{(*)}\pi\pi)\ell\nu_{\ell}$ | ISGW2 Scora and Isgur (1995)
#### VII.1.2 Cross-feed Constraints
##### MC statistics:
Constraints on the efficiency ratios that link contributions from the same
source are taken from MC simulation. The impact of their statistical
uncertainty is assessed by varying the simulated event yields assuming Poisson
errors.
##### The ratios $f_{D^{**}}$:
We assess the uncertainty on $f_{D^{**}}$, the constraints linking the
$D^{**}(\ell/\tau)\nu$ yields in the $D^{(*)}\ell$ and $D^{(*)}\pi^{0}\ell$
samples, by estimating the relative efficiencies of the selection criteria
that differ in the two samples. The main differences in the selection of these
samples are due to differences in the $D^{(*)}\ell$ and $D^{(*)}\pi^{0}\ell$
BDTs.
In the $D^{(*)}\ell$ samples, we observed that differences between data and
simulation cause a 5%-10% underestimation of the continuum and $B\kern
1.79993pt\overline{\kern-1.79993ptB}{}$ backgrounds after the BDT requirements
are applied. Since the $D^{**}(\ell/\tau)\nu$ contributions have similar
$E_{\rm extra}$ distributions, and these distributions are the key inputs to
the BDTs, we applied the same 5%-10% corrections to these contributions. We
conservatively assign 100% of this correction as the systematic uncertainty on
the $D^{**}(\ell/\tau)\nu$ efficiency in the $D^{(*)}\ell$ samples.
Since $\kern 1.79993pt\overline{\kern-1.79993ptB}{}\rightarrow
D^{**}(\tau^{-}/\ell^{-})\overline{\nu}$ decays are difficult to isolate in
samples other than the $D^{(*)}\pi^{0}\ell$ control samples, we estimate the
uncertainty on the $D^{**}(\ell/\tau)\nu$ efficiency due to the
$D^{(*)}\pi^{0}\ell$ BDT selection by relying on the observed data-MC
difference of the BDT selection efficiency for the $D^{(*)}\ell\nu$ sample. We
assign the full 8.5% overestimate of the $D^{(*)}\ell\nu$ contribution as the
systematic uncertainty on the $D^{**}(\ell/\tau)\nu$ efficiency in the
$D^{(*)}\pi^{0}\ell$ samples.
The $f_{D^{**}}$ constraints also depend on the relative branching fractions
of the four $\kern 1.79993pt\overline{\kern-1.79993ptB}{}\rightarrow
D^{**}\ell^{-}\overline{\nu}_{\ell}$ decays that are combined in the
$D^{**}(\ell/\tau)\nu$ contributions. We estimate their impact on $f_{D^{**}}$
from the branching fraction variations observed in the evaluation of the PDF
uncertainty. The largest standard deviation for the four $f_{D^{**}}$
distributions is 1.8%.
By adding the uncertainties on $f_{D^{**}}$ described above in quadrature, we
obtain total uncertainties of 13.2% for the $D$ samples, and 10.0% for the
$D^{*}$ samples. Given that there are similarities between the BDT selections
applied to the $D$ and $D^{*}$ samples, we adopt a 50% correlation between
their uncertainties. With these uncertainties and correlations, we derive the
total impact on the results, 5.0% for ${\cal R}(D)$ and 2.0% for ${\cal
R}(D^{*})$.
##### Feed-down constraints:
The feed-down constraints of the signal yields are corrected as part of the
iteration of the fit. The uncertainties on these corrections are given by the
statistical uncertainty on the ratios of the fitted $D^{*}\ell\nu\Rightarrow
D^{*}\ell$ and $D^{*}\ell\nu\Rightarrow D\ell$ yields. They are 2.4% and 4.4%
on the $D^{*0}\tau\nu$ and $D^{*+}\tau\nu$ feed-down constraints,
respectively.
##### Feed-up constraints:
We estimate the uncertainty on the $D\tau\nu$ and $D\ell\nu$ feed-up
constraints as 100% of the corrections on the feed-down constraints. This
results in 6.8% on the $D^{0}(\ell/\tau)\nu$ feed-up and 9.9% on the
$D^{+}(\ell/\tau)\nu$ feed-up. These two effects combined lead to an
uncertainty of 1.3% on ${\cal R}(D)$ and 0.4% on ${\cal R}(D^{*})$.
##### Isospin constraints:
In the isospin-constrained fit, we employ five additional constraints to link
the signal and normalization yields of the samples corresponding to $B^{-}$
and $B^{0}$ decays. Since we reweight these contributions with the $q^{2}\leq
4\mathrm{\,Ge\kern-1.00006ptV}^{2}$ control sample, the uncertainty on the
isospin constraints is given by the statistical uncertainty on the ratios of
the $q^{2}\leq 4\mathrm{\,Ge\kern-1.00006ptV}^{2}$ yields. This uncertainty is
3.4% in the $D\ell$ samples and 3.6% in the $D^{*}\ell$ samples. This
translates into uncertainties of 1.2% on ${\cal R}(D)$ and 0.3% on ${\cal
R}(D^{*})$.
#### VII.1.3 Fixed Background Contributions
##### MC statistics:
The yields of the continuum, $B\overline{B}$,
and cross-feed backgrounds are fixed in the fit. The uncertainty due to the
limited size of the MC samples is estimated by generating Poisson variations of
these yields, and repeating the fit with each set of values. A significant
part of this uncertainty is due to the continuum yields, since the size of the
simulated continuum sample is equivalent to only twice the data sample.
##### Efficiency corrections:
To account for the correlations among the various corrections applied to the
continuum and $B\kern 1.79993pt\overline{\kern-1.79993ptB}{}$ backgrounds, we
follow this multi-step procedure:
* •
We vary the continuum corrections within their statistical uncertainties of
3%–9% , given by the number of events in the off-peak data control samples.
* •
The branching fractions of the most abundant decays in the $B\kern
1.79993pt\overline{\kern-1.79993ptB}{}$ background are varied within their
uncertainties Nakamura _et al._ (2010).
* •
The $B\kern 1.79993pt\overline{\kern-1.79993ptB}{}$ correction is reestimated
in the high $E_{\rm extra}$ control sample, and varied within the statistical
uncertainty of 1.9%.
* •
The BDT bias corrections are reestimated in the $m_{\rm ES}$ sideband, and
varied within their statistical uncertainties, 2.1% in the $D\ell$ samples and
3.6% in the $D^{*}\ell$ samples.
* •
The $B\kern 1.79993pt\overline{\kern-1.79993ptB}{}$ background PDFs are
recalculated.
* •
The fit is repeated for each set of PDF and yield variations.
Table 7 shows the size of the continuum and $B\kern
1.79993pt\overline{\kern-1.79993ptB}{}$ backgrounds and their uncertainties
due to the limited size of the MC samples and the various corrections
implemented by comparisons with control samples.
Table 7: Continuum and other $B\overline{B}$ background yields; the first uncertainty is due to MC statistics, the second to efficiency corrections, and $\sigma$ refers to the total uncertainty.
Sample | Continuum | $\sigma$ (%) | $B\overline{B}$ | $\sigma$ (%)
---|---|---|---|---
$D^{0}\ell$ | $355\pm 13\pm 12$ | 4.9 | $330\pm 6\pm 17$ | 5.3
$D^{*0}\ell$ | $132\pm 8\pm 6$ | 7.6 | $188\pm 4\pm 10$ | 5.9
$D^{+}\ell$ | $157\pm 9\pm 6$ | 6.9 | $191\pm 5\pm 9$ | 5.5
$D^{*+}\ell$ | $12\pm 3\pm 1$ | 23.6 | $72\pm 3\pm 4$ | 6.9
### VII.2 Multiplicative Uncertainties
##### MC statistics:
The relative efficiency $\varepsilon_{\rm sig}/\varepsilon_{\rm norm}$ is
estimated as the ratio of expected yields, so the limited size of the MC
samples contributes to its uncertainty. We estimate it assuming Poisson errors
on the MC yields.
##### Form factors for $\kern
1.79993pt\overline{\kern-1.79993ptB}{}\rightarrow
D^{(*)}(\tau^{-}/\ell^{-})\overline{\nu}$:
The $q^{2}>4\mathrm{\,Ge\kern-1.00006ptV}^{2}$ requirement introduces some
dependence on the FF parameterization. This uncertainty is assessed based on
the effect of the FF variations calculated for the uncertainty on the PDFs.
##### $\pi^{0}$/$\pi^{\pm}$ from $D^{*}\rightarrow D\pi$:
There is a significant momentum-dependent uncertainty on the reconstruction
efficiency of soft pions originating from $D^{*}\rightarrow D\pi$ decays.
However, the momentum spectra of soft pions in signal and normalization decays
are rather similar, see Fig. 10. As a result, the uncertainty on ${\cal
R}(D^{(*)})$ is less than 0.1%.
##### Detection and Reconstruction:
Given that signal and normalization decays are reconstructed by the same
particles in the final state, many of the uncertainties that impact their
efficiencies cancel in the ratios $\varepsilon_{\rm sig}/\varepsilon_{\rm
norm}$. Uncertainties due to final-state radiation, soft-pion reconstruction,
and others related to the detector performance contribute less than 1%.
Similarly, the tagging efficiency for events with signal and normalization
decays show only very small differences.
##### $\tau^{-}\rightarrow\ell^{-}\bar{\nu}_{\ell}\nu_{\tau}$ branching
fraction:
We use the world averages ${\cal B}(\tau^{-}\rightarrow
e^{-}\overline{\nu}_{e}\nu_{\tau})=(17.83\pm 0.04)\%$ and ${\cal
B}(\tau^{-}\rightarrow\mu^{-}\overline{\nu}_{\mu}\nu_{\tau})=(17.41\pm
0.04)\%$ Nakamura _et al._ (2010).
Figure 10: (Color online). Pion momentum in the laboratory from $B\rightarrow
D^{*+}\ell\nu$ and $B\rightarrow D^{*+}\tau\nu$ decays: (a) $D^{*+}\rightarrow
D^{0}\pi^{+}$, and (b) $D^{*+}\rightarrow D^{+}\pi^{0}$ decays. Histograms are
normalized to 1000 entries.
### VII.3 Correlations
Even though several of the uncertainties listed in Table 5 have the same
source, their impact on ${\cal R}(D^{(*)})$ is largely uncorrelated, i.e., the
correlation between uncertainties in different rows of Table 5 is negligible.
However, the correlation between the uncertainties on ${\cal R}(D)$ and ${\cal
R}(D^{*})$ (different columns) is significant, and important for the
comparison of these measurements with theoretical predictions.
For most of the additive systematic uncertainties, we estimate the correlation
from the two-dimensional ${\cal R}(D)$–${\cal R}(D^{*})$ distribution
resulting from the fit variations. This is not possible for the
$D^{**}\rightarrow D^{(*)}\pi^{0}/\pi^{\pm}$ and $D^{**}\rightarrow
D^{(*)}\pi\pi$ uncertainties. These uncertainties affect the size of the
$D^{**}(\ell/\tau)\nu$ background in the $D^{(*)}\ell$ samples in the same way
as $f_{D^{**}}$ does. Thus, we derive their correlations from the
$f_{D^{**}}$ correlations. Since the signal and $D^{**}\tau\nu$ PDFs are very
similar, we assign a 100% correlation to ${\cal B}(\overline{B}\rightarrow
D^{**}\tau^{-}\overline{\nu}_{\tau})$.
The multiplicative uncertainties on the efficiency due to the MC statistics
are uncorrelated. The FFs for $\kern
1.79993pt\overline{\kern-1.79993ptB}{}\rightarrow
D\ell^{-}\overline{\nu}_{\ell}$ and $\kern
1.79993pt\overline{\kern-1.79993ptB}{}\rightarrow
D^{*}\ell^{-}\overline{\nu}_{\ell}$ decays are measured separately, so their
uncertainties are also not correlated. The uncertainty on ${\cal
B}(\tau^{-}\rightarrow\ell^{-}\bar{\nu}_{\ell}\nu_{\tau})$ affects all
channels equally. We assume that the remaining small uncertainties on the
efficiencies due to detector effects are 100% correlated as well.
The uncertainties and their correlations are listed in Table 5. We combine
these correlations $\rho_{i}$ and the uncertainties by adding their covariance
matrices,
$\sum_{i}\begin{pmatrix}\sigma_{i}^{2}&\rho_{i}\sigma_{i}\sigma_{i}^{*}\\ \rho_{i}\sigma_{i}\sigma_{i}^{*}&\sigma_{i}^{*2}\end{pmatrix}=\begin{pmatrix}\sigma_{\rm tot}^{2}&\rho_{\rm tot}\sigma_{\rm tot}\sigma_{\rm tot}^{*}\\ \rho_{\rm tot}\sigma_{\rm tot}\sigma_{\rm tot}^{*}&\sigma_{\rm tot}^{*2}\end{pmatrix}.$ (27)
Here, $\sigma_{i}$ and $\sigma_{i}^{*}$ refer to the uncertainties on ${\cal
R}(D)$ and ${\cal R}(D^{*})$, respectively.
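The combination in Eq. 27 can be illustrated with a short sketch that sums per-source covariance matrices and extracts the total uncertainties and correlation. The two example entries are taken from Table 5 (the MC statistics of the PDFs and $f_{D^{**}}$, isospin-constrained columns); summing only these two sources is an illustration, not the full combination.

```python
# Sketch of Eq. 27: sum per-source 2x2 covariance matrices for (R(D), R(D*))
# and extract the total uncertainties and correlation. Only two of the
# sources in Table 5 are used here, for illustration.
import numpy as np

def cov(sigma, sigma_star, rho):
    return np.array([[sigma**2, rho * sigma * sigma_star],
                     [rho * sigma * sigma_star, sigma_star**2]])

# MC statistics of the PDFs: (4.4%, 2.0%, -0.56); f_D**: (5.0%, 2.0%, 0.53).
total = cov(0.044, 0.020, -0.56) + cov(0.050, 0.020, 0.53)
sig_tot, sig_star_tot = np.sqrt(np.diag(total))
rho_tot = total[0, 1] / (sig_tot * sig_star_tot)
print(sig_tot, sig_star_tot, rho_tot)
```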
## VIII Stability checks and Kinematic Distributions
### VIII.1 Stability tests
We have checked the stability of the fit results for different data subsamples
and different levels of background suppression.
To look for possible dependence of the results on the data taking periods, we
divide the data sample into four periods corresponding to approximately equal
luminosity, and fit each sample separately. The results are presented in Fig.
11. The eight measurements each for ${\cal R}(D)$ and ${\cal R}(D^{*})$,
separately for $B^{+}$ and $B^{0}$, are compared to the isospin-constrained
fit results obtained from the complete data sample. Based on the values of
$\chi^{2}$ for 7 degrees of freedom, we conclude that the results of these
fits are statistically consistent with the fit to the whole data sample.
A similar test is performed for two samples identified by the final state
lepton, an electron or a muon. This test includes the uncertainties on the
background corrections that affect the electron and muon samples differently.
These uncertainties are statistically dominated and, thus, independent for
both samples. The results are presented in the bottom panels of Fig. 11. The
$\chi^{2}$ tests confirm the stability of these measurements within the
uncertainties.
Figure 11: (Color online). Measurements of ${\cal R}(D)$ and ${\cal R}(D^{*})$
for different data subsamples. Top: for four run periods with statistical
uncertainties only. Bottom: for electrons and muons with statistical and
uncorrelated systematic uncertainties. The vertical bands labeled “SM” and
“All data” mark the SM predictions and the results of the fits to the whole
data sample, respectively.
To assess the sensitivity of the fit results to the purity of the data sample
and the BDT selection, we perform fits for samples selected with different BDT
requirements. We identify each sample by the relative number of events in the
signal region ($m_{\rm miss}^{2}>1\,\mathrm{GeV}^{2}$) with
respect to the nominal sample, which is labeled as the 100% sample. The ratio
of the number of fitted signal events $S$ to the number of background events
$B$ varies from $S/B=1.27$ in the 30% sample, to $S/B=0.27$ in the 300%
sample, while the backgrounds increase by a factor of 18. The BDT bias
correction and the PDFs are recalculated for each sample. Figure 12 shows the
results of fits to the different samples with tighter and looser BDT
requirements. We take into account the large correlations between these nested
samples and conclude that the results are stable for the very large variations
of the BDT requirements.
Figure 12: (Color online). Measurements of ${\cal R}(D)$ and ${\cal R}(D^{*})$
for different BDT requirements, impacting the signal/background ratio. The
horizontal bands mark the ${\cal R}(D)$ and ${\cal R}(D^{*})$ results for the
isospin-constrained fit to the nominal (100%) sample. The data points
represent the results of the fits for $B^{+}$ and $B^{0}$ mesons with their
statistical uncertainties.
### VIII.2 Gaussian Uncertainties
For a maximum-likelihood fit with Gaussian uncertainties, the negative
logarithm of the likelihood is described, up to a constant, by the parabola
$P(Y)=(Y-Y_{\rm fit})^{2}/2\sigma^{2}_{\rm fit}$, where $Y_{\rm fit}$ is the
fitted yield and $\sigma_{\rm fit}$ is the uncertainty on $Y_{\rm fit}$.
Figure 13 compares the likelihood scan of the signal yields for the
isospin-constrained fit with the parabola that results from the fitted yields,
presented in Table 8. There is a slight asymmetry in the likelihood function,
but good agreement overall. Thus, we conclude that the statistical
uncertainties on ${\cal R}(D)$ and ${\cal R}(D^{*})$ may be considered Gaussian.
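As an illustration of this kind of check, the sketch below scans the exact negative log-likelihood of a toy Poisson counting experiment and compares it to the Gaussian parabola; the toy yield is arbitrary and unrelated to the fitted yields.

```python
# Sketch of the Gaussian-uncertainty check: scan the negative log-likelihood
# of a toy Poisson counting experiment around its minimum and compare it to
# the parabola (Y - Y_fit)^2 / (2 sigma_fit^2). Toy numbers only.
import numpy as np

n_obs = 489.0                               # toy observed yield
def delta_nll(y):                           # -ln L(y) + ln L(y_fit) for a Poisson
    return (y - n_obs) - n_obs * np.log(y / n_obs)

y_fit, sigma_fit = n_obs, np.sqrt(n_obs)
for y in [400.0, 450.0, 500.0, 550.0]:
    parabola = (y - y_fit) ** 2 / (2.0 * sigma_fit ** 2)
    print(f"y={y:.0f}  scan={delta_nll(y):.2f}  parabola={parabola:.2f}")
```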
Figure 13: (Color online). Likelihood scan for the two signal yields compared
to a parabola. The dashed lines indicate the number of standard deviations
($n_{\sigma}$) away from the fit result.
Figure 14 shows the effect on ${\cal R}(D)$ and ${\cal R}(D^{*})$ from
variations on $f_{D^{**}}$, the largest source of systematic uncertainty. The
distributions are well described by a Gaussian function. This is also the case
for the other major sources of systematic uncertainty.
Figure 14: (Color online). Histograms: ${\cal R}(D^{(*)})$ distributions
resulting from 1000 variations of $f_{D^{**}}$. Solid curves: Gaussian fits to
the ${\cal R}(D^{(*)})$ distributions.
### VIII.3 Kinematic Distributions
We further study the results of the fit by comparing the kinematic
distributions of data events with the SM expectations. Specifically, we focus
on the signal-enriched region with $m_{\rm
miss}^{2}>1.5\mathrm{\,Ge\kern-1.00006ptV}^{2}$ and scale each component in
the simulation by the results of the fits. To compare the data and MC
distributions we calculate a $\chi^{2}$ per degree of freedom which only
includes the statistical uncertainty of bins with 8 or more events. The number
of degrees of freedom is given by the number of bins minus the number of
fitted signal yields.
Figure 15: (Color online). $E_{\rm extra}$ distributions for events with
$m_{\rm miss}^{2}>1.5\,\mathrm{GeV}^{2}$ scaled to the results
of the isospin-unconstrained (first two columns) and isospin-constrained (last
column) fits. The region above the dashed line of the background component
corresponds to $B\overline{B}$ background and
the region below corresponds to continuum. In the third column, the $B^{0}$
and $B^{+}$ samples are combined, and the normalization and background events
are subtracted.
Figure 16: (Color online). $m_{\rm ES}$ distributions before
(left) and after (center) subtraction of normalization and background events,
and lepton momentum distributions after this subtraction (right) for events
with $m_{\rm miss}^{2}>1.5\,\mathrm{GeV}^{2}$ scaled to the
results of the isospin-constrained fit. The $B^{0}$ and $B^{+}$ samples are
combined. See Fig. 15 for a legend.
Figure 15 shows the $E_{\rm extra}$ distribution of events in the
$D^{(*)}\ell$ samples. This variable is key in the BDT selection and overall
background suppression. There is a clear enhancement of signal events at
$E_{\rm extra}=0$ in all four $D^{(*)}\ell$ samples. The background
contributions, which are significantly more uniform in $E_{\rm extra}$ than
those of signal, appear to be well reproduced. We conclude that the simulation
agrees well with the data distribution.
Figure 16 also shows clear signal enhancements in the $m_{\rm ES}$ and
$|\boldsymbol{p}^{*}_{\ell}|$ distributions of events in the $m_{\rm
miss}^{2}>1.5\mathrm{\,Ge\kern-1.00006ptV}^{2}$ region. The data and
simulation agree well within the limited statistics.
## IX Results
### IX.1 Comparison with SM expectations
Table 8 shows the results of the measurement of ${\cal R}(D)$ and ${\cal
R}(D^{*})$ extracted from the fit without and with isospin constraints linking
$B^{+}$ and $B^{0}$ decays.
Table 8: Results of the isospin-unconstrained (top four rows) and isospin-constrained fits (last two rows). The columns show the signal and normalization yields, the ratio of their efficiencies, ${\cal R}(D^{(*)})$, the signal branching fractions, and $\Sigma_{\rm stat}$ and $\Sigma_{\rm tot}$, the statistical and total significances of the measured signal yields. Where two uncertainties are given, the first is statistical and the second is systematic. The second and third uncertainties on the branching fractions ${\cal B}(\overline{B}\rightarrow D^{(*)}\tau^{-}\overline{\nu}_{\tau})$ correspond to the systematic uncertainties due to ${\cal R}(D^{(*)})$ and ${\cal B}(\overline{B}\rightarrow D^{(*)}\ell^{-}\overline{\nu}_{\ell})$, respectively. The stated branching fractions for the isospin-constrained fit refer to $B^{-}$ decays.
Decay | $N_{\mathrm{sig}}$ | $N_{\mathrm{norm}}$ | $\varepsilon_{\rm sig}/\varepsilon_{\rm norm}$ | ${\cal R}(D^{(*)})$ | ${\cal B}(B\rightarrow D^{(*)}\tau\nu)\,(\%)$ | $\Sigma_{\text{stat}}$ | $\Sigma_{\text{tot}}$
---|---|---|---|---|---|---|---
$B^{-}\rightarrow D^{0}\tau^{-}\overline{\nu}_{\tau}$ | $314\pm 60$ | $1995\pm 55$ | $0.367\pm 0.011$ | $0.429\pm 0.082\pm 0.052$ | $0.99\pm 0.19\pm 0.12\pm 0.04$ | 5.5 | 4.7
$B^{-}\rightarrow D^{*0}\tau^{-}\overline{\nu}_{\tau}$ | $639\pm 62$ | $8766\pm 104$ | $0.227\pm 0.004$ | $0.322\pm 0.032\pm 0.022$ | $1.71\pm 0.17\pm 0.11\pm 0.06$ | 11.3 | 9.4
$\overline{B}{}^{0}\rightarrow D^{+}\tau^{-}\overline{\nu}_{\tau}$ | $177\pm 31$ | $986\pm 35$ | $0.384\pm 0.014$ | $0.469\pm 0.084\pm 0.053$ | $1.01\pm 0.18\pm 0.11\pm 0.04$ | 6.1 | 5.2
$\overline{B}{}^{0}\rightarrow D^{*+}\tau^{-}\overline{\nu}_{\tau}$ | $245\pm 27$ | $3186\pm 61$ | $0.217\pm 0.005$ | $0.355\pm 0.039\pm 0.021$ | $1.74\pm 0.19\pm 0.10\pm 0.06$ | 11.6 | 10.4
$\overline{B}\rightarrow D\tau^{-}\overline{\nu}_{\tau}$ | $489\pm 63$ | $2981\pm 65$ | $0.372\pm 0.010$ | $0.440\pm 0.058\pm 0.042$ | $1.02\pm 0.13\pm 0.10\pm 0.04$ | 8.4 | 6.8
$\overline{B}\rightarrow D^{*}\tau^{-}\overline{\nu}_{\tau}$ | $888\pm 63$ | $11953\pm 122$ | $0.224\pm 0.004$ | $0.332\pm 0.024\pm 0.018$ | $1.76\pm 0.13\pm 0.10\pm 0.06$ | 16.4 | 13.2
The $\kern 1.79993pt\overline{\kern-1.79993ptB}{}\rightarrow
D^{(*)}\tau^{-}\overline{\nu}_{\tau}$ branching fractions are calculated from
the measured values of ${\cal R}(D^{(*)})$,
${\cal B}(\overline{B}{}\rightarrow D^{(*)}\tau^{-}\overline{\nu}_{\tau})={\cal R}(D^{(*)})\times{\cal B}(\overline{B}{}\rightarrow D^{(*)}\ell^{-}\overline{\nu}_{\ell}).$ (28)
For $B^{-}$, we use the average branching fractions measured by BABAR Aubert
_et al._ (2010, 2009, 2008b),
${\cal B}(B^{-}\rightarrow D^{0}\ell^{-}\overline{\nu}_{\ell})=(2.32\pm 0.03\pm 0.08)\%,\qquad{\cal B}(B^{-}\rightarrow D^{*0}\ell^{-}\overline{\nu}_{\ell})=(5.31\pm 0.02\pm 0.19)\%,$
and for $\kern 1.79993pt\overline{\kern-1.79993ptB}{}^{0}$, the corresponding
branching fractions related by isospin.
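As a simple cross-check of Eq. 28, multiplying the central values quoted above gives
${\cal B}(B^{-}\rightarrow D^{0}\tau^{-}\overline{\nu}_{\tau})\approx 0.429\times 2.32\%\approx 1.00\%,\qquad{\cal B}(B^{-}\rightarrow D^{*0}\tau^{-}\overline{\nu}_{\tau})\approx 0.322\times 5.31\%\approx 1.71\%,$
in agreement, up to rounding, with the branching fractions listed in Table 8.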
We estimate the statistical significance of the measured signal branching fractions as $\Sigma_{\text{stat}}=\sqrt{2\Delta(\ln{\cal L})}$, where $\Delta(\ln{\cal L})$ is the increase in log-likelihood of the nominal fit relative to the no-signal hypothesis. The total significance $\Sigma_{\text{tot}}$ is determined as
$\Sigma_{\text{tot}}=\Sigma_{\text{stat}}\frac{\sigma_{\text{stat}}}{\sqrt{\sigma_{\text{stat}}^{2}+\sigma_{\text{asys}}^{2}}}.$ (29)
In this expression, the statistical significance is scaled by the ratio of the statistical uncertainty $\sigma_{\text{stat}}$ to the quadrature sum of $\sigma_{\text{stat}}$ and the additive systematic uncertainty $\sigma_{\text{asys}}$. The significance of the $\overline{B}{}\rightarrow D\tau^{-}\overline{\nu}_{\tau}$ signal is $6.8\sigma$, the first such measurement exceeding $5\sigma$.
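As an illustration of Eq. 29, the short sketch below (not part of the original analysis) recomputes the total significances for the two isospin-constrained channels from the numbers in Table 8, using the quoted systematic uncertainty as a stand-in for the additive systematic uncertainty $\sigma_{\text{asys}}$:

```python
import math

# (Sigma_stat, sigma_stat, sigma_syst) from Table 8, isospin-constrained fits;
# sigma_syst is used here as an approximation of the additive systematic sigma_asys.
channels = {
    "B -> D tau nu":  (8.4, 0.058, 0.042),
    "B -> D* tau nu": (16.4, 0.024, 0.018),
}

for name, (sig_stat, s_stat, s_asys) in channels.items():
    sig_tot = sig_stat * s_stat / math.sqrt(s_stat**2 + s_asys**2)   # Eq. (29)
    print(f"{name}: Sigma_tot ~ {sig_tot:.1f}")
# Gives about 6.8 and 13.1, close to the values 6.8 and 13.2 quoted in Table 8.
```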
We compare the measured ${\cal R}(D^{(*)})$ to the calculations based on the SM,
${\cal R}(D)_{\rm exp}=0.440\pm 0.072,\qquad{\cal R}(D)_{\rm SM}=0.297\pm 0.017,$
${\cal R}(D^{*})_{\rm exp}=0.332\pm 0.030,\qquad{\cal R}(D^{*})_{\rm SM}=0.252\pm 0.003,$
and observe an excess over the SM predictions for ${\cal R}(D)$ and ${\cal
R}(D^{*})$ of $2.0\sigma$ and $2.7\sigma$, respectively. We combine these two
measurements in the following way
$\chi^{2}=\left(\Delta,\Delta^{*}\right)\left(\begin{array}{cc}\sigma^{2}_{\mathrm{exp}}+\sigma^{2}_{\mathrm{th}}&\rho\,\sigma_{\mathrm{exp}}\,\sigma^{*}_{\mathrm{exp}}\\ \rho\,\sigma_{\mathrm{exp}}\,\sigma^{*}_{\mathrm{exp}}&\sigma^{*2}_{\mathrm{exp}}+\sigma^{*2}_{\mathrm{th}}\end{array}\right)^{-1}\left(\begin{array}{c}\Delta\\ \Delta^{*}\end{array}\right),$ (30)
where $\Delta^{(*)}={\cal R}(D^{(*)})_{\rm exp}-{\cal R}(D^{(*)})_{\rm th}$,
and $\rho$ is the total correlation between the two measurements, $\rho({\cal
R}(D),{\cal R}(D^{*}))=-0.27$. Since the total uncertainty is dominated by the
experimental uncertainty, the expression in Eq. 30 is expected to be
distributed as a $\chi^{2}$ distribution for two degrees of freedom. Figure 17
shows this distribution in the ${\cal R}(D)$–${\cal R}(D^{*})$ plane. The
contours are ellipses slightly rotated with respect to the ${\cal
R}(D)$–${\cal R}(D^{*})$ axes, due to the non-zero correlation.
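For illustration, the sketch below (not the analysis code) evaluates Eq. 30 with the rounded central values and uncertainties quoted above; it yields $\chi^{2}\approx 14.4$ and a $p$ value of about $7\times 10^{-4}$, i.e. roughly $3.4\sigma$, consistent with the numbers quoted in the next paragraph.

```python
import numpy as np
from scipy.stats import chi2, norm

# Measured values, SM predictions, and (experimental, theoretical) uncertainties
RD_exp,  sD_exp,  RD_sm,  sD_th  = 0.440, 0.072, 0.297, 0.017
RDs_exp, sDs_exp, RDs_sm, sDs_th = 0.332, 0.030, 0.252, 0.003
rho = -0.27   # total correlation between the two measurements

delta = np.array([RD_exp - RD_sm, RDs_exp - RDs_sm])
cov = np.array([[sD_exp**2 + sD_th**2,   rho * sD_exp * sDs_exp],
                [rho * sD_exp * sDs_exp, sDs_exp**2 + sDs_th**2]])

chisq = float(delta @ np.linalg.inv(cov) @ delta)   # Eq. (30)
p_val = chi2.sf(chisq, df=2)                        # chi^2 with 2 degrees of freedom
n_sigma = norm.isf(p_val / 2)                       # one-dimensional Gaussian convention
print(f"chi2 = {chisq:.1f}, p = {p_val:.1e}, ~{n_sigma:.1f} sigma")
```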
Figure 17: (Color online). Representation of $\chi^{2}$ (Eq. 30) in the ${\cal
R}(D)$–${\cal R}(D^{*})$ plane. The white cross corresponds to the measured
${\cal R}(D^{(*)})$, and the black cross to the SM predictions. The shaded
bands represent one standard deviation each.
For the assumption that ${\cal R}(D^{(*)})_{\rm th}={\cal R}(D^{(*)})_{\rm
SM}$, we obtain $\chi^{2}=14.6$, which corresponds to a probability of
$6.9\times 10^{-4}$. This means that the possibility that the measured ${\cal
R}(D)$ and ${\cal R}(D^{*})$ both agree with the SM predictions is excluded at
the $3.4\sigma$ level. (In this paper, the significance of an observation with probability $p$ is expressed by the number of standard deviations $\sigma$ of a one-dimensional Gaussian function for this probability. The shaded bands in Figs. 17, 21, and 22 correspond to $p$ values of 0.683, 0.955, 0.997, and so on.) Recent calculations Nierste _et al._ (2008); Tanaka and
Watanabe (2010); Bailey _et al._ (2012); Becirevic _et al._ (2012) have
resulted in values of ${\cal R}(D)_{\rm SM}$ that slightly exceed our
estimate. For the largest of those values, the significance of the observed
excess decreases to $3.2\sigma$.
### IX.2 Search for a charged Higgs
To examine whether the excess in ${\cal R}(D^{(*)})$ can be explained by
contributions from a charged Higgs boson in the type II 2HDM, we study the
dependence of the fit results on ${\rm tan}\beta/m_{H^{+}}$.
Figure 18: (Color online). $m_{\rm miss}^{2}$ and
$|\boldsymbol{p}^{*}_{\ell}|$ projections of the $D^{0}\tau\nu\Rightarrow
D^{0}\ell$ PDF for various values of ${\rm tan}\beta/m_{H^{+}}$. Figure 19:
(Color online). Left: Variation of the $\kern
1.43994pt\overline{\kern-1.43994ptB}{}\rightarrow
D\tau^{-}\overline{\nu}_{\tau}$ (top) and $\kern
1.43994pt\overline{\kern-1.43994ptB}{}\rightarrow
D^{*}\tau^{-}\overline{\nu}_{\tau}$ (bottom) efficiency in the 2HDM with
respect to the SM efficiency. The band indicates the increase in statistical uncertainty with respect to the SM value. Right: Variation of the fitted
$\kern 1.43994pt\overline{\kern-1.43994ptB}{}\rightarrow
D\tau^{-}\overline{\nu}_{\tau}$ (top) and $\kern
1.43994pt\overline{\kern-1.43994ptB}{}\rightarrow
D^{*}\tau^{-}\overline{\nu}_{\tau}$ (bottom) yields as a function of ${\rm
tan}\beta/m_{H^{+}}$. The band indicates the statistical uncertainty of the
fit.
For 20 values of ${\rm tan}\beta/m_{H^{+}}$, equally spaced in the
$[0.05,1.00]\mathrm{\,Ge\kern-1.00006ptV}^{-1}$ range, we recalculate the
eight signal PDFs, accounting for the charged Higgs contributions as described
in Sec. II. Figure 18 shows the $m_{\rm miss}^{2}$ and
$|\boldsymbol{p}^{*}_{\ell}|$ projections of the $D^{0}\tau\nu\Rightarrow
D^{0}\ell$ PDF for four values of ${\rm tan}\beta/m_{H^{+}}$. The impact of
charged Higgs contributions on the $m_{\rm miss}^{2}$ distribution mirrors
those in the $q^{2}$ distribution, see Fig. 3, because of the relation
$m_{\rm miss}^{2}=\left(p_{e^{+}e^{-}}-p_{B_{\rm
tag}}-p_{D^{(*)}}-p_{\ell}\right)^{2}=\left(q-p_{\ell}\right)^{2}.$
The changes in the $|\boldsymbol{p}^{*}_{\ell}|$ distribution are due to the
change in the $\tau$ polarization.
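The relation between $m_{\rm miss}^{2}$ and $q^{2}$ above can be made explicit by expanding the square; neglecting the small lepton mass,
$m_{\rm miss}^{2}=\left(q-p_{\ell}\right)^{2}=q^{2}-2\,q\cdot p_{\ell}+m_{\ell}^{2}\approx q^{2}-2\,q\cdot p_{\ell},$
so a shift of the signal $q^{2}$ spectrum translates directly into a shift of the $m_{\rm miss}^{2}$ distribution.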
We recalculate the value of the efficiency ratio $\varepsilon_{\rm
sig}/\varepsilon_{\rm norm}$ as a function of ${\rm tan}\beta/m_{H^{+}}$ (see
Fig. 19). The efficiency increases by up to 8% for large values of ${\rm
tan}\beta/m_{H^{+}}$, and, as we noted earlier, its uncertainty increases due
to the larger dispersion of the weights in the 2HDM reweighting.
The variation of the fitted signal yields as a function of ${\rm
tan}\beta/m_{H^{+}}$ is also shown in Fig. 19. The sharp drop in the $\kern
1.79993pt\overline{\kern-1.79993ptB}{}\rightarrow
D\tau^{-}\overline{\nu}_{\tau}$ yield at ${\rm tan}\beta/m_{H^{+}}\approx
0.4\mathrm{\,Ge\kern-1.00006ptV}^{-1}$ is due to the large shift in the
$m_{\rm miss}^{2}$ distribution which occurs when the Higgs contribution
begins to dominate the total rate. This shift is also reflected in the $q^{2}$
distribution and, as we will see in the next section, the data do not support
it. The change of the $\kern 1.79993pt\overline{\kern-1.79993ptB}{}\rightarrow
D^{*}\tau^{-}\overline{\nu}_{\tau}$ yield, mostly caused by the correlation
with the $\kern 1.79993pt\overline{\kern-1.79993ptB}{}\rightarrow
D\tau^{-}\overline{\nu}_{\tau}$ sample, is much smaller.
Figure 20 compares the measured values of ${\cal R}(D)$ and ${\cal R}(D^{*})$
in the context of the type II 2HDM to the theoretical predictions as a
function of ${\rm tan}\beta/m_{H^{+}}$. The increase in the uncertainty on the
signal PDFs and the efficiency ratio as a function of ${\rm
tan}\beta/m_{H^{+}}$ is taken into account. Other sources of systematic
uncertainty are kept constant in relative terms.
The measured values of ${\cal R}(D)$ and ${\cal R}(D^{*})$ match the
predictions of this particular Higgs model for ${\rm
tan}\beta/m_{H^{+}}=0.44\pm 0.02\mathrm{\,Ge\kern-1.00006ptV}^{-1}$ and ${\rm
tan}\beta/m_{H^{+}}=0.75\pm 0.04\mathrm{\,Ge\kern-1.00006ptV}^{-1}$,
respectively. However, the combination of ${\cal R}(D)$ and ${\cal R}(D^{*})$
excludes the type II 2HDM charged Higgs boson at 99.8% confidence level for
any value of ${\rm tan}\beta/m_{H^{+}}$, as illustrated in Fig. 21. This
calculation is only valid for values of $m_{H^{+}}$ greater than
$15\mathrm{\,Ge\kern-1.00006ptV}$ Tanaka (1995); Tanaka and Watanabe (2010).
The region for $m_{H^{+}}\leq 15\mathrm{\,Ge\kern-1.00006ptV}$ has already
been excluded by $B\rightarrow X_{s}\gamma$ measurements Misiak _et al._
(2007), and therefore, the type II 2HDM is excluded in the full ${\rm
tan}\beta$–$m_{H^{+}}$ parameter space.
Figure 20: (Color online). Comparison of the results of this analysis (light
band, blue) with predictions that include a charged Higgs boson of type II
2HDM (dark band, red). The widths of the two bands represent the
uncertainties. The SM corresponds to ${\rm tan}\beta/m_{H^{+}}=0$. Figure 21:
(Color online). Level of disagreement between this measurement of ${\cal
R}(D^{(*)})$ and the type II 2HDM predictions for all values in the ${\rm
tan}\beta$–$m_{H^{+}}$ parameter space.
The excess in both ${\cal R}(D)$ and ${\cal R}(D^{*})$ can be explained in
more general charged Higgs models Datta _et al._ (2012); Fajfer _et al._
(2012b); Crivellin _et al._ (2012); Becirevic _et al._ (2012). The effective
Hamiltonian for a type III 2HDM is
$\displaystyle{\cal H}_{\rm eff}=$
$\displaystyle\frac{4G_{F}V_{cb}}{\sqrt{2}}\Bigl{[}(\overline{c}\gamma_{\mu}P_{L}b)\,(\overline{\tau}\gamma^{\mu}P_{L}\nu_{\tau})$
$\displaystyle+S_{L}(\overline{c}P_{L}b)\,(\overline{\tau}P_{L}\nu_{\tau})+S_{R}(\overline{c}P_{R}b)\,(\overline{\tau}P_{L}\nu_{\tau})\Bigr{]},$
(31)
where $S_{L}$ and $S_{R}$ are independent complex parameters, and
$P_{L,R}\equiv(1\mp\gamma_{5})/2$. This Hamiltonian describes the most general
type of 2HDM for which $m_{H^{+}}^{2}\gg q^{2}$.
In this context, the ratios ${\cal R}(D^{(*)})$ take the form
${\cal R}(D)={\cal R}(D)_{\rm SM}+A^{\prime}_{D}\,{\rm Re}(S_{R}+S_{L})+B^{\prime}_{D}\,|S_{R}+S_{L}|^{2},$
${\cal R}(D^{*})={\cal R}(D^{*})_{\rm SM}+A^{\prime}_{D^{*}}\,{\rm Re}(S_{R}-S_{L})+B^{\prime}_{D^{*}}\,|S_{R}-S_{L}|^{2}.$
The sign difference arises because $\kern
1.79993pt\overline{\kern-1.79993ptB}{}\rightarrow
D\tau^{-}\overline{\nu}_{\tau}$ decays probe scalar operators, while $\kern
1.79993pt\overline{\kern-1.79993ptB}{}\rightarrow
D^{*}\tau^{-}\overline{\nu}_{\tau}$ decays are sensitive to pseudo-scalar
operators.
Figure 22: (Color online). Favored regions for real values of the type III
2HDM parameters $S_{R}$ and $S_{L}$ given by the measured values of ${\cal
R}(D^{(*)})$. The bottom two solutions are excluded by the measured $q^{2}$
spectra.
The type II 2HDM corresponds to the subset of the type III 2HDM parameter
space for which $S_{R}=-m_{b}m_{\tau}{\rm tan}^{2}\beta/m_{H^{+}}^{2}$ and
$S_{L}=0$.
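As a rough numerical illustration of this correspondence (not taken from the original analysis), the sketch below evaluates $S_{R}$ over the scanned ${\rm tan}\beta/m_{H^{+}}$ range, assuming $m_{b}\approx 4.18\mathrm{\,GeV}$ and $m_{\tau}\approx 1.78\mathrm{\,GeV}$:

```python
# Type II 2HDM point in the type III parameter space:
# S_R = -m_b * m_tau * (tan(beta)/m_H+)^2 and S_L = 0.
# The quark and lepton masses below are assumptions used only for illustration.
m_b, m_tau = 4.18, 1.78   # GeV

def S_R(tanb_over_mH):    # tan(beta)/m_H+ in GeV^-1
    return -m_b * m_tau * tanb_over_mH**2

for x in (0.05, 0.30, 0.45, 1.00):
    print(f"tan(beta)/m_H+ = {x:.2f} GeV^-1  ->  S_R = {S_R(x):+.2f}")
# Approximately -0.02, -0.67, -1.51, and -7.44, spanning the range [-7.4, 0]
# referenced in the next paragraph; tan(beta)/m_H+ ~ 0.45 GeV^-1 gives S_R ~ -1.5.
```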
The ${\cal R}(D^{(*)})$ measurements in the type II 2HDM context correspond to
values of $S_{R}\pm S_{L}$ in the range $[-7.4,0]$. Given that the amplitude
impacted by NP contributions takes the form
$|H_{s}(S_{R}\pm S_{L};q^{2})|\propto|1+(S_{R}\pm S_{L})\times F(q^{2})|,$
(32)
we can extend the type II results to the full type III parameter space by
using the values of ${\cal R}(D^{(*)})$ obtained with $H_{s}(S_{R}\pm S_{L})$
for $H_{s}(-S_{R}\mp S_{L})$. Given the small ${\rm tan}\beta/m_{H^{+}}$
dependence of ${\cal R}(D^{*})$ (Fig. 20), this is a good approximation for
$\kern 1.79993pt\overline{\kern-1.79993ptB}{}\rightarrow
D^{*}\tau^{-}\overline{\nu}_{\tau}$ decays. For $\kern
1.79993pt\overline{\kern-1.79993ptB}{}\rightarrow
D\tau^{-}\overline{\nu}_{\tau}$ decays, this is also true when the decay
amplitude is dominated either by SM or NP contributions, that is, for small or
large values of $|S_{R}+S_{L}|$. The shift in the $m_{\rm miss}^{2}$ and
$q^{2}$ spectra, which results in the 40% drop in the value of ${\cal R}(D)$ shown in Fig. 20, occurs in the intermediate region where SM and NP contributions are comparable. In this region, $H_{s}(S_{R}+S_{L})\neq H_{s}(-S_{R}-S_{L})$, and, as a result, the large drop in ${\cal R}(D)$ is somewhat shifted. However, given that the asymptotic values of ${\cal R}(D)$ are correctly extrapolated, ${\cal R}(D)$ is monotonic, and the measured
value of ${\cal R}(D^{*})$ is fairly constant, the overall picture is well
described by the $H_{s}(S_{R}\pm S_{L})\approx H_{s}(-S_{R}\mp S_{L})$
extrapolation.
Figure 22 shows that for real values of $S_{R}$ and $S_{L}$, there are four
regions in the type III parameter space that can explain the excess in both
${\cal R}(D)$ and ${\cal R}(D^{*})$. In addition, a range of complex values of
the parameters are also compatible with this measurement.
Figure 23: (Color online) Efficiency corrected $q^{2}$ distributions for
$\kern 1.61993pt\overline{\kern-1.61993ptB}{}\rightarrow
D\tau^{-}\overline{\nu}_{\tau}$ (top) and $\kern
1.61993pt\overline{\kern-1.61993ptB}{}\rightarrow
D^{*}\tau^{-}\overline{\nu}_{\tau}$ (bottom) events with $m_{\rm
miss}^{2}>1.5\mathrm{\,Ge\kern-0.90005ptV}^{2}$ scaled to the results of the
isospin-constrained fit. The points and the shaded histograms correspond to
the measured and expected distributions, respectively. Left: SM. Center: ${\rm
tan}\beta/m_{H^{+}}=0.30\mathrm{\,Ge\kern-0.90005ptV}^{-1}$. Right: ${\rm
tan}\beta/m_{H^{+}}=0.45\mathrm{\,Ge\kern-0.90005ptV}^{-1}$. The $B^{0}$ and
$B^{+}$ samples are combined and the normalization and background events are
subtracted. The distributions are normalized to the number of detected events.
The uncertainty on the data points includes the statistical uncertainties of
data and simulation. The values of $\chi^{2}$ are based on this uncertainty.
### IX.3 Study of the $\boldsymbol{q^{2}}$ spectra
As shown in Sec. II.2, the $q^{2}$ spectrum of $\kern
1.79993pt\overline{\kern-1.79993ptB}{}\rightarrow
D\tau^{-}\overline{\nu}_{\tau}$ decays could be significantly impacted by
charged Higgs contributions. Figure 23 compares the $q^{2}$ distribution of
background subtracted data, corrected for detector efficiency, with the
expectations of three different scenarios. Due to the subtraction of the large
$\kern 1.79993pt\overline{\kern-1.79993ptB}{}\rightarrow
D^{*}\tau^{-}\overline{\nu}_{\tau}$ feed-down in the $D\ell$ samples, the
measured $q^{2}$ spectrum of $\kern
1.79993pt\overline{\kern-1.79993ptB}{}\rightarrow
D\tau^{-}\overline{\nu}_{\tau}$ decays depends on the signal hypothesis. This
dependence is very small, however, because the $q^{2}$ spectrum of $\kern
1.79993pt\overline{\kern-1.79993ptB}{}\rightarrow
D^{*}\tau^{-}\overline{\nu}_{\tau}$ decays is largely independent of ${\rm
tan}\beta/m_{H^{+}}$.
The measured $q^{2}$ spectra agree with the SM expectations within the
statistical uncertainties. For $\kern
1.79993pt\overline{\kern-1.79993ptB}{}\rightarrow
D\tau^{-}\overline{\nu}_{\tau}$ decays, there might be a small shift to lower
values, which is indicated by the increase in the $p$ value for ${\rm
tan}\beta/m_{H^{+}}=0.30\mathrm{\,Ge\kern-1.00006ptV}^{-1}$. As we showed in
Sec. II.2, the average $q^{2}$ for ${\rm
tan}\beta/m_{H^{+}}=0.30\mathrm{\,Ge\kern-1.00006ptV}^{-1}$ shifts to lower
values because the charged Higgs contribution to $\kern
1.79993pt\overline{\kern-1.79993ptB}{}\rightarrow
D\tau^{-}\overline{\nu}_{\tau}$ decays, which always proceeds via an $S$-wave,
interferes destructively with the SM $S$-wave. As a result, the decay proceeds
via an almost pure $P$-wave and is suppressed at large $q^{2}$ by a factor of
$p_{D}^{2}$, thus improving the agreement with data. The negative interference
suppresses the expected value of ${\cal R}(D)$ as well, however, so the region
with small ${\rm tan}\beta/m_{H^{+}}$ is excluded by the measured ${\cal
R}(D)$.
The two favored regions in Fig. 22 with $S_{R}+S_{L}\sim-1.5$ correspond to
${\rm tan}\beta/m_{H^{+}}=0.45\mathrm{\,Ge\kern-1.00006ptV}^{-1}$ for $\kern
1.79993pt\overline{\kern-1.79993ptB}{}\rightarrow
D\tau^{-}\overline{\nu}_{\tau}$ decays. However, as we saw in Fig. 3, the
charged Higgs contributions dominate $\kern
1.79993pt\overline{\kern-1.79993ptB}{}\rightarrow
D\tau^{-}\overline{\nu}_{\tau}$ decays for values of ${\rm
tan}\beta/m_{H^{+}}>0.4\mathrm{\,Ge\kern-1.00006ptV}^{-1}$ and the $q^{2}$
spectrum shifts significantly to larger values. The data do not appear to
support this expected shift to larger values of $q^{2}$.
Table 9: Maximum $p$ value for the $q^{2}$ distributions in Fig. 23 corresponding to the variations due to the systematic uncertainties. | $\kern 1.79993pt\overline{\kern-1.79993ptB}{}\rightarrow D\tau^{-}\overline{\nu}_{\tau}$ | $\kern 1.79993pt\overline{\kern-1.79993ptB}{}\rightarrow D^{*}\tau^{-}\overline{\nu}_{\tau}$
---|---|---
SM | 83.1% | 98.8%
${\rm tan}\beta/m_{H^{+}}=0.30\mathrm{\,Ge\kern-1.00006ptV}^{-1}$ | 95.7% | 98.9%
${\rm tan}\beta/m_{H^{+}}=0.45\mathrm{\,Ge\kern-1.00006ptV}^{-1}$ | 0.4% | 97.9%
To quantify the disagreement between the measured and expected $q^{2}$
spectra, we conservatively estimate the systematic uncertainties that impact
the distributions shown in Fig. 23 (Appendix). Within these uncertainties, we
find the variation that minimizes the $\chi^{2}$ value of those distributions.
Table 9 shows that, as expected, the conservative uncertainties give rise to
large $p$ values in most cases. However, the $p$ value is only 0.4% for $\kern
1.79993pt\overline{\kern-1.79993ptB}{}\rightarrow
D\tau^{-}\overline{\nu}_{\tau}$ decays and ${\rm
tan}\beta/m_{H^{+}}=0.45\mathrm{\,Ge\kern-1.00006ptV}^{-1}$. Given that this
value of ${\rm tan}\beta/m_{H^{+}}$ corresponds to $S_{R}+S_{L}\sim-1.5$, we
exclude the two solutions at the bottom of Fig. 22 with a significance of at
least $2.9\sigma$.
The other two solutions corresponding to $S_{R}+S_{L}\sim 0.4$ do not impact
the $q^{2}$ distributions of $\kern
1.79993pt\overline{\kern-1.79993ptB}{}\rightarrow
D\tau^{-}\overline{\nu}_{\tau}$ to the same large degree, and, thus, we cannot
exclude them with the current level of uncertainty. However, these solutions
also shift the $q^{2}$ spectra to larger values due to the $S$-wave
contributions from the charged Higgs boson, so the agreement with the measured
spectra is worse than in the case of the SM. This is also true for any other
solutions corresponding to complex values of $S_{R}$ and $S_{L}$.
On the other hand, contributions to $\kern
1.79993pt\overline{\kern-1.79993ptB}{}\rightarrow
D\tau^{-}\overline{\nu}_{\tau}$ decays proceeding via $P$-wave tend to shift
the expected $q^{2}$ spectra to lower values. Thus, NP processes with spin 1
could simultaneously explain the excess in ${\cal R}(D^{(*)})$ Datta _et al._
(2012); Becirevic _et al._ (2012) and improve the agreement with the measured
$q^{2}$ distributions.
## X Conclusions
In summary, we have measured the ratios ${\cal R}(D^{(*)})={\cal B}(\kern
1.79993pt\overline{\kern-1.79993ptB}{}\rightarrow
D^{(*)}\tau^{-}\overline{\nu}_{\tau})/{\cal B}(\kern
1.79993pt\overline{\kern-1.79993ptB}{}\rightarrow
D^{(*)}\ell^{-}\overline{\nu}_{\ell})$ based on the full BABAR data sample,
resulting in
${\cal R}(D)=0.440\pm 0.058\pm 0.042,\qquad{\cal R}(D^{*})=0.332\pm 0.024\pm 0.018,$
where the first uncertainty is statistical and the second is systematic. These
results supersede the previous BABAR measurements Aubert _et al._ (2008a).
Improvements of the event selection have increased the reconstruction
efficiency of signal events by more than a factor of 3, and the overall
statistical uncertainty has been reduced by more than a factor of 2.
Table 10 shows the results of previous $\kern
1.79993pt\overline{\kern-1.79993ptB}{}\rightarrow
D^{(*)}\tau^{-}\overline{\nu}_{\tau}$ analyses. In 2007 and 2010, the Belle
collaboration measured the absolute $\kern
1.79993pt\overline{\kern-1.79993ptB}{}\rightarrow
D^{(*)}\tau^{-}\overline{\nu}_{\tau}$ branching fractions which we translate
to ${\cal R}(D^{(*)})$ with ${\cal B}(B^{-}\rightarrow
D^{0}\ell^{-}\overline{\nu}_{\ell})=(2.26\pm 0.11)\%$ Nakamura _et al._
(2010) and ${\cal B}(B^{0}\rightarrow
D^{*+}\ell^{-}\overline{\nu}_{\ell})=(4.59\pm 0.26)\%$ Dungel _et al._
(2010). For the translation of ${\cal R}(D^{*})$, we choose Belle’s
measurement of the branching fraction, instead of the world average, because
of the current large spread of measured values. For Belle 2009, we average the
results for $B^{0}$ and $B^{-}$ decays.
The values measured in this analysis are compatible with those measured by the
Belle Collaboration, as illustrated in Fig. 24.
Table 10: Previous measurements of ${\cal R}(D^{(*)})$.
Measurement | ${\cal R}(D)$ | ${\cal R}(D^{*})$
---|---|---
Belle 2007 Matyja _et al._ (2007) | — | $0.44\pm 0.08\pm 0.08$
BABAR 2008 Aubert _et al._ (2008a) | $0.42\pm 0.12\pm 0.05$ | $0.30\pm 0.06\pm 0.02$
Belle 2009 Adachi _et al._ (2009) | $0.59\pm 0.14\pm 0.08$ | $0.47\pm 0.08\pm 0.06$
Belle 2010 Bozek _et al._ (2010) | $0.34\pm 0.10\pm 0.06$ | $0.43\pm 0.06\pm 0.06$
Figure 24: (Color online). Comparison of the previous measurements of ${\cal
R}(D^{(*)})$ with statistical and total uncertainties (Table 10) with this
measurement (BABAR 2012). The vertical bands represent the average of the
previous measurements (light shading) and SM predictions (dark shading),
separately for ${\cal R}(D)$ and ${\cal R}(D^{*})$. The widths of the bands
represent the uncertainties.
The results presented here exceed the SM predictions of ${\cal R}(D)_{\rm
SM}=0.297\pm 0.017$ and ${\cal R}(D^{*})_{\rm SM}=0.252\pm 0.003$ by
$2.0\sigma$ and $2.7\sigma$, respectively. The combined significance of this
disagreement, including the negative correlation between ${\cal R}(D)$ and
${\cal R}(D^{*})$, is $3.4\sigma$. Together with the measurements by the Belle
Collaboration, which also exceed the SM expectations, this could be an
indication of NP processes affecting $\kern
1.79993pt\overline{\kern-1.79993ptB}{}\rightarrow
D^{(*)}\tau^{-}\overline{\nu}_{\tau}$ decays.
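For reference, adding the experimental (statistical plus systematic) and theoretical uncertainties in quadrature for each ratio separately reproduces these individual significances up to rounding of the inputs:
$\dfrac{0.440-0.297}{\sqrt{0.058^{2}+0.042^{2}+0.017^{2}}}\approx 1.9,\qquad\dfrac{0.332-0.252}{\sqrt{0.024^{2}+0.018^{2}+0.003^{2}}}\approx 2.7.$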
These results are not compatible with a charged Higgs boson in the type II
2HDM, and, together with $B\rightarrow X_{s}\gamma$ measurements, exclude this
model in the full ${\rm tan}\beta$–$m_{H^{+}}$ parameter space. More general
charged Higgs models, or NP contributions with nonzero spin, are compatible
with the measurements presented here.
An analysis of the efficiency corrected $q^{2}$ spectra of $\kern
1.79993pt\overline{\kern-1.79993ptB}{}\rightarrow
D\tau^{-}\overline{\nu}_{\tau}$ and $\kern
1.79993pt\overline{\kern-1.79993ptB}{}\rightarrow
D^{*}\tau^{-}\overline{\nu}_{\tau}$ decays shows good agreement with the SM
expectations, within the estimated uncertainties. The combination of the
measured values of ${\cal R}(D^{(*)})$ and the $q^{2}$ spectra exclude a
significant portion of the type III 2HDM parameter space. Charged Higgs
contributions with small scalar terms, $|S_{R}+S_{L}|<1.4$, are compatible
with the measured ${\cal R}(D^{(*)})$ and $q^{2}$ distributions, but NP
contributions with spin 1 are favored by data.
###### Acknowledgements.
The concept for this analysis is to a large degree based on earlier BABAR work
and we acknowledge the guidance provided by M. Mazur. The authors consulted
with theorists A. Datta, S. Westhoff, S. Fajfer, J. Kamenik, and I. Nišandžić
on the calculations of the charged Higgs contributions to the decay rates. We
are grateful for the extraordinary contributions of our PEP-II colleagues in
achieving the excellent luminosity and machine conditions that have made this
work possible. The success of this project also relied critically on the
expertise and dedication of the computing organizations that support BABAR.
The collaborating institutions wish to thank SLAC for its support and the kind
hospitality extended to them. This work is supported by the US Department of
Energy and National Science Foundation, the Natural Sciences and Engineering
Research Council (Canada), the Commissariat à l’Energie Atomique and Institut
National de Physique Nucléaire et de Physique des Particules (France), the
Bundesministerium für Bildung und Forschung and Deutsche
Forschungsgemeinschaft (Germany), the Istituto Nazionale di Fisica Nucleare
(Italy), the Foundation for Fundamental Research on Matter (Netherlands), the
Research Council of Norway, the Ministry of Education and Science of the
Russian Federation, Ministerio de Economía y Competitividad (Spain), and the
Science and Technology Facilities Council (United Kingdom). Individuals have
received support from the Marie-Curie IEF program (European Union) and the A.
P. Sloan Foundation (USA).
Figure 25: (Color online). Assessment of the uncertainties on the $q^{2}$
distributions of background events with $m_{\rm
miss}^{2}>1.5\mathrm{\,Ge\kern-0.90005ptV}^{2}$. Left: results of the isospin-
constrained fit for the SM. Center: sample with $0.5<E_{\rm
extra}<1.2\mathrm{\,Ge\kern-0.90005ptV}$ and $5.27<\mbox{$m_{\rm
ES}$}<5.29\mathrm{\,Ge\kern-0.90005ptV}$. Right: sample satisfying the BDT
requirements in the $5.20<\mbox{$m_{\rm
ES}$}<5.26\mathrm{\,Ge\kern-0.90005ptV}$ region. The data/MC plots show a
fourth order polynomial fit and the total systematic uncertainty considered.
The simulation in the control samples is normalized to the number of events in
data. See Fig. 15 for a legend. Figure 26: (Color online). Left: $q^{2}$
distributions for the different $\kern
1.43994pt\overline{\kern-1.43994ptB}{}\rightarrow
D^{**}(\tau^{-}/\ell^{-})\overline{\nu}$ contributions, all normalized to 100
events. Center: $q^{2}$ distributions for events with $m_{\rm
miss}^{2}<1.5\mathrm{\,Ge\kern-0.80005ptV}^{2}$ scaled to the results of the
isospin-constrained fit for the SM. See Fig. 15 for a legend. Right: $q^{2}$
dependence of the efficiency. The scale for the efficiency of the
normalization decays is chosen so that the maximum value is 1. The efficiency
data for the signal are adjusted so that they overlap with the data for
normalization decays in the central part of the $q^{2}$ range. The signal
efficiencies with and without the $m_{\rm miss}^{2}$ selection have the same
scale.
## APPENDIX: SYSTEMATIC UNCERTAINTIES ON THE $\boldsymbol{q^{2}}$ SPECTRA
To assess the systematic uncertainty on the measured $q^{2}$ distributions of
$\kern 1.79993pt\overline{\kern-1.79993ptB}{}\rightarrow
D^{(*)}\tau^{-}\overline{\nu}_{\tau}$ decays, we examine their sensitivity to
the estimated contributions from background and normalization events. The
$q^{2}$ distributions of signal and the various backgrounds are presented in
Fig. 25 (left). There is good agreement between the data and the background
contributions as derived from the isospin-constrained fit. To further examine
the shape of the fixed contributions from $B\kern
1.79993pt\overline{\kern-1.79993ptB}{}$ and continuum background, we show two
comparisons with data control samples: one for medium values of $E_{\rm
extra}$ in the $m_{\rm ES}$ peak regions without the BDT requirements imposed,
and the other for the $m_{\rm ES}$ sidebands with the BDT requirements. While
the first sample shows excellent agreement over the full $q^{2}$ range, the
smaller second sample shows some deviations at low and high $q^{2}$. We
approximate the deviation of the data from the simulation by a fourth order
polynomial, and we adopt this difference plus the statistical uncertainty of
each bin as the overall uncertainty of the $B\kern
1.79993pt\overline{\kern-1.79993ptB}{}$ and continuum backgrounds. We
conservatively consider it uniformly distributed between the limits of the
band shown in Fig. 25 and uncorrelated between different bins.
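A minimal sketch of this smoothing procedure, with invented data/MC ratios purely for illustration, is given below; the fitted fourth-order polynomial parameterizes the smooth deviation from unity, and the per-bin band is the absolute deviation plus the statistical uncertainty:

```python
import numpy as np

# Hypothetical data/MC ratios per q^2 bin (illustrative numbers only)
q2_centers = np.linspace(4.5, 11.5, 8)        # GeV^2, assumed bin centers
ratio      = np.array([1.10, 1.02, 0.98, 1.00, 0.99, 1.03, 0.95, 1.12])
stat_unc   = np.array([0.06, 0.04, 0.03, 0.03, 0.03, 0.04, 0.05, 0.08])

# Fourth-order polynomial fit to the data/MC ratio
coeffs = np.polyfit(q2_centers, ratio, deg=4)
smooth = np.polyval(coeffs, q2_centers)

# Per-bin uncertainty band: fitted deviation from unity plus the statistical
# uncertainty, treated as uniform within the band and uncorrelated between bins.
band = np.abs(smooth - 1.0) + stat_unc
for q2, b in zip(q2_centers, band):
    print(f"q2 = {q2:4.1f} GeV^2: +/- {b:.3f}")
```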
The systematic uncertainty on the shape of the $q^{2}$ distribution of $\kern
1.79993pt\overline{\kern-1.79993ptB}{}\rightarrow
D^{**}(\tau^{-}/\ell^{-})\overline{\nu}$ decays is estimated by varying the
relative abundance of the contributions shown in Fig. 26. We allow a variation
of ${\cal R}(D^{**})$, the ratio of $\kern
1.79993pt\overline{\kern-1.79993ptB}{}\rightarrow
D^{**}\tau^{-}\overline{\nu}_{\tau}$ decays to $\kern
1.79993pt\overline{\kern-1.79993ptB}{}\rightarrow
D^{**}\ell^{-}\overline{\nu}_{\ell}$ decays, between $-20\%$ and $+50\%$. We
also allow a contribution of up to 30% of $\kern
1.79993pt\overline{\kern-1.79993ptB}{}\rightarrow
D^{**}\ell^{-}\overline{\nu}_{\ell}$ decays with the $D^{**}$ decaying into
$D^{(*)}\pi^{+}\pi^{-}$. In addition, we assume a $\pm 15\%$ variation of the
total $\kern 1.79993pt\overline{\kern-1.79993ptB}{}\rightarrow
D^{**}(\tau^{-}/\ell^{-})\overline{\nu}$ yield.
The $q^{2}$ spectrum of normalization decays, both well reconstructed and
cross-feed $\kern 1.79993pt\overline{\kern-1.79993ptB}{}\rightarrow
D^{(*)}\ell^{-}\overline{\nu}_{\ell}$ decays, is well described by the
simulation, see Fig. 26. Given that the normalization decays are well
understood theoretically, we adopt the statistical uncertainty of the
simulated distributions as the overall uncertainty of this contribution.
Except for $q^{2}<5\mathrm{\,Ge\kern-1.00006ptV}^{2}$, where the rate of
signal decays is highly suppressed, the efficiency and detector effects are
very similar for signal and normalization. Thus, we also derive the overall
uncertainty from the statistical uncertainty of the simulated signal $q^{2}$
distributions.
Since it is not feasible to repeat the $m_{\rm
miss}^{2}$–$|\boldsymbol{p}^{*}_{\ell}|$ fit for each variation of the
background contributions, we adopt the following procedure to account for the
impact of these changes on the $\chi^{2}$: for each of the three $q^{2}$
distributions in Fig. 23 and each variation of the background components, we
determine the $\kern 1.79993pt\overline{\kern-1.79993ptB}{}\rightarrow
D\tau^{-}\overline{\nu}_{\tau}$ and $\kern
1.79993pt\overline{\kern-1.79993ptB}{}\rightarrow
D^{*}\tau^{-}\overline{\nu}_{\tau}$ yields by a fit that minimizes the
$\chi^{2}$ of those distributions.
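Schematically, this per-variation yield determination amounts to a binned $\chi^{2}$ fit in which the two signal yields float while the (varied) background and normalization templates are fixed. The sketch below illustrates the idea with invented template shapes and pseudo-data; it is not the analysis code.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
nbins = 10

# Illustrative normalized q^2 template shapes and a fixed background prediction
t_D     = np.linspace(1.0, 0.2, nbins); t_D     /= t_D.sum()     # B -> D tau nu
t_Dstar = np.linspace(0.3, 1.0, nbins); t_Dstar /= t_Dstar.sum() # B -> D* tau nu
bkg     = np.full(nbins, 20.0)                                   # fixed background

# Pseudo-data generated from assumed true yields
data  = rng.poisson(300.0 * t_D + 500.0 * t_Dstar + bkg)
sigma = np.sqrt(np.maximum(data, 1.0))   # per-bin uncertainty

def chi2(yields):
    n_D, n_Dstar = yields
    expected = n_D * t_D + n_Dstar * t_Dstar + bkg
    return np.sum(((data - expected) / sigma) ** 2)

res = minimize(chi2, x0=[100.0, 100.0], method="Nelder-Mead")
print("fitted yields:", res.x, " chi2_min:", res.fun)
```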
## References
* Heiliger and Sehgal (1989) P. Heiliger and L. Sehgal, Phys. Lett. B 229, 409 (1989).
* Körner and Schuler (1990) J. G. Körner and G. A. Schuler, Z. Phys. C 46, 93 (1990).
* Hwang and Kim (2000) D. S. Hwang and D. W. Kim, Eur. Phys. J. C14, 271 (2000).
* Amhis _et al._ (2012) Y. Amhis _et al._ (Heavy Flavor Averaging Group), (2012), arXiv:hep-ex/1207.1158 [hep-ex] .
* Tanaka (1995) M. Tanaka, Z. Phys. C 67, 321 (1995).
* Itoh _et al._ (2005) H. Itoh, S. Komine, and Y. Okada, Prog. Theor. Phys. 114, 179 (2005), arXiv:hep-ph/0409228 .
* Nierste _et al._ (2008) U. Nierste, S. Trine, and S. Westhoff, Phys. Rev. D 78, 015006 (2008), arXiv:0801.4938 [hep-ph] .
* Tanaka and Watanabe (2010) M. Tanaka and R. Watanabe, Phys. Rev. D 82, 034027 (2010).
* Fajfer _et al._ (2012a) S. Fajfer, J. F. Kamenik, and I. Nišandžić, Phys. Rev. D 85, 094025 (2012a), arXiv:1203.2654 [hep-ph] .
* (10) Throughout this letter, $\ell$ refers only to the light leptons $e$ and $\mu$, $D^{(*)}$ refers to a $D$ or a $D^{*}$ meson, and charge-conjugate decay modes are implied.
* Antonelli _et al._ (2010) M. Antonelli _et al._ , Phys. Rept. 494, 197 (2010), arXiv:0907.5386 [hep-ph] .
* Nakamura _et al._ (2010) K. Nakamura _et al._ (Particle Data Group), J. Phys. G 37, 075021 (2010).
* Matyja _et al._ (2007) A. Matyja _et al._ (Belle Collaboration), Phys. Rev. Lett. 99, 191807 (2007), arXiv:0706.4429 [hep-ex] .
* Aubert _et al._ (2008a) B. Aubert _et al._ (BABAR Collaboration), Phys. Rev. Lett. 100, 021801 (2008a), arXiv:0709.1698 [hep-ex] .
* Adachi _et al._ (2009) I. Adachi _et al._ (Belle Collaboration), (2009), arXiv:0910.4301 [hep-ex] .
* Bozek _et al._ (2010) A. Bozek _et al._ (Belle Collaboration), Phys. Rev. D 82, 072005 (2010), arXiv:1005.2302 [hep-ex] .
* Lees _et al._ (2012) J. Lees _et al._ (BABAR Collaboration), Phys. Rev. Lett. 109, 101802 (2012), arXiv:1205.5442 [hep-ex] .
* Hagiwara _et al._ (1989) K. Hagiwara, A. D. Martin, and M. Wade, Nucl. Phys. B327, 569 (1989).
* Caprini _et al._ (1998) I. Caprini, L. Lellouch, and M. Neubert, Nucl. Phys. B530, 153 (1998), arXiv:hep-ph/9712417 [hep-ph] .
* Kamenik and Mescia (2008) J. F. Kamenik and F. Mescia, Phys. Rev. D 78, 014003 (2008), arXiv:0802.3790 [hep-ph] .
* Xing _et al._ (2008) Z.-Z. Xing, H. Zhang, and S. Zhou, Phys. Rev. D 77, 113016 (2008), arXiv:0712.1419 [hep-ph] .
* Misiak _et al._ (2007) M. Misiak _et al._ , Phys. Rev. Lett. 98, 022002 (2007), arXiv:hep-ph/0609232 [hep-ph] .
* Tsai (1971) Y.-S. Tsai, Phys. Rev. D 4, 2821 (1971), Erratum-ibid. D13 (1976) 771 .
* Aubert _et al._ (2002) B. Aubert _et al._ (BABAR Collaboration), Nucl. Instrum. Meth. A479, 1 (2002), arXiv:hep-ex/0105044 .
* Seeman (2008) J. Seeman, “Last Year of PEP-II B-Factory Operation,” (2008), presented at the 11th European Particle Accelerator Conference (EPAC 2008), Genoa, Italy, 23-28 June 2008.
* Lees _et al._ (2013) J. Lees _et al._ (BABAR Collaboration), (2013), arXiv:1301.2703 [hep-ex] .
* Lange (2001) D. Lange, Nucl. Instrum. Meth. A462, 152 (2001).
* Sjostrand (1994) T. Sjostrand, Comput. Phys. Commun. 82, 74 (1994).
* Agostinelli _et al._ (2003) S. Agostinelli _et al._ (GEANT4), Nucl. Instrum. Meth. A506, 250 (2003).
* Barberio and Was (1994) E. Barberio and Z. Was, Comput.Phys.Commun. 79, 291 (1994).
* Scora and Isgur (1995) D. Scora and N. Isgur, Phys. Rev. D 52, 2783 (1995), arXiv:hep-ph/9503486 .
* Isgur and Wise (1990) N. Isgur and M. B. Wise, Phys. Lett. B 237, 527 (1990).
* Leibovich _et al._ (1998) A. K. Leibovich, Z. Ligeti, I. W. Stewart, and M. B. Wise, Phys. Rev. D 57, 308 (1998), arXiv:hep-ph/9705467 [hep-ph] .
* Speckmayer _et al._ (2010) P. Speckmayer, A. Hocker, J. Stelzer, and H. Voss, J.Phys.Conf.Ser. 219, 032057 (2010).
* Cranmer (2001) K. S. Cranmer, Comput. Phys. Commun. 136, 198 (2001), arXiv:hep-ex/0011057 [hep-ex] .
* Bowman and Azzalini (1997) A. Bowman and A. Azzalini, (1997), Applied Smoothing Techniques for Data Analysis, Clarendon Press, Oxford.
* Roe (2001) B. P. Roe, _Probability and statistics in experimental physics_ , $2^{\rm nd}$ ed. (Springer, 2001) pp. 29–34.
* Goity and Roberts (1995) J. L. Goity and W. Roberts, Phys. Rev. D 51, 3459 (1995), arXiv:hep-ph/9406236 .
* Aubert _et al._ (2010) B. Aubert _et al._ (BABAR Collaboration), Phys. Rev. Lett. 104, 011802 (2010).
* Aubert _et al._ (2009) B. Aubert _et al._ (BABAR Collaboration), Phys. Rev. D 79, 012002 (2009).
* Aubert _et al._ (2008b) B. Aubert _et al._ (BABAR Collaboration), Phys. Rev. D 77, 032002 (2008b).
* Note (1) In this paper, the significance of an observation with probability $p$ is expressed by the number of standard deviations $\sigma$ of a one-dimensional Gaussian function for this probability. The shaded bands in Figs. 17, 21, and 22 correspond to $p$ values of 0.683, 0.955, 0.997 and so on.
* Bailey _et al._ (2012) J. A. Bailey _et al._ , Phys. Rev. Lett. 109, 071802 (2012), arXiv:1206.4992 [hep-ph] .
* Becirevic _et al._ (2012) D. Becirevic, N. Kosnik, and A. Tayduganov, Phys. Lett. B 716, 208 (2012), arXiv:1206.4977 [hep-ph] .
* Datta _et al._ (2012) A. Datta, M. Duraisamy, and D. Ghosh, Phys. Rev. D 86, 034027 (2012), arXiv:1206.3760 [hep-ph] .
* Fajfer _et al._ (2012b) S. Fajfer, J. F. Kamenik, I. Nisandzic, and J. Zupan, Phys. Rev. Lett. 109, 161801 (2012b), arXiv:1206.1872 [hep-ph] .
* Crivellin _et al._ (2012) A. Crivellin, C. Greub, and A. Kokulu, Phys. Rev. D 86, 054014 (2012), arXiv:1206.2634 [hep-ph] .
* Dungel _et al._ (2010) W. Dungel _et al._ (Belle Collaboration), Phys. Rev. D 82, 112007 (2010), arXiv:1010.5620 [hep-ex] .
1303.0601
# Introduction of the CDEX experiment
Ke-Jun KANG1,Jian-Ping CHENG1,Jin LI1, Yuan-Jing LI1, Qian YUE1, Yang
BAI3,Yong BI5, Jian-Ping CHANG1, Nan CHEN1, Ning Chen1, Qing-Hao CHEN1, Yun-
Hua CHEN6, Zhi DENG1,Qiang DU1, Hui GONG1, Xi-Qing HAO1, Hong-Jian HE1, Qing-
Ju HE1, Xin-Hui HU3, Han-Xiong HUANG2, Hao JIANG1, Jian-Min LI1, Xia LI2, Xin-
Ying LI3, Xue-Qian LI3,1, Yu-Lan LI1, Shu-Kui LIU5, Ya-Bin LIU1, Lan-Chun Lü1,
Hao MA1, Jian-Qiang QIN1, Jie REN2, Jing Ren1, Xi-Chao RUAN2, Man-Bin SHEN6,
Jian SU1, Chang-Jian TANG5, Zhen-Yu TANG5, Ji-Min WANG6, Qing
WANG1 (corresponding author; e-mail addresses: [email protected] (Xue-Qian Li), [email protected] (Qing WANG)), Xu-Feng Wang1, Shi-Yong WU6, Yu-
Cheng WU1, Zhong-Zhi Xianyu1, Hao-Yang XING5, Xun-Jie Xu1, Yin XU3, Tao XUE1,
Li-Tao YANG1, Nan YI1, Hao YU1, Chun-Xu YU3, Xiong-Hui ZENG6, Zhi ZENG1, Lan
ZHANG4, Guang-Hua ZHANG6, Ming-Gang ZHAO3, Su-Ning ZHONG3, Jin ZHOU6, Zu-Ying
ZHOU2, Jing-Jun ZHU5, Wei-Bin ZHU4, Xue-Zhou ZHU1, Zhong-Hua ZHU6
(CDEX Collaboration)
1Tsinghua University, Beijing, China 100084
2Institute Of Atomic Energy, Beijing, China, 102413
3Nankai University, Tianjin, China, 300071
4NUCTECH company, Beijing, China, 100084
5Sichuan University, Chengdu, China, 610065
6Yalongjiang Hydropower Development Company, Chengdu, China, 627450
Abstract
Weakly Interacting Massive Particles (WIMPs) are leading candidates for the dark matter in our universe. Up to now, no direct interaction of WIMPs with nuclei has been observed. The exclusion limits on the spin-independent WIMP-nucleon cross section obtained experimentally so far are about $\mathrm{10^{-7}\,pb}$ in the high-mass region and only $\mathrm{10^{-5}\,pb}$ in the low-mass region. The China Jin-Ping underground laboratory (CJPL) is the deepest underground laboratory in the world and provides a very promising environment for the direct observation of dark matter. The China Dark Matter Experiment (CDEX) aims to directly detect the WIMP flux with high sensitivity in the low-mass region. Both CJPL and CDEX have made remarkable progress in the past two years. CDEX employs a point-contact germanium semiconductor detector (PCGe) whose detection threshold is less than 300 eV. We report the measurements of the muon flux, the radioactivity monitoring, and the radon concentration carried out in CJPL, and describe the structure and performance of the 1 kg PCGe detector CDEX-1 and the 10 kg detector array CDEX-10, including the detectors, electronics, shielding, and cooling systems. Finally, we discuss the physics goals of the CDEX-1, CDEX-10, and future CDEX-1T detectors.
## I Introduction
The discovery of dark matter was undoubtedly one of the greatest scientific events of the 20th century; directly searching for dark matter and identifying it will be among the most important and challenging tasks of this century.
In fact, the conjecture of the existence of dark matter was proposed long ago, in 1933, by Zwicky zwicky to explain the anomalously large velocities of test stars in the Coma cluster observed at that time. Astronomical observations show that the rotation curves of test stars in a galaxy would not obey the law of gravitation if only the luminous matter, which clusters at the center of the galaxy, existed. Namely, the velocities of the test stars should fall off as the inverse square root of their distances from the galactic center; instead, the rotation curve turns flat. This implies that there must be some unseen matter in the galaxy, i.e. dark matter. Moreover, further hints of the existence of dark matter appeared when a collision of two galaxy clusters was observed: after the collision, the center of mass of each cluster does not coincide with the center of its luminous matter. The explanation is that each cluster is composed of both dark and luminous matter; in the collision the dark components pass through each other, because they participate in neither the electromagnetic (EM) nor the strong interaction, while the luminous fractions of the two clusters interact via the EM interaction and therefore remain near the collision region as the dark parts move on.
Moreover, astronomical observations confirm that our universe is approximately flat, i.e. the total $\Omega$, defined as $\rho/\rho_{c}$ where $\rho_{c}$ is the critical density and $\rho$ is the matter density of our universe, is close to unity. However, observations also indicate that the fraction of luminous baryonic matter, $\Omega_{m}$, is less than 4%. By fitting the observational data, it is confirmed that over 96% of the content of the universe is dark. Further analysis indicates that dark matter accounts for a fraction of about 24%, while dark energy occupies the remaining 72% or more. Dark energy is the most mysterious subject: so far our understanding of the universe is not sufficient to give a reasonable answer, even though there are many plausible models. By contrast, dark matter may have a particle correspondence.
The commonly accepted point of view v.trimble is that the main fraction of the dark matter in our universe is cold dark matter, i.e. weakly interacting massive particles (WIMPs). The criteria for being a dark matter candidate are that the particle does not participate in the EM interaction (so it is dark!), namely it must be neutral (both electrically and in color), and that it does not possess inner structure; otherwise it may have an intrinsic anomalous magnetic moment and interact with the EM field. Thus, generally, the candidates are elementary particles. The most favored WIMP candidate is the neutralino em , even though one cannot exclude other possibilities at present lars bergstrom ; j.l.feng . For example, He et al. He proposed that a scalar particle, the darkon, can be a possible dark matter candidate which interacts with detector matter via exchange of the Higgs boson. Besides, the technicolor meson beylyaev , asymmetric mirror dark matter an:2010kc , new particles in the little Higgs model gilloz , as well as many other candidates have been predicted by various models. It is interesting that, thanks to the LHC, which has already been operating successfully, one may search for such beyond-standard-model particles at accelerators.
One of the main goals of dark matter detection is to help find and identify the dark matter particle(s). To realize this task, one must design suitable experiments. Detection of dark matter is by no means a trivial job. It can be classified into direct and indirect detection. For the former, one sets up an underground detector and tries to catch the dark matter flux arriving from space. In this scheme it is supposed that the dark matter flux comes from outer space, such as the galactic halo or even the sun, and that the dark matter particles (WIMPs) interact weakly with standard model (SM) matter (mainly quarks). However, we do not have solid knowledge guaranteeing that dark matter particles indeed interact weakly with regular matter as described by gauge theory. It is like a black cat confined in a dark room without light, which we are supposed to catch: even though the chance of catching it is slim, we still have some probability of getting it; but if it does not exist in the room, no matter how hard we strive, we can never succeed. (This story was presented by Prof. T. Han at a Jin-Ping conference on dark matter detection, Sichuan, China, 2011.) Therefore, our project of directly detecting the dark matter flux is based on the hypothesis that dark matter does interact with the detector matter via the weak interaction.
Another line is the indirect detection of dark matter. In that scheme, it is hypothesized that dark matter particles may decay, or may annihilate into SM particles via collisions among themselves. This proposal is motivated by the peculiar phenomena of the $e^{+}$ excess excess and the cosmic-ray energy spectrum tibet observed by earth-based laboratories and satellites.
The China Dark Matter Experiment (CDEX) is designed to directly detect dark matter with a high-purity germanium detector, so later in this work we concentrate on the direct search for dark matter. Unfortunately, the kinetic energy of a dark matter particle, ${1\over 2}m\beta^{2}$, is rather low: generally the velocity of the WIMPs is $200\sim 1000$ km/s and the value of $\beta=v/c$ is about $10^{-3}\sim 10^{-2}$, so for a WIMP of 50 GeV the kinetic energy is only a few tens of keV, which is too small to cause an inelastic transition of the nucleus (not exactly, see the discussion below). When a WIMP hits a nucleus in the detector material, the impact makes the nucleus recoil and the atom may be ionized. When the nucleus recombines with the electron cloud, photons may be radiated, accompanied by electric and thermal signals which can be detected by sensitive detectors. Each detector may be designed to be sensitive to one or a few of these signals and to record them for off-line analysis.
Recently, several groups have reported limited success in detecting a dark matter flux. The DAMA collaboration dama reported the observation of an annual modulation signal, and then CoGeNT cogent observed several cosmogenic peaks, with results consistent with DAMA's. The CDMS group reported observing a few candidate dark matter events Brink , and recently CRESST-II published new observation results cresst ; numerous underground laboratories are working hard to search for dark matter signals. With its special advantages, such as the thickest rock overburden in the world, which shields out most of the cosmic rays, as well as convenient transportation and comfortable working and living conditions, the Jin-Ping underground laboratory provides an ideal environment for dark matter detection, and the new CDEX collaboration has joined the effort for direct dark matter searches. Detailed descriptions of the CDEX project are given in the following sections.
The detection is carried out via collisions between WIMPs and nuclei. To extract any information about the fundamental interaction (SUSY, technicolor, darkon, or even little Higgs, etc.) from the data, much theoretical work and careful analysis must be done. For the theoretical preparation, one has to divide the whole process into three stages. The first is to consider the elementary scattering between WIMPs and the quarks or gluons inside the nucleons; from this we derive an effective Hamiltonian describing elastic scattering between the DM particle and the nucleon. The subsequent stage is the nuclear stage. Since the kinetic energy of WIMPs is rather low and the elastic transition generally dominates, the energy absorbed by the nucleons eventually passes to the whole nucleus, which recoils as described above. But this statement is not absolutely true. Indeed, the gap between the radial excitations of a nucleus (where the principal quantum numbers of valence nucleons change) and its ground state is rather large compared with the available collision kinetic energy. However, if the ${\bf L}\cdot{\bf S}$ coupling is taken into account, energy levels which were degenerate are split, and the resulting gaps between the split levels can be small and comparable with the kinetic energy of the dark matter particles; thus the collision between a dark matter particle and a nucleon can induce an inelastic transition and radiate photons. But this is not our concern in the CDEX project.
The high-purity germanium detector is designed to be sensitive to a low-energy dark matter flux.
The interaction can also be categorized into spin-independent and spin-dependent types. So far, measurements of the spin-independent WIMP-nucleon cross section reach a sensitivity of about $10^{-44}$ cm$^{2}$, but for the spin-dependent case the sensitivity is only about $10^{-39}$ cm$^{2}$. It is believed that beyond $10^{-46}$ cm$^{2}$, cosmic neutrinos would constitute an unavoidable background, and further improvements of the measurements would no longer be meaningful.
## II A brief review of the theoretical framework for detection of the dark matter flux
### II.1 Kinematics
We are dealing with collisions between WIMPs and nuclei; let us first present the necessary expressions related to the detection.
The recoil energy of the nucleus is an:2010kc
$Q={1\over 2}mv^{2}\,{2mM\over(m+M)^{2}}(1-\cos\theta_{cm})={\mu_{red}^{2}v^{2}(1-\cos\theta_{cm})\over M},$ (1)
where m, M stand for the masses of the WIMP and nucleus, $v$ is the absolute
value of the velocity of the WIMP in the laboratory frame, $\mu_{red}$ is the
reduced mass of the WIMP and nucleus and $\theta_{cm}$ is the scattering angle
in the center of mass frame of the WIMP and nucleus. The recoil momentum is
$|{\bf q}|^{2}=2\mu_{red}^{2}v^{2}(1-\cos\theta_{cm}).$ (2)
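As a numerical illustration of Eqs. (1) and (2) (with assumed, typical values for the WIMP velocity and a germanium target, which are not specified in the text), the maximum recoil energy, reached at $\cos\theta_{cm}=-1$, is only of the order of keV:

```python
# Maximum nuclear recoil energy Q_max = 2 mu_red^2 v^2 / M  (cos(theta_cm) = -1),
# evaluated for a germanium nucleus; masses in GeV, natural units with c = 1.
M_Ge = 67.7            # approximate Ge nucleus mass in GeV (assumption)
beta = 230e3 / 3e8     # v/c for an assumed typical WIMP velocity of 230 km/s

for m in (10.0, 50.0, 100.0):          # WIMP masses in GeV
    mu = m * M_Ge / (m + M_Ge)         # reduced mass of the WIMP-nucleus system
    Q_max_keV = 2 * mu**2 * beta**2 / M_Ge * 1e6   # GeV -> keV
    print(f"m = {m:5.1f} GeV: Q_max ~ {Q_max_keV:.1f} keV")
# Roughly 1.3, 14, and 28 keV, respectively; typical recoils are smaller still,
# which is why a very low detection threshold is needed for light WIMPs.
```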
The recoil rate per unit detector mass is
$R={n\sigma v\over M_{D}}={\rho\sigma v\over mM_{D}},$ (3)
where $\rho$ is the local dark matter density which is 0.3 GeV/cm3 in the
standard halo model. The differential rate is jungman:1995
${dR\over dQ}=2M_{D}{dR\over d|{\bf q}|^{2}}={2\rho\over m}\int
vf(v){d\sigma\over d|{\bf q}|^{2}}(|{\bf q}|^{2}=0)d^{3}v,$ (4)
where $f(v)$ is the velocity distribution function of WIMP and
${d\sigma\over d|{\bf q}|^{2}}(|{\bf q}|^{2}=0)={\sigma_{0}\over
4\mu_{red}^{2}v^{2}},$ (5)
with $\sigma_{0}$ being the scattering cross section at the zero momentum
transfer limit. The function $f(v)$ of DM in the galactic halo is assumed to
be in the Gaussian form an:2010kc as
$f({\bf u})=f({\bf v}+{\bf v_{e}})={1\over(\pi v_{0}^{2})^{3/2}}{e^{-{\bf
u}^{2}\over v_{0}^{2}}},$ (6)
where $v_{0}\sim 270$ km/s, ${\bf v}$ is the velocity of the DM with respect
to the detector and ${\bf v_{e}}$ is the velocity of the earth with
$v_{e}=v_{\odot}+14.4\cos[2\pi(t-t_{0})/T]$, and $t_{0}=152$ days, $T=1$ year
which reflects the annual modulation effects. It is noted that here $f({\bf
u})$ is exactly $f(v)$ which we used in above expressions. The average
velocity of the dark matter is 200 km/s $\sim$ 600 km/s, so the kinetic energy ${1\over 2}m({v\over c})^{2}$ is as small as 20$\sim$200 keV for m=100 GeV. That is the maximum energy which can be transferred to the nucleus and cause its recoil. If the dark matter particle is as light as 10 GeV, the kinetic energy is only of the order of a few hundred eV. The small available
energy results in a small recoil energy of the nucleus and demands a high
sensitivity of our detector at low energy ranges. That raises a serious
requirement for the detector design, and meeting it is the goal of the CDEX. It is worth noticing that for very low mass DM candidates predicted by some special models, or for very low recoil energies, the contribution of DM-electron scattering is not negligible; moreover, the recoil of the electron may constitute the main signal essig .
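To see how Eqs. (4)-(6) shape the expected recoil spectrum, the sketch below evaluates the velocity integral $\int_{v>v_{min}}f(v)/v\,d^{3}v$ numerically for a germanium target, where $v_{min}=\sqrt{MQ/(2\mu_{red}^{2})}$ (which follows from Eq. (1)) is the minimum WIMP speed that can produce a recoil energy $Q$. It neglects the Earth's velocity and the nuclear form factor, and the WIMP mass and target parameters are assumptions for illustration only:

```python
import numpy as np
from scipy.integrate import quad

v0   = 270e3 / 3e8     # v0 ~ 270 km/s from the text, expressed as v/c
M_Ge = 67.7            # Ge nucleus mass in GeV (assumption)
m    = 50.0            # assumed WIMP mass in GeV
mu   = m * M_Ge / (m + M_Ge)

def eta(v_min):
    # f(v) from Eq. (6) with v_e = 0, integrated as f(v)/v over d^3v = 4 pi v^2 dv
    integrand = lambda v: 4*np.pi*v**2 * np.exp(-(v/v0)**2) / (np.pi*v0**2)**1.5 / v
    return quad(integrand, v_min, 10*v0)[0]

for Q_keV in (1.0, 5.0, 10.0, 20.0):
    Q = Q_keV * 1e-6                              # keV -> GeV
    v_min = np.sqrt(M_Ge * Q / (2 * mu**2))
    print(f"Q = {Q_keV:4.1f} keV: relative rate ~ {eta(v_min)/eta(0.0):.2f}")
# The rate drops steeply with Q, so most events sit at low recoil energies and a
# low threshold is essential, especially for light WIMPs.
```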
### II.2 Cross section and amplitude
The calculation of the cross section for the collision between the DM particle and the nucleus can be divided into three stages: the first is calculating the amplitudes of the elementary collision with quarks; the second is calculating the amplitudes for the collision with a nucleon (proton or neutron); the third is calculating the cross section for the collision with the whole nucleus, if one can assume that the nucleus is free. But in fact the nucleus is bound on the lattice of a crystal, so when small temperature effects are of concern a fourth-stage calculation is required.
The WIMP particle interacts with quarks or gluons inside the nucleon ressell ;
griest ; shan:2011ct ; shan:2011jz . Gluons in the hadron do not participate
in the weak interaction at the leading order, so that the fundamental
processes concern only the interaction between DM particle and quarks. At the
second stage, one needs to write up the effective Hamiltonian for the
interaction between the DM particle and nucleon from the fundamental
interaction. Because it is an elastic process, the nucleon remains unchanged
after the collision and the total exchanged energy is transformed into the
kinetic energy of the nucleon. Obtaining the effective Hamiltonian for the DM particle-nucleon interaction is by no means trivial. We will discuss the procedure more explicitly when we talk about the SI and SD processes in the next subsections. Generally, the collisions between dark matter particles and nuclei are very low-energy processes; the nucleus cannot be excited to higher radial states, so the processes are elastic. Indeed, there could be orbital excitations making the collision inelastic, but in general such processes cause only very small observable effects. The excited nucleus later radiates a photon and returns to the ground state, but for the germanium detector, which cannot detect this photon radiation, such effects need not be considered. We only concentrate on the nuclear recoil effects. Definitely, the goal of the
whole research is to identify the DM particle(s) and discover the interaction
which may be new physics beyond the standard model. But for this purpose, we
need to study the observable effects and learn how to extract useful
information about the fundamental processes.
The cross section between WIMP and nucleus is
$\sigma={1\over
mMv}{1\over(2s_{1}+1)(2s_{2}+1)}\sum\int{d^{3}p^{\prime}_{1}\over(2\pi)^{3}}{1\over
2E^{\prime}_{1}}{d^{3}p^{\prime}_{2}\over(2\pi)^{3}}{1\over
2E^{\prime}_{2}}|M|^{2}(2\pi)^{4}\delta(p_{1}+p_{2}-p^{\prime}_{1}-p^{\prime}_{2}),$
where $v$ is the velocity of the DM particle while the nucleus is assumed to
be at rest before collision, $p_{1},p_{2},p^{\prime}_{1},p^{\prime}_{2}$ are
the momenta of the DM particle and nucleus in the initial and final states
respectively, $s_{1},s_{2}$ are the spins of the DM particle and nucleus, the
sum is over all the polarizations of the outgoing DM particle and nucleus, $M$
is the collision amplitude which is what we need to calculate.
How to get the effective coupling of $DM-N\bar{N}$ depends on the concrete
model adopted in the calculation barger:2008qd ; tzeng:1996ve ; shan:2011ct ;
shan:2011jz . For example, the exchanged meson between the DM particle and
nucleon could be the SM $Z^{0}$ or Higgs boson cheng , whereas in the models
beyond the standard model the exchanged agents may be $Z^{\prime}$ or others.
#### II.2.1 Amplitude
The elementary process is an elastic scattering between the DM particle and
quarks. For example, in the standard model (SM) the exchanged boson between the DM particle and a quark (u, d, s) is the $Z^{0}$ or the Higgs boson, whereas in new physics beyond the SM it may be a $Z^{\prime}$, etc. The Lagrangian, which is determined by the postulated dark matter model and the effective interaction, is written as
${\cal L}=l_{\Gamma}\bar{q}\Gamma q\;\;\;(q=u,d,s),$ (7)
where $\Gamma$ is a combination of the $\gamma$ matrices
($S,\;\gamma_{\mu},\;i\sigma_{\mu\nu},\;\gamma_{\mu}\gamma_{5},\;\gamma_{5}$)
and transfer momentum $q$, $l_{\Gamma}$ is the corresponding DM current. It is
noted that only the scalar $S$ and axial vector $\gamma^{\mu}\gamma_{5}$ do
not suffer a suppression at small transfer momenta. The former corresponds to the SI process, which will be discussed in the next subsection, and the latter
is proportional to the quark spin under the non-relativistic approximation and
results in the spin-dependent (SD) cross section.
According to the general principle of the quantum field theory, the amplitude
of the elastic scattering between the dark matter particle and a single nucleon should be written in momentum space, because during the process the
total energy and the three-momentum are conserved. Thus the amplitude is
$M=l_{\mu}\bar{u}({\bf p}^{\prime},s^{\prime}_{z})J^{\mu}u({\bf p},s_{z}),$
(8)
where $l_{\mu}$ is the DM current, ${\bf p}\;,{\bf
p}^{\prime},\,s_{z},\,s^{\prime}_{z}$ are the three-momenta and spin
projections of the initial and final states of the nucleon and $J^{\mu}$ is
the corresponding nucleon current of the effective interaction (for such as
the darkon models, $J^{\mu}$ would be replaced by a scalar, for more
complicated models, it might even be tensors). If the nucleon is free, the
amplitude can be easily calculated as depicted in any textbook of quantum
field theory. However the nucleon is not free but bound in the nucleus, thus
the wavefunction which describes the state of the nucleon is a solution of the
Schrödinger equation with very complicated nuclear potential, such as the
Paris potential, written in the coordinate space. Thus, one needs to derive
the formula in terms of the given wavefunctions via a Fourier transformation.
$\displaystyle M$ $\displaystyle=$ $\displaystyle l_{\mu}\bar{u}({\bf p}^{\prime},s^{\prime}_{z})J^{\mu}u({\bf p},s_{z})$ (9)
$\displaystyle=$ $\displaystyle l_{\mu}{1\over(2\pi)^{3}}\int d^{3}x^{\prime}\,e^{-i{\bf p}^{\prime}\cdot{\bf x}^{\prime}}\bar{u}({\bf x}^{\prime},s_{z}^{\prime})J^{\mu}{1\over(2\pi)^{3}}\int d^{3}x\,e^{i{\bf p}\cdot{\bf x}}u({\bf x},s_{z})\times(2\pi)^{3}\delta({\bf x}^{\prime}-{\bf x})$
$\displaystyle=$ $\displaystyle l_{\mu}{1\over(2\pi)^{3}}\int d^{3}x\,e^{-i({\bf p}^{\prime}-{\bf p})\cdot{\bf x}}\bar{u}({\bf x},s_{z}^{\prime})J^{\mu}u({\bf x},s_{z})$
$\displaystyle=$ $\displaystyle l_{\mu}{1\over(2\pi)^{3}}\int d^{3}x\,e^{-i{\bf q}\cdot{\bf x}}\bar{u}({\bf x},s_{z}^{\prime})J^{\mu}u({\bf x},s_{z}),$
where $\bar{u}({\bf x},s^{\prime}_{z})$ and $u({\bf x},s_{z})$ are the wave
functions of the nucleon in coordinate space. The delta function
$(2\pi)^{3}\delta({\bf x}^{\prime}-{\bf x})$ is introduced because the
interaction occurs at the same point (i.e. the propagator reduces to
${1\over q^{2}-M^{2}}\approx{-1\over M^{2}}$, where $M$ is the mass of the
mediating boson, which is heavy, so that $M^{2}$ is much larger than $q^{2}$). Now let
us step forward to discuss the scattering amplitude between the DM particle
and the nucleus. We re-interpret the wave function $u({\bf x},t)$ in the above
expression as the corresponding field operator in the second-quantization
scheme. The nucleon field operator $\Psi_{N}({\bf r},t)$ can be written as
$\Psi_{N}({\bf r},t)=\sum_{\alpha}[a_{\alpha}u_{\alpha}({\bf
r})e^{-iE_{\alpha}t}+b_{\alpha}^{{\dagger}}\nu_{\alpha}({\bf
r})e^{iE_{\alpha}t}].$ (10)
where $a_{\alpha}$ and $b_{\alpha}^{\dagger}$ are the annihilation and creation
operators of the nucleon and antinucleon, respectively. The probability operator of the nucleon
density at ${\bf r}$ is
$\Psi_{N}^{\dagger}({\bf r},t)\Psi_{N}({\bf r},t).$ (11)
The nuclear ground state can be expressed as
$|\Psi_{A}>=e^{i{\bf P}\cdot{\bf X}}(\Pi_{i=1}^{A}a_{i}^{{\dagger}})|0>,$
where $|0>$ is the vacuum, A is the number of nucleons in the nucleus and the
product corresponds to creating A nucleons (protons and neutrons) which are
the energy eigenstates. The phase factor $e^{i{\bf P}\cdot{\bf X}}$
corresponds to the degree of freedom of the mass center of the nucleus, where
${\bf P}$ is the total momentum of the nucleus and ${\bf X}$ is the coordinate
of the mass center of the nucleus. In the center of mass frame, ${\bf P}=0$,
the phase factor does not show up. Strictly speaking, the phase factor applies
only to a free nucleus; if one further considers that the nucleus is bound
on the lattice of a crystal such as germanium, the phase factor would be
replaced by a more complicated function. Unless thermal effects are discussed,
as for the CDMS detector, the nucleus can be treated as a free particle
without causing much error, and a simple phase factor describes this degree of
freedom sufficiently well; the thermal effect itself can be calculated in
terms of phonon theory.
Sandwiching the operator of Eq. (9), expressed through the field operator (10),
between the nuclear ground states, we have
$\int d^{3}x\,e^{i{\bf q}\cdot{\bf x}}<0|e^{-i{\bf P}^{\prime}\cdot{\bf X}}(\Pi_{i=1}^{A}a_{i})\sum_{\alpha\beta}a_{\alpha}^{\dagger}\bar{u}_{\alpha}({\bf x})e^{i\omega_{\alpha}t}(J^{\mu})a_{\beta}u_{\beta}({\bf x})e^{-i\omega_{\beta}t}(\Pi_{i=1}^{A}a_{i}^{\dagger})e^{i{\bf P}\cdot{\bf X}}|0>.$ (12)
Now let us turn to the laboratory reference frame, where the recoil of the
nucleus must be clearly described. By doing so, we need to consider the phase
factor $e^{i{\bf P}\cdot{\bf X}}$ in Eq. (12). As aforementioned, the scattering
between the DM particle and nucleus is elastic, thus any single nucleon cannot
transit from its original state to an excited state, but transfers its kinetic
energy gained from the collision with the DM particle to the whole nucleus.
The practical process of the energy transfer is via interaction among all the
nucleons in the nucleus, so must be very complicated. Fortunately, we do not
need to know all details of the process. Since the inner state of the nucleus
does not change after the collision, the wavefunction of the nucleus can only
vary by a phase factor. Since the time duration of the energy-transfer process
is very short compared with our measurement time, we may use the "impulse"
approximation, i.e. the nucleus does not have enough time to move. The phase
factor obviously does not depend on the coordinates of any individual nucleon,
so it can only be related to the coordinate ${\bf X}$ of the mass center of the
nucleus. Including this phase factor, we rewrite the expression (12) as
$\displaystyle\int d^{3}X<0|e^{-i{\bf P}^{\prime}\cdot{\bf X}}e^{-i{\bf q}\cdot{\bf X}}\int d^{3}x\,(\Pi_{i=1}^{A}a_{i})\sum_{\alpha\beta}a_{\alpha}^{\dagger}\bar{u}_{\alpha}e^{i\omega_{\alpha}t}(J^{\mu})a_{\beta}u_{\beta}e^{-i\omega_{\beta}t}e^{-i({\bf p}_{\alpha}\cdot{\bf x}_{\alpha}-{\bf p}^{\prime}_{\beta}\cdot{\bf x}^{\prime}_{\beta})}(\Pi_{i=1}^{A}a_{i}^{\dagger})e^{i{\bf P}\cdot{\bf X}}|0>$ (13) $\displaystyle=$ $\displaystyle(2\pi)^{3}\delta({\bf P}-{\bf P}^{\prime}-{\bf q})\int d^{3}x<0|(\Pi_{i=1}^{A}a_{i})\sum_{\alpha\beta}a_{\alpha}^{\dagger}\bar{u}_{\alpha}e^{i\omega_{\alpha}t}(J^{\mu})a_{\beta}u_{\beta}e^{-i\omega_{\beta}t}(\Pi_{i=1}^{A}a_{i}^{\dagger})e^{-i({\bf q}\cdot{\bf x})}|0>,$
where the $\delta$-function explicitly embodies momentum conservation. For
simplicity we will not show this factor below, as it is well understood.
For elastic scattering $\alpha=\beta$, so the factor
$e^{-i\omega_{\beta}t}e^{i\omega_{\alpha}t}=1$; if, as aforementioned,
there exists orbital excitation, this factor is no longer 1, but in general
cases it is still very close to unity.
There are two types of cross sections: the spin-independent (SI) and the
spin-dependent (SD) ones.
#### II.2.2 The SI cross section
The fundamental interaction between the DM particle and quarks is model
dependent; different models result in different Hamiltonians. Nevertheless,
the procedure for passing from the fundamental DM-quark interaction to the
level of nucleons was explicitly demonstrated in Refs. cheng ; he ; ellis ,
and the reader is referred to those enlightening works.
As noted, if the interaction between the DM particle and quarks is spin-
independent, for example the darkon case he , the particle interaction and the
nuclear effect can be factorized and an enhancement factor proportional to $A$
appears.
The observation rates for spin-independent scattering can be written as
jungman:1995
$\displaystyle\frac{d\sigma}{d|{\bf{q}}|^{2}}$ $\displaystyle=$ $\displaystyle
G_{F}^{2}\frac{C}{v^{2}}F^{2}(|{\bf{q}}|)=\frac{\sigma_{0}}{4m_{r}^{2}v^{2}}F^{2}(|{\bf{q}}|),$
(14)
where $G_{F}$ is the universal Fermi coupling constant, $\sigma_{0}$ is the
cross section at zero-recoil, $m_{r}$ is the reduced mass of the WIMP and
nucleus, and finally $F(|{\bf{q}}|)$ is the nuclear form factor. Generally
speaking, the mass density distribution of the nucleus is proportional to the
charge density or the nucleon number density, hence the form factor can also
be computed from the nucleon number density in the nucleus, i.e. the nuclear
density. The most general form is given in Eq. (14), but for the SI cross
section the recoil effect can be absorbed into a simple form factor. The
form factor is the Fourier transform of the nuclear density:
$\displaystyle F(q)$ $\displaystyle=$
$\displaystyle\frac{1}{A}\int\rho(r)e^{-i{\bf{q}}\cdot{\bf{r}}}d^{3}r$ (15)
$\displaystyle=$
$\displaystyle\frac{4\pi}{A}\int\frac{r}{q}\rho(r)\sin{(qr)}dr,$
where $\rho(r)$ is the nuclear density. Here we assume that the nucleus is
spherically symmetric, so that $\rho$ is a function of $r$ only. Obviously,
unless $F(q)$ can be determined rather accurately, extraction of useful
information from the data is impossible.
There are several ansätze for determining the form factor by assuming typical
$\rho(r)$ functions helm ; sogmodel ; fbmodel ; jdl .
The form factor $F(q)$ is required to obey
$F(0)=1.$
The numerical results rel show that the form factors determined by the
various models of the nuclear density do not deviate much from each other;
therefore, for measurements of moderate accuracy, one can use any of the
models.
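As a concrete illustration of such an ansatz, the short sketch below evaluates the widely used Helm parametrization of $F(q)$; the numerical parameter values are typical defaults assumed here purely for illustration and are not taken from the references above.

```python
import numpy as np

def helm_form_factor(q_invfm, A):
    """Helm form factor F(q) for a nucleus of mass number A.
    q_invfm : momentum transfer |q| in fm^-1.
    The parameter values below are commonly quoted defaults (assumed here)."""
    s = 0.9                                    # surface thickness, fm
    a = 0.52                                   # fm
    c = 1.23 * A**(1.0 / 3.0) - 0.60           # fm
    rn = np.sqrt(c**2 + 7.0 / 3.0 * np.pi**2 * a**2 - 5.0 * s**2)  # effective nuclear radius
    x = q_invfm * rn
    j1 = np.sin(x) / x**2 - np.cos(x) / x      # spherical Bessel function j_1(x)
    return 3.0 * j1 / x * np.exp(-(q_invfm * s)**2 / 2.0)

# Example: germanium (A = 73) at q ~ 0.19 fm^-1 (a recoil of roughly 10 keV);
# the result is close to 1, consistent with the normalization F(0) = 1.
print(helm_form_factor(0.19, 73))
```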
Generally, the SI cross section can be written as
$\sigma_{0}^{SI}={4\over\pi}m_{r}^{2}[Zf_{p}+(A-Z)f_{n}]^{2},$ (16)
and eventually we have
${dR\over dQ^{2}}={\rho_{0}\sigma_{0}\over
2m_{\chi}m_{r}^{2}}F(Q)^{2}\int_{v_{min}}^{v_{max}}{f(v)\over v}dv,$ (17)
where $Q^{2}=-q^{2}>0$ is the squared momentum transfer. Here $f_{p}$ and
$f_{n}$ are the effective couplings of the DM particle to the proton and the
neutron respectively, which can be derived from the effective DM-nucleon
couplings discussed above. To a good approximation
$f_{n}\approx f_{p},$ (18)
thus $\sigma_{0}^{SI}\propto A^{2}$, which is a large enhancement factor,
especially for heavy nuclei. Therefore, thanks to this enhancement, the total
spin-independent cross section can be much larger than the cross section for
DM scattering off a single nucleon. That is why the present data can reach a
sensitivity of $10^{-44}$ cm$^{2}$ for the SI cross section between DM and
nucleon.
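To make the connection between Eq. (17) and an observable recoil spectrum concrete, the following minimal sketch evaluates the velocity integral for a truncated Maxwellian halo and folds it into the rate. All halo parameters, the WIMP mass, the target mass and the cross section are illustrative assumptions, a trivial form factor is used, and the overall normalization is left arbitrary.

```python
import numpy as np
from scipy.integrate import quad

# Illustrative inputs (assumed, not taken from the text)
RHO0   = 0.3        # local DM density, GeV/cm^3
V0     = 220e5      # most probable halo speed, cm/s
VESC   = 544e5      # galactic escape speed, cm/s
M_CHI  = 10.0       # WIMP mass, GeV
M_N    = 67.6       # Ge nucleus mass, GeV
SIGMA0 = 1e-44      # zero-momentum-transfer SI cross section, cm^2

def mean_inverse_speed(vmin):
    """The integral of f(v)/v in Eq. (17) for a truncated Maxwellian speed distribution."""
    if vmin >= VESC:
        return 0.0
    f = lambda v: v**2 * np.exp(-(v / V0)**2)          # unnormalized speed distribution
    norm, _ = quad(f, 0.0, VESC)
    num, _ = quad(lambda v: f(v) / v, vmin, VESC)
    return num / norm

def dRdE(E_keV, form_factor=lambda q: 1.0):
    """Differential rate dR/dE_R (arbitrary normalization), i.e. Eq. (17) in terms of E_R."""
    m_r = M_CHI * M_N / (M_CHI + M_N)                  # WIMP-nucleus reduced mass, GeV
    E = E_keV * 1e-6                                   # recoil energy, GeV
    vmin = np.sqrt(M_N * E / (2.0 * m_r**2)) * 3e10    # minimum WIMP speed, cm/s
    q = np.sqrt(2.0 * M_N * E)                         # momentum transfer, GeV
    return RHO0 * SIGMA0 / (2.0 * M_CHI * m_r**2) * form_factor(q)**2 * mean_inverse_speed(vmin)

print(dRdE(1.0))   # relative rate at a 1 keV nuclear recoil
```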
#### II.2.3 The SD cross section
For the elastic scattering with small momentum transfer, the contribution of
the axial vector is dominant
$g^{2}l^{\mu}\bar{q}\gamma_{\mu}\gamma_{5}q\approx g^{2}{\rm\bf l}\cdot\bar{q}{\bf s}q,\;\;\;\;(q=u,d,s),$ (19)
where $g$ is the coupling in the concerned theory and $l^{\mu}$ is the DM
current (which could be vector and/or axial vector). Summing over the
contributions of all partonic spin projections, one arrives at the
interaction between the DM particle and the nucleon. Thus the effective
current for the nucleon is ressell:1993qm ; ressell:1997kx
$<p(n)|\sum_{u,d,s}g\bar{q}s_{z}q|p(n)>=\sum_{q=u,d,s}A^{p(n)}_{q}\Delta_{q},$
(20)
where $\Delta_{q}\equiv
s_{q}(\uparrow)-s_{q}(\downarrow)+\bar{s}_{q}(\uparrow)-\bar{s}_{q}(\downarrow)$,
and $A^{p}_{q}$, $A^{n}_{q}$ can be obtained from the fundamental
Hamiltonian ressell . From the data, we have
$\Delta_{u}^{(p)}=0.78\pm 0.02,\;\;\Delta_{d}^{(p)}=-0.48\pm
0.02,\;\;\Delta_{s}^{(p)}=-0.15\pm 0.02,$
and
$\Delta_{u}^{(n)}=\Delta_{d}^{(p)};\;\;\Delta_{d}^{(n)}=\Delta_{u}^{(p)};\;\;\Delta_{s}^{(n)}=\Delta_{s}^{(p)}.$
Calculating the SD cross section is much more complicated than calculating the
SI one. For the SD process, the nuclear effect cannot be factorized out, but
is entangled with the fundamental sub-processes.
The unsuppressed effective nucleon current is engel ; engel:1991wq
$<p,s|{\cal J}_{5}^{\mu}(x)|p^{\prime},s^{\prime}>=\bar{u}_{N}(p,s){1\over
2}[(a_{0}+a_{1}\tau_{3})\gamma^{\mu}\gamma_{5}+(b_{0}+b_{1}\tau_{3})q^{\mu}\gamma_{5}]u_{N}(p^{\prime},s^{\prime})e^{iq\cdot
x}$ (21)
where $q=p-p^{\prime}$ and $a_{0},a_{1}$ are given as
$\displaystyle a_{p}$ $\displaystyle=$ $\displaystyle
a_{0}+a_{1}=\sum_{q=u,d,s}A^{p}_{q}\Delta_{q};$ (22) $\displaystyle a_{n}$
$\displaystyle=$ $\displaystyle
a_{0}-a_{1}=\sum_{q=u,d,s}A^{n}_{q}\Delta_{q},$ (23)
as results from Eq. (20).
Generally, the spin-dependent cross section can be decomposed into the
isoscalar $S_{00}$, isovector $S_{11}$ and interference term $S_{01}$ as
$S_{SD}^{A}=a_{0}^{2}S_{00}(q)+a_{1}^{2}S_{11}(q)+a_{0}a_{1}S_{01}(q),$ (24)
where $a_{0},\;a_{1}$ are just the coefficients of isospin 0 and 1 components
of the effective current.
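For orientation, the tiny sketch below combines the quark spin contents quoted above into the nucleon couplings $a_{p}$, $a_{n}$ of Eqs. (22)-(23) and the isospin coefficients $a_{0}$, $a_{1}$; the DM-quark couplings $A_{q}$ are model dependent, so unit values are used here only as placeholders.

```python
# Quark spin contents of the proton quoted in the text; neutron values follow by isospin symmetry.
delta_p = {"u": 0.78, "d": -0.48, "s": -0.15}
delta_n = {"u": delta_p["d"], "d": delta_p["u"], "s": delta_p["s"]}

# Hypothetical DM-quark couplings A_q (model dependent; unity is only a placeholder).
A_q = {"u": 1.0, "d": 1.0, "s": 1.0}

a_p = sum(A_q[q] * delta_p[q] for q in "uds")     # Eq. (22)
a_n = sum(A_q[q] * delta_n[q] for q in "uds")     # Eq. (23)
a_0, a_1 = (a_p + a_n) / 2.0, (a_p - a_n) / 2.0   # isoscalar and isovector combinations
print(a_p, a_n, a_0, a_1)
```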
In fact, the amplitude for the spin-dependent collision between the DM particle
and the nucleus is similar to that of nuclear $\beta$ decay, which was
thoroughly studied by nuclear physicists a long time ago walecka . The
calculations of the SD cross sections have been discussed by many authors
many . Here we just outline the work given in the literature which will be
useful for our analysis of the data taken by the CDEX experiments.
Now let us move forward to the DM-nucleus stage. For zero momentum transfer,
$\bar{N}\gamma_{\mu}\gamma_{5}N=u^{\dagger}_{N}{\mbox{\boldmath$\sigma$}}u_{N},$
where $u_{N}$ is the simple two-component fermion spinor of the nucleon. When
the momentum transfer is not zero (${\bf q}\neq 0$), the isovector part of the
axial current induces a pseudoscalar term engel as shown in Eq. (21). By the
reduction formula of QFT, the axial current should couple to a pseudoscalar
meson, and the lightest pseudoscalar meson is the pion, so the virtual pion
gives the most important contribution to the new term. Thus we have the
relations $b_{0}=0$ and $b_{1}={a_{1}m_{N}\over{\bf q}^{2}+m_{\pi}^{2}}$ by
assuming PCAC. Then under the non-relativistic approximation the matrix
element (21) becomes
$<p,s|{\cal
J}^{5}_{\mu}(x)|p^{\prime},s^{\prime}>=u^{\dagger}_{N}(p,s)[{1\over
2}(a_{0}+a_{1}\tau_{3}){\mbox{\boldmath$\sigma$}}-{1\over
2}{a_{1}{\mbox{\boldmath$\sigma$}}\cdot{\bf q}\tau_{3}\over{\bf
q}^{2}+m_{\pi}^{2}}{\bf q}]u_{N}(p^{\prime},s^{\prime})e^{i{\bf q}\cdot{\bf
x}-i\omega t}$ (25)
since $q_{0}$ vanishes for elastic scattering; for very low energy inelastic
scattering we can still approximate $e^{-i\omega t}\approx 1$.
The total cross section is commonly evaluated in terms of the multipole-operator
method donnelly . Under the non-relativistic approximation, the
scattering amplitude can be written as
${\cal M}={\bf l}\cdot\int d^{3}x<JM|{\bf{\cal J}}|JM^{\prime}>e^{i{\bf
q}\cdot{\bf x}},$ (26)
thus the differential cross section reduces into
${d\sigma\over dq^{2}}={G\over(2J+1)v^{2}}S(q),$ (27)
where
$S(q)=\sum_{L\;odd}(|<J||{\cal T}_{L}^{el}(q)||J>|^{2}+|<J||{\cal
L}_{L}(q)||J>|^{2}),$ (28)
and $G$ is a constant depending on the model adopted for the calculation,
while ${\cal T}^{el}_{L}(q)$ and ${\cal L}_{L}(q)$ are the transverse electric
and longitudinal projections of the axial current. The explicit expressions of
${\cal T}^{el}_{L}(q)$ and ${\cal L}_{L}(q)$ are defined and calculated in
Refs. engel ; engel:1991wq ; ressell:1993qm ; ressell:1997kx ; to save space,
we do not present them here, and readers who are interested in the details
are recommended to refer to the original works.
It is also noted that since the matrix elements depend on the expectation
values of $\sigma$, the contributions from the nucleons at lower energy
states, in the language of the shell model, i.e. the nucleons at the inner
shells, should cancel each other. In other words, the nucleons at lower states
would make null contributions to the scattering matrix elements. Only the few
nucleons residing on the outermost shells, the so-called valence nucleons,
make substantial contributions to the DM-nucleus scattering. That is why the
SD cross section is much more difficult to measure than the SI one.
It is interesting to look deeper into the SD transitions because inelastic
scattering processes may occur. As aforementioned, the gaps between the energy
levels pertaining to the different principal quantum numbers are large
compared to the available kinetic energy of the dark matter particle, so that
the nucleon which is colliding with the DM particle cannot transit to an
energy shell with higher principal quantum number. However, due to existence
of the ${\bf L}\cdot{\bf S}$ coupling, the energy level ($l\neq 0$) which was
degenerate would be split into two levels, and the gap between the two levels
is small and comparable with the kinetic energy of the DM particle, so that
the nucleon may transit into the higher energy level after the collision
and the scattering is inelastic.
Moreover, via the loop effect, the axial current Lagrangian can induce an
effective scalar coupling which is SI-type and can be enhanced by the factor
$A^{2}$. Thus the effective coupling is loop-suppressed, but the total cross
section is enhanced by $A^{2}$, so its net effect may be comparable with that
of the tree contribution of the axial currents. The situation becomes more
complex and needs to be carefully investigated when the data are analyzed.
Anyhow, extracting useful information from the data is by no means an easy
job. Not only a careful analysis of the background but also a serious
theoretical study must be carried out.
As introduced above, the sensitivity for detecting the SI cross section has
already reached $10^{-44}$ cm$^{2}$; it is well known that if the cross
section is smaller than $10^{-46}$ cm$^{2}$, the contribution of atmospheric
neutrinos cannot be eliminated. Thus, if down to $10^{-46}$ cm$^{2}$ (a
sensitivity that might be reached in a few years) a DM flux is still not
observed, there could be several possibilities: one is that the DM particles
do not interact with the SM particles via the weak interaction, and another
is that the DM particles are not WIMPs but something else, for example heavy
sterile neutrinos. In the first case, DM would participate only in the
gravitational interaction, which would be a disaster for us because the
presently available apparatus on the earth has no chance to measure
gravitational effects, and whether it ever will remains an open question. In
the second case, we need more theoretical study to explore possible channels
to check the postulates.
For the SI cross section, because the Hamiltonian caused by the effective
interaction does not contain a spin-operator, the spin of the nucleon cannot
flip during the collision, thus the scattering is fully elastic. On the
contrary, for the SD cross section the Hamiltonian contains a spin operator
which may induce a spin flip, therefore the spin projection of the final state
of the nucleon may differ from the initial one, namely a transition from a
lower energy level to a higher one occurs during the collision. Obviously the
nucleus is excited after the inelastic collision and will then return to the
ground state by radiating a photon. The photon should have a characteristic
spectrum, which can be "seen" in a detector. The signal might be weak and
detection rather difficult, as expected. This scheme would be a
subject of further investigation. However, it is not applicable to our Jin-
Ping project because the light signals are not detected at our germanium
detector at all.
Indeed, we hope that the DM particle does interact with SM quarks, and hence
with nucleons, via the weak interaction, so that we can find its trace through
direct searches with underground detectors; otherwise, one would be unable to
identify the mysterious matter even though we know for sure of its existence.
## III Review of experiment
### III.1 Overview
It is very important for scientists to detect dark matter particles with
different experimental methods in order to understand the essence of dark
matter. Usually experimental detection of dark matter can be divided into two
types: direct detection and indirect detection. For indirect detection of dark
matter, the SM particles, such as gammas and neutrinos, which are generated
from the annihilation of two dark matter particles or from decays of DM
particles, are detected.
annihilate each other should be the vicinity near the center of a star such as
the Sun or planets including our Earth, where the gravitational force traps
the dark matter particles whose density rises to a relatively high level, so
that the possibility of dark matter particle annihilation there would be
higher than at other places. Large-scale high-energy accelerators may also
detect dark matter particles indirectly, as they can be generated in
collisions of energetic particle beams. But since the produced dark matter
particles are stable and neutral (bosons, or possibly fermions), they manifest
themselves only as missing energy, and detecting them is a challenge to our
detection technology.
Instead, direct detection of dark matter particles mainly focuses on the
measurement of the deposited energy of a recoiled nucleus scattered off by the
coming dark matter particle, mainly the Weakly Interacting Massive Particle
(WIMP). The essence of direct detection of WIMP is to single out the possible
events induced by the coming dark matter particles from a large background
produced outside and inside the detector system. So the ultra-low background
level of the detecting system for direct detection of WIMP is a crucial
requirement (Fig. 1).
Figure 1: The principle of WIMP direct detection.
For different detectors, the deposited energy of the recoiled nucleus
scattered off by the incident WIMP in the fiducial volume of the detector
usually can be detected with three different processes: ionization,
scintillation and heating. So the three main types of detectors are ionization
detector, scintillation detector and heating detector which measure
respectively ionized electrons, scintillation light and temperature
variations. Some detectors are designed to extract information from a
combination of two such processes in order to improve the ability of
discriminating the ”real” WIMP signal from the background events.
Dozens of groups in the world are now carrying out experiments to directly
detect WIMPs, and sensitivities below $10^{-7}$ pb have been achieved for a
WIMP mass of around 50 GeV.
For the scintillation method, crystals such as NaI(Tl) and CsI(Tl) are used as
the target detectors. Scintillation detectors can provide a pulse shape to
discriminate the WIMP events from background events, and it is easy to produce
a larger prototype detector up to the ton-mass scale. Ultra-pure
scintillation detectors have been studied and run for a long time to detect
WIMPs. The DAMA Collaboration has built its NaI scintillation crystal array
detector with a mass scale of hundreds of kilograms bernabei . The KIMS
group has chosen almost the same technology as DAMA except that the
scintillation crystal is CsI(Tl) kims2007 . The relatively low photoelectron
yields restrict the application of the pulse-shape discrimination method in
the low energy region. Both the DAMA and KIMS experiments therefore turn to
detecting the annual modulation of the event rate induced by WIMPs (Fig. 2).
The DAMA Collaboration has achieved an average event rate of 1 cpkkd (counts
per kg per keV per day) above 2 keV and obtained a clear annual modulation
result with a 1.17 ton$\times$yr data set and 13 annual cycles. Based on these
results, the DAMA collaboration claimed that they had found evidence of dark
matter.
Figure 2: The relative velocity of the earth to the sun and the velocity of
the sun moving in the WIMP ”sea”.
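For reference, the expected annual modulation illustrated in Fig. 2 can be sketched as below; the Earth's velocity through the WIMP "sea" is modeled with commonly quoted standard-halo numbers, which are assumptions used here only for illustration.

```python
import numpy as np

# Earth's speed through the WIMP "sea": v_E(t) ~ v_sun + v_orb * cos(gamma) * cos(2*pi*(t - t0)/T),
# with the maximum near the beginning of June. All numbers are typical assumed values.
v_sun, v_orb, cos_gamma = 232.0, 29.8, 0.51      # km/s, km/s, cosine of the orbital inclination
t = np.arange(365.0)                              # day of year
v_E = v_sun + v_orb * cos_gamma * np.cos(2.0 * np.pi * (t - 152.5) / 365.25)
print(v_E.min(), v_E.max())                       # a few-percent seasonal variation of v_E,
                                                  # which modulates the event rate accordingly
```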
For the ionization method, the high-purity germanium detector is the best
choice so far, due to its ultra-low self-radioactivity and the technical
feasibility of developing large-scale detectors. Germanium detectors can
provide a low energy threshold and very good energy resolution, but their
shortcoming is a poor ability to discriminate the recoil events induced by
incident WIMPs from the background events of gammas and electrons. This
disadvantage prevents Ge detectors, or other semiconductor detectors, from
reaching high detection sensitivity in a relatively large background.
For the heating method, the detectors measure a tiny variation of temperature
due to the heat deposited inside the detector materials when the detectors run
in an ultra-low temperature environment of only tens of millikelvin. The heat
deposition induces a change of the current signal in the equipped electronics,
which is then recorded.
For dark matter search experiments, the primary and key task is to extract
possible signals from the recorded events, which include a large amount of
background debris, and to determine whether they are induced by incident dark
matter particles. So several experimental groups choose detectors which can
simultaneously measure two kinds of signals induced by one interaction; this
strategy gives the detector a very strong ability to discriminate against
background. Now two collaborations, CDMS and XENON, have published the most
stringent and sensitive detection results for WIMPs with masses of tens of
GeV. The CDMS group has developed a new type of Ge and Si detector
which collects both the ionization and heat signals. A kind of superconducting
tungsten transition-edge sensors (TESs) has been used to read out the heat
deposition, at the same time the ionization signal is also recorded. So two
kinds of signals including ionization and heat are read out when one incident
particle hits the target in the fiducial volume of the detector. The ration of
the ionization energy and heat energy could be different for the recoils of
the nucleus which is scattered off by WIMP and the background events caused by
incident gammas and/or electrons. This scheme provides a very strong event
discrimination for the CDMS experiment. The CDMS group has run its Ge and Si
detecting system for several years and published its new observation result in
2010 and 2011 (Fig. 3).
Figure 3: The physical results from CDMS, XENON and other group for WIMP
search.
The XENON group has also developed their liquid xenon TPC (Time Projection
Chamber) detector, which collects both the ionized electrons and the
scintillation light when a particle interacts with the xenon target. Now the
XENON100 experiment has built its liquid xenon detector of 100 kg fiducial
mass and published new results in 2011 sciencexpress2010 .
### III.2 Ultra-low energy threshold experiment
In recent years, an ultra-low energy threshold of about 400 eV has been
achieved with germanium detectors based on point-contact technology; such a
point-contact germanium detector (PCGe) is used for scanning WIMPs with masses
as low as 10 GeV. The scientists from Tsinghua University of China first started
the experimental preparation from 2003 yueq2004 and ran the first 5g-mass
planar Ge detector for a test. The TEXONO collaboration runs a 20g ULE-HPGe
detector with the shielding system on the ground near a nuclear power plant in
Taiwan and published its physical results in 2009 shown in Fig.4 texono2009 .
The CoGeNT Collaboration has also started a similar experiment for the WIMP
search with 475g PCGe detector at a ground laboratory from 2006 and published
its physical results in 2008 and 2011, respectively cogent2011 . The CoGeNT
experiment has explored a low WIMP mass region and its results are shown in
Fig.5.
Figure 4: The physical results of WIMP detection from TEXONO group.
Figure 5: The physical results of WIMP detection from CoGeNT group.
Due to the updated results from Ge detectors, the low-mass WIMP detection has
become one of the new hot topics for dark matter experiments. Many groups have
tried to develop lower threshold detectors to scan the low mass WIMP region
below 10 GeV. The Majorana majorana and GERDA gerda groups try to reduce their
energy threshold to cover the energy range for both double beta decay and dark
matter search.
Since 2009, the CDEX Collaboration has been aiming to search for low-mass
WIMPs with a ton-scale point-contact germanium detector located at the China
Jin-Ping underground Laboratory (CJPL). Now the first stage of the CDEX
project, CDEX-1, in which a 1 kg PCGe detector is installed, has already run
successfully and begun to take data.
## IV The CJPL
Experiments in the particle physics domain such as dark matter detection,
double beta decay and neutrino experiments require ultra-low background
laboratories in order to identify the rare events. The deleterious background
events come mainly from radioactive isotopes in environmental materials, from
high energy cosmic-ray muons which originate from the interaction of high
energy cosmic-ray protons with the upper atmosphere, as well as from the
internal backgrounds of active isotopes in the detector materials and the
noise of the detector electronics. The ambient radioactive background from
radioactive nuclei can be shielded with an efficient shielding system which
includes passive and active parts. However the high energy muons, which are
the main hard component of cosmic rays, can pass through the environment and
interact with the shielding materials, the structure materials and the
detector itself. Though the prompt events coming from the direct interaction
of muons with those materials on their paths can be vetoed by an active muon
veto system, the delayed neutrons and accompanying radioactive nuclei induced
by incident cosmic-ray muons still contribute to the background in the
detector. This channel is the most difficult one to shield against and to
extract from the measured spectrum. So it is necessary that the detection of
dark matter be performed at underground laboratories where the muons are
efficiently stopped and absorbed by the overburden rock.
There are many underground laboratories established or under construction in
the world, including LNGS in Italy, Kamioka in Japan, Sudbury in Canada,
Modane in France, Soudan in the USA, and so on taup2009 . In 2010, the first
deep underground laboratory in China, with excellent working and living
conditions, was built. This deep underground laboratory is named the China
Jin-Ping Underground Laboratory (CJPL).
### IV.1 The CJPL environment
The Yalong River is more than 1500-km long in Sichuan province of China. A
part of almost 150 km of the river bends and encompasses the huge Jin-Ping
Mountain to make a narrow arc. The bend is very sharp at the turning point,
namely, in mathematical terminology, the curvature radius is small at that
spot; thus the west and east parts of the river are not far apart, but they
are separated by the mountain and the height difference of their water
surfaces is quite large. If a tunnel is drilled from the east to the west
along the chord of the arc, the water drop is tremendously large, so this can
serve as an ideal hydropower resource. Two hydropower plants, one at each side
of the Jin-Ping Mountain on the Yalong River, are now being built by the
Yalong River Basin Hydropower Development Company. In total, seven parallel
tunnels have been drilled, including one drainage tunnel, two transport
tunnels and four headrace tunnels cjplscience . In 2008, the two transport
tunnels were completed and have been in use for the hydropower plant project;
the map is shown in Fig. 6. The length of these two transport tunnels is 17.5 km and
the cross-section is about 6m$\times$6m. The CJPL is located in the central
portion of one of the transport tunnels and the rock overburden is about 2400
m thick. Fig.7 shows the detailed location of the CJPL in the transport tunnel
and the transect profile of the transport tunnel.
Figure 6: The site of CJPL.
Figure 7: The cross-section of the Jin-Ping Mountain along the transportation
tunnel. The site of CJPL is in the middle part of the Jin-Ping tunnel and can
be accessed by car.
Tsinghua University, collaborating with the Yalong River Basin (originally
called Ertan) Hydropower Development Company, which owns the Jin-Ping tunnels,
made a plan to build an underground laboratory large enough to host a
relatively large-scale low-background experiment. The project is under way
right now. As the first step, a small CJPL hall has been constructed for the
dark matter experiment and an ultra-low background material screening facility
has also been installed. The CJPL internal space includes three parts: an
entrance tunnel 20 m in length, a 30 m-long connection tunnel and the main
hall with dimensions of 7.5m(H)$\times$6.5m(W)$\times$40m(L); the total
available volume is about $\mathrm{4000m^{3}}$. The wall of CJPL is covered by
a layer of air-proof resin to separate it from the rock of the tunnel. There
are a ground laboratory building, offices and a dormitory for the researchers
near the entrance of the tunnel. Apartments, restaurants, a hotel and sport
facilities are also available near the ground laboratory.
In order to provide a good working condition with fresh air for researchers
and further decrease the radon concentration in the air of the internal space,
a 10-km long air ventilation pipe has been built to pump the fresh air from
outside the transport tunnel into the CJPL space. This ventilation system can
provide up to 2000$\mathrm{m^{3}/h}$ fresh air and keep the air clean inside
the CJPL. According to the needs of the dark matter experiment, a radon
trapping system will be installed in the CJPL, serving as an improved radon
gas filter.
The CJPL is equipped with 3G wireless network and fiber access to broad-band
internet.
### IV.2 CJPL facilities
A low background germanium spectrometer serves as the standard facility for
material screening and selection for either dark matter detection or
neutrinoless double beta decay experiments heusser . A low background
germanium spectrometer, called the GeTHU facility, with a dedicated low
background shield, has been designed and set up lately at CJPL for material
selection for the dark matter experiment CDEX and other rare-event
experiments. Now the facility is operating for background measurements.
Moreover, another two counting facilities are being designed to improve the
minimum detection sensitivity.
The detector is a high-purity, N-type germanium detector (HPGe) with a
relative efficiency of 40$\%$ and was constructed by CANBERRA in France
canberra . The germanium crystal has a diameter of 59.9 mm and a thickness of
59.8 mm. The cryostat is made of ultra low background aluminum with a U shape
to avoid direct line-of-sight to outside (see Fig.8(a)). The preamplifier is
placed outside the shield, since it would otherwise introduce more radioactive contamination.
The shielding structure has been designed to guarantee a large sample space,
low background and easy operation. The sample chamber is surrounded by 5 cm
(15 cm for the base plate) of oxygen-free, highly pure copper made by the
Chinalco Luoyang Copper Co., Ltd luoyang . Three layers of ordinary lead, each
5 cm thick, surround the copper (see Fig. 8(a) and (b)). The
$\mathrm{{}^{210}Pb}$ activity of the lead is about 100 Bq$/$kg. All lead
bricks were carefully cleaned with ethanol before installation. Outside the
lead are 10 cm borated polyethylene plates to prevent penetration of ambient
neutrons. The upper copper plate, which closes the sample chamber and carries
the upper lead bricks and borated polyethylene plates, is placed on sliding
rails in order to open or close the shield easily. The whole system is flushed
by nitrogen boil-off gas from the cooling dewar. The Monte Carlo simulation
shows that the internal background count rate of GeTHU within 40 keV to 2700
keV is only about 0.0007 cps (see Fig. 8(c)).
Figure 8: The ultralow background HPGe gamma spectrometry at CJPL.
### IV.3 CJPL key performance (specifications) and simulation
#### IV.3.1 The Radioactivity of surrounding environment at CJPL
Original rock samples at different positions in the cave were collected before
construction, and concrete samples were collected during the construction. All
samples were measured and analyzed with a low-background HPGe gamma
spectrometer. The measurement results are as follows: the radioactivity
concentrations of $\mathrm{{}^{226}Ra,~{}^{232}Th}$ and $\mathrm{{}^{40}K}$ in
the rock samples are $\mathrm{1.8\pm 0.2Bq/kg,<0.27Bq/kg}$ and
$\mathrm{<1.1Bq/kg}$, and those in the concrete samples are $\mathrm{1629.3\pm
171.3Bq/kg,6.5\pm 0.9Bq/kg}$ and $\mathrm{19.9\pm 3.4Bq/kg}$.
Besides the sample measurements, a portable gamma spectrometer manufactured by
ORTEC$^{\rm TM}$ is used to characterize the dispersed radioactive nuclides in
the environment of the CJPL areas. A portable HPGe spectrometer is used to
measure the gamma flux at CJPL. The detector consists of an HPGe crystal with
a mass of 709 g, the signals from which are stored and analyzed by the
ORTEC$^{\rm TM}$ DigiDART multichannel analyzer (MCA). The spectrometer is set
up to measure gamma rays with energies up to 3 MeV, and the energy resolution
(FWHM) is 1.6 keV at 1.33 MeV. Fig. 9 shows the in-situ gamma spectra at
different places.
Figure 9: In-situ gamma spectra at 3 places: CJPL hall (green line), CJPL PE
shielding house (purple line), and the ground laboratory outside the Jin-Ping
tunnel (blue line).
#### IV.3.2 Cosmic Ray (Muon)
The main components of the cosmic rays which can pass through the rock stratum
of the mountain are muons and neutrinos. The neutrino component can be ignored
because of its small interaction cross section with the detector materials.
However, it is very important to know the exact flux of cosmic-ray muons for
estimating the background event rate caused by cosmic rays, directly or
indirectly.
In order to obtain the exact value of the muon flux in CJPL, 6 plastic
scintillation detectors are employed. The volume of each plastic scintillator
is 1 m$\times$0.5 m$\times$0.05 m. The 6 detectors are divided into groups A
and B, each group having three plastic scintillation detectors. The three
detectors of each group are mounted upright on a shelf; the vertical distance
from one detector to its neighbor is 0.20 m, and the gap between the two
groups is about 0.20 m. Data are taken by a LabView-based DAQ system, shown
schematically in Fig. 10.
Figure 10: Schematic diagram of the cosmic-ray muon detection system.
The performance of the whole detection system was investigated on the ground
near the entrance at the east end of the tunnel before the measurements began
inside CJPL. A triple coincidence of the 3 plastic scintillation detectors in
one group is caused by a cosmic-ray muon. The signal of a muon event is much
larger than the noise, so the measurement environment is very clean.
The preliminary result is a muon flux of 0.16 per square meter per day, i.e.
about 1 in $10^{8}$ of the flux on the ground.
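A rough consistency check of this reduction factor, assuming the textbook sea-level muon flux of about one muon per square centimeter per minute:

```python
# Sea-level flux (assumed textbook value) converted to muons per square meter per day
sea_level_flux = 1.0 * 1.0e4 * 60 * 24     # 1 /cm^2/min  ->  ~1.4e7 /m^2/day
cjpl_flux = 0.16                            # measured preliminary value at CJPL, /m^2/day
print(cjpl_flux / sea_level_flux)           # ~1e-8, i.e. about 1 in 10^8
```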
#### IV.3.3 Radon Monitoring in CJPL
A decay product of $\mathrm{{}^{238}U}$, the noble gas $\mathrm{{}^{222}Rn}$,
as well as its decay daughters, also contributes observably to the natural
background radioactivity in underground laboratories, as it emanates from the
rock and can rather easily enter the detector. Variations of the air radon
concentration both in the hall and in the shielding house are continuously
measured using two Alphaguard radon monitors manufactured by SAPHYMO GmbH (see
Fig. 11). During long-period monitoring, the average air radon concentration
is $\mathrm{\sim 100Bq/m^{3}}$ without ventilation and $\mathrm{50Bq/m^{3}}$
with ventilation.
Figure 11: Radon Monitor in CJPL experiment Hall.
## V The CDEX experiment
### V.1 Introduction to CDEX
The main physics goal of the CDEX project is to search for WIMPs in a mass
range around 10 GeV with a sensitivity better than $\mathrm{10^{-44}cm^{2}}$.
Because of its many advantages, such as low radioactivity, high energy
resolution, high matter density and operational stability, the CDEX adopts the
high-purity point-contact germanium detector (PCGe) as both target and
detector. The recoil energy of a germanium nucleus for low-mass WIMPs is only
several keV (Fig. 12). Considering the quenching factor, the detection
threshold should be 100-300 eV for WIMPs in the mass range below 10 GeV. The
target or detector mass should be as large as possible because of the low
event rate. The detector mass of the first phase CDEX-1 is 1 kg and that of
the second phase CDEX-10 is 10 kg. The final goal of the CDEX project is to
set up a ton-scale Ge detector, CDEX-1t, in the CJPL.
Figure 12: The recoil energy of PCGe for different WIMP masses.
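The kinematics behind Fig. 12 and the threshold estimate above can be reproduced with the simple sketch below; the WIMP velocity and the quenching factor are round numbers assumed for illustration, not CDEX-specific values.

```python
# Maximum nuclear recoil energy for an elastic head-on collision: E_R(max) = 2 mu^2 v^2 / M_N.
def max_recoil_keV(m_chi_GeV, m_N_GeV=67.6, v_over_c=1.0e-3):
    mu = m_chi_GeV * m_N_GeV / (m_chi_GeV + m_N_GeV)   # WIMP-nucleus reduced mass, GeV
    return 2.0 * mu**2 * v_over_c**2 / m_N_GeV * 1e6   # keV

for m in (5.0, 10.0, 50.0):
    e_nr = max_recoil_keV(m)        # nuclear recoil energy, keV
    e_ee = 0.2 * e_nr               # visible ionization energy, assuming a ~20% quenching factor
    print(f"m_chi = {m:4.0f} GeV : E_R(max) ~ {e_nr:5.2f} keVnr ~ {e_ee:5.2f} keVee")
```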
Obviously, the CDEX is a kind of ultra-low-background experiment. The biggest
challenge is to reduce the background event rate to an acceptable level. The
expectation of background count level is less than $\mathrm{0.1cpd/keV/kg}$.
To ensure its successful operation, the CDEX must:
* •
hold the detector in the CJPL deep underground laboratory;
* •
establish an efficient shielding system to reduce the background further;
* •
design a large mass and low threshold detector with tiny amount of internal
radioactive isotopes.
As mentioned above, the CJPL, which has an overburden of about 2400m rock,
provides an ideal place to host the CDEX. In the following, we are going to
briefly introduce the detector and the shielding system.
#### V.1.1 Detector
The physics aim of the CDEX poses an unprecedented challenge to the design
of the CDEX detector. In summary, the detector should have the following
features:
* •
1 kg to 1000 kg of detector mass to compensate for the extremely low cross
section, namely to substantially increase the event rate;
* •
a 200-400 eV threshold; this is because the recoil energies are low and,
furthermore, only a fraction of the recoil energy (the quenching factor for Ge
recoils is of O(20)$\%$ at low energy) is generally in a detectable form, such
as ionization.
* •
very low concentration of internal radioactive isotopes.
As a semiconductor detector made of the purest material on the earth, the HPGe
detector was first proposed to detect WIMP by our group in 2004. By using
arrays of commercially available HPGe diodes (5 g, 1pF capacitance), TEXONO
achieved a 300 eV energy threshold. However, the further increase of readout
channels and monetary investment prevents the detector mass from reaching the
O(kg) scale. To reconcile the mass constraint with the noise and threshold
requirements, instead of the conventional coaxial HPGe detector commonly used
for small semiconductor detectors, the CDEX proposes to use a point-contact
HPGe detector (herein named PCGe) to directly detect the ionization produced
by the recoiling Ge nuclei. Fig. 13 shows the configurations of coaxial HPGe
(left) and PCGe (right) detectors. By using the point-contact technology, a
PCGe detector can reach a mass of order 1 kg with very small capacitance (of
order 1 pF) and promising intrinsic noise characteristics. Besides meeting the
requirements on mass, energy resolution and threshold, it also has the notable
advantage of an intrinsic ability to distinguish multi-site from single-site
particle interactions and to identify surface events. The mass of the detector
array can
reach a scale up to O(10)kg or even O(1000)kg.
Figure 13: The configuration of Coaxial HPGe (left) and Point-contact
HPGe(right).
#### V.1.2 Shielding system
The layout of the CDEX is shown schematically in Fig.14. It is located in the
CJPL, which has a PE (polyethylene) layer with a thickness of 1 m built to
decelerate and absorb fast neutrons. Taking into account the commonly accepted
WIMP density and elastic scattering cross sections, the expected event rate is
estimated as $\mathrm{<0.1kg^{-1}day^{-1}keV^{-1}}$. These extremely rare
events are very difficult to distinguish from the high backgrounds which come
from cosmic rays and natural radioactivity. Therefore passive and active
shielding systems are proposed (shown in Fig. 15). The outermost layer is a 20
cm lead layer to reduce the environmental gamma-ray radiation. The next layer
is a 15 cm steel supporting structure. Then, a 20 cm PE(B) (boron-doped
polyethylene) layer is used to absorb thermal neutrons. The innermost layer is
10 cm OFHC (oxygen-free highly-conductive copper) for absorbing residual rays.
The space inside the copper layer is the room for the HPGe detector and the
active shielding system. To reduce the background of radon gas, the inner
space of the shielding should be refreshed continuously with highly pure
nitrogen gas.
Figure 14: The layout of CDEX and surrounding rock.
Figure 15: The schematic diagram of the proposed passive shielding system.
Based on our Monte Carlo simulation of the environmental radioactivity and
cosmic rays using the Geant4 code, for the environment of the CJPL (rocks and
concrete): the rate of neutrons yielded from the rock and reaching the
innermost region of the passive shielding system is $\mathrm{3.129\times
10^{-12}cpd}$, and the rate of neutrons yielded from the concrete layer and
reaching the innermost region of the passive shielding is $\mathrm{6.490\times
10^{-10}cpd}$; these two quantities are much smaller than the criterion for
the neutron background demanded by our expected goal. For the passive
shielding system itself, we mainly consider the neutron background yielded
from the lead and copper shielding; the unit contents of $\mathrm{{}^{232}Th}$
and $\mathrm{{}^{238}U}$ have been simulated. The result is that the neutrons
produced by 1 ppm of $\mathrm{{}^{232}Th}$ in lead which finally reach the
innermost region of the passive neutron shielding amount to
$\mathrm{3.909\times 10^{-5}cpd}$, while the neutrons produced by 1 ppm of
$\mathrm{{}^{238}U}$ amount to 5.848 cpd; the neutrons produced by 1 ppb of
$\mathrm{{}^{232}Th}$ in copper which finally reach the innermost region of
the passive neutron shielding amount to $\mathrm{1.480\times 10^{-4}cpd}$,
while the neutrons produced by 1 ppm of $\mathrm{{}^{238}U}$ amount to 2.289
cpd. Thus the neutron background from the environment of the CJPL can be
reduced very efficiently by the shielding of the CDEX. The remaining neutron
background mainly comes from the passive shielding system itself. In order to
realize our expected low background, we have to restrict the radioactive
content of the copper and lead bricks.
Different active shielding methods are proposed for different phases of the
experiments. In the first phase, CDEX-1, a CsI(Tl) or NaI(Tl) veto detector
surrounding the HPGe detector is proposed. A liquid argon (LAr) veto is
proposed for the second phase, CDEX-10. In this design, the LAr detector
serves as both the passive shielding detector and the low-temperature medium
for the HPGe detector.
### V.2 CDEX-1
As the first phase of the CDEX experiment, a detector with HPGe at the 1 kg
mass scale, named CDEX-1, has been designed and runs first. It includes a
ready-made 20 g ULEGe stlin2009 (Ultra-LEGe, Ultra Low Energy high-purity
Germanium) detector and a 1 kg PPCGe (P-type Point-Contact Germanium)
detector.
#### V.2.1 20g-ULEGe
The 20g ULEGe (N-type), manufactured by Canberra, France, in 2005, actually
consists of 4 identical crystal elements. Each 5 g element, whose
cross-sectional structure is shown in Fig. 16, has a semi-planar configuration
with a P+ contact on the outer surface and an N+ contact of small diameter.
The remaining surface encircling the N+ contact is passivated to suppress the
surface dark current. Fig. 17 depicts the horizontal cross section of the
cryostat and the positional relationship of the 4 crystals. The cryostat end
cap has a 0.6 mm thick carbon composite window so that an external soft X-ray
calibration can be carried out. Near the N+ contact, a low-noise FET is
installed, which reads the signal and feeds it to a pulsed optical feedback
preamplifier. Each crystal element has two identical outputs, which are
connected to the high-impedance inputs of the downstream modules.
Figure 16: 20g-ULEGe detector geometry.
Figure 17: The horizontal cross section of cryostat and crystal array.
Fig.18 simply describes the DAQ setup of the 20g-ULEGe experiment. The output
from the preamplifier is directly connected to the conventional spectroscopy
amplifier (Canberra 2026), which has high input impedance. The signal is then
split: one is input into a FADC (CAEN V1724, 100MHz bandwidth) while the other
is input into a discriminator to generate the trigger. Meanwhile, random
trigger and pulse modules are used to study efficiency features etc. The data
from the FADC are transferred to a PC through a duplex optical fiber.
Figure 18: Simplified Schematic Diagram of 20g-ULEGe DAQ setup.
#### V.2.2 1kg-PPCGe
The Germanium detector 1kg-PPCGe(P-type) manufactured by Canberra, France, was
transported to the CJPL in 2011. This 1kg-PPCGe detector, using new point
contact technology, has a larger mass: 1kg (single crystal). The crystal
cylinder has an N+ contact on the outer surface, and a tiny P+ contact stands
as the central electrode. The diameter and depth of the P+ contact are of
order 1 mm. The small diameter reduces the capacitance (of order 1 pF) of the
detector and thus improves the intrinsic noise performance luke1989 ;
barbeau . The cryostat is designed fully closed within a 1.5 mm thick copper
layer, and there is no thin polymer film window.
The 1kg-PPCGe possesses two preamplifiers, as shown in Fig.19. The p-type
contact signal is read out by a pulsed optical feedback preamplifier, with a
low noise EuriFET nearby, while the n-type contact signal is read out by a
resistive feedback preamplifier. Both preamplifiers have two identical
outputs: OUT T and OUT E. All outputs are connected to the downstream modules
which have high input impedance.
For the second phase of CDEX-1 experiment, the DAQ set up for running the 1kg-
PPCGe is simply illustrated in Fig.20. Considering the low energy range we are
interested in, a fast timing amplifier (Canberra 2111) is utilized to amplify
the preamplifier signal, and then input into a faster FADC (CAEN V1721, 500MHz
bandwidth). The other preamplifier outputs are directly connected into
conventional spectroscopy amplifiers (Canberra 2026). The signal from the
N-type contact is then fed into the FADC (CAEN V1724, 100MHz bandwidth), while
the one from the P-type contact is split into the same FADC and into a
discriminator for trigger control. Again, random trigger and pulse modules are
used to study efficiency features etc. The data are transferred to PCs through
a duplex optical fiber.
Figure 19: The front-end electronics of 1kg-PPCGe in CDEX-1.
Figure 20: Simplified Schematic Diagram of 1kg-PPCGe DAQ setup.
#### V.2.3 Veto detector
The passive shielding system, which has been well established, can efficiently
block outside gamma rays from coming inside, but not 100$\%$ of them. Besides,
the shielding materials, and even the HPGe detectors, themselves have
radioactivity at certain levels. If these rays deposit energy in the HPGe
detector, they would be hard to distinguish from WIMP signals. Luckily, due to
the extremely low cross section of WIMPs colliding with ordinary matter, it is
almost impossible for a WIMP to deposit energy both in the HPGe detectors and
in the surrounding materials at the same time. So, veto detectors are
installed to surround the HPGe detector. By this design, the output signal
from the HPGe is used as the trigger signal; if there is also a signal from
the surrounding veto detectors within the same time window, the detected event
is not from a collision between a WIMP and the detector material and will be
filtered out. In addition, these veto detectors are also used for passive
shielding. In summary, the veto detector should have the following features:
* •
High Z and high density to achieve high veto efficiency and the high shielding
ability;
* •
Low internal radioactivity;
* •
Low threshold to increase the veto efficiency.
In CDEX-1, CsI(Tl) or NaI(Tl) scintillation detectors are proposed for the
veto detector. Besides the features mentioned above, the CsI(Tl) or NaI(Tl)
scintillation detectors have the capability of pulse shape discrimination
(PSD), are robust (CsI(Tl) being reasonably soft and malleable, less brittle
and less deliquescent), and are easy to make in a large volume. Fig. 21 shows
the array of 18 CsI(Tl) veto crystals arranged around the cryostat; each one
is read out by a PMT through a light guide. The advantages of this design are:
1) the propagation length of the scintillation light is short, which is good
for light collection and lowers the threshold; 2) it is easy to extend to
accommodate a larger PCGe detector; 3) the CsI(Tl) crystal bars are easy to
produce.
Figure 21: The schematic diagram of CsI(Tl) veto detector system.
### V.3 CDEX-10
Two detectors of CDEX-1, the 20g ULE-HPGe and 1kg-PPCGe, are running at
present in CJPL. The data analysis framework has also been set up for the
forthcoming data analysis. The CDEX-1 experiment is just the first stage for
using low energy threshold Ge detector to directly detect dark matter. The
CDEX-1 experiment will provide more detailed information about the Ge detector
performance and the background in the active and passive shielding systems of
the CDEX in CJPL; at the same time, the preliminary physical results on dark
matter search with these two Ge detectors will be given by the CDEX
Collaboration soon.
According to the recent model-dependent estimates of the cross sections for
collisions between WIMPs and nuclei, the detector target mass should be up to
the ton scale if the dark matter experiment is to "see" the dark matter, i.e.
achieve statistically sufficient event rates with relatively high sensitivity.
So the CDEX collaboration plans to directly detect dark matter with a
low-energy-threshold Ge detector of ton-scale target mass. There are several
basic conditions which should be considered for the future ton-scale Ge
detector. First, each Ge detector module should have a low energy threshold,
down to the sub-keV scale, while the total target mass can be enhanced by
increasing the number of Ge detector modules. The second one is that the
Ge detector has to include a cooling system to guarantee a low working
temperature for the Ge detector. The third one is that the Ge detector should
possess a veto detector serving as an active shielding against the background
contribution coming from the Ge detector itself and the material near the Ge
detector.
With these considerations, the whole detector system is composed of two parts:
the Ge detector, which can go up to the ton scale and is built from 1 kg
point-contact Ge detector modules, and the liquid argon veto detector serving
as the cooling system and active shielding. In order to test the conceptual
design of this detector, learn more details about the technology and gain
experience with the Ge detector and LAr veto system, the CDEX Collaboration
has designed and will first run a 10 kg-scale Ge detector with a LAr cooling
and veto system.
Based on the studies of the PCGe detector, a new idea about the active
shielding system of the PCGe has emerged. The PCGe detector will reach a ton-
scale in the future in order to be more sensitive to dark matter. In that
scenario, an active shielding system made of solid scintillators would find it
very difficult to enclose the larger PCGe array system while keeping the PCGe
detector cooled down to liquid nitrogen temperature. So a new type of active
shielding technology for a large-mass PCGe array system has to be invented.
Liquid argon (LAr) is a good candidate: the temperature of LAr is suitable for
the Ge detector, and LAr is also a kind of scintillator, so it can serve as an
active shielding system while the Ge detector is immersed in it.
The CDEX Collaboration has completed the physical and structure simulation
study on the CDEX-10 detector system, and also the conceptual design and
related studies on the Ge detector, LAr veto detector, LAr cooling system and
passive shielding system.
The structure of the whole CDEX-10 detector and the shielding system is as
follows. The interior of a 1 m thick polyethylene layer (red) serves as the
neutron shielding room. This neutron shielding room had been constructed while the CJPL
was under construction and now we have started to carry out several
experiments inside this polyethylene room. The blue layer is lead shielding
against the outside gamma background. The yellow layer is Oxygen Free High
purity Copper (OFHC) which shields out the residual gamma background passing
through the lead layer and also shields the internal background radiated by
the lead shielding. Inside the OFHC layer there is the LAr veto detector
system which serves as both active and passive shielding. Three 0.5 Kg or 1 kg
PCGe detectors are encapsulated into ultra-pure copper or aluminum tubes which
are highly evacuated. One block of the OFHC (Blue box) is mounted to shield
the PCGe detector from noise produced by the electronics in the tube. Several
encapsulated tubes with three PCGe detectors are immersed in the LAr vessel
for cooling and active shielding. The total mass of the PCGe detectors is
about 10 kg. Part of its energy is deposited and scintillation light is
produced when a gamma or beta ray enters the LAr. The scintillation light is
read out by photomultiplier tubes (PMTs), and the signal can then serve as a
veto signal. The Ge detector tubes for the 1 kg p-type point-contact germanium
detector have been manufactured by the Canberra Company according to the CDEX
requirements and design. This 1 kg p-type PCGe detector is also the biggest
one in the world for a dark matter search experiment so far. The 1 kg PCGe
detector has been taking data since its performance test.
In the CDEX-10 detector, the scintillation light is read out by the PMTs on
the top of the LAr volume. The number of PMTs is determined by the details of
the physical design; the layout of the PMT deployment is given in the
corresponding plot.
The design of the cooling and active shielding systems for CDEX-10 LAr has
been completed. From the physical point of view, the main requirements for the
LAr cooling and active shielding systems are to keep the LAr temperature
uniform and stable within a small temperature range to alleviate the
temperature dependence of the performance of the PCGe detector, and to prevent
the formation of gas bubbles and the concomitant tiny vibrations of the LAr,
which may induce extra noise in the Ge detector. A number of methods for
satisfying such stringent restrictions are considered to maintain a constant
liquid level and control the generation of gas bubbles in the liquid argon
under zero boil-off conditions for long periods of time. Large convective
motions and pool boiling are avoided by thermally optimized cryogenic systems
that reduce environmental heat leaks to the low-temperature cryogen; in
particular, an actively cooled LAr shield that surrounds the cryostat is used
against heat radiation.
### V.4 Electronics (FADC, DAQ)
The CDEX electronics consists of three parts: the front-end electronics, the trigger system and the data acquisition system:
* •
The front-end electronics includes the front-end amplifier, the main amplifier, the flash analog-to-digital converter, as well as the HV power supply and the slow-control electronics.
* •
The trigger system contains the trigger logic and the clock distribution system.
* •
The data acquisition system contains the data read-out electronics, the slow-control electronics, the related data servers and storage, communication, display and HMI, etc.
#### V.4.1 The Current Design of Electronics
Considering future development, a set of multi-channel electronics for the CDEX detectors has been designed. Fig. 22 shows the design architecture of the electronics for CDEX; the same scheme is used for both the present electronics and the other parts of the system. In the following discussion, a thorough description of the ongoing electronics is given first, followed by the next generation of electronics that is under research and development, as well as the related engineering problems.
Figure 22: The design architecture of the ongoing CDEX electronics.
#### V.4.2 The CDEX-1 and CDEX-10 Electronics Architectures
It is foreseen that the larger-scale CDEX detector will have 100$\sim$1000 signal-readout channels in the future. It is therefore necessary to research and develop an electronic system that can be used for both the HPGe and the veto detectors. The whole electronic system includes the front-end amplifier, main amplifier, FADC and data readout, etc. The electronics group is currently focusing its efforts on the design and planning of each part of the electronic system, as described in the following.
* •
FADC:
According to the physics requirements, two different kinds of FADC (Flash Analog-to-Digital Converter) electronics are needed for reading out the pulse shape from the HPGe detector or the veto detector: one is a 100 MHz, 14-bit, 16-channel FADC/GE plug-in with standard VME dimensions; the other is a 1 GHz, 12-bit, 4-channel FADC/AC plug-in, also with standard VME dimensions. The FADC/GE plug-in is used to read out the slow-shaping-time signal of the HPGe, and the FADC/AC plug-in is used to read out the fast-shaping-time signal from either the HPGe or the photomultiplier tubes (PMTs) of the veto detector. (A rough estimate of the raw throughput these plug-ins imply is sketched after this list.)
For convenience and to satisfy different physics demands, the design of the FADC system adopts a new concept which is also being applied in many labs at CERN: the same VME board (main board) is combined with different FADC mezzanine cards, so that different experimental configurations are realized simply by choosing the mezzanine card. The mezzanine cards are built from FADC chips of different performance and channel count, and their channels are connected to the main board via a high-speed connector. The main board then performs data caching, processing and trigger judgment, and the signals are read out via the standard VME bus or the front-end fiber-optic interface of the back board.
* •
The Readout Electronics:
The CDEX electronic readout system is integrated on the Wukong board, an electronic product developed by researchers at Tsinghua University, and the RAIN200A module is used to read out the data of each Wukong board. The RAIN200A module is based on a PowerPC processor running Linux 2.6.x, and it transfers the readout data from the FADC to Ethernet with a bandwidth of at least 95 Mbps.
The RAIN200A has the following characteristics: the core processor is the 32-bit automotive-grade MPC5125 made by Freescale, with a clock frequency of 400 MHz; the core board is equipped with 256 MB of industrial-grade DDR2 memory and 4 GB of NAND flash; the core board provides a 100 Mbps Ethernet interface and a USB 2.0 High-Speed interface; and the system runs a Linux 2.6.29 kernel with the RT patch.
* •
DAQ:
The CDEX DAQ includes the part that acquires data from the electronic units, the online data selection, display and storage, as well as the off-line data analysis, etc.
Because the electronic units use Ethernet and TCP/IP as the data-readout interface, the DAQ merely needs to acquire data from them via the standard TCP/IP protocol, which makes the scheme very convenient. The architecture of the entire DAQ system is also an Ethernet-based switched one, so standard commercial switches, routers and Ethernet cards can be used to acquire the data. Fig. 23 shows the Wukong FADC/DAQ system for CDEX-1 and CDEX-10; a minimal sketch of such a TCP/IP receiver is given after Fig. 23.
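As a rough cross-check of the bandwidth implied by the FADC plug-ins described in the first item of this list, one can simply multiply the sampling rate, sample width and channel count. Only the quoted sampling rates, resolutions and channel counts below come from the text; the trace length and trigger rate used in the triggered-readout estimate are hypothetical assumptions.

```python
# Rough raw-throughput estimate for the two FADC plug-in types described above.
# Only the sampling rate, resolution and channel counts come from the text;
# the assumed waveform length and trigger rate are hypothetical illustrations.

def raw_rate_bps(sample_rate_hz, bits, channels):
    """Continuous raw data rate of one plug-in if every sample were kept."""
    return sample_rate_hz * bits * channels

ge_plugin = raw_rate_bps(100e6, 14, 16)   # 100 MHz, 14 bit, 16 channels (FADC/GE)
ac_plugin = raw_rate_bps(1e9, 12, 4)      # 1 GHz, 12 bit, 4 channels (FADC/AC)

print(f"FADC/GE continuous raw rate: {ge_plugin/1e9:.1f} Gbit/s")
print(f"FADC/AC continuous raw rate: {ac_plugin/1e9:.1f} Gbit/s")

# With triggered readout the sustained rate is far smaller. Assuming a
# hypothetical 10 us trace per trigger and 100 Hz trigger rate per channel:
trace_bits = 100e6 * 10e-6 * 14            # samples per trace * bits (FADC/GE)
sustained = trace_bits * 100 * 16          # per-channel rate * channel count
print(f"Triggered FADC/GE rate (assumed conditions): {sustained/1e6:.2f} Mbit/s")
```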
Figure 23: The Wukong FADC/DAQ system for CDEX-1 and CDEX-10.
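Since the readout units expose their data over standard Ethernet and TCP/IP, the acquisition side can in principle be a plain socket client. The fragment below is only a minimal, hypothetical sketch of such a receiver: the host, port and length-prefixed framing are assumptions made for illustration and are not the actual Wukong/RAIN200A protocol.

```python
# Minimal sketch of an Ethernet/TCP DAQ client, assuming a hypothetical framing
# in which each event block is preceded by a 4-byte big-endian length.
# Host, port and framing are illustrative; they are not the real CDEX protocol.
import socket
import struct

def recv_exact(sock, n):
    """Read exactly n bytes from the socket or raise on disconnect."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("readout unit closed the connection")
        buf += chunk
    return buf

def acquire(host="192.168.1.10", port=5000, n_events=100):
    """Yield raw event blocks from one readout unit (assumed framing)."""
    with socket.create_connection((host, port)) as sock:
        for _ in range(n_events):
            (length,) = struct.unpack(">I", recv_exact(sock, 4))
            yield recv_exact(sock, length)   # hand the block to the online analysis

if __name__ == "__main__":
    for i, evt in enumerate(acquire(n_events=10)):
        print(f"event {i}: {len(evt)} bytes")
```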
## VI Detector performance
### VI.1 Physical requirement
For a detector used in dark matter detection, the most important requirement is a low background level. CDEX is to detect WIMPs with a high-purity germanium detector, focusing especially on WIMPs of low masses ($<$10 GeV). This imposes another important requirement on the detector: a low energy threshold. The CDEX Collaboration adopts Point-Contact germanium detectors (PCGe) for the low-mass WIMP search, which can realize a sub-keV energy threshold with a kg-size mass. The future ton-scale detector system of CDEX will be realized based on this kind of kg-size PCGe detector. The PCGe detector has been developed from the conventional Ultra-Low-Energy-Threshold germanium detector, and many efforts have been devoted to optimizing the application of the PCGe in dark matter searches.
* •
Pulse shape analysis of near-noise-edge events extends the physics reach.
The PCGe detector can provide an ultra-low energy threshold of less than 500 eV. According to dark matter theory, the differential rate of events in which a nucleus recoils after being struck by an incident WIMP increases exponentially as the recoil energy decreases (see the sketch after this list). The pulse shape analysis of near-noise-edge events is therefore a very important task for the CDEX experiment in order to lower the energy threshold of the PCGe detector.
* •
Pulse shape analysis of surface versus bulk events to characterize an important background channel.
The location at which an event is detected is another important parameter for background discrimination. For a p-type PCGe detector, the outer N+ or P+ junction produces a thin layer that is called the "dead" layer. In fact, this thin layer is not really "dead": events occurring in it give a pulse shape different from that of bulk events produced in the interior of the PCGe detector. Pulse shape discrimination methods should therefore be developed for distinguishing the surface events from the bulk events.
* •
Sub-keV background clarification and suppression.
A PCGe detector with a sub-keV energy threshold opens a new energy window that previous experiments could not cover because of their relatively high energy thresholds. The CDEX experiment should first clarify the sources and types of background in the sub-keV energy region. Based on that knowledge, methods for background discrimination and suppression will be developed.
* •
Fabrication of advanced electronics for Ge detectors.
Pulse shape discrimination methods have been developed for suppressing noise, especially near the energy threshold, and for discriminating surface events from bulk events. All these methods for obtaining a lower energy threshold and a low background level rely on the performance of the front-end electronics of the PCGe detector. Another important requirement for the PCGe detector is therefore the fabrication of advanced electronics for the Ge detector that provide a relatively low noise level.
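To illustrate numerically why the sub-keV threshold emphasized in the first item of this list matters, one can integrate a quasi-exponential recoil spectrum above two different thresholds. The characteristic energy and normalization in the sketch below are assumed, purely illustrative values, not CDEX results.

```python
# Illustration of how strongly the integrated event rate depends on the
# threshold for a quasi-exponential recoil spectrum dR/dE ~ exp(-E/E0).
# The characteristic energy E0 and the normalization are assumed values.
import numpy as np

E0 = 0.8          # keV, assumed spectral slope for a light WIMP
norm = 1.0        # arbitrary normalization

def integrated_rate(threshold_keV, e_max=5.0, n=10000):
    """Numerically integrate dR/dE = norm*exp(-E/E0) from threshold to e_max."""
    e = np.linspace(threshold_keV, e_max, n)
    return np.trapz(norm * np.exp(-e / E0), e)

r_1000 = integrated_rate(1.0)   # 1 keV threshold
r_500 = integrated_rate(0.5)    # 0.5 keV threshold
print(f"rate above 1.0 keV : {r_1000:.3f} (arb. units)")
print(f"rate above 0.5 keV : {r_500:.3f} (arb. units)")
print(f"gain from lowering the threshold: x{r_500 / r_1000:.2f}")
```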
### VI.2 Performance of CDEX-1
Since the first test run of the 20g-ULEGe in November 2010, we have obtained data during both the commissioning period and the formal data-taking period. Data analysis is under way to determine the final spectra, and the properties of the 20g-ULEGe in the new CJPL environment are now well characterized. Commissioning data of the 1kg-PPCGe have also been acquired. The properties of these detectors are presented in the following subsections.
#### VI.2.1 Linearity calibration of the detectors 20g-ULEGe and 1kg-PPCGe
Because of its small volume, the detection efficiency of the 20g-ULEGe is too low to obtain prominent peaks for calibration, even though the calibration period was prolonged to several months. Moreover, no radioactive source is available for further tests in the CJPL at present. An X-ray tube amtek, which can generate X-rays of up to about 30 keV, is therefore used as a substitute for a source. When the X-rays generated by the tube hit a mixed powder of titanium dioxide (TiO2) and potassium permanganate (KMnO4), the resulting characteristic X-rays can be used for the high-gain (low-energy range) calibration. The surrounding copper shielding contributes characteristic X-rays as well. Thanks to the thin 0.6 mm carbon composite window, the overwhelming majority of the low-energy X-rays are able to penetrate and deposit their energies in the crystals. A sketch of the calibration setup for the 20g-ULEGe is shown in Fig. 24.
Figure 24: Sketch of calibration for 20g-ULEGe by X-ray tube.
The calibration results of the 20g-ULEGe are shown in Fig. 25, with each sub-figure corresponding to a 5 g sub-detector. In this scheme, the characteristic X-rays of titanium, manganese and copper, together with the random trigger tek (zero point), are used (Table 1). From the fit to these seven points, we observe good linearity in the low energy range below 10 keV. The region above 10 keV can be studied with the available radiation sources.
Figure 25: Calibration result of 20g-ULEGe.
Table 1: Sources for calibration of 20g-ULEGe Source | Random trigger | Ti K$\alpha$ | Ti K$\beta$ | Mn K$\alpha$ | Mn K$\beta$ | Cu K$\alpha$ | Cu K$\beta$
---|---|---|---|---|---|---|---
Energy (keV) | 0 | 4.509 | 4.932 | 5.895 | 6.490 | 8.041 | 8.904
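The linearity fit described above is an ordinary least-squares straight line through the zero point and the characteristic X-ray energies of Table 1. The sketch below illustrates such a fit; the ADC channel values are hypothetical placeholders, since only the line energies are quoted in the text.

```python
# Linear energy calibration from characteristic X-ray lines plus the random
# trigger zero point (energies from Table 1). The ADC channel values are
# hypothetical placeholders; only the line energies come from the text.
import numpy as np

energies_keV = np.array([0.0, 4.509, 4.932, 5.895, 6.490, 8.041, 8.904])
adc_channels = np.array([12.0, 913.0, 998.0, 1191.0, 1310.0, 1620.0, 1793.0])  # assumed

# Least-squares straight line: E = gain * channel + offset
gain, offset = np.polyfit(adc_channels, energies_keV, 1)
residuals = energies_keV - (gain * adc_channels + offset)

print(f"gain   = {gain*1000:.3f} eV/channel")
print(f"offset = {offset*1000:.1f} eV")
print("residuals (eV):", np.round(residuals * 1000, 1))
```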
The 1kg-PPCGe does not have a carbon composite window, but it has a large volume and mass, so its detection efficiency is high enough to carry out a calibration with a few days of background data. The result for one data set is shown in Fig. 26, with three sub-figures corresponding to the different gains. The data-taking period is 21.9 days and many background peaks are visible. Besides the zero point from the random trigger, we use 4-8 points to complete the calibration fit for each gain (Table 2). The results also show good linearity for the 1kg-PPCGe.
Figure 26: Calibration result of 1kg-PPCGe.
Left: High Gain. Middle: Medium Gain. Right: Low Gain.
Table 2: Sources for calibration of 1kg-PPCGe High Gain | Medium Gain | Low Gain
---|---|---
Source | Energy (keV) | Source | Energy (keV) | Source | Energy (keV)
$\mathrm{{}^{65}Zn}$ | 8.979 | $\mathrm{{}^{214}Pb}$ | 214.98 | $\mathrm{{}^{214}Pb}$ | 214.98
$\mathrm{{}^{68}Ga}$ | 9.659 | $\mathrm{{}^{214}Pb}$ | 295.21 | $\mathrm{{}^{214}Pb}$ | 295.21
$\mathrm{{}^{68}Ge}$ | 10.367 | $\mathrm{{}^{214}Pb}$ | 351.92 | $\mathrm{{}^{214}Pb}$ | 351.92
$\mathrm{{}^{73,74}As}$ | 11.103 | $\mathrm{Ann}$ | 511.0 | $\mathrm{{}^{214}Bi}$ | 609.3
$-$ | - | $\mathrm{{}^{208}Tl}$ | 583.2 | $\mathrm{{}^{214}Bi}$ | 1120.3
$-$ | - | $\mathrm{{}^{214}Bi}$ | 609.3 | $\mathrm{{}^{214}Bi}$ | 1764.5
$-$ | - | $\mathrm{{}^{228}Ac}$ | 911.2 | $\mathrm{{}^{214}Bi}$ | 2204.2
$-$ | - | $\mathrm{{}^{214}Bi}$ | 1120.3 | - | -
#### VI.2.2 Energy resolution of detectors 20g-ULEGe and 1kg-PPCGe
With the calibration data, the energy spectra as well as the energy resolution of the detectors have been obtained.
The calibrated spectra of the 20g-ULEGe are plotted in Fig. 27. The FWHMs (Full Width at Half Maximum, from Gaussian fits) are about 200 eV at 5 keV and about 300 eV at 9 keV. The four sub-detectors are very close to each other in all technical indices and practical performance. Besides the statistical limitation, the X-ray tube, which has a wide emission solid angle and notable energy dispersion, also degrades the FWHM, so the energy resolution of the system is expected to be better than that of its predecessor products.
Figure 27: Calibrated energy spectra of 20g-ULEGe.
The calibrated spectra of the 1kg-PPCGe are plotted in Fig. 28. The FWHMs are about 200 eV at 10.367 keV and about 3.525 keV at 1120.3 keV. For many other peaks, the FWHMs are limited by statistical errors and thus need to be improved with further data accumulation.
Figure 28: Calibrated energy spectra of 1kg-PPCGe. Left: High Gain. Middle:
Medium Gain. Right: Low Gain.
#### VI.2.3 Noise level of the detectors 20g-ULEGe and 1kg-PPCGe
Electronic noise crucially affects the detection threshold. To study the noise level, events tagged by the random trigger are selected and projected onto energy spectra, which are then fitted with a Gaussian function. The FWHMs of the distributions are less than 100 eV for both the 20g-ULEGe (crystal 2) and the 1kg-PPCGe, which supports a threshold lower than 500 eV (Figs. 29, 30).
Figure 29: Noise distribution of 20g-ULEGe by random trigger.
Figure 30: Noise distribution of 1kg-PPCGe by random trigger (High Gain).
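The noise figures quoted above follow from fitting the random-trigger distribution with a Gaussian and converting the fitted width into a FWHM. A minimal sketch of that procedure on simulated noise is given below; the simulated noise width is an assumption standing in for real data.

```python
# Sketch of the noise-level estimate: fit the random-trigger "energy"
# distribution with a Gaussian and quote FWHM = 2.355 * sigma.
# The simulated noise width is an assumed stand-in for real data.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
noise_eV = rng.normal(loc=0.0, scale=40.0, size=20000)   # assumed ~40 eV sigma

counts, edges = np.histogram(noise_eV, bins=100)
centers = 0.5 * (edges[:-1] + edges[1:])

def gauss(x, a, mu, sigma):
    return a * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

p0 = [counts.max(), 0.0, 30.0]
popt, _ = curve_fit(gauss, centers, counts, p0=p0)
fwhm = 2.355 * abs(popt[2])
print(f"fitted sigma = {abs(popt[2]):.1f} eV, FWHM = {fwhm:.1f} eV")
```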
#### VI.2.4 Stability of the detector
The validity of the data, which reflects the system stability, needs to be checked before the overall data analysis. Several parameters describing the system behavior are tracked over time, including the trigger rate, the pulse-shape pedestal, the noise level, etc. Figs. 31 and 32 present the trigger-rate history of the 20g-ULEGe and the 1kg-PPCGe, in which the average rate per hour is counted. Abnormal periods, especially those with an anomalous deviation from the average level, should be carefully inspected and discarded if they prove to be invalid.
Figure 31: Triggering rate status of 20g-ULEGe.
Figure 32: Triggering rate status of 1kg-PPCGe.
#### VI.2.5 Trigger efficiency
The trigger efficiency has to be checked before the data analysis is carried out stlin2009 . It determines the survival fraction of the events that pass the discriminator threshold. A dedicated pulse generator that can supply small signals in the range 0-2 keV can be used to estimate the trigger efficiency in the low-energy region. Such a pulse generator is under development right now, and the relevant tests will be performed soon.
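A common way to quantify the trigger efficiency from such pulser scans is to fit an error-function turn-on curve to the survival fraction versus injected amplitude. The sketch below assumes hypothetical pulser amplitudes and pass fractions; it is meant only to illustrate the fit, not to reproduce CDEX numbers.

```python
# Sketch of a trigger-efficiency estimate: fit an error-function turn-on
# to the survival fraction of small injected pulser signals (0-2 keV).
# The pulser amplitudes and pass fractions below are assumed numbers.
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf

pulse_keV = np.array([0.1, 0.2, 0.3, 0.4, 0.5, 0.7, 1.0, 1.5, 2.0])
efficiency = np.array([0.01, 0.08, 0.35, 0.70, 0.90, 0.99, 1.00, 1.00, 1.00])  # assumed

def turn_on(e, e50, sigma):
    """Error-function threshold curve: 50% efficiency at e50, width sigma."""
    return 0.5 * (1.0 + erf((e - e50) / (np.sqrt(2.0) * sigma)))

popt, _ = curve_fit(turn_on, pulse_keV, efficiency, p0=[0.35, 0.1])
print(f"50% trigger point: {popt[0]*1000:.0f} eV, turn-on width: {popt[1]*1000:.0f} eV")
```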
## VII Data analysis
### VII.1 Process
The data analysis is a very important step in understanding the energy spectra of the HPGe detectors. Usually three sources contribute to the HPGe spectra: electronic noise and microphonic interference, physical background events, and dark matter events. The first purpose of the data analysis is to discriminate the noise and the physical background events from the total energy spectra. Different methods need to be developed according to the characteristics of the different sources.
The signals of electronic noise and microphonic interference do not have the Gaussian-like waveform that physical events should have, and can therefore be distinguished by PSD (Pulse Shape Discrimination) methods. Various parameters describing the waveform of the signal, for instance the time-to-peak, rise time and fall time, can be defined and used for discrimination in parameter plots. In contrast to these anomalous waveforms, similarity to the Gaussian-like pulse shape is the criterion for identifying the type of source in the calibration data.
In particular, electronic noise has a crucial impact on the threshold and needs to be handled with great caution. Physical events, in principle, follow a Gaussian distribution in a two-dimensional energy-energy parameter plot, so the threshold can be lowered further in this parameter plot than in a single energy spectrum. A noise edge cut, which runs along the tangent of the Gaussian distribution, is then set to lower the threshold. Since the cut depends on energy, the efficiency-energy curve should be estimated from pulse calibration data or anti-Compton-tagged data.
In dark matter search experiments one usually looks for recoiling nuclei; such events may be entangled with the physical background from neutrons and with electron recoil events. Several experiments, such as CDMS cdms , XENON100 xenon and CRESST-II cresst , use two distinct signals, for example scintillation and ionization, to reject electron recoil events. This is not the case for CDEX, which only measures the ionization energy deposit. Another difference is that CDEX needs to consider the quenching factor of the energy deposited by the recoiling nucleus.
Since the pulses caused by electron recoils and nuclear recoils do not look different, the normal PSD method is less efficient here, and other approaches are needed. There are mainly two ways to subtract the background events. One is to statistically eliminate the visible and invisible characteristic X-ray peaks; the other is to apply an algorithm to reduce surface events occurring in the thin insensitive layer. In the low energy region of interest, the visible characteristic peaks usually come from K-shell X-rays and the invisible ones are caused by the L-shell X-ray contribution; both have been analyzed theoretically and experimentally in some detail aalseth . On the other hand, the surface insensitive layer of the PPCGe detector has a weaker electric field and can therefore distort the energy deposition. Such events have a slow pulse shape at the preamplifier output but are degenerate with the electronic noise. The wavelet method marino can reduce the noise by filtering and shaping the signal samples.
With all the aforementioned processes, the final energy spectrum is obtained. Based on the selected physics model and statistical method, the final spectrum is then interpreted as a physics result, usually in the form of exclusion curves.
### VII.2 Strategy
In order to measure cross sections for low WIMP masses, elaborate data analysis is essential to lower the threshold and single out background events from the raw data. The threshold can be lowered by removing noise events, and the non-WIMP event rate can be suppressed by subtracting microphonic events as well as physically identified non-WIMP events. Microphonic events, coming from violent changes of the environment or instabilities of the system, usually have abnormal pulse shapes. Non-WIMP events may consist of multi-site events (events with several coincident incident particles), surface events (events with energy deposited in the dead layer at the surface), etc. To attain this goal, a set of variables or parameters is defined, either event by event or statistically. Fig. 33 illustrates the notation of the main empirical parameters; Ped, for example, is calculated as the average pedestal within the first 200 FADC time units.
Figure 33: Notation of several parameters.
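The parameters sketched in Fig. 33 are simple functions of the digitized trace. The fragment below computes Ped exactly as described above (the mean of the first 200 FADC samples); the definitions used for the amplitude, time-to-peak and area are reasonable assumptions rather than the exact CDEX definitions.

```python
# Extraction of the basic pulse-shape parameters discussed above from one
# digitized FADC trace. "Ped" follows the text (mean of the first 200 samples);
# the amplitude, time-to-peak and area definitions are reasonable assumptions.
import numpy as np

def pulse_parameters(trace):
    trace = np.asarray(trace, dtype=float)
    ped = trace[:200].mean()              # pedestal: first 200 FADC time units
    corrected = trace - ped
    peak_index = int(np.argmax(corrected))
    amplitude = corrected[peak_index]     # pulse height above pedestal
    time_to_peak = peak_index             # in FADC time units (assumed definition)
    area = corrected.sum()                # integrated charge proxy (assumed)
    return {"Ped": ped, "Amp": amplitude, "TTP": time_to_peak, "Area": area}

# Toy trace: flat pedestal plus one exponential-tail pulse starting at sample 300.
t = np.arange(1024)
toy = 100.0 + 50.0 * np.exp(-(t - 300.0) / 80.0) * (t >= 300)
print(pulse_parameters(toy))
```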
Some basic cuts for the selection of pulse shapes, such as on the stability of the pedestal, the time-to-peak and the linearity between area and amplitude, are applied first. After the pulse shape selection, three crucial cuts are applied, as described in the following subsections. The derived final spectrum is then used to interpret the physics, commonly by making exclusion plots or indicating allowed islands in the parameter space.
Moreover, to identify the noise and microphonic events, reference pulse shapes of physics events are needed. Such physics events can be selected within characteristic X-ray peaks, either from calibration data or from background measurements.
#### VII.2.1 PSD cut
Noise and microphonic events, having pulse shapes different from those of physics events, can be distinguished through the PSD (Pulse Shape Discrimination) method. The parameters of each event pulse at different shaping times are considered, and the resulting scatter plot is shown in Fig. 34. The two parameters are the pulse amplitudes with shaping times of $\mathrm{6\mu s}$ and $\mathrm{12\mu s}$, respectively. The red dots roughly along the diagonal represent the physics and noise events, while the black dots in the band near the axes correspond to the interference events. Taking the straight (blue) lines as cuts, we are able to discard the microphonic events.
Figure 34: Illustration of PSD cut stlin2009 .
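In practice the PSD cut of Fig. 34 amounts to keeping events inside a band around the diagonal of the ($\mathrm{6\mu s}$, $\mathrm{12\mu s}$) amplitude plane. The sketch below shows one way such a selection might be coded; the band slopes and offsets, and the toy event samples, are assumptions chosen only for illustration.

```python
# Sketch of the PSD cut: keep events whose amplitudes at 6 us and 12 us
# shaping times lie inside a band around the diagonal. The band parameters
# (slopes/offsets of the two blue lines in Fig. 34) are assumed values.
import numpy as np

def psd_pass(amp_6us, amp_12us, lower=0.7, upper=1.4, offset=5.0):
    """True for events inside the diagonal band, i.e. physics/noise-like."""
    amp_6us = np.asarray(amp_6us, dtype=float)
    amp_12us = np.asarray(amp_12us, dtype=float)
    return (amp_12us > lower * amp_6us - offset) & (amp_12us < upper * amp_6us + offset)

# Toy example: correlated (physics-like) and axis-hugging (microphonic-like) events.
rng = np.random.default_rng(1)
phys = rng.uniform(10, 100, size=500)
micro_6 = rng.uniform(10, 100, size=100)
keep_phys = psd_pass(phys, phys * rng.normal(1.0, 0.05, 500))
keep_micro = psd_pass(micro_6, rng.uniform(0, 3, 100))
print(f"physics-like kept : {keep_phys.mean():.2f}")
print(f"microphonic kept  : {keep_micro.mean():.2f}")
```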
#### VII.2.2 Noise edge cut
In principle, the pulse amplitude distribution of the noise events is Gaussian, so in the 2D (pulse amplitude versus pulse area) parameter plot the noise distribution looks like an ellipse (Fig. 35). Whenever the events are projected onto either axis, this bulge degrades the effective noise level. A noise edge cut is therefore used to reduce the noise events and hence lower the threshold. The key point is that, since this cut depends on energy, an efficiency curve has to be applied to make corrections.
Figure 35: Illustration of noise edge cut.
#### VII.2.3 Surface event cut
Due to manufacturing limitations, the N+-type surface layer of the PPCGe is thick, and the electric field in it is weaker than in the crystal bulk, so that the energy deposited in this layer cannot be efficiently collected. This layer is therefore named the dead layer or insensitive layer, and the recorded events with energy deposited in it are called surface events. Surface events have a longer charge-collection time and a slower pulse shape, and they can be picked out by a cut on a timing parameter. The rise time of the fast pulse at the preamplifier output is chosen as the timing parameter. However, in the low energy range of greatest interest, the signal-to-noise ratio is not high enough to determine this parameter precisely. The green curve in Fig. 36 shows a raw fast pulse. Marino marino suggested that wavelet shrinkage can be used to reduce the noise. The result is shown as the black curve, and the 10-90$\%$ rise time of the leading edge is then estimated.
Figure 36: Illustration of wavelet analysis marino .
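A hedged sketch of the wavelet-shrinkage denoising and 10-90% rise-time estimate is given below, using the PyWavelets package with a standard soft universal threshold. The toy waveform, noise level and wavelet choice are assumptions; the real analysis follows Marino marino .

```python
# Sketch of the wavelet-shrinkage denoising and 10-90% rise-time estimate
# used for the surface-event cut. Uses PyWavelets; the waveform, noise level
# and wavelet choice are toy assumptions, not real preamplifier data.
import numpy as np
import pywt

rng = np.random.default_rng(2)
t = np.arange(2000)
pulse = 1.0 / (1.0 + np.exp(-(t - 1000) / 40.0))      # toy slow leading edge
noisy = pulse + rng.normal(0.0, 0.2, t.size)

# Soft-threshold wavelet shrinkage with the universal threshold.
coeffs = pywt.wavedec(noisy, "db4", level=5)
sigma = np.median(np.abs(coeffs[-1])) / 0.6745
thr = sigma * np.sqrt(2.0 * np.log(noisy.size))
coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
denoised = pywt.waverec(coeffs, "db4")[: noisy.size]

# 10-90% rise time of the denoised leading edge.
base, top = denoised[:200].mean(), denoised[-200:].mean()
lo = base + 0.1 * (top - base)
hi = base + 0.9 * (top - base)
t10 = np.argmax(denoised > lo)
t90 = np.argmax(denoised > hi)
print(f"10-90% rise time: {t90 - t10} FADC time units")
```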
A typical distribution of rise time versus energy is presented in Fig. 37, where two populations are clearly separated. The upper part, with longer rise times, corresponds to the surface events and is discarded by the red critical line. The lower part then gives the final spectrum. An efficiency correction is needed afterwards, which can be estimated from source calibration data.
Figure 37: Illustration of the surface event cut wilkerson .
## VIII The prospect
### VIII.1 Physical target
The CDEX Collaboration will realize dark matter detection with point-contact germanium arrays cooled and actively shielded by a liquid argon system. The most interesting region for the CDEX experiment is the low WIMP mass region, which can be probed by the ultra-low energy threshold PCGe detector. Based on the performance of the prototype 1 kg PCGe detector, which has run for several months in the shielding system at CJPL, the CDEX collaboration plans to focus on the direct detection of dark matter particles with masses below 10 GeV. The Ge array system will run in the CJPL, and additional shielding will be included: 1 m-thick polyethylene for neutron moderation and absorption, 20 cm of lead and 20 cm of Oxygen-Free High-Conductivity copper for gamma shielding. A liquid argon cooling and active shielding system will be adopted for CDEX-10, with the PCGe detectors immersed in the liquid argon. The passive and active shielding systems provide the Ge array with a relatively low-background environment, so low-background running is expected. In addition to the shielding system, effective pulse shape discrimination (PSD) methods will also be developed to remove the noise and background events from the raw data and pick out the real dark matter events. These PSD methods mainly focus on (1) pulse shape analysis of events near the noise edge to enlarge the observable physics range, (2) pulse shape analysis of surface events, whose comparison with bulk events characterizes an important background channel, and (3) understanding the sub-keV background and developing a scheme to suppress it.
By considering in the design all possible effects which might affect the performance of the detector and by carrying out careful data analysis, we aim to achieve a background level of 1 cpkkd (counts per kilogram per keV per day) for the CDEX-1 stage and 0.1 cpkkd for the CDEX-10 stage. The exclusion curves corresponding to different energy thresholds and background event rates are shown in Fig. 38.
Figure 38: Exclusion curves corresponding to different energy thresholds and background event rates.
### VIII.2 CDEX-1T
The ultimate goal of the CDEX Collaboration is to set up a ton-scale Ge detector based on the PCGe detectors and LAr active shielding and cooling systems. The detailed design and technology to be developed and utilized are based on the experience and lessons gained from the design and studies of the CDEX-10 detector. The detector will be located in a 1 m-thick polyethylene shielding room, and the internal volume will be covered by 20 cm of lead and 10 cm of Oxygen-Free High-Conductivity copper. A threshold of less than 400 eV and a background event rate of 0.001 counts per keV per kilogram of target mass per day for the 1-ton Ge mass are the main goals of the CDEX-1T detector. The sensitivity to low-mass dark matter will reach about $\mathrm{10^{-45}cm^{2}}$ in the region of WIMP masses below 10 GeV if the 0.001 cpkkd background level is achieved, as shown in Fig. 38. The rough layout of the CDEX-1T detector is shown in Fig. 39.
Figure 39: Layout of the CDEX-1T detector system.
## IX Summary
Astronomical observations, especially of the cosmic microwave background (CMB) radiation and of the large scale structure (LSS) of the universe, indicate that a significant part of the matter content of our universe is non-baryonic dark matter. The nature of dark matter is one of the fundamental problems of particle physics and cosmology. A favored candidate for dark matter is the WIMP, a weakly interacting massive particle. Direct searches for WIMPs aim at detecting the interaction of WIMPs with normal nuclei, which are SM particles. The CDEX (China Dark matter Experiment) collaboration is a new experimental group searching for dark matter, whose task is to directly search for WIMPs with ultra-low-energy-threshold high-purity germanium detectors at CJPL (the China Jin-Ping deep underground laboratory). So far, the CDEX collaboration includes several members: Tsinghua University, Sichuan University, Nankai University, the Institute of Atomic Energy, the Yalong River Basin Company and the Nuctech Company.
The CJPL is located in the central portion of one of the transport tunnels of a giant hydropower engineering project in the Jin-Ping Mountain area of Sichuan province, southwest China. The rock overburden of CJPL is about 2400 m, where the cosmic muon flux is about $\mathrm{10^{-8}}$ of that at ground level. The muon flux, radioactivity and radon concentration in the underground lab have been measured and are monitored from time to time.
Compared with detectors made of other materials, the high-purity germanium semiconductor detector (HPGe) has many advantages for radiation detection, such as low radioactivity, high energy resolution, high density and remarkable stability. CDEX adopts the high-purity point-contact germanium detector (PCGe) with a threshold of about 300 eV to search for WIMPs with masses as low as 10 GeV. The detector mass of the first phase, CDEX-1, is 1 kg and that of the second phase, CDEX-10, is 10 kg.
Two CDEX-1 detectors, a 20 g HPGe and a 1 kg PCGe, are now running in the CJPL. Each detector is surrounded by a CsI(Tl) or NaI(Tl) veto detector, and outside the veto detector there is a shielding system. The performance of the detectors has been calibrated and the noise level is about 200 eV. The expected background count rate is less than 0.1 cpd/keV/kg near 200 eV after event selection with the PSD cut, the noise edge cut and the wavelet cut.
CDEX-10 is a PCGe detector array immersed in a liquid argon (LAr) vessel. Each unit detector in the array is a 1 kg PCGe detector with a threshold of about 200 eV. The scintillation light in the LAr will be read out by PMTs; the LAr provides the cooling system setting the working temperature of the Ge detectors and also serves as a veto detector. A Monte Carlo study shows that the background event rate will be as low as 1 cpd in the low energy range. In the future, the CDEX Collaboration is going to set up a ton-scale Ge detector in the CJPL, composed of PCGe detectors and an LAr active shielding and cooling system. Hopefully, the overall threshold of the CDEX-1T detector will be less than 400 eV and the background event rate can be reduced to 0.001 cpd. The sensitivity will reach about $\mathrm{10^{-45}cm^{2}}$ for WIMPs with masses as low as 10 GeV or even less.
Acknowledgements:
This work is partly supported by the National Natural Science Foundation of
China (NNSFC); Ministry of Science and Technology of China (MOSTC) and
Ministry of Education of China.
## References
* (1) F. Zwicky, Astrophys. J. 86, 217(1937).
* (2) V.Trimble, Ann. Rev. Astron. Astrophys. 25, 425(1987).
* (3) E. W. Kolb and M. S. Turner, ‘The Early Universe’ (Addison-Wesley, Reading, MA, 1990).
* (4) L. Bergstrom, New J. Phys. 11, 105006(2009)[arXiv:0903.4849 [hep-ph]].
* (5) J. L. Feng, arXiv:1003.0904[astro-ph.CO] (2010); R. J. Gaitskell, Ann. Rev. Nucl. Part. Sci. 54, 315(2004).
* (6) X. He et al. Mod. Phys. Lett. A22, 2121(2007); Phys. Rev. D79, 023521(2009).
* (7) A. Beylyaev et al. Phys. Rev. D83, 015007(2011), and the references therein.
* (8) H. An, S.-L. Chen, R. N. Mohapatra, S. Nussinov and Y. Zhang, Phys. Rev. D82, 023533(2010) [arXiv:1004.3296 [hep-ph]].
* (9) M. Gilloz et al. JHEP 1103, 048(2011), and references therein.
* (10) J. Lavalle et al, AIP Conf.Proc. 24, 398(2010); R. Yang and J. Chang, Res.Astro.Astrophys. 10, 39(2010), and references therein.
* (11) Tibet ASgamma Collab., Astrophys. Space Sci. Trans. 7, 15.
* (12) K. Bernabei et al. Eur. Phys. J. C56, 333(2008); ibid. C67, 39(2010).
* (13) C. Aalseth et al. Phys. Rev. Lett. 106, 131301(2011).
* (14) P. Brink et al. AIP Conf.Proc. 1182, 260(2009).
* (15) The CRESST Collaboration, arXiv:1109.0702.
* (16) G. Jungman, M. Kamionkowski, K. Griest. Phys. Rept. 267, 195(1996)[arXiv:hep-ph/9506380].
* (17) C.-L. Shan, arXiv:1103.4049 [hep-ph].
* (18) C.-L.Shan, arXiv:1103.0481 [hep-ph].
* (19) H.Y. Cheng et al. Phys. Lett. B 219, 347 (1989); JHEP 07, 009 (2012) [arXiv:1202.1292].
* (20) R. Essig, J. Mardon and T. Volansky, Phys.Rev. D85 (2012) 076007; R. Essig et al. Phys.Rev.Lett. 109 (2012) 021301\.
* (21) T. Ressell, M. Aufderheide, S. Bloom, K. Griest and G. Mathews, Phys. Rev. D48, 5519(1993).
* (22) G. Griest, Phys.Rev.D15, 2357(1988).
* (23) V. Barger, W.-Y. Keung and G. Shaughnessy, Phys. Rev. D78, 056007(2008) [arXiv:0806.1962 [hep-ph]].
* (24) Y. Tzeng and T. T. S. Kuo, Conference: C96-05-22, 479\.
* (25) M. T. Ressell et al., Phys. Rev. D48, 5519(1993).
* (26) M. T. Ressell and D. J. Dean, Phys. Rev. C56, 535(1997); hep-ph/9702290.
* (27) J. Ellis, K. Olve and C. Savage, Phys. Rev.D77, 065026(2008).
* (28) R. Helm , Phys. Rev.104, 1466(1956).
* (29) I. Sick, Nucl. Phys. A 218, 509(1974).
* (30) Dreher et al, Nucl. Phys. A 235, 219(1974).
* (31) J. D. Lewin and P. F. Smith, Astropart. Phys.6, 87(1996).
* (32) Y.-Z. Chen, Y.-A. Luo, L. Li, H. Shen and X.-Q. Li, Commun. Theor. Phys. 55, 1059(2011); [arXiv:1101.3049 [hep-ph]].
* (33) J. Engel, S. Pittel and P. Vogel, Inter. J. Mod. Phys. V1, 1(1992).
* (34) J. Engel, Phys. Lett. B264, 114(1991).
* (35) J. D. Walecka, Muon Physics, Vol. 2, ed. V. W. Hughes and C. W. Wu (Academic Press, New York, 1975), p. 113
* (36) M. Cannoni, arXiv: 1108.4337; J. O’Connell, T. Donnelly and J. Walecka, Phys. Rev. C6, 719(1972);
V. Bednyakov and H. Klapdor-Kleingrothaus, Phys.$\&$ Nuclei 40, 583(2009); C.
Shan, arXiv:1103.0482;
V. Bednyakov and F. Simkovic, arXiv:0406218; M. Ressell et al. Phys. Rev. D48,
5519(1993);
V. Bednyakov,arXiv: 0310041; G. Belanger et al. Computer Phys.Commun.
180,747(2009); and many other authors.
* (37) T. Donnelly and W. Haxton, Atomic Data and Nuclear DATA Tables 23, 103(1979).
* (38) R. Bernabei et al., Eur. Phys. J. C67, 39(2010) R. Bernabei et al., Prog. Part. $\&$ Nucl. Phys. 66, 169(2011).
* (39) KIMS Collaboration Phys. Rev. Lett. 99, 091301(2007).
* (40) www.sciencexpress.org /11 February 2010/Page 3/10.1126/science.1186112; XENON Collaboration, Phys. Rev. Lett.107, 131302 (2011).
* (41) Yue Qian, Cheng Jianping, Li Yuanjing, et al. HEP $\&$ NP 28, 877(2004); LI Xin, YUE Qian, et al., HEP $\&$ NP, 31, 564(2007).
* (42) TEXONO Collaboration, Phys. Rev. D79, 061101(R)(2009).
* (43) CoGeNT Collaboration, Phys. Rev. Lett. 106, 131301 (2011); Phys. Rev. Lett. 107, 141301 (2011).
* (44) For more information, see http://www.npl.washington.edu/majorana/
* (45) For more information, see http://www.mpi-hd.mpg.de/gerda/
* (46) Topics in Astroparticle and Underground Physics (TAUP 2009) , Journal of Physics: Conference Series 203 (2010) 012028 doi:10.1088/1742-6596/203/1/012028
* (47) SCIENCE , 5 JUNE, 324, 1246(2009).
* (48) G. Heusser, Ann. Rev. Nucl. Part. Sci. 45, 543(1995).
* (49) Canberra, http://www.canberra.com/
* (50) Chinalco Luoyang Copper Co., Ltd, http://www.lycopper.cn
* (51) S. T. Lin, H. B. Li, X. Li, et. al , Phys. Rev. D79, 061101(R)(2009).
* (52) P. N. Luke, F. S. Goulding, N. W. Madden, et. al., IEEE Tran. Nucl. Scie, 36, 926(1989).
* (53) P. S. Barbeau, J. I. Collar, O. Tench, J. Cosmo $\&$ Astropart Phys. (JCAP) 09, 009, 1(2007).
* (54) www.amptek.com
* (55) www.tek.com
* (56) cdms.berkeley.edu
* (57) xenon.astro.columbia.edu
* (58) www.cresst.de
* (59) C. Aalseth et al.,Phys. Rev. Lett.107, 141301(2011).
* (60) M. G. Marino, (2010) Ph. D. dissertation, Univ. of Washington, 263.
* (61) From a talk given by J. F. Wilkerson in Tsinghua university in 2011.
|
arxiv-papers
| 2013-03-04T04:58:17 |
2024-09-04T02:49:42.377507
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Ke-Jun Kang, Jian-Ping Cheng, Jin Li, Yuan-Jing Li, Qian Yue, Yang\n Bai, Yong Bi, Jian-Ping Chang, Nan Chen, Ning Chen, Qing-Hao Chen, Yun-Hua\n Chen, Zhi Deng, Qiang Du, Hui Gong, Xi-Qing Hao, Hong-Jian He, Qing-Ju He,\n Xin-Hui Hu, Han-Xiong Huang, Hao Jiang, Jian-Min Li, Xia Li, Xin-Ying Li,\n Xue-Qian Li, Yu-Lan Li, Shu-Kui Liu, Ya-Bin Liu, Lan-Chun Lu, Hao Ma,\n Jian-Qiang Qin, Jie Ren, Jing Ren, Xi-Chao Ruan, Man-Bin Shen, Jian Su,\n Chang-Jian Tang, Zhen-Yu Tang, Ji-Min Wang, Qing Wang, Xu-Feng Wang, Shi-Yong\n Wu, Yu-Cheng Wu, Zhong-Zhi Xianyu, Hao-Yang Xing, Xun-Jie Xu, Yin Xu, Tao\n Xue, Li-Tao Yang, Nan Yi, Hao Yu, Chun-Xu Yu, Xiong-Hui Zeng, Zhi Zeng, Lan\n Zhang, Guang-Hua Zhang, Ming-Gang Zhao, Su-Ning Zhong, Jin Zhou, Zu-Ying\n Zhou, Jing-Jun Zhu, Wei-Bin Zhu, Xue-Zhou Zhu, Zhong-Hua Zhu",
"submitter": "Wang Qing",
"url": "https://arxiv.org/abs/1303.0601"
}
|
1303.0602
|
2013 Vol. X No. XX, 000–000
11institutetext: National Astronomical Observatories/Yunnan Observatory,
Chinese Academy of Sciences, P. O. Box 110, 650011 Kunming, China; e-mail:
[email protected]
22institutetext: Key Laboratory for the Structure and Evolution of Celestial
Objects, Chinese Academy of Sciences 33institutetext: University of the
Chinese Academy of Sciences, Beijing 100049, China
# Two particular EA-type binaries in the globular cluster $\omega$ Centauri
K. Li 112233 S.-B. Qian 112233
###### Abstract
We analyzed the $B$ and $V$ light curves of two EA-type binaries V211 and
NV358 using the WD code for the first time. Our analysis shows that V211 is a
typical Algol-type binary and NV358 is a well detached binary system. As the
two binaries are definite proper motion members of $\omega$ Centauri, we
estimated their physical parameters, obtaining $M_{1}=1.13\pm 0.03M_{\odot},$
$R_{1}=0.98\pm 0.01R_{\odot}$ and $M_{2}=0.33\pm 0.01M_{\odot},$
$R_{2}=0.92\pm 0.01R_{\odot}$ for V211; $M_{1}=1.30\pm 0.05M_{\odot},$
$R_{1}=1.03\pm 0.01R_{\odot}$ and $M_{2}=0.58\pm 0.02M_{\odot},$
$R_{2}=0.78\pm 0.01R_{\odot}$ for NV358. On the color-magnitude diagram of
$\omega$ Centauri, V211 is located in the faint blue straggler region and its
primary component is more massive than a star at the main-sequence turnoff.
Therefore, V211 is a blue straggler and should be formed by mass transfer from
the secondary component to the primary. The age of NV358 is less than 1.93
Gyr, indicating that it is much younger than first-generation stars in
$\omega$ Centauri. Like NV364 in $\omega$ Centauri, NV358 should be a second-generation binary.
###### keywords:
galaxies: globular clusters: individual($\omega$ Centauri) — stars: binaries:
close — stars: binaries: eclipsing — stars: blue stragglers — stars:
individual(V211, NV358)
## 1 Introduction
Eclipsing binaries are very rare in globular clusters, but they play a significant role in the dynamical evolution of globular clusters and in studies of stellar populations. They provide a source of energy that can act against and prevent core collapse in globular clusters (Goodman & Hut 1989). They are also particularly interesting in their own right. Some Algol-type eclipsing binaries, such as NJL5 and V239 in $\omega$ Centauri and V228 in 47 Tuc (Helt et al. 1993; Li & Qian 2012; Kaluzny et al. 2007), have been identified as blue stragglers (BSs), and are therefore good tools for testing BS formation theories. Detached double-lined spectroscopic eclipsing binaries in globular clusters provide the opportunity to calculate the age of the cluster based on their well determined masses and radii (see Thompson et al. 2001, 2010).
BS stars appear to be anomalously younger than the other stars of their population. In the color-magnitude diagrams (CMDs) of clusters, these stars are bluer and brighter than the main-sequence turnoff. They were first noticed in the globular cluster M3 by Sandage (1953). BSs have now been discovered not only in globular and open clusters and in dwarf galaxies (Piotto et al. 2004; De Marchi et al. 2006; Momany et al. 2007), but also in the field (Carney et al. 2005). The formation of BSs is still controversial, and several mechanisms have been proposed to explain it. The two most popular BS formation mechanisms are mass transfer in close binary systems (McCrea 1964; Carney et al. 2001) and direct collisions between two stars (Hills & Day 1976). Perets & Fabrycky (2009) discussed another possibility: BS formation in primordial (or dynamically formed) hierarchical triple star systems. By studying this type of object, one can reveal the dynamical history of a cluster and the role of dynamics in stellar evolution. BS statistics can also provide constraints on initial binary properties. A bimodal BS radial distribution has been observed in many globular clusters (e.g. Ferraro et al. 2004; Mappelli et al. 2006; Lanzoni et al. 2007; Beccari et al. 2011). To explain this, a scenario has been suggested in which BSs in the dense core were formed in collisions, whereas BSs in the low-density cluster outskirts were formed by mass transfer in close binaries.
The traditional view of globular clusters is that all stars within a cluster share the same age and an initially homogeneous chemical composition, i.e. that they are simple stellar populations. This view had one noticeable exception: $\omega$ Cen, the most massive cluster in the Milky Way, whose stars show a large spread in metallicity. The situation has now become much more complex, and it is recognized that almost all the globular clusters so far examined in detail have at least two stellar generations. Clear evidence for multiple main sequences (Bedin et al. 2004) and giant branches (Nataf et al. 2011), and unusual horizontal branch (D’Antona et al. 2005) and subgiant branch (Milone et al. 2008; Moretti et al. 2009) morphologies can all be explained in straightforward ways by the presence of multiple generations of stars. The first second-generation binary in $\omega$ Centauri, NV364, was identified by Li et al. (2012).
$\omega$ Centauri is one of the most metal-poor globular clusters in the Milky Way, with [Fe/H]=-1.53, an interstellar reddening $E(B-V)=0.12$ and a distance modulus $(m-M)_{v}=13.94$ (Harris 1996 (2010 edition)). V211, with a period of 0.576235 $d$, was first identified in the outskirts of the cluster by Kaluzny et al. (1996) during a search for variable stars in the central part of the globular cluster $\omega$ Centauri. NV358, with a period of 0.59964 $d$, was first discovered in the outer region of the cluster by Kaluzny et al. (2004) during a photometric survey for variable stars in the field of this cluster. On the CMD of the cluster, V211 is located in the faint BS region, while NV358 occupies a position in the bright BS domain. Light curves of several eclipsing binaries in $\omega$ Centauri have been analyzed, but not those of these two. In this paper, we present an investigation of the $B$ and $V$ light curves of the two binaries, taken from Kaluzny et al. (2004), using the Wilson-Devinney code.
## 2 Light curve analysis of the two binaries
$\omega$ Centauri was observed with the 1.0-m Swope telescope at Las Campanas Observatory by Kaluzny et al. (2004) under the Cluster AgeS Experiment (CASE) project during the interval from 1999 February 6/7 to 2000 August 9/10. $B$ and $V$ light curves of 301 variables were obtained, and the photometric data are available on the VizieR Result page. As no photometric analysis had been carried out for the two binaries V211 and NV358, which have good enough photometric data and are located in the BS region on the CMD of $\omega$ Centauri, we chose them for further analysis. Using the fourth version of the W-D program (Wilson & Devinney 1971; Wilson 1990, 1994; Wilson & Van Hamme 2003), which is a good tool for the modeling of eclipsing binaries based on photometric and spectroscopic (radial velocity) data, we analyzed the $B$ and $V$ light curves of the two binaries taken from Kaluzny et al. (2004) for the first time. Some photometric data points with serious deviations from the phased light curves (mostly in the $B$ band) were deleted. The ephemerides used to calculate the phases of the two binaries were
$Min.I=2451283.7791+0.576235E,$ (1) $Min.I=2451284.1098+0.599640E.$ (2)
In our solutions, the effective temperature of the primary component, $T_{1}$, was determined from the dereddened $B-V$ value at the secondary minimum of each binary, as described below. The gravity-darkening coefficients of the components were taken to be 1.0 for radiative atmospheres ($T\geq 7200$ K) from von Zeipel (1924) and 0.32 for convective atmospheres ($T<7200$ K) from Lucy (1967). The bolometric albedo coefficients of the components were fixed at 1.0 and 0.5 for radiative and convective atmospheres, following Ruciński (1969). The bolometric and bandpass limb-darkening coefficients of the components were also fixed (van Hamme 1993). Starting with solutions in mode 2, we found that the solutions of V211 usually converged when the secondary component fills its Roche lobe, while those of NV358 converged quickly. Therefore, the final iterations of V211 were made in mode 5, which corresponds to a semi-detached configuration, while those of NV358 were made in mode 2. The quantities varied in the solutions of the two stars were the mass ratio $q$, the effective temperature of the secondary component $T_{2}$, the monochromatic luminosity of the primary component in the $B$ and $V$ bands $L_{1}$, the orbital inclination $i$, and the dimensionless potential of the primary component $\Omega_{1}$. The dimensionless potential of the secondary component $\Omega_{2}$ was also adjustable for NV358.
### 2.1 V211
The temperature of the primary component, $T_{1}$, was determined from the dereddened color index $(B-V)_{1}$ using the program provided by Worthey & Lee (2011). Bellini et al. (2009) determined the membership probability of V211 to be 90%. We adopted an interstellar reddening of $E(B-V)=0.12$ and a metallicity of [Fe/H] = -1.53 (Harris 1996 (2010 edition)) for $\omega$ Centauri. The color index at the secondary minimum is a good approximation to the color index of the primary star. Based on the phased data, the color index at the secondary minimum is measured to be $(B-V)_{1}=0.393$, leading to $T_{1}=7035$ K. The bolometric albedo and gravity-darkening coefficients of the components were set to $A_{1}=A_{2}=0.5$ and $g_{1}=g_{2}=0.32$ for convective atmospheres. The bolometric and bandpass square-root limb-darkening parameters of the components, taken from van Hamme (1993), are listed in Table 1. A $q$-search method was used to determine the mass ratio of V211 (a conceptual sketch of this procedure is given at the end of this subsection). Solutions were carried out for a series of values of the mass ratio (0.2, 0.25, 0.3, 0.4, 0.5, 0.6, 0.7). The relation between the resulting sum $\Sigma$ of weighted squared deviations and $q$ is plotted in Fig. 1. The minimum was obtained at $q=0.30$. Therefore, we set the initial value of the mass ratio $q$ to 0.30 and made it an adjustable parameter. We then ran the differential corrections until convergence and derived the final solution. The final photometric solutions are listed in Table 1. The comparison between the observed and theoretical light curves is shown in Fig. 1.
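Conceptually, the $q$-search above is a one-dimensional grid scan: the light curve is re-fitted at each trial mass ratio, the weighted sum of squared residuals $\Sigma$ is recorded, and the adopted $q$ is where that curve is minimal. Since the real fits are performed with the Wilson-Devinney code, the sketch below only mimics the bookkeeping with a crude placeholder light-curve model and synthetic data; it is not the W-D computation itself.

```python
# Conceptual sketch of the q-search: scan trial mass ratios, refit the light
# curve at each one, and keep the q that minimizes the weighted residual sum.
# The model below is a placeholder; the real fits use the Wilson-Devinney code.
import numpy as np

def toy_light_curve(phases, q, depth=0.4):
    """Crude eclipsing-binary shape whose eclipse depth ratio depends on q."""
    primary = depth * np.exp(-0.5 * ((phases - 0.0) / 0.03) ** 2)
    secondary = depth * q * np.exp(-0.5 * ((phases - 0.5) / 0.03) ** 2)
    return 1.0 - primary - secondary

def weighted_residual_sum(q, phases, fluxes, weights):
    """Placeholder 'converged solution' quality for a trial mass ratio q."""
    model = toy_light_curve(phases, q)          # stands in for a WD solution
    return np.sum(weights * (fluxes - model) ** 2)

# Synthetic "observations" generated with q_true = 0.30 (cf. V211).
rng = np.random.default_rng(3)
phases = rng.uniform(0, 1, 400)
fluxes = toy_light_curve(phases, 0.30) + rng.normal(0, 0.005, phases.size)
weights = np.ones_like(fluxes)

trial_q = np.array([0.2, 0.25, 0.3, 0.4, 0.5, 0.6, 0.7])
sigma = [weighted_residual_sum(q, phases, fluxes, weights) for q in trial_q]
print("best trial q:", trial_q[int(np.argmin(sigma))])
```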
### 2.2 NV358
The color index of NV358 at the secondary minimum is 0.142. Bellini et al. (2009) determined the membership probability of NV358 to be 99%. The $(B-V)_{1,0}$ of the primary was fixed at 0.022. Using the same method as for V211, we fixed the effective temperature of the primary component of NV358 at $T_{1}=8918$ K. The bolometric albedo and gravity-darkening coefficients of the components were set to $A_{1}=A_{2}=1.0$ and $g_{1}=g_{2}=1.0$ for radiative atmospheres. The bolometric and bandpass square-root limb-darkening parameters of the components, taken from van Hamme (1993), are listed in Table 2. A $q$-search method was also used to determine the mass ratio of NV358. The final photometric solutions are listed in Table 2, and Fig. 2 shows the comparison between the observed and theoretical light curves.
Because Johnson et al. (2009) found a large metallicity spread for stars in the cluster $\omega$ Centauri, a metallicity of [Fe/H] = -1.0 was also used to determine the effective temperature of the primary components, giving $T_{1}=7104$ K for V211 and $T_{1}=8923$ K for NV358. The solutions based on the metallicity of [Fe/H] = -1.0 are listed in Tables 1 and 2. We find that the solutions for this metallicity are in accordance with the previous ones. Therefore, the results using [Fe/H] = -1.53 are adopted as the final solution.
Figure 1: Left panel shows $q$-search for V211. Right panel displays observed
(open symbols) and theoretical (solid lines) light curves of V211 in $BV$
passbands.
Figure 2: Left panel shows $q$-search for NV358. Right panel displays observed (open symbols) and theoretical (solid lines) light curves of NV358 in $BV$ passbands.
Table 1: Photometric solutions for V211 in the globular cluster $\omega$ Centauri Parameters | [Fe/H] = -1.53 | Errors | [Fe/H] = -1.0 | Errors
---|---|---|---|---
$g_{1}=g_{2}$ | 0.32 | Assumed | 0.32 | Assumed
$A_{1}=A_{2}$ | 0.5 | Assumed | 0.5 | Assumed
$x_{1bol}$ | 0.093 | Assumed | 0.093 | Assumed
$x_{2bol}$ | 0.265 | Assumed | 0.265 | Assumed
$y_{1bol}$ | 0.632 | Assumed | 0.632 | Assumed
$y_{2bol}$ | 0.441 | Assumed | 0.441 | Assumed
$x_{1B}$ | 0.173 | Assumed | 0.173 | Assumed
$x_{2B}$ | 0.757 | Assumed | 0.757 | Assumed
$y_{1B}$ | 0.706 | Assumed | 0.706 | Assumed
$y_{2B}$ | 0.109 | Assumed | 0.109 | Assumed
$x_{1V}$ | 0.059 | Assumed | 0.059 | Assumed
$x_{2V}$ | 0.445 | Assumed | 0.445 | Assumed
$y_{1V}$ | 0.726 | Assumed | 0.726 | Assumed
$y_{2V}$ | 0.398 | Assumed | 0.398 | Assumed
$T_{1}(K)$ | 7035 | Assumed | 7104 | Assumed
$q(M_{2}/M_{1})$ | 0.2941 | $\pm 0.0094$ | 0.2942 | $\pm 0.0093$
$T_{2}(K)$ | 5219 | $\pm 24$ | 5257 | $\pm 24$
$i$ | 85.166 | $\pm 0.416$ | 85.172 | $\pm 0.415$
$L_{1}/(L_{1}+L_{2})(B)$ | 0.8739 | $\pm 0.0008$ | 0.8737 | $\pm 0.0008$
$L_{1}/(L_{1}+L_{2})(V)$ | 0.8160 | $\pm 0.0013$ | 0.8160 | $\pm 0.0013$
$\Omega_{1}$ | 3.7041 | $\pm 0.0562$ | 3.7055 | $\pm 0.0566$
$\Omega_{2}$ | 2.4532 | Assumed | 2.4534 | Assumed
$r_{1}(pole)$ | 0.2922 | $\pm 0.0048$ | 0.2921 | $\pm 0.0049$
$r_{1}(side)$ | 0.2972 | $\pm 0.0052$ | 0.2971 | $\pm 0.0052$
$r_{1}(back)$ | 0.3002 | $\pm 0.0054$ | 0.3001 | $\pm 0.0054$
$r_{2}(pole)$ | 0.2597 | $\pm 0.0023$ | 0.2596 | $\pm 0.0023$
$r_{2}(side)$ | 0.2704 | $\pm 0.0024$ | 0.2704 | $\pm 0.0024$
$r_{2}(back)$ | 0.3031 | $\pm 0.0024$ | 0.3031 | $\pm 0.0024$
Table 2: Photometric solutions for NV358 in the globular cluster $\omega$ Centauri Parameters | [Fe/H] = -1.53 | Errors | [Fe/H] = -1.0 | Errors
---|---|---|---|---
$g_{1}=g_{2}$ | 1.0 | Assumed | 1.0 | Assumed
$A_{1}=A_{2}$ | 1.0 | Assumed | 1.0 | Assumed
$x_{1bol}$ | 0.435 | Assumed | 0.435 | Assumed
$x_{2bol}$ | 0.082 | Assumed | 0.082 | Assumed
$y_{1bol}$ | 0.254 | Assumed | 0.254 | Assumed
$y_{2bol}$ | 0.646 | Assumed | 0.646 | Assumed
$x_{1B}$ | 0.061 | Assumed | 0.061 | Assumed
$x_{2B}$ | 0.125 | Assumed | 0.125 | Assumed
$y_{1B}$ | 0.792 | Assumed | 0.792 | Assumed
$y_{2B}$ | 0.753 | Assumed | 0.753 | Assumed
$x_{1V}$ | 0.039 | Assumed | 0.039 | Assumed
$x_{2V}$ | 0.036 | Assumed | 0.036 | Assumed
$y_{1V}$ | 0.701 | Assumed | 0.701 | Assumed
$y_{2V}$ | 0.743 | Assumed | 0.743 | Assumed
$T_{1}(K)$ | 8918 | Assumed | 8923 | Assumed
$q(M_{2}/M_{1})$ | 0.4442 | $\pm 0.0125$ | 0.4443 | $\pm 0.0125$
$T_{2}(K)$ | 7249 | $\pm 47$ | 7255 | $\pm 48$
$i$ | 76.033 | $\pm 0.272$ | 76.037 | $\pm 0.272$
$L_{1}/(L_{1}+L_{2})(B)$ | 0.8247 | $\pm 0.0016$ | 0.8248 | $\pm 0.0016$
$L_{1}/(L_{1}+L_{2})(V)$ | 0.7873 | $\pm 0.0019$ | 0.7875 | $\pm 0.0020$
$\Omega_{crit}$ | 2.7669 | Assumed | 2.7672 | Assumed
$\Omega_{1}$ | 4.0500 | $\pm 0.0447$ | 4.0463 | $\pm 0.0448$
$\Omega_{2}$ | 3.4006 | $\pm 0.0607$ | 3.4027 | $\pm 0.0607$
$r_{1}(pole)$ | 0.2761 | $\pm 0.0035$ | 0.2764 | $\pm 0.0035$
$r_{1}(point)$ | 0.2861 | $\pm 0.0041$ | 0.2864 | $\pm 0.0041$
$r_{1}(side)$ | 0.2805 | $\pm 0.0037$ | 0.2808 | $\pm 0.0038$
$r_{1}(back)$ | 0.2841 | $\pm 0.0039$ | 0.2845 | $\pm 0.0040$
$r_{2}(pole)$ | 0.2072 | $\pm 0.0063$ | 0.2071 | $\pm 0.0063$
$r_{2}(point)$ | 0.2192 | $\pm 0.0082$ | 0.2190 | $\pm 0.0082$
$r_{2}(side)$ | 0.2103 | $\pm 0.0067$ | 0.2101 | $\pm 0.0067$
$r_{2}(back)$ | 0.2166 | $\pm 0.0077$ | 0.2165 | $\pm 0.0077$
## 3 Results and Discussions
Based on the $B$ and $V$ light curves, photometric solutions for the two EA-type binaries, V211 and NV358, have been derived. They show that V211 is a typical Algol-type binary and NV358 is a well detached binary system; the primary and secondary components of NV358 fill 68.3% and 81.4% of their critical Roche lobes, respectively. Using the fractional luminosities of the components from the $B$ and $V$ light-curve solutions quoted above, and adopting the observed $B$ and $V$ magnitudes of the two binaries at maximum light, we obtain the following magnitudes for the components of the two binaries (a short numerical check is given below): $V_{1}=18.331\pm 0.002$, $B_{1}=18.686\pm 0.001$, $V_{2}=19.948\pm 0.008$ and $B_{2}=20.788\pm 0.007$ for V211; $V_{1}=17.330\pm 0.003$, $B_{1}=17.449\pm 0.002$, $V_{2}=18.751\pm 0.010$ and $B_{2}=19.131\pm 0.010$ for NV358. The main source of the errors is the respective uncertainties in the solutions. Fig. 3 shows the positions of the individual eclipsing components on the CMD (Noble et al. 1991) of $\omega$ Centauri. V211 is located in the faint BS region, while NV358 occupies a position in the bright BS domain.
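The component magnitudes above follow directly from the maximum-light magnitudes and the fractional luminosities via $m_{i}=m_{max}-2.5\log_{10}(L_{i}/(L_{1}+L_{2}))$. The short numerical check below reproduces the $V$-band values from the quantities in Tables 1-3; the $B$-band maxima are not tabulated here, so only the $V$ band is checked, and uncertainties are not propagated.

```python
# Check of the V-band component magnitudes from the maximum-light magnitude
# and the fractional luminosities: m_i = m_max - 2.5*log10(L_i / (L1+L2)).
# Inputs are taken from Tables 1-3; uncertainties are not propagated here.
import math

def component_mags(m_max, l1_frac):
    m1 = m_max - 2.5 * math.log10(l1_frac)
    m2 = m_max - 2.5 * math.log10(1.0 - l1_frac)
    return m1, m2

v1_v211, v2_v211 = component_mags(18.11, 0.8160)     # V211: m_Vmax, L1/(L1+L2)(V)
v1_nv358, v2_nv358 = component_mags(17.07, 0.7873)   # NV358
print(f"V211 : V1 = {v1_v211:.3f}, V2 = {v2_v211:.3f}")   # ~18.331, ~19.948
print(f"NV358: V1 = {v1_nv358:.3f}, V2 = {v2_nv358:.3f}") # ~17.330, ~18.751
```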
### 3.1 Physical parameters of the two binaries
According to the proper-motion catalog of the globular cluster $\omega$ Centauri compiled by Bellini et al. (2009), V211 (designation 298164) and NV358 (designation 129801) are definite proper-motion members of the cluster, with respective probabilities of 90% and 99%. Using the same method as Liu et al. (2011), we estimated the physical parameters of the two binaries with the light-curve program of the W-D code. First, we calculated the absolute bolometric magnitudes of the two binaries based on their $V$-band absolute magnitudes. Second, using the light-curve program of the W-D code, we also obtained the absolute bolometric magnitudes of the two binaries, which can be compared with the results of the previous step. When the results of the two steps are consistent with each other, the physical parameters of the two binaries are obtained. The physical parameters of the two binaries are listed in Table 3, where the errors reflect the uncertainty of $q$. The mean physical parameters of the two binaries are as follows: for V211, $M_{1}=1.13\pm 0.03M_{\odot},$ $R_{1}=0.98\pm 0.01R_{\odot}$ and $M_{2}=0.33\pm 0.01M_{\odot},$ $R_{2}=0.92\pm 0.01R_{\odot}$; for NV358, $M_{1}=1.30\pm 0.05M_{\odot},$ $R_{1}=1.03\pm 0.01R_{\odot}$ and $M_{2}=0.58\pm 0.02M_{\odot},$ $R_{2}=0.78\pm 0.01R_{\odot}$.
Table 3: Physical parameters of V211 and NV358 Parameters | Values | Errors | Values | Errors
---|---|---|---|---
| V211 | | NV358 |
$M_{1}$ ($M_{\odot}$) | 1.13 | $\pm 0.03$ | 1.30 | $\pm 0.05$
$M_{2}$ ($M_{\odot}$) | 0.33 | $\pm 0.01$ | 0.58 | $\pm 0.02$
$R_{1}$ ($R_{\odot}$) | 0.98 | $\pm 0.01$ | 1.03 | $\pm 0.01$
$R_{2}$ ($R_{\odot}$) | 0.92 | $\pm 0.01$ | 0.78 | $\pm 0.01$
$A$ ($R_{\odot}$) | 3.30 | $\pm 0.02$ | 3.68 | $\pm 0.04$
$L_{bol1}$ ($L_{\odot}$) | 2.03 | $\pm 0.05$ | 5.86 | $\pm 0.13$
$L_{bol2}$ ($L_{\odot}$) | 0.54 | $\pm 0.01$ | 1.46 | $\pm 0.13$
$\log{g_{1}}$ ($cgs$) | 4.51 | $\pm 0.01$ | 4.52 | $\pm 0.01$
$\log{g_{2}}$ ($cgs$) | 4.03 | $\pm 0.01$ | 4.42 | $\pm 0.02$
$M_{bol1}$ ($mag$) | 3.98 | $\pm 0.01$ | 2.83 | $\pm 0.01$
$M_{bol2}$ ($mag$) | 5.41 | $\pm 0.01$ | 4.34 | $\pm 0.04$
$M_{bol}$ ($mag$) | 3.722 | $\pm 0.010$ | 2.589 | $\pm 0.016$
$m_{v_{max}}$ ($mag$) | 18.11 | Assumed | 17.07 | Assumed
### 3.2 The particularity of the two binaries
Both binaries are very interesting targets. V211 is located in the faint BS region on the CMD of $\omega$ Centauri and is a definite member of this cluster; therefore, V211 is an eclipsing BS. The mass of the primary component of V211 is $1.13M_{\odot}$, which is larger than that of a star at the main-sequence turnoff ($M_{TO}=0.8M_{\odot}$). Like other eclipsing Algol BSs such as NJL5 and V239 in $\omega$ Centauri and V228 in 47 Tuc (Helt et al. 1993; Li & Qian 2012; Kaluzny et al. 2007), V211 should have been formed by mass transfer from the secondary component to the primary, inducing a reversal of the original mass ratio so that the current primary was originally the less massive component. NV358 occupies a position in the bright BS domain of the CMD of $\omega$ Centauri, and the two components of NV358 also lie in the BS region. Both are bluer than the main-sequence stars in $\omega$ Centauri, indicating that they are younger and more metal-rich. We have derived a surface gravity of $\log{g}=4.52$ $cgs$ for the primary component, suggesting that it is a main-sequence star. We can then estimate its age using the equation
$t_{MS}=\frac{3.37\times 10^{9}}{(M/M_{\odot})^{2.122}},$ (3)
derived from Yıldız (2011). An age of $T_{a}\leq 1.93$ Gyr is obtained for the primary component. It is believed that short-period binary systems are formed by a fragmentation process and are unlikely to have formed from a capture process. Therefore, the components of NV358 should be of the same age, and we adopt a value of 1.93 Gyr for the age of NV358. The age of $\omega$ Centauri is $16\pm 3$ Gyr (Noble et al. 1991), and the first-generation stars in $\omega$ Centauri formed at almost the same time. Therefore, NV358 is much younger than the first-generation stars in $\omega$ Centauri. As with NV364 in $\omega$ Centauri (Li et al. 2012), we deduce that NV358 is also a second-generation binary.
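As a quick numerical check, inserting the primary mass of NV358 from Table 3 into Eq. (3) indeed reproduces the 1.93 Gyr upper limit quoted above.

```python
# Numerical check of Eq. (3): t_MS = 3.37e9 / (M/M_sun)^2.122 (Yildiz 2011),
# evaluated for the primary of NV358 (M1 = 1.30 Msun from Table 3).
def t_ms_years(mass_msun):
    return 3.37e9 / mass_msun ** 2.122

print(f"t_MS(1.30 Msun) = {t_ms_years(1.30)/1e9:.2f} Gyr")   # about 1.93 Gyr
```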
In summary, V211 is an eclipsing BS, making it an important object for testing the hypothesis that BSs are formed by mass transfer in close binaries. At the same time, V211 was discovered in the outer region of $\omega$ Centauri, so it supports the scenario that BSs in the low-density cluster outskirts were formed by mass transfer in close binaries. NV358 is a second-generation binary, making it evidence of multiple populations in the cluster $\omega$ Centauri. NV358 was discovered in the outer region rather than in the center of $\omega$ Centauri, supporting the picture that $\omega$ Centauri was previously the nucleus of a nucleated dwarf galaxy that was later completely destroyed by gravitational interaction with the Milky Way, so that only its nucleus is now observed (Li et al. 2012). In the future, we hope to obtain spectroscopic data, which will make it possible to determine more accurate physical parameters of the two binaries and will provide the evidence needed to improve our results.
Figure 3: Positions of the two binaries in the CMD for $\omega$ Centauri. The
solid square gives the position of V211 and the solid triangle shows the
position of NV358; open symbols represent the positions of the respective
components.
###### Acknowledgements.
This work is partly supported by the Chinese Natural Science Foundation (Nos.
11203066, 11133007, 11103074, 10973037 and 10903026) and by the West Light
Foundation of the Chinese Academy of Sciences. We thank Kaluzny et al. (2004)
for the published photometric data.
## References
* Beccari et al. (2011) Beccari G., Sollima A., Ferraro F. R., Lanzoni B., Bellazzini M., De Marchi G., Valls-Gabaud D., & Rood R. T., 2011, ApJ, 737, L3
* Bedin et al. (2004) Bedin L. R., Piotto G., Anderson J., Cassis S., King I. R., Momany Y., & Carraro G., 2004, ApJ, 605, L125
* Bellini et al. (2009) Bellini, A., Piotto, G., Bedin, L. R., Anderson, J., Platais, I., Momany, Y., Moretti, A., Milone, A.P., & Ortolani, S., 2009, A&A, 493, 959
* Carney et al. (2005) Carney, Bruce W., Latham, David W., & Laird, John B., 2005, AJ, 129, 466
* Carney et al. (2001) Carney, Bruce W., Latham, David W., Laird, John B., Grant, Catherine E., & Morse, Jon A., 2001, AJ, 122, 3419
* D’Antona et al. (2005) D’Antona F., Bellazzini M., Caloi V., Fusi Pecci F., Galleti S., & Rood R. T., 2005, ApJ, 631, 868
* De Marchi et al. (2006) De Marchi, F., De Angeli, F., Piotto, G.; Carraro, G., & Davies, M. B. 2006, A&A, 459, 489
* Goodman & Hut (1989) Goodman, J., & Hut, P., 1989, Nature, 339, 40
* Harris (1996, 2010 edition) Harris, W.E., 1996, AJ, 112, 1487
* Ferraro et al. (2004) Ferraro F. R., Beccari G., Rood R. T., Bellazzini M., Sills A., & Sabbi E., 2004, ApJ, 603, 127
* Helt et al. (1993) Helt, B. E., Jorgensen, H. E., King, S., & Larsen, A., 1993, A&A, 270, 297
* Hills & Day (1976) Hills, J. G., & Day, C. A., 1976, Astrophys. Lett., 17,87
* Kaluzny et al. (1996) Kaluzny, J., Kubiak, M., Szymanski, M., Udalski, A., Krzeminski, W., & Mateo, M., 1996, A&AS, 120, 139
* Kaluzny et al. (2004) Kaluzny, J., Olech, A., Thompson, I. B., Pych, W., Krzemiski, W., & Schwarzenberg-Czerny, A., 2004, A&A, 424, 1101
* Kaluzny et al. (2007) Kaluzny, J., Thompson, I. B., Rucinski, S. M., Pych, W., Stachowski, G., Krzeminski, W., & Burley, G. S., 2007, AJ, 134, 541
* Lanzoni et al. (2007) Lanzoni B., Dalessandro E., Perina S., Ferraro F. R., Rood R. T., & Sollima A., 2007, ApJ, 670, 1065
* Li & Qian (2012) Li, Kai, & Qian, S.-B., 2012, AJ, 144, 161
* Li et al. (2012) Li, Kai, Qian, S.-B., & Leung, K.-C., 2012, ApJ, 755, 83
* Liu et al. (2011) Liu, L., Qian, S.-B., & Fernández-LAJús, E., 2011, MNRAS, 415, 1509
* Lucy (1967) Lucy, L. B., 1967, Z. Astrophys., 65, 89
* Johnson et al. (2009) Johnson, Christian I., Pilachowski, Catherine A., Michael Rich, R., & Fulbright, Jon P., 2009, ApJ, 698, 2048
* Mappelli et al. (2006) Mappelli M., Sigurdsson S., Ferraro F. R., Colpi M., Possenti A., & Lanzoni B., 2006, MNRAS, 373, 361
* McCrea (1964) McCrea, W. H., 1964, MNRAS, 128, 147
* Milone et al. (2008) Milone A. P. et al., 2008, ApJ, 673, 241
* Momany et al. (2007) Momany, Y., Held, E. V., Saviane, I., Zaggia, S., Rizzi, L., & Gullieuszik, M., 2007, A&A, 468, 973
* Moretti et al. (2009) Moretti A. et al., 2009, A&A, 493, 539
* Nataf et al. (2011) Nataf, D. M., Gould, A., Pinsonneault, M. H., & Stetson, P. B., 2011 ApJ, 736, 94
* Noble et al. (1991) Noble, R. G., Buttress, J., Griffiths, W. K., Dickens, R. J., & Penny, A. J., 1991, MNRAS, 250, 314
* Perets & Fabrycky (2009) Perets, H. B., & Fabrycky, D. C., 2009, ApJ, 697, 1048
* Piotto et al. (2004) Piotto, G. et al., 2004, ApJ, 604, L109
* Ruciński (1969) Ruciński, S. M., 1969, Acta Astronomica, 19, 245
* Sandage (1953) Sandage, A. R., 1953, AJ, 58, 61
* Thompson et al. (2001) Thompson, I. B., Kaluzny, J., Pych, W., Burley, G., Krzeminski, W., Paczyński, B., Persson, S. E., & Preston, G. W., 2001, AJ, 121, 3089
* Thompson et al. (2010) Thompson, I. B., Kaluzny, J., Rucinski, S. M., Krzeminski, W., Pych, W., Dotter, A., & Burley, G. S., 2010, AJ, 139, 329
* van Hamme (1993) van Hamme, W., 1993, AJ, 106, 2096
* von Zeipel (1924) von Zeipel, H., 1924, MNRAS, 84, 665
* Wilson (1990) Wilson, R. E., 1990, ApJ, 356, 613
* Wilson (1994) Wilson, R. E., 1994, PASP, 106, 921
* Wilson & Devinney (1971) Wilson, R. E., & Devinney, E. J., 1971, ApJ, 166, 605
* Wilson & Van Hamme (2003) Wilson, R. E., & Van Hamme, W., 2003, Computing Binary Stars Observables, 4th edn of the W-D program, available at ftp.astro.ufl.edu/pub/wilson/lcdc2003
* Worthey & Lee (2011) Worthey, G., & Lee, H.-c., 2011, ApJS, 193, 1
* Yıldız (2011) Yıldız, M., 2011, PASA, 28, 66
|
arxiv-papers
| 2013-03-04T04:58:24 |
2024-09-04T02:49:42.393820
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Kai Li and Shengbang Qian",
"submitter": "Kai Li",
"url": "https://arxiv.org/abs/1303.0602"
}
|
1303.0660
|
# Complete Analysis on the Short Distance Contribution of
$B_{s}\to\ell^{+}\ell^{-}\gamma$ in Standard Model
Wenyu Wang, Zhao-Hua Xiong and Si-Hong Zhou Institute of Theoretical Physics,
College of Applied Science, Beijing University of Technology, Beijing 100124,
China
###### Abstract
Using the $B_{s}$ meson wave function extracted from non-leptonic $B_{s}$
decays, we evaluate the short-distance contribution to the rare decays
$B_{s}\to\ell^{+}\ell^{-}\gamma~{}(\ell=e,\mu)$ in the standard model,
including all the possible diagrams. We focus on the contribution from four-
quark operators, which was not taken into account properly in previous
studies. We find that this contribution is large, enhancing the branching
ratio of $B_{s}\to\ell^{+}\ell^{-}\gamma$ by nearly a factor of 3, up to
$1.7\times 10^{-8}$. The predictions for these processes can be tested at
LHCb and the B factories in the near future.
###### pacs:
12.15.-y, 13.20.He
## I Introduction
The standard model (SM) of electroweak interactions has been remarkably
successful in describing physics below the Fermi scale and is in good
agreement with most experimental data. Among the most promising processes
for probing the quark-flavor sector of the SM are rare decays. These
decays, induced by flavor-changing neutral currents (FCNC) which occur in
the SM only at loop level, play an important role in the phenomenology of
particle physics and in the search for physics beyond the SM Aliev97 ;
Xiong01 . The observations of the penguin-induced decays $B\to X_{s}\gamma$ and
$B\to X_{s}\ell^{+}\ell^{-}(\ell=e,\mu)$ are in good agreement with the SM
predictions, and the first evidence for the decay $B_{s}\to\mu^{+}\mu^{-}$ was
reported at the end of 2012 Aaij:2012nna , putting strong constraints on
various extensions of the SM. These processes are also important in
determining parameters of the SM and hadronic parameters in QCD, such
as the CKM matrix elements and the meson decay constant $f_{B_{s}}$, and they provide
information on heavy meson wave functions.
Thanks to the Large Hadron Collider (LHC) at CERN we have entered a new era of
particle physics. On the experimental side, in the current early phase of the LHC
era, exclusive modes such as $B_{s}\to\ell^{+}\ell^{-}\gamma$
$(\ell=e,~{}\mu)$ are among the most promising decays due to their relative
cleanliness and sensitivity to models beyond the SM Aliev97 ; Xiong01 . On the
theoretical side, these modes are attractive because no helicity suppression
exists, so branching ratios as large as that of $B_{s}\to\mu^{+}\mu^{-}$ are
expected. There are mainly two kinds of contributions to
$B_{s}\to\ell^{+}\ell^{-}\gamma$ in the SM: the short-distance contribution,
which can be evaluated reliably in perturbation theory buras , and the
long-distance QCD effects describing the neutral vector-meson resonances
$\phi$ and the $J/\Psi$ family Melikhov04 ; Kruger03 ; Nikitin11 . As
for the short-distance contribution, it was assumed in previous works that it
suffices to attach a real photon to any charged external line in the Feynman
diagrams of $b\to s\ell^{+}\ell^{-}$, with the statement that contributions
from attaching the photon to any charged internal propagator are strongly
suppressed and can be neglected safely Aliev97 ; Xiong01 ; LU06 ; Eilam97 ,
i.e., one can easily obtain the amplitude of $B_{s}\to\ell^{+}\ell^{-}\gamma$
by using the effective weak Hamiltonian of $b\to s\ell^{+}\ell^{-}$ and the
matrix elements $\langle\gamma|\bar{s}O_{i}b|B_{s}\rangle$ with
$O_{i}=\gamma_{\mu}P_{L},\sigma_{\mu\nu}q^{\nu}P_{R}$ directly. Consequently,
contributions from the attachment of a real photon from the magnetic-penguin
vertex to any charged external line were omitted Aliev97 ; Xiong01 or
stated to be negligibly small LU06 . Another contribution, from loop insertions
of the lower-order four-quark operators, has also always been neglected. We note
that a seemingly complete calculation was performed in Melikhov04 ; however,
it mainly concentrated on the long-distance effects of the meson resonances,
whereas the short-distance contribution was analyzed incompletely. A
complete examination including all contributions to these processes in the SM is
needed.
As is well known, only the short-distance contribution can be reliably
predicted, and it is more important than the long-distance contribution from
the resonances, which is in fact partly excluded by cuts in the experimental
measurements. Recently we showed that the contributions from the attachment of
a real photon from the magnetic-penguin vertex to any charged external line can
enhance the branching ratios of $B_{s}\to\ell^{+}\ell^{-}\gamma$ by a factor of
about 2 Wang:2012na .
In this letter, we extend our previous studies and use the $B_{s}$ meson
wave function extracted from non-leptonic $B_{s}$ decays bs to re-evaluate the
short-distance contribution from all categories of diagrams of
$B_{s}\to\ell^{+}\ell^{-}\gamma$ decays. Special attention is paid to
the contribution from the four-quark operators, and a comparative study with
previous work is presented. The paper is organized as follows: in
Sec. II we analyze the full short-distance contribution and present the detailed
calculation of the exclusive decays $B_{s}\to\ell^{+}\ell^{-}\gamma$. The
numerical results and the comparative study are given in Sec. III, and the
conclusions are given in Sec. IV.
## II Complete analysis on short distance contributions
In order to simplify the decay amplitude for $B_{s}\to\ell^{+}\ell^{-}\gamma$,
we have to utilize the $B_{s}$ meson wave function, which is not known from
first principles and is model dependent. Fortunately, many studies of non-
leptonic $B$ bdecay ; cdepjc24121 and $B_{s}$ decays bs have constrained the
wave function strictly. It was found that the wave function has the form
$\Phi_{B_{s}}=(\not\\!p_{B_{s}}+m_{B_{s}})\gamma_{5}~{}\phi_{B_{s}}({x}),$ (1)
where the distribution amplitude $\phi_{B_{s}}(x)$ can be expressed as form :
$\phi_{B_{s}}(x)=N_{B_{s}}x^{2}(1-x)^{2}\exp\left(-\frac{m_{B}^{2}\
x^{2}}{2\omega_{b_{s}}^{2}}\right)$ (2)
with $x$ being the momentum fraction carried by the $s$ quark in the $B_{s}$ meson.
The normalization constant $N_{B_{s}}$ can be determined by comparing
$\displaystyle\langle
0\left|\bar{s}\gamma^{\mu}\gamma_{5}b\right|B_{s}\rangle=iN_{c}\int_{0}^{1}\phi_{B_{s}}(x)\,{\rm
Tr}\left[\gamma^{\mu}\gamma_{5}(\not\\!p_{B_{s}}+m_{B_{s}})\gamma_{5}\right]dx=-4N_{c}ip_{B_{s}}^{\mu}\int_{0}^{1}\phi_{B_{s}}(x)dx$
(3)
with $N_{c}$ being the number of colors and
$\displaystyle\langle
0\left|\bar{s}\gamma^{\mu}\gamma_{5}b\right|B_{s}\rangle=-if_{B_{s}}p_{B_{s}}^{\mu},$
(4)
the $B$ meson decay constant $f_{B_{s}}$ is thus determined by the condition
$\displaystyle\int_{0}^{1}\phi_{B_{s}}(x)dx=\frac{1}{4N_{c}}f_{B_{s}}.$ (5)
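As a numerical illustration, the sketch below fixes the normalization constant $N_{B_{s}}$ from condition (5), using the parameter values ($m_{B_{s}}$, $\omega_{B_{s}}$, $f_{B_{s}}$ and $N_{c}=3$) quoted later in Sec. III; it is an aid for the reader, not part of the derivation.

```python
# Illustrative numerical fix of N_Bs from Eq. (5): int_0^1 phi_Bs(x) dx = f_Bs/(4 N_c).
# Parameter values (m_Bs, omega_Bs, f_Bs in GeV) are those quoted in Sec. III.
import numpy as np
from scipy.integrate import quad

m_Bs, omega_Bs, f_Bs, N_c = 5.37, 0.5, 0.24, 3.0

def shape(x):
    """phi_Bs(x) / N_Bs, the unnormalized distribution amplitude of Eq. (2)."""
    return x**2 * (1 - x)**2 * np.exp(-m_Bs**2 * x**2 / (2 * omega_Bs**2))

integral, _ = quad(shape, 0.0, 1.0)
N_Bs = f_Bs / (4 * N_c * integral)
print(N_Bs)  # normalization constant fixed by the decay constant f_Bs
```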
Figure 1: Feynman diagrams without the effective vertex of $b\to s\gamma$
contributing to $B_{s}\to\gamma\gamma$.
Figure 2: Feynman diagrams with the effective vertex of $b\to s\gamma$
contributing to $B_{s}\to\gamma\gamma$. The black dot stands for the
magnetic-penguin operator $O_{7}$.
Let us start with the quark-level processes $B_{s}\to\ell^{+}\ell^{-}\gamma$,
which are subject to the QCD-corrected effective weak Hamiltonian. The general
effective Hamiltonian that describes the $b\to s$ transition is given by
$\displaystyle{\cal
H}_{eff}=-\frac{4G_{F}}{\sqrt{2}}V_{tb}V^{*}_{ts}\sum\limits_{j=1}^{10}C_{j}(\mu)O_{j}(\mu),$
(6)
where $O_{j}$ ($j=1,\dots,6$) stands for the four-quark operators; their
forms and the corresponding Wilson coefficients $C_{j}$ can be found in Ref.
Misiak93 .
Generally, to describe all the short-distance contributions to the process
$B_{s}\to\ell^{+}\ell^{-}\gamma$, new effective operators for $b\to
s\gamma\gamma$ which are not included in (6) should be introduced. The
corresponding Feynman diagrams without and with the effective vertex $b\to
s\gamma$ are shown in FIG. 1 and FIG. 2, respectively. When the di-lepton
line is connected to one of the photons, the $b\to s\gamma\gamma$ operators may
contribute to $B_{s}\to\ell^{+}\ell^{-}\gamma$. Contributions from such diagrams,
with a photon attached to internal charged lines, are usually regarded as
strongly suppressed by a factor $m_{b}^{2}/m_{W}^{2}$ and thus neglected safely
Aliev97 ; Xiong01 ; LU06 ; Eilam97 . However, as pointed out in Dong , the
conclusion is correct, but the explanation is not as usually described. Here we
state the reason more clearly: the contributions from diagrams FIG. 1 (a) and
FIG. 1 (b) are not suppressed. When the effective vertex of $b\to s\gamma$ is
applied to describe $b\to s\gamma\gamma$ as shown in FIG. 2, the internal quarks
in the effective vertex are off shell, and such off-shell effects are also not
suppressed. We have proven that these two non-suppressed effects in FIG. 1 and
FIG. 2 cancel each other exactly Dong . Therefore we can safely use the
effective operators listed in Eq. (6) for on-shell quarks to calculate the total
short-distance contributions to $B_{s}\to\ell^{+}\ell^{-}\gamma$ in the SM.
The Feynman diagrams contributing to $B_{s}\to\ell^{+}\ell^{-}\gamma$ at
parton level can then be classified into three kinds as follows:
1. 1.
Attaching a real photon to any charged external line in the Feynman diagrams
of $b\to s\ell^{+}\ell^{-}$;
2. 2.
Attaching a virtual photon to any charged external line in the Feynman
diagrams of $b\to s\gamma$, with the virtual photon converting into a lepton pair;
3. 3.
Attaching two photons to any charged external line in the Feynman diagrams of
the four-quark operators, with one of the two photons converting into a lepton pair.
Note that the third contribution was not considered in previous studies
except for Ref. Melikhov04 ; it is the focus of this paper, and its details are
shown in the following. We will also discuss these contributions separately.
Figure 3: Feynman diagrams that contribute to the matrix elements of
$B_{s}\to\ell^{+}\ell^{-}\gamma$ with the contribution of
$O_{7},~{}O_{9},~{}O_{10}$ at tree level.
### II.1 External real photon contributions
The Feynman diagrams of the first kind of contribution are shown in FIG. 3.
As for the contribution from FIG. 3 (c) and (d), with the photon attached to
external lepton lines, note that (i) being a pseudoscalar
meson, the $B_{s}$ meson can only decay through the axial current, so the
contribution of the magnetic-penguin operator $O_{7}$ vanishes; (ii) the
contribution from the operators $O_{9},\ O_{10}$ carries the helicity
suppression factor $m_{\ell}/m_{B_{s}}$, so for the light leptons (electron and
muon) we can neglect it safely. The diagrams in FIG. 3 (a) and (b) are usually
regarded as the dominant contributions, and they have been considered using
light-cone sum rules Aliev97 ; Xiong01 , the simple constituent quark model
Eilam97 , and the B meson distribution amplitude extracted from non-leptonic B
decays LU06 . We rewrite the amplitude of $B_{s}\to\ell^{+}\ell^{-}\gamma$ at
meson level as Wang:2012na :
$\displaystyle A_{\rm I}$ $\displaystyle=$ $\displaystyle
iN_{c}Gee_{d}\frac{1}{p_{B_{s}}\cdot
k}\biggl{\\{}\left[C_{1}i\epsilon_{\alpha\beta\mu\nu}p^{\alpha}_{B_{s}}\varepsilon^{\beta}{k^{\nu}}+C_{2}p_{B_{s}}^{\nu}(\varepsilon_{\mu}k_{\nu}-k_{\mu}\varepsilon_{\nu})\right]\bar{\ell}\gamma^{\mu}{\ell}$
(7)
$\displaystyle+C_{10}\left[C_{+}i\epsilon_{\alpha\beta\mu\nu}p^{\alpha}_{B_{s}}\varepsilon^{\beta}{k^{\nu}}+C_{-}p_{B_{s}}^{\nu}(\varepsilon_{\mu}k_{\nu}-k_{\mu}\varepsilon_{\nu})\right]\bar{\ell}\gamma^{\mu}\gamma_{5}\ell\biggl{\\}}.$
The form factors in Eq. (7) are found to be:
$\displaystyle C_{1}$ $\displaystyle=$ $\displaystyle
C_{+}\left(C_{9}^{eff}-2\frac{m_{b}m_{B_{s}}}{q^{2}}C_{7}^{eff}\right),$
$\displaystyle C_{2}$ $\displaystyle=$ $\displaystyle
C_{9}^{eff}C_{-}-2\frac{m_{b}m_{B_{s}}}{q^{2}}C_{7}^{eff}C_{+},$ (8)
with the constant $G=\alpha_{em}G_{F}V_{tb}V_{ts}^{*}/(\sqrt{2}\pi)$, and
$\displaystyle
C_{\pm}=\int_{0}^{1}\left(\frac{1}{x}\pm\frac{1}{y}\right)\phi_{B_{s}}(x)dx.$
(9)
The expression in Eq. (7) can be compared with Ref. LU06 .
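Once the distribution amplitude is fixed, the form-factor integrals $C_{\pm}$ of Eq. (9) can be evaluated numerically. In the sketch below $y$ is read as $1-x$, as the explicit expansion in Eq. (11) below indicates, and the integrands are rewritten so that no explicit $1/x$ or $1/(1-x)$ factor appears; this is an illustration only.

```python
# Illustrative evaluation of C_+ and C_- from Eq. (9), with y read as 1 - x.
# The integrands are written as x(1-x)^2 +/- x^2(1-x) times the Gaussian, so
# no endpoint division occurs. N_Bs is fixed from Eq. (5) as in the sketch above.
import numpy as np
from scipy.integrate import quad

m_Bs, omega_Bs, f_Bs, N_c = 5.37, 0.5, 0.24, 3.0
gauss = lambda x: np.exp(-m_Bs**2 * x**2 / (2 * omega_Bs**2))

norm, _ = quad(lambda x: x**2 * (1 - x)**2 * gauss(x), 0.0, 1.0)
N_Bs = f_Bs / (4 * N_c * norm)

C_plus, _ = quad(lambda x: N_Bs * (x*(1 - x)**2 + x**2*(1 - x)) * gauss(x), 0.0, 1.0)
C_minus, _ = quad(lambda x: N_Bs * (x*(1 - x)**2 - x**2*(1 - x)) * gauss(x), 0.0, 1.0)
print(C_plus, C_minus)
```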
Figure 4: Feynman diagrams of $b\to s\gamma$ with the virtual photon converting
into a lepton pair.
### II.2 External virtual photon contributions
The Feynman diagrams of the second kind of contribution are shown in FIG. 4.
Contributions from this kind of diagram are always neglected Aliev97 ; Xiong01
or stated to be negligibly small LU06 . Note that the $B_{s}$ meson wave functions
used in this work and in Ref. LU06 are both extracted from non-leptonic $B_{s}$
decays. However, as mentioned in the introduction, the authors of Ref. LU06 did
not present an expression for the contribution of FIG. 4 and only stated that the
contribution is numerically negligible. Such a statement seems questionable,
because the pole of the propagator of the charged line to which the photon
attaches may greatly enhance the decay rate, so that some diagrams cannot be
neglected in the calculation. In these two diagrams, the photon of the magnetic-
penguin operator is real, so its contribution to
$B_{s}\to\ell^{+}\ell^{-}\gamma$ differs from that of the first kind of
contributions. We obtain the amplitude Wang:2012na :
$\displaystyle A_{\rm
II}=i2N_{c}Gee_{d}C_{7}^{eff}\frac{m_{b}m_{B_{s}}}{q^{2}}\frac{1}{p_{B_{s}}\cdot
q}\overline{C}_{+}\left[k_{\mu}q\cdot\epsilon-\epsilon_{\mu}k\cdot
q-i\epsilon_{\mu\nu\alpha\beta}\epsilon^{\nu}k^{\alpha}q^{\beta}\right]\left[\bar{\ell}\gamma^{\mu}\ell\right],$
(10)
with coefficients $\overline{C_{+}}$ obtained by a replacement:
$\displaystyle\overline{C}_{+}$ $\displaystyle=$ $\displaystyle
C_{+}(x\to\bar{x}=x-z-i\epsilon;~{}y\to\bar{y}=y-z-i\epsilon)$ (11)
$\displaystyle=$ $\displaystyle
N_{B}\int_{0}^{1}dx({\frac{1}{x-z-i\epsilon}}+{\frac{1}{1-x-z-i\epsilon}})x^{2}(1-x)^{2}\exp\left[-\frac{m_{B_{s}}^{2}}{2\omega_{B_{s}}^{2}}x^{2}\right],$
where $z=\frac{q^{2}}{2p_{B_{s}}\cdot q}$ and the first and second terms in (11)
denote the contributions from FIG. 4 (a) and (b), respectively. Note that the
contribution from FIG. 3 (a) is much larger than that from (b) since
$m_{B_{s}}\gg\omega_{B_{s}}$ (see the next section), which is easily
understood in the simple constituent quark model Eilam97 , i.e.,
$\phi_{B_{s}}(x)=\delta(x-m_{s}/m_{B_{s}})$. However, the contributions from
FIG. 4 (a) and (b) are comparable, and the pole in $\overline{C}_{+}$ corresponds
to the pole of the quark propagator when it is connected to the off-shell
photon propagator. Thus the $\overline{C}_{+}$ term may enhance the decay rate
of $B_{s}\to\ell^{+}\ell^{-}\gamma$, and its analytic expression reads
$\displaystyle\overline{C}_{+}$ $\displaystyle=$ $\displaystyle 2N_{B_{s}}\pi
iz^{2}(1-z)^{2}\exp\left[-\frac{m_{B_{s}}^{2}}{2\omega_{B_{s}}^{2}}z^{2}\right]$
(12) $\displaystyle+$ $\displaystyle
N_{B_{s}}\int_{0}^{1}dx({\frac{1}{x+z}}-{\frac{1}{1+x-z}})x^{2}(1+x)^{2}\exp\left[-\frac{m_{B_{s}}^{2}}{2\omega_{B_{s}}^{2}}x^{2}\right]$
$\displaystyle-$ $\displaystyle
N_{B_{s}}\int_{-1}^{1}\left(\frac{1}{\frac{1}{x}-z}+\frac{1}{1-\frac{1}{x}-z}\right)\frac{dx}{x^{4}}(1-\frac{1}{x})^{2}\exp\left[-\frac{m_{B_{s}}^{2}}{2\omega_{B_{s}}^{2}}\frac{1}{x^{2}}\right].$
Figure 5: Feynman diagrams that contribute to the process
$B_{s}\to\ell^{+}\ell^{-}\gamma$ with possible insertions of $O_{1}$ to $O_{6}$
at loop level.
### II.3 Quark weak annihilation contributions
Now we focus on the contributions from the diagrams of the four-quark
operators, which were not considered properly in previous works. The Feynman
diagrams of the third kind of contribution are shown in FIG. 5. The operator
$O_{7}$ is of higher order than the four-quark operators $O_{1}-O_{6}$; thus the
contribution of loop diagrams with $O_{1}-O_{6}$ insertions should be
comparable with the tree-level electroweak penguin $O_{7}-O_{10}$
contributions listed above. To calculate the leading-order matrix elements of
$b\to s\gamma^{\ast}\gamma$ shown in FIG. 5, we express the decay amplitude
for $b(p_{b})\to s(p_{s})\gamma^{\ast}(k_{1})\gamma(k_{2})$ as
$\displaystyle A_{\rm III}(b\to
s\gamma^{*}\gamma)=i\frac{4G_{F}}{\sqrt{2}}V_{tb}V^{*}_{ts}\frac{e^{2}}{16\pi^{2}}\bar{s}(p_{s})T_{\mu\nu}b(p_{b})\epsilon^{\mu}(k_{1})\epsilon^{\nu}(k_{2}),$
(13)
where $p_{b,s}$ and $k_{1,2}$ denote the momenta of the quarks and photons,
respectively, and $\epsilon$ is the polarization vector of the photon. We split
the tensor $T^{\mu\nu}$ into momentum-odd and momentum-even parts for
simplicity. Keeping our physics goal in mind and without loss of generality,
we assume that the photon with momentum $k_{1}$ is virtual and drop the terms
proportional to $k_{2}^{\nu}$ in the expressions. After a straightforward
calculation, we obtain
$\displaystyle T_{\mu\nu}^{\rm odd}(q)$ $\displaystyle=$ $\displaystyle
e_{q}^{2}\left\\{\frac{1}{k_{1}k_{2}}\left[i\epsilon_{\nu\alpha\beta\lambda}\gamma^{\lambda}\gamma_{5}k_{1}^{\alpha}k_{2}^{\beta}\left[k_{1{\mu}}f_{2}(q)-k_{2\mu}f_{+}(q)\right]+i\epsilon_{\mu\alpha\beta\lambda}\gamma^{\lambda}\gamma_{5}k_{1}^{\alpha}k_{2}^{\beta}k_{1{\nu}}f_{+}(q)\right]\right.$
(14)
$\displaystyle\left.+i\epsilon_{\mu\nu\alpha\beta}\gamma^{\beta}\gamma_{5}[k_{1}^{\alpha}f_{+}(q)-k_{2}^{\alpha}f_{-}(q)]\right\\},$
$\displaystyle T_{\mu\nu}^{\rm even}(q)$ $\displaystyle=$
$\displaystyle-2\frac{e_{q}^{2}}{m_{q}}\left\\{(k_{1\nu}k_{2\mu}-k_{1}k_{2}g_{\mu\nu})\left[f_{3}(q)+(1-\frac{4}{z_{q}})f^{\prime}_{1}(q)\right]+i\epsilon_{\mu\nu\alpha\beta}\gamma_{5}k_{1}^{\alpha}k_{2}^{\beta}f_{1}^{\prime}(q)\right\\},$
(15)
where $q$ denotes the quark in the internal line to which the two photons are
attached, and $e_{q}$ is the electric charge of the quark in units of $e$. The
loop functions appearing in (15) have the forms
$\displaystyle f_{\pm}(q)$ $\displaystyle=$
$\displaystyle\frac{1}{2}+\frac{1}{z_{q}}\int_{0}^{1}\ln
g(z_{q},u_{q},x)\frac{\mathrm{d}x}{x}\mp\frac{u_{q}}{2z_{q}}\int_{0}^{1}\ln
g(z_{q},u_{q},x),$ $\displaystyle f_{1}(q)$ $\displaystyle=$
$\displaystyle\frac{1}{2}[f_{+}(q)+f_{-}(q)],\
f_{2}(q)=\frac{z_{q}}{2u_{q}}[f_{+}(q)-f_{-}(q)],$ $\displaystyle f_{3}(q)$
$\displaystyle=$
$\displaystyle\frac{2}{z_{q}}-\frac{2u_{q}}{z_{q}^{2}}\int_{0}^{1}\mathrm{ln}g(z_{q},u_{q},x)\mathrm{d}x,$
$\displaystyle g(z,u,x)$ $\displaystyle=$
$\displaystyle\frac{1-(u+z)x(1-x)}{1-ux(1-x)},$ (16)
with $f_{1}^{\prime}(q)=\frac{1}{2}-f_{1}(q)$, $z_{q}=\frac{2k_{1}\cdot
k_{2}}{m_{q}^{2}}$ and $u_{q}=\frac{k_{1}^{2}}{m_{q}^{2}}$. Writing the
amplitude in this way, one can easily infer that, for example, when the operators
$O_{j}$ ($j=1,\dots,4$) are inserted, only the $T_{\mu\nu}^{\rm odd}$ part can
contribute to $b\to s\gamma\gamma$, while the process receives contributions
from both parts when $O_{5,6}$ are inserted. A similar result for on-shell
photons, as in Ref. Cao01 , is easily obtained by setting $u_{q}=0$.
With the amplitude of $b\to s\gamma^{*}\gamma$ and the $B_{s}$ wave function
ready, we write the total contribution from FIG. 5 to the exclusive decay
$B_{s}(p_{B_{s}})\to\gamma(k)\ell^{+}\ell^{-}$ as
$\displaystyle A_{\rm
III}=-2ie\frac{f_{B_{s}}G}{q^{2}}\sum\limits_{j=1}^{6}C_{j}(m_{b})\left[T_{1}^{j}p_{B_{s}}^{\nu}(\epsilon_{\nu}k_{\mu}-\epsilon_{\mu}k_{\nu})+T_{2}^{j}i\epsilon_{\mu\nu\alpha\beta}p_{B_{s}}^{\alpha}k^{\beta}\epsilon^{\nu}\right][\bar{\ell}\gamma^{\mu}\ell],$
(17)
with the form factors given by
$\displaystyle T_{1}^{1}$ $\displaystyle=$ $\displaystyle
T_{1}^{2}=T_{1}^{3}=T_{1}^{4}=0;$ $\displaystyle T_{2}^{1}$ $\displaystyle=$
$\displaystyle N_{c}T_{2}^{2}=N_{c}e_{u}^{2}f_{1}(\bar{z}_{c},t);$
$\displaystyle T_{2}^{3}$ $\displaystyle=$ $\displaystyle
N_{c}\left\\{\sum\limits_{q=u,d,s,c,b}e_{q}^{2}f_{1}(\bar{z}_{q},t)+e_{d}^{2}[f_{1}(\bar{z}_{b},t)+f_{1}(\bar{z}_{s},t)]\right\\};$
$\displaystyle T_{2}^{4}$ $\displaystyle=$
$\displaystyle\sum\limits_{q=u,d,s,c,b}e_{q}^{2}f_{1}(\bar{z}_{q},t)+e_{d}^{2}N_{c}\left[f_{1}(\bar{z}_{b},t)+f_{1}(\bar{z}_{s},t)\right];$
$\displaystyle T_{1}^{5}$ $\displaystyle=$
$\displaystyle\frac{1}{N_{c}}T_{1}^{6}=2e_{d}^{2}\left\\{\frac{1}{\bar{z}_{b}^{1/2}}\left[f_{3}(\bar{z}_{b},t)+(1-\frac{4\bar{z}_{b}}{1-t})(\frac{1}{2}-f_{1}(\bar{z}_{b},t))\right]-(b\to
s)\right\\};$ $\displaystyle T_{2}^{5}$ $\displaystyle=$
$\displaystyle-N_{c}\sum\limits_{q=u,d,s,c,b}e_{q}^{2}f_{1}(\bar{z}_{q},t)-2e_{d}^{2}\left[\frac{1}{\bar{z}_{b}^{1/2}}(\frac{1}{2}-f_{1}(\bar{z}_{b},t))+(b\to
s)\right];$ $\displaystyle T_{2}^{6}$ $\displaystyle=$
$\displaystyle-\sum\limits_{q=u,d,s,c,b}e_{q}^{2}f_{1}(\bar{z}_{q},t)-2N_{c}e_{d}^{2}\left[\frac{1}{\bar{z}_{b}^{1/2}}(\frac{1}{2}-f_{1}(\bar{z}_{b},t))+(b\to
s)\right],$ (18)
where $q^{2}$ in Eq. (17) is the invariant mass squared of the lepton pair. The
functions can be obtained directly from (16) by redefining the parameters
$\bar{z}_{q}=m_{q}^{2}/m^{2}_{B_{s}}$ and $t=q^{2}/m^{2}_{B_{s}}$,
$\displaystyle f_{1}(\bar{z},t)$ $\displaystyle=$
$\displaystyle\frac{1}{2}+\frac{\bar{z}}{1-t}\int_{0}^{1}\frac{dx}{x}\ln[\frac{\bar{z}-x(1-x)}{\bar{z}-tx(1-x)}],$
(19) $\displaystyle f_{3}(\bar{z},t)$ $\displaystyle=$
$\displaystyle\frac{2\bar{z}}{1-t}\left\\{1-\frac{t}{1-t}\int_{0}^{1}{dx}\ln[\frac{\bar{z}-x(1-x)}{\bar{z}-tx(1-x)}]\right\\},$
(20)
with the explicit formulas needed in the calculation given by
$\displaystyle\int_{0}^{1}\frac{dy}{y}\ln[1-vy(1-y)-i\epsilon]$
$\displaystyle=$
$\displaystyle\left\\{\begin{array}[]{cc}-2\mathrm{arctan^{2}\sqrt{\frac{v}{4-v}}}&{\rm
for}~{}v<4;\\\
-\frac{\pi^{2}}{2}-2i\pi\ln\frac{\sqrt{v}+\sqrt{v-4}}{2}+2\ln^{2}\frac{\sqrt{v}+\sqrt{v-4}}{2}&{\rm
for}~{}v>4,\\\ \end{array}\right.,$ (23)
$\displaystyle\int_{0}^{1}dy\ln[1-vy(1-y)-i\epsilon]$ $\displaystyle=$
$\displaystyle-2+|1-x|^{1/2}\left\\{\begin{array}[]{cc}\ln\left|\frac{1+\sqrt{1-x}}{1-\sqrt{1-x}}\right|-i\pi&{\rm
for}~{}x=4/v<1;\\\ 2\arctan\frac{1}{\sqrt{x-1}}&{\rm
for}~{}x=4/v>1.\end{array}\right.$ (26)
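For completeness, a numerical sketch of the loop functions $f_{1}(\bar{z},t)$ and $f_{3}(\bar{z},t)$ of Eqs. (19)-(20) is given below: the $-i\epsilon$ prescription is mimicked by a small imaginary shift, so the analytic continuations of Eqs. (23) and (26) are reproduced automatically. The quark mass and $q^{2}$ values in the example are illustrative choices.

```python
# Illustrative numerical evaluation of the loop functions f_1 and f_3 of
# Eqs. (19)-(20); zbar = m_q^2 / m_Bs^2 and t = q^2 / m_Bs^2.
import numpy as np
from scipy.integrate import quad

EPS = 1e-12  # numerical stand-in for the -i*epsilon prescription

def log_ratio(x, zbar, t):
    """ln[(zbar - x(1-x)) / (zbar - t x(1-x))] with a small -i*epsilon shift."""
    return np.log((zbar - x*(1 - x) - 1j*EPS) / (zbar - t*x*(1 - x) - 1j*EPS))

def complex_quad(func):
    """Integrate a complex-valued integrand over [0, 1]."""
    re, _ = quad(lambda x: func(x).real, 0.0, 1.0)
    im, _ = quad(lambda x: func(x).imag, 0.0, 1.0)
    return re + 1j*im

def f1(zbar, t):
    return 0.5 + zbar/(1 - t) * complex_quad(lambda x: log_ratio(x, zbar, t)/x)

def f3(zbar, t):
    return 2*zbar/(1 - t) * (1 - t/(1 - t) * complex_quad(lambda x: log_ratio(x, zbar, t)))

# Example: b quark in the loop (m_b = 4.2 GeV, m_Bs = 5.37 GeV), q^2 = 1 GeV^2
zbar_b, t = (4.2/5.37)**2, 1.0/5.37**2
print(f1(zbar_b, t), f3(zbar_b, t))
```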
From Eq. (17) it is clear that the contribution of the four-quark operators to
$B_{s}\to\gamma\ell^{+}\ell^{-}$ has an expression similar to that of the
magnetic-penguin operator with a real photon in
$B_{s}\to\ell^{+}\ell^{-}\gamma$. Thus the total matrix element for the decay
$B_{s}\to\ell^{+}\ell^{-}\gamma$, including the contributions from the three kinds of
diagrams, can be obtained easily by a shift of the form factors:
$\displaystyle\overline{C}_{1}$ $\displaystyle=$ $\displaystyle
C_{+}\left[C_{9}^{eff}-\frac{2m_{b}m_{B_{s}}}{q^{2}}C_{7}^{eff}\left(1+\frac{p_{B_{s}}\cdot
k}{p_{B_{s}}\cdot q}\right)\right]-2\frac{p_{B_{s}}\cdot
k}{q^{2}}\frac{f_{B_{s}}}{e_{d}}\sum\limits_{j=1}^{6}C_{j}T_{2}^{j},$ (27)
$\displaystyle\overline{C}_{2}$ $\displaystyle=$ $\displaystyle
C_{9}^{eff}C_{-}-\frac{2m_{b}m_{B_{s}}}{q^{2}}C_{7}^{eff}C_{+}\left(1+\frac{p_{B_{s}}\cdot
k}{q^{2}}\right)+2\frac{p_{B_{s}}\cdot
k}{q^{2}}\frac{f_{B_{s}}}{e_{d}}\sum\limits_{j=1}^{6}C_{j}T_{1}^{j}.$ (28)
Finally, we get the differential decay width versus the photon energy
$E_{\gamma}$,
$\displaystyle\frac{d\Gamma}{dE_{\gamma}}$ $\displaystyle=$
$\displaystyle\frac{\alpha_{em}^{3}{G^{2}_{F}}}{108\pi^{4}}|V_{tb}V^{*}_{ts}|^{2}(m_{B_{s}}-2E_{\gamma})E_{\gamma}\left[|\overline{C}_{1}|^{2}+|\overline{C_{2}}|^{2}+C_{10}^{2}(|C_{+}|^{2}+|C_{-}|^{2})\right].$
(29)
## III Results and discussions
The decay branching ratios can be easily obtained by integrating over photon
energy. In the numerical calculations, we use the following parameters PDG2012
:
$\alpha_{em}=\frac{1}{137},~{}G_{F}=1.166\times 10^{-5}{\rm
GeV}^{-2},~{}m_{b}=4.2{\rm GeV},$
$|V_{tb}|=0.999,~{}|V_{ts}|=0.04,~{}|V_{td}|=0.0084$ $m_{B_{s}}=5.37{\rm
GeV},~{}\omega_{B_{s}}=0.5,~{}f_{B_{s}}=0.24{\rm
GeV},~{}\tau_{B_{s}}=1.47\times 10^{-12}s.$ $m_{B_{d}^{0}}=5.28{\rm
GeV},~{}\omega_{B_{d}}=0.4,~{}f_{B_{d}}=0.19{\rm
GeV},~{}\tau_{B_{d}}=1.53\times 10^{-12}s.$
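Schematically, the branching ratio follows from integrating Eq. (29) over the photon energy and multiplying by the $B_{s}$ lifetime. In the sketch below the form factors $\overline{C}_{1}$, $\overline{C}_{2}$, $C_{\pm}$ and the coefficient $C_{10}$ are placeholder inputs to be built from Eqs. (27)-(28) and (9); only the kinematic prefactor and the parameter values listed above come from the text, and the value of $\hbar$ and the lower photon-energy cut are assumptions made for illustration.

```python
# Schematic branching ratio from Eq. (29): integrate dGamma/dE_gamma over the
# photon energy and multiply by tau_Bs / hbar. The form factors C1bar, C2bar,
# Cp, Cm and the coefficient C10 are placeholder callables of E_gamma, to be
# constructed from Eqs. (27)-(28) and (9).
import numpy as np
from scipy.integrate import quad

alpha_em = 1.0 / 137.0
G_F = 1.166e-5              # GeV^-2
Vtb, Vts = 0.999, 0.04
m_Bs = 5.37                 # GeV
tau_Bs = 1.47e-12           # s
HBAR = 6.582e-25            # GeV s (assumed conversion constant)

def dGamma_dE(E, C1bar, C2bar, C10, Cp, Cm):
    """Differential decay width of Eq. (29) at photon energy E (GeV)."""
    pref = alpha_em**3 * G_F**2 / (108 * np.pi**4) * (Vtb * Vts)**2
    return pref * (m_Bs - 2*E) * E * (abs(C1bar(E))**2 + abs(C2bar(E))**2
                                      + C10**2 * (abs(Cp(E))**2 + abs(Cm(E))**2))

def branching_ratio(C1bar, C2bar, C10, Cp, Cm, E_min=0.1):
    # E_min is an arbitrary illustrative photon-energy cut
    width, _ = quad(dGamma_dE, E_min, m_Bs / 2, args=(C1bar, C2bar, C10, Cp, Cm))
    return width * tau_Bs / HBAR
```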
The branching ratios of $B_{s}\to\gamma\ell^{+}\ell^{-}$ are shown in Table 1,
together with the results for $B_{d,s}\to\gamma\ell^{+}\ell^{-}$ from this work
and our previous research for comparison. The errors shown in Table 1 come from
the heavy meson wave function, obtained by varying the parameters
$\omega_{B_{d}}=0.4\pm 0.1$ and $\omega_{B_{s}}=0.5\pm 0.1$ LU06 . Note that the
predicted branching ratios receive uncertainties from many other parameters,
such as the meson decay constants and the meson and quark masses.
Table 1: Comparison of branching ratios (in units of $10^{-9}$) with previous calculations
Branching ratios ($\times 10^{-9}$) | First kind of diagrams | First two kinds of diagrams | All diagrams included
---|---|---|---
$B_{s}\to\gamma\ell^{+}\ell^{-}$ | $5.16_{-1.38}^{+2.42}$ | $10.2_{-2.51}^{+4.11}$ | $17.36_{-2.63}^{+4.55}$
$B_{d}^{0}\to\gamma\ell^{+}\ell^{-}$ | $0.21_{-0.06}^{+0.14}$ | $0.40_{-0.13}^{+0.26}$ | $0.53_{-0.12}^{+0.26}$
From the numerical results we conclude that, unlike in the decay $B\to
X_{s}\gamma\gamma$ where the four-quark operators contribute only a few
percent to the branching ratio Cao01 , the contribution of the four-quark
operators to $B_{s}\to\gamma\ell^{+}\ell^{-}$ is large. This can be understood
as follows:
1. 1.
As pointed out in Ref. LU06 , the radiative leptonic decays are very sensitive
probes for extracting the heavy meson wave functions;
2. 2.
Values of the Wilson coefficients $C_{j}(m_{b})$ ($j=3,\dots,6$) are of order
$10^{-2}$, indicating that the contribution of the corresponding operators via
$T^{j}$ is less important than that from $O_{1,2}$;
3. 3.
From Eq. (18), one can easily infer that the four-quark contributions to the
form factors in (28) carry the coefficient
$(N_{c}C_{1}+C_{2})T_{2}^{2}f_{B_{s}}/e_{d}$, while the contribution from the
magnetic-penguin operator with a real photon carries $m_{b}m_{B_{s}}/(p_{B_{s}}\cdot
q)C_{7}^{eff}C_{+}$. Note that $(N_{c}C_{1}+C_{2})/e_{d}$ and $C_{7}^{eff}$ can be
comparable, entering with the same sign in $\overline{C}_{1}$ and with opposite
signs in $\overline{C}_{2}$. Moreover, since $T_{1}^{j}=0$ for $j=1,2$, a
contribution comparable to those studied in this work and in Ref. Wang:2012na is
expected, leading to an enhancement of the branching ratios of
$B_{s}\to\gamma\ell^{+}\ell^{-}$ when the new diagrams are taken into account.
4. 4.
The predicted short-distance contributions from quark weak annihilation and
from the magnetic-penguin operator with a real photon to the exclusive decay
are large; the branching ratios of $B_{s}\to\ell^{+}\ell^{-}\gamma$ are
enhanced by nearly a factor of 3 compared with the contribution from the
magnetic-penguin operator with a virtual photon alone, reaching $1.7\times
10^{-8}$, implying that the search for $B_{s}\to\ell^{+}\ell^{-}\gamma$ may
succeed in the near future.
5. 5.
Due to the large contributions from the magnetic-penguin operator with a real
photon and from quark weak annihilation, the form factors for the matrix elements
$\langle\gamma|\bar{s}\gamma_{\mu}(1-\gamma_{5})b|B_{s}\rangle$ and
$\langle\gamma|\bar{s}\sigma_{\mu\nu}(1\pm\gamma_{5})q^{\nu}b|B_{s}\rangle$, as
functions of the dilepton mass squared $q^{2}$, are complicated and not as simple as
$1/(q^{2}-q_{0}^{2})^{2}$ with $q_{0}^{2}$ constant Eilam95 . The
$B_{s}\to\gamma$ transition form factors predicted in this work also differ
somewhat from those in Refs. Melikhov04 ; Kruger03 ; Nikitin11 . For
instance, Ref. Kruger03 predicted that the form factors $F_{TV}(q^{2},0)$ and
$F_{TA}(q^{2},0)$, induced by the tensor and pseudotensor currents with direct
emission of the virtual photon from the quarks, are equal only at maximum photon
energy, whereas the corresponding quantities in this work have the same
expression, $-\frac{e_{d}N_{c}m_{B_{s}}}{p_{B_{s}}\cdot k}C_{+}\propto
1/(q^{2}-q_{0}^{2})$, in Eq. (7). Furthermore, the form factors are larger than
previous predictions.
To clarify things further, we think it is necessary to make a few more
comments on the calculation of Ref. Melikhov04 , as mentioned in the
introduction. In order to estimate the contribution of direct emission of the
real photon from the quarks, the authors of Ref. Melikhov04 calculated the form
factors $F_{TA,TV}(0,q^{2})$ by including the short-distance contribution in the
$q^{2}\to 0$ limit and an additional long-distance contribution from
vector-meson resonances such as $\rho^{0}$, $\omega^{0}$ for $B_{d}$ decay
and $\phi$ for $B_{s}$ decay. Obviously, this means that the short-distance
contributions were not taken into account appropriately. Moreover, if
$F_{TA,TV}(0,q^{2})=F_{TA,TV}(0,0)$ stands for the short-distance
contribution, it seems to involve double counting, since in this case photons
emitted from the magnetic-penguin vertex and directly from the quark lines
cannot be distinguished.
We also note that, for the contribution from weak annihilation, the authors of
Ref. Melikhov04 only took into account $u$ and $c$ quarks in the loop through
the axial anomaly as a long-distance contribution, and they concluded that the
anomalous contribution is suppressed by a power of the heavy quark mass. We
believe that accounting for the weak-annihilation contribution through the
anomaly alone is insufficient. Our numerical results show that the
contributions from the weak-annihilation diagrams are large and cannot be
neglected.
## IV Conclusion
In summary, we have evaluated the short-distance contributions to the rare decays
$B_{s}\to\gamma\ell^{+}\ell^{-}$ in the SM, including contributions from all
three kinds of diagrams. We focused on the contribution from the four-quark
operators, which was not taken into account properly in previous studies. We
found that this contribution is large, enhancing the branching ratio of
$B_{s}\to\ell^{+}\ell^{-}\gamma$ by nearly a factor of 3. In the
current early phase of the LHC era, the exclusive modes with muon final states
are among the most promising decays. Although there are some theoretical
challenges in the calculation of the hadronic form factors and non-factorizable
corrections, with the predicted branching ratio at the order of $10^{-8}$,
$B_{s}\to\mu^{+}\mu^{-}\gamma$ can be expected to be the next goal after
$B_{s}\to\mu^{+}\mu^{-}$, since the final states can be identified easily and
the branching ratios are large. Experimentally, the $B_{s}\to\mu^{+}\mu^{-}\gamma$
mode is one of the main backgrounds to $B_{s}\to\mu^{+}\mu^{-}$ and is therefore
already taken into account in $B_{s}\to\mu^{+}\mu^{-}$ searches
Aaij:2012nna . Our predictions for these processes can be tested at LHCb
and the B factories in the near future.
###### Acknowledgements.
This work was supported in part by the NSFC under Grants No. 11005006 and No. 11172008.
## References
* (1) T.M. Aliev, A. Ozpineci, M. Savci, Phys. Rev. D 55,7059 (1997).
* (2) Z. Heng, R. J. Oakes, W. Wang, Z.H. Xiong and J. M. Yang, Phys. Rev. D 77, 095012 (2008); G. Lu, Z.H. Xiong, Y. Cao, Nucl. Phys. B 487, 43 (1997); Z.H. Xiong, J. M. Yang, Nucl. Phys. B 602, 289 (2001) and Nucl.Phys. B 628, 193 (2002); G. Lu et. al., Phys. Rev. D 54, 5647 (1996); Z. H. Xiong, High Energy Phys. Nucl. Phys. 30, 284 (2006).
* (3) R. Aaij et al. [LHCb Collaboration], Phys. Rev. Lett. 110, 021801 (2013) [arXiv:1211.2674].
* (4) G. Buchalla, A. J. Buras, M. E. Lautenbacher, Rev. Mod. Phys. 68, 1125.
* (5) D. Melikhov and N. Nikitin, Phys. Rev. D70,114028, (2004).
* (6) F.Kruger and D.Melikhov, Phys.Rev. D67, 034002 (2003);
* (7) N. Nikitin, I. Balakireva, and D. Melikhov, arXiv:1101.4276v1 [hep-ph]; C.Q. Geng, C.C. Lih, W.M. Zhang, Phys.Rev. D62, 074017 (2000); Y. Dincer and L.M. Sehgal, Phys.Lett. B521, 7 (2001); S.Descotes-Genon, C.T. Sachrajda, Phys.Lett. B557, 213 (2003).
* (8) G. Eilam, C.-D. Lü and D.-X. Zhang, Phys. Lett. B 391, 461 (1997); G. Erkol and G. Turan, Phys. Rev. D 65, 094029 (2002).
* (9) J.-X. Chen, Z.Y. Hou , C.-D. Lü, Commun. Theor. Phys. 47, 299 (2007).
* (10) W. Wang, Z. -H. Xiong and S. -H. Zhou, arXiv:1207.1978 [hep-ph].
* (11) Y. Li, C.D. Lü, Z.J. Xiao, and X.Q. Yu, Phys. Rev. D 70, 034009 (2004); X.Q Yu, Y. Li and C.D. Lü, Phys. Rev. D 71, 074026 (2005) and 2006 Phys. Rev. D 73, 017501.
* (12) C.-D. Lü, K. Ukai and M.-Z. Yang, Phys. Rev. D 63, 074009 (2001); Y.-Y. Keum, H.-n. Li and A.I. Sanda, Phys. Lett. B 504, 6 (2001), Phys. Rev. D 63, 054008 (2001); H.-n. Li, 2001 Phys. Rev. D 64, 014019 (2001); S. Mishima, Phys. Lett. B 521, 252 (2001); C.-H. Chen, Y.-Y. Keum, and H.-n. Li, Phys. Rev. D64, 112002 (2001) and Phys. Rev. D 66, 054013 (2002).
* (13) C.D. Lü, Eur. Phys. J. C 24, 121 (2002) and Phys. Rev. D 68, 097502 (2003); Y.-Y. Keum and A.I. Sanda, Phys. Rev. D 67, 054009 (2003); Y.-Y. Keum, et al., Phys. Rev. D 69, 094018 (2004); J. Zhu, Y.L. Shen and C.D. Lü, 2005 Phys. Rev. D 72, 054015, and Eur. Phys. J. C41, 311 (2005); Y. Li and C.D. Lü, Phys. Rev. D73, 014024 (2006); C.D. Lü, M. Matsumori, A.I. Sanda, and M.Z. Yang, Phys. Rev. D 72, 094005 (2005).
* (14) C.-D. Lü and M.-Z. Yang, 2003 Eur. Phys. J. C 28, 515.
* (15) M. Misiak, Nucl. Phys. B 393, 23 (1993).
* (16) F.L. Dong, W.Y. Wang, and Z.H. Xiong, Chin. Phys. C 35, 6 (2011).
* (17) J.J. Cao, Z.J. Xiao and G.R. Lu, Phys. Rev. D 64, 014012 (2001); L. Reina, G. Ricciardi, and A. Soni, Phys. Rev. D 56, 5805 (1997).
* (18) G. Eilam, I. Halperin and R. R. Mendel, Phys. Lett. B 361, 137 (1995).
* (19) K. Nakamura et al. (Particle Data Group), J. Phys. G 37, 075021 (2010) and 2011 partial update for the 2012 edition.
|
arxiv-papers
| 2013-03-04T09:59:02 |
2024-09-04T02:49:42.404081
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Wenyu Wang, Zhao-Hua Xiong and Si-Hong Zhou",
"submitter": "Wenyu Wang",
"url": "https://arxiv.org/abs/1303.0660"
}
|
1303.0704
|
# DISTRIBUTED ALLOCATION OF MOBILE SENSING SWARMS IN GYRE FLOWS
KENNETH MALLORY, M. ANI HSIEH, ERIC FORGOSTON, and IRA B. SCHWARTZ
([email protected])
[1] SAS Lab, Drexel University, Philadelphia, PA 19104, USA
[2] Department of Mathematical Sciences, Montclair State University, Montclair, NJ 07043, USA
[3] Nonlinear Systems Dynamics Section, Plasma Physics Division, Code 6792, U.S. Naval Research Lab, Washington, DC 20375, USA
###### Abstract
We address the synthesis of distributed control policies to enable a swarm of
homogeneous mobile sensors to maintain a desired spatial distribution in a
geophysical flow environment, or workspace. In this article, we assume the
mobile sensors (or robots) have a “map” of the environment denoting the
locations of the Lagrangian coherent structures or LCS boundaries. Using this
information, we design agent-level hybrid control policies that leverage the
surrounding fluid dynamics and inherent environmental noise to enable the team
to maintain a desired distribution in the workspace. We discuss the stability
properties of the ensemble dynamics of the distributed control policies. Since
realistic quasi-geostrophic ocean models predict double-gyre flow solutions,
we use a wind-driven multi-gyre flow model to verify the feasibility of the
proposed distributed control strategy and compare the proposed control
strategy with a baseline deterministic allocation strategy. Lastly, we
validate the control strategy using actual flow data obtained by our coherent
structure experimental testbed.
Geophysical flows are naturally stochastic and aperiodic, yet exhibit coherent
structure. Coherent structures are of significant importance since knowledge
of them enables the prediction and estimation of the underlying geophysical
fluid dynamics. In realistic ocean flows, these time-dependent coherent
structures, or Lagrangian coherent structures (LCS), are similar to
separatrices that divide the flow into dynamically distinct regions, and are
essentially extensions of stable and unstable manifolds to general time-
dependent flows (Haller and Yuan, 2000). As such, they encode a great deal of
global information about the dynamics and transport of the fluidic
environment. For two-dimensional (2D) flows, ridges of locally maximal finite-
time Lyapunov exponent (FTLE) (Shadden et al., 2005) values correspond, to a
good approximation (though see (Haller, 2011)), to Lagrangian coherent
structures. Details regarding the derivation of the FTLE can be found in the
literature Haller (2000, 2001, 2002); Shadden et al. (2005); Lekien et al.
(2007); Branicki and Wiggins (2010).
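For readers who wish to reproduce FTLE fields like those used later in this paper, the following is a minimal sketch of the standard finite-difference recipe from the references above (advect a grid of tracers for a time $T$, form the flow-map gradient, and take the largest eigenvalue of the Cauchy-Green tensor); it is not an implementation described in this work, and forward (repelling) versus backward (attracting) structures are selected by the sign of $T$.

```python
# Sketch of a finite-difference FTLE computation on a regular grid.
# velocity(t, q) is any planar velocity field, e.g. the gyre model of Eq. (2).
import numpy as np
from scipy.integrate import solve_ivp

def flow_map(velocity, q0, t0, T):
    """Advect a single tracer from q0 over [t0, t0 + T] and return its endpoint."""
    sol = solve_ivp(velocity, (t0, t0 + T), q0, rtol=1e-6, atol=1e-9)
    return sol.y[:, -1]

def ftle_field(velocity, xs, ys, t0, T):
    """FTLE on the grid xs x ys: sigma = ln(sqrt(lambda_max(C))) / |T|."""
    X, Y = np.meshgrid(xs, ys, indexing="ij")
    adv = np.zeros(X.shape + (2,))
    for i in range(X.shape[0]):
        for j in range(X.shape[1]):
            adv[i, j] = flow_map(velocity, [X[i, j], Y[i, j]], t0, T)
    ftle = np.zeros(X.shape)
    dx, dy = xs[1] - xs[0], ys[1] - ys[0]
    for i in range(1, X.shape[0] - 1):
        for j in range(1, X.shape[1] - 1):
            # flow-map gradient by central differences
            F = np.array([
                [(adv[i+1, j, 0] - adv[i-1, j, 0]) / (2*dx),
                 (adv[i, j+1, 0] - adv[i, j-1, 0]) / (2*dy)],
                [(adv[i+1, j, 1] - adv[i-1, j, 1]) / (2*dx),
                 (adv[i, j+1, 1] - adv[i, j-1, 1]) / (2*dy)]])
            C = F.T @ F                          # Cauchy-Green tensor
            ftle[i, j] = np.log(np.sqrt(np.linalg.eigvalsh(C)[-1])) / abs(T)
    return ftle
```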
Recent years have seen the use of autonomous underwater and surface vehicles
(AUVs and ASVs) for persistent monitoring of the ocean to study the dynamics
of various biological and physical phenomena, such as plankton assemblages
(Caron et al., 2008), temperature and salinity profiles (Lynch et al., 2008;
Wu and Zhang, 2011; Sydney and Paley, 2011), and the onset of harmful algae
blooms (Zhang et al., 2007; Chen et al., 2008; Das et al., 2011). These
studies have mostly focused on the deployment of single, or small numbers of,
AUVs working in conjunction with a few stationary sensors and ASVs. While data
collection strategies in these works are driven by the dynamics of the
processes they study, most existing works treat the effect of the surrounding
fluid as solely external disturbances (Das et al., 2011; Williams and
Sukhatme, 2012), largely because of our limited understanding of the
complexities of ocean dynamics. Recently, LCS have been shown to coincide with
optimal trajectories in the ocean which minimize the energy and the time
needed to traverse from one point to another (Inanc et al., 2005; Senatore and
Ross, 2008). While recent works have begun to consider the dynamics of the
surrounding fluid in the development of fuel-efficient navigation strategies
(Lolla et al., 2012; DeVries and Paley, 2011), they rely mostly on historical
ocean flow data and do not employ knowledge of LCS boundaries.
A drawback to operating both active and passive sensors in time-dependent and
stochastic environments like the ocean is that the sensors will escape from
their monitoring region of interest with some finite probability. This is
because the escape likelihood of any given sensor is not only a function of
the unstable environmental dynamics and inherent noise, but also the amount of
control effort available to the sensor. Since the LCS are inherently unstable
and denote regions of the flow where escape events occur with higher
probability (Forgoston et al., 2011), knowledge of the LCS are of paramount
importance in maintaining a sensor in a particular monitoring region.
In order to maintain stable patterns in unstable flows, the objective of this
work is to develop decentralized control policies for a team of autonomous
underwater vehicles (AUVs) and/or mobile sensing resources to maintain a
desired spatial distribution in a fluidic environment. Specifically, we devise
agent-level control policies which allow individual AUVs to leverage the
surrounding fluid dynamics and inherent environmental noise to navigate from
one dynamically distinct region to another in the workspace. While our agent-
level control policies are devised using a priori knowledge of
manifold/coherent structure locations within the region of interest, execution
of these control strategies by the individual robots is achieved using only
information that can be obtained via local sensing and local communication
with neighboring AUVs. As such, individual robots do not require information
on the global dynamics of the surrounding fluid. The result is a distributed
allocation strategy that minimizes the overall control-effort employed by the
team to maintain the desired spatial formation for environmental monitoring
applications.
While this problem can be formulated as a multi-task (MT), single-robot (SR),
time-extended assignment (TA) problem (Gerkey and Mataric, 2004), existing
approaches do not take into account the effects of fluid dynamics coupled with
the inherent environmental noise (Gerkey and Mataric, 2002; Dias et al., 2006;
Dahl et al., 2006; Hsieh et al., 2008; Berman et al., 2008). The novelty of
this work lies in the use of nonlinear dynamical systems tools and recent
results in LCS theory applied to collaborative robot tracking (Hsieh et al.,
2012) to synthesize distributed control policies that enable AUVs to maintain
a desired distribution in a fluidic environment.
The paper is structured as follows: We formulate the problem and outline key
assumptions in Section 1. The development of the distributed control strategy
is presented in Section 2 and its theoretical properties are analyzed in
Section 3. Section 4 presents our simulation methodology, results, and
discussion. We end with conclusions and directions for future work in Section
5.
## 1 Problem Formulation
Consider the deployment of $N$ mobile sensing resources (AUVs/ASVs) to monitor
$M$ regions in the ocean. The objective is to synthesize agent-level control
policies that will enable the team to autonomously maintain a desired
distribution across the $M$ regions in a dynamic and noisy fluidic
environment. We assume the following kinematic model for each AUV:
$\dot{\mathbf{q}}_{k}=\mathbf{u}_{k}+\mathbf{v}^{f}_{\mathbf{q}_{k}}\quad
k\in\\{1,\ldots,N\\},$ (1)
where $\mathbf{q}_{k}=[x_{k},\,y_{k},\,z_{k}]^{T}$ denotes the vehicle’s
position, $\mathbf{u}_{k}$ denotes the $3\times 1$ control input vector, and
$\mathbf{v}^{f}_{\mathbf{q}_{k}}$ denotes the velocity of the fluid
experienced/measured by the $k^{th}$ vehicle.
In this work, we limit our discussion to 2D planar flows and motions and thus
we assume $z_{k}$ is constant for all $k$. As such,
$\mathbf{v}^{f}_{\mathbf{q}_{k}}$ is a sample of a 2D vector field denoted by
$F(\mathbf{q})$ at $\mathbf{q}_{k}$ whose $z$ component is equal to zero,
i.e., $F_{z}(\mathbf{q})=0$, for all $\mathbf{q}$. Since realistic quasi-
geostrophic ocean models exhibit multi-gyre flow solutions, we assume
$F(\mathbf{q})$ is provided by the 2D wind-driven multi-gyre flow model given
by
$\displaystyle\dot{x}=-\pi A\sin(\pi\frac{f(x,t)}{s})\cos(\pi\frac{y}{s})-\mu
x+\eta_{1}(t),$ (2a) $\displaystyle\dot{y}=\pi
A\cos(\pi\frac{f(x,t)}{s})\sin(\pi\frac{y}{s})\frac{df}{dx}-\mu
y+\eta_{2}(t),$ (2b) $\displaystyle\dot{z}=0,$ (2c) $\displaystyle
f(x,t)=x+\varepsilon\sin(\pi\frac{x}{2s})\sin(\omega t+\psi).$ (2d)
When $\varepsilon=0$, the multi-gyre flow is time-independent, while for
$\varepsilon\neq 0$, the gyres undergo a periodic expansion and contraction in
the $x$ direction. In (2), $A$ approximately determines the amplitude of the
velocity vectors, $\omega/2\pi$ gives the oscillation frequency, $\varepsilon$
determines the amplitude of the left-right motion of the separatrix between
the gyres, $\psi$ is the phase, $\mu$ determines the dissipation, $s$ scales
the dimensions of the workspace, and $\eta_{i}(t)$ describes a stochastic
white noise with mean zero and standard deviation $\sigma=\sqrt{2I}$, for
noise intensity $I$. Figures 1(a) and 1(b) show the vector field of a two-gyre model
and the corresponding FTLE curves for the time-dependent case.
Figure 1: (a) Vector field and (b) FTLE field of the model given by (2) for
two gyres with $A=10$, $\mu=0.005$, $\varepsilon=0.1$, $\psi=0$, $I=0.01$, and
$s=50$. LCS are characterized by regions with maximum FTLE measures (denoted
by red). In 2D flows, regions with maximum FTLE measures correspond to 1D
curves.
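A sketch of the deterministic part of the flow model (2), written as a velocity field compatible with the FTLE sketch above, is given below; the stochastic terms $\eta_{i}(t)$ would be added separately as zero-mean white noise of intensity $I$ when simulating vehicle trajectories, and the oscillation frequency $\omega$ used in the example is an arbitrary illustrative value.

```python
# Sketch of the deterministic part of the wind-driven multi-gyre model, Eq. (2).
import numpy as np

def gyre_velocity(A=10.0, mu=0.005, eps=0.1, omega=2*np.pi/10, psi=0.0, s=50.0):
    """Return velocity(t, q) implementing Eqs. (2a)-(2d) without the noise terms.

    Default A, mu, eps, psi, s follow Fig. 1; omega is an illustrative choice.
    """
    def velocity(t, q):
        x, y = q
        f = x + eps * np.sin(np.pi * x / (2 * s)) * np.sin(omega * t + psi)
        dfdx = 1 + eps * (np.pi / (2 * s)) * np.cos(np.pi * x / (2 * s)) \
                   * np.sin(omega * t + psi)
        vx = -np.pi * A * np.sin(np.pi * f / s) * np.cos(np.pi * y / s) - mu * x
        vy = np.pi * A * np.cos(np.pi * f / s) * np.sin(np.pi * y / s) * dfdx - mu * y
        return [vx, vy]
    return velocity
```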
Let ${\cal W}$ denote an obstacle-free workspace with flow dynamics given by
(2). We assume a tessellation of ${\cal W}$ such that the boundaries of each
cell roughly corresponds to the stable/unstable manifolds or LCS curves
quantified by maximum FTLE ridges as shown in Fig. 2. In general, it may be
unreasonable to expect small resource constrained autonomous vehicles to be
able to track the LCS locations in real time. However, LCS boundary locations
can be determined using historical data, ocean model data, e.g., data provided
by the Navy Coastal Ocean Model (NCOM) databases, and/or data obtained a
priori using LCS tracking strategies similar to (Hsieh et al., 2012). This
information can then be used to obtain an LCS-based cell decomposition of
${\cal W}$. Fig. 2 shows two manual cell decompositions of the workspace where
the cell boundaries roughly correspond to maximum FLTE ridges. In this work,
we assume the tessellation of ${\cal W}$ is given and do not address the
problem of automatic tessellation of the workspace to achieve a decomposition
where cell boundaries correspond to LCS curves.
A tessellation of the workspace along boundaries characterized by maximum FTLE
ridges makes sense since they separate regions within the flow field that
exhibit distinct dynamic behavior and denote regions in the flow field where
more escape events may occur probabilistically (Forgoston et al., 2011). In
the time-independent case, these boundaries correspond to stable and unstable
manifolds of saddle points in the system. The manifolds can also be
characterized by maximum FTLE ridges where the FTLE is computed based on a
backward (attracting structures) or forward (repelling structures) integration
in time. Since the manifolds demarcate the basin boundaries separating the
distinct dynamical regions, they are also regions that are uncertain with
respect to velocity vectors within a neighborhood of the manifold. Therefore,
switching between regions in neighborhoods of the manifold is influenced both
by deterministic uncertainty as well as stochasticity due to external noise.
Figure 2: Two examples of LCS-based cell decomposition of the region of
interest assuming a flow field given by (2). These cell decompositions were
performed manually. (a) A $4\times 4$ time-independent grid of gyres with
$A=0.5$, $\mu=0.005$, $\varepsilon=0$, $\psi=0$, $I=35$, and $s=20$. The
stable and unstable manifolds of each saddle point in the system are shown by
the black arrows. (b) An FTLE-based cell decomposition for a time-dependent
double-gyre system with the same parameters as Fig. 1.
Given an FTLE-based cell decomposition of ${\cal W}$, let ${\cal G}=({\cal
V},{\cal E})$ denote an undirected graph whose vertex set ${\cal
V}=\\{V_{1},\ldots,V_{M}\\}$ represents the collection of FTLE-derived cells
in ${\cal W}$. An edge $e_{ij}$ exists in the set ${\cal E}$ if cells $V_{i}$
and $V_{j}$ share a physical boundary or are physically adjacent. In other
words, ${\cal G}$ serves as a roadmap for ${\cal W}$. For the case shown in
Fig. 2, adjacency of an interior cell is defined based on four neighborhoods.
Let $N_{i}$ denote the number of AUVs or mobile sensing resources/robots
within $V_{i}$. The objective is to synthesize agent-level control policies,
or $\mathbf{u}_{k}$, to achieve and maintain a desired distribution of the $N$
agents across the $M$ regions, denoted by
$\mathbf{\bar{N}}=[\bar{N}_{1},\ldots,\bar{N}_{M}]^{T}$, in an environment
whose dynamics are given by (2).
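As a concrete illustration of the roadmap ${\cal G}$, the sketch below builds the vertex and edge sets for a rectangular grid of cells with 4-neighborhood adjacency, as in Fig. 2(a); the grid dimensions are illustrative, and the construction is not tied to any particular FTLE-based tessellation.

```python
# Sketch: the roadmap graph G = (V, E) for a grid of gyre cells, with edges
# between physically adjacent cells (4-neighborhood for interior cells).
def grid_roadmap(rows, cols):
    """Return vertex list and edge set for a rows x cols cell decomposition."""
    cell = lambda r, c: r * cols + c
    vertices = list(range(rows * cols))
    edges = set()
    for r in range(rows):
        for c in range(cols):
            if r + 1 < rows:
                edges.add((cell(r, c), cell(r + 1, c)))   # vertical neighbor
            if c + 1 < cols:
                edges.add((cell(r, c), cell(r, c + 1)))   # horizontal neighbor
    return vertices, edges

V, E = grid_roadmap(4, 4)   # e.g. the 4x4 gyre grid of Fig. 2(a)
```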
We assume that robots are given a map of the environment, ${\cal G}$, and
$\mathbf{\bar{N}}$. Since the tessellation of ${\cal W}$ is given, the LCS
locations corresponding to the boundaries of each $V_{i}$ are also known a
priori. Additionally, we assume robots co-located within the same $V_{i}$ have
the ability to communicate with each other. This makes sense since coherent
structures can act as transport barriers and prevent underwater acoustic wave
propagation (Wang et al., 2009; Rypina et al., 2011). Finally, we assume
individual robots have the ability to localize within the workspace, i.e.,
determine their own positions in the workspace. These assumptions are
necessary to enable the development of a prioritization scheme within each
$V_{i}$ based on an individual robot’s escape likelihoods in order to achieve
the desired allocation. The prioritization scheme will allow robots to
minimize the control effort expenditure as they move within the set ${\cal
V}$. We describe the methodology in the following section.
## 2 Methodology
We propose to leverage the environmental dynamics and the inherent
environmental noise to synthesize energy-efficient control policies for a team
of mobile sensing resources/robots to maintain the desired allocation in
${\cal W}$ at all times. We assume each robot has a map of the environment. In
our case, this translates to providing the robots the locations of LCS
boundaries that define each $V_{i}$ in ${\cal G}$. Since LCS curves separate
${\cal W}$ into regions with distinct flow dynamics, this becomes analogous to
providing autonomous ground or aerial vehicles a map of the environment which
is often obtained a priori. In a fluidic environment, the map consists of the
locations of the maximum FTLE ridges computed from data and refined,
potentially in real-time, using a strategy similar to the one found in (Hsieh
et al., 2012). Thus, we assume each robot has a map of the environment and has
the ability to determine the direction it is moving in within the global
coordinate frame, i.e., the ability to localize.
### 2.1 Controller Synthesis
Consider a team of $N$ robots initially distributed across $M$ gyres/cells.
Since the objective is to achieve a desired allocation of $\mathbf{\bar{N}}$
at all times, the proposed strategy will consist of two phases: an auction to
determine which robots within each $V_{i}$ should be tasked to leave/stay and
an actuation phase where robots execute the appropriate leave/stay controller.
#### 2.1.1 Auction Phase
The purpose of the auction phase is to determine whether
$N_{i}(t)>\bar{N}_{i}$ and to assign the appropriate actuation strategy for
each robot within $V_{i}$. Let $Q_{i}$ denote an ordered set whose elements
provide robot identities that are arranged from highest escape likelihoods to
lowest escape likelihoods from $V_{i}$.
In general, to first order we assume a geometric measure whereby the escape
likelihood of any particle within $V_{i}$ increases as it approaches the
boundary of $V_{i}$, denoted as $\partial V_{i}$ (Forgoston et al., 2011).
Given ${\cal W}$, with dynamics given by (2), consider the case when
$\varepsilon=0$ and $I\neq 0$, i.e., the case when the fluid dynamics is time-
independent in the presence of noise. The boundaries between each $V_{i}$ are
given by the stable and unstable manifolds of the saddle points within ${\cal
W}$ as shown in Fig. 2. While there exists a stable attractor in each $V_{i}$
when $I=0$, the presence of noise means that robots originating in $V_{i}$
have a non-zero probability of landing in a neighboring gyre $V_{j}$ where
$e_{ij}\in{\cal E}$. Here, we assume that robots experience the same escape
likelihoods in each gyre/cell and assume that $P_{k}(\neg i|i)$, the
probability that a robot escapes from region $i$ to an adjacent region, can be
estimated based on a robot’s proximity to a cell boundary with some assumption
of the environmental noise profile (Forgoston et al., 2011).
Let $d(\mathbf{q}_{k},\partial V_{i})$ denote the distance between a robot $k$
located in $V_{i}$ and the boundary of $V_{i}$. We define the set
$Q_{i}=\\{k_{1},\ldots,k_{N_{i}}\\}$ such that $d(q_{k_{1}},\partial
V_{i})\leq d(q_{k_{2}},\partial V_{i})\leq\ldots\leq d(q_{k_{N_{i}}},\partial
V_{i})$. The set $Q_{i}$ provides the prioritization scheme for tasking robots
within $V_{i}$ to leave if $N_{i}(t)>\bar{N}_{i}$. The assumption is that
robots with higher escape likelihoods are more likely to be “pushed” out of
$V_{i}$ by the environment dynamics and will not have to exert as much control
effort when moving to another cell, minimizing the overall control effort
required by the team.
In general, a simple auction scheme can be used to determine $Q_{i}$ in a
distributed fashion by the robots in $V_{i}$ (Dias et al., 2006). If
$N_{i}(t)>\bar{N}_{i}$, then the first $N_{i}-\bar{N}_{i}$ elements of
$Q_{i}$, denoted by $Q_{i_{L}}\subset Q_{i}$, are tasked to leave $V_{i}$. The
number of robots in $V_{i}$ can be established in a distributed manner in a
similar fashion. The auction can be executed periodically at some frequency
$1/T_{a}$ where $T_{a}$ denotes the length of time between each auction and
should be greater than the relaxation time of the AUV/ASV dynamics.
#### 2.1.2 Actuation Phase
For the actuation phase, individual robots execute their assigned controllers
depending on whether they were tasked to stay or leave during the auction
phase. As such, the individual robot control strategy is a hybrid control
policy consisting of three discrete states: a leave state, $U_{L}$, and a stay
state, $U_{S}$, which is further divided into an active stay state, $U_{S_{A}}$,
and a passive stay state, $U_{S_{P}}$. Robots who are tasked to leave will execute $U_{L}$ until they
have left $V_{i}$ or until they have been once again tasked to stay. Robots
who are tasked to stay will execute $U_{S_{P}}$ if $d(q_{k},\partial
V_{i})>d_{min}$ and $U_{S_{A}}$ otherwise. In other words, if a robot’s
distance to the cell boundary is below some minimum threshold distance
$d_{min}$, then the robot will actuate and move itself away from $\partial
V_{i}$. If a robot’s distance to $\partial V_{i}$ is above $d_{min}$, then the
robot will execute no control actions. Robots will execute $U_{S_{A}}$ until
they have reached a state where $d(q_{k},\partial V_{i})>d_{min}$ or until
they are tasked to leave at a later assignment round. Similarly, robots will
execute $U_{S_{P}}$ until either $d(q_{k},\partial V_{i})\leq d_{min}$ or they
are tasked to leave. The hybrid robot control policy is given by
$\displaystyle U_{L}(\mathbf{q}_{k})=\bm{\omega}_{i}\times c\frac{F(\mathbf{q}_{k})}{\|F(\mathbf{q}_{k})\|},$ (3a)
$\displaystyle U_{S_{A}}(\mathbf{q}_{k})=-\bm{\omega}_{i}\times c\frac{F(\mathbf{q}_{k})}{\|F(\mathbf{q}_{k})\|},$ (3b)
$\displaystyle U_{S_{P}}(\mathbf{q}_{k})=0.$ (3c)
Here, $\bm{\omega}_{i}=[0,\,0,\,1]^{T}$ denotes counterclockwise rotation
with respect to the centroid of $V_{i}$, with clockwise rotation denoted by
its negative, and $c$ is a constant that sets the linear speed of the
robots. The hybrid control policy generates a control input perpendicular to
the velocity of the fluid as measured by robot $k$ (the inertial velocity of
the fluid can be computed from the robot’s flow-relative velocity and
position) and pushes the robot towards $\partial V_{i}$ if $U_{L}$ is
selected, away from $\partial V_{i}$ if $U_{S_{A}}$ is selected, or results in
no control input if $U_{S_{P}}$ is selected. The hybrid control policy is
summarized by Algorithm 1 and Fig. 3.
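To make (3) concrete, the following is a minimal Python sketch of the per-robot actuation laws; the flow-sampling helper `flow_at` and the value of the speed constant are illustrative assumptions, and the cross product with $\bm{\omega}_{i}=[0,\,0,\,1]^{T}$ is written out as a 90-degree in-plane rotation of the measured flow direction.

```python
import numpy as np

def hybrid_control(q_k, state, flow_at, c=0.05):
    """Sketch of the hybrid policy (3).

    q_k     : planar position of robot k (length-2 array)
    state   : 'leave', 'stay_active', or 'stay_passive'
    flow_at : assumed helper returning the local fluid velocity F(q_k)
    c       : linear speed constant from (3); value chosen for illustration
    """
    if state == 'stay_passive':
        return np.zeros(2)                      # U_{S_P}: no actuation (3c)
    F = flow_at(q_k)
    F_hat = F / (np.linalg.norm(F) + 1e-12)     # unit flow direction
    # omega_i x F_hat with omega_i = [0, 0, 1]^T is a 90-degree
    # counterclockwise rotation of F_hat in the plane.
    u = c * np.array([-F_hat[1], F_hat[0]])
    return u if state == 'leave' else -u        # U_L (3a) or U_{S_A} (3b)
```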
Algorithm 1 Auction Phase
1: if $ElapsedTime==T_{a}$ then
2: Determine $N_{i}(t)$ and $Q_{i}$
3: $\forall k\in Q_{i}$
4: if $N_{i}(t)>\bar{N}_{i}$ then
5: if $k\in Q_{i_{L}}$ then
6: $u_{k}\leftarrow U_{L}$
7: else
8: $u_{k}\leftarrow U_{S}$
9: end if
10: else
11: $u_{k}\leftarrow U_{S}$
12: end if
13: end if
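A minimal, centralized stand-in for the per-cell assignment of Algorithm 1 is sketched below; in the strategy above the ordering $Q_{i}$ and the count $N_{i}(t)$ are obtained in a distributed fashion through the auction, so the function and argument names here are illustrative assumptions.

```python
def auction_phase(robots_in_cell, N_bar_i, dist_to_boundary):
    """Assign 'leave'/'stay' to the robots currently in one cell V_i.

    robots_in_cell   : list of robot ids located in V_i
    N_bar_i          : desired number of robots for V_i
    dist_to_boundary : dict id -> d(q_k, boundary of V_i)
    """
    # Q_i: ids ordered from highest to lowest escape likelihood, i.e.
    # from smallest to largest distance to the cell boundary.
    Q_i = sorted(robots_in_cell, key=lambda k: dist_to_boundary[k])
    surplus = max(len(Q_i) - N_bar_i, 0)
    Q_iL = set(Q_i[:surplus])                   # robots tasked to leave
    return {k: ('leave' if k in Q_iL else 'stay') for k in Q_i}
```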
In general, the Auction Phase is executed at a frequency of $1/T_{a}$ which
means robots also switch between controller states at a frequency of
$1/T_{a}$. To further reduce actuation efforts exerted by each robot, it is
possible to limit a robot’s actuation time to a period of time $T_{c}\leq
T_{a}$. Such a scheme may prolong the amount of time required for the team to
achieve the desired allocation, but may result in significant energy-
efficiency gains. We further analyze the proposed strategy in the following
sections.
Figure 3: Schematic of the single-robot hybrid control policy.
## 3 Analysis
In this section, we discuss the theoretical feasibility of the proposed
distributed allocation strategy. Instead of the traditional agent-based
analysis, we employ a macroscopic analysis of the proposed distributed control
strategy given by Algorithm 1 and (3). We first note that while the single
robot controller shown in Fig. 3 results in an agent-level stochastic control
policy, the ensemble dynamics of a team of $N$ robots each executing the same
hybrid control strategy can be modeled using a polynomial stochastic hybrid
system (pSHS). The advantage of this approach is that it allows the use of
moment closure techniques to model the time evolution of the distribution of
the team across the various cells. This, in turn, enables the analysis of the
stability of the closed-loop ensemble dynamics. The technique was previously
illustrated in (Mather and Hsieh, 2011). For completeness, we briefly
summarize the approach here and refer the interested reader to (Mather and
Hsieh, 2011) for further details.
The system state is given by
$\mathbf{N}(t)=\left[N_{1}(t),\ldots,N_{M}(t)\right]^{T}$. As the team
distributes across the $M$ regions, the rate in which robots leave a given
$V_{i}$ can be modeled using constant transition rates. For every edge
$e_{ij}\in{\cal E}$, we assign a constant $a_{ij}>0$ such that $a_{ij}$ gives
the transition probability per unit time for a robot from $V_{i}$ to land in
$V_{j}$. Different from Mather and Hsieh (2011), the $a_{ij}$s are a function
of the parameters $c$, $T_{c}$, and $T_{a}$ of the individual robot control
policy (3), the dynamics of the surrounding fluid, and the inherent noise in
the environment. Furthermore, $a_{ij}$ is a macroscopic description of the
system and thus a parameter of the ensemble dynamics rather than the agent-
based system. As such, the macroscopic analysis is a description of the
steady-state behavior of the system and becomes exact as $N$ approaches
infinity.
Given ${\cal G}$ and the set of $a_{ij}$s, we model the ensemble dynamics as a
set of transition rules of the form:
$\displaystyle N_{i}\xrightarrow{a_{ij}}N_{j}\quad\textrm{$\forall$
$e_{ij}\in{\cal E}$}.$ (4)
The above expression represents a stochastic transition rule with $a_{ij}$ as
the per unit transition rate and $N_{i}(t)$ and $N_{j}(t)$ as discrete random
variables. In the robotics setting, (4) implies that robots at $V_{i}$ will
move to $V_{j}$ with a rate of $a_{ij}N_{i}$. We assume the ensemble dynamics
is Markovian and note that in general $a_{ij}\neq a_{ji}$ and $a_{ij}$ encodes
the inverse of the average time a robot spends in $V_{i}$.
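The transition rules (4) can be simulated at the agent-count level with a standard Gillespie-style procedure; the sketch below is an illustration under assumed rate values, not the authors' simulation code.

```python
import random

def simulate_ensemble(N0, rates, t_end):
    """Stochastic simulation of the rules (4).

    N0    : dict cell -> initial robot count
    rates : dict (i, j) -> a_ij for every edge e_ij in E
    """
    N, t, history = dict(N0), 0.0, [(0.0, dict(N0))]
    while t < t_end:
        propensities = {e: a * N[e[0]] for e, a in rates.items()}
        total = sum(propensities.values())
        if total == 0:
            break
        t += random.expovariate(total)          # waiting time to the next transition
        r, acc = random.uniform(0, total), 0.0
        for (i, j), w in propensities.items():  # pick an edge with probability a_ij N_i / total
            acc += w
            if r <= acc:
                N[i] -= 1
                N[j] += 1
                break
        history.append((t, dict(N)))
    return history
```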
Given (4) and employing the extended generator we can obtain the following
description of the moment dynamics of the system:
$\displaystyle\tfrac{d}{dt}\mathrm{E}[\mathbf{N}]$
$\displaystyle=\mathbf{A}\mathrm{E}[\mathbf{N}]$ (5)
where $[\mathbf{A}]_{ij}=a_{ji}$ and
$[\mathbf{A}]_{ii}=-\sum_{(i,j)\in\cal{E}}a_{ij}$ (Mather and Hsieh, 2011). It
is important to note that $\mathbf{A}$ is a Markov process matrix and thus is
negative semidefinite. This, coupled with the conservation constraint
$\sum_{i}N_{i}=N$ leads to exponential stability of the system given by (5)
(Klavins, 2010).
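As a sketch of (5), the matrix $\mathbf{A}$ can be assembled directly from the $a_{ij}$s and the mean dynamics integrated numerically; the two-cell rates below are illustrative values only.

```python
import numpy as np

def transition_matrix(M, rates):
    """Build A from (5): [A]_ij = a_ji and [A]_ii = -sum_j a_ij over e_ij in E."""
    A = np.zeros((M, M))
    for (i, j), a in rates.items():
        A[j, i] += a          # inflow into V_j from V_i
        A[i, i] -= a          # outflow from V_i
    return A

rates = {(0, 1): 0.2, (1, 0): 0.1}             # assumed rates for a two-cell example
A = transition_matrix(2, rates)
EN = np.array([400.0, 100.0])                  # initial mean populations
for _ in range(10000):                         # forward-Euler integration of dE[N]/dt = A E[N]
    EN = EN + 1e-2 * A @ EN
print(EN, EN.sum())                            # converges to the stationary split; the total is conserved
```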
In this work, we note that $a_{ij}$s can be determined experimentally after
the selection of the various parameters in the distributed control strategy.
While the $a_{ij}$s can be chosen to enable the team of robots to autonomously
maintain the desired steady-state distribution (Hsieh et al., 2008),
extraction of the control parameters from user specified transition rates is a
direction for future work. Thus, using the technique described by Mather and
Hsieh (2011), the following result can be stated for our current distributed
control strategy:
###### Theorem 1
Given a team of $N$ robots with kinematics given by (1) and $\mathbf{v}_{f}$
given by (2), the distributed allocation strategy given by Algorithm 1 and
(3) is stable at the ensemble level and achieves the desired allocation
$\mathbf{\bar{N}}$.
For the details of the model development and the proof, we refer the
interested reader to (Mather and Hsieh, 2011).
## 4 Simulation Results
We validate the proposed control strategy described by Algorithm 1 and (3)
using three different flow fields:
1. 1.
the time invariant wind driven multi-gyre model given by (2) with
$\varepsilon=0$,
2. 2.
the time varying wind driven multi-gyre model given by (2) for a range of
$\omega\neq 0$ and $\varepsilon\neq 0$ values, and
3. 3.
an experimentally generated flow field using different values of $T_{a}$ and
$c$ in (3).
We refer to each of these as Cases 1, 2, and 3 respectively. Two metrics are
used to compare the three cases. The first is the mean vehicle control effort
to indicate the energy expenditure of each robot. The second is the population
root mean square error (RMSE) of the resulting robot population distribution
with respect to the desired population. The RMSE is used to show effectiveness
of the control policy in achieving the desired distribution.
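A possible implementation of the population RMSE metric is sketched below; the exact normalization used to report the values in this paper is not spelled out in the text, so a per-cell root mean square error is assumed.

```python
import numpy as np

def population_rmse(N_t, N_bar):
    """Root mean square error of the achieved allocation N(t) w.r.t. the desired N_bar."""
    N_t, N_bar = np.asarray(N_t, float), np.asarray(N_bar, float)
    return float(np.sqrt(np.mean((N_t - N_bar) ** 2)))
```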
All cases assume a team of $N=500$ robots. The robots are randomly distributed
across the set of $M$ gyres in ${\cal W}$. For the theoretical models, the
workspace ${\cal W}$ consists of a $4\times 4$ set of gyres, and each
$V_{i}\in{\cal V}$ corresponds to a gyre as shown in Fig. 2. We considered
three sets of desired distributions, namely a Ring formation, a Block
formation, and an L-shaped formation as shown in Fig. 4. The experimental flow
data had a set of $3\times 4$ regions. The inner two cells comprised ${\cal
W}$, while the complement, ${\cal W}^{C}$, consisted of the remaining cells.
This designation of cells helped to isolate the system from boundary effects,
and allowed the robots to escape the center gyres in all directions. The
desired pattern for this experimental data set was for all the agents to be
contained within a single cell. Each of the three cases was simulated a
minimum of five times and for a long enough period of time until steady-state
was reached.
Figure 4: Three desired distributions of the team of $N=500$ mobile sensing
resources/robots. (a) A Ring pattern formation, (b) a Block pattern formation,
and (c) an L-shaped pattern formation. Each box represents a gyre and the
number designates the desired number of robots contained within each gyre.
### 4.1 Case I: Time-Invariant Flows
For time-invariant flows, we assume $\varepsilon=0$, $A=0.5$, $s=20$,
$\mu=0.005$, and $I=35$ in (2). For the ring pattern, we consider the case
when the actuation was applied for $T_{c}=fT_{a}$ amount of time where
$f=0.1,0.2,\ldots,1.0$, and $T_{a}=10$. For the Block and L-Shape patterns, we
considered the cases when $T_{c}=0.5T_{a}$ and $T_{c}=T_{a}$. The final
population distributions of the team for the case with no controls and the
cases with controls for each of the patterns are shown in Fig. 5.
Figure 5: Histogram of the final allocations in the time-invariant flow field
for the swarm of (a) passive robots exerting no controls and robots exerting
control forming the (b) Ring pattern with $T_{c}=0.8T_{a}$ at $t=450$, (c)
Block pattern with $T_{c}=T_{a}$ at $t=450$, and (d) L-shape pattern with
$T_{c}=0.5T_{a}$ at $t=450$.
We compared our results to a baseline deterministic allocation strategy where
the desired allocation is pre-computed and individual robots follow fixed
trajectories when navigating from one gyre to another. For this baseline case,
robots travel in straight lines at fixed speeds using a simple PID trajectory
follower and treat the surrounding fluid dynamics as an external disturbance
source. The RMSE results for all patterns are summarized in Table 1 and Fig.
6. The cumulative control effort per agent is shown in Fig. 7. From Fig. 6, we
see that our proposed control strategy performs comparably to the baseline
case, especially when $T_{c}=T_{a}=10$ $sec$. In fact, even when $T_{c}<T_{a}$,
our proposed strategy achieves the desired distribution. The advantage of the
proposed approach lies in the significant energy gains when compared to the
baseline case, especially when $T_{c}<T_{a}$, as seen in Fig. 7. We omit the
cumulative control effort plots for the other cases since they are similar to
Fig. 7.
Figure 6: Comparison of the population RMSE in the time-invariant flow for the
(a) Ring formation, (b) the Block formation, and (c) the L-shape formation for
different $T_{c}$, and for the PID control baseline controller in the Ring
case with time-invariant flows. Figure 7: Comparison of the total control
effort for the Ring pattern for different $T_{c}$ with the baseline controller
for time-invariant flows.
In time-invariant flows, we note that for large enough $T_{c}$, our proposed
distributed control strategy performs comparably to the baseline controller
both in terms of steady-state error and convergence time. As $T_{c}$
decreases, less and less control effort is exerted and thus it becomes more
and more difficult for the team to achieve the desired allocation. This is
confirmed by both the RMSE results summarized in Table 1 and Fig. 6.
Furthermore, while the proposed control strategy does not beat the baseline
strategy as seen in Fig. 6, it does come extremely close to matching the
baseline strategy’s performance, while requiring much less control effort as
shown in Fig. 7, even at high duty cycles, i.e., when $T_{c}/T_{a}>0.5$.
More interestingly, we note that executing the proposed control strategy at
100% duty cycle, i.e., when $T_{c}=T_{a}$, in time-invariant flows did not
always result in better performance. This is true for the cases when
$T_{c}=0.5T_{a}=5$ for the Block and L-shaped patterns shown in Fig. 6.
In these cases, less control effort yielded improved performance. However,
further studies are required to determine the critical value of $T_{c}$ when
less control yields better overall performance. In time-invariant flows, our
proposed controller can more accurately match the desired pattern while using
approximately $20\%$ less effort when compared to the baseline controller.
### 4.2 Case II: Time-Varying Flows
For the time-varying, periodic flow, we assume $A=0.5$, $s=20$, $\mu=0.005$,
$I=35$, and $\psi=0$ in (2). Additionally, we considered the performance of
our control strategy for different values of $\omega$ and $\varepsilon$ with
$T_{a}=10$ and $T_{c}=8$ for the Ring formation and $T_{c}=5$ for the L-shaped
formation. In all these simulations, we use the FTLE ridges obtained for the
time-independent case to define the boundaries of each $V_{i}$. The final
population distribution of the team for the case with no controls and the
cases with controls for the Ring and L-shape patterns are shown in Fig. 8. The
final population RMSE for the cases with different $\omega$ and $\varepsilon$
values for the Ring and L-shape patterns are shown in Fig. 9. These figures
show the average of $10$ runs for each $\omega$ and $\varepsilon$ pair. In
each of these runs, the swarm of mobile sensors was initially randomly
distributed within the grid of $4\times 4$ cells. Finally, Fig. 10 shows the
population RMSE as a function of time for the Ring and L-shape patterns
respectively.
In time-varying, periodic flows we note that our proposed control strategy is
able to achieve the desired final allocation even at 80% duty cycle, i.e.,
$T_{c}=0.8T_{a}$. This is supported by the results shown in Fig. 9. In
particular, we note that the proposed control strategy performs quite well for
a range of $\omega$ and $\varepsilon$ parameters for both the Ring and L-shape
patterns. While the variation in final RMSE values for the Ring pattern is
significantly lower than the L-shape pattern, the variations in final RMSE
values for the L-shape are all within 10% of the total swarm population.
Figure 8: Histogram of the final allocations in periodic flows, with
parameters of $\omega=\frac{5\pi}{40}$ and $\varepsilon=5$, for the swarm of
(a) passive robots exerting no controls and robots exerting control forming
the (b) Ring pattern with $T_{c}=0.8T_{a}$ at $t=450$, and (c) L-Shape pattern
with $T_{c}=0.5T_{a}$ at $t=450$.
Figure 9: Final population RMSE for different values of $\omega$ and
$\varepsilon$ for (a) the Ring formation and (b) the L-shaped formation.
Figure 10: Comparison of RMSE over time for select $\omega$ and $\varepsilon$
pairs for the (a) Ring and (b) L-shaped patterns in periodic flows.
### 4.3 Case III: Experimental Flows
Using our $0.6m\times 0.6m\times 0.3m$ experimental flow tank equipped with a
$4\times 3$ grid of driving cylinders, we generated a time-invariant multi-
gyre flow field to use in simulation. Particle image velocimetry (PIV) was
used to extract the surface flows at $7.5$ $Hz$ resulting in a $39\times 39$
grid of velocity measurements. The data was collected for a total of $60$
$sec$. Figure 11 shows the top view of our experimental testbed and the
resulting flow field obtained via PIV. Further details regarding the
experimental testbed can be found in (Michini et al., 2013). Using this data,
we simulated a swarm of $500$ mobile sensors executing the control strategy
given by (3).
Figure 11: (a) Experimental setup of flow tank with $12$ driven cylinders. (b)
Flow field for image (a) obtained via particle image velocimetry (PIV).
To determine the appropriate tessellation of the workspace, we used the LCS
ridges obtained for the temporal mean of the velocity field. This resulted in
the discretization of the space into a grid of $4\times 3$ cells. Each cell
corresponds to a single gyre as shown in Fig. 12. The cells of primary concern
are the central pair; the remaining boundary cells were not used, to avoid
boundary effects and to allow robots to escape the center gyres in all
directions. The robots were initially uniformly distributed across the two
center cells and all $500$ robots were tasked to stay within the upper center
cell. When no control effort is exerted by the robots, the final population
distribution achieved by the team is shown in Fig. 13(a). With controls, the
final population distribution is shown in Fig. 13(b). The control strategy was
applied assuming $T_{c}/T_{a}=0.8$. The final RMSE for different values of $c$
in (3) and $T_{a}$ is shown in Fig. 14(a), and the RMSE as a function of time for
different values of $c$ and $T_{a}$ is shown in Fig. 14(b).
The results obtained using the experimental flow field show that the proposed
control strategy has the potential to be effective in realistic flows.
However, the resulting performance will require good matching between the
amount of control effort a vehicle can realistically exert, the frequency at
which the auctions occur within a cell, and the time scales of the
environmental dynamics, as shown in Fig. 14. This is an area for
future investigation.
Figure 12: FTLE field for the temporal mean of the experimental velocity data.
The field is discretized into a grid of $4\times 3$ cells whose boundaries are
shown in black.
Figure 13: Population distribution for a swarm of $500$ mobile sensors over a
period of $60$ $sec$ (a) with no controls, i.e., passive, and (b) with
controls with $T_{c}=0.8T_{a}$.
Figure 14: (a) Final RMSE for different values of $c$ and $T_{a}$ using the
experimental flow field. $T_{c}/T_{a}=0.8$ is kept constant throughout. (b)
RMSE over time for select $c$ and $T_{a}$ parameters on an experimental flow
field. The duty cycle $T_{c}/T_{a}=0.8$ is kept constant throughout.
## 5 Conclusions and Future Outlook
In this work, we presented the development of a distributed hybrid control
strategy for a team of robots to maintain a desired spatial distribution in a
stochastic geophysical fluid environment. We assumed robots have a map of the
workspace which in the fluid setting is akin to having some estimate of the
global fluid dynamics. This can be achieved by knowing the locations of the
material lines within the flow field that separate regions with distinct
dynamics. Using this knowledge, we leverage the surrounding fluid dynamics and
inherent environmental noise to synthesize energy efficient control strategies
to achieve a distributed allocation of the team to specific regions in the
workspace. Our initial results show that using such a strategy can yield
similar performance to deterministic approaches that do not explicitly account
for the impact of the fluid dynamics while reducing the control effort
required by the team.
For future work we are interested in using actual ocean flow data to further
evaluate our distributed allocation strategy in the presence of jets and
eddies (Rogerson et al., 1999; Miller et al., 2002; Kuznetsov et al., 2002;
Mancho et al., 2008; Branicki et al., 2011; Mendoza and Mancho, 2012). We also
are interested in using more complicated flow models including a bounded
single-layer PDE ocean model (Forgoston et al., 2011), a multi-layer PDE ocean
model (Wang et al., 2009; Lolla et al., 2012), and realistic 2D and 3D
unbounded flow models provided by the Navy Coastal Ocean Model (NCOM)
database. Particularly, we are interested in extending our strategy to non-
periodic, time-varying flows. In addition, we are currently developing an
experimental testbed capable of generating complex 2D flows in a controlled
laboratory setting. The objective is to be able to evaluate the proposed
control strategy using experimentally generated flow field data whose dynamics
are similar to realistic ocean flows. Finally, since our proposed strategy
requires robots to have some estimate of the global fluid dynamics, another
immediate direction for future work is to determine how well one can estimate
the fluid dynamics given knowledge of the locations of Lagrangian coherent
structures (LCS) in the flow field.
###### Acknowledgements.
KM and MAH were supported by the Office of Naval Research (ONR) Award No.
N000141211019. EF was supported by the U.S. Naval Research Laboratory (NRL)
Award No. N0017310-2-C007. IBS was supported by ONR grant N0001412WX20083 and
the NRL Base Research Program N0001412WX30002. The authors additionally
acknowledge support by the ICMAT Severo Ochoa project SEV-2011-0087.
## References
* Berman et al. (2008) Berman, S., Halasz, A., Hsieh, M. A., and Kumar, V.: Navigation-based Optimization of Stochastic Deployment Strategies for a Robot Swarm to Multiple Sites, in: Proc. of the 47th IEEE Conference on Decision and Control, Cancun, Mexico, 2008.
* Branicki and Wiggins (2010) Branicki, M. and Wiggins, S.: Finite-time Lagrangian transport analysis: Stable and unstable manifolds of hyperbolic trajectories and finite-time Lyapunov exponents, Nonlinear Proc. Geoph., 17, 1–36, 2010.
* Branicki et al. (2011) Branicki, M., Mancho, A. M., and Wiggins, S.: A Lagrangian description of transport associated with a Front-Eddy interaction: application to data from the North-Western Mediterranean Sea, Physica D, 240, 282–304, 2011.
* Caron et al. (2008) Caron, D., Stauffer, B., Moorthi, S., Singh, A., Batalin, M., Graham, E., Hansen, M., Kaiser, W., Das, J., de Menezes Pereira, A., A. Dhariwal, B. Z., Oberg, C., and Sukhatme, G.: Macro- to fine-scale spatial and temporal distributions and dynamics of phytoplankton and their environmental driving forces in a small subalpine lake in southern California, USA, Journal of Limnology and Oceanography, 53, 2333–2349, 2008.
* Chen et al. (2008) Chen, V., Batalin, M., Kaiser, W., and Sukhatme, G.: Towards Spatial and Semantic Mapping in Aquatic Environments, in: IEEE International Conference on Robotics and Automation, pp. 629–636, Pasadena, CA, 2008.
* Dahl et al. (2006) Dahl, T. S., Mataric̀, M. J., and Sukhatme, G. S.: A machine learning method for improving task allocation in distributed multi-robot transportation, in: Understanding Complex Systems: Science Meets Technology, edited by Braha, D., Minai, A., and Bar-Yam, Y., pp. 307–337, Springer, Berlin, Germany, 2006.
* Das et al. (2011) Das, J., Py, F., Maughan, T., O’Reilly, T., Messie, M., J. Ryan, G. S., and Rajan, K.: Simultaneous Tracking and Sampling of Dynamic Oceanographic Features with AUVs and Drifters, Submitted to International Journal of Robotics Research, 2011.
* DeVries and Paley (2011) DeVries, L. and Paley, D. A.: Multi-vehicle control in a strong flowfield with application to hurricane sampling, Accepted for publication in the AIAA J. Guidance, Control, and Dynamics, 2011.
* Dias et al. (2006) Dias, M. B., Zlot, R. M., Kalra, N., and Stentz, A. T.: Market-based multirobot coordination: a survey and analysis, Proceedings of the IEEE, 94, 1257–1270, 2006\.
* Forgoston et al. (2011) Forgoston, E., Billings, L., Yecko, P., and Schwartz, I. B.: Set-based corral control in stochastic dynamical systems: Making almost invariant sets more invariant, Chaos, 21, 2011.
* Gerkey and Mataric (2002) Gerkey, B. P. and Mataric, M. J.: Sold!: Auction methods for multi-robot control, IEEE Transactions on Robotics & Automation, 18, 758–768, 2002.
* Gerkey and Mataric (2004) Gerkey, B. P. and Mataric, M. J.: A Formal Framework for the Study of Task Allocation in Multi-Robot Systems, International Journal of Robotics Research, 23, 939–954, 2004.
* Haller (2000) Haller, G.: Finding finite-time invariant manifolds in two-dimensional velocity fields, Chaos, 10, 99–108, 2000.
* Haller (2001) Haller, G.: Distinguished material surfaces and coherent structures in three-dimensional fluid flows, Physica D, 149, 248–277, 2001.
* Haller (2002) Haller, G.: Lagrangian coherent structures from approximate velocity data, Phys. Fluids, 14, 1851–1861, 2002.
* Haller (2011) Haller, G.: A variational theory of hyperbolic Lagrangian Coherent Structures, Physica D, 240, 574–598, 2011.
* Haller and Yuan (2000) Haller, G. and Yuan, G.: Lagrangian coherent structures and mixing in two-dimensional turbulence, Phys. D, 147, 352–370, 10.1016/S0167-2789(00)00142-1, URL http://dl.acm.org/citation.cfm?id=366463.366505, 2000.
* Hsieh et al. (2008) Hsieh, M. A., Halasz, A., Berman, S., and Kumar, V.: Biologically inspired redistribution of a swarm of robots among multiple sites, Swarm Intelligence, 2008\.
* Hsieh et al. (2012) Hsieh, M. A., Forgoston, E., Mather, T. W., and Schwartz, I. B.: Robotic Manifold Tracking of Coherent Structures in Flows, in: in the Proc. of the IEEE International Conference on Robotics and Automation, Minneapolis, MN USA, 2012.
* Inanc et al. (2005) Inanc, T., Shadden, S., and Marsden, J.: Optimal trajectory generation in ocean flows, in: American Control Conference, 2005. Proceedings of the 2005, pp. 674 – 679, 10.1109/ACC.2005.1470035, 2005.
* Klavins (2010) Klavins, E.: Proportional-Integral Control of Stochastic Gene Regulatory Networks, in: Proc. of the 2010 IEEE Conf. on Decision and Control (CDC2010), Atlanta, GA USA, 2010.
* Kuznetsov et al. (2002) Kuznetsov, L., Toner, M., Kirwan, A. D., and Jones, C.: Current and adjacent rings delineated by Lagrangian analysis of the near-surface flow, J. Mar. Res., 60, 405–429, 2002.
* Lekien et al. (2007) Lekien, F., Shadden, S. C., and Marsden, J. E.: Lagrangian coherent structures in $n$-dimensional systems, J. Math. Phys., 48, 065 404, 2007.
* Lolla et al. (2012) Lolla, T., Ueckermann, M. P., Haley, P., and Lermusiaux, P. F. J.: Path Planning in Time Dependent Flow Fields using Level Set Methods, in: in the Proc. IEEE International Conference on Robotics and Automation, Minneapolis, MN USA, 2012.
* Lynch et al. (2008) Lynch, K. M., Schwartz, I. B., Yang, P., and Freeman, R. A.: Decentralized environmental modeling by mobile sensor networks, IEEE Trans. Robotics, 24, 710–724, 2008.
* Mancho et al. (2008) Mancho, A. M., Hernández-García, E., Small, D., and Wiggins, S.: Lagrangian Transport through an Ocean Front in the Northwestern Mediterranean Sea, J. Phys. Oceanogr., 38, 1222–1237, 2008.
* Mather and Hsieh (2011) Mather, T. W. and Hsieh, M. A.: Distributed Robot Ensemble Control for Deployment to Multiple Sites, in: 2011 Robotics: Science and Systems, Los Angeles, CA USA, 2011.
* Mendoza and Mancho (2012) Mendoza, C. and Mancho, A. M.: The Lagrangian description of aperiodic flows: a case study of the Kuroshio Current, Nonlinear Proc. Geoph., 19, 449–472, 2012.
* Michini et al. (2013) Michini, M., Mallory, K., Larkin, D., Hsieh, M. A., Forgoston, E., and Yecko, P. A.: An experimental testbed for multi-robot tracking of manifolds and coherent structures in flows, in: To appear at the 2013 ASME Dynamical Systems and Control Conference, 2013.
* Miller et al. (2002) Miller, P. D., Pratt, L. J., Helfrich, K., Jones, C., Kanth, L., and Choi, J.: Chaotic transport of mass and potential vorticity for an island recirculation, J. Phys. Oceanogr., 32, 80–102, 2002.
* Rogerson et al. (1999) Rogerson, A. M., Miller, P. D., Pratt, L. J., and Jones, C.: Lagrangian motion and fluid exchange in a barotropic meandering jet, J. Phys. Oceanogr., 29, 2635–2655, 1999.
* Rypina et al. (2011) Rypina, I. I., Scott, S., Pratt, L. J., and Brown, M. G.: Investigating the connection between trajectory complexities, Lagrangian coherent structures, and transport in the ocean, Nonlinear Processes in Geophysics, 18, 977–987, 2011.
* Senatore and Ross (2008) Senatore, C. and Ross, S.: Fuel-efficient navigation in complex flows, in: American Control Conference, 2008, pp. 1244 –1248, 10.1109/ACC.2008.4586663, 2008.
* Shadden et al. (2005) Shadden, S. C., Lekien, F., and Marsden, J. E.: Definition and properties of Lagrangian coherent structures from finite-time Lyapunov exponents in two-dimensional aperiodic flows, Physica D: Nonlinear Phenomena, 212, 271–304, DOI: 10.1016/j.physd.2005.10.007, URL http://www.sciencedirect.com/science/article/pii/S0167278905004446, 2005.
* Sydney and Paley (2011) Sydney, N. and Paley, D. A.: Multi-vehicle control and optimization for spatiotemporal sampling, in: IEEE Conf. Decision and Control, pp. 5607–5612, Orlando, FL, 2011.
* Wang et al. (2009) Wang, D., Lermusiaux, P. F., Haley, P. J., Eickstedt, D., Leslie, W. G., and Schmidt, H.: Acoustically focused adaptive sampling and on-board routing for marine rapid environmental assessment, Journal of Marine Systems, 78, 393–407, 2009.
* Williams and Sukhatme (2012) Williams, R. and Sukhatme, G.: Probabilistic Spatial Mapping and Curve Tracking in Distributed Multi-Agent Systems, in: Submitted to IEEE International Conference on Robotics and Automation, Minneapolis, MN, 2012.
* Wu and Zhang (2011) Wu, W. and Zhang, F.: Cooperative Exploration of Level Surfaces of Three Dimensional Scalar Fields, Automatica, the IFAC Journal, 47, 2044–2051, 2011.
* Zhang et al. (2007) Zhang, F., Fratantoni, D. M., Paley, D., Lund, J., and Leonard, N. E.: Control of Coordinated Patterns for Ocean Sampling, International Journal of Control, 80, 1186–1199, 2007.
$T_{c}$ | 2 | 5 | 8 | 9 | 10
---|---|---|---|---|---
Ring Pattern | 12.99 | 5.98 | 3.45 | 3.49 | 3.66
Block Pattern | - | 11.21 | - | - | 12.72
L Pattern | - | 30.09 | - | - | 30.45
Table 1: Summary of the RMSE for each simulation pattern at $t=450$ with the
time-invariant flow field. The RMSE for the Baseline case is 4.09.
|
arxiv-papers
| 2013-03-04T14:12:46 |
2024-09-04T02:49:42.412492
|
{
"license": "Public Domain",
"authors": "Kenneth Mallory, M. Ani Hsieh, Eric Forgoston and Ira B. Schwartz",
"submitter": "Ira Schwartz",
"url": "https://arxiv.org/abs/1303.0704"
}
|
1303.0875
|
In this paper, we present a theoretical effort to connect the theory of
program size to psychology by implementing a concrete language of thought with
Turing-computable Kolmogorov complexity (${\rm LT^{2}C^{2}}$) satisfying the
following requirements: 1) to be simple enough so that the complexity of any
given finite binary sequence can be computed, 2) to be based on tangible
operations of human reasoning (printing, repeating,…), 3) to be sufficiently
powerful to generate all possible sequences but not too powerful as to
identify regularities which would be invisible to humans. We first formalize
${\rm LT^{2}C^{2}}$, giving its syntax and semantics and defining an adequate
notion of program size. Our setting leads to a Kolmogorov complexity function
relative to ${\rm LT^{2}C^{2}}$ which is computable in polynomial time, and it
also induces a prediction algorithm in the spirit of Solomonoff’s inductive
inference theory. We then prove the efficacy of this language by investigating
regularities in strings produced by participants attempting to generate random
strings. Participants had a profound understanding of randomness and hence
avoided typical misconceptions such as exaggerating the number of
alternations. We reasoned that remaining regularities would express the
algorithmic nature of human thoughts, revealed in the form of specific
patterns. Kolmogorov complexity relative to ${\rm LT^{2}C^{2}}$ passed three
expected tests examined here: 1) human sequences were less complex than
control PRNG sequences, 2) human sequences were not stationary, showing
decreasing values of complexity resulting from fatigue, 3) each individual
showed traces of algorithmic stability since fitting of partial sequences was
more effective to predict subsequent sequences than average fits. This work
extends on previous efforts to combine notions of Kolmogorov complexity theory
and algorithmic information theory to psychology, by explicitly proposing a
language which may describe the patterns of human thoughts.
# ${\rm LT^{2}C^{2}}$: A language of thought with Turing-computable
Kolmogorov complexity
Sergio Romano [dc], Mariano Sigman [df, conicet] (E-mail: [email protected]), Santiago
Figueira [dc, conicet] (E-mail: [email protected], [email protected])
(12 December 2012; 2 February 2013)
Volume: 5
[dc] Department of Computer Science, FCEN, University of Buenos Aires,
Pabellón I, Ciudad Universitaria (C1428EGA) Buenos Aires, Argentina. [df]
Laboratory of Integrative Neuroscience, Physics Department, FCEN, University
of Buenos Aires, Pabellón I, Ciudad Universitaria (C1428EGA) Buenos Aires,
Argentina. [conicet] CONICET, Argentina.
## 1 Introduction
Although people feel they understand the concept of randomness [1], humans are
unable to produce random sequences, even when instructed to do so [2, 3, 4, 5,
6], and they perceive randomness in a way that is inconsistent with probability
theory [7, 8, 9, 10]. For instance, random sequences are not perceived by
participants as such because runs appear too long to be random [11, 12] and,
similarly, sequences produced by participants aiming to be random have too
many alternations [13, 14]. This bias, known as the gambler’s fallacy, is
thought to result from an expectation of local representativeness (LR) of
randomness [10] which ascribes chance to a self-correcting mechanism, promptly
restoring the balance whenever disrupted. In words of Tversky and Kahneman
[5], people apply the law of large numbers too hastily, as if it were the law
of small numbers. The gambler’s fallacy leads to classic psychological
illusions in real-world situations such as the hot hand perception by which
people assume specific states of high performance, while analyses of records
show that sequences of hits and misses are largely compatible with a Bernoulli
(random) process [15, 16].
Despite massive evidence showing that the perception and production of
randomness show systematic distortions, a mathematical and psychological theory of
randomness remains partly elusive. From a mathematical point of view —as
discussed below— a notion of randomness for finite sequences presents a major
challenge.
From a psychological point of view, it remains difficult to ascertain whether
the inability to produce and perceive randomness adequately results from a
genuine misunderstanding of randomness or, instead, as a consequence of the
algorithmic nature of human thoughts which is revealed in the forms of
patterns and, hence, in the impossibility of producing genuine chance.
In this work, we address both issues by developing a framework based on a
specific language of thought by instantiating a simple device which induces a
computable (and efficient) definition of algorithmic complexity [17, 18, 19].
The notion of algorithmic complexity is described in greater detail below but,
in short, it assigns a measure of complexity to a given sequence as the length
of the shortest program capable of producing it. If a sequence is
algorithmically compressible, it implies that there may be a certain pattern
embedded (described succinctly by the program) and hence it is not random. For
instance, the binary version of Champernowne’s sequence [20]
$01101110010111011110001001101010111100\dots$
consisting of the concatenation of the binary representation of all the
natural numbers, one after another, is known to be normal in the scale of 2,
which means that every finite word of length $n$ occurs with a limit frequency
of $2^{-n}$ —e.g., the string $1$ occurs with probability $2^{-1}$, the string
$10$ with probability $2^{-2}$, and so on. Although this sequence may seem
random based on its probability distribution, every prefix of length $n$ is
produced by a program much shorter than $n$.
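For illustration, the displayed prefix can be reproduced by concatenating the binary representations of the natural numbers; the sketch below assumes the concatenation starts at $0$, which is what matches the prefix shown above.

```python
def champernowne_binary(length):
    """Prefix of the binary Champernowne sequence: concatenate 0, 1, 10, 11, 100, ..."""
    s, n = "", 0
    while len(s) < length:
        s += format(n, "b")   # binary representation of n, without leading zeros
        n += 1
    return s[:length]

print(champernowne_binary(38))  # 01101110010111011110001001101010111100
```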
The theory of program size, developed simultaneously in the ’60s by Kolmogorov
[17], Solomonoff [21] and Chaitin [22], had a major influence in theoretical
computer science. Its practical relevance was rather obscure because most
notions, tools and problems were undecidable and, overall, because it did not
apply to finite sequences. A problem at the heart of this theory is that the
complexity of any given sequence depends on the chosen language. For instance,
the sequence
$x_{1}=1100101001111000101000110101100110011100$
which seems highly complex, may be trivially accounted for by a single
character if there is a symbol (or instruction of a programming language) which accounts
for this sequence. This has its psychological analog in the kind of
regularities people often extract:
$x_{2}=1010101010101010101010101010101010101010$
is obviously a non-random sequence, as it can succinctly be expressed as
$\small\textsl{repeat 20 times: print `10'}.$ (1)
Instead, the sequence
$x_{3}=0010010000111111011010101000100010000101$
appears more random and yet it is highly compressible as it consists of the
first 40 binary digits of $\pi$ after the decimal point. This regularity is
simply not extracted by the human-compressor and demonstrates how the
exceptions to randomness reveal natural patterns of thoughts [23].
The genesis of a practical (computable) algorithmic information theory [24]
has had an influence (although not yet a major impact) in psychology. Variants
of Kolmogorov complexity have been applied to human concept learning [25], to
general theories of cognition [26] and to subjective randomness [27, 23]. In
this last work, Falk and Konold showed that a simple measure, inspired by
algorithmic notions, was a good correlate of perceived randomness [27].
Griffiths & Tenenbaum developed statistical models that incorporate the
detection of certain regularities, which are classified in terms of the
Chomsky hierarchy [23]. They showed the existence of motifs (repetition,
symmetry) and related their probability distributions to Kolmogorov complexity
via Levin’s coding theorem (cf. section 7 for more details).
The main novelty of our work is to develop a class of specific programming
languages (or Turing machines) which allows us to stick to the theory of
program size developed by Kolmogorov, Solomonoff and Chaitin. We use the
patterns of sequences of humans aiming to produce random strings to fit, for
each individual, the language which captures these regularities.
## 2 Mathematical theory of randomness
The idea behind Kolmogorov complexity theory is to study the length of the
descriptions that a formal language can produce to identify a given string.
All descriptions are finite words over a finite alphabet, and hence each
description has a finite length —or, more generally, a suitable notion of
size. One string may have many descriptions, but any description should
describe one and only one string. Roughly, the Kolmogorov complexity [17] of a
string $x$ is the length of the shortest description of $x$. So a string is
‘simple’ if it has at least one short description, and it is ‘complex’ if all
its descriptions are long. Random strings are those with high complexity.
As we have mentioned, Kolmogorov complexity uses programming languages to
describe strings. Some programming languages are Turing complete, which means
that any partial computable function can be represented in it. The commonly
used programming languages, like C++ or Java, are all Turing complete.
However, there are also Turing incomplete programming languages, which are
less powerful but more convenient for specific tasks.
In any reasonable imperative language, one can describe $x_{2}$ above with a
program like (1), of length 26, which is considerably smaller than $40$, the
size of the described string. It is clear that $x_{2}$ is ‘simple’. The case
of $x_{3}$ is a bit tricky. Although at first sight it seems to have a
complete lack of structure, it contains a hidden pattern: it consists of the
first forty binary digits of $\pi$ after the decimal point. This pattern could
hardly be recognized by the reader, but once it is revealed to us, we agree
that $x_{3}$ must also be tagged as ‘simple’. Observe that the underlying
programming language is central: $x_{3}$ is ‘simple’ with the proviso that the
language is strong enough to represent (in a reasonable way) an algorithm for
computing the bits of $\pi$ —a language to which humans are not likely to have
access when they try to find patterns in a string. Finally, for $x_{1}$, the
best way to describe it seems to be something like
print ‘1100101001111000101000110101100110011100’,
which includes the string in question verbatim, length $48$. Hence $x_{1}$
only has long descriptions and hence it is ‘complex’.
In general, both the string of length $n$ which alternates $0$s and $1$s and
the string which consists of the first $n$ binary digits of $\pi$ after the
decimal point can be computed by a program of length $\approx\log n$ —and this
applies to any computable sequence. The idea of the algorithmic randomness
theory is that a truly random string of length $n$ necessarily needs a program
of length $\approx n$ (cf. section 2.2 for details).
### 2.1 Languages, Turing machines and Kolmogorov complexity
Any programming language $\mathcal{L}$ can be formalized with a Turing machine
$M_{\mathcal{L}}$, so that programs of $\mathcal{L}$ are represented as inputs
of $M_{\mathcal{L}}$ via an adequate binary codification. If $\mathcal{L}$ is
Turing complete then the corresponding machine $M_{\mathcal{L}}$ is called
universal, which is equivalent to say that $M_{\mathcal{L}}$ can simulate any
other Turing machine.
Let $\\{0,1\\}^{*}$ denote the set of finite words over the binary alphabet.
Given a Turing machine $M$, a program $p$ and a string $x$
($p,x\in\\{0,1\\}^{*}$), we say that $p$ is an $M$-description of $x$ if
$M(p)=x$ —i.e., the program $p$, when executed in the machine $M$, computes
$x$. Here we do not care about the time that the computation needs, or the
memory it consumes. The Kolmogorov complexity of $x\in\\{0,1\\}^{*}$ relative
to $M$ is defined by the length of the shortest $M$-description of $x$. More
formally,
$\displaystyle K_{M}(x)\stackrel{{\scriptstyle\scriptscriptstyle\mathrm{def}}}{{=}}\min\left(\\{|p|\colon M(p)=x\\}\cup\\{\infty\\}\right),$
where $|p|$ denotes the length of $p$. Here $M$ is any given Turing machine,
possibly one with a very specific behavior, so it may be the case that a given
string $x$ does not have any $M$-description at all. In this case,
$K_{M}(x)=\infty$. In practical terms, a machine $M$ is a useful candidate to
measure complexity if it computes a surjective function. In this case, every
string $x$ has at least one $M$-description and therefore $K_{M}(x)<\infty$.
### 2.2 Randomness for finite words
The strength of Kolmogorov complexity appears when $M$ is set to any universal
Turing machine $U$. The invariance theorem states that $K_{U}$ is minimal, in
the sense that for every Turing machine $M$ there is a constant $c_{M}$ such
that for all $x\in\\{0,1\\}^{*}$ we have $K_{U}(x)\leq K_{M}(x)+c_{M}$. Here,
$c_{M}$ can be seen as the specification of the language $M$ in $U$ (i.e., the
information contained in $c_{M}$ tells $U$ that the machine to be simulated is
$M$). If $U$ and $U^{\prime}$ are two universal Turing machines then $K_{U}$ and
$K_{U^{\prime}}$ differ at most by a constant. In a few words, $K_{U}(x)$
represents the length of the ultimate compressed version of $x$, performed by
means of algorithmic processes.
For analysis of arbitrarily long sequences, $c_{M}$ becomes negligible and
hence for nonpractical aspects of the theory the choice of the machine is not
relevant. However, for short sequences, as we study here, this becomes a
fundamental problem, as notions of complexity are highly dependent on the
choice of the underlying machine through the constant $c_{M}$. The most
trivial example, as mentioned in the introduction, is that for any given
sequence, say $x_{1}$, there is a machine $M$ for which $x_{1}$ has minimal
complexity.
### 2.3 Solomonoff induction
Here we have presented compression as a framework to understand randomness.
Another very influential paradigm proposed by Schnorr is to use the notion of
martingale (roughly, a betting strategy), by which a sequence is random if
there is no computable martingale capable of predicting forthcoming symbols
(say, of a binary alphabet $\\{0,1\\}$) better than chance [28, 29]. In the
1960s, Solomonoff [21] proposed a universal prediction method which
successfully approximates any distribution $\mu$, with the only requirement of
$\mu$ being computable.
This theory brings together concepts of algorithmic information, Kolmogorov
complexity and probability theory. Roughly, the idea is that amongst all
‘explanations’ of $x$, those which are ‘simple’ are more relevant, hence
following Occam’s razor principle: amongst all hypothesis that are consistent
with the data, choose the simplest. Here the ‘explanations’ are formalized as
programs computing $x$, and ‘simple’ means low Kolmogorov complexity.
Solomonoff’s theory builds on the notion of monotone (and prefix) Turing
machines. Monotone machines are ordinary Turing machines with a one-way read-
only input tape, some work tapes, and a one-way write-only output tape. The
output is written one symbol at a time, and no erasing is possible in it. The
output can be finite if the machine halts, or infinite in case the machine
computes forever. The output head of monotone machines can only “print and
move to the right” so they are well suited for the problem of inference of
forthcoming symbols based on partial (and finite) states of the output
sequence. Any monotone machine $N$ has the monotonicity property (hence its
name) with respect to extension: if $p,q\in\\{0,1\\}^{*}$ then $N(p)$ is a
prefix of $N(p{}^{\smallfrown}q)$, where $p{}^{\smallfrown}q$ denotes the
concatenation of $p$ and $q$.
One of Solomonoff’s fundamental results is that given a finite observed
sequence $x\in\\{0,1\\}^{*}$, the most likely finite continuation is the one
in which the concatenation of $x$ and $y$ is less complex in a Kolmogorov
sense. This is formalized in the following result (see theorem 5.2.3 of [24]):
for almost all infinite binary sequences $X$ (in the sense of $\mu$) we have
$-\lim\limits_{n\to\infty}\log\mu(y\ |\ X\\!\\!\upharpoonright\\!n)=\lim\limits_{n\to\infty}Km_{U}((X\\!\\!\upharpoonright\\!n){}^{\smallfrown}y)-Km_{U}(X\\!\\!\upharpoonright\\!n)+O(1)<\infty.$
Here, $X\\!\\!\upharpoonright\\!n$ represents the first $n$ symbols of $X$,
and $Km_{U}$ is the monotone Kolmogorov complexity relative to a monotone
universal machine $U$. That is, $Km_{U}(x)$ is defined as the length of the
shortest program $p$ such that the output of $U(p)$ starts with $x$ —and
possibly has a (finite or infinite) continuation.
In other words, Solomonoff inductive inference leads to a method of prediction
based on data compression, whose idea is that whenever the source has output
the string $x$, it is a good heuristic to choose the extrapolation $y$ of $x$
that minimizes $Km_{U}(x{}^{\smallfrown}y)$. For instance, if one has observed
$x_{2}$, it is more likely for the continuation to be $1010$ rather than
$0101$, as the former can be succinctly described by a program like
$\displaystyle\small\textsl{repeat 22 times: print `10'}.$ (2)
and the latter looks more difficult to describe; indeed the shortest program
describing it seems to be something like
$\displaystyle\small\textsl{repeat 20 times: print `10'; print `0101'}.$ (3)
Intuitively, as program (2) is shorter than (3), $x_{2}{}^{\smallfrown}1010$
is more probable than $x_{2}{}^{\smallfrown}0101$. Hence, if we have seen
$x_{2}$, it seems to be a better strategy to predict $1$.
## 3 A framework for human thoughts
The notion of thought is not well grounded. We lack an operative working
definition and, as also happens with other terms in neuroscience
(consciousness, self, …), the word thought is highly polysemic in common
language. It may refer, for example, to a belief, to an idea or to the content
of the conscious mind. Due to this difficulty, the mere notion of thought has
not been a principal or directed object of study in neuroscience, although of
course it is always present implicitly, vaguely, without a formal definition.
Here we do not intend to elaborate an extensive review on the philosophical
and biological conceptions of thoughts (see [30] for a good review on
thoughts). Nor are we in a theoretical position to provide a full formal
definition of a thought. Instead, we point to the key assumptions of our
framework about the nature of thoughts. This accounts to defining constraints
in the class of thoughts which we aim to describe. In other words, we do not
claim to provide a general theory of human thoughts (which is not amenable at
this stage lacking a full definition of the class) but rather of a subset of
thoughts which satisfy certain constraints defined below.
For instance, E.B. Titchener and W. Wundt, the founders of structuralist
school in psychology (seeking structure in the mind without evoking
metaphysical conceptions, a tradition which we inherit and to which we
adhere), believed that thoughts were images (there are not imageless thoughts)
and hence can be broken down to elementary sensations [30]. While we do not
necessarily agree with these propositions (see Carey [31] for more contemporary
versions denying the sensory foundations of conceptual knowledge), here we do
not intend to explain all possible thoughts but rather a subset, a simpler
class which —in agreement with Wundt and Titchener— can be expressed in
images. More precisely, we develop a theory which may account for Boole’s [32]
notion of thoughts as propositions and statements about the world which can be
represented symbolically. Hence, a first and crucial assumption of our
framework is that thoughts are discrete. Elsewhere we have extensively
discussed [33, 34, 35, 36, 37, 38, 39] how the human brain, whose architecture
is quite different from that of Turing machines, can give rise to a form of
computation which is discrete, symbolic and resembles Turing devices.
Second, here we focus on the notion of “prop-less” mental activity, i.e.,
whatever (symbolic) computations can be carried out by humans without
resorting to external aids such as paper, marbles, computers or books. This is
done by actually asking participants to perform the task “in their heads”.
Again, this is not intended to set a proposition about the universality of
human thoughts but, instead, a narrower set of thoughts which we conceive is
theoretically addressable in this mathematical framework.
Summarizing:
1. 1.
We think we do not have a good mathematical (even philosophical) conception of
thoughts, as mental structures, yet.
2. 2.
Intuitively (and philosophically), we adhere to a materialistic and computable
approach to thoughts. Broadly, one can think (to picture, not to provide a
formal framework) that thoughts are formations of the mind with certain
stability which defines distinguishable clusters or objects [40, 41, 42].
3. 3.
While the set of such objects and the rules of their transitions may be of
many different forms (analogous, parallel, unconscious, unlinked to sensory
experience, non-linguistic, non-symbolic), here we work on a subset of
thoughts, a class defined by Boole’s attempt to formalize thought as symbolic
propositions about the world.
4. 4.
These states —which may correspond to human “conscious rational thoughts”, the
seed of the Boole and Turing foundations [34]— are discrete, defined by
symbols, and potentially representable by a Turing device.
5. 5.
We focus on an even narrower space of thoughts: binary formations (right or
left, zero or one), in order to study what kind of language best describes
these transitions. This work can be naturally extended to understand discrete
transitions in conceptual formations [43, 44, 45].
6. 6.
We concentrate on prop-less mental activity to understand limitations of the
human mind when it does not have evident external support (paper, computer, …).
## 4 Implementing a language of thought with Turing-computable complexity
As explained in section 2.1, Kolmogorov complexity considers all possible
computable compressors and assigns to a string $x$ the length of the shortest
of the corresponding compressions. This seems to be a perfect theory of
compression but it has a drawback: the function $K_{U}$ is not computable,
that is, there is no effective procedure to calculate $K_{U}(x)$ given $x$.
On the other hand, the definition of randomness introduced in section 2.1,
having very deep and intricate connections with algorithmic information and
computability theories, is simply too strong to explain our own perception of
randomness. To detect that $x_{3}$ consists of the first forty binary digits
of $\pi$ is incompatible with human patterns of thought.
Hence, the intrinsic algorithms (or observed patterns) which make human
sequences not random are too restricted to be accounted by a universal machine
and may be better described by a specific machine. Furthermore, our hypothesis
is that each person uses his own particular specific machine or algorithm to
generate a random string.
As a first step in this complicated enterprise, we propose to work with a
specific language ${\rm LT^{2}C^{2}}$ which meets the following requirements:
* •
${\rm LT^{2}C^{2}}$ must reflect some plausible features of our mental
activity when finding succinct descriptions of words. For instance, finding
repetitions in a sequence such as $x_{2}$ seems to be something easy for our
brain, but detecting numerical dependencies between its digits as in $x_{3}$
seems to be very unlikely.
* •
${\rm LT^{2}C^{2}}$ must be able to describe any string in $\\{0,1\\}^{*}$.
This means that the map given by the induced machine
$N\stackrel{{\scriptstyle\scriptscriptstyle\mathrm{def}}}{{=}}N_{{\rm
LT^{2}C^{2}}}$ must be surjective.
* •
$N$ must be simple enough so that $K_{N}$ —the Kolmogorov complexity relative
to $N$— becomes computable. This requirement clearly makes ${\rm LT^{2}C^{2}}$
Turing incomplete, but as we have seen before, this is consistent with human
deviations from randomness.
* •
The rate of compression given by $K_{N}$ must be sensible for very short
strings, since our experiments will produce such strings. For instance, the
approach, followed in [46], of using the size of the compressed file via
general-purpose compressors like Lempel-Ziv based dictionary (gzip) or block
based (bzip2) to approximate the Kolmogorov complexity does not work in our
setting. This method works best for long files.
* •
${\rm LT^{2}C^{2}}$ should have certain degrees of freedom, which can be
adjusted in order to approximate the specific machine that each individual
follows during the process of randomness generation.
We will not go into the details on how to codify the instructions of ${\rm
LT^{2}C^{2}}$ into binary strings of $N$: for the sake of simplicity we take
$N$ as a surjective total mapping ${\rm LT^{2}C^{2}}\to\\{0,1\\}^{*}$. We
restrict ourselves to describe the grammar and semantics of our proposed
programming language ${\rm LT^{2}C^{2}}$. It is basically an imperative
language with only two classes of instructions: a sort of print $i$, which
prints the bit $i$ in the output; and a sort of repeat $n$ times $P$, which,
for a fixed $n\in\mathbb{N}$, repeats the program $P$ $n$ times. The former
is simply represented as $i$ and the latter as $(P)^{n}$.
Formally, we set the alphabet $\\{0,1,(,),^{0},\dots,^{9}\\}$ and define ${\rm
LT^{2}C^{2}}$ over such alphabet with the following grammar:
$P\quad::=\quad\epsilon\quad|\quad 0\quad|\quad 1\quad|\quad
PP\quad|\quad(P)^{n},$
where $n>1$ is the decimal representation of $n\in\mathbb{N}$ and $\epsilon$
denotes the empty string. The semantics of ${\rm LT^{2}C^{2}}$ is given
through the behavior of $N$ as follows:
$\displaystyle N(i)\stackrel{{\scriptstyle\scriptscriptstyle\mathrm{def}}}{{=}}i\qquad\mbox{for $i\in\\{\epsilon,0,1\\}$},$
$\displaystyle N(P_{1}P_{2})\stackrel{{\scriptstyle\scriptscriptstyle\mathrm{def}}}{{=}}N(P_{1}){}^{\smallfrown}N(P_{2}),$
$\displaystyle N((P)^{n})\stackrel{{\scriptstyle\scriptscriptstyle\mathrm{def}}}{{=}}\underbrace{N(P){}^{\smallfrown}\cdots{}^{\smallfrown}N(P)}_{\mbox{\small$n$ times}}.$
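To fix ideas, the following is a minimal sketch (in Python; the flattened `(P)^n` notation, the use of whitespace as a separator, and the function name are our assumptions for this illustration, not the authors' implementation) of an interpreter realizing the semantics of $N$:

```python
import re

# Sketch of the machine N for LT^2C^2.  Since superscripts are awkward in plain
# text, a repeat block (P)^n is written here as "(P)^n", with whitespace ignored
# so that an exponent can be separated from a following print instruction.

def eval_program(p: str) -> str:
    """Return the binary string printed by an LT^2C^2 program."""
    out, i = [], 0
    while i < len(p):
        c = p[i]
        if c == " ":                        # whitespace is only a separator
            i += 1
        elif c in "01":                     # print instruction: emit one bit
            out.append(c)
            i += 1
        elif c == "(":                      # repeat instruction: (P)^n
            depth, j = 1, i + 1
            while depth:                    # locate the matching ')'
                depth += {"(": 1, ")": -1}.get(p[j], 0)
                j += 1
            body = p[i + 1 : j - 1]
            m = re.match(r"\^(\d+)", p[j:])  # decimal exponent n > 1
            n = int(m.group(1))
            out.append(eval_program(body) * n)
            i = j + m.end()
        else:
            raise ValueError(f"unexpected symbol {c!r}")
    return "".join(out)

# The programs of Table 1 all describe 1001001001:
assert eval_program("1001001001") == "1001001001"
assert eval_program("(100)^3 1") == "1001001001"
assert eval_program("1((0)^2 1)^3") == "1001001001"
```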
$N$ is not universal, but every string $x$ has a program in $N$ which
describes it: namely $x$ itself. Furthermore, $N$ is monotone in the sense
that if $p,q\in{\rm LT^{2}C^{2}}$ then $N(p)$ is a prefix of
$N(p{}^{\smallfrown}q)$. In Table 1, the first column shows some examples of
$N$-programs which compute $1001001001$.
program | size
---|---
$1001001001$ | 10
$(100)^{2}1(0)^{2}1$ | 6.6
$(100)^{3}1$ | 4.5
$1((0)^{2}1)^{3}$ | 3.8
Table 1: Some $N$-descriptions of $1001001001$ and their sizes for $b=r=1$
### 4.1 Kolmogorov complexity for ${\rm LT^{2}C^{2}}$
The Kolmogorov complexity relative to $N$ (and hence to the language ${\rm
LT^{2}C^{2}}$) is defined as
$K_{N}(x)\stackrel{{\scriptstyle\scriptscriptstyle\mathrm{def}}}{{=}}\min\\{\|p\|\colon
p\in{\rm LT^{2}C^{2}},N(p)=x\\},$
where $\|p\|$, the size of a program $p$, is inductively defined as:
$\displaystyle\|\epsilon\|\stackrel{{\scriptstyle\scriptscriptstyle\mathrm{def}}}{{=}}0,\qquad\|p\|\stackrel{{\scriptstyle\scriptscriptstyle\mathrm{def}}}{{=}}b\quad\mbox{for $p\in\\{0,1\\}$},$
$\displaystyle\|P_{1}P_{2}\|\stackrel{{\scriptstyle\scriptscriptstyle\mathrm{def}}}{{=}}\|P_{1}\|+\|P_{2}\|,\qquad\|(P)^{n}\|\stackrel{{\scriptstyle\scriptscriptstyle\mathrm{def}}}{{=}}r\cdot\log n+\|P\|.$
In the above definition, $b\in\mathbb{N},r\in\mathbb{R}$ are two parameters
that control the relative weight of the print operation and the repeat $n$
times operation. In the sequel, we drop the subindex of $K_{N}$ and simply
write $K\stackrel{{\scriptstyle\scriptscriptstyle\mathrm{def}}}{{=}}K_{N}$.
Table 1 shows some examples of the size of $N$-programs when $b=r=1$. Observe
that for all $x$ we have $K(x)\leq\|x\|$.
It is not difficult to see that $K(x)$ depends only on the values of $K(y)$,
where $y$ is any nonempty and proper substring of $x$. Since $\|\cdot\|$ is
computable in polynomial time, using dynamic programming one can calculate
$K(x)$ in polynomial time. This, of course, is a major difference with respect
to the Kolmogorov complexity relative to a universal machine, which is not
computable.
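This recurrence can be implemented directly. The sketch below is ours (not the authors' code): it assumes that an optimal program is either a single print, a concatenation of two optimal subprograms, or a repeat over a period of the substring, and it takes the logarithm in base 10, an assumption that reproduces the sizes listed in Table 1.

```python
from math import log10

B, R = 1.0, 1.0   # weights b and r of print and repeat (Table 1 uses b = r = 1)

def K(x: str) -> float:
    """Kolmogorov complexity relative to LT^2C^2, by dynamic programming."""
    n = len(x)
    best = [[0.0] * (n + 1) for _ in range(n + 1)]   # best[i][j] = K(x[i:j])
    for length in range(1, n + 1):
        for i in range(0, n - length + 1):
            j = i + length
            if length == 1:
                best[i][j] = B                       # a single print
                continue
            cost = min(best[i][k] + best[k][j]       # concatenation of two programs
                       for k in range(i + 1, j))
            for plen in range(1, length):            # repeat: x[i:j] = w^m, m >= 2
                if length % plen:
                    continue
                w, m = x[i:i + plen], length // plen
                if w * m == x[i:j]:
                    cost = min(cost, R * log10(m) + best[i][i + plen])
            best[i][j] = cost
    return best[0][n]

# Matches the best program of Table 1, 1((0)^2 1)^3, of size ~3.78.
assert abs(K("1001001001") - 3.78) < 0.01
```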
### 4.2 From compression to prediction
As one can imagine, the perfect universal prediction method described in
section 2.3 is, again, non-computable. We define a computable prediction
algorithm based on Solomonoff’s theory of inductive inference but using $K$,
the Kolmogorov complexity relative to ${\rm LT^{2}C^{2}}$, instead of $Km_{U}$
(which depends on a universal machine). To predict the next symbol of
$x\in\\{0,1\\}^{*}$, we follow the idea described in section 2.3: amongst all
extrapolations $y$ of $x$ we choose the one that minimizes
$K(x{}^{\smallfrown}y)$. If such $y$ starts with $1$, we predict $1$, else we
predict $0$. Since we cannot examine the infinitely many extrapolations, we
restrict to those up to a fixed given length $\ell_{F}$. Also, we do not take
into account the whole $x$ but only a suffix of length $\ell_{P}$. Both
$\ell_{F}$ and $\ell_{P}$ are parameters which control, respectively, how many
extrapolation bits are examined ($\ell_{F}$ many Future bits) and how many
bits of the tail of $x$ ($\ell_{P}$ many Past bits) are considered.
Let $\\{0,1\\}^{n}$ (resp. $\\{0,1\\}^{\leq n}$) be the set of words over the
binary alphabet $\\{0,1\\}$ of length $n$ (resp. at most $n$). Formally, the
prediction method is as follows. Suppose $x=x_{1}\cdots x_{n}$
($x_{i}\in\\{0,1\\}$) is a string. The next symbol is determined as follows:
$\small{\rm Next}(x_{1}\cdots
x_{n})\stackrel{{\scriptstyle\scriptscriptstyle\mathrm{def}}}{{=}}\begin{cases}0&\mbox{if
$m_{0}<m_{1}$;}\\\ 1&\mbox{if $m_{0}>m_{1}$;}\\\ g(x_{n-\ell_{P}}\cdots
x_{n})&\mbox{otherwise.}\end{cases}$
where for $i\in\\{0,1\\}$,
$m_{i}\stackrel{{\scriptstyle\scriptscriptstyle\mathrm{def}}}{{=}}\min\\{K(x_{n-\ell_{P}}\cdots
x_{n}i{}^{\smallfrown}y)\colon y\in\\{0,1\\}^{\leq\ell_{F}}\\},$
and $g:\\{0,1\\}^{\ell_{P}}\to\\{0,1\\}$ is defined as $g(z)=i$ if the number
of occurrences of $i$ in $z$ is greater than the number of occurrences of
$1-i$ in $z$; in case the number of occurrences of $1$s and $0$s in $z$
coincide then $g(z)$ is defined as the last bit of $z$.
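A direct, brute-force transcription of this rule could look as follows; this is a sketch reusing the `K` function from the previous snippet, with parameter names of our own choosing, and it simply enumerates the at most $2^{\ell_{F}+1}-1$ extrapolations as the definition states.

```python
from itertools import product

def next_symbol(x: str, ell_P: int, ell_F: int) -> str:
    """Sketch of the Next predictor of section 4.2 (not the authors' code)."""
    tail = x[-(ell_P + 1):]                   # x_{n-ell_P} ... x_n
    m = {}
    for i in "01":
        # m_i: best complexity over all extrapolations y of length <= ell_F
        m[i] = min(K(tail + i + "".join(y))
                   for k in range(ell_F + 1)
                   for y in product("01", repeat=k))
    if m["0"] != m["1"]:
        return "0" if m["0"] < m["1"] else "1"
    # Tie-break g: majority symbol of the tail, or its last bit if balanced.
    zeros, ones = tail.count("0"), tail.count("1")
    if zeros != ones:
        return "0" if zeros > ones else "1"
    return tail[-1]
```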
## 5 Methods
Thirty-eight volunteers (mean age = 24) participated in an experiment to
examine the capacity of ${\rm LT^{2}C^{2}}$ to identify regularities in the
production of binary sequences. Participants were asked to produce random
sequences, without further instruction.
All the participants were college students or graduates with programming
experience and knowledge of the theoretical foundations of randomness and
computability. This was intended to test these ideas on a hard sample where we
did not expect the typical errors which result from a misunderstanding of chance.
The experiment was divided into four blocks. In each block the participant
freely pressed the left or right arrow key $120$ times.
After each key press, the participant received a notification with a green
square which progressively filled a line to indicate to the participant the
number of choices made. At the end of the block, participants were given
feedback on how many times the prediction method had correctly predicted their
input. After this point, a new trial would start.
The $38$ participants each produced $4$ sequences, yielding a total of $152$ sequences.
$14$ sequences were excluded from analysis because they had an extremely high
level of predictability. Including these sequences would have actually
improved all the scores reported here.
The experiment was programmed in ActionScript and can be seen at
http://gamesdata.lafhis-server.exp.dc.uba.ar/azarexp.
## 6 Results
### 6.1 Law of large numbers
Any reasonable notion of randomness for binary strings should imply
Borel’s normality, or the law of large numbers in the sense that if
$x\in\\{0,1\\}^{n}$ is random then the number of occurrences of any given
string $y$ in $x$ divided by $n$ should tend to $2^{-|y|}$, as $n$ goes to
infinity.
A well-known result obtained in some investigations on generation or
perception of randomness in binary sequences is that people tend to increase
the number of alternations of symbols with respect to the expected value [27].
Given a string $x$ of length $n$ with $r$ runs, there are $n-1$ transitions
between successive symbols and the number of alternations between symbol types
is $r-1$. The probability of alternation of the string $x$ is defined as
$P:\\{0,1\\}^{\geq 2}\to[0,1],\qquad P(x)=\frac{r-1}{n-1}.$
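Equivalently, $P(x)$ is the fraction of the $n-1$ transitions at which the symbol changes; a minimal sketch (ours, not the authors' code):

```python
def prob_alternation(x: str) -> float:
    """Probability of alternation P(x) = (r - 1)/(n - 1)."""
    changes = sum(a != b for a, b in zip(x, x[1:]))   # alternations = r - 1
    return changes / (len(x) - 1)

assert prob_alternation("0101") == 1.0                 # maximally alternating
assert abs(prob_alternation("0011") - 1 / 3) < 1e-12   # two runs, one alternation
```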
In our experiment, the average $P(x)$ of participants was $0.51$, very close
to the expected probability of alternation of a random sequence which should
be $0.5$. A t-test on the $P(x)$ of the strings produced by participants,
where the null hypothesis is that they are a random sample from a normal
distribution with mean $0.5$, shows that the hypothesis cannot be rejected as
the $p$-value is $0.31$ and the confidence interval on the mean is
$[0.49,0.53]$. This means that the probability of alternation is not a good
measure to distinguish participants' strings from random ones, or at least,
that the participants in this particular experiment can pass this test.
Although the probability of alternation was close to the expected value for a
random string, participants tended to produce $n$-grams of length $\geq 2$ with
frequencies that deviate from the uniform distribution (see Fig. 1). Strings
containing more alternations (like $1010$, $0101$, $010$, $101$) and $3$- and
$4$-runs have a higher frequency than expected by chance. This might be seen
as an effort by participants to keep the probability of alternation close to
$0.5$ by compensating for the excess of alternations with blocks of repetitions
of the same symbol.
Figure 1: Frequency of sub-strings up to length 4, compared with the expected
frequencies $2^{-1}$, $2^{-2}$, $2^{-3}$ and $2^{-4}$ of a uniform source.
### 6.2 Comparing human randomness with other random sources
We asked whether $K$, the Kolmogorov complexity relative to ${\rm
LT^{2}C^{2}}$ defined in section 4.1, is able to detect and compress more
patterns in strings generated by participants than in strings produced by
other sources, which are considered random for many practical issues. In
particular, we studied strings originated by two sources: Pseudo-Random Number
Generator (PRNG) and Atmospheric Noise (AN).
For the PRNG source, we chose the Mersenne Twister algorithm [47]
(specifically, the second revision from 2002 that is currently implemented in
GNU Scientific Library). The atmospheric noise was taken from random.org site
(property of Randomness and Integrity Services Limited) which also runs real-
time statistic tests recommended by the US National Institute of Standards and
Technology to ensure the random quality of the numbers produced over time.
In Table 2, we summarize our results using $b=1$ and $r=1$ for the parameters
of $K$ as defined in section 4.1.
| Participants | PRNG | AN
---|---|---|---
Mean $\mu$ | 48.43 | 52.99 | 53.88
Std $\sigma$ | 6.62 | 3.06 | 2.87
$1^{st}$ quartile | 45.30 | 50.42 | 51.88
Median | 49.23 | 53.15 | 53.85
$3^{rd}$ quartile | 51.79 | 55.21 | 55.79
Table 2: Values of $K(x)$, where $x$ is a string produced by participants,
PRNG or AN sources
The mean and median of $K$ are lower for the participants' strings than for the
PRNG and AN strings. These differences were significant, as confirmed by t-tests
(a $p$-value of $4.9\times 10^{-11}$ when comparing the participants' sample with
the PRNG one, a $p$-value of $1.2\times 10^{-15}$ when comparing the participants'
sample with AN, and a $p$-value of $1.4\times 10^{-2}$ when comparing the PRNG and
AN samples).
Therefore, despite its simplicity, ${\rm LT^{2}C^{2}}$, based merely on
prints and repeats, is rich enough to identify regularities in human
sequences. The $K$ function relative to ${\rm LT^{2}C^{2}}$ is an effective
and significant measure for distinguishing strings produced by participants with
a profound understanding of the mathematics of randomness from PRNG and AN
strings. As expected, humans produce less complex (i.e., less random) strings
than those produced by PRNG or atmospheric noise sources.
### 6.3 Mental fatigue
On cognitively demanding tasks, fatigue affects performance by deteriorating
the capacity to organize behavior [48, 49, 50, 51, 52]. Specifically, Weiss
claims that boredom may be a factor that increases non-randomness [48]. Hence,
as another test of the ability of $K$ relative to ${\rm LT^{2}C^{2}}$ to
identify idiosyncratic elements of human regularities, we asked whether the
random quality of the participants' strings deteriorated over time.
For each of the $138$ strings $x=x_{1}\cdots x_{120}$ ($x_{i}\in\\{0,1\\}$)
produced by the participants, we measured the $K$ complexity of all the sub-
strings of length $30$.
Specifically, we calculated the average $K(x_{i}\cdots x_{i+30})$ over the
$138$ strings for each $i\in[0,90]$ (see Fig. 2), using the same parameters as
in section 6.2 ($b=r=1$), and compared it with the same sliding average procedure
for the PRNG (Fig. 3) and AN (Fig. 4) sources.
Figure 2: Average of $K(x_{i}\cdots x_{i+30})$ for participants.
Figure 3: Average of $K(x_{i}\cdots x_{i+30})$ for PRNG.
Figure 4: Average of $K(x_{i}\cdots x_{i+30})$ for AN.
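The sliding average itself is straightforward to reproduce. The following minimal sketch is ours, not the authors' analysis code; it assumes strings of equal length and the `K` function sketched in section 4.1 above.

```python
def sliding_complexity(strings, window=30):
    """Average over all strings of K applied to the length-`window` substring
    starting at each position i (sketch of the sliding average of section 6.3)."""
    n = len(strings[0])
    return [sum(K(s[i:i + window]) for s in strings) / len(strings)
            for i in range(n - window + 1)]

# For 120-bit strings and window=30 this yields 91 points, i = 0, ..., 90;
# the slopes reported in Table 3 would then come from a least-squares fit to this curve.
```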
The sole source which showed a significant linear regression was the human-
generated data (see Table 3), which, as expected, showed a negative slope,
indicating that participants produced less complex (i.e., less random) strings
over time (slope $-0.007$, $p<0.02$).
| Participants | PRNG | AN
---|---|---|---
Mean slope | -0.007 | 0.0016 | -0.0005
$p$-value | 0.02 | 0.5 | 0.8
CI | [-0.01,-0.001] | [-0.003,0.006] | [-0.005,0.004]
Table 3: Linear regression of the sliding average of $K$ over time, per source
The finding of a fatigue-related effect shows that the unpropped, i.e.,
resource-limited, human Turing machine is not only limited in terms of the
language it can parse, but also in terms of the amount of time it can dedicate
to a particular task.
### 6.4 Predictability
In section 4.2, we introduced a prediction method with two parameters:
$\ell_{F}$ and $\ell_{P}$. A predictor based on ${\rm LT^{2}C^{2}}$ achieved
levels of predictability close to 56%, which were highly significant (see Table
4). The predictor, as expected, performed at chance for the control PRNG and
AN data. This fit was relatively insensitive to the values of $\ell_{P}$ and
$\ell_{F}$, contrary to our intuition that there may be a memory scale which
would correspond in this framework to a given length.
A very important aspect of this investigation, in line with the prior work of
[23], is to inquire whether specific parameters are stable for a given
individual. To this end, we optimized, for each participant, the parameters
using the first 80 symbols of the sequence and then tested these parameters on
the second half of each segment (the last 80 symbols of the sequence).
After this optimization procedure, mean predictability increased significantly
to $58.14\%$ ($p<0.002$, see Table 5). As expected, the optimization based on
partial data of PRNG and AN resulted in no improvement in the classifier,
which remained at chance with no significant difference ($p<0.3$, $p<0.2$,
respectively).
Hence, while the specific parameters for compression vary widely across
individuals, they show stability on the time-scale of this experiment.
| Participants | PRNG | AN
---|---|---|---
Mean $\mu$ | 56.16 | 50.69 | 49.48
Std $\sigma$ | 0.07 | 0.02 | 0.02
$1^{st}$ quartile | 49.97 | 48.84 | 48.30
Median | 55.02 | 50.77 | 49.04
$3^{rd}$ quartile | 59.75 | 52.21 | 50.46
Table 4: Average predictability
| Participants | PRNG | AN
---|---|---|---
Mean $\mu$ | 58.14 | 51.20 | 49.01
Std $\sigma$ | 0.07 | 0.04 | 0.03
$1^{st}$ quartile | 52.88 | 48.56 | 47.11
Median | 56.73 | 50.72 | 49.28
$3^{rd}$ quartile | 62.02 | 53.85 | 50.48
Table 5: Optimized predictability
## 7 Discussion
Here we analyzed strings produced by participants attempting to generate
random strings. Participants had a profound understanding of randomness and
hence avoided typical misconceptions such as exaggerating the number of
alternations. We reasoned that remaining regularities would express the
algorithmic nature of human thoughts, revealed in the form of specific
patterns.
Our effort here was to bridge the gap between Kolmogorov theory and
psychology, developing a concrete language, ${\rm LT^{2}C^{2}}$, satisfying
the following requirements: 1) to be simple enough so that the complexity of
any given sequence can be computed, 2) to be based on tangible operations of
human reasoning (printing, repeating, …), 3) to be sufficiently powerful to
generate all possible sequences but not so powerful as to identify
regularities which would be invisible to humans.
More specifically, our aim is to develop a class of languages with certain
degrees of freedom which can then be fit to an individual (or an individual in
a specific context and time). Here, we opted for a comparatively simple strategy
by only allowing the relative cost of each operation to vary. However, a
natural extension of this framework is to generate classes of languages where
structural and qualitative aspects of the language are free to vary. For
instance, one can devise a program structure for repeating portions of (not
necessarily neighboring) code, or consider the more general framework of
for-programs where the repetitions are more general than in our setting: for
i=1 to n do $P(i)$, where $P$ is a program that uses the successive values of
$i=1,2,\dots,n$ in each iteration. For instance, the following program
for i=1 to 6 do print ‘0’ repeat i times: print ‘1’
would describe the string
$010110111011110111110111111$.
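As a hypothetical illustration of such an extension (this evaluator and its representation of for-programs are our assumptions, not part of ${\rm LT^{2}C^{2}}$ or the authors' code), the example above can be reproduced as follows:

```python
def eval_for_program(n: int, body) -> str:
    """Evaluate `for i = 1 to n do body(i)`, where body(i) returns the bits
    printed during iteration i (hypothetical extension of LT^2C^2)."""
    return "".join(body(i) for i in range(1, n + 1))

# The example from the text: print '0', then repeat i times: print '1'.
assert eval_for_program(6, lambda i: "0" + "1" * i) == "010110111011110111110111111"
```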
The challenge from the computability-theoretic point of view is to define an
extension which induces a computable (and, whenever possible, feasibly computable)
Kolmogorov complexity. For instance, adding simple control structures like
conditional jumps or allowing the use of imperative program variables may make
the language Turing-complete, with the theoretical consequences that we
already mentioned. The aim is to keep the language simple and yet include
structures that compress patterns which are compatible with the human
language of thought.
We emphasize that our aim here was not to generate an optimal predictor of
human sequences. Clearly, restricting ${\rm LT^{2}C^{2}}$ to a very
rudimentary language is not the way to go to identify vast classes of
patterns. Our goal, instead, was to use human sequences to calibrate a
language which expresses and captures specific patterns of human thought in a
tangible and concrete way.
Our model is based on ideas from Kolmogorov complexity and Solomonoff’s
induction. It is important to compare it to what we think is the closest
approach in previous studies: the work of Griffiths and Tenenbaum [23].
Griffiths and Tenenbaum devise a series of statistical models
that account for different kinds of regularities. Each model $Z$ is fixed and
assigns to every binary string $x$ a probability $P_{Z}(x)$. This
probabilistic approach is connected to Kolmogorov complexity theory via
Levin’s famous Coding Theorem, which points out a remarkable numerical
relation between the algorithmic probability $P_{U}(x)$ (the probability that
the universal prefix Turing machine $U$ outputs $x$ when the input is filled-
up with the results of coin tosses) and the (prefix) Kolmogorov complexity
$K_{U}$ described in section 2.1. Formally, the theorem states that there is a
constant $c$ such that for any string $x\in\\{0,1\\}^{*}$,
$|-\log P_{U}(x)-K_{U}(x)|\leq c$ (4)
(the reader is referred to section 4.3.4 of [24] for more details). Griffiths
& Tenenbaum’s bridge to Kolmogorov complexity is only established through this
last theoretical result: replacing $P_{U}$ by $P_{Z}$ in Eq. (4) should
automatically give us some Kolmogorov complexity $K_{Z}$ with respect to some
underlying Turing machine $Z$.
While there is hence a formal relation to Kolmogorov complexity, there is no
explicit definition of the underlying machine, and hence no notion of program.
On the contrary, we propose a specific language of thought, formalized as the
programming language ${\rm LT^{2}C^{2}}$ or, alternatively, as a Turing
machine $N$, which assigns formal semantics to each program. Semantics are
given, precisely, through the behavior of $N$. The fundamental introduction of
program semantics and the clear distinction between inputs (programs of $N$)
and outputs (binary strings) allows us to give a straightforward definition of
Kolmogorov complexity relative to $N$, denoted $K_{N}$, which —because of the
choice of ${\rm LT^{2}C^{2}}$— becomes computable in polynomial time. Once we
have such a complexity function in hand, we apply Solomonoff’s ideas of inductive
inference to obtain a predictor which tries to guess the continuation of a
given string under the assumption that the most probable one is the most
compressible in terms of ${\rm LT^{2}C^{2}}$-Kolmogorov complexity. As in
[23], we also make use of the Coding Theorem (4), but in the opposite
direction: given the complexity $K_{N}$, we derive an algorithmic probability
$P_{N}$.
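For concreteness, one natural way to read this last step (our reading; the text does not spell out the formula) is to set $P_{N}(x)\propto 2^{-K_{N}(x)}$, normalized over all strings of a given length, so that more compressible strings receive higher algorithmic probability.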
This work is mainly a theoretical development: a framework to adapt
Kolmogorov's ideas into a constructive procedure (i.e., defining an explicit
language) for identifying regularities in human sequences. The theory was
validated experimentally, as three tests were satisfied: 1) human sequences
were less complex than control PRNG sequences, 2) human sequences were non-
stationary, showing decreasing values of complexity, 3) each individual showed
traces of algorithmic stability since fitting of partial data was more
effective to predict subsequent data than average fits. Our hope is that this
theory may constitute, in the future, a useful framework to ground and
describe the patterns of human thoughts.
###### Acknowledgements.
The authors are thankful to Daniel Gorín and Guillermo Cecchi for useful
discussions. S. Figueira is partially supported by grants PICT-2011-0365 and
UBACyT 20020110100025.
## References
* [1] M Kac, What is random?, Am. Sci. 71, 405 (1983).
* [2] H Reichenbach, The Theory of Probability, University of California Press, Berkeley (1949).
* [3] G S Tune, Response preferences: A review of some relevant literature, Psychol. Bull. 61, 286 (1964).
* [4] A D Baddeley, The capacity for generating information by randomization, Q. J. Exp. Psychol. 18, 119 (1966).
* [5] A Tversky, D Kahneman, Belief in the law of small numbers, Psychol. Bull. 76, 105 (1971).
* [6] W A Wagenaar, Randomness and randomizers: Maybe the problem is not so big, J. Behav. Decis. Making 4, 220 (1991).
* [7] R Falk, Perception of randomness, Unpublished doctoral dissertation, Hebrew University of Jerusalem (1975).
* [8] R Falk, The perception of randomness, In: Proceedings of the fifth international conference for the psychology of mathematics education, Vol. 1, Pag. 222, Grenoble, France (1981).
* [9] D Kahneman, A Tversky, Subjective probability: A judgment of representativeness, Cognitive Psychol. 3, 430 (1972).
* [10] A Tversky, D Kahneman, Subjective probability: A judgment of representativeness, Cognitive Psychol. 3, 430 (1972).
* [11] T Gilovich, R Vallone, A Tversky, The hot hand in basketball: On the misperception of random sequences, Cognitive Psychol. 17, 295 (1985).
* [12] W A Wagenaar, G B Keren, Chance and luck are not the same, J. Behav. Decis. Making 1, 65 (1988).
* [13] D Budescu, A Rapoport, Subjective randomization in one-and two-person games, J. Behav. Decis. Making 7, 261 (1994).
* [14] A Rapoport, D V Budescu, Generation of random series in two-person strictly competitive games, J. Exp. Psychol. Gen. 121, 352 (1992).
* [15] P D Larkey, R A Smith, J B Kadane, It’s okay to believe in the hot hand, Chance: New Directions for Statistics and Computing 2, 22–30 (1989).
* [16] A Tversky, T Gilovich, The ”hot hand”: Statistical reality or cognitive illusion?, Chance: New Directions for Statistics and Computing 2, 31 (1989).
* [17] A N Kolmogorov, Three approaches to the quantitative definition of information, Probl. Inf. Transm. 1, 1 (1965).
* [18] G J Chaitin, A theory of program size formally identical to information theory, J. ACM 22, 329 (1975).
* [19] L A Levin, A K Zvonkin, The complexity of finite objects and the development of the concepts of information and randomness by means of the theory of algorithms, Russ. Math. Surv. 25, 83 (1970).
* [20] D G Champernowne, The construction of decimals in the scale of ten, J. London Math. Soc. 8, 254 (1933).
* [21] R J Solomonoff, A formal theory of inductive inference: Part I, Inform. Control 7, 1 (1964); ibid. Part II 7, 224 (1964).
* [22] G Chaitin, On the length of programs for computing finite binary sequences: statistical considerations, J. ACM 13, 547 (1969).
* [23] T L Griffiths, J B Tenenbaum, Probability, algorithmic complexity, and subjective randomness, In: Proceedings of the Twenty-Fifth Annual Conference of the Cognitive Science Society, Eds. R Alterman, D Hirsh, Pag. 480, Cognitive Science Society, Boston (MA, USA), (2003).
* [24] M Li, P M Vitányi, An introduction to Kolmogorov complexity and its applications, Springer, Berlin, 3rd edition (2008).
* [25] J Feldman, Minimization of Boolean complexity in human concept learning, Nature (London) 407, 630 (2000).
* [26] N Chater, The search for simplicity: A fundamental cognitive principle?, Q. J. Exp. Psychol. 52A, 273 (1999).
* [27] R Falk, C Konold, Making sense of randomness: Implicit encoding as a bias for judgment, Psychol. Rev. 104, 301 (1997).
* [28] C P Schnorr, Zufälligkeit und Wahrscheinlichkeit, Lecture Notes in Mathematics vol. 218. Springer-Verlag, Berlin, New York (1971).
* [29] C P Schnorr, A unified approach to the definition of a random sequence, Math. Syst. Theory 5, 246 (1971).
* [30] D Dellarosa, A history of thinking, In: The psychology of human thought, Eds. R J Sternberg, E E Smith, Cambridge University Press, Cambridge (USA) (1988).
* [31] S Carey, The origin of concepts, Oxford University Press, Oxford (USA) (2009).
* [32] G Boole, An investigation of the laws of thought: on which are founded the mathematical theories of logic and probabilities, Vol. 2, Walton and Maberly, London (1854).
* [33] A Zylberberg, S Dehaene, G Mindlin, M Sigman, Neurophysiological bases of exponential sensory decay and top-down memory retrieval: a model, Front. Comput. Neurosci. 3, 4 (2009).
* [34] A Zylberberg, S Dehaene, P Roelfsema, M Sigman, The human Turing machine: a neural framework for mental programs, Trends. Cogn. Sci. 15, 293 (2011).
* [35] M Graziano, P Polosecki, D Shalom, M Sigman, Parsing a perceptual decision into a sequence of moments of thought, Front. Integ. Neurosci. 5, 45 (2011).
* [36] A Zylberberg, P Barttfeld, M Sigman, The construction of confidence in a perceptual decision, Front. Integ. Neurosci. 6, 79 (2012).
* [37] D Shalom, B Dagnino, M Sigman, Looking at breakout: Urgency and predictability direct eye events, Vision Res. 51, 1262 (2011).
* [38] S Dehaene, M Sigman, From a single decision to a multi-step algorithm, Curr. Opin. Neurobiol. 22, 937 (2012).
* [39] J Kamienkowski, H Pashler, S Dehaene, M Sigman, Effects of practice on task architecture: Combined evidence from interference experiments and random-walk models of decision making, Cognition 119, 81 (2011).
* [40] A Zylberberg, D Slezak, P Roelfsema, S Dehaene, M Sigman, The brain’s router: a cortical network model of serial processing in the primate brain, PLoS Comput. Biol. 6, e1000765 (2010).
* [41] L Gallos, H Makse, M Sigman, A small world of weak ties provides optimal global integration of self-similar modules in functional brain networks, P. Natl. Acad. Sci. USA 109, 2825 (2012).
* [42] L Gallos, M Sigman, H Makse, The conundrum of functional brain networks: small-world efficiency or fractal modularity, Front. Physiol. 3 123 (2012).
* [43] M Costa, F Bonomo, M Sigman, Scale-invariant transition probabilities in free word association trajectories, Front. Integ. Neurosci. 3 19 (2009).
* [44] N Mota, N Vasconcelos, N Lemos, A Pieretti, O Kinouchi, G Cecchi, M Copelli, S Ribeiro, Speech graphs provide a quantitative measure of thought disorder in psychosis, PloS one 7, e34928 (2012).
* [45] M Sigman, G Cecchi, Global organization of the wordnet lexicon, P. Natl. Acad. Sci. USA 99, 1742 (2002).
* [46] R Cilibrasi, P M Vitányi, Clustering by compression, IEEE T. Inform. Theory 51, 1523 (2005).
* [47] M Matsumoto, T Nishimura, Mersenne twister: a 623-dimensionally equidistributed uniform pseudo-random number generator, ACM Trans. Model. Comput. Simul. 8, 3 (1998).
* [48] R L Weiss, On producing random responses, Psychon. Rep. 14, 931 (1964).
* [49] F Bartlett, Fatigue following highly skilled work, Nature (London) 147, 717 (1941).
* [50] D E Broadbent, Is a fatigue test now possible?, Ergonomics 22, 1277 (1979).
* [51] W Floyd, A Welford, Symposium on fatigue and symposium on human factors in equipment design, Eds. W F Floyd, A T Welford, Arno Press, New York (1953).
* [52] R Hockey, Stress and fatigue in human performance, Wiley, Chichester (1983).
|
arxiv-papers
| 2013-03-04T21:51:55 |
2024-09-04T02:49:42.427724
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Sergio Romano, Mariano Sigman, Santiago Figueira",
"submitter": "Sergio Romano",
"url": "https://arxiv.org/abs/1303.0875"
}
|
1303.0935
|
Hadron Collider Physics symposium 2012
11institutetext: Institute of Modern Physics and Center for High Energy
Physics, Tsinghua University, Beijing 100084, China 22institutetext:
Department of Physics, Tohoku University, Sendai 980-8578, Japan
33institutetext: Department of Physics, Saitama University, Saitama 355-8570,
Japan 44institutetext: Department of Physics, Kyoto University, Kyoto
606-8502, Japan 55institutetext: Institute for Cosmic Ray Research (ICRR),
University of Tokyo, Kashiwa, Chiba 277-8582, Japan
# Minimal dilaton model
Tomohiro Abe 11 [email protected] Ryuichiro Kitano 22
[email protected] Yasufumi Konishi 33
[email protected] Kin-ya Oda 44
[email protected] Joe Sato 33 [email protected] Shohei
Sugiyama 55 [email protected]
###### Abstract
Both the ATLAS and CMS experiments at the LHC have reported the observation of
a particle of mass around 125 GeV which is consistent with the Standard Model
(SM) Higgs boson, but with an excess of events beyond the SM expectation in
the diphoton decay channel in both experiments. There still remains room for a
logical possibility that we are not seeing the SM Higgs but something else.
Here we introduce the minimal dilaton model in which the LHC signals are
explained by an extra singlet scalar of the mass around 125 GeV that slightly
mixes with the SM Higgs heavier than 600 GeV. When this scalar has a vacuum
expectation value well beyond the electroweak scale, it can be identified as a
linearly realized version of a dilaton field. Though the current experimental
constraints from the Higgs searches disfavor such a region, the singlet scalar
model itself still provides a viable alternative to the SM Higgs in
interpreting its search results.
## 1 Introduction
The ATLAS :2012gk and CMS :2012gu experiments have reported the observation
of a particle of mass 125 GeV that is consistent with the Standard Model
(SM) Higgs boson, but with an excess of events in the diphoton decay channel
with the signal strength $1.80\pm 0.30\pm 0.7$ at ATLAS ATLAS diphoton and
$1.6_{-0.6}^{+0.7}$ at CMS CMS combined . There still remains a logical
possibility that the observed particle is not the SM Higgs but an extra boson
beyond the SM.
Here we report the minimal dilaton model Abe:2012eu in which the observed
particle is identified as the SM singlet scalar field $S$ of mass around 125
GeV that slightly mixes with the SM Higgs $H$ heavier than 600 GeV. A vector-
like top partner $T$ is introduced so that $S$ can couple to a pair of gluons
or photons through the $T$ loop, just as in the case of the SM Higgs coupling
to them via the top quark loop. The top partner $T$ at the same time cures the
constraints from the electroweak precision data by its loop driving the
Peskin-Takeuchi T-parameter positive and hence canceling the heavy SM Higgs
contributions.
If the vacuum expectation value (vev) of the singlet $f:=\left\langle
S\right\rangle$ is much larger than the electroweak scale $v\simeq 246\,$GeV,
then $S$ can be identified as a linear realization of a dilaton field (hence
the model’s name) associated with an almost scale invariant sector whose
strongly coupled interactions, broken at $f$, give rise to the top quark and
the SM Higgs as composite particles, and the top partner $T$ represents a
fermion in the composite sector Abe:2012eu .
In the literature conventional dilaton , the terminology “dilaton model” has been
used for a model in which all the SM particles couple to the dilaton through
the trace of the energy momentum tensor of the SM. This type of model
implicitly assumes that all the SM particles are composite under the strong
dynamics in the ultraviolet region. On the other hand, we take a more
conservative approach in which the SM, except for the top/Higgs sector, is a
spectator of the dynamics, and thus the dilaton $S$ couples to the $W$, $Z$
bosons and to light fermions only through the mixing with the Higgs fields
$H$, while the couplings to the gluons and photons are generated only through
the loops of the top quark and its partner. Due to these different origins of
the couplings in the two models, the production and decay properties are
quite different. Indeed, we see that our model can give a better fit to the LHC
data compared to the SM Higgs boson, while the authors of Ref. conventional
dilaton have reported that the above mentioned dilaton scenario is rather
disfavored.
## 2 Minimal dilaton model
Our starting Lagrangian Abe:2012eu can be written in terms of the singlet
scalar $S$, the SM Higgs $H$, the left handed quark doublet of the third
generation $q_{3L}$, and the top partner $T$ as
$\displaystyle\mathcal{L}$ $\displaystyle=\mathcal{L}_{\text{SM}}-{1\over
2}\partial_{\mu}S\partial^{\mu}S-\tilde{V}(S,H)$
$\displaystyle\quad-\overline{T}\left(\not{D}+yS\right)T-\left[y^{\prime}\overline{T}(q_{3L}\cdot
H)+\text{h.c.}\right],$ (1)
where $\mathcal{L}_{\text{SM}}$ is the SM Lagrangian except for the Higgs
potential and $\tilde{V}(S,H)$ is the potential for the scalars
$\displaystyle\tilde{V}(S,H)$ $\displaystyle={m_{S}^{2}\over
2}S^{2}+{\lambda_{S}\over 4!}S^{4}+{\kappa\over
2}S^{2}\left|H\right|^{2}+m_{H}^{2}\left|H\right|^{2}+{\lambda_{H}\over
4}\left|H\right|^{4}.$ (2)
The detailed potential shape (2) is unimportant, as its couplings can be
translated into the following two parameters relevant to the Higgs searches:
the Higgs-singlet mixing $\theta_{H}$ and the ratio of the vevs $\eta:=v/f$.
The relation is shown in Appendix B in Ref. Abe:2012eu . We identify the
lighter mass eigenstate to be the observed 125 GeV boson and the heavier to be
almost the SM Higgs with mass $\gtrsim 600$ GeV. The vector-like top partner
$T$ is set to be an $SU(2)_{L}$ singlet and to have a hypercharge $Y=2/3$ so
that it can decay into the top quark through a small mixing $\theta_{L}\sim
y^{\prime}v/\sqrt{2}yf$.
When the SM Higgs mass is heavy, the Peskin-Takeuchi S,T parameters severely
constrain the model. In the SM, a Higgs mass heavier than 185 GeV is ruled
out at the 95% CL ALEPH:2010aa . In the minimal dilaton model, the $T$ loop
gives positive contribution to the Peskin-Takeuchi T-parameter and cures the
model from the heavy Higgs effects. In Fig. 1, we show the constraints from
the S,T parameters on the top-partner mass $m_{t^{\prime}}$ and the mixing
$\theta_{L}$ between the top and top partner, at the 95% CL for the SM Higgs
mass $m_{h}=600$ GeV and the Higgs-singlet mixing angle $\theta_{H}=0$,
$\pi/6$, and $\pi/3$ Abe:2012eu .
Figure 1: Allowed region plot by the Peskin-Takeuchi S,T parameters at the 95%
CL in the plane of top-partner mass $m_{t^{\prime}}$ vs top-top partner mixing
$\theta_{L}$; The SM Higgs mass is taken to be $m_{h}=600$ GeV and the Higgs-
singlet mixing angle is chosen as $\theta_{H}=0$, $\pi/6$, and $\pi/3$
Abe:2012eu .
## 3 Higgs signal at LHC
Let us show our prediction for the Higgs searches at the LHC. In Ref.
conventional dilaton , the parameter $c_{X}$ is defined as the ratio of the
coupling of $S$ to $XX$ (or $X\bar{X}$), in the amplitude, to that of the SM
Higgs at 125 GeV. The minimal dilaton model gives Abe:2012eu
$\displaystyle c_{V}=c_{F}$ $\displaystyle=\sin\theta_{H},$ $\displaystyle
c_{t}$
$\displaystyle=\cos^{2}\theta_{L}\sin\theta_{H}+\eta\sin^{2}\theta_{L}\cos\theta_{H},$
$\displaystyle c_{g}$ $\displaystyle=\eta\cos\theta_{H}+\sin\theta_{H},$
$\displaystyle c_{\gamma}$ $\displaystyle=\eta
A_{t^{\prime}}\cos\theta_{H}+A_{\text{SM}}\sin\theta_{H},$ $\displaystyle
c_{\text{inv}}$ $\displaystyle=0,$ (3)
where $A_{t^{\prime}}\simeq 16/9$, $A_{\text{SM}}\simeq-6.5$, the subscript
“$F$” stands for all the SM fermions except the top quark, and “$V$” for $W$
or $Z$.
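For illustration, the coupling ratios of Eq. (3) can be evaluated directly. The following is a minimal sketch of our own (not code from Ref. Abe:2012eu ), using the loop factors quoted in the text:

```python
import math

def couplings(theta_H, theta_L, eta, A_tprime=16 / 9, A_SM=-6.5):
    """Coupling ratios c_X of Eq. (3); A_t' ~ 16/9 and A_SM ~ -6.5 as in the text."""
    sH, cH = math.sin(theta_H), math.cos(theta_H)
    sL, cL = math.sin(theta_L), math.cos(theta_L)
    return {
        "V": sH,                                    # W and Z bosons
        "F": sH,                                    # light SM fermions
        "t": cL**2 * sH + eta * sL**2 * cH,         # top quark
        "g": eta * cH + sH,                         # gluons
        "gamma": eta * A_tprime * cH + A_SM * sH,   # photons
        "inv": 0.0,                                 # invisible decays
    }

# Example: the SM-like limit theta_H -> pi/2, theta_L -> 0 gives c_V = c_F = c_t = 1.
```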
In Fig. 2, we plot the resultant signal strength, which is the ratio of the
model cross section to the corresponding SM Higgs one at 125 GeV, for the
singlet vev $f=246$ GeV ($\eta=1$). We see that the diphoton signal can be
enhanced in the dilatonic region $\theta_{H}\sim 0$, whereas other processes
are suppressed.
Figure 2: Upper: Signal strength for $f=246$ GeV ($\eta=1$). “GF” denotes the
case that $S$ is produced by the gluon fusion process, while “VBF” by the
vector boson fusion, Higgs-strahlung, or $ttH$ process. Lower: The same for
the Higgs to digluon process, shown for comparison though it is hardly observable at
the LHC.
The minimal dilaton model predicts different production cross sections between
GF and VBF/VH/ttH processes. In the $H\to\gamma\gamma$ search, the composition of
these production channels differs category by category and is summarized in
Table 2 in Ref. :2012gu for CMS and in Table 6 in Ref. ATLAS_diphoton for
ATLAS. We define $\varepsilon^{i}_{X}$ as the proportion of the production
process $X$ within a category $i$. Note that $\sum_{X}\varepsilon^{i}_{X}=1$
by definition for each category $i$, where a summation over $X$ is always
understood to run over all the relevant production channels: GF, VBF, VH, and ttH.
GF is the dominant production process and satisfies
$\varepsilon^{i}_{\text{GF}}\lesssim 90\%$ in categories other than the
dijet category. In the dijet category, the dominant production process is VBF,
and $\varepsilon_{\text{VBF}}\lesssim 70\%$.
When the acceptance of a production channel $X$ for a category $i$ is $a^{i}_{X}$,
the estimated value of the signal fraction under the given set of cuts $i$
becomes
$\displaystyle\varepsilon^{i}_{X}$
$\displaystyle={a^{i}_{X}\sigma^{\text{SM}}_{X}\over\sum_{Y}a^{i}_{Y}\sigma^{\text{SM}}_{Y}},$
(4)
where $\sigma^{\text{SM}}_{X}$ is the Higgs production cross section in the SM
through the channel $X$. Given $\\{\varepsilon^{i}_{X}\\}$, we can compute the
signal strength under the imposed cuts for each category $i$
$\displaystyle\hat{\mu}^{i}(h\to\gamma\gamma)$
$\displaystyle={\sum_{X}a^{i}_{X}\,\sigma_{X}\over\sum_{Y}a^{i}_{Y}\,\sigma_{Y}^{\text{SM}}}\,{\operatorname{BR}(s\to\gamma\gamma)\over\operatorname{BR}(h\to\gamma\gamma)_{\text{SM}}}$
$\displaystyle=\sum_{X}\varepsilon^{i}_{X}\left|c_{X}\right|^{2}\,{\left|c_{\gamma}\right|^{2}\over
R(s\to\text{all})},$ (5)
where
$R(s\to\text{all}):=0.913\left|c_{V}\right|^{2}+0.085\left|c_{g}\right|^{2}+0.002\left|c_{\gamma}\right|^{2}$.
We have assumed that the acceptance $a^{i}_{X}$ under the category $i$ does
not change from that of the SM for each production channel $X$.
For the $ZZ\to 4l$ and $WW\to l\nu l\nu$ decay channels, we assume that all
the signals are coming from GF and, hence, we approximate
$\displaystyle\hat{\mu}(s\to VV)$
$\displaystyle=\left|c_{g}\right|^{2}\,{\left|c_{V}\right|^{2}\over
R(s\to\text{all})}$ (6)
for $VV=WW$ and $ZZ$.
Once all the signal strengths are obtained, we perform a chi-square test with
the Gaussian approximation for all the errors:
$\displaystyle\chi^{2}$
$\displaystyle=\sum_{i}\left(\hat{\mu}_{i}-\mu_{i}\over\sigma_{i}\right)^{2},$
(7)
where the summation over $i$ runs over all the diphoton categories as well as the
$WW$ and $ZZ$ channels. Of course this is a naive fit that does not take into
account the off-diagonal elements of the correlation matrix for the various
categories. For example, this type of naive weighting reproduces neither the
central value nor the error of the diphoton signal strength. Although we are
aware of this fact, this is the best we can do with the data currently made
public. The result should be regarded as an illustration at best.
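To make Eqs. (5)-(7) concrete, here is a minimal sketch (ours, not the authors' analysis code) of the per-category signal strengths and the naive chi-square; the mapping of production channels to coupling ratios and the category fractions `eps` are assumptions consistent with the text:

```python
# Assumed mapping from production channel to the relevant coupling ratio.
PROD_COUPLING = {"GF": "g", "VBF": "V", "VH": "V", "ttH": "t"}

def signal_strengths(c, eps):
    """Eqs. (5)-(6): diphoton signal strength per category and the VV strength.

    `c` is the coupling dictionary above; `eps[i][X]` is the fraction of
    production channel X in category i (the epsilon^i_X of Eq. (4))."""
    R_all = (0.913 * abs(c["V"])**2 + 0.085 * abs(c["g"])**2
             + 0.002 * abs(c["gamma"])**2)
    mu_gamma = {
        i: sum(e * abs(c[PROD_COUPLING[X]])**2 for X, e in eps_i.items())
           * abs(c["gamma"])**2 / R_all
        for i, eps_i in eps.items()
    }
    mu_VV = abs(c["g"])**2 * abs(c["V"])**2 / R_all   # ZZ -> 4l and WW -> lnu lnu
    return mu_gamma, mu_VV

def chi_square(mu_hat, mu_obs, sigma):
    """Eq. (7): naive Gaussian chi-square over all categories and channels."""
    return sum(((mu_hat[i] - mu_obs[i]) / sigma[i]) ** 2 for i in mu_hat)
```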
Keeping the above caution in mind, we show the constraints from the LHC Higgs
search data in Fig. 3. For comparison, we also show in Fig. 4 the
same plot including more recent data presented at HCP2012 and after HCP and
after . We see that the dilaton-like region $\eta^{-1}\gg 1$,
$\left|\theta_{H}\right|\ll 1$ is disfavored by the Higgs data alone, but
there still remains phenomenologically viable region at
$\left|\theta_{H}\right|\lesssim\pi/6$ and $\eta^{-1}\lesssim 1.5$.
Figure 3: Favored regions within 90, 95 and 99% confidence intervals, enclosed
by solid, dashed, and dotted lines, respectively. The density (area) of the favored
region decreases (increases) in that order Abe:2012eu .
Figure 4: The same as Fig. 3, including the data presented at HCP2012 and
after.
## 4 Summary
We have examined the possibility that the 125 GeV boson observed at the LHC is
not the SM Higgs but an extra singlet scalar $S$. If the vev well exceeds the
electroweak scale, $f=\left\langle S\right\rangle\gg 246$ GeV, then $S$ can be
regarded as a linearly realized dilaton that is associated with the
quasiconformal dynamics making up the top and Higgs particles as composite
ones. However, such a parameter region is disfavored by the current Higgs
data. Though such a linear dilaton interpretation is marginally excluded, this
model with a singlet scalar and a vector-like top partner still provides a
phenomenologically viable alternative to the SM Higgs to fit the current LHC
data.
## Acknowledgments
This work is supported in part by the Grants-in-Aid for Scientific Research
No. 23740165 (R. K.), No. 23104009, No. 20244028, No. 23740192 (K. O.), and
No. 24340044 (J. S.) of JSPS.
## References
* (1) G. Aad et al. [ATLAS Collaboration], “Observation of a new particle in the search for the Standard Model Higgs boson with the ATLAS detector at the LHC,” Phys. Lett. B 716 (2012) 1 [arXiv:1207.7214 [hep-ex]].
* (2) S. Chatrchyan et al. [CMS Collaboration], “Observation of a new boson at a mass of 125 GeV with the CMS experiment at the LHC,” Phys. Lett. B 716 (2012) 30 [arXiv:1207.7235 [hep-ex]].
* (3) The ATLAS Collaboration, “Observation and study of the Higgs boson candidate in the two photon decay channel with the ATLAS detector at the LHC,” ATLAS-CONF-2012-128.
* (4) The CMS Collaboration, “Combination of standard model Higgs boson searches and measurements of the properties of the new boson with a mass near 125 GeV,” CMS PAS HIG-12-045.
* (5) T. Abe, R. Kitano, Y. Konishi, K. -y. Oda, J. Sato and S. Sugiyama, “Minimal Dilaton Model,” Phys. Rev. D 86 (2012) 115016 [arXiv:1209.4544 [hep-ph]].
* (6) [ALEPH and CDF and D0 and DELPHI and L3 and OPAL and SLD and LEP Electroweak Working Group and Tevatron Electroweak Working Group and SLD Electroweak and Heavy Flavour Groups Collaborations], “Precision Electroweak Measurements and Constraints on the Standard Model,” arXiv:1012.2367 [hep-ex].
* (7) P. P. Giardino, K. Kannike, M. Raidal and A. Strumia, “Reconstructing Higgs boson properties from the LHC and Tevatron data,” JHEP 1206 (2012) 117 [arXiv:1203.4254 [hep-ph]];
P. P. Giardino, K. Kannike, M. Raidal and A. Strumia, “Is the resonance at 125
GeV the Higgs boson?,” arXiv:1207.1347 [hep-ph];
D. Carmi, A. Falkowski, E. Kuflik, T. Volansky and J. Zupan, “Higgs After the
Discovery: A Status Report,” arXiv:1207.1718 [hep-ph].
* (8) The ATLAS Collaboration, “Observation of an excess of events in the search for the Standard Model Higgs boson in the $\gamma\gamma$ channel with the ATLAS detector,” ATLAS-CONF-2012-091.
* (9) The ATLAS Collaboration, “Observation and study of the Higgs boson candidate in the two photon decay channel with the ATLAS detector at the LHC,” ATLAS-CONF-2012-168; “An update of combined measurements of the new Higgs-like boson with high mass resolution channels” ATLAS-CONF-2012-170;
The CMS Collaboration, “Combination of standard model Higgs boson searches and
measurements of the properties of the new boson with a mass near 125 GeV,” CMS
PAS HIG-12-045.
|
arxiv-papers
| 2013-03-05T05:58:39 |
2024-09-04T02:49:42.438401
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Tomohiro Abe, Ryuichiro Kitano, Yasufumi Konishi, Kin-ya Oda, Joe\n Sato, and Shohei Sugiyama",
"submitter": "Kin-ya Oda",
"url": "https://arxiv.org/abs/1303.0935"
}
|